Get the LinkedIn stats of Joe Rhew and many other LinkedIn influencers with Taplio.
I build GTM systems for early-stage, VC-backed B2B startups. Some examples of what we do for our clients below:

1. Outbound
- End-to-end email infrastructure setup and maintenance
- Diversification of mailboxes across 3 ESPs
- Hyper-personalized and relevant copy
- Methodical testing of permutations
- Granular segmentation of TAM
- Waterfall enrichment of emails
- LinkedIn connection requests
- Lead magnet strategy

2. Inbound
- Lead qualification
- Lead enrichment
- Lead routing

3. Signal-based
- Website visitor de-anonymization
- LinkedIn engagements
- Second-party intent
- Recent fundraising
- Hiring for a role
- New in role

4. CRM enrichment
- AI prompting for enrichment
- Data hygiene & deduping
- Fully automated triggers
Attio just launched a native call recorder. Here's a quick walkthrough of what makes it useful today, and what features might make it even more useful in the near future. And below is my take on what this might mean for the broader call recorder market:

1. Most call recorders these days are built off of recall.ai (Granola and Gong are two exceptions that I'm aware of)
2. Because of this, basic features like video recording, transcription, etc. have been commoditized
3. The value (and therefore the extractable margin) that call recorders built on top of this commodity provide is largely in their UX + custom summarization prompts + integration capabilities
4. When I ran my previous evaluation process pre-Attio, my sole focus was on the *integration* capabilities, because I was going to write my own prompts + push the calls into Notion and the CRM. Didn't care about the UX or prompts
5. With this native embedding of call recording into a CRM, the rationale for keeping a third-party call recorder has weakened. I like Attio's UX, I can still write my own prompts, and now the transcripts live inside the CRM - no need for custom integrations
6. In reality, it's not as simple as I'm making it out to be. Attio's call recorder is very much a v1 and there are some areas that I think could be improved (e.g. more control over prompt engineering). And there are call recorders like Granola that have a vastly different approach to the UX and are gaining market share because of it (only possible because they aren't constrained by recall.ai providing the underlying recording tech). Or call recorders like Gong that have huge market share in the enterprise segment
7. All this to say, I have no idea how the market is going to shake out, but we might see some of:
- Previous recall.ai customers deciding to build from the ground up
- Legacy CRMs acquiring call recorders
- UX becoming the key differentiator

At any rate, I appreciate that Attio is re-imagining what is possible for CRMs in the AI era and leading the charge here.
My good friend Matteo Tittarelli ⚡️ is launching a 4-week course on GTM Engineering. This course will help you go 0 to 1 on some of the most essential tools in today's stack.

The course also has a pretty stacked (no pun intended) instructor lineup, with people I have learned from and continue to learn from every day. Just to drop a few names on the roster:

🦾 Eric Nowoslawski
Kellen Casebeer
Eoin Clancy
Jorge B. Macías 🇵🇷
Steven Male
Zach Vidibor
Arpit Choudhury
Tariq Minhas 🧱

A great opportunity to learn different tools from people who build/use them daily, and to build a solid foundation in how to engineer and run AI-enabled GTM motions.

Matteo has given me a couple promo codes that I can give out to folks who are interested + would be a good fit. Please DM me if you're interested.

p.s. I am not getting paid to make this post or share the promo code.
Testing an end-to-end, AI-driven outreach workflow. Sharing in case this is helpful.

I have a large repository of sales resources for a client:
- website content, landing page language + blogs, etc.
- sales decks
- call recording transcripts with the personas we're targeting

We process all of these through a series of LLM prompts to distill them into 2 summary documents, answering:
1. What does the company do?
2. What are the main pain points of [target persona]?
3. How does what the company does help [target persona]?

2 and 3 are dynamic and can be swapped out for another persona.

The 2 summary documents are turned into JSON for easy feeding into an LLM. On top of the 2 docs, we added a one-pager on cold email best practices, incorporating the wisdom of our favourite expert copywriters.

These get turned into a cached prompt to be repeatedly fed into an LLM, for lower cost + latency. Then there is a non-cached component of the prompt, which takes automated research on a prospect: their professional experiences, headlines, etc. Have yet to incorporate their content activity (posts, comments, likes, etc.) but this could be interesting.

Wrap all of this into an API endpoint that takes the company URL + the person's LinkedIn username as params, and spits out an email sequence.

Tech stack powering this:
- Firecrawl for crawling web data
- Anthropic for all LLM calls
- TypeScript for code
- Render for hosting

Everything I mentioned is currently doable using off-the-shelf software, so I may very well be re-inventing the wheel here. But my thinking is that the boilerplate route doesn't give you full control and visibility over the prompts, custom weighting of data points, etc., and is therefore harder to iterate on and experiment with. We'll see if this hypothesis proves true.
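As a sketch of how the cached-vs-dynamic prompt assembly behind such an endpoint might look (all names and shapes here are hypothetical, not the actual implementation):

```typescript
// Hypothetical sketch: an evergreen (cacheable) system prompt built from the
// summary docs + best-practices one-pager, and a dynamic per-prospect prompt.

type ProspectResearch = {
  name: string;
  headline: string;
  experiences: string[];
};

// Evergreen component: identical for every prospect, so a caching candidate.
function buildSystemPrompt(companySummaryJson: string, bestPractices: string): string {
  return [
    "You write cold outreach email sequences.",
    "Company and persona context (JSON):",
    companySummaryJson,
    "Cold email best practices:",
    bestPractices,
  ].join("\n\n");
}

// Dynamic component: swapped out on every call with the prospect's research.
function buildUserPrompt(research: ProspectResearch): string {
  return [
    `Prospect: ${research.name} (${research.headline})`,
    `Relevant experience: ${research.experiences.join("; ")}`,
    "Write a 3-step email sequence for this prospect.",
  ].join("\n");
}
```

The endpoint would then pass the system prompt as the cached portion of each LLM call and the user prompt as the uncached, per-prospect portion.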
The Node.js server I started building last year now sits at ~14,000 lines of code. I've recently added a bunch of endpoints that dramatically increase my quality of life, like:
- Finding an email address for a contact
- Validating an email address
- Generating an email sequence
- Fetching a person's profile info
- Fetching a person's posts

It's gratifying to see these small incremental improvements add up over time and drive value for clients. And it's a reminder that there are no shortcuts or instant gratification; things worth doing take time...unless you're a cracked vibe coder.

p.s. need to refactor this code
Build modular prompts for better performance and scalability, while saving 50-90% on input token costs.

When using LLMs in GTM, most of the prompts we write have 2 high-level components:
1. Context about our (or our client's) business: this context is uniform across all prompts
2. Specific context/instructions for the job: this might be different every time. For example, we might feed in enriched lead data - their professional career, interests, etc. - to personalize the outreach on an individual basis

For example, we might be writing a cold outreach email to 1,000 leads. In this case, we'd re-use Component 1 a thousand times while Component 2 will be different each of those thousand times.

Enter prompt caching. Prompt caching lets you cache the parts of your prompt that you plan to re-use over and over again, at a steep discount (50% for OpenAI; 90% for Anthropic). For OpenAI, the caching happens automatically. For Anthropic, you have to explicitly mark content for caching (and pay 25% more on the initial caching run, but 90% less on all subsequent runs).

This bifurcation of your prompt into an evergreen component vs. a dynamic component helps you design better prompts. It forces you to pull the commonalities into the evergreen component. Hit the right level of abstraction. Evaluate it against multiple examples. And identify and handle any edge cases.

The screenshot example, although much of it is blurred, uses a generic cold email best practices prompt that I put together, which can be applied to any company because it does not contain any company-specific instructions. *This won't work every time, as sometimes you need to give specific examples and instructions, but it's a good place to start.* And so this evergreen component can be reused across multiple projects.

One caveat with prompt caching is that your prompt is only cached for a short time, with the timer refreshed every time the cache is used - 5 minutes in Anthropic's case. So you have to batch your jobs to run in a short burst of cached prompts.
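On the Anthropic side, the explicit cache marker is a `cache_control` field on a content block. A minimal sketch of the request body shape (the model name is just an example; actually sending it requires the SDK or an HTTP client plus an API key):

```typescript
// Sketch of an Anthropic Messages API request body with prompt caching.
// The system block carrying cache_control is written to the cache on the
// first call (25% surcharge) and read at a 90% discount while the cache is
// warm (~5 minutes, refreshed on each use). The user message stays dynamic.

function buildCachedRequest(evergreenContext: string, leadSpecificPrompt: string) {
  return {
    model: "claude-3-5-sonnet-latest", // example model name
    max_tokens: 1024,
    system: [
      {
        type: "text" as const,
        text: evergreenContext, // Component 1: business context, re-used 1,000x
        cache_control: { type: "ephemeral" as const },
      },
    ],
    messages: [
      // Component 2: per-lead content, different on every call
      { role: "user" as const, content: leadSpecificPrompt },
    ],
  };
}
```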
Langfuse is great for LLM Ops. I'm just getting started with it, but love their data model consisting of spans (which track the temporal duration of a series of events), events (discrete events), and generations (calls to LLMs).

Spans help users understand the latency of the overall code execution at a very granular level, and how one might be able to optimize each step. Generations track every detail of the LLM call - the model, input tokens, output tokens, etc. - so you can really see what's going on under the hood and debug any wonky outputs.

Their free tier offers 50,000 events per month AND they're open-source and have a self-host option 🤯

If you're interested in levelling up your LLM Ops in GTM, you should absolutely check it out. Excited to dig into other features like prompt management and evals. Will share more on how those work.

p.s. I'm not getting paid for this post
p.p.s. their docs could be better; took me a while to set it up correctly
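To make that data model concrete, here's a plain-TypeScript illustration of the three object types and the kind of question span data answers (this mimics the concepts only, not the actual Langfuse SDK):

```typescript
// Plain-TypeScript illustration of the Langfuse data model described above
// (not the SDK itself): spans are timed ranges, events are points in time,
// and generations are LLM calls with model/token details.

type Span = { name: string; startMs: number; endMs: number };
type LfEvent = { name: string; atMs: number };
type Generation = { name: string; model: string; inputTokens: number; outputTokens: number };

function spanLatencyMs(s: Span): number {
  return s.endMs - s.startMs;
}

// The kind of question granular span data answers: which step is the bottleneck?
function slowestSpan(spans: Span[]): Span {
  return spans.reduce((a, b) => (spanLatencyMs(b) > spanLatencyMs(a) ? b : a));
}
```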
walkthrough: how I use Attio + AirOps to research a person based on their LinkedIn profile and posts

Here is how I've automated researching anyone who fills out an inquiry form on our website. We use Attio as our source of truth + AirOps to run the AI workflows.

1. We find the person's LinkedIn profile URL by triangulating across the data points we have from the form (company name, URL + person's name, email)
2. We feed the LinkedIn URL to custom API endpoints to enrich with the person's profile as well as recent posts
3. We combine the results with a custom prompt that asks LLMs to extract information that is specifically relevant to me (i.e. tell me about the prospect's values/character based on their posts) and our business (i.e. how knowledgeable are they about AI + GTM)
4. We then push the research into Attio as notes and Notion as markdown pages

tl;dw takeaways:
- OpenAI's web search does a good job finding a person's LinkedIn URL based on the company's domain + person's name. Based on a small sample size, but I was impressed
- The value in building custom workflows like this is the control you have over prompts; you can write a prompt to get you unique data points that others sleep on
- Always build your automations in a modular, loosely coupled fashion so you can swap components in and out

For this v0 we're only doing qualitative research, which helps me prep for the initial call. But I'm starting to think about quantifying the key data points and generating a lead score as our inbound volume picks up.

Hope this helps anyone thinking about how to automate inbound prospect research.
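The "modular, loosely coupled" point can be sketched like this: each of the four steps is an injected function, so any one component (URL finder, enricher, research prompt, destination) can be swapped without touching the rest. All names and shapes here are hypothetical:

```typescript
// Sketch of the four-step inbound research pipeline with each step injected.

type Inquiry = { companyName: string; companyUrl: string; personName: string; email: string };
type Enriched = { profile: string; posts: string[] };

type Steps = {
  findLinkedinUrl: (inq: Inquiry) => Promise<string>;
  enrich: (linkedinUrl: string) => Promise<Enriched>;
  summarize: (data: Enriched) => Promise<string>;
  publish: (notes: string) => Promise<void>;
};

async function researchInbound(inq: Inquiry, steps: Steps): Promise<string> {
  const url = await steps.findLinkedinUrl(inq);  // step 1: triangulate the profile URL
  const enriched = await steps.enrich(url);      // step 2: profile + recent posts
  const notes = await steps.summarize(enriched); // step 3: custom research prompt
  await steps.publish(notes);                    // step 4: push to Attio/Notion
  return notes;
}
```

Because the pipeline only depends on the `Steps` interface, swapping, say, the enrichment provider is a one-line change at the call site.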
Is this what people call vibe GTM-ing these days? Connecting Claude to MCPs to streamline the research process makes standing up an experiment super easy. I'll record a walkthrough on this soon using The Workflow Company as an example.
Corporate life can feel monotonous. Days turn into weeks, months into quarters, and years into more years. Then suddenly, on some random day, something happens that reshapes how you think about work and what it means to you.

For me, it was a Friday night at the investment bank where I was a lowly analyst. A big RFP had dropped that afternoon, one that would make or break the firm's year. Our weekend was gone, the next few weeks shot. Our VP stepped out early evening, saying he'd see us Saturday morning for a work session. But around 10pm, he unexpectedly returned to the office. When we asked about his daughter's birthday he was supposed to attend, he told us he'd gone to dinner but left early because this RFP was too important. When we asked if his daughter was sad, he said, "Yeah, she was devastated. But when she grows up, she'll be grateful for the sacrifice I'm making."

I remember realizing that I was looking at what I might one day become. That short exchange defined how I thought about family, work, and money. I could see through this man's lens - he'd already gone far down a track and was committed to seeing it through, making justifications for deprioritizing family.

I quit banking not long after. I knew if I stayed, I'd start making the same trade-offs and justifications to myself, and my values would start morphing to be congruent with my actions.

Today, with a toddler at home, I'm building my business with clear non-negotiables:
- I'm there for breakfast with my son every morning
- I take him to daycare myself
- I'm home for dinner, bath time, and bedtime every night

It hasn't been easy. The uncertainty and sporadic income are taxing. There were months where I wasn't making any money, watching peers get promotions and big payoffs, constantly questioning my choice. But seeing the alternative through that VP's experience made me certain that designing a life aligned with my values was worth it.

I've learned to treat strong emotional reactions as compasses. When something feels deeply right or wrong, that's valuable information about what matters to me. The daily monotony can cloud this clarity, making it easy to drift. But recognizing these moments gives you chances to steer back, making choices - even small ones - that better reflect who you really are.