Get the LinkedIn stats of Olga V. Mack and many LinkedIn Influencers by Taplio.
As a child in Ukraine, I dreamed I’d create a stunning art portfolio, graduate 8th grade, and leap from vocational art school to a career in fine art. Science, math, and history—who needed them when university costs were too high? My engineer parents were supportive but wary.

At age 13, we moved to Candy Land (the San Francisco Bay Area and Silicon Valley). There I learned that technology can artfully lead to beautiful and functional results when promoting the rule of law. Today, law + tech = a lifelong career of embracing change, running toward risk, enhancing work/life balance, and transforming the practice of law.

I’m always writing the next chapter in the Big Book of Legal Transformation, focused on the technology trends shaping our legal future. I advocate for innovation the way true tech geeks do: with unbridled enthusiasm. I’ve led through transitions and tech transformations in BigLaw firms, Fortune 500 legal departments, and countless pre-IPO startups, teaming up with the latest technologies (e.g., SaaS, data analytics, AI, ML, blockchain, crypto, and Web3) before they earned their buzz. And when the next shot of tech disruption arrives, I’ll be ready to add my fun twist to the mix.

The rule of law is integral to how we and our governments, organizations, and households prosper. My belief that law should benefit every individual, every day, seamlessly drives me to:

🖼️ Craft a universal language of law that speaks to all communities. Let’s use practical design and visual elements to explain abstract legal concepts. “Legalese” is like spaghetti code: a tangled mishmash. As a young immigrant to the US, the challenge of speaking broken English showed me how valuable a shared language is for promoting unity through mutual understanding.

⚖️ Enable “law as a service for all,” using technology to solve issues of access and affordability. I’m like a kid in a candy shop hollering, “More legal innovation, please!” Let’s wield tech like a magic wand to streamline well-structured processes and support human relationships, making the law more transparent and accessible to all.

💻 Bring the law up to pace with modern society. Law can no longer tick along like an old watch: expensive, clunky, and barely functional. Not when business pros can set a smart contract to self-execute on a blockchain and AI can increasingly automate mundane tasks. Using disruptive technologies, we will all flourish when our legal systems meet the needs of our times.

Join me for our next adventure!
Check out Olga V. Mack's verified LinkedIn stats (last 30 days)
If your AI is technically flawless but socially tone-deaf, you’ve built a very expensive problem.

AI isn’t just about perfecting the math. It’s about understanding people. Some of the biggest AI failures don’t come from bad code but from a lack of perspective.

I once worked with a team that built an AI risk assessment tool. It was fast, efficient, and technically sound. But when tested in the real world, it disproportionately flagged certain demographics. The issue wasn’t the intent—it was the data. The team had worked in isolation, without input from legal, ethics, or the people the tool would impact.

The fix? Not more code. More conversations. Once we brought in diverse perspectives, we didn’t just correct bias—we built a better, more trusted product.

What this means for AI leaders:
• Bring legal, ethics, and diverse voices in early. If you’re not, you’re already behind.
• Turn compliance into an innovation edge. Ethical AI isn’t just safer—it’s more competitive.
• Reframe legal as a creator, not a blocker. The best lawyers don’t just say no; they help find the right yes.
• Design for transparency, not just accuracy. If an AI can’t explain itself, it won’t survive long-term.

I break this down further in my latest newsletter—check it out!

What’s the biggest challenge you’ve seen in AI governance? How can legal and engineering work better together? Let’s discuss.

--------
🚀 Olga V. Mack
🔹 Building trust in commerce, contracts & products
🔹 Sales acceleration advocate
🔹 Keynote Speaker | AI & Business Strategist
📩 Let’s connect & collaborate
📰 Subscribe to Notes to My (Legal) Self
Your watch can now see you. And your dog. And your barista. And that stranger in the background just trying to live their life.

Apple’s rumored AI-powered Watch isn’t just about fitness anymore—it’s a camera-equipped, always-on, data-collecting machine on your wrist. And... it’s capturing biometric and visual data—like faces, surroundings, even people who never signed up to be part of your Apple ecosystem.

That kind of data isn’t just “interesting.” It’s sensitive, and in many places it’s legally protected.

So… what now? Are those images stored? Shared? Used to train AI? If your watch grabs someone’s face without consent, who’s on the hook?

Product lawyers have to figure out where this fits in biometric data frameworks—and fast. Because it’s not just about compliance. It’s about designing for trust, accountability, and real-world risk.

I dive into the privacy angle in my video Watches That Watch You. Watch it (pun intended). This one’s bigger than step counts.
AI contracts may look like SaaS agreements on the surface—but under the hood, the terms around data use can diverge significantly.

According to TermScout data featured in my recent Law.com article, 92% of AI vendor contracts grant providers broad rights to customer data, compared to 63% across broader SaaS agreements. That’s a meaningful gap—one that’s worth examining more closely.

Often, vendors include language like “performance improvement” or “aggregated analytics” to support model development and service enhancements. These clauses aren't inherently problematic—but without clear boundaries, they can lead to unintended outcomes, such as reuse of customer data for broader commercial purposes.

If you’re reviewing or negotiating an AI contract, here are a few ways to strike a more thoughtful balance:
• Align data rights with purpose. Define use narrowly to what's essential for service delivery, and clarify what “improvement” really means.
• Address training and reuse upfront. Consider whether your organization is comfortable with its data being used to train models that power other customer experiences.
• Plan for offboarding. Set clear expectations for data deletion, retention, and anonymization when the relationship ends.
• Clarify aggregation. “Aggregated and anonymized” data often lives in a gray area—define how it can and can’t be used.

These are not just hypothetical concerns. In highly regulated industries like healthcare and finance, vague terms around data rights can carry real risk—both legal and operational.

That’s why we’re taking this conversation further in: AI Contracts Explained – Episode 5 — link in the comments
🗓 Friday, April 4, 2025
🕛 12 PM ET | 9 AM PT on LinkedIn Live

With:
• Laura Frederick, CEO of How to Contract
• Linsey Krolik, Professor at Santa Clara Law
• Will Dugoni, Head of Commercial Legal at Webflow

We’ll walk through actual contract language and share strategies for building balanced, future-ready agreements.
This post draws on data and insights from a Law.com article—link in the comments. As data becomes the engine of innovation, protecting its use in contracts isn’t about control—it’s about clarity.

Will you be tuning in?
[LinkedIn Post – "Legal Challenges in AI and Emerging Technologies"]

Everyone talks about AI risk, but here’s something no one tells you—your company can be held liable for decisions your AI makes, even if no human was directly involved. “The algorithm did it” is not a legal defense.

I once worked with a company that built an AI-powered hiring tool. It was efficient, fast… and unknowingly biased. Candidates from certain backgrounds were being filtered out—not because anyone programmed it that way, but because the AI learned from biased historical hiring data.

When I pointed this out, the team’s first reaction was, “But we didn’t intend for that to happen.” My response? “Intent doesn’t matter—impact does.” That’s the legal reality of AI.

So how do you stay ahead of AI’s legal pitfalls? One of the biggest shifts I’ve seen? Stop treating AI like a product—start treating it like a legal entity. Just like a company has governance, AI needs policies, audits, and accountability structures. Who’s responsible for its decisions? How is it monitored? How is bias corrected? These aren’t “nice to haves”—they’re business survival strategies.

If you’re in legal, don’t wait for AI laws to be written—help shape responsible policies now. If you’re in product, assume your AI will be challenged legally and build for that reality.

Want more insights? Check out my latest video, where I break this down even further. Let’s make AI legally defensible! 🚀

--------
💥 I’m Olga V. Mack
🔺 Expert in AI & transformative tech for product counseling
🔺 Upskilling human capital for digital transformation
🔺 Leading change management in legal innovation & operations
🔺 Keynote speaker on the intersection of business, law, & tech
🔝 Let’s connect
🔝 Subscribe to Notes to My (Legal) Self newsletter
[LinkedIn Post – "What Business Leaders Wish Their Lawyers Knew"]

Ever get legal advice that feels more like a history lesson than a solution? Business leaders don’t need case law—they need clear, actionable guidance. And if legal teams don’t deliver that, we risk becoming obstacles instead of partners.

Early in my career, I realized something: product teams don’t ignore legal because they want to. They ignore legal because sometimes, we make it too hard to follow. If our advice is vague, overly cautious, or buried in legalese, teams will either avoid us or take risks without us. And that is where real problems start.

So how do we fix this? One of the biggest shifts I made? Stop over-explaining—start simplifying. If legal advice isn’t clear in 30 seconds, it’s too complicated. Instead of, “This raises enforceability concerns under X statute,” say, “Here’s how we can structure this so it actually works.” Instead of a 10-page memo, try a one-page decision framework. The easier legal is to apply, the more teams will apply it.

If you’re in legal, challenge yourself: Are you giving advice that teams can act on right now? If you’re a business leader, don’t see legal as a barrier—engage us early, and let’s build together.

Want more insights? Check out my latest video, where I break this down even further. Let’s make legal a business accelerator! 🚀
What happens when your patent meets a satellite moving 17,000 mph? Spoiler: It’s not pretty, and definitely not covered in most law school syllabi.

We’ve entered the era where intellectual property is physically colliding with infrastructure. I’m talking about Apple and SpaceX sparring over satellite-powered smartphone coverage—a battle that’s part tech turf war, part IP showdown, part “who’s-launching-what-tomorrow?”

In my video, I explain how spectrum is the new real estate. Check it out. But here’s where it gets really interesting for business leaders and in-house lawyers:

Products are no longer static—they rely on entire ecosystems: satellites, signals, APIs, even weather patterns. Your IP is only as strong as the infrastructure it rides on.

Patent disputes? Welcome to orbital enforcement. Who owns the tech that makes off-grid messaging possible—and what if it needs a competitor’s satellite to function?

Legal strategy now is product strategy. Drafting a license today might mean planning how your app survives a geopolitical spectrum dispute in 2027.

This isn’t about hypothetical law. This is happening right now, and legal teams are behind the scenes making sure innovation actually works in the real (and now orbital) world.

Curious how legal counsel fits into product ecosystems like this? Let’s talk. Or better yet, start by watching the video I just dropped. It might change the way you see your phone—and your next contract.
Ever feel like you need a court stenographer, a mind reader, and a personal assistant just to survive a day of Zoom meetings?

Zoom just rolled out a wave of new AI-powered features — like meeting summaries, multi-speaker tools, and curated screen sharing — and honestly, it’s like they finally read the room. Literally.

I used to pretend I’d remember what happened on back-to-back calls. I didn’t. Then I tried the AI summaries. At first, I thought, “Hmm… this recap feels a little off.” Then I realized… so am I. It’s kind of amazing how much of a conversation I’m actually missing, even when I’m fully present.

Now, the AI quietly picks up what my brain doesn’t — organizing action items, tracking who said what, and letting me skip the painful rewatch. Honestly, I’m in love. Full disclosure: I fall in love easily. But this time it feels justified.

This isn’t a flashy tech flex — it’s a sanity feature. Built in, ready to go, no IT ticket required.

So now I’m wondering: Is AI your new favorite meeting attendee? How are you using these features with your team? And be honest — are you reading the summary instead of showing up live yet?
Let’s stop pretending that privacy is something we toggle in a settings menu. It’s not. It’s a business model — and increasingly, a broken one.

23andMe’s bankruptcy didn’t just expose a company in distress — it exposed a business model that monetizes trust until it collapses. The real product wasn’t the test kit. It was your data. And now, your DNA — the ultimate fingerprint — is part of a fire sale.

This isn’t just a tech story. It’s a legal one, an ethical one, and frankly, a human one.

Here’s what this means: If privacy is sold, it was never protected. If data can be transferred without your say, consent was never meaningful. And if companies profit more from your data than your purchase, you’re not the user — you’re the revenue stream.

Lawyers: it’s time to help build products with privacy baked into the business model, not slapped on at the end. Everyone else: start asking harder questions before you share your data.

Check out my latest video — “You’re the Product” — to see how 23andMe flipped the script on who the real customer was.

Privacy isn’t dead. But we have to stop outsourcing it to companies that treat it like inventory.
Ever tried picking a side in a tech debate and realized… it’s not that simple?

Right now, AMD is betting big on ROCm, its open-source AI platform. Nvidia’s CUDA? Proprietary. Two giants, two different bets on how the future of AI should be built.

As a product counsel and tech optimist, here’s what I see: this isn’t just a question of licensing models—it’s a leadership question. Open vs. closed is also collaborative vs. controlled. Inclusive vs. curated. And neither is always right.

For lawyers, this impacts everything from IP strategy to product velocity. Open-source can attract ecosystems fast—but also needs stronger risk management. Proprietary tools may offer control—but can slow adoption and raise lock-in questions.

And for the rest of us—entrepreneurs, builders, technologists—it’s about understanding the kind of future we want to shape. One where innovation is distributed… or one where it’s centralized.

Lisa Su’s engineering mindset (and her strategy shift at AMD) shows us how to think in systems, not silos. I talked more about that in my video: “Engineer Your Leadership.” Check it out. There’s real wisdom in leading like a builder.

What kind of ecosystem are you building?
We all love the promise of AI that “just works.” But what happens when AI doesn’t just assist—it decides?

With ServiceNow’s $2.85B acquisition of Moveworks, we’re seeing the rise of agentic AI—systems that don’t just provide answers but take action. That’s a massive leap forward for efficiency. It also raises a huge question: Who’s accountable when AI makes the wrong call?

I’ve been thinking about three key areas where businesses (and their leaders and, yes, lawyers) need to get ahead of this:

1️⃣ Liability & Accountability – If an AI system takes an action that causes harm—financial, legal, or reputational—who’s responsible? The developer? The enterprise using it? The end user? Courts and regulators are just starting to grapple with this.

2️⃣ Bias & Transparency – AI assistants rely on vast datasets, but what if that data bakes in hidden biases? Agentic AI means decisions happen fast, and without visibility into how those decisions are made, bias can scale unchecked.

3️⃣ Regulation & Compliance – With new AI laws emerging globally, businesses need to ensure AI-driven decisions don’t violate privacy, anti-discrimination, or industry-specific regulations. The legal landscape is shifting fast.

This acquisition is just the beginning. AI that acts autonomously isn’t some distant future—it’s here. The companies that figure out how to build trustworthy AI will define the next era of business.

What do you think? Should AI have more autonomy, or does this open the door to too many risks? Let’s discuss. And if you're curious about how AI M&A is shaping the industry, check out my latest video on the business of AI acquisitions.
Southwest Airlines just shook up its entire identity by scrapping its “Bags Fly Free” policy. Customers are outraged. But here’s the bigger issue—what happens when a company changes a key policy you relied on? Do you, as a consumer, have any real rights?

I covered the fragility of brand loyalty in my last video—how trust erodes when businesses abandon their differentiators. But let’s talk about your power as a consumer. Because whether it’s airlines, subscription services, or tech platforms, companies change policies all the time. And often, you don’t have much say.

Here are three things to keep in mind when businesses pull a Southwest on you:

First, read the fine print. Many companies reserve the right to change policies at any time. Airlines, for example, bury these changes in their contracts of carriage.

Second, loyalty programs aren’t customer guarantees. Southwest’s Rapid Rewards members feel blindsided, but airline loyalty perks aren’t legally protected. Businesses tweak them whenever it benefits their bottom line. If you depend on a company’s perks, remember—they can disappear overnight.

Third, consumers do have power—when they act together. While an individual complaint might not matter, class-action lawsuits, regulatory pressure, and public backlash do get companies to rethink their decisions.

So, what’s the move? If you’re frustrated with Southwest, or any company changing the rules on you, speak up. Companies tend to listen when their bottom line is at risk.

What are your thoughts? Have you ever felt burned by a policy change? Let’s discuss this. And if you missed my take on brand loyalty, check out my last video—because this issue is bigger than just airlines.
Intel’s new CEO, Lip-Bu Tan, is on a mission to make Intel faster—streamlining operations, rethinking AI, and cutting friction. Because in tech, speed wins.

But speed isn’t just about AI chips. It’s about how business gets done. And nothing kills deals quietly like slow contracts.

I’ve seen it happen—sales teams build momentum, the deal is hot… then legal sends a contract no one trusts. Emails slow. Redlines pile up. The deal stalls.

The fastest-moving companies flip the script. They make contracts trust signals, not hurdles. They remove hidden risks, so buyers don’t get stuck in endless back-and-forth. And they stick to market standards, keeping negotiations smooth.

Intel is proving that speed is everything. Are your contracts keeping up? If not, check out my video on how contracts can accelerate trust instead of slowing deals down. 🚀
Who’s watching the AI watchers?

Meta is making its own AI chips. That means more control over its AI models, better efficiency… and a lot more responsibility.

In my video below, I talk about antitrust risks—how Big Tech’s chip wars could reshape competition. But let’s talk about something even bigger: data security and privacy.

Meta’s AI models run on massive datasets. Your messages. Your photos. Your interactions. When they relied on Nvidia’s chips, at least there was some third-party oversight. Now? Meta controls the hardware, the software, and the data.

So, what does this mean for you?

1️⃣ Less oversight. When companies control their entire AI pipeline, there’s less transparency. Who’s auditing their security?

2️⃣ Bigger risks. A single breach could expose entire AI training datasets. Think Cambridge Analytica, but on steroids.

3️⃣ More legal battles. Regulators are already investigating AI and privacy violations. If Meta mismanages this, expect lawsuits and new privacy laws.

AI is the future, but who’s keeping the AI giants accountable? Check out my take on antitrust in the video. Now, let me know—do you trust Meta to handle its own AI security? Or is this another privacy disaster waiting to happen?
Google just made its biggest acquisition ever—$32B for Wiz. And it’s not just about security. It’s about who controls the cloud.

For years, Google Cloud has been behind AWS and Microsoft Azure. Now, with Wiz’s cybersecurity tech, Google is making a power move. But will it make the cloud safer—or more controlled?

First, security is the new battleground. AI and cloud adoption are exploding, and whoever owns security owns the future.

Second, Google says Wiz will still work with AWS and Azure. But will regulators believe them, or see this as Big Tech locking down competition?

Third, this deal could shake up tech M&A. If Google is spending $32B, will Amazon and Microsoft respond with billion-dollar moves of their own?

I just broke down what this means for you in my latest video. Check it out!

Do you think this deal makes the cloud safer or shifts too much power to one company? Let’s talk.
AI. ESG. And other acronyms that weren’t supposed to define a legal career. But here we are.

AI governance is no longer optional—it’s the next frontier for in-house counsel. And the shift from ESG to AI governance? It’s happening in real time.

That’s why Christine Uri will be joining Notes to My (Legal) Self LinkedIn Live on Wednesday, March 12, 2025, at 10:10 AM PST. A legal executive and former Chief Legal and Sustainability Officer at ENGIE Impact, Christine has built a career on making companies more profitable, sustainable, and human-centric. Now, through CURI Insights, she helps corporate leaders up-level ESG performance and navigate the uncharted waters of AI risk management.

The conversation will cover:
✅ Why ESG and AI governance are more connected than they seem
✅ The biggest AI risks facing corporate legal teams today
✅ How GCs and CLOs can balance compliance, innovation, and AI ethics
✅ What in-house legal teams must do right now to stay ahead of AI regulation

A must-attend event for legal professionals who want to stay ahead of AI-driven change. Bring questions, join the conversation, and gain practical insights on navigating AI governance before the robots start writing the rules.

Before the session, let’s discuss:
1️⃣ What’s the biggest AI governance challenge facing in-house legal teams today?
2️⃣ How should legal teams prioritize AI risks when everything is evolving so fast?
3️⃣ What’s one AI-related policy or regulation that will be a game-changer?

Drop thoughts in the comments—and don’t forget to RSVP to the LinkedIn Live!