Get the LinkedIn stats of Luiza Jarovsky and many LinkedIn influencers by Taplio.
Luiza Jarovsky is the co-founder of the AI, Tech & Privacy Academy and leads AI governance training programs attended by 900+ participants worldwide. She is also the author of Luiza's Newsletter, a weekly publication on AI policy, compliance & regulation with 35,000+ subscribers in 150+ countries. In 2023, Luiza was named a LinkedIn Top Voice in AI; in 2021, she received a Westin Scholar Award from the IAPP; and in 2020, she was awarded the "President's Scholarship for Excellence in Science and Innovation" by the President of Israel for her Ph.D. research. She previously worked as a startup lawyer and published three books in Portuguese. She also edited the law book "Direito para Empreendedores" (Law for Entrepreneurs). Originally from Brazil, Luiza graduated in Law from the University of São Paulo, completed her Master's at Tel Aviv University, and is pursuing her Ph.D. there as well. She speaks English, Portuguese, French, Spanish, Italian, German, and Hebrew, and is a proud mother of 3. Relevant links and contact form: www.luizajarovsky.com
🚨 BREAKING: The U.S. Copyright Office SIDES WITH CONTENT CREATORS, concluding in its latest report that the fair use exception likely does not apply to commercial AI training.

From the report's conclusion:

"Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs—all of which can affect the market.

When a model is deployed for purposes such as analysis or research—the types of uses that are critical to international competitiveness—the outputs are unlikely to substitute for expressive works used in training. But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.

For those uses that may not qualify as fair, practical solutions are critical to support ongoing innovation. Licensing agreements for AI training, both individual and collective, are fast emerging in certain sectors, although their availability so far is inconsistent. Given the robust growth of voluntary licensing, as well as the lack of stakeholder support for any statutory change, the Office believes government intervention would be premature at this time. Rather, licensing markets should continue to develop, extending early successes into more contexts as soon as possible. In those areas where remaining gaps are unlikely to be filled, alternative approaches such as extended collective licensing should be considered to address any market failure.

In our view, American leadership in the AI space would best be furthered by supporting both of these world-class industries that contribute so much to our economic and cultural advancement. Effective licensing options can ensure that innovation continues to advance without undermining intellectual property rights. These groundbreaking technologies should benefit both the innovators who design them and the creators whose content fuels them, as well as the general public."

My comments: Although this is a pre-publication version, the report states: "The Office is releasing this pre-publication version of Part 3 in response to congressional inquiries and expressions of interest from stakeholders. A final version will be published in the near future, without any substantive changes expected in the analysis or conclusions."

It's GREAT NEWS for content creators/copyright holders, especially as the U.S. Copyright Office's opinion will likely influence present and future AI copyright lawsuits in the U.S. As I and many others wrote before, licensing deals seem to be the future of AI training.

👉 Read the full report in the link below.

👉 NEVER MISS my updates and analyses: join my newsletter's 61,000+ subscribers using the link below.
🚨 Did you know that OpenAI is building a social network? Most people won't notice it, but it's another PRIVACY trick. Here's why:

According to a report published by The Verge, OpenAI's early social media prototype would expand ChatGPT's new image generation capabilities to a social feed.

If you read my article about the Ghibli Effect, which was cited by dozens of media articles worldwide (link below), you know that the way OpenAI launched the new image generator was a smart privacy+PR trick to collect thousands of personal images to train its AI models.

When people *voluntarily* upload images to ChatGPT, it's much easier for OpenAI: they don't need to consider the privacy law challenges of scraping human faces from the internet or closing licensing deals with image banks. People are handing the images to OpenAI for free.

The trend is still ongoing, and people are voluntarily uploading personal images to turn them into miniature dolls or whatever the new avatar trend is.

OpenAI did not expect such a viral effect from the new image generator. According to reports, OpenAI DOUBLED its user base after the Ghibli Effect. Sam Altman also said that "10% of the world now uses our systems a lot."

OpenAI's engineers are smart and realized that there is something special about this image generator. Something that resonates with people. Not only that, it also generates the gold OpenAI needs to train its models: personal data.

So they had this idea: why not turn the image generation feature into a social feed? People would get the trendy avatars - or whatever other images they want - they would share them with friends, and OpenAI would have a constant flow of the gold it needs: personal data (which its competitors X and Meta already have).

So when OpenAI starts marketing its social network as a "new way to connect with friends," please don't fall for it. It's 2025, and OpenAI, Meta, X, and others only need your personal data to train AI.
👉 Subscribe to my newsletter and join our 58,900+ strong AI governance community using the link below #AI #AIGovernance #OpenAI #SocialNetwork #ChatGPT #Privacy
🚨 BREAKING: A new U.S. Congress report on DeepSeek highlights the current state of the AI race between the U.S. & China and what might come next. If you're interested in AI, privacy, and security, you CAN'T MISS it.

Key quotes:

"DeepSeek represents a profound threat to our nation's security. Although it presents itself as just another AI chatbot, offering users a way to generate text and answer questions, closer inspection reveals that the app siphons data back to the People's Republic of China (PRC), creates security vulnerabilities for its users, and relies on a model that covertly censors and manipulates information pursuant to Chinese law."

"Equally troubling, the model appears to have been built using stolen U.S. technology on the back of U.S. semiconductor chips that are prohibited from sale to China without an export license, and when it was released, PRC-affiliated social media accounts amplified and celebrated the model, according to Graphika research."

"This report documents some of the risks DeepSeek poses and explains how its development is based on common Chinese Communist Party (CCP) tactics designed to unlawfully undermine U.S. technological leadership and critical American policies to protect national security."

"The Committee's investigation found:
1. DeepSeek funnels Americans' data to the PRC through backend infrastructure connected to a U.S. government-designated Chinese military company.
2. DeepSeek covertly manipulates the results it presents to align with CCP propaganda, as required by Chinese law.
3. It is highly likely that DeepSeek used unlawful model distillation techniques to create its model, stealing from leading U.S. AI models.
4. DeepSeek's AI model appears to be powered by advanced chips provided by American semiconductor giant Nvidia and reportedly utilizes tens of thousands of chips that are currently restricted from export to the PRC."

"The Committee therefore makes the following recommendations:
1. Take swift action to expand export controls, improve export control enforcement, and address risks from PRC AI models.
2. Prevent and prepare for strategic surprise related to advanced AI."

👉 Download the full report below.

👉 If you want to stay up to date with the LATEST developments in AI governance, I invite you to join our 59,000+ strong AI governance community using the link below.

#AI #AIGovernance #AIPolicy #AIRace #DeepSeek #China #AIRegulation