Episodes

  • S3E15: 'New Certification: Enabling Privacy Engineering in AI Systems' with Amalia Barthel & Eric Lybeck
    2024/07/23

    In this episode, I'm joined by Amalia Barthel, founder of Designing Privacy, a consultancy that helps businesses integrate privacy into their operations; and Eric Lybeck, a seasoned independent privacy engineering consultant with over two decades of experience in cybersecurity and privacy. Eric recently served as Director of Privacy Engineering at Privacy Code. Today, we discuss: the importance of more training for privacy engineers on AI system enablement; why it's not enough for privacy professionals to solely focus on AI governance; and how their new hands-on course, "Privacy Engineering in AI Systems Certificate program," can fill this need.

    Throughout our conversation, we explore the differences between AI system enablement and AI governance and why Amalia and Eric were inspired to develop this certification program. They share examples of what is covered in the course and outline the key takeaways and practical toolkits that enrollees will get - including case studies, frameworks, and weekly live sessions throughout the program.

    Topics Covered:

    • How AI system enablement differs from AI governance and why we should focus on AI as part of privacy engineering
    • Why Eric and Amalia designed an AI systems certificate course that bridges the gaps between privacy engineers and privacy attorneys
    • The unique ideas and practices presented in this course and what attendees will take away
    • Frameworks, cases, and mental models that Eric and Amalia will cover in their course
    • How Eric & Amalia structured the Privacy Engineering in AI Systems Certificate program's coursework
    • The importance of upskilling for privacy engineers and attorneys


    Resources Mentioned:

    • Enroll in the 'Privacy Engineering in AI Systems Certificate program' (Save $300 with promo code PODCAST300 - enter the code in the Inquiry Form rather than purchasing the course directly)
    • Read: 'The Privacy Engineer's Manifesto'
    • Take the free European Commission's course, 'Understanding Law as Code'


    Guest Info:

    • Connect with Amalia on LinkedIn
    • Connect with Eric on LinkedIn
    • Learn about Designing Privacy


    Send us a Text Message.



    TRU Staffing Partners
    Top privacy talent - when you need it, where you need it.

    Shifting Privacy Left Media
    Where privacy engineers gather, share, & learn

    Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

    Copyright © 2022 - 2024 Principled LLC. All rights reserved.

    39 min
  • S3E14: 'Why We Need Fairness Enhancing Technologies Rather Than PETs' with Gianclaudio Malgieri (Brussels Privacy Hub)
    2024/06/25

    Today, I chat with Gianclaudio Malgieri, an expert in privacy, data protection, AI regulation, EU law, and human rights. Gianclaudio is an Associate Professor of Law at Leiden University, the Co-director of the Brussels Privacy Hub, Associate Editor of the Computer Law & Security Review, and co-author of the paper "The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness". In our conversation, we explore this paper and why privacy-enhancing technologies (PETs) are essential but not enough on their own to address digital policy challenges.

    Gianclaudio explains why PETs alone are insufficient solutions for data protection and discusses the obstacles to achieving fairness in data processing – including bias, discrimination, social injustice, and market power imbalances. We discuss data alteration techniques such as anonymization, pseudonymization, synthetic data, and differential privacy in relation to GDPR compliance. Plus, Gianclaudio highlights the issues of representation for minorities in differential privacy and stresses the importance of involving these groups in identifying bias and assessing AI technologies. We also touch on the need for ongoing research on PETs to address these challenges and share our perspectives on the future of this research.

    Topics Covered:

    • What inspired Gianclaudio to research fairness and PETs
    • How PETs are about power and control
    • The legal / GDPR and computer science perspectives on 'fairness'
    • How fairness relates to discrimination, social injustices, and market power imbalances
    • How data obfuscation techniques relate to AI / ML
    • How well the use of anonymization, pseudonymization, and synthetic data techniques addresses data protection challenges under the GDPR
    • How the use of differential privacy techniques may lead to unfairness
    • Whether the use of encrypted data processing tools and federated and distributed analytics achieves fairness
    • 3 main PET shortcomings and how to overcome them: 1) bias discovery; 2) harms to people belonging to protected groups and individuals' autonomy; and 3) market imbalances
    • Areas that warrant more research and investigation


    Resources Mentioned:

    • Read: "The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness"


    Guest Info:

    • Connect with Gianclaudio on LinkedIn
    • Learn more about Brussels Privacy Hub




    Privado.ai
    Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

    TRU Staffing Partners
    Top privacy talent - when you need it, where you need it.


    48 min
  • S3E13: 'Building Safe AR / VR / MR / XR Technology' with Spatial Computing Pioneer Avi Bar-Zeev (XR Guild)
    2024/06/18

    In this episode, I had the pleasure of talking with Avi Bar-Zeev, a true tech pioneer and the Founder and President of The XR Guild. With over three decades of experience, Avi has an impressive resume, including launching Disney's Aladdin VR ride, developing Second Life's 3D worlds, co-founding Keyhole (which became Google Earth), co-inventing Microsoft's HoloLens, and contributing to the Amazon Echo Frames. The XR Guild is a nonprofit organization that promotes ethics in extended reality (XR) through mentorship, networking, and educational resources.

    Throughout our conversation, we dive into privacy concerns in augmented reality (AR), virtual reality (VR), and the metaverse, highlighting increased data misuse and manipulation risks as technology progresses. Avi shares his insights on how product and development teams can continue to be innovative while still upholding responsible, ethical standards with clear principles and guidelines to protect users' personal data. Plus, he explains the role of eye-tracking technology and why he advocates classifying its data as health data. We also discuss the challenges of anonymizing biometric data, informed consent, and the need for ethics training across the tech industry.

    Topics Covered:

    • The top privacy and misinformation issues that Avi has noticed when it comes to AR, VR, and metaverse data
    • Why Avi advocates for classifying eye tracking data as health data
    • The dangers of unchecked AI manipulation and why we need to be more aware and in control of our online presence
    • The ethical considerations for experimentation in highly regulated industries
    • Whether it is possible to anonymize VR and AR data
    • Ways product and development teams can stay innovative while maintaining ethics and avoiding harm
    • AR risks vs VR risks
    • Advice and privacy principles to keep in mind for technologists who are building AR and VR systems
    • Understanding The XR Guild

    Resources Mentioned:

    • Read: The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology
    • Read: Our Next Reality

    Guest Info:

    • Connect with Avi on LinkedIn
    • Check out the XR Guild
    • Learn about Avi's Consulting Services




    Shifting Privacy Left Media
    Where privacy engineers gather, share, & learn

    TRU Staffing Partners
    Top privacy talent - when you need it, where you need it.


    52 min
  • S3E12: 'How Intentional Experimentation in A/B Testing Supports Privacy' with Matt Gershoff (Conductrics)
    2024/06/04

    Today, I'm joined by Matt Gershoff, Co-founder and CEO of Conductrics, a software company specializing in A/B testing, multi-armed bandit techniques, and customer research and survey software. With a strong background in resource economics and artificial intelligence, Matt brings a unique perspective to the conversation, emphasizing simplicity and intentionality in decision-making and data collection.

    In this episode, Matt dives into Conductrics' background, the role of A/B testing and experimentation in privacy, data collection at a specific and granular level, and the details of Conductrics' processes. He emphasizes the importance of intentionally collecting data with a clear purpose to avoid unnecessary data accumulation and touches on the value of experimentation in conjunction with data minimization strategies. Matt also discusses his upcoming talk at the PEPR Conference and shares his hopes for what privacy engineers will learn from the event.

    Topics Covered:

    • Matt’s background and how he started A/B testing and experimentation at Conductrics
    • The major challenges that arise when companies run experiments and how Conductrics works to solve them
    • Breaking down A/B testing
    • How being intentional about A/B testing and experimentation supports privacy
    • The process of data collection, testing, and experimentation
    • Collecting data while minimizing privacy risks
    • The value of attending the USENIX Conference on Privacy Engineering Practice & Respect (PEPR24) and what to expect from Matt’s talk


    Guest Info:

    • Connect with Matt on LinkedIn
    • Learn more about Conductrics
    • Read about George Box's quote, "All models are wrong"
    • Learn about the PEPR Conference




    Privado.ai
    Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

    Shifting Privacy Left Media
    Where privacy engineers gather, share, & learn


    45 min
  • S3E11: 'Decision-Making Governance & Design: Combating Dark Patterns with Fair Patterns' with Marie Potel-Saville (Amurabi & FairPatterns)
    2024/04/30

    In this episode, Marie Potel-Saville joins me to shed light on the widespread issue of dark patterns in design. With her background in law, Marie founded the 'FairPatterns' project with her award-winning privacy and innovation studio, Amurabi, to detect and fix large-scale dark patterns. Throughout our conversation, we discuss the different types of dark patterns, why it is crucial for businesses to prevent them from being coded into their websites and apps, and how designers can ensure that they are designing fair patterns in their projects.


    Dark patterns are interfaces that deceive or manipulate users into unintended actions by exploiting cognitive biases inherent in decision-making processes. Marie explains how dark patterns are harmful to our economic and democratic models, their negative impact on individual agency, and the ways that FairPatterns provides countermeasures and safeguards against the exploitation of people's cognitive biases. She also shares tips for designers and developers for designing and architecting fair patterns.

    Topics Covered:

    • Why Marie shifted her career path from practicing law to deploying and lecturing on Legal UX design & combatting Dark Patterns at Amurabi
    • The definition of ‘Dark Patterns’ and the difference between them and ‘deceptive patterns’
    • What motivated Marie to found FairPatterns.com and her science-based methodology to combat dark patterns
    • The importance of decision making governance
    • Why execs should care about preventing dark patterns from being coded into their websites, apps, & interfaces
    • How dark patterns exploit our cognitive biases to our detriment
    • What global laws say about dark patterns
    • How dark patterns create structural risks for our economies & democratic models
    • How "Fair Patterns" serve as countermeasures to Dark Patterns
    • The 7 categories of Dark Patterns in UX design & associated countermeasures
    • Advice for designers & developers to ensure that they design & architect Fair Patterns when building products & features
    • How companies can boost sales & gain trust with Fair Patterns
    • Resources to learn more about Dark Patterns & countermeasures

    Guest Info:

    • Connect with Marie on LinkedIn
    • Learn more about Amurabi
    • Check out FairPatterns.com

    Resources Mentioned:

    • Learn about the 7 Stages of Action Model
    • Take FairPattern's course: Dark Patterns 101
    • Read Deceptive Design Patterns
    • Listen to FairPatterns' Fig




    Privado.ai
    Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

    TRU Staffing Partners
    Top privacy talent - when you need it, where you need it.

    Shifting Privacy Left Media
    Where privacy engineers gather, share, & learn


    54 min
  • S3E10: 'How a Privacy Engineering Center of Excellence Shifts Privacy Left' with Aaron Weller (HP)
    2024/04/09

    In this episode, I sat down with Aaron Weller, the Leader of HP's Privacy Engineering Center of Excellence (CoE), focused on providing technical solutions for privacy engineering across HP's global operations. Throughout our conversation, we discuss: what motivated HP's leadership to stand up a CoE for Privacy Engineering; Aaron's approach to staffing the CoE; how a CoE can shift privacy left in a large, matrixed organization like HP's; and, how to leverage the CoE to proactively manage privacy risk.

    Aaron emphasizes the importance of understanding an organization's strategy when creating a CoE and shares his methods for gathering data to inform the center's roadmap and team building. He also highlights the great impact that a Center of Excellence can offer and gives advice for implementing one in your organization. We touch on the main challenges in privacy engineering today and the value of designing user-friendly privacy experiences. In addition, Aaron provides his perspective on selecting the right combination of Privacy Enhancing Technologies (PETs) for anonymity, how to go about implementing PETs, and the role that AI governance plays in his work.

    Topics Covered:

    • Aaron’s deep privacy and consulting background and how he ended up leading HP's Privacy Engineering Center of Excellence
    • The definition of a "Center of Excellence" (CoE) and how a Privacy Engineering CoE can drive value for an organization and shift privacy left
    • What motivates a company like HP to launch a CoE for Privacy Engineering and what its reporting line should be
    • Aaron's approach to creating a Privacy Engineering CoE roadmap; his strategy for staffing this CoE; and the skills & abilities that he sought
    • How HP's Privacy Engineering CoE works with the business to advise on, and select, the right PETs for each business use case
    • Why it's essential to know the privacy guarantees that your organization wants to assert before selecting the right PETs to get you there
    • Lessons Learned from setting up a Privacy Engineering CoE and how to get executive sponsorship
    • The amount of time that Privacy teams have had to work on AI issues over the past year, and advice on preventing burnout
    • Aaron's hypothesis about the value of getting an early handle on governance over the adoption of innovative technologies
    • The importance of being open to continuous learning in the field of privacy engineering

    Guest Info:

    • Connect with Aaron on LinkedIn
    • Learn about HP's Privacy Engineering Center of Excellence
    • Review the OWASP Machine Learning Security Top 10
    • Review the OWASP Top 10 for LLM Applications




    Privado.ai
    Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

    TRU Staffing Partners
    Top privacy talent - when you need it, where you need it.

    Shifting Privacy Left Media
    Where privacy engineers gather, share, & learn


    40 min
  • S3E9: 'Building a Culture of Privacy & Achieving Compliance without Sacrificing Innovation' with Amaka Ibeji (Cruise)
    2024/04/02

    Today, I’m joined by Amaka Ibeji, Privacy Engineer at Cruise, where she designs and implements robust privacy programs and controls. In this episode, we discuss Amaka's passion for creating a culture of privacy and compliance within organizations and engineering teams. Amaka also hosts the PALS Parlor Podcast, where she speaks to business leaders and peers about privacy, AI governance, leadership, and security and explains technical concepts in a digestible way. The podcast aims to enable business leaders to do more with their data and provides a way for the community to share knowledge with one another.

    In our conversation, we touch on her career trajectory from security engineer to privacy engineer and the intersection of cybersecurity, privacy engineering, and AI governance. We highlight the importance of early engagement with various technical teams to enable innovation while still achieving privacy compliance. Amaka also shares the privacy-enhancing technologies (PETs) that she is most excited about, and she recommends resources for those who want to learn more about strategic privacy engineering. Amaka emphasizes that privacy is a systemic, 'wicked problem' and offers her tips for understanding and approaching it.

    Topics Covered:

    • How Amaka's compliance-focused experience at Microsoft helped prepare her for her Privacy Engineering role at Cruise
    • Where privacy overlaps with the development of AI
    • Advice for shifting privacy left to make privacy stretch beyond a compliance exercise
    • What works well and what doesn't when building a 'Culture of Privacy'
    • Privacy by Design approaches that make privacy & innovation a win-win rather than zero-sum game
    • Privacy Engineering trends that Amaka sees; and, the PETs about which she's most excited
    • Amaka's Privacy Engineering resource recommendations, including:
      • Hoepman's "Privacy Design Strategies" book;
      • The LINDDUN Privacy Threat Modeling Framework; and
      • The PLOT4AI Framework
    • "The PALS Parlor Podcast," focused on Privacy Engineering, AI Governance, Leadership, & Security
      • Why Amaka launched the podcast;
      • Her intended audience; and
      • Topics that she plans to cover this year
    • The importance of collaboration; building a community of passionate privacy engineers, and addressing the systemic issue of privacy

    Guest Info & Resources:

    • Follow Amaka on LinkedIn
    • Listen to The PALS Parlor Podcast
    • Read Jaap-Henk Hoepman's "Privacy Design Strategies (The Little Blue Book)"
    • Read Jason Cronk's "Strategic Privacy by Design, 2nd Edition"
    • Check out The LINDDUN Privacy Threat Modeling Framework
    • Check out The Privacy Library of Threats for Artificial Intelligence (PLOT4AI)




    Privado.ai
    Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

    TRU Staffing Partners
    Top privacy talent - when you need it, where you need it.

    Shifting Privacy Left Media
    Where privacy engineers gather, share, & learn


    43 min
  • S3E8: 'Recent FTC Enforcement: What Privacy Engineers Need to Know' with Heidi Saas (H.T. Saas)
    2024/03/26

    In this week's episode, I am joined by Heidi Saas, a privacy lawyer with a reputation for advocating for products and services built with privacy by design and against the abuse of personal data. In our conversation, she dives into recent FTC enforcement, analyzing five FTC actions and some enforcement sweeps by Colorado & Connecticut.

    Heidi shares her insights on the effect of the FTC enforcement actions and what privacy engineers need to know, emphasizing the need for data management practices to be transparent, accountable, and based on affirmative consent. We cover the role of privacy engineers in ensuring compliance with data privacy laws; why 'browsing data' is 'sensitive data;' the challenges companies face regarding data deletion; and the need for clear consent mechanisms, especially with the collection and use of location data. We also discuss the need to audit the privacy posture of products and services - which includes a requirement to document who made certain decisions - and how to prioritize risk analysis to proactively address risks to privacy.

    Topics Covered:

    • Heidi’s journey into privacy law and advocacy for privacy by design and default
    • How the FTC brings enforcement actions, the effect of their settlements, and why privacy engineers should pay closer attention
    • Case 1: FTC v. InMarket Media - Heidi explains the implication of the decision: where data that are linked to a mobile advertising identifier (MAID) or an individual's home are not considered de-identified
    • Case 2: FTC v. X-Mode Social / OutLogic - Heidi explains the implication of the decision, focused on: affirmative express consent for location data collection; definition of a 'data product assessment' and audit programs; and data retention & deletion requirements
    • Case 3: FTC v. Avast - Heidi explains the implication of the decision: 'browsing data' is considered 'sensitive data'
    • Case 4: The People (CA) v. DoorDash - Heidi explains the implications of the decision, based on CalOPPA: where companies that share personal data with one another as part of a 'marketing cooperative' are, in fact, selling data
    • Heidi discusses recent State Enforcement Sweeps for privacy, specifically in Colorado and Connecticut, and clarity around breach reporting timelines
    • The need to prioritize independent third-party audits for privacy
    • Case 5: FTC v. Kroger - Heidi explains why the FTC's blocking of Kroger's merger with Albertsons was based on antitrust and privacy harms given the sheer amount of personal data that they process
    • Tools and resources for keeping up with FTC cases and connecting with your privacy community

    Guest Info:

    • Follow Heidi on LinkedIn
    • Read (book): 'Means of Control: How the Hidden Alliance of Tech and Government is Creating a New American Surveillance State'




    Privado.ai
    Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

    TRU Staffing Partners
    Top privacy talent - when you need it, where you need it.

    Shifting Privacy Left Media
    Where privacy engineers gather, share, & learn


    1 hr 16 min