Episodes

  • Saidot CEO Meeri Haataja
    2024/10/31

    In this episode, you’ll hear about Meeri’s incredible career, insights from the recent AI Pact conference she attended, her company’s involvement, and the reality of holding companies accountable to AI governance practices. We discuss how to know if you have an AI problem, what makes third-party generative AI riskier, and so much more! Meeri even shares how she thinks the EU AI Act will impact AI companies and what companies can do to take stock of their risk factors and ensure that they are building responsibly. You don’t want to miss this one, so be sure to tune in now!

    Key Points From This Episode:

    • Insights from the AI Pact conference.
    • The reality of holding AI companies accountable.
    • What inspired her to start Saidot to offer solutions for AI transparency and accountability.
    • How Meeri assesses companies and their organizational culture.
    • What makes generative AI riskier than other forms of machine learning.
    • Reasons that use-related risks are the most common sources of AI risks.
    • Meeri’s thoughts on the impact of the EU AI Act.

    Quotes:

    “It’s best to work with companies who know that they already have a problem.” — @meerihaataja [0:09:58]

    “Third-party risks are way bigger in the context of [generative AI].” — @meerihaataja [0:14:22]

    “Use and use-context-related risks are the major source of risks.” — @meerihaataja [0:17:56]

    “Risk is fine if it’s on an acceptable level. That’s what governance seeks to do.” — @meerihaataja [0:21:17]

    Links Mentioned in Today’s Episode:

    Saidot

    Meeri Haataja on LinkedIn

    Meeri Haataja on Instagram

    Meeri Haataja on X

    How AI Happens

    Sama

    25 min
  • FICO Chief Analytics Officer Dr. Scott Zoldi
    2024/10/18

    In this episode, Dr. Zoldi offers insight into the transformative potential of blockchain for ensuring transparency in AI development, the critical need for explainability over mere predictive power, and how FICO maintains trust in its AI systems through rigorous model development standards. We also delve into the essential integration of data science and software engineering teams, emphasizing that collaboration from the outset is key to operationalizing AI effectively.


    Key Points From This Episode:

    • How Scott integrates his role as an inventor with his duties as FICO CAO.
    • Why he believes that mindshare is an essential leadership quality.
    • What sparked his interest in responsible AI as a physicist.
    • The shifting demographics of those who develop machine learning models.
    • Insight into the use of blockchain to advance responsible AI.
    • How FICO uses blockchain to ensure auditable ML decision-making.
    • Operationalizing AI and the typical mistakes companies make in the process.
    • The value of integrating data science and software engineering teams from the start.
    • A fear-free perspective on what Scott finds so uniquely exciting about AI.

    Quotes:

    “I have to stay ahead of where the industry is moving and plot out the directions for FICO in terms of where AI and machine learning is going – [Being an inventor is critical for] being effective as a chief analytics officer.” — @ScottZoldi [0:01:53]

    “[AI and machine learning] is software like any other type of software. It's just software that learns by itself and, therefore, we need [stricter] levels of control.” — @ScottZoldi [0:23:59]

    “Data scientists and AI scientists need to have partners in software engineering. That's probably the number one reason why [companies fail during the operationalization process].” — @ScottZoldi [0:29:02]

    Links Mentioned in Today’s Episode:

    FICO

    Dr. Scott Zoldi

    Dr. Scott Zoldi on LinkedIn

    Dr. Scott Zoldi on X

    FICO Falcon Fraud Manager

    How AI Happens

    Sama

    34 min
  • Lemurian Labs CEO Jay Dawani
    2024/10/10

    Jay breaks down the critical role of software optimizations and how they drive performance gains in AI, highlighting the importance of reducing inefficiencies in hardware. He also discusses the long-term vision for Lemurian Labs and the broader future of AI, pointing to the potential breakthroughs that could redefine industries and accelerate innovation, plus a whole lot more.

    Key Points From This Episode:

    • Jay’s diverse professional background and his attraction to solving unsolvable problems.
    • How his unfinished business in robotics led him to his current work at Lemurian Labs.
    • What he has learned from being CEO and the biggest obstacles he has had to overcome.
    • Why he believes engineers with a problem-solving mindset can be effective CEOs.
    • Lemurian Labs: making AI computing more efficient, affordable, and environmentally friendly.
    • The critical role of software in increasing AI efficiency.
    • Some of the biggest challenges in programming GPUs.
    • Why better software is needed to optimize the use of hardware.
    • Common inefficiencies in AI development and how to solve them.
    • Reflections on the future of Lemurian Labs and AI more broadly.

    Quotes:

    “Every single problem I've tried to pick up has been one that – most people have considered as being almost impossible. There’s something appealing about that.” — Jay Dawani [0:02:58]

    “No matter how good of an idea you put out into the world, most people don't have the motivation to go and solve it. You have to have an insane amount of belief and optimism that this problem is solvable, regardless of how much time it's going to take.” — Jay Dawani [0:07:14]

    “If the world's just betting on one company, then the amount of compute you can have available is pretty limited. But if there's a lot of different kinds of compute that are slightly optimized with different resources, making them accessible allows us to get there faster.” — Jay Dawani [0:19:36]

    “Basically what we're trying to do [at Lemurian Labs] is make it easy for programmers to get [the best] performance out of any hardware.” — Jay Dawani [0:20:57]

    Links Mentioned in Today’s Episode:

    Jay Dawani on LinkedIn

    Lemurian Labs

    How AI Happens

    Sama

    29 min
  • Intel VP & GM of Strategy & Execution Melissa Evers
    2024/09/30

    Melissa explains the importance of giving developers the choice of working with open source or proprietary options, experimenting with flexible application models, and choosing the size of your model according to the use case you have in mind. Discussing the democratization of technology, we explore common challenges in the context of AI including the potential of generative AI versus the challenge of its implementation, where true innovation lies, and what Melissa is most excited about seeing in the future.

    Key Points From This Episode:

    • An introduction to Melissa Evers, Vice President and General Manager of Strategy and Execution at Intel Corporation.
    • More on the communities she has played a leadership role in.
    • Why open source governance is not an oxymoron and why it is critical.
    • The hard work that goes on behind the scenes in open source communities.
    • What to strive for when building a healthy open source community.
    • Intel’s perspective on the importance of open source and open AI.
    • Enabling developer choices about open source or proprietary options.
    • Growing awareness of building architecture around freedom of choice.
    • Identifying when a model is a bad choice or lacks accuracy.
    • Thinking critically about future-proofing yourself with regard to model choice.
    • Opportunities for large and smaller models.
    • Finding the perfect intersection between value delivery, value creation, and cost.
    • Common challenges in the context of AI, including the potential of generative AI and its implementation.
    • Why there is such a commonality of use cases in the realm of generative AI.
    • Where true innovation and value lie, even when use cases are similar.
    • Examples of creative uses of generative AI: retail, compound AI systems, manufacturing, and more.
    • Understanding that innovation in this area is still in its early development stages.
    • How Wardley Mapping can support an understanding of scale.
    • What she is most excited about for the future of AI: Rapid learning in healthcare.

    Quotes:

    “One of the things that is true about software in general is that the role that open source plays within the ecosystem has dramatically shifted and accelerated technology development at large.” — @melisevers [0:03:02]

    “It’s important for all citizens of the open source community, corporate or not, to understand and own their responsibilities with regard to the hard work of driving the technology forward.” — @melisevers [0:05:18]

    “We believe that innovation is best served when folks have the tools at their disposal on which to innovate.” — @melisevers [0:09:38]

    “I think the focus for open source broadly should be on the elements that are going to be commodified.” — @melisevers [0:25:04]

    Links Mentioned in Today’s Episode:

    Melissa Evers on LinkedIn

    Melissa Evers on X

    Intel Corporation

    35 min
  • Synopsys VP of AI Thomas Andersen
    2024/09/27

    Thomas Andersen, VP of AI and ML at Synopsys, joins us to discuss designing AI chips. Tuning in, you’ll hear all about our guest’s illustrious career, how he became interested in technology, what it was like growing up in East Germany and the state of tech there, and so much more! We delve into Synopsys and the chips they build before discussing his role in building AI algorithms.

    Key Points From This Episode:

    • A warm welcome to today’s guest, Thomas Andersen.
    • How he got into the tech world and his experience growing up in East Germany.
    • The cost of AI compute coming down at the same time that demand is going up.
    • Thomas tells us about Synopsys and what goes into building their chips.
    • Other traditional software companies that are now designing their own AI chips.
    • What Thomas’ role looks like in machine learning and building AI algorithms.
    • How the constantly changing rules of AI chip design continue to create new obstacles.
    • Thomas tells us how they use reinforcement learning in their processes.
    • The different applications for generative AI and why it needs good input data.
    • Thomas’ advice for anyone wanting to get into the world of AI.

    Quotes:

    “It’s not really the technology that makes life great, it’s how you use it, and what you make of it.” — Thomas Andersen [0:07:31]

    “There is, of course, a lot of opportunities to use AI in chip design.” — Thomas Andersen [0:25:39]

    “Be bold, try as many new things [as you can, and] make sure you use the right approach for the right tasks.” — Thomas Andersen [0:40:09]

    Links Mentioned in Today’s Episode:

    Thomas Andersen on LinkedIn

    Synopsys

    How AI Happens

    Sama

    42 min
  • Xactly SVP Engineering Kandarp Desai
    2024/09/24

    Developing AI and generative AI initiatives demands significant investment, and without delivering on customer satisfaction, these costs can be tough to justify. Today, Kandarp Desai, SVP of Engineering and General Manager of Xactly India, joins us to discuss Xactly's AI initiatives and why customer satisfaction remains their top priority.

    Key Points From This Episode:

    • An introduction to Kandarp and his transition from hardware to software.
    • How he became SVP of Engineering and General Manager of Xactly India.
    • His move to Bangalore and the expansion of Xactly’s presence in India.
    • The rapid modernization of India as a key factor in Xactly’s growth strategy.
    • An overview of Xactly’s AI and generative AI initiatives.
    • Insight into the development of Xactly’s AI Copilot.
    • Four key stakeholders served by the Xactly AI Copilot.
    • Xactly Extend, an enterprise platform for building custom apps.
    • Challenges in justifying the ROI of AI initiatives.
    • Why customer satisfaction and business outcomes are essential.
    • How AI is overhyped in the short term and underhyped in the long term.
    • The difficulties in quantifying the value of AI.
    • Kandarp’s career advice to AI practitioners, from taking risks to networking.

    Quotes:

    “[Generative AI] is only useful if it drives higher customer satisfaction. Otherwise, it doesn't matter.” — Kandarp Desai [0:11:36]

    “Justifying the ROI of anything is hard – If you can tie any new invention back to its ROI in customer satisfaction, that can drive an easy sell across an organization.” — Kandarp Desai [0:15:35]

    “The whole AI trend is overhyped in the short term and underhyped long term. [It’s experienced an] oversell recently, and people are still trying to figure it out.” — Kandarp Desai [0:20:48]

    Links Mentioned in Today’s Episode:


    Kandarp Desai on LinkedIn

    Xactly

    How AI Happens

    Sama

    25 min
  • AI Industry Leader Srujana Kaddevarmuth
    2024/09/09

    Srujana is Vice President and Group Director at Walmart’s Machine Learning Center of Excellence and is an experienced and respected AI, machine learning, and data science professional. She has a strong background in developing AI and machine learning models, with expertise in natural language processing, deep learning, and data-driven decision-making. Srujana has worked in various capacities in the tech industry, contributing to advancing AI technologies and their applications in solving complex problems. In our conversation, we unpack the trends shaping AI governance, the importance of consumer data protection, and the role of human-centered AI. We explore why upskilling the workforce is vital, the potential impact AI could have on white-collar jobs, and which roles AI cannot replace. We discuss the interplay between bias and transparency, the role of governments in creating AI development guardrails, and how the regulatory framework has evolved. Join us to learn about the essential considerations of deploying algorithms at scale, striking a balance between latency and accuracy, the pros and cons of generative AI, and more.

    Key Points From This Episode:

    • Srujana breaks down the top concerns surrounding technology and data.
    • Learn how AI can be utilized to drive innovation and economic growth.
    • Navigating the adoption of AI with upskilling and workforce retention.
    • The AI gaps that upskilling should focus on to avoid workforce displacement.
    • Common misconceptions about biases in AI and how they can be mitigated.
    • Why establishing regulations, laws, and policies is vital for ethical AI development.
    • Outline of the nuances of creating an effective worldwide regulatory framework.
    • She explains the challenges and opportunities of deploying algorithms at scale.
    • Hear about the strategies for building architecture that can adapt to future changes.
    • She shares her perspective on generative AI and what its best use cases are.
    • Find out what area of AI Srujana is most excited about.

    Quotes:

    “By deploying [bias] algorithms we may be going ahead and causing some unintended consequences.” — @Srujanadev [0:03:11]

    “I think it is extremely important to have the right regulations and guardrails in place.” — @Srujanadev [0:11:32]

    “Just using generative AI for the sake of it is not necessarily a great idea.” — @Srujanadev [0:25:27]

    “I think there are a lot of applications in terms of how generative AI can be used but not everybody is seeing the return on investment.” — @Srujanadev [0:27:12]

    Links Mentioned in Today’s Episode:

    Srujana Kaddevarmuth

    Srujana Kaddevarmuth on X

    Srujana Kaddevarmuth on LinkedIn

    United Nations Association (UNA) San Francisco

    The World in 2050

    American INSIGHT

    How AI Happens

    Sama

    31 min
  • UPS Sr. Director & Head of Innovation Sunzay Passari
    2024/08/29

    Our guest shares the different kinds of research his team uses for machine learning development before explaining why he is more conservative when it comes to driving generative AI use cases. He even offers some examples of generative AI use cases he feels are worthwhile. We hear about how these changes will benefit all UPS customers and how UPS avoids sharing private and non-compliant information with chatbots. Finally, Sunzay shares some advice for anyone wanting to become a leader in the tech world.

    Key Points From This Episode:

    • Introducing Sunzay Passari to the show and how he landed his current role at UPS.
    • Why Sunzay believes that this huge operation he’s part of will drive transformational change.
    • How AI and machine learning have made their way into UPS over the past few years.
    • The way Sunzay and his team have decided where AI will be most disruptive within UPS.
    • Quantitative and qualitative research and what that looks like for this project.
    • Why Sunzay is conservative when it comes to driving generative AI use cases.
    • Sunzay shares some of the generative use cases that he thinks are worthwhile.
    • The way these new technologies will benefit everyday UPS customers.
    • How they are preventing people from accessing non-compliant data through chatbots.
    • Sunzay passes on some advice for anyone looking to forge their career as a leader in tech.

    Quotes:

    “There’s a lot of complexities in the kind of global operations we are running on a day-to-day basis [at UPS].” — Sunzay Passari [0:04:35]

    “There is no magic wand – so it becomes very important for us to better our resources at the right time in the right initiative.” — Sunzay Passari [0:09:15]

    “Keep learning on a daily basis, keep experimenting and learning, and don’t be afraid of the failures.” — Sunzay Passari [0:22:48]

    Links Mentioned in Today’s Episode:

    Sunzay Passari on LinkedIn

    UPS

    How AI Happens

    Sama

    25 min