Summary
Data is the fuel that powers AI, but what happens when there is sugar in the proverbial gas tank? On this episode of Information Risk, David Loshin gets into the perils of using AI and large language models (LLMs) when the data is faulty. Socioeconomic pillars like healthcare, real estate, and law are all affected: a lawyer blindly trusted fictitious, AI-generated legal cases; biased machine-learning algorithms undermined COVID-19 diagnostic tools; and Zillow's failed attempt to predict real estate prices led to a massive financial fallout. In all of these examples, data quality is essential. David urges professionals and businesses to exercise due diligence in scrutinizing the data that feeds their AI models to ensure reliability and mitigate information risks.

References:
• "The ChatGPT Lawyer Explains Himself"
• "Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans"
• "The $500mm+ Debacle at Zillow Offers – What Went Wrong with the AI Models?"
• "Zillow to shutter home buying business and lay off 2,000 employees as its big real estate bet falters"

Information Risk is a podcast hosted by Doug Atkinson and featuring David Loshin, a renowned data strategy consultant and director at the University of Maryland. David explores theoretical and real-world examples of 'information risk': the different types of risk that the misuse and misinterpretation of data can pose to organizations.

Contact David Loshin:
LinkedIn
Twitter
knowledge-integrity.com

Powered by Insight Jam