• Who’s the Bigger Cybersecurity Risk – Microsoft or Open Source?

  • 2024/04/11
  • Duration: 1 hour 11 minutes
  • Podcast

Summary

There’s a whiff of Auld Lang Syne about episode 500 of the Cyberlaw Podcast, since after this it will be going on hiatus for some time and maybe forever. (Okay, there will be an interview with Dmitri Alperovitch about his forthcoming book, but the news commentary is done for now.) Perhaps it’s appropriate, then, for our two lead stories to revive a theme from the 90s – who’s better, Microsoft or Linux? Sadly for both, the current debate is over who’s worse, at least for cybersecurity.

Microsoft’s sins against cybersecurity are laid bare in a report from the Cyber Safety Review Board, Paul Rosenzweig reports. The Board digs into the disastrous compromise of a Microsoft signing key that gave China access to US government email. The language of the report is sober, and all the more devastating because of its restraint. Microsoft seems to have entirely lost the security focus it so famously pivoted to twenty years ago. Getting it back will require renewed attention to security at a time when the company feels compelled to focus relentlessly on building AI into its offerings. The signs for improvement are not good. The only people who come out of the report looking good are the State Department security team, whose mad cyber skillz deserve to be celebrated – not least because they’ve been questioned by the rest of government for decades.

With Microsoft down, you might think open source would be up. Think again, Nick Weaver tells us. The strategic vulnerability of open source, as well as its appeal, is that anyone can contribute code to a project they like. And in the case of the XZ backdoor, somebody did just that. A well-organized, well-financed, and knowledgeable group of hackers cajoled and bullied their way into a contributing role on an open source project that implements various compression algorithms. Once in, they contributed a backdoored feature that used public key cryptography to ensure that only the feature’s authors could exploit it. It was weeks from being in every Linux distro when a Microsoft employee discovered the implant. But the people who almost pulled this off seemed well-practiced and well-resourced. They’ve likely done this before, and will likely do it again, leaving all open source projects facing the same strategic vulnerability.

It wouldn’t be the Cyberlaw Podcast without at least one Baker rant about political correctness. The much-touted bipartisan privacy bill threatening to sweep to enactment in this Congress turns out to be a disaster for anyone who opposes identity politics. To get liberals on board with a modest amount of privacy preemption, I charge, the bill would effectively overturn the Supreme Court’s Harvard admissions decision and impose race, gender, and other quotas on a host of other activities that have avoided them so far. Adam Hickey and I debate the language of the bill. Why would the Republicans who control the House go along with this? I offer two reasons: first, business lobbyists want both preemption and a way to avoid charges of racial discrimination, even if it means relying on quotas; second, maybe Sen. Alan Simpson was right that the Republican Party really is the Stupid Party.

Nick and I turn to a difficult AI story about how Israel is using algorithms to identify and kill even low-level Hamas operatives in their homes. Far more than killer robots, this use of AI in war is likely to sweep the world. Nick is critical of Israel’s approach; I am less so. But there’s no doubt that the story forces a sober assessment of just how personal and how ugly war will soon be.

Paul takes the next story, in which Microsoft serves up leftover “AI gonna steal yer election” tales that are not much different from all the others we’ve heard since 2016 (when straight social media was the villain). The bottom line: China is using AI in social media to advance its interests and probe US weaknesses, but it doesn’t seem to be having much effect.

Nick answers the question, “Will AI companies run out of training data?” with a clear viewpoint: “They already have.” He invokes the Hapsburgs to explain what’s going wrong. We also touch on the likelihood that demand for training data will lead to copyright liability, or that hallucinations will lead to defamation liability. Color me skeptical.

Paul comments on two US quasi-agreements, with the UK and the EU, on AI cooperation. And Adam breaks down the FCC’s burst of initiatives celebrating the arrival of a Democratic majority on the Commission for the first time since President Biden’s inauguration. The Commission is now ready to move out on net neutrality, on regulating cars as oddly shaped phones with benefits, and on SS7 security. Faced with a security researcher who responded to a hacking attack by taking down North Korea’s internet, Adam acknowledges that maybe my advocacy of hacking back wasn’t quite as crazy as he ...