Episodes

  • Voice Actors Sue Lovo after hearing their own voices in a podcast they never recorded
    2024/05/28

     We've got two representative plaintiffs that purport to essentially represent a group of similarly situated people. And in this case, they talk about potentially hundreds, if not thousands, of similarly situated actors. The first question I have is: to what extent do these two actors, pursuant to their SAG agreement, have the right to sue in light of the claims?

    In other words, they're members of a union. They are represented by that union, and typically the union steps in for them to advocate their professional position. This is a claim that, as I read it, would at least give rise to the question: do these two individuals, by virtue of being SAG members, have a right to sue in their individual capacity?

    In today's court filing, we look at two voice actors who have sued Lovo for misappropriation and misuse of their voices. It's fascinating in that the actors actually didn't know it had happened until they heard themselves on a podcast.

    31 min
  • AWS Whistleblower says Amazon is Ignoring its own AI Policies
    2024/05/21

    From Shannon Lietz: For companies that are starting to adopt things like AI, and Copilot, and ChatGPT, and Llama, and whatever other LLM is out there, are they evaluating their policies in relation to how data gets used? My perspective is, if you're going to bring in public data, or you're going to bring in copyrighted materials, note that, because it could be a concern. It could end up in something that gets flagged for future lawsuits.

    From Mark Miller: In today’s episode, Joel, Shannon, and I discuss a case where an employee at AWS blew the whistle, saying the company is ignoring its own policies when it comes to consumption of data for its AI engine. Is being told to ignore company policy illegal? You might be surprised at how our trio comes to grips with the concept.

    23 min
  • The Story Behind the Google Fine by the French Competition Authority
    2024/04/26

    From Joel MacMull: The French competition authority last week said the tech giant Google failed to negotiate fair licensing deals with media outlets and did not tell them it was using their articles to train its chatbot. As a consequence, the authority fined Google about 270 million US dollars. The fine was in euros, but that's roughly what we're dealing with in terms of a conversion rate.

    So it's not nothing, but for one of the largest tech companies in the world, it's certainly not going to make a material difference to their bottom line. But it outlines, I think, some interesting issues, particularly when we contrast it with what's going on now in the United States and some of the litigation we're seeing against OpenAI.

    From Mark Miller: The real issue that I read in the French decision is that, to put it in American terms, Google is not negotiating in good faith.

    Part of the negotiations is who you negotiate with and who pays whom when things are settled. And I think that's the case Google is coming back with, to say: you haven't defined the rules of the game, or else they keep switching, so we don't even know who we're dealing with anymore.

    17 min
  • Air Canada: Chatbot is a legal entity responsible for its own actions
    2024/03/05

    In today’s episode, we talk about how Air Canada tried to defend itself in court by contending that the chatbot on its company site is its own entity, separate from Air Canada. A lot of the “fun” in this case is the absurdity of the defense. However, it’s a good case for thought experiments, thinking about the near-term future of AI and who ultimately is responsible for its output.

    While prepping for this call, I really did dig into the case here because of the absurdity of it in my mind. Joel, give us a brief overview of what the case is and who the complainants and defendants are.

    From Joel MacMull, Lawyer

    What makes this resonate, at least with me, is the fact that we have a very sympathetic plaintiff. A young man buys an airline ticket from Vancouver to Toronto in connection with his deceased grandmother. Prior to buying the ticket, he is on Air Canada's website, having a conversation with its chatbot, and asks about bereavement fares.

    And the sum and substance of the message he receives, again in this conversation he's having with the chatbot, is that within 90 days after making his purchase, he can essentially claim bereavement. And the chatbot, in providing him with that textual response, actually includes a hyperlink to another Air Canada webpage, which has additional terms about bereavement.

    It so happens, however, that the linked page is at odds with what the chatbot is saying: it says, in essence, that bereavement fare has to be paid for, or otherwise dealt with, on the front end. You can't do it after the travel has occurred.

    But from the facts of the case, it doesn't look like this young man did that, instead just relying on the chatbot. Long story short, he travels to Toronto and, within the 90-day window, seeks his reimbursement, consistent with the information he received from the chatbot. And, from what I understand, he engages in some emails with Air Canada, and they say, hey, you know what?

    The statement that you received from the chatbot is erroneous. We'll flag that and get it corrected. But, from what I understand, they refused to provide him with the discount of his bereavement fare, which, according to the opinion, was something to the tune of 600, the difference between the full fare and the bereavement fare that he otherwise would have been entitled to.

    31 min
  • Introducing AI host AILAH: The George Carlin deepfake video case. Is it defensible?
    2024/02/23

    In today's episode, we investigate a case featuring a deepfake of a George Carlin comedy routine. We're also adding a new host, the voice of AILAH, an AI engine that will be giving us real-time responses to questions within the discussion.

    Shannon and I were intrigued by some of the ideas AILAH came up with, which moved our thoughts in a direction different from where we originally intended. Adding AILAH to the mix turned this into a very interesting discussion.

    Mark: "AILAH, give us an overview of the case."

    AILAH: In the George Carlin deepfake case, a video surfaced online purportedly showing the late comedian George Carlin delivering a monologue about the COVID-19 pandemic. However, it was later discovered that the video was a deepfake: a digitally altered video created using artificial intelligence to map Carlin's likeness onto another person's performance.

    This incident highlighted concerns about the rise of deepfake technology and its potential to spread misinformation and deceive viewers. It also underscored the importance of verifying the authenticity of digital content and of raising awareness about the risks associated with deepfakes in the digital age.

    24 min
  • The Legal Confusion between AI and Generative AI in the Courts
    2024/01/30

    You are listening to AI, the Law and You, a show where a lawyer, a layman, and a technologist discuss the current state of AI in court filings and the court's response to those filings. These are not scripted talking points. What you hear are real conversations between Joel MacMull (the lawyer), Shannon Lietz (the technologist), and Mark Miller (the layman). In today's episode, we discuss the confusion in the court system about the differences between AI and Generative AI. We'll start with Joel giving a brief overview of the current state of AI in the courts.

    From Joel MacMull (the lawyer)

    There are now in the neighborhood of a half dozen federal judges who have issued standing orders relating to the use of AI in court filings. There's no outright prohibition barring the use of, I'll say, Generative AI. One of the problems with the standing orders is that at least some of them don't distinguish between Generative AI and AI. That's an issue because there are a lot of non-generative AI tools out there, used every day, that I think are really helpful.

    Putting that aside for a moment, these orders basically say that if you as a lawyer are going to be filing something, you are making a representation that, to the extent you used any AI tool or Generative AI tool, you vetted it. That's another distinction.

    Some standing orders insist that the filer vet the sources. Others simply say that the material has been vetted, meaning, I guess, implicitly, that you could kick that over to someone else to do. But the bottom line is some courts have said, "If you're going to use these materials, you're going to do so with the expectation that you have vetted them or that they have been vetted." Meaning that you're not going to get hallucinations. We're not going to get some of those false citations that we've talked about a few times: the Schwartz case in the summer, and most recently the issue with Michael Cohen serving up to his lawyer a series of really specious citations.

    22 min
  • AI Copyright Law for Non-Humans, with Joel MacMull, Shannon Lietz, and Mark Miller
    2024/01/16

    In today's episode, we examine the case of Stephen Thaler, who tried to obtain copyright protection for a piece of artwork generated by his instructions to an AI creation engine. We'll start with Joel's overview of the case.

    The Thaler case is interesting for a couple of reasons. One is obviously that it deals with AI, but it is also an extension of existing legal principles. The long and short of it is that Stephen Thaler applied for a copyright with the Copyright Office. He indicated that he was the claimant, but that the author was essentially his creativity machine, some code that he developed in an effort to create an image. The Copyright Office rejected his application on the grounds that, at least as applied for, there did not appear to be any human authorship.

    And, as backstory, one of the requirements of the Copyright Office, as recently as this past February, is that human authorship is necessary for subject matter to be amenable to copyright in the United States.

    24 min
  • Who is Culpable when Michael Cohen Feeds Bogus Citations to his Legal Counsel?
    2024/01/09

    In today’s episode, we examine the case against Michael Cohen, former Trump legal advisor, whose own counsel was exposed in a New York Times article on December 29, 2023, for using non-existent legal citations, generated by Google Bard, in a court filing. The interesting twist to the story is that Cohen himself provided those citations to his counsel. I’ll let lawyer Joel MacMull explain the details before technologist Shannon Lietz and I jump in to add our thoughts on the case.

    Background: Michael Cohen gave his then-lawyer some fictitious citations that he had identified from Google Bard, which of course were non-existent. We saw this issue in the summer when, and this is, I think, entirely coincidental, a plaintiff's lawyer by the name of Schwartz got in trouble with Judge Castel in the Southern District of New York for serving up to the court essentially fictitious citations.

    25 min