Millions of articles from The New York Times were used to train chatbots that now compete with it, the lawsuit said.

  • Blue_Morpho@lemmy.world · 11 months ago

    It certainly seems illegal, but if it is, then every search engine is too. They do the same thing: search engines copy everything to their internal servers, index it, then sell access to that copyrighted data (via ads and other indirect revenue generators).
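
    Roughly, the pipeline being described is: crawl pages onto the engine's own servers, build an inverted index over the copied text, and answer queries with links back to the originals. Here is a toy sketch of that flow; the URLs and page text are made up, and no real engine is anywhere near this simple.

    ```python
    # Toy sketch (not any real engine's code): pages are copied onto the
    # engine's servers, indexed, and queries are answered with links back
    # to the original pages.
    from collections import defaultdict

    # Hypothetical crawled pages: URL -> full text stored on the engine's servers.
    crawled_pages = {
        "https://example.com/article-1": "openai sued over training data",
        "https://example.com/article-2": "search engines index copyrighted pages",
    }

    inverted_index = defaultdict(set)
    for url, text in crawled_pages.items():
        for term in text.lower().split():
            inverted_index[term].add(url)

    def search(query: str) -> set[str]:
        """Return links to the original documents containing every query term."""
        results = set(crawled_pages)
        for term in query.lower().split():
            results &= inverted_index.get(term, set())
        return results

    print(search("copyrighted pages"))  # -> {'https://example.com/article-2'}
    ```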

    • spaduf@slrpnk.net · 11 months ago

      Definitely not. Search engines point you to the original and aren’t by any means selling access. That is, the resources are accessible without using a search engine. LLMs are different because they fold the inputs into the final product in a way that makes accessing the original material impossible. What’s more, LLMs can fully reproduce copyrighted works and will try to pass them off as their own work.
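
      One way to picture that "folding" is that training compresses the corpus into model parameters, so what ships is aggregate statistics rather than attributed, retrievable documents. A deliberately tiny, hypothetical sketch, using bigram counts in place of neural weights:

      ```python
      # Toy sketch of "folding" training text into parameters: after training,
      # all that remains is a table of statistics, with no record of which
      # source document any statistic came from. (Real LLMs learn billions of
      # continuous weights, not a count table.)
      from collections import Counter, defaultdict

      # Hypothetical training corpus standing in for scraped articles.
      corpus = [
          "the court heard the case today",
          "the times filed the lawsuit today",
      ]

      bigram_counts = defaultdict(Counter)
      for doc in corpus:
          tokens = doc.split()
          for prev, nxt in zip(tokens, tokens[1:]):
              bigram_counts[prev][nxt] += 1  # provenance of each pair is discarded

      # The "model" that gets shipped: just the aggregated counts.
      print(dict(bigram_counts["the"]))
      # -> {'court': 1, 'case': 1, 'times': 1, 'lawsuit': 1}
      ```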

      • Blue_Morpho@lemmy.world · 11 months ago

        Search engines point you to the original

        That seems to be the only missing part. OpenAI should provide a list of the links used to generate its response.

        That is, the resources are accessible without using a search engine.

        I don’t understand what you mean. The resources are accessible whether you use a dumb or a smart parser for your search.

        What’s more, LLMs can fully reproduce copyrighted works

        Google has entire copyrighted works copied on its servers. That’s how you can query a phrase and get a reply back. They are selling the links to the copyrighted work. If Google had a bug in its search engine UI like OpenAI’s, you could get that copyrighted data from Google’s servers. Google has a “preview page” feature that gives you a page of copyrighted material without clicking the link. Then there was the Google Books lawsuit, which Google won, where several pages of copyrighted books are shown.

        • spaduf@slrpnk.net · 11 months ago

          Your first point is probably where we’re headed, but it still requires a change to how these models are built. There’s absolutely nothing wrong with a RAG-focused implementation, but those methods are not well developed enough for there to be turnkey solutions. The issue is still that the underlying model is fairly dependent on works that they do not own to achieve the performance standards that have become more or less a requirement for these sorts of products.
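
          For what it’s worth, the RAG pattern is also what would make the “list of links” idea upthread workable: retrieve documents at query time, generate an answer grounded in them, and return the sources alongside it. A rough, made-up sketch of that shape (the document set, the scoring, and the stubbed generation step are all placeholders):

          ```python
          # Sketch of retrieval-augmented generation with citations: retrieve
          # relevant documents, ground the answer in them, return the links.
          from dataclasses import dataclass

          @dataclass
          class Doc:
              url: str
              text: str

          # Hypothetical retrievable documents (stand-ins, not real content).
          docs = [
              Doc("https://example.com/suit", "the times sued openai over training data"),
              Doc("https://example.com/fair-use", "search engine indexing ruled fair use"),
          ]

          def retrieve(query: str, k: int = 2) -> list[Doc]:
              """Rank documents by crude term overlap with the query."""
              q_terms = set(query.lower().split())
              return sorted(
                  docs,
                  key=lambda d: len(q_terms & set(d.text.split())),
                  reverse=True,
              )[:k]

          def answer_with_citations(query: str) -> dict:
              hits = retrieve(query)
              # A real system would call the language model here with the
              # retrieved passages as context; that call is stubbed out.
              answer = f"[answer grounded in {len(hits)} retrieved passages]"
              return {"answer": answer, "sources": [d.url for d in hits]}

          print(answer_with_citations("who sued openai"))
          ```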

          With regard to your second point, it’s worth considering how paywalls will factor in. The Times intends to argue that these models can be used to bypass its paywall, something Google does not do.

          Your third point is wrong in very much the same way. These models do not have a built-in reference system under the hood and so cannot point you to the original source. Existing implementations specifically do not attempt to do this (there are, of course, systems that use LLMs to summarize a query over a dataset, and that’s fine). That is, the models themselves do not explicitly store any information about the original work.

          The fundamental distinction between the two is that Google does a basic amount of due diligence to keep its usage within the bounds of what it feels it can argue is fair use. OpenAI has so far largely chosen to ignore that problem.

          • Blue_Morpho@lemmy.world · 11 months ago

            The Times intends to argue that these models can be used to bypass its paywall, something Google does not do.

            The Google preview feature bypasses paywalls. Google Books bypasses paywalls. Google was sued and won.

            • spaduf@slrpnk.net · 11 months ago

              Most likely the Times could win a case on the first point. Worth noting, Google also respects robots.txt, so if the Times wanted, it could revoke access, and I imagine not doing so would be considered something of an implicit agreement to its usage. OpenAI famously does not respect robots.txt.
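
              For context, robots.txt is just a plain-text file of crawl rules at a site’s root; a crawler that respects it checks those rules before fetching anything. A minimal check using Python’s standard-library parser might look like this (the bot name and article URL are placeholders):

              ```python
              # Check a site's robots.txt before fetching, the way a
              # well-behaved crawler does. Bot name and URL are made up.
              from urllib.robotparser import RobotFileParser

              rp = RobotFileParser()
              rp.set_url("https://www.nytimes.com/robots.txt")
              rp.read()  # fetch and parse the site's crawl rules

              user_agent = "ExampleBot"  # hypothetical crawler name
              url = "https://www.nytimes.com/some-article"

              if rp.can_fetch(user_agent, url):
                  print("allowed: fetch and index the page")
              else:
                  print("disallowed: skip the page entirely")
              ```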

              Google Books previews are allowed primarily on the basis that you can thumb through a book at a physical store without buying it.

              • Blue_Morpho@lemmy.world · 11 months ago

                Google Books previews are allowed primarily on the basis that you can thumb through a book at a physical store without buying it.

                If that’s the standard, then any NYT article that has been printed is up for grabs, because you can read a few pages of a newspaper without paying.