• Lushed_Lungfish@lemmy.ca · ↑25 · 20 hours ago

    Um, human history has repeatedly demonstrated that when a new technology emerges, the two highest priorities are:

    1. How can we kill things with this?
    2. How can we bone with this?
    • Echo Dot@feddit.uk · ↑2 · 1 hour ago

      Right, but a chatbot is light years away from artificial general intelligence. And let’s be honest, if we want to go the pornography route then that’s going to need robots, which would need general intelligence. As would, in fact, killer drones.

      So yeah, scepticism is highly warranted

  • NutWrench@lemmy.world · ↑14 ↓1 · 20 hours ago

    If you’ve ever wondered why porn sites use pictures of cars, buses, stop signs, traffic lights, bicycles and sidewalks in their captchas, it’s because they’re using the data to train car-driving AIs to recognize those patterns.

    This is not what an imminent breakthrough in cancer research looks like.

      • NotANumber@lemmy.dbzer0.com · ↑8 ↓1 · 19 hours ago

        Google reCAPTCHA? They literally talk about this publicly. It’s in their mission statement or whatever. It’s used to train other kinds of models too.

        • nialv7@lemmy.world · ↑5 ↓1 · 9 hours ago

          They were. They haven’t been using reCAPTCHAs to collect training data for years now.

        • RememberTheApollo_@lemmy.world · ↑2 ↓10 · 18 hours ago

          Y’know, it’s bullshit that a) you seem to expect this to be common knowledge, as if everyone is supposed to have an archive of internet minutiae saved in their heads or have read and remembered any such info at all…

          And b) you chose to downvote and pretty much just said LMGTFY without even the sarcastically provided results instead of backing up your claim. It’s basic courtesy to provide a source for claims instead of downvoting like it’s some kind of affront to your ego that someone wants info on your claim.

          • NotANumber@lemmy.dbzer0.com · ↑4 ↓1 · 18 hours ago

            It’s not even my claim you’re talking about, jackass. Read the usernames. If you’ve fallen down the rabbit hole that is Lemmy, you should have been around long enough to know about reCAPTCHA. If not, it’s one DuckDuckGo search away. In fact you could just click the link on the reCAPTCHA itself that explains how they use the data for training. Hardly arcane knowledge.

            Your comment to me read like Sealioning.

            • RememberTheApollo_@lemmy.world · ↑1 ↓8 · 17 hours ago

              Ah, that makes it so much better. My bad for you jumping into an argument randomly? You’re not improving my view of the shitty attitude here when you double down on “you should have known.”

  • humanspiral@lemmy.ca · ↑7 ↓1 · 20 hours ago

    FYI, using OpenAI/ChatGPT is expensive. Programming it to program users into dependency on its “friendship” gets them to pay for more tokens, and then why not blackmail them or coerce/honeypot them into espionage for the empire. If you don’t understand yet that OpenAI is an arm of the Trump/US military, among its pie-in-the-sky promises is $35B for datacenters in Argentina.

    • Echo Dot@feddit.uk · ↑1 · 1 hour ago

      What espionage is an AI simp going to be able to conduct?

      I’m pretty sure this is just them flailing around not being able to come up with anything meaningful so they’re going this route so they have some profit. I don’t think a conspiracy beyond that is required.

  • BilSabab@lemmy.world · ↑4 · 20 hours ago

    startups hyping shit up to get the investors drooling is one of the most despicable things a man can observe.

    • Soup@lemmy.world · ↑1 · 5 hours ago

      The thing that makes it actually bad is that they’re taking advantage of the mentally handicapped (investors). That, and that said investors have millions to toss at nonsense while so many people are lucky to have pennies to toss at such luxuries as “food”.

      Honestly though I don’t think taking advantage of evil people, who swear they deserve their millions because they’re definitely super smart, is really anything I care that much about.

  • TankovayaDiviziya@lemmy.world · ↑12 · 1 day ago

    We are closer to making horny chatbots than a superintelligence figuring out a cure for cancer.

    Actually, if the latter happens first, would that super AI win a Nobel prize?

    • percent@infosec.pub · ↑7 · 1 day ago

      It would probably go to whoever uses it to find the cure… And to none of the authors who wrote the data that it was trained on

      • Echo Dot@feddit.uk · ↑5 · 1 day ago

        That’s how the Nobel prize always works. The prize goes to whoever manages to cross the finishing line, not to the thousands of scientists before who conducted the preliminary research.

      • Dragonstaff@leminal.space · ↑3 · 20 hours ago

        This year has just been a constant stream of examples of why capitalism is stupid. Machine learning has a lot of utility in medical research. Imagine if it were deployed in such a way as to benefit society instead of to maximize techbros’ profit.

  • kadu@scribe.disroot.org · ↑56 ↓2 · 2 days ago

    There’s not a single world where LLMs cure cancer, even if we decided to give the entirety of our energy output and water to a massive server using every GPU ever made to crunch away for months.

    • HereIAm@lemmy.world · ↑29 · 1 day ago

      Not strictly LLMs, but neural nets are really good at protein folding, something that very much directly helps in understanding cancer, among other things. I know an answer doesn’t magically pop out, but it’s important to recognise the use cases where NNs actually work well.

      • merc@sh.itjust.works · ↑8 ↓2 · 1 day ago

        I’m trying to guess what industries might do well if the AI bubble does burst. I imagine there will be huge AI datacenters filled with so-called “GPUs” that can no longer even do graphics. They don’t even do floating point calculations anymore, and I’ve heard their integer matrix calculations are lossy. So, basically useless for almost everything other than AI.

        One of the few industries that I think might benefit is pharmaceuticals. I think maybe these GPUs can still do protein folding. If so, the pharma industry might suddenly have access to AI resources at pennies on the dollar.

        • MotoAsh@piefed.social · ↑6 · 1 day ago

          Integer calculations are “lossy” because they’re integers; there is nothing extra there. Those GPUs have plenty of uses.

          • merc@sh.itjust.works · ↑1 ↓1 · 5 hours ago

            I don’t know too much about it, but from the people that do, these things are ultra specialized and essentially worthless for anything other than AI type work:

            anything post-Volta is literally worse than worthless for any workload that isn’t lossy low-precision matrix bullshit. H200’s can’t achieve the claimed 30TF at FP64, which is a less than 5% gain over the H100. FP32 gains are similarly abysmal. The B100 and B200? <30TF FP64.

            Contrast with AMD Instinct MI200 @ 22TF FP64, and MI325X at 81.72TF for both FP32 and FP64. But 653.7TF for FP16 lossy matrix. More usable by far, but still BAD numbers. VERY bad.

            https://weird.autos/@rootwyrm/115361368946190474

            • MotoAsh@piefed.social · ↑1 · 5 hours ago

              AI isn’t even the first or the twentieth use case for those operations.

              All the “FP” quotes are about floating-point precision, which matters more for training and finely detailed models, especially FP64. Integer-based matrix math comes up plenty often in optimized cases, which are becoming more and more the norm, especially with China’s research on shrinking models while retaining accuracy metrics.
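              A toy sketch (plain NumPy, purely illustrative, not any real GPU kernel) of the point these two comments are circling: the integer matmul itself is exact; the only loss comes from the quantization step that rounds floats down to int8-range values first.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4)).astype(np.float32)
b = rng.normal(size=(4, 4)).astype(np.float32)

# Symmetric quantization: scale each matrix so its largest value maps to 127.
scale_a = np.abs(a).max() / 127.0
scale_b = np.abs(b).max() / 127.0
qa = np.round(a / scale_a).astype(np.int32)  # int32 accumulator avoids overflow
qb = np.round(b / scale_b).astype(np.int32)

# The integer matmul introduces no error of its own; the rounding above already did.
exact = a @ b
approx = (qa @ qb).astype(np.float32) * (scale_a * scale_b)

max_err = float(np.abs(exact - approx).max())  # small, and entirely from rounding
```

              Whether that rounding error is acceptable depends on the workload, which is the actual disagreement above: fine for quantized inference, a problem for FP64-hungry scientific computing.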

    • 🍉 Albert 🍉@lemmy.world · ↑39 · 2 days ago

      which fucking sucks, because AI was actually getting good: it could detect tumours, it could figure things out fast, it could recognise images as a tool for the visually impaired…

      But LLMs are none of those things. All they can do is look like text.

      LLMs are an impressive technology, but so far nearly useless and mostly a nuisance.

      • BilSabab@lemmy.world · ↑6 · 20 hours ago

        down in Ukraine we have a dozen or so image analysis projects that can’t catch a break because all investors can think about are either swarm drones (quite understandably) or LLM nothingburgers that burn through money and dissipate every nine months. Meanwhile those image analysis projects manage to progress on what is basically scraps and leftovers.

        • 🍉 Albert 🍉@lemmy.world · ↑4 · 20 hours ago

          the problem is that technical people can understand the value of different AI tools. But try telling an executive with a business major how mind-blowing it is that a program trained on Go and StarCraft can solve protein folding (I studied biology in 2010, and they kept repeating how impossible solving proteins in silico was).

          But a chat bot that tells the executive how smart and special it is?

          That’s the winner.

          • kromem@lemmy.world · ↑5 ↓3 · 1 day ago

            That’s not…

            sigh

            Ok, so just real quick top level…

            Transformers (what LLMs are) build world models from the training data (Google “Othello-GPT” for associated research).

            This happens by needing to combine a lot of different pieces of information together in a coherent way (what’s called the “latent space”).

            This process is medium agnostic. If given text it will do it with text, if given photos it will do it with photos, and if given both it will do it with both and specifically fitting the intersection of both together.

            The “suitcase full of tools” becomes its own integrated tool where each part influences the others. That’s why you can ask a multimodal model for the answer to a text question carved into an apple and get a picture of it.

            There’s a pretty big difference in the UI/UX in code written by multimodal models vs text only models for example, or utility in sharing a photo and saying what needs to be changed.

            The idea that an old school NN would be better at any slightly generalized situation over modern multimodal transformers is… certainly a position. Just not one that seems particularly in touch with reality.

            • i_love_FFT@jlai.lu · ↑4 · 20 hours ago

              The main breakthrough for LLMs happened when they figured out how to tokenize words… The subsequent transformer architecture was already being tested on various data types and struggled compared to similarly advanced CNNs.

              When they figured out word encoding, it created a buzz because transformers could work well with words. They never quite worked as well on images. For that, stable diffusion (a variation on CNNs) has always been better.

              It’s only because of the buzz around LLMs that they tried applying them to other data types, mostly because that’s how they could get funding. By throwing in a disproportionate amount of resources, it works… but it would have been so much more efficient to use different architectures.

              • kromem@lemmy.world · ↑1 · 10 hours ago

                What year are you from? Have you not seen Gemini Flash, ChatGPT 4o, Sora 2, Genie 3, etc?

                Stable Diffusion hasn’t been SotA for over a year now in a field where every few months a new benchmark is set.

                Are you also going to tell me about how we’d be better off using ships for international travel because the Wright brothers seem to be really struggling with their air machine?

                • i_love_FFT@jlai.lu · ↑1 · 10 hours ago

                  Hehe, true! I left the field about 4 years ago when it became obvious that “more GPUs!” was better than any architectural design changes…

                  Most of the image generation made by the products you mention is based on a mix of LLMs (for processing user inputs) and some other modality for other media types. Last time I checked, ChatGPT was capable of handling images only because it offloaded the image processing to a branch of the architecture that was not a transformer, or at least not a classical transformer. They did have to graft CNN parts onto the LLM to make progress.

                  Maybe in the last 4 years they reorganised it to completely remove CNN blocks, but I think people call these models “LLMs” only as a shorthand for the core of the architecture.

                  Again, you said that a new benchmark is set every few months, but considering they’re just consuming more power and water, it’s quite boring, and I’d argue it’s not really progress in the academic/theoretical sense. That attitude is exactly why I don’t work with NNs anymore.

    • Quetzalcutlass@lemmy.world · ↑7 · 2 days ago

      And it’s clear we’re nowhere near achieving true AI, because those chasing it have made no moves to define the rights of an artificial intelligence.

      Which means that either they know they’ll never achieve one by following the current path, or that they’re evil sociopaths who are comfortable enslaving a sentient being for profit.

  • zxqwas@lemmy.world · ↑84 ↓2 · 2 days ago

    Either you genuinely believe you are 18 (24, 36, doesn’t matter) months away from curing cancer or you’re not.

    What would we as outsiders observe if they told their investors that they were 18 months away two years ago and now the cash is running out in 3 months?

    Now I think the current iteration of AI is trying to get to the moon by building a better ladder, but what do I know.

    • agamemnonymous@sh.itjust.works · ↑1 ↓4 · 1 day ago

      The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.

      Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.

      ¹ Logistically, actually, but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.
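      The footnote’s distinction can be sketched numerically (growth rate and ceiling are arbitrary toy values): a logistic curve is indistinguishable from an exponential while it is still far below its carrying capacity, then flattens instead of diverging.

```python
import math

R = 0.5    # growth rate (arbitrary)
CAP = 1e6  # carrying capacity (arbitrary ceiling)

def exponential(t):
    return math.exp(R * t)

def logistic(t):
    # Logistic growth starting at 1, capped at CAP.
    return CAP / (1 + (CAP - 1) * math.exp(-R * t))

# Well below the ceiling, the relative gap between the two stays tiny...
early_gap = max(abs(logistic(t) / exponential(t) - 1) for t in range(10))
# ...but only the logistic curve saturates at the cap.
late = logistic(100)
```

      This is why the two are hard to tell apart from early data alone: the shape of the curve only reveals the ceiling once you are near it.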

      • dreugeworst@lemmy.ml · ↑9 · 1 day ago

        why is it very likely to do that? we have no evidence to believe this is true at all and several decades of slow, plodding ai research that suggests real improvement comes incrementally like in other research areas.

        to me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums

        • agamemnonymous@sh.itjust.works · ↑1 · 11 hours ago

          Because AI can write programs? As it gets better at doing that, it can make AIs that are even better, and so on. Positive feedback loops grow exponentially.

      • Echo Dot@feddit.uk · ↑3 · 20 hours ago

        The problem with that is they can’t actually point to a metric where when the number goes beyond that point we’ll have ASI. I’ve seen graphs where they have a dotted line that says ape intelligence, and then a bit higher up it has a dotted line that says human intelligence. But there’s no meaningful way they can possibly have actually placed human intelligence on a graph of AI complexity, because brains are not AI so they shouldn’t even be on the graph.

        So even if things increase exponentially there’s no way they can possibly know how long until we get AGI.

      • SuperNerd@programming.dev · ↑4 · 1 day ago

        Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers or rockets, and LLMs will not get there.

  • bassad@jlai.lu · ↑17 · 2 days ago

    Oh, that’s why they are restricting “organic” porn: to sell AI porn. Damn.

  • frustrated@lemmy.world · ↑15 ↓6 · 1 day ago

    No money in curing cancer with an LLM. Heaps of money in taking advantage of increasingly alienated and repressed people.

    • Echo Dot@feddit.uk · ↑8 ↓1 · 1 day ago

      There’s loads of money in curing cancer. For one you can sell the cure for cancer to people with cancer.

      • i_love_FFT@jlai.lu · ↑1 · 20 hours ago

        That’s a weird take! It makes much more money sense to sell long-term subscription treatments rather than a one-time cure.

        /s of course

    • Saledovil@sh.itjust.works · ↑12 · 1 day ago

      You could sell the cure for a fortune. Imagine something that can reliably cure late stage cancers. You could charge a million for the treatment, easily.

      • frustrated@lemmy.world · ↑6 · 1 day ago

        Yes, selling the actual cure would be profitable…but an LLM would only ever provide the text for synthesizing it but none of the extensive testing, licensing, or manufacturing, etc… An existing pharmaceutical company would have to believe the LLM and then front the costs for the development, testing, and manufacture…which constitutes a large proportion of the costs of bringing a treatment to market. Burning compute time on that is a waste of resources, especially when fleecing horny losers is available right now. It is just business.

        • BeeegScaaawyCripple@lemmy.world · ↑3 · 1 day ago

          and LLMs hallucinate a lot of shit they “know” nothing about. a big pharma company spending millions of dollars on an LLM hallucination would crack me the fuck up were it not such a serious disease.

          • frustrated@lemmy.world · ↑2 · 1 day ago

            Right, that is why I originally said there is no money in a cancer cure invented by LLM. It’s just not a serious possibility.

    • Valmond@lemmy.world · ↑1 · 1 day ago

      What a weird take; researchers use AI already. Some researchers even research things that, gasp, are not monetiseable right away!

      • frustrated@lemmy.world · ↑1 · 10 hours ago

        I used to work in academic physics, and I currently work in data science. I am deeply familiar with both ends of the subject in question. LLMs are useful research tools because they speed up the reference finding and literature review process, not because they synthesize new information that does not need to be independently verified.

        In the context of medical research, they could absolutely use LLMs to facilitate a literature search. What LLMs cannot do is hand researchers a proposed cure that they could sell to people. You still need to do the leg work of synthesizing the molecules, standardizing the process, industrializing it, patenting it, multiple rounds of testing on increasingly complex animals and eventually people, and then going through the drug approval process with the FDA and others. LLMs speed up the CHEAPEST and EASIEST part of the research process. That is why LLMs will not be handing us the cure for cancer.

    • uberfreeza@lemmy.world · ↑13 · 2 days ago

      You can use AI to fulfill your fantasies! For example: having healthcare (if you’re not American, this joke does not apply)

      • BeeegScaaawyCripple@lemmy.world · ↑3 · 1 day ago

        i’d rather lucid dream i have healthcare my friend. then i can use my care bear stare laser beam to apply vengeance to incompetent healthcare providers and administrators such that they will never know what it is like to satiate their hunger again. then ride a giant flying tardigrade named Hairy Terry off into the sunset.

        LLMs only let me imagine it, not (from my perception) experience it. and remember, no crimes without Hairy Terry on lookout