• A_Random_Idiot@lemmy.world · 1 year ago

I watched the Gamers Nexus video on this… and I honestly don't see the point? And I don't mean just ray reconstruction, I mean everything to do with upscaling like DLSS and shit.

All this shit just seems like emergency stopgaps for panicking companies that can't fulfill the promise of 4k gaming that they made years ago. So instead of getting 4k gaming, you get 1k gaming, stretched to 4k, and a shittier look because of it.

Like, call me crazy, I'd rather play a game that's sharp and awesome at 1080p than use tons of overhead to stretch and filter it to 4k.

And that's not some anti-Nvidia stance. I feel the same way about AMD's FSR and shit too.

    • stratoscaster@lemmy.zip · 1 year ago

      Well, the main benefit is that you can reduce load on the main CUDA cores and redirect some of that processing power to the tensor cores. This is why, with some games, you get insane framerate boosts by turning on DLSS.

      The better this technology gets, the lower the hardware requirements for high-end graphics. Even the jump from DLSS 1.0 to DLSS 2.0 was crazy, and now DLSS no longer has to be trained on a game-by-game basis.

      There's only so much you can do in terms of optimization on a budget. A lot of games offer poor performance which can, as you said, be stopgapped by these advanced technologies. Most of the time the DLSS upscaling to 2k or 4k is nearly identical to raw 2k and 4k (unless you're really looking for it). Ghostrunner is a great example of this.
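      A rough back-of-the-envelope on why rendering at a lower internal resolution and upscaling boosts framerate, assuming shading cost scales roughly with pixel count (the "Quality mode" internal resolution and the 40 fps baseline here are illustrative assumptions, not NVIDIA specs):

      ```python
      # Toy estimate: shading cost assumed ~linear in pixels rendered.
      def pixels(w, h):
          return w * h

      native_4k = pixels(3840, 2160)   # full 4K output
      internal = pixels(2560, 1440)    # assumed ~1440p internal render

      # If a GPU manages 40 fps shading full 4K, rendering 2.25x fewer
      # pixels frees proportional frame time (ignoring upscale overhead).
      fps_native = 40
      fps_upscaled = fps_native * native_4k / internal
      print(round(fps_upscaled, 1))  # → 90.0
      ```

      The real gain is smaller because the upscaling pass itself costs a few milliseconds, but the pixel-count ratio is where the headline framerate boosts come from.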

    • kadu@lemmy.world · 1 year ago

      You would be correct if this tech existed 15 years ago.

      Nowadays all games use deferred rendering, which means they must have a temporal reconstruction step for several graphical elements (foliage, volumetrics, transparencies) and DLSS makes image quality way better for all of these. In fact, given the choice, I’ll take DLSS over native on any modern game regardless of any performance benefit.
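      The core of that temporal reconstruction step can be sketched in a few lines. This is a minimal illustration of TAA-style accumulation (real engines also reproject the history buffer with motion vectors and clamp it to avoid ghosting, and DLSS replaces the blend heuristic with a learned model), not DLSS itself:

      ```python
      import numpy as np

      def accumulate(history, current, alpha=0.1):
          # Blend a small fraction of the new (noisy/aliased) frame into
          # the accumulated history each frame; noise averages out over time.
          return (1 - alpha) * history + alpha * current

      # Simulate 30 frames of a constant signal (all ones) plus per-frame noise.
      history = np.zeros((4, 4))
      for _ in range(30):
          noisy_frame = np.ones((4, 4)) + np.random.randn(4, 4) * 0.2
          history = accumulate(history, noisy_frame)
      # history converges toward the true signal with much less noise
      # than any single frame.
      ```

      The catch is that transparencies, foliage, and volumetrics only resolve cleanly through this kind of accumulation, which is why the quality of the reconstruction algorithm matters even at "native" resolution.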

      • A_Random_Idiot@lemmy.world · 1 year ago

        I’m sorry, but it stretches credulity to claim that an upscaled image has better quality than the same thing at a native res.

        • kadu@lemmy.world · 1 year ago

          There are quite a few tests online, some blind, some not.

          As I explained, there’s no such thing as a normal native image anymore. Your “native” resolution game is doing temporal reconstruction whenever a transparent or translucent object exists, and guess what, DLSS is significantly better at it than any other shader-based algorithm. You can test it by yourself, too.