I don’t really want companies or anyone else deciding what I’m allowed to see or learn. Are there any AI assistants out there that won’t say “sorry, I can’t talk to you about that” if I mention something modern companies don’t want us to see?

  • SpicyTaint@lemmy.world · 3 hours ago

    If you have a good enough NVIDIA card, probably a 1080 Ti or better, download KoboldCPP and a .gguf model from Hugging Face, and run it locally.
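    A minimal launch script might look like this. The model path, repo id, and filename are placeholders, and the flags shown are the common CUDA ones; check your KoboldCPP build's --help for what it actually supports:

```shell
#!/bin/sh
# Placeholder path -- point this at whatever .gguf you actually downloaded.
MODEL="${MODEL:-$HOME/models/example-13b.Q4_K_M.gguf}"

if [ -f "$MODEL" ]; then
    # Offload as many layers as possible to the GPU via CUDA.
    python koboldcpp.py --model "$MODEL" --usecublas --gpulayers 999 --contextsize 4096
else
    echo "No model at $MODEL -- grab one from Hugging Face first, e.g.:"
    echo "  pip install -U huggingface_hub"
    echo "  huggingface-cli download <repo-id> <file>.gguf --local-dir ~/models"
fi
```

    KoboldCPP then serves a local web UI (by default on port 5001) that nothing external can filter or log.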

    The quality is directly tied to your GPU's VRAM size, since that determines how big a model you can load into it, so don't expect the same results as an LLM running in a data center. For example, I can load a 20 GB .gguf model into a 3090 with 24 GB of VRAM.
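    As a back-of-the-envelope sizing rule (my own rough numbers, nothing official): a quantized model file is roughly parameters × bits-per-weight ÷ 8 bytes, and you want a couple of GB of headroom on top for the KV cache and CUDA context:

```python
def gguf_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough quantized file size in GB: parameters * bits / 8."""
    return params_billions * bits_per_weight / 8

def fits_in_vram(params_billions: float, bits_per_weight: float,
                 vram_gb: float, overhead_gb: float = 2.0) -> bool:
    """Leave headroom for the KV cache and CUDA context (rough guess)."""
    return gguf_size_gb(params_billions, bits_per_weight) + overhead_gb <= vram_gb

# A 33B model at ~4.5 bits/weight is ~18.6 GB -- squeezes into a 24 GB 3090.
print(fits_in_vram(33, 4.5, 24))   # True
# A 70B model at 4 bits is ~35 GB -- no chance without offloading layers to CPU.
print(fits_in_vram(70, 4.0, 24))   # False
```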