• 104 Posts
  • 50 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • This is really great feedback, thank you for commenting. Don’t worry, I’m not at all offended. I’m glad you told me; I can absolutely go back to handwriting each post. I honestly prefer doing it that way. Sometimes I can’t tell what everyone wants to hear (until they tell me), so I try new things. Sometimes experiments fail, and that’s okay!

    If there’s anything in particular you want to see, let me know! I’d be more than happy to shed light on a specific subject. In the meantime, we’ll go back to more curated content. Those posts take a lot of time to write, but it seems it’s worth the effort. Thanks again for letting me know, I really do appreciate the feedback. Don’t hesitate to call me out if you feel I’ve strayed from the path. This community is as much yours (all of you subscribed) as it is mine.


  • All is well. My gut instinct was right. I missed the verification email from my (now) decommissioned domain. It has been updated, admins were very helpful in this regard.

    This seems like a relevant time to mention that I have plans to host our own Lemmy instance, geared towards FOSAI, machine learning, deep learning, open-source education, and other subprojects I’m excited to share with everyone (hopefully) later this year. Time is a fickle thing, so I can’t make any hard commitments until I figure a few things out - just know there is more to fosai than this community. All will be revealed in due time.

    Nothing about this community will change. If anything, I see it growing to become a friendly gateway for others who share similar interests and want to dive further into the material.

    If you’ve gotten this far, thank you for being a part of !fosai@lemmy.world. It means a lot to me that any of you would give these words attention.

    Whatever it is, I hope you find what you’re looking for. Just know I’m only getting started. You are all early to the party. So much more has yet to be explored, so much more has yet to be seen.

    The future is now. The future is bright. The future is ________.

    Zzzzzzzz


  • What is your vim setup for Python? I need a better dev setup for Python. PyCharm and VS Code have too much BS in the background, and I am never letting their infinitely long list of network connections through my whitelist firewall. I started down that path once; never again. I know about VSCodium, and tried it, but all the documentation and examples I came across only work with the proprietary junk. Geany is better than that nonsense. I used Thonny with MicroPython at one point, but didn’t get very far with that. I tried the GNOME Builder IDE recently. It has a vim-like exit troll: you can’t save files in Builder, and the instructions to enable saving call for modifying a configuration file on the host while giving absolutely no info about where the file is located. I need a solid IDE that isn’t dicking around with network/telemetry or configured by the bridge troll from Monty Python’s Holy Grail.

    I am usually just running the script I’m working on post-editor in whatever command-line interface I find myself in. That could be zsh, bash, or something random I found that week. If I have the time, I like to set up zsh, or Oh My Zsh depending on my OS, paired with Powerlevel10k and custom color schemes.

    For Windows, I usually set something like this up:

    For Mac or Linux (Ubuntu) I like to use vim and/or tmux + rectangle.

    As a practice, I try as many new editors as I can, week by week or month by month. It keeps me on my toes, but when I’m looking for a stable experience I typically default to VS Code behind my firewall. I feel your pain with the allowlisting, but it’s worth the effort when I have something I’m working on and want to take my time with it. Otherwise, I’ve hopped between some of these. Check them out. They might not be for you, but they’re fun to try:
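    For the zsh setup mentioned above, a minimal install sketch might look like the following (the URLs are the projects’ published install paths; verify them yourself before piping anything into a shell):

```shell
# Install Oh My Zsh (official install script)
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

# Clone the Powerlevel10k theme into Oh My Zsh's custom themes directory
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \
  "${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k"

# Then set the theme in ~/.zshrc and restart the shell:
#   ZSH_THEME="powerlevel10k/powerlevel10k"
```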


    What is a good starting point for practical CLI LLMs? I need something useful, more than some toolchain project. I’ve gone down this rabbit hole far too many times with embedded hardware. I like the idea of trying something that is not an AVR, but by the time I get the toolchains set up and deal with all the proprietary BS, I’m already burned out on the project. In this space, I like to play around in the code with some small objective, but that is the full extent of my capabilities most of the time. Like I just spent most of today confused as hell by how a Python tempfile works before I realized the temporary file only lives within its own scope… Not my brightest moment.
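    For reference, the tempfile gotcha described above comes from the file being deleted as soon as its handle or `with` block closes, so the path is only valid inside that scope. A minimal sketch:

```python
import os
import tempfile

# NamedTemporaryFile (with delete=True, the default) removes the file
# as soon as its handle or `with` block closes, so the path is only
# valid inside that scope.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=True) as tmp:
    tmp.write("hello")
    tmp.flush()
    path = tmp.name
    assert os.path.exists(path)  # valid while the block is open

# Outside the with block, the file is already gone.
assert not os.path.exists(path)
```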

    What sort of toolchain project were you exploring? I’m curious to hear about that. In all honesty, the reason I have so many GitHub stars is that a) I am a curious person in general, and b) I’ve been looking for practical and pragmatic use cases for LLMs within my own life too. This has proven more difficult than I initially thought, given the rapid development of the space and the many obstacles you have to overcome between design, infrastructure, and model deployment.

    That being said, I recently came across Cohere, who have an SDK & API for calling their ‘command’ models. Unfortunately, the models are proprietary, but they have a few projects on GitHub that are interesting to explore. In my experience, local LLMs aren’t quite at the level of production-grade deployment people expect from something with the quality of GPT-4 (or GPT-3.5). You gain data privacy, but you compromise on performance. What I like about Cohere is that they focus on bringing the models to you so that data can remain private, with all of the benefits of the API and hosting capabilities of a cloud-based LLM.

    For anyone starting a business in AI - be it automation agencies, consulting services, or integration engineering - I think this is important to consider, at least for the enterprise and commercial sectors.

    Home projects? Well, that’s another story entirely. I’ll take the performance tradeoff to run a creative or functional model on my own hardware, keeping my network private and everything 100% local.

    A fun project I’ve been exploring is deploying your own locally hosted inference cloud API, which you could call from any CLI you’re developing on while connected to your private network. This way, you get an OpenAI-like API you can tinker with, while hot-swapping models on your inference platform to test different capabilities.
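    As a sketch of what calling such a local API might look like - the endpoint address, port, and model name below are placeholders that depend entirely on which inference server you run (LocalAI, text-generation-webui, llama.cpp’s server, etc.), not real defaults:

```python
import json
import urllib.request

# Placeholder endpoint for a hypothetical local inference server
# exposing an OpenAI-style completions route on your private network.
API_URL = "http://192.168.1.50:8080/v1/completions"

def build_payload(prompt: str, model: str = "local-model", max_tokens: int = 64) -> dict:
    """Assemble an OpenAI-style completion request body."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def complete(prompt: str) -> str:
    """POST the prompt to the local server and return the generated text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

    Because the request shape mirrors the OpenAI API, swapping the model behind the endpoint doesn’t require changing any client code.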

    At this point, you are only limited by the power you can pump into your inference cloud. A colleague of mine has a server with 1 TB of RAM, 200+ CPU cores, and four GPUs that we’re setting up with passthrough to pool the available VRAM. We’re hoping to comfortably run 40B GPTQ or high-parameter GGML models on this home server rig.

    Assuming you get a private LLM cloud working at home, you can do all sorts of things. You can pass documents through something like LlamaIndex or LangChain, turning personal notes or home information into semantically searchable knowledge. This would be available from any CLI on your network, perhaps through something like LocalAI.
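    The core idea behind that document pipeline can be shown without any of those libraries. Real setups use embedding models rather than the toy bag-of-words vectors below, but the retrieval step is the same: vectorize your notes, then return the one most similar to the query:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query: str, notes: list[str]) -> str:
    """Return the note most similar to the query."""
    qv = vectorize(query)
    return max(notes, key=lambda n: cosine(qv, vectorize(n)))

notes = [
    "the router password is on the fridge",
    "water the plants every tuesday",
    "backup drive is in the office closet",
]
print(best_match("where is the backup drive", notes))
# → backup drive is in the office closet
```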

    These are really big ideas, some of which have taken me months to put together and test - but they’ve been really exciting to see actually work in small ways that feel fun and futuristic. The only problem is that so many of these libraries are developing so rapidly that projects frequently break after a simple dependency change, or stall on a compatibility issue with a new library that isn’t fully documented or supported yet.

    I don’t know if that answers your question(s), but I’m around if you want to ask about anything else!


  • Hey thanks for sharing your post. While I am somewhat concerned about laws and regulations behind this innovative tech, I think we’re a bit ahead of the curve here and don’t have any real or immediate threat on the horizon. At least for now…

    FOSAI is an idea that no one can take from me, from you, from us. Much like FOSS, it’s a principle as much as it is a technology. I will advocate for this in the light of optimism and hope as much as I can in whatever theater this technology presents itself to me. At the moment, that’s here, but if that changes - I will be sure to bring this torch with me wherever I find myself.

    Don’t get me wrong, the last thing I want is to jump through legal hoops and hurdles to deploy an open-source model - but Congress and regulators move so slowly that I have a strong feeling many of us will be able to do exactly what we want without as much oversight as hearings like this might suggest.

    All the more reason to get involved with the tech now!

    A good example of this is how Congress has done very little to actually address digital piracy, or the rampant depression and loneliness that came with the advent of social media. If they can’t manage to regulate ordinary software, I have little worry they’ll seriously restrict people like us.

    In my opinion, there’s no ‘going back’ to a pre-AI/LLM world. You cannot control this growth. The only way is forward, with each and every one of us empowered by this tech - building a brighter tomorrow because we finally have the ability and know-how to close the gap between our social disparities.

    Remember, apes together strong!

    Jokes aside, we are in the calm before the storm of innovation that I believe this next decade will bring. Let’s hope we can have our way for quite some time, without restrictions from out-of-touch governments!

    Momentum, growth, and innovation are our allies in this.


  • Excellent resource. Thanks for sharing! Appreciate the time you put aside to make this for everyone.

    These sorts of posts typically take a lot of time for me to make. As of late, I find myself with less and less time as I dive further into my development projects (more educational resources I plan on debuting here when they’re ready). But you are absolutely right: the only way to properly grow Lemmy is to keep putting out quality content people want to see.

    For our community, it looks like that’s a lot of learning and technical resources like this. Or whatever is missing and locked out of the other AI/LLM communities between Lemmy and Reddit (and anywhere else on the web really).

    If it’s alright with you, I’m going to pin this for the rest of the community and add it to our sidebar!