Husband, father, kabab lover, history buff, chess fan and software engineer. Believes creating software must resemble art: intuitive creation and joyful discovery.

🌎 linktr.ee/bahmanm

Views are my own.

  • 50 Posts
  • 181 Comments
Joined 1 year ago
Cake day: June 26th, 2023

  • bahmanm@lemmy.ml (OP) to Lemmy@lemmy.ml · [ANN] lemmy-synapse v1.0.0
    8 months ago

    “Announcement”

    It used to be quite common on mailing lists to categorise/tag threads by using subject prefixes such as “ANN”, “HELP”, “BUG” and “RESOLVED”.

    It’s just an old habit but I feel my messages/posts lack some clarity if I don’t do it 😅




  • I didn’t like the capitalised names, so I configured xdg to use all-lowercase names (a sketch of the config is after the list below). That’s why ~/opt fits in pretty nicely.

    You’ve got a point re ~/.local/opt but I personally like the idea of having the important bits right in my home dir. Here’s my layout (which I’m quite used to now after all these years):

    $ ls ~
    bin  
    desktop  
    doc  
    downloads  
    mnt  
    music  
    opt 
    pictures  
    public  
    src  
    templates  
    tmp  
    videos  
    workspace
    

    where

    • bin is just a bunch of symlinks to frequently used apps from opt
    • src is where I keep clones of repos (but I don’t do any work in src)
    • workspace is where I do my work, on git worktrees (based off src)
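
    And the xdg config I mentioned lives in ~/.config/user-dirs.dirs. Roughly like this (a sketch - the lowercase paths are just my own choice, and you’d run xdg-user-dirs-update after editing):

    XDG_DESKTOP_DIR="$HOME/desktop"
    XDG_DOCUMENTS_DIR="$HOME/doc"
    XDG_DOWNLOAD_DIR="$HOME/downloads"
    XDG_MUSIC_DIR="$HOME/music"
    XDG_PICTURES_DIR="$HOME/pictures"
    XDG_PUBLICSHARE_DIR="$HOME/public"
    XDG_TEMPLATES_DIR="$HOME/templates"
    XDG_VIDEOS_DIR="$HOME/videos"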




  • RE Go: Others have already mentioned the right way, though I’d personally prefer ~/opt/go over what was suggested.
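
    That’d be roughly these lines in your bash init files (a sketch, assuming all you want is the module cache and go-install’ed binaries under ~/opt/go):

    export GOPATH="$HOME/opt/go"
    export GOBIN="$GOPATH/bin"
    export PATH="$GOBIN${PATH:+:${PATH}}"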


    RE Perl: To instruct Perl to install to another directory, for example to ~/opt/perl5, put the following lines somewhere in your bash init files.

    export PERL5LIB="$HOME/opt/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}"
    export PERL_LOCAL_LIB_ROOT="$HOME/opt/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}"
    export PERL_MB_OPT="--install_base \"$HOME/opt/perl5\""
    export PERL_MM_OPT="INSTALL_BASE=$HOME/opt/perl5"
    export PATH="$HOME/opt/perl5/bin${PATH:+:${PATH}}"
    

    You’ll need to re-install the Perl packages you had previously installed, though.
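
    After that, installing into the new location should be as simple as the line below (assuming you use cpanm; Some::Module is just a placeholder):

    # --local-lib points cpanm at the same install base as the env vars above
    cpanm --local-lib="$HOME/opt/perl5" Some::Module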


  • First off, I was ready to close the tab at the slightest suggestion of using Velocity as a metric. That didn’t happen 🙂


    I like the idea that metrics should be contained and sustainable, though I don’t agree w/ the suggested metrics.

    In general, it seems they are all designed around the process and not the product. In particular, there’s no mention of the “value unlocked” in each sprint: it’s an important one for an Agile team as it holds Product accountable for understanding what the $$$ value of the team’s effort is.

    The suggested set, to my mind, is formed around the idea of a feature factory line and its efficiency (assuming that is even measurable). It leaves out the “meaning” of what the team achieves w/ that efficiency.

    My 2 cents.


    Good read nonetheless 👍 Got me thinking about this intriguing topic after a few years.





  • I’ve got to admit that your points about the author’s presentation skills are all correct! Perhaps the reason I was able to relate to the material and ignore those flaws is that it’s a topic I’ve been actively struggling w/ for the past few years 😅

    That said, I’m still happy that this wasn’t a YouTube video or we’d be having this conversation in the comments section (if ever!) 😂


    To your point and @krnpnk@feddit.de’s RE embedded systems:

    It’s absolutely true that such a mindset is probably not going to work in an embedded environment. The author, w/o mentioning it anywhere, is clearly talking about distributed systems where you’ve got plenty of resources, stable network connectivity and a log/trace ingestion solution (like Sumo or Datadog) alongside your setup.

    That’s indeed an expensive setup, esp for embedded software.


    The narrow scope and the stylistic problem aside, I believe the author’s view is correct, if a bit radical.
    One of the major pain points of troubleshooting distributed systems is sifting through the logs produced by different services and teams, each w/ a different take on what the important bits of information in a log message are.

    It gets extremely hairy when you’ve got a non-linear lifeline for a request (ie branches of execution), and even worse when you need to keep your logs free of any information which could potentially identify a customer.

    The article and the conversation here got me thinking that maybe a combo of tracing and structured logging can help simplify investigations.


  • Thanks for sharing your insights.


    Thinking out loud here…

    In my experience with traditional logging and distributed systems, timestamps and request IDs do store the information required to partially reconstruct a timeline:

    • In the case of a linear (single-branch) timeline, you can always “query” by a request ID and order by the timestamps, which is pretty much what tracing will do too (see the sketch after this list).
    • Things, however, get complicated when you’ve a timeline w/ multiple branches.
      For example, consider the following relatively simple diagram.
      Reconstructing the causality and join/fork relations between the execution nodes is almost impossible using traditional logs, whereas a tracing solution will turn this into a nice visual w/ all the spans and sub-spans.
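
    To make the first point concrete, assuming JSON logs w/ hypothetical request_id and timestamp fields, the “query” could be as crude as:

    # collect one request's events and order them - roughly what a tracer automates
    jq -s 'map(select(.request_id == "req-42")) | sort_by(.timestamp)' app.log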

    That said, logs do shine when things go wrong; when you start your investigation by using a stacktrace in the logs as a clue. That (stacktrace) is something that I’m not sure a tracing solution will be able to provide.


    they should complement each other

    Yes! You nailed it 💯

    Logs are indispensable for troubleshooting (and potentially nothing else) while tracers are great for, well, tracing the data/request throughout the system and analysing the mutations.






  • I cross-posted the same questions on Matrix and got the answer there.


    The hook I’m using is invoked before the minor modes are set up - that’s why it’s being overridden. The suggestion was to have a hook function for each minor mode that I want to control. It’s not clean but it gets the job done.


    Here’s the working snippet:

    ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
    
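    ;; Helm-specific: turn the unwanted minor modes off once helm-major-mode is in place.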
    (defun bahman/helm-major-mode.hook ()
     (display-line-numbers-mode -1)
     (puni-mode -1))
    
    (add-hook 'helm-major-mode-hook
             #'bahman/helm-major-mode.hook)
    
    ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
    
    (defvar bahman/display-line-numbers-mode.disabled-modes
     '(vterm-mode erlang-shell-mode)
     "Disable `display-line-numbers' for the specified modes.")
    
    (defun bahman/display-line-numbers-mode.hook ()
     (when (seq-contains-p bahman/display-line-numbers-mode.disabled-modes
                           major-mode)
         (display-line-numbers-mode -1)))
    
    (add-hook 'display-line-numbers-mode-hook
             #'bahman/display-line-numbers-mode.hook)
    
    ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
    
    (defvar bahman/puni-mode.disabled-modes
     '(vterm-mode)
     "Disable `puni-mode' for the specificied modes.")
    
    (defun bahman/puni-mode.hook ()
     (when (seq-contains-p bahman/puni-mode.disabled-modes
                           major-mode)
       (puni-mode -1)))
    
    (add-hook 'puni-mode-hook
             #'bahman/puni-mode.hook)
    


  • This is quite intriguing. But, as pointed out by @breadsmasher@lemmy.world, DHH has left so many details out (at least in that post) that it’s difficult to relate to.

    On the other hand, like DHH said, one’s mileage may vary: it’s, in many ways, a case-by-case analysis that companies should do.

    I know many businesses shrink the ops team and hire less experienced ops people to save $$$, only to forward those saved $$$ to cloud providers. I can only assume DHH’s team is composed of a bunch of experienced, well-paid ops people who can pull such feats off.

    Nonetheless, looking forward to, hopefully, a follow-up post that lays out some more details. Pray share if you come across it 🙏


  • I think I understand where RMS was coming from RE “recursive variables”. As I wrote in my blog:

    Recursive variables are quite powerful as they introduce a pinch of imperative programming into the otherwise totally declarative nature of a Makefile.

    They extend the capabilities of Make quite substantially. But like any other powerful tool, one needs to use them sparingly and responsibly or end up w/ a complex and hard-to-debug Makefile.
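
    A tiny sketch of the difference, w/ made-up variable names:

    BASE_FLAGS = -Wall

    # simply expanded (:=): evaluated once, right here
    RELEASE_FLAGS := $(BASE_FLAGS) -O2

    # recursively expanded (=): re-evaluated on every reference
    DEBUG_FLAGS = $(BASE_FLAGS) -g

    BASE_FLAGS += -Wextra
    # RELEASE_FLAGS is still "-Wall -O2"
    # DEBUG_FLAGS now expands to "-Wall -Wextra -g"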

    In my experience, most of the time I can avoid using recursive variables and instead lay out the rules and prerequisites in a way that achieves the same. However, occasionally I have to resort to them, and I’m thankful that RMS didn’t win and they exist in GNU Make today 😅 IMO purist solutions have a tendency to turn out impractical.