• 0 Posts
  • 377 Comments
Joined 1 year ago
Cake day: August 24th, 2023

  • My phone is starting to take better document pictures than my printer can scan, especially if I hold the phone steady at the right height. I bet I could use a 3D printer to make a stand to hold it for me. Heck, I could buy a giant bucket or rack or whatever, plus bright high-CRI lights, to push image clarity to ungodly levels. And the Samsung camera app, with its AI features already preloaded and likely to improve over the next 5 years, will only get better. Printers and scanners are a complete pita.


  • Oh! I didn’t think of it that way, lol; I was thinking through this quickly. I didn’t relate it to physical locks because it clearly said it was virtual. But I guess there could theoretically be a physical tumbler lock with 0-5 spacers; it would just be a tall lock. You know, like how some master-keyed locks have spacers so that either the master key or the client key can open the lock.



  • Python3

    Ah well, this year ends with a simple ~12.5 ms solve and not much of a brain teaser. At least I got around to solving all of the challenges.

    Code
    from os.path import dirname,realpath,join
    
    from collections.abc import Callable
    from typing import Any
    def profiler(method) -> Callable[..., Any]:
        from time import perf_counter_ns
        def wrapper_method(*args: Any, **kwargs: Any) -> Any:
            start_time = perf_counter_ns()
            ret = method(*args, **kwargs)
            stop_time = perf_counter_ns() - start_time
            time_len = min(9, ((len(str(stop_time))-1)//3)*3)
            time_conversion = {9: 'seconds', 6: 'milliseconds', 3: 'microseconds', 0: 'nanoseconds'}
            print(f"Method {method.__name__} took : {stop_time / (10**time_len)} {time_conversion[time_len]}")
            return ret
        return wrapper_method
    
    @profiler
    def solver(locks_and_keys_str: str) -> int:
        locks_list = []
        keys_list  = []
    
        for schematic in locks_and_keys_str.split('\n\n'):
            if schematic[0] == '#':
                locks_list.append(tuple([column.index('.') for column in zip(*schematic.split())]))
            else:
                keys_list.append(tuple([column.index('#') for column in zip(*schematic.split())]))
        
        count = 0
        for lock_configuration in locks_list:
            for key_configuration in keys_list:
                for i,l in enumerate(lock_configuration):
                    if l>key_configuration[i]:
                        # break on the first configuration that is invalid
                        break
                else:
                    # skipped when loop is broken
                    count += 1
        
        return count
    
    if __name__ == "__main__":
        BASE_DIR = dirname(realpath(__file__))
        with open(join(BASE_DIR, r'input'), 'r') as f:
            input_data = f.read().replace('\r', '').strip()
        result = solver(input_data)
        print("Day 25 final solve:", result)
    
    


  • It is definitely possible to do it programmatically, but like most people, myself included, I simply did it manually. Though I did do this:

    Code
        # excerpt from the larger script; `connections` and `wire_states` are defined earlier from the puzzle input
        # we step through a proper ripple-carry adder and check whether each gate is correct for the given input.
        # the first level starts with all the initial input gates;
        # these must have XOR and AND as gates.
        marked_problem_gates = defaultdict(lambda: defaultdict(list))
        # dict keyed by full adder index, with values describing where the next wires lead.
        full_adder_dict = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
        for i in range(len(wire_states)//2):
            for (a_key,gate,b_key),output_wire in connections['x'+str(i).rjust(len(next(iter(wire_states)))-1, '0')]:
                if i == 0:
                    matching_z_wire = 'z' + str(i).rjust(len(next(iter(wire_states)))-1, '0')
                    # we check the first adder as it should have the correct output wire connections for a half adder type.
                    if gate == 'XOR' and output_wire != matching_z_wire:
                        # for the half adder the final sum output bit should be a matching z wire.
                        marked_problem_gates[i][gate].append(((a_key,gate,b_key),output_wire))
                    elif gate == 'AND' and (output_wire.startswith('z') and (output_wire[1:].isdigit())):
                        # for the half adder the carry over bit should not be a z wire.
                        marked_problem_gates[i][gate].append(((a_key,gate,b_key),output_wire))
                    if output_wire in connections:
                        full_adder_dict[i][gate][output_wire].extend(connections[output_wire])
                else:
                    # since the first adder is a half adder, for the remaining adders we only need to verify that this first-level gate does not output to a z wire.
                    if (output_wire.startswith('z') and (output_wire[1:].isdigit())):
                        # if the output wire is a z wire, we mark the gate as a problem.
                        marked_problem_gates[i][gate].append(((a_key,gate,b_key),output_wire))
                    else:
                        # we get what the output wires to the next step in the adder would be.
                        full_adder_dict[i][gate][output_wire].extend(connections[output_wire])
        
        # for the next step in the full adders, we expect the carry-over bit from the previous adder stage to be one of the inputs.
        
        print(marked_problem_gates)
        print(full_adder_dict)
    

    I then took the printed output and cleaned it up a little.

    Output

    This is really just me looking at it to make sure the outputs are connected to the right locations. It just took a minute, and the highlighting of the selected text helped out. Simply faster than trying to come up with some complex logic.

    Basically brute-forcing part 2 with our eyes lol

    I did notice that most of the swaps were within the full adders and not across the entire circuit, so it was a little easier than anticipated.

    I only parse the initial gates of each full adder, the ones fed by the x and y input wires. The only exception is that the first pair of input wires goes into a half adder rather than a full adder, so there we only verify that the XOR gate is connected to output wire z00 and that the AND gate does not output to a z wire. Every other AND output wire should be connected to exactly one logic gate: the OR gate for the carry bit.
    The XOR gate's output wire should be connected to the two gates that take in the carry bit wire from the previous adder.

    From there I just used my memory and intuition about full adders to clean up the output and see which of the full adders have bad output wire configurations.
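
    In case it helps anyone, here is a minimal, self-contained sketch of those wiring checks. The `(a, op, b, out)` gate tuples, the `find_suspect_outputs` name, and the `num_bits` parameter are my own illustrative assumptions, not the exact structures from the excerpt above.

    Code
    def find_suspect_outputs(gates: list[tuple[str, str, str, str]], num_bits: int) -> set[str]:
        """Flag output wires that break the expected ripple-carry wiring rules (hypothetical sketch)."""
        def is_xy(w: str) -> bool:
            return w[0] in "xy" and w[1:].isdigit()

        last_z = f"z{num_bits:02d}"            # final carry-out: the only z wire an OR gate may drive
        consumers: dict[str, set[str]] = {}    # wire -> set of gate ops fed by that wire
        for a, op, b, out in gates:
            for w in (a, b):
                consumers.setdefault(w, set()).add(op)

        suspects = set()
        for a, op, b, out in gates:
            first_stage = is_xy(a) and is_xy(b)      # gate fed directly by x/y input wires
            half_adder = {a, b} == {"x00", "y00"}    # bit 0 uses a half adder, so it is special
            if op == "XOR":
                if half_adder:
                    if out != "z00":                 # the half adder's sum must be z00
                        suspects.add(out)
                elif first_stage:
                    # an x/y XOR must not drive a z wire and must feed the next stage's XOR and AND
                    if out.startswith("z") or not {"XOR", "AND"} <= consumers.get(out, set()):
                        suspects.add(out)
                elif not out.startswith("z"):
                    suspects.add(out)                # an internal XOR is the sum bit, so it must drive a z wire
            elif op == "AND":
                if half_adder:
                    if out.startswith("z"):          # the half adder's carry must not be a z wire
                        suspects.add(out)
                elif out.startswith("z") or consumers.get(out, set()) != {"OR"}:
                    suspects.add(out)                # a full adder's AND output should feed only the carry OR gate
            elif op == "OR" and out.startswith("z") and out != last_z:
                suspects.add(out)                    # the carry OR may only drive the final z wire
        return suspects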


  • Python3

    Another day solved with optimized logic

    Method build_graph took : 1.554318 milliseconds
    Method count_unique_triads took : 943.371 microseconds
    Method find_largest_clique took : 933.651 microseconds
    Method solver took : 3.800842 milliseconds

    Again, Linux handles Python better, with a combined final solve time of ~3.8 ms. Still, it is quite fast on Windows, with only a ~0.5 ms increase.

    Still having fun adding typing and comments with VSCode + Qwen-coder 1.5B running locally. It feels so nice to be proud of readable and optimized code.

    Code
    from os.path import dirname,realpath,join
    from itertools import combinations as itertools_combinations
    from collections import defaultdict
    
    from collections.abc import Callable
    from typing import Any
    def profiler(method) -> Callable[..., Any]:
        from time import perf_counter_ns
        def wrapper_method(*args: Any, **kwargs: Any) -> Any:
            start_time = perf_counter_ns()
            ret = method(*args, **kwargs)
            stop_time = perf_counter_ns() - start_time
            time_len = min(9, ((len(str(stop_time))-1)//3)*3)
            time_conversion = {9: 'seconds', 6: 'milliseconds', 3: 'microseconds', 0: 'nanoseconds'}
            print(f"Method {method.__name__} took : {stop_time / (10**time_len)} {time_conversion[time_len]}")
            return ret
        return wrapper_method
    
    @profiler
    def build_graph(connections: list[str]) -> defaultdict[str, set[str]]:
        """
        Builds an adjacency list from the list of connections.
        """
        adj = defaultdict(set)
        for conn in connections:
            nodes = conn.strip().split('-')
            nodes.sort()
            node1, node2 = nodes
            adj[node1].add(node2)
            adj[node2].add(node1)
        
        return adj
    
    @profiler
    def count_unique_triads(adj: defaultdict[str, set[str]]) -> int:
        """
        Counts the number of unique triads where a 't_node' is connected to two other nodes
        that are also directly connected to each other. Ensures that each triad is counted only once.
        """
        unique_triads = set()
        
        # Identify all nodes starting with 't' (case-insensitive)
        t_nodes = [node for node in adj if node.lower().startswith('t')]
        
        for t_node in t_nodes:
            neighbors = adj[t_node]
            if len(neighbors) < 2:
                continue  # Need at least two neighbors to form a triad
            
            # Generate all unique unordered pairs of neighbors
            for node1, node2 in itertools_combinations(neighbors, 2):
                if node2 in adj[node1]:
                    # Create a sorted tuple to represent the triad uniquely
                    triad = tuple(sorted([t_node, node1, node2]))
                    unique_triads.add(triad)
        
        return len(unique_triads)
    
    def all_connected(nodes: tuple[str, ...], adj: defaultdict[str, set[str]]) -> bool:
        """
        Determines whether all given nodes are connected to each other by checking that every pair of nodes is adjacent.
        Effectively determines whether the nodes form a clique.
        """
        """
        for i,node in enumerate(nodes):
            for j in range(i + 1, len(nodes)):
                if nodes[j] not in adj[node]:
                    return False
        return True
    
    @profiler
    def find_largest_clique(adj: defaultdict[str, set[str]]) -> list[str]:
        """
        Iterates over each vertex and its neighbors to find the largest clique. A clique is a subset of nodes where every pair of nodes is connected by an edge.
        The function returns the vertices of the largest clique found. If no clique is found, it returns an empty list.
        """
        # Keep track of the largest connected set found so far
        best_connected_set = tuple(['',''])
        # Iterate over each vertex in the graph with the neighbors
        for vertex, neighbors in adj.items():
            # Since the clique must have all nodes share similar neighbors then we can start with the size of the neighbors and iterate down to a clique size of 2
            for i in range(len(neighbors), len(best_connected_set)-1, -1):
                # we don't check smaller cliques because they will be smaller than the current best connected set
                if i < len(best_connected_set):
                    break
                for comb in itertools_combinations(neighbors, r=i):
                    if all_connected(comb, adj):
                        best_connected_set = max(
                            (*comb, vertex), best_connected_set, key=len
                        )
                        break
        return sorted(best_connected_set)
    
    # Solve Part 1 and Part 2 of the challenge at the same time
    @profiler
    def solver(connections: str) -> tuple[int, str]:
        # Build the graph
        adj = build_graph(connections.splitlines())
        # return the count of unique triads and the largest clique found
        return count_unique_triads(adj),','.join(find_largest_clique(adj))
    
    if __name__ == "__main__":
        BASE_DIR = dirname(realpath(__file__))
        with open(join(BASE_DIR, r'input'), 'r') as f:
            input_data = f.read().replace('\r', '').strip()
        result = solver(input_data)
        print("Part 1:", result[0], "\nPart 2:", result[1])
    
    

  • I found that pasting the straight text from the website into GPT o1 can solve it, but sometimes o1 produces code that is not efficient. So the challenges with scaling problems in part 2 (blinking rocks and lanternfish) or other ways of making a fast solution hard to write (like the towel one and day 22) are places where it struggled a lot.

    Day 12, with the perimeter and all the extra minute details, also gave GPT trouble. So did day 14, especially the easter egg where you need to step through the iterations to find it; GPT can't really solve that because there is not enough context about the tree unless you do some digging into how it should look.

    These were just casual observations. Clearly there is more testing to do, but it shows these are big struggle points. If we are talking about getting on the leaderboard, then I am glad there are challenges like these in AoC where not relying on an LLM is better.


  • You have a point that I should name it something else, but I was too lazy to do that. Maybe I should simply call it solve(); that would work fine.
    I do want to note that having it return a value is not unheard of; it is just part of being lazy with the naming of the functions.
    I definitely would not fold the code outside of main() into the main function, as it is just there to grab the input, pass it to the solver function (main in this case, though as you noted it should be called something else), and print out the results. If you imported it as a module, you could then call main() with any input and get back the results to do whatever you want with, something like the sketch just below. It is just something I think makes the code better to look at and use.
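
    Roughly the shape I mean, as a minimal sketch (the solve() name and the 'input' filename are just placeholders):

    Code
    def solve(input_data: str) -> int:
        # all the puzzle logic lives here and returns a value instead of printing
        return len(input_data.splitlines())

    if __name__ == "__main__":
        # only the I/O plumbing sits at the top level, so importing the module stays side-effect free
        with open('input', 'r') as f:
            print(solve(f.read().strip()))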

    While doing this is highly unnecessary for these challenges, I want to keep a little bit of proper structure rather than writing the script with everything at the top level. It feels dirty otherwise.

    Coding in Notepad was a bit brutal, but I stuck with Notepad for years and never really cared, because I copy-paste quite a bit from documentation or whatnot. (Nowadays GPT helps me fix my shit code, now that hallucinations are reduced decently.) Even with VSCode, I don't pay attention to many of its features. I still kind of treat it as a text editor, just with extra nagging on top. (The nagging is a positive, I guess, but I am an ape who gives few fucks.) I do like VSCode having a workspace explorer on the side; I dislike needing to alt-tab between various open file explorer windows. Having tabs for open files is really nice, too.

    VSCode is nice, and running my Qwen-coder-1.5B locally is neat for helping some things out. It's not that I rely on it for the actual coding; rather, I use it for comments, or sometimes I stop to think and the autocomplete updates in real time while I am typing. Really neat stuff. I think running locally is better than Copilot simply because it is more real-time than the latency of talking to MS servers, though I do think about all the extra power it uses and the waste heat from it constantly keeping up with new context as I type.

    The quality took a little hit with the smaller model compared to Copilot, but so far it is not bad at all for the things I expect it to help with. It definitely is capable of helping out. I do get annoyed when the autocomplete tries too hard and generates a lot more than I want (even if the first part of what it generated is what I wanted and the rest is not); thankfully, that is not too often.
    I'd say the local LLM helps with about 70% of the comments and 15% of the code on average, but it is not very consistent for the code.
    For Python, there is not enough syntax overhead to worry about, so the autocomplete isn't needed as much.


  • Python3

    Hey there Lemmy, I recently transitioned from using Notepad to Visual Studio Code, along with running a local LLM for autocomplete (faster than Copilot, which is a big plus, but it makes my room hot af).

    I was able to make this Python script with a bunch of fancy comments and typing silliness. I ended up spamming so many comments. Yay documentation! lol

    Solve time: ~3 seconds (can swing up to 5 seconds)

    Code
    from tqdm import tqdm
    from os.path import dirname,isfile,realpath,join
    
    from collections.abc import Callable
    from typing import Any
    def profiler(method) -> Callable[..., Any]:
        from time import perf_counter_ns
        def wrapper_method(*args: Any, **kwargs: Any) -> Any:
            start_time = perf_counter_ns()
            ret = method(*args, **kwargs)
            stop_time = perf_counter_ns() - start_time
            time_len = min(9, ((len(str(stop_time))-1)//3)*3)
            time_conversion = {9: 'seconds', 6: 'milliseconds', 3: 'microseconds', 0: 'nanoseconds'}
            print(f"Method {method.__name__} took : {stop_time / (10**time_len)} {time_conversion[time_len]}")
            return ret
        return wrapper_method
    
    # Process a secret to a new secret number
    # @param n: The secret number to be processed
    # @return: The new secret number after processing
    def process_secret(n: int) -> int:
        """ 
        Process a secret number by XORing it with the result of shifting left and right operations on itself. 
        The process involves several bitwise operations to ensure that the resulting number is different from the original one.
        
        First, multiply the original secret number by 64 with a left bit shift, then XOR the original secret number with the new number and prune to get a new secret number
        Second, divide the previous secret number by 32 with a right bit shift, then XOR the previous secret number with the new number and prune to get another new secret number
        Third, multiply the previous secret number by 2048 with a left bit shift, then XOR the previous secret number with the new number and prune to get the final secret number
        Finally, return the new secret number after these operations.
    
        """
        n ^= (n << 6) 
        n &= 0xFFFFFF
        n ^= (n >> 5)
        n &= 0xFFFFFF
        n ^= (n << 11)
        return n & 0xFFFFFF
    
    # Solve Part 1 and Part 2 of the challenge at the same time
    @profiler
    def solve(secrets: list[int]) -> tuple[int, int]:
        # Build a dictionary for each buyer with the tuple of changes being the key and the sum of prices of the earliest occurrence for each buyer as the value
        # At the same time we solve Part 1 of the challenge by adding the last price for each secret
        last_price_sum = 0
        changes_map = {}
        for start_secret in secrets:
            # Keep track of seen patterns for current secret
            changes_seen = set()
            # seed the rolling 4-tuple; note the first element is the first secret itself
            # (not a price change) and it gets shifted out before any pattern is recorded
            last_four_changes = tuple([
                                        (cur_secret:=process_secret(start_secret)), 
                                       -(cur_secret%10) + ((cur_secret:=process_secret(cur_secret)) % 10) , 
                                       -(cur_secret%10) + ((cur_secret:=process_secret(cur_secret)) % 10) , 
                                       -(cur_secret%10) + ((cur_secret:=process_secret(cur_secret)) % 10) 
                                       ]
                                       )
            current_price = sum(last_four_changes)
            # Map 4-tuple of changes -> sum of the prices at each buyer's earliest occurrence of that pattern
            for i in range(3, 1999):
                # sliding window of last four changes
                last_four_changes = (*last_four_changes[1:], -(cur_secret%10) + (current_price := (cur_secret:=process_secret(cur_secret)) % 10) )
                # if we have seen this pattern before, then we continue to next four changes
                # otherwise, we add the price to the mapped value
                # this ensures we only add the first occurrence of a pattern for each list of prices each secret produces
                if last_four_changes not in changes_seen:
                    # add to the set of seen patterns for this buyer
                    changes_seen.add(last_four_changes)
                    # If not recorded yet, store the price
                    # i+4 is the index of the price where we sell
                    changes_map[last_four_changes] = changes_map.get(last_four_changes, 0) + current_price
    
            # Add the 2000th secret number to the running total across all buyers
            last_price_sum += cur_secret
        # Return the sum of all prices at the 2000th iteration and the maximum amount of bananas that one pattern can obtain
        return last_price_sum,max(changes_map.values())
    
    if __name__ == "__main__":
        # Read buyers' initial secrets from file or define them here
        BASE_DIR = dirname(realpath(__file__))
        with open(join(BASE_DIR, r'input'), "r") as f:
            secrets = [int(x) for x in f.read().split()]
        last_price_sum,best_score = solve(secrets)
        print("Part 1:")
        print(f"sum of each of the 2000th secret number:", last_price_sum)
        print("Part 2:")
        print("Max bananas for one of patterns:", best_score)
    
    

  • Nice job! Now do it without recursion, something I always try to shy away from because I think it is dirty; it makes me feel like it is the “lazy man’s” loop.

    Aho-Corasick is definitely meant for this kind of challenge, where you need to find all occurrences of multiple patterns, so it is something worth reading up on! If you are someone who builds up a util library for future AoC or other projects, it would likely come in handy at some point.

    Another algorithm, specific to finding a single pattern, is the Boyer-Moore algorithm. Something to mull over: https://softwareengineering.stackexchange.com/a/183757 (a rough sketch of its simpler Horspool variant is below).
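
    Here is that sketch; the function name and the details are my own, so treat it as illustrative rather than a reference implementation:

    Code
    def horspool_find(text: str, pattern: str) -> int:
        """Return the index of the first occurrence of pattern in text, or -1."""
        m, n = len(pattern), len(text)
        if m == 0:
            return 0
        # bad-character table: how far we may safely shift when the text character
        # aligned with the pattern's last position is c
        shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
        i = 0
        while i <= n - m:
            if text[i:i + m] == pattern:
                return i
            # skip ahead based on the text character under the pattern's last slot
            i += shift.get(text[i + m - 1], m)
        return -1

    # e.g. horspool_find("the quick brown fox", "brown") == 10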

    We have to remember we are all discovering new and interesting ways to solve similar or identical challenges. I am still learning lots; hope to see you around here and next year!



  • Nice, don’t be shy about revisiting these later on; I am definitely doing the same for older AoC years. I am happy you got it down that low. With Python, I am really trying my best to figure out ways to keep my solve times low. It gives me a challenge: fighting a slow language's faults with optimized logic haha

    I will eventually wind down my use of Python and move over to Rust, and I would be happy to eventually move to an old functional language like Haskell. I don't think I will learn those specialized languages like uiua, though; I don't think I can get over the magic-like symbols lol


  • Python3

    The solver uses a trie just like the other Python solve here; I try not to look at the solution threads until after I have solved it. Yet I somehow got a solve that only takes ~7 ms. Previously I would rebuild the trie every time, which made it take ~55 ms.

    Code
    from collections import deque
    
    from typing import Any
    def profiler(method):
        from time import perf_counter_ns
        def wrapper_method(*args: Any, **kwargs: Any) -> Any:
            start_time = perf_counter_ns()
            ret = method(*args, **kwargs)
            stop_time = perf_counter_ns() - start_time
            time_len = min(9, ((len(str(stop_time))-1)//3)*3)
            time_conversion = {9: 'seconds', 6: 'milliseconds', 3: 'microseconds', 0: 'nanoseconds'}
            print(f"Method {method.__name__} took : {stop_time / (10**time_len)} {time_conversion[time_len]}")
            return ret
    
        return wrapper_method
    
    def build_aho_corasick(towels):
        # Each node: edges dict, failure link, output (which towels end here)
        trie = [{'fail': 0, 'out': []}]
        
        # Build trie of towels
        for index, word in enumerate(towels):
            node = 0
            for char in word:
                if char not in trie[node]:
                    trie[node][char] = len(trie)
                    trie.append({'fail': 0, 'out': []})
                node = trie[node][char]
            trie[node]['out'].append(len(word))  # store length of matched towel
    
        # Build fail links (BFS)
        queue = deque()
        # Initialize depth-1 fail links
        for c in trie[0]:
            if c not in ('fail', 'out'):
                nxt = trie[0][c]
                trie[nxt]['fail'] = 0
                queue.append(nxt)
    
        # BFS build deeper fail links
        while queue:
            r = queue.popleft()
            for c in trie[r]:
                if c in ('fail', 'out'):
                    continue
                nxt = trie[r][c]
                queue.append(nxt)
                f = trie[r]['fail']
                while f and c not in trie[f]:
                    f = trie[f]['fail']
                trie[nxt]['fail'] = trie[f][c] if c in trie[f] else 0
                trie[nxt]['out'] += trie[trie[nxt]['fail']]['out']
        return trie
    
    def count_possible_designs_aho(trie, design):
        n = len(design)
        dp = [0] * (n + 1)
        dp[0] = 1
    
        # State in the trie automaton
        state = 0
    
        for i, char in enumerate(design, 1):
            # Advance in trie
            while state and char not in trie[state]:
                state = trie[state]['fail']
            if char in trie[state]:
                state = trie[state][char]
            else:
                state = 0
            
            # For every towel match that ends here:
            for length_matched in trie[state]['out']:
                dp[i] += dp[i - length_matched]
        
        return dp[n]
    
    @profiler
    def main(input_data):
        towels,desired_patterns = ( sorted(x.split(), key=len, reverse=True) for x in input_data.replace(',', ' ').split('\n\n') )
        part1 = 0
        part2 = 0
        
        # Build Aho-Corasick structure
        trie = build_aho_corasick(towels)
        
        for pattern in desired_patterns:
            count = count_possible_designs_aho(trie, pattern)
            if count:
                part1 += 1
                part2 += count
        
        return part1,part2
    
    if __name__ == "__main__":
        with open('input', 'r') as f:
            input_data = f.read().replace('\r', '').strip()
        result = main(input_data)
        print("Part 1:", result[0], "\nPart 2:", result[1])
    
    

  • I liked all of the maze-solving puzzles; this was a good year for putting your best maze-solving skills to the test. In the end, I disliked 14, 15, and 17, simply because I am not well versed in solving those types of challenges and it is hard to visualize them properly in my head. Did they push me? Yes, but the enjoyment was less than with the other puzzles.

    I also enjoyed some of the corrupted-memory puzzles like day 3, because I got to optimize with either .index() or straight regex. I really enjoyed it. However, that pales next to the pattern-recognition puzzle on day 19. Day 19 was extremely fun; many people had to hand-roll some kind of pattern solver. I eventually found a solution using the Aho-Corasick algorithm, which brought my combined part 1 and 2 solution down to about ~55 ms. That felt extremely fast, since my previous solver took ~120 ms (in Python3). However, now that I have looked in the solutions thread, it seems I was not the only one to think of a trie, and my solution was still slower than theirs, sad me. lol, nvm, I found I was rebuilding the trie every time; it only needs to be built once, which dropped the solve to ~7 ms.

    I am still solving some of the later puzzles due to the holidays, but I have had fun so far, even though some days I was not well equipped to solve things efficiently or at all.


  • I deleted my other comment because my dogs jumped on me and somehow, in the mix-up, I pressed the post button, so I removed the partially malformed comment.

    On the first topic, your reasoning for differentiating the goals of competitive games from “chasing an ever-higher something” is just not well put into words here. From what I am reading, you tried to make them look different at a high level but failed to show it; you claim one side is competitive and the other is not. However, most definitions of being competitive boil down to chasing ever-higher “somethings”, which is usually points but in other contexts can be money. So your reasoning has basically equated them even more deeply. If you want to make the point that they are different, it needs to be redone in a way that is clearer and actually sets them in opposition.

    Moving on, I didn't completely agree that playing with cash is strictly adult. More or less, adult-only should cover anything that requires express consent, such as decisions that carry a risk of self-harm (playing with cash is one of these, because of our dependency on money), plus other adult topics that are not relevant here. There are things kids can buy to play games or to gain cosmetics for RP or whatever; I understand the problem would be account-selling shenanigans, but that at least raises the barrier to entry too, and most would not do it. Kids need to learn to be less impulsive, and I think that is closer to the root of the problem in this case. We need regulations that punish companies that try to exploit this.

    Yeah, gateway drugs… Look, I know that actual hard-to-quit drugs are not close to the same as these purely psychological addictions. Physical withdrawal from those drugs is miles more deadly and impactful on someone's body and brain. I do think people can take some addictions too far, but those cases are more likely to make the news than the tamer stories that reflect what the majority experience. I am still adamant that Balatro is nowhere close to this. It is not able to transition you to something else; that is more of an individual matter and not really generalizable. If someone is being socially pressured by friends or others into gambling, then Balatro will better prepare them not to make bigger mistakes, so in reality it is more of a positive than a negative.

    This is why I have a hard time reasoning with you on this. I would like to move on, so I will end it here. I feel like we have made our points and we will each decide for ourselves. When it comes down to it, I will more than likely vote for any regulation that restricts gambling and other forms of harmful addiction, while weighing any freedoms that may be reduced or affected. We will see. Happy holidays.




  • At the end of the day, we are looking at the problem from different views. I find these games very similar in many aspects, such as relying heavily on RNG to vary the outcomes. If normal, no-cash poker is adult, then games like Uno and other RNG-based games like dice rolling could just as easily be adult by your reasoning. I am not keen on labeling games this way.

    There are many more nuances to labeling something as adult-oriented, such as the ability to make yourself materially worse off to the point of harm (like losing your life savings or going into debt), or the content being sexual. That is what I would consider a requirement for an adult label. I personally think anything tied to real cash should be labeled as adult, yet right now that line is heavily blurred by microtransactions, lootboxes, cosmetics, in-game currency, and other ways of playing with cash.

    Playing with cash should be adult, but in-game cash that you cannot buy or sell is not something I can equate here, because I see it as just another incremental-simulator kind of game, like many idle games are nowadays.


  • I think there are dynamics at play here, but to keep it short: I admire your steadfast opposition to gambling and playing the odds. However, a game like Balatro is completely free to keep playing after the initial purchase. As far as I can tell, there are no microtransactions or pushes for players to sink their wallets into the game. The entire game is basically a simulation; all randomization is predetermined by a seed, which you can set yourself or figure out if you are clever enough with math and cryptography. Ratings, however, are more of an annoyance than a real guard against the actual problems happening right now.

    There are many more dissatisfying things, and I doubt we should fight over the rating. Instead, we should agree that gambling, and other pay-to-play odds like loot boxes, are bad and need stricter regulation.