  • So if I understood NVIDIA’s “strategy” right, its use of companies like CoreWeave is drawing in money from other investors and private equity? Does this mean that, unlike many of the other companies in the current bubble, NVIDIA isn’t going to lose money on net, because it is actually luring investment from other sources into companies like CoreWeave (which is spent on GPUs and thus flows back to NVIDIA), while leaving the debt and obligations in the hands of companies like CoreWeave? If I’m following right, this is still a long-term losing strategy (assuming some form of AI bubble pop or deflation, which we are all at least reasonably sure of), but the expected result for NVIDIA is a massive drop in revenue rather than a total collapse of the company under a mountain of debt?

  • The Oracle deal seemed absurd, but I didn’t realize how absurd until I saw Ed’s compilation of the numbers. Notably, even if OpenAI meets its projected revenue numbers (which are absurdly optimistic: bigger than Netflix, Spotify, and several other services combined), paying Oracle (along with everyone else it has promised to buy compute from) will leave it spending more than it takes in until 2030, meaning it has to raise even more money.
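
    To put rough numbers on that dynamic, here is a back-of-the-envelope sketch in Python. The figures are made-up placeholders, not Ed’s actual numbers; the point is only the shape of the arithmetic: if committed compute spend exceeds projected revenue every year, the difference has to come from new fundraising.

    ```python
    # Illustrative placeholder figures only -- NOT Ed's actual numbers.
    projected_revenue   = {2027: 45e9, 2028: 65e9, 2029: 90e9}    # hypothetical, USD
    compute_commitments = {2027: 60e9, 2028: 80e9, 2029: 100e9}   # hypothetical, USD

    for year, revenue in projected_revenue.items():
        gap = revenue - compute_commitments[year]
        print(f"{year}: {gap / 1e9:+.0f}B before payroll, free-tier inference, etc.")

    # Every negative year is money that has to come from a new funding round.
    ```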

    I’ve been assuming Sam Altman has absolutely no real belief that LLMs will lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI’s choices don’t make any long-term sense if AGI isn’t coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy a few years of personal enrichment, and even asking what his “real beliefs” are gives him too much credit.

    Just to remind everyone: the market can stay irrational longer than you can stay solvent!

  • It’s a good post. A few minor quibbles:

    The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.

    I think at least some of the people at launch were true believers, but strong financial incentives, plus some cynics present at the start, meant the true believers never really had a chance, culminating in the board trying and failing to oust Sam Altman, with him successfully leveraging the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognized and tried to mitigate the harmful incentives of capitalism, they fell vastly short. OTOH… if failing to convert to a for-profit company turns out to be a decisive moment in popping the GenAI bubble, then at least it was good for something?

    These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.

    I wish people didn’t feel the need to add all these disclaimers, or at least put a disclaimer on their disclaimers. For coding, it is a slightly better autocomplete that also introduces massive security and maintainability problems when people rely on it entirely. It is a better web search only relative to the ad-money-motivated compromises Google has made, and it breaks the implicit social contract of web search (sites allow themselves to be crawled so that human traffic will ultimately come to them), which could have pretty far-reaching impacts.
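
    To make the security point concrete, here is a hypothetical example of the failure mode (not from any real assistant’s output): autocomplete-style code that runs fine in the happy path but is SQL-injectable, next to the boring parameterized version.

    ```python
    import sqlite3

    # Self-contained toy schema so the example actually runs.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2'), ('bob', 'swordfish')")

    # The kind of code autocomplete happily suggests: works in the happy
    # path, but input like "x' OR '1'='1" dumps the whole table.
    def get_user_unsafe(user_input):
        return conn.execute(
            f"SELECT * FROM users WHERE name = '{user_input}'"
        ).fetchall()

    # The boring, correct version: a parameterized query, so the input is
    # never parsed as SQL.
    def get_user_safe(user_input):
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (user_input,)
        ).fetchall()

    print(get_user_unsafe("x' OR '1'='1"))  # -> both rows, secrets included
    print(get_user_safe("x' OR '1'='1"))    # -> []
    ```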

    One of the things I liked and didn’t know about before:

    Ask Claude any basic question about biology and it will abort.

    That is hilarious! Kind of overkill, to be honest: I think they’ve really overrated how much an LLM could help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author’s overall point that this shut-it-down approach could be applied to a variety of topics.

    One of the comments gets it:

    Safety team/product team have conflicting goals

    LLMs aren’t actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF thrown at them, so you’re left either over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems in one model).
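
    A toy sketch of why that trade-off is structural (purely illustrative, nothing like any vendor’s actual safety pipeline): a blunt keyword filter over-blocks benign biology questions while a trivial paraphrase walks straight past it, and tightening or loosening the list just trades one failure for the other.

    ```python
    # Toy keyword "safety" filter -- purely illustrative, not any real system.
    BLOCKLIST = {"pathogen", "virus", "toxin"}

    def blocked(prompt: str) -> bool:
        return any(word in prompt.lower() for word in BLOCKLIST)

    print(blocked("How does a virus enter a cell?"))          # True: homework, censored
    print(blocked("How does that tiny bug get into cells?"))  # False: paraphrase passes

    # Tighten the list and more biology homework gets censored; loosen it
    # and more paraphrases get through. Fine-tuning moves the trade-off,
    # it doesn't remove it.
    ```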