It's so funny. Every single time a company raises a ton of money at a large valuation, the comments are always filled with "how do they justify this valuation" or "they aren't worth X... because Y and Z do the same thing".
VC math is pretty simple - at the end of the day, there's a pretty large likelihood that at least 1 AI company is going to reach a trillion dollar valuation. VCs want to find that company.
OpenAI, while definitely not the only player, is the most "mainstream". Your average teacher or mechanic uses "chatgpt" and "AI" interchangeably. There's a ton of value in becoming a verb, even if technically superior competitors exist.
Furthermore, the math changes at this level. No investor here is investing at a $300B valuation expecting a 10x. They're probably expecting a 3x or even a 2x. If they put in 300MM, they still end up with 600-900MM.
This isn't math on revenue, it's a bet. And if you think in terms of risk-adjusted bets, hoping the most mainstream AI company today might at least double your money in the next ten years in a red-hot AI market is not as wild as it seems.
Doubling your money in 10 years is < 7.2% per year compounded. With the risks involved here, I wouldn’t take that bet. There are safer assets that would return that much.
Just holding Nasdaq 100 ETFs is enough? People get impressed by "doubled my money" but forget that the most important question is - how much time did it take? Even a super safe asset with 3% returns will double your money... in 24 years.
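To sanity-check those numbers, here's a quick back-of-the-envelope sketch (the figures are just the ones from this thread; nothing else is assumed):

```python
import math

def cagr(multiple, years):
    """Annual compounded rate needed to reach `multiple` in `years`."""
    return multiple ** (1 / years) - 1

def years_to_double(rate):
    """Exact doubling time; the rule of 72 (72 / 3 = 24) approximates it."""
    return math.log(2) / math.log(1 + rate)

print(f"2x in 10 years needs {cagr(2, 10):.2%}/yr")           # ~7.18%/yr
print(f"3x in 10 years needs {cagr(3, 10):.2%}/yr")           # ~11.61%/yr
print(f"3%/yr doubles in {years_to_double(0.03):.1f} years")  # ~23.4 years
```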
The best way I’ve heard this described: AI (LLMs) is probably 90% of the way to human levels of reasoning. We can probably get to about 95% optimizing current technology.
Whether or not we can get to 100% using LLMs is an open research problem and far from guaranteed. If we can't, it's unclear if the technology will ever really proliferate the way people hope. That 5% makes a big difference in most non-niche use cases…
My take is that there are fundamental limitations in trying to pigeonhole reasoning into LLMs, which are essentially a very, very advanced autocomplete, and that's why those percentages won't jump much anytime soon.
I've always looked at it as: we're not making software that can think, we're (quite literally) demonstrating that vast categories of things don't need thought (for some quality level). The problem is, it's clearly not 100%, maybe it's 90-some percent, and either way we're only outsourcing the unimportant things that aren't definitional for a task.
This is very typical of naive automation, people assume that most of the work is X and by automating that we replace people, but the thing that's automated is almost never the real bottleneck. Pretty sure I saw an article here yesterday about how writing code is not the bottleneck in software development, and it holds everywhere.
The reason management thinks coding is the bottleneck is because they don't understand the first thing about code and have neither the ability nor the temperament to. Their whole professional career is about plausibly convincing other people through jargon, manipulation and popularity contests, which generally open up doors, solve problems and provoke seal-like clapping from all involved. The idea that the core problem in many systems and software is due to their constitutional inability to think rigorously and define requirements logically has never crossed their mind: it must be the magic spells those losers we bullied at school use and we are now tragically dependent on.
The discussion is completely useless without defining what thought is and then demonstrating that LLMs are not capable of it. And I doubt any definition you come up with will be workable.
>The best way I’ve heard this described: AI (LLMs) is probably 90% of the way to human levels of reasoning. We can probably get to about 95% optimizing current technology.
We don't know enough about how LLMs work or about how human reasoning works for this to be at all meaningful. These numbers quantify nothing but wishes and hype.
These percentage estimates of AI's proximity to "human reasoning" are misleading abstractions that mask fundamental qualitative differences in how LLMs and humans process information.
Odds are this is a dev shop with more than one person doing at least some things. It would explain how “he” was able to get so many jobs and maintain appearances. And a lot of startups don’t have the best screening processes to begin with (have a beer with a founder, check out their source code, you’re hired!). This is exactly the place where the structure and processes of larger companies can be a benefit. And even then, people work multiple jobs and get away with it. It’s become popular post COVID.
Given these two factors, I don’t think it would be out of the realm of possibility for something like this to happen.
Think so too. Also because different companies have different "reviews" of his work. Some say he was only good at interviews, others say the quality of his work was good. Must have been different people working.
Nintendo has never competed on graphics. They compete on having the most fun, accessible, entertaining games as possible. And say what you will about their business practices, they’ve probably done a better job of that than any other gaming company in history. As more devs bundle ever higher quality graphics with ever higher in-app purchases and pay to win schemes, Mario remains…Mario.
I seriously doubt many Switch users would bail on the system because of “fake” HDR. They probably don’t care about HDR at all. As long as Mario remains Mario, they’re happy.
Nintendo has ABSOLUTELY competed on graphics. The NES, SNES, N64, and GameCube were graphical powerhouses upon release, at or near the top of their generations in graphical performance. It was only with the Wii, when they chose to iterate on the GameCube design rather than pair a powerful multicore processor with a powerful shader-capable GPU like the PS3 and Xbox 360 did, that Nintendo started going all "but muh lateral thinking with withered technology" and claimed they never intended to compete in that space.
The GameCube was released 24 years ago. It's hardly fair to hold Nintendo accountable to a direction they haven't moved in for two and a half decades.
The visual difference between the N64 and GC was enough that it made sense to focus on upgraded graphics. When you play an N64 game, there's always the initial shock of "wow these graphics are a bit dated".
But you don't get that feeling when playing Melee, or Wind Waker, or many of the other artfully done GC games.
Essentially, somewhere around the GameCube era, graphics became good enough that the right art direction could vault a game into the "timeless graphics" category.
And so it makes sense that Nintendo said "let's stop chasing better graphics, and instead focus on art direction and gameplay".
I think the biggest issue with Nintendo games, until the Switch at least, has been the abysmal frame rates. We're not talking about dips just under 60fps; there are well-known examples of 15fps even in their best games, such as Tears of the Kingdom. I think they've finally fixed that issue with the Switch 2, but the horrible performance of the games has been a huge issue since forever.
And of course it does not matter, Nintendo still sells because it's Mawio (and I say this with all the love, I'm a huge Mario fan myself).
> the Wii [vs.] a powerful shader-capable GPU like the PS3 and Xbox 360
Outsold both the PS3 and Xbox 360 by 15M units though. Given the lower hardware costs of the Wii (I've seen estimates of ~$160, compared to $840 for the PS3 and $525 for the Xbox 360 - both higher than launch price btw!), I'd suggest Nintendo made the right choice.
I'm a Nintendo purchaser. I absolutely care about HDR. Given they specifically advertised HDR, I suspect they expect me to care, otherwise why make noise about it?
I don't think it's fair to say they _never_ competed on graphics. The Super Nintendo was comparable to the Genesis and surpassed it in some graphics areas. The Nintendo 64 was a 3D monster compared to other consoles at the time. On paper, the GameCube outperforms the PS2. It wasn't more powerful than the Xbox, but it wasn't a generation behind either.
It wasn't until the Wii that Nintendo stepped out of the hardware race. Somehow this has been retconned into Nintendo never focusing on hardware.
If they thought it would sell more systems, they'd compete. The Switch 2 is evidence that it doesn't matter.
The Wii U had terrible wifi, but I can't really say I hated it. There were some real classics on that console - Mario Kart 8 and Super Mario 3D World (although those were both eventually ported to the Switch). It played Wii games and supported all the original controllers, but output HDMI and had real power management. I still use mine to play Wii games.
I loved mine. It had real problems but great games.
It was just an easy at-hand example.
I also liked the Virtual Boy. But I bought it and a bunch of games from Blockbuster for $50 total when they gave up on it. So my value calibration was very different from those who paid retail.
Agreed; with the HDR marketing for the Switch 2 I expected a proper implementation. Sad that they cheaped out on it, but at least we got this great article out of it.
I am not a Nintendo fan (I do have a Switch 1), but this article is the first time I learned Nintendo added HDR graphics to the Switch 2, and this thread is the first time I learned HDR was actually being marketed. I genuinely doubt most Nintendo customers know about these features. It isn't like Nintendo takes pride in technically impressive graphics anyway.
Lol in what world was the US education system the envy of the world? We’ve been routinely clowned for overspending, poor outcomes, university tuition bloat, and everything else under the sun.
> Lol in what world was the US education system the envy of the world?
This one.
In most rankings of world universities, US schools dominate the top of the list. Same when it comes to winning Nobel prizes--8 of the top 10 are in the US (the other 2 are Oxford and Cambridge).
K-12 and undergraduate are passable. The thing that the US excels at is scale. No other country can match the US academic research environment. Thousands of well funded research institutions. Broad competitive funding opportunities at every school.
It was a golden age of knowledge that is being crushed by MAGA.
You're not wrong. These people are repeating the same inane bullshit about healthcare in the US being the best in the world. Which it arguably is if you're fucking loaded and don't have to care about cost. But it's the furthest thing from the truth for the average citizen, who has the same shit level of access to education as they do to healthcare. But since the rich can get the best of both, "our nation" has the best of both, regardless of how many fall through the cracks and suffer as a result. The poors should have just played the capitalism game better and they would have been fine, I guess. That's what freedom actually means, after all.
> But it's the furthest thing from the truth for the average citizen.
Usually "the best in the world" is not going to be accessible to average people. Top schools in the USA are open to the best and brightest people from around the world, average people can't compete with that.
There isn't a single founding engineer at a publicly traded company who regrets sacrificing the salary of a more stable company for 1% of a startup.
Becoming a founding engineer is a wealth-building, passion-for-your-work risk, not a pure salary optimization decision. HN never seems to understand this. If you’re optimizing for stable salary, go for the FAANG position. You’ll be comfortable, but you’ll most likely never be able to fly private, and you’ll have to be OK existing as a cog in a massive machine. Plenty of people are ok with this. These people should not be founding engineers.
Being a founding engineer is not wealth-building. Only 10% of startups succeed. Of these successes, only a small fraction (likely less than 1%) will be large enough to compensate for missed salary.
I would categorize it more as an extremely high-risk gamble. Actually your odds would be better taking your $3M in FAANG compensation over the same time period and making a 20:1 bet with a 2% chance of winning in Vegas. Probably double your chances of being able to fly private.
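To put rough numbers on that (a toy sketch; every input below is an illustrative assumption from this thread, not real data):

```python
forgone_comp = 3_000_000       # extra FAANG comp given up over the period
p_success = 0.10               # startups that succeed at all
p_big_enough = 0.01            # of those, exits big enough to beat the salary

p_startup_payday = p_success * p_big_enough
print(f"P(big startup payday) = {p_startup_payday:.2%}")   # 0.10%

# The Vegas alternative: stake the forgone comp on a 20:1 bet, 2% to win.
p_vegas, payout = 0.02, 20 * forgone_comp
print(f"P(Vegas win) = {p_vegas:.0%} for ${payout:,}")     # 2% for $60,000,000
```

On these made-up inputs the bet's 2% dwarfs the startup's 0.1%, which is the (hyperbolic) point being made.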
The math simply never works out in favor of lower salaries/publicly funded healthcare for high-paying US roles like software engineering. Even taking your (high, assuming a family of 4) premiums and a 0% employer contribution (very uncommon), that's an additional $43k a year, pretax. US SWE salaries are double, sometimes triple, international salaries, with absolute differences of $50k+.
This doesn’t even factor in general higher tax brackets abroad
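A minimal sketch of that arithmetic (the salary figures are hypothetical examples I'm plugging in; the $43k is the worst-case premium from above, and taxes are ignored on both sides):

```python
us_salary = 180_000    # hypothetical US SWE salary
intl_salary = 80_000   # hypothetical international salary at roughly half
premiums = 43_000      # worst-case family premiums, pretax, from above

us_net_of_healthcare = us_salary - premiums
print(f"US after premiums: ${us_net_of_healthcare:,}")                  # $137,000
print(f"International:     ${intl_salary:,}")                           # $80,000
print(f"US still ahead by: ${us_net_of_healthcare - intl_salary:,}/yr") # $57,000/yr
```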
The problem is you can't put a price on intangible things like being scared of a medical bill.
The premiums don't cover everything and if you (heaven forbid) get a curable form of cancer you're gonna be paying quite a bit out of pocket past the quoted $43k, and it's not just about the money, but dealing with the whole system as well. Getting claims denied and having to contest it, having a mystery bill hanging over your head. If you break an ankle, it might just be less hassle to buy the things you need off Amazon than deal with insurance.
Meanwhile you can complain about, e.g., long wait times in, say, Sweden, but the total deductible per year there is approximately $100, and beyond that you don't pay more. Getting a curable cancer just doesn't have the same implications there.
There are also other intangibles. Things like time off, infrastructure, social safety net, dating pool, culture, human rights, cost of living, inequality (good or bad). What's it worth in dollars to you for your daughter to be able to afford to buy a house someday when it also turns out she wants to be a painter when she grows up?
I'm not making any argument for or against a particular country. What I'm saying is: live where you want to live. Money isn't everything, so make sure to read past the numbers on a spreadsheet.
Never think that pure hard work leads to success, placement, privilege, or anything else. The farmhand in a field works harder than 99.9% of high paid tech employees. Hard work is important, sure, but it’s all about relative value contribution in the market, nothing else.
It’s easy to find another farmhand, it’s hard to find another ML engineer
I agree, and I've argued vocally against RTO from my privileged position because I have that value that makes me hard to replace.
I just want to make sure that we're not forgetting the people who weren't able to become high-level ML engineers for various social and economic reasons and are locked into 10 hour hard days in person.
A lot of people bullied the ever-loving hell out of today's ML engineers. A lot of those bullies are only just now experiencing the economic effects of their actions from 10-30 years ago.
Kids knew which kids they'd end up cleaning house for 20 years in the future, and they intuitively want to knock those elite kids down a peg while they still can.
Never forget the extreme resentment that those around nerds have for a nerd's mind. When this country stops treating nerds like shit and celebrating anti-intellectualism, I'll start being worried about the plight of the lowly security guard.
I'm sorry, it sounds like you had a really negative childhood, at least in regards to your relationships with some of your peers.
I would argue that you, as the intellectual elite, are only leaning into and confirming their bias, and that perhaps a good way to begin to help reform the anti-intellectualism of America would be to try to have compassion for the people far beneath you, like the security guard. Otherwise, those people could have even greater anti-intellectual mindsets.
It's also a little odd to me to be willing to persecute adults, or punish them through inaction, for their actions as children.
> I'm pretty sure in the grand scheme of things the Forbes family is still perfectly OK with the association
The writers, editors and other business partners who built their reputations by contributing to Forbes' previously good reputation are probably very much not OK with it.
The question was whether to name it after a person or not. Those people would be equally upset if it was named Bisclock, so them being upset at the current site is not relevant to the naming discussion.
It’ll be a billion dollar business just like Clash of Clans or Candy Crush are. A few whales will account for the vast majority of the spend and the majority of the population won’t even touch it.
It's still $1B worth of TAM; the question is how spread out that revenue stream would be.
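A toy simulation of that whale dynamic (every parameter here is made up purely to show the shape of the claim, not to estimate the actual market):

```python
import random

random.seed(0)                      # reproducible toy run
users = 1_000_000
payers = int(users * 0.02)          # assume only ~2% of users ever pay

# Pareto-distributed spend: most payers spend a little, a few spend a lot.
spends = sorted((random.paretovariate(1.2) * 5 for _ in range(payers)),
                reverse=True)

total = sum(spends)
top_1pct_share = sum(spends[: payers // 100]) / total
print(f"Total revenue: ${total:,.0f}")
print(f"Top 1% of payers contribute {top_1pct_share:.0%} of revenue")
```

With a tail that heavy, the top 1% of payers ends up carrying roughly half the revenue in a typical run, which is the Clash of Clans / Candy Crush shape described above.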
Quite a few OnlyFans "creators" are launching their own "companion bots" and even sex dolls right now.
I suspect the most profitable venture would be middleware for those creators, as you would likely need to ground the bot in reality somewhat to hook people in.
I'm actually surprised that OnlyFans isn't on top of this, as they are directly positioned to launch an adult GenAI service with royalties: they have access to the most relevant training data in terms of photos, videos and, most importantly, the chat messages between "creators" and "clients".