
Could the PS6 be the last console generation with a meaningful improvement in compute and graphics? Miniaturization keeps giving ever more diminishing returns with each shrink, and prices of electronics are going up (even sans tariffs), led by the increase in the price of making chips. Alternate techniques have slowly been introduced to offset the compute deficit: first with post-processing AA in the seventh generation, then with "temporal everything" hacks (including TAA) in the previous generation, and finally with minor usage of AI upscaling in the current generation and (projected) major usage of AI upscaling and frame-gen in the next one.

However, I'm pessimistic about how this can keep evolving. RT already takes a non-trivial amount of the transistor budget, and now those high-end AI solutions require another considerable chunk of it. If we are already reaching the limits of what non-generative AI upscaling and frame-gen can do, I can't see where a PS7 can go other than using generative AI to interpret a very crude low-detail frame and generate a highly detailed photorealistic scene from it, but that will, I think, require many times more transistor budget than what will likely ever be economically achievable for a whole PS7 system.

Will that be the end of consoles? Will everything move to the cloud, with a power-guzzling 4 kW machine taking care of rendering your PS7 game?

I really can only hope there is a breakthrough in miniaturization and we can go back to a pace of improvement that can actually give us a new generation of consoles (and computers) that makes the transition from an SNES to an N64 feel quaint.



My kids are playing Fortnite on a PS4. It works, they are happy, I feel the rendering is really good (but I am an old guy), and normally the only problem while playing is the stability of the Internet connection.

We also have a lot of fun playing board games, simple stuff design-wise, and card games; here, the gameplay is the fun factor. Yes, better hardware may bring more realism, more x or y, but my feeling is that the real driver, long term, is the quality of the gameplay. Like the quality of the storytelling in a good movie.


Every generation thinks the current generation of graphics won't be topped, but I think you have no idea what putting realtime generative models into the rendering pipeline will do for realism. We will finally get rid of the uncanny valley effect with facial rendering, and the results will almost certainly be mindblowing.


Every generation also thinks that the uncanny valley will be conquered in the next generation ;)

The quest for graphical realism in games has been running into a diminishing-returns wall for quite a while now (see hardware raytracing: all that effort for slightly better reflections and shadows, yay?). What we need most right now is more risk-taking in gameplay by big-budget games.


I think the inevitable near future is that games are not just upscaled by AI, but they are entirely AI generated in realtime. I’m not technical enough to know what this means for future console requirements, but I imagine if they just have to run the generative model, it’s… less intense than how current games are rendered for equivalent results.


I don't think you grasp how many GPUs are used to run world simulation models. It is vastly more compute-intensive than the currently dominant real-time paradigm of rendering rasterized triangles.


I don't think you grasp what I'm saying? I'm talking about next token prediction to generate video frames.


Yeah, which is pretty slow due to the need to autoregressively generate each image frame token in sequence. And leading diffusion models need to progressively denoise each frame. These are very expensive computationally. Generating the entire world using current techniques is incredibly expensive compared to rendering and rasterizing triangles, which is almost completely parallelized by comparison.


In a few years, it's possible that this will run locally in real time.


Okay you clearly know 20x more than me about this, so I cannot logically argue. But the vague hunch remains that this is the future of video games. Within 3 to 4 years.


I don't think that will ever happen, due to extreme hardware requirements. What I do see happening is that only an extremely low-fidelity scene is rendered, with only basic shapes, no or very few textures, etc., which is then filled in by AI. DLSS taken to the extreme: not just resolution but the whole stack.


I’m thinking more procedural generation of assets. If done efficiently enough, a game could generate its assets on the fly, and plan for future areas of exploration. It doesn’t have to be rerendered every time the player moves around. Just once, then it’s cached until it’s not needed anymore.


Even if you could generate real-time 4K 120hz gameplay that reacts to a player's input and the hardware doesn't cost a fortune, you would still need to deal with all the shortcomings of LLMs: hallucinations, limited context/history, prompt injection, no real grasp of logic / space / whatever the game is about.

Maybe if there's a fundamental leap in AI. It's still undecided if larger datasets and larger models will make these problems go away.


I actually think many of these are non-issues if devs take the most likely path, which is simply a hybrid approach.

You only need to apply generative AI to game assets that do not do well with the traditional triangle rasterization approach. Static objects are already at practically photorealistic level in Unreal Engine 5. You just need to apply enhancement techniques to things like faces. Using the traditionally rendered face as a prior for the generation would prevent hallucinations.


Realtime AI generated video games do exist, and they're as... "interesting" as you might think. Search YouTube for AI Minecraft


Good luck trying to tell a "cinematic story" with that approach, or even trying to prevent the player from getting stuck and not being able to finish the game, or even just to reproduce and fix problems, or even just to get consistent result when the player turns the head and then turns it back etc etc ;)

There's a reason why such "build your own story" games like Dwarf Fortress are fairly niche.


Yes, that's something I failed to address in my post. I myself have also been happier playing older or just simpler games than chasing the latest AAA with cutting edge graphics.

What I see as a problem, though, is that the incumbent console manufacturers, sans Nintendo, have been chasing graphical fidelity since time immemorial as the main attraction for new generations of consoles, and may have a hard time convincing buyers to purchase a new system once they can't eke out meaningful gains in this area. Maybe they will successfully transition into something more akin to what Nintendo does and focus on delivering killer apps, gimmicks and other innovations every new generation.

Or perhaps they will slowly fall into irrelevance, everything will converge on PC/Steam (I doubt Microsoft can pull off whatever plan they have for the future of Xbox), any half-decent computer will run any game for decades to come, and Gabe Newell will become the richest person in the world.


I can't figure out what Microsoft's strategy is with the ROG Ally X or whatever it's called. The branding is really confusing, even on just the devices. It gets even more confusing with Xbox on PC.

Are they planning on competing with Steam, and that's how they'll make money? I have a Steam Deck and I've zero interest in the ROG Ally; Windows 11 is bad enough on my work PC.


That's the Nintendo way. Avoiding the photorealism war altogether by making things intentionally sparse and cartoony. Then you can sell cheap hardware, make things portable etc.


Also, Nintendo's vision is essentially "mobile gaming".

Handheld devices like the Switch, Steam Deck, etc. are really the future. Phones count to some extent too, but gaming on a phone vs. gaming on a handheld is a world of difference.

Give it a few generations and traditional consoles will be obsolete. I mean, we literally have a lot of people enjoying indie games on the Steam Deck right now.


I.e., the uncanny valley.


Cartoony isn’t the uncanny valley. Uncanny valley is attempted photorealism that misses the mark.


I think games like Dishonored also managed to sidestep the uncanny valley. Not cartoony, just their own thing (actually more like an oil painting).


Unreal engine 1 looks good to me, so I am not a good judge.

I keep thinking there is going to be a video game crash soon, from oversaturation of samey games. But I'm probably wrong about that. I just think that's what Nintendo had right all along: if you commoditize games, they become worthless. We have endless choice of crap now.

In 1994, at age 13, I stopped playing games altogether. Endless 2D fighters and 2D platformers were just boring. It would take playing Wave Race and GoldenEye on the N64 to drag me back in. They were truly extraordinary and completely new experiences (me and my mates never liked Doom). Anyway, I don't see this kind of shift ever happening again. In fact, talking to my 13-year-old nephew confirms what I (probably wrongly) believe: he's complaining there's nothing new. He's bored of Fortnite and Minecraft and whatever else. It's like he's experiencing what I experienced, but I doubt a new generation of hardware will change anything.


> Unreal engine 1 looks good to me, so I am not a good judge.

But we did hit a point where the games were good enough, and better hardware just meant more polygons, better textures, and more lighting. The issue with Unreal Engine 1 (or maybe just games of that era) was that the worlds were too sparse.

> oversaturation of samey games

So that's the thing. Are we at a point where graphics and gameplay in 10-year-old games is good enough?


> Are we at a point where graphics and gameplay in 10-year-old games is good enough?

Personally, there are enough good games from the 32-bit generation of consoles, and before, to keep me from ever needing to buy a new console, and these are games from ~25 years ago. I can comfortably play them on a MiSTer (or whatever PC).


Yep, I have a MiSTer and a Steam Deck that's mainly used for emulators and old PC games. I'm still chasing old highs.


If the graphics aren’t adding to the fun and freshness of the game, nearly. Rewatching old movies over seeing new ones is already a trend. Video games are a ripe genre for this already.


Now I'm going to disagree with myself... there came a point where movies started innovating in storytelling rather than the technical aspects (think Panavision). Anything that was SFX-driven is different, but the stories movies tell and how they tell them changed, even if there are stories where the technology was already there.


I get so sad when I hear people say there’s no new games. There are so many great, innovative games being made today, more than any time in history. There are far more great games on Steam than anyone can play in a lifetime.

Even AAAs aim to create new levels of spectacle (much like blockbuster movies), even if they don’t innovate on gameplay.

The fatigue is real (and I think it’s particularly bad for this generation raised to spend all their gaming time inside the big 3), but there’s something for you out there, the problem is discoverability, not a lack of innovation.


This so much. Anyone that's saying games used to be better is either not looking or has lost their sight to nostalgia.


"if you commoditize games, they become worthless"

Hmm, wrong? If everyone can make a game, the floor rises, making the "industry standard" for a game really high.

While I agree with you that if everything is A, then A doesn't mean anything, the problem is that A doesn't vanish; it just moves to another, higher tier.


You probably have a point, and it's not something I believe completely. My main problem, I think, is that I have seen nothing new in games for at least 20 years.

Gunpei Yokoi said something similar here:

https://shmuplations.com/yokoi/

Yokoi: When I ask myself why things are like this today, I wonder if it isn’t because we’ve run out of ideas for games. Recent games take the same basic elements from older games, but slap on characters, improve the graphics and processing speed… basically, they make games through a process of ornamentation.


It sounds like even the PS6 isn't going to have a meaningful improvement, and that the PS5 was the last such console. The PS5 Pro was the first console focused on fake frame generation instead of real output resolution/frame rate improvements, and per the article the PS6 is continuing that trend.


What really matters is the cost.

In the past, a game console might launch at a high price point and then, after a few years, the price goes down and they can release a new console at a price close to where the last one started.

Blame crypto, AI, or COVID, but there has been no price drop for the PS5, and if there were going to be a PS6 that was really better, it would probably have to cost upwards of $1000, and you might as well get a PC. Sure, there are people who haven't tried Steam + an Xbox controller and think PC gaming is all unfun and sweaty, but they will come around.


Inflation. The PS5 standard at $499 in 2019 is $632 in 2025 money, which is about the same as the 1995 PS1 when adjusted for inflation: $299 (1995) is $635 (2025). https://www.usinflationcalculator.com/

Thus the PS6 should be around $699 at launch.
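For reference, the adjustment is just a CPI ratio. A minimal sketch in Python, using approximate annual-average CPI values (assumed figures, so the results land near but not exactly on the calculator's numbers):

  # Rough CPI-based adjustment; the CPI figures below are approximate annual averages.
  CPI = {1995: 152.4, 2019: 255.7, 2025: 322.0}  # assumed values

  def adjust(price, from_year, to_year):
      # Scale a price by the ratio of CPI levels between the two years.
      return price * CPI[to_year] / CPI[from_year]

  print(round(adjust(499, 2019, 2025)))  # PS5 launch price in 2025 dollars: ~628
  print(round(adjust(299, 1995, 2025)))  # PS1 launch price in 2025 dollars: ~632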


When I bought a PS1 around 1998-99, I paid $150, and I think that included a game or two. It's the later-in-the-lifecycle price that has really changed (didn't the last iteration of it get down to either $99 or $49?)


In 2002 I remember PS1 being sold for 99€ in Toys'r'Us in the Netherlands, next to a PS2 being sold for 199€.


The main issue with inflation is that my salary is not inflation adjusted. Thus the relative price increase adjusted by inflation might be zero but the relative price increase adjusted by my salary is not.


The phrase “cost of living increase” is used to refer to an annual salary increase designed to keep up with inflation.

Typically, you should be receiving at least a cost of living increase each year. This is standard practice at every company I've ever worked for, and it's common across the industry. A true raise is the amount above and beyond the annual cost of living increase.

If your company has been keeping your salary fixed during this time of inflation, then you are correct that you are losing earning power. I would strongly recommend you hit the job market if that’s the case because the rest of the world has moved on.

In some of the lower wage brackets (not us tech people) the increase in wages has actually outpaced inflation.


Thank you for your concern but I'm in Germany so the situation is a bit different and only very few companies have been able to keep up with inflation around here. I've seen at least a few adjustments but would not likely find a job that pays as well as mine does 100% remote. Making roughly 60K in Germany as a single in his 30s isn't exactly painful.


> but would not likely find a job that pays as well as mine does 100% remote.

That makes sense. The market for remote jobs has been shrinking while more people are competing for the smaller number of remote jobs. In office comes with a premium now and remote is a high competition space.


If you want to work 100% remote you could consider working for a US company as a consultant?


If a US company hires you in Germany, either you get hired by their German branch or by a personnel service provider based in Germany, and thus get paid "competitive" salaries typical of the country. Or you need some kind of setup where you are a freelancer or such and figure out the taxation and statutory insurances on your own, which I'm not familiar with (my freelance IT consultancy side business is rather simple because of its small scale and only domestic customers). That will probably work, and if you manage to get a senior Silicon Valley salary, you would probably come out ahead by a bit after taxes and insurances. But you would probably need good tax advisors to avoid stepping into expensive traps, and if you work more than 80% for a single employer, the tax administration will be on your case, because false self-employment is a possible method of tax evasion and has been outlawed.


If the client you're consulting for has no presence in Germany then it cannot possibly be false self-employment, surely?


False self-employment is judged solely by whether you spend 80% or more of your worktime working for a single employer.


Typically "Cost Of Living" increases target roughly inflation. They don't really keep up though, due to taxes.

If you've got a decent tech job in Canada, your marginal tax rate will be near 50%. Any new income is taxed at that rate, so that 3% COL raise is really a ~1.5% raise in your purchasing power, which typically makes you worse off.
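A worked example with made-up numbers (gross salary, average and marginal tax rates are all hypothetical), just to show the mechanism:

  # Hypothetical: 100k gross, 30% average tax, 50% marginal tax, 3% inflation and COL raise.
  gross, avg_tax, marginal_tax = 100_000, 0.30, 0.50
  inflation = col_raise = 0.03

  net_before = gross * (1 - avg_tax)                               # 70,000 take-home
  net_after = net_before + gross * col_raise * (1 - marginal_tax)  # the raise is taxed at the margin
  real_change = net_after / net_before / (1 + inflation) - 1

  print(f"nominal net raise: {net_after / net_before - 1:.2%}")    # ~2.14%
  print(f"real purchasing power change: {real_change:.2%}")        # ~-0.83%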

Until you're at a very comfortable salary, you're better off job hopping to boost your salary. I'm pretty sure all the financial people are well aware they're eroding their employees' salaries over time, and are hoping you are not aware.


Tax brackets also shift through time, though less frequently. So if you only get COL increases for 20 years you’re going to be reasonably close to the same after tax income barring significant changes to the tax code.

In the US, the bottom tax bracket was 10% under $19,750 in 2020, then 12% for the next bracket; in 2025 it's 10% under $23,850, then 12% for the next bracket. https://taxfoundation.org/data/all/federal/historical-income...


And here I am in the UK, where the brackets have been frozen until 2028 (if they don't invent some reason to freeze further).


Freezing tax brackets is a somewhat stealthy way to shift the tax burden to lower income households as it’s less obviously a tax increase.


Is your salary the same as 10 years ago?


Those in charge of fiat printing presses have run the largest theft of wealth in world history since 1971, when the dollar decoupled from gold.


Cash is a small fraction of overall US wealth, but inflation is a very useful tax on foreigners using USD thus subsidizing the US economy.


But now you’re assuming the PC isn’t also getting more expensive.

If a console designed to break even is $1,000 then surely an equivalent PC hardware designed to be profitable without software sales revenue will be more expensive.


You have to price it in equivalent grams of gold to see the real price trend.


Says who?

Economists use the consumer price index, which tracks a wide basket of goods and services.

Comparing console prices to a single good is nonsense; even if the good has 6,000 years of history, it's not a good comparison in a vacuum.


PCs do get cheaper over time though, except if there is another crypto boom, then we are all doomed.


"PCs do get cheaper over time though"

PCs get cheaper, but GPUs don't.


A GTX 1050 Ti was $139 nine years ago. Getting a Ryzen 8700G instead of an 8700F gives you more and costs you $39 today!


No way it costs $39 more; also, inflation exists.


As long as I need a mouse and keyboard to install updates or to install/start my games from GOG, it's still going to be decidedly unfun, but hopefully Windows' upcoming built-in controller support will make it less unfun.


Today you can just buy an Xbox controller and pair it with your Windows computer and it just works, and it's the same with the Mac.

You don't have to install any drivers or anything, and with Big Picture mode in Steam it's a lean-back experience where you can pick out your games and start one up without using anything other than the controller.


I like big picture mode in Steam, but.... controller support is spotty across Steam games, and personally I think you need both a Steam controller and a DualSense or Xbox controller. Steam also updates itself by default every time you launch, and you have to deal with Windows updates and other irritations. Oh, here's another update for .net, wonderful. And a useless new AI agent. SteamOS and Linux/Proton may be better in some ways, but there are still compatibility and configuration headaches. And half my Steam library doesn't even work on macOS, even games that used to work (not to mention the issues with intel vs. Apple Silicon, etc.)

The "it just works" factor and not having to mess with drivers is a huge advantage of consoles.

Apple TV could almost be a decent game system if Apple ever decided to ship a controller in the box and stopped breaking App Store games every year (though live service games rot on the shelf anyway.)


> [...]controller support is spotty[...]

DualSense 4 and 5 support under Linux is rock-solid, wired or wireless. That's to be expected since the drivers are maintained by Sony[1]. I have no idea about the XBox controller, but I know DS works perfectly with Steam/Proton out of the box, with the vanilla Linux kernel.

1. https://www.phoronix.com/news/Sony-HID-PlayStation-PS5


I have clarified that I meant controller support in the Steam games themselves. Some of them work well, some of them not so well. Others need to be configured. Others only work with a Steam controller. I wish everything worked well with DualSense, especially since I really like its haptics, but it's basically on the many (many) game developers to provide the same kind of controller support that is standard on consoles.


Thanks for the clarification. I've run into that a couple of times - Steam's button remapping helps sometimes, but you'd have to remember which controller button the on-screen symbol maps to.


Are you sure you have Steam configured right? Because with Steam Input you can get proper Xbox controller emulation in games that don't support PS4/5 and NS controllers. It's not perfect, but you should never be stuck if you don't have an Xbox or Steam controller when running games inside Steam.


Lots of games on Steam simply don't have great (or really any) controller support. Steam controller can sort of play some of them though since it can emulate mouse + keyboard etc.

My experience with Steam Input is... OK in some cases. It's annoying that it seems to break games that actually do support the DualSense properly (though full haptics only work in wired mode), like FFXIV.


But when I have to install drivers, or install a non-Steam game, I can't do that with the controller yet. That's what I need for PC gaming to work in my living room.


Or you just need a Steam controller. They're discontinued now but work well as a mouse+keyboard for desktop usage. It got squished into the Steam Deck so hopefully there's a new version in the future.


If you have steam, ps4/ps5 controllers also work fine.


They do not work fine in every game. That is why I think you need a Steam controller as well.


They do but they cost a lot more.


My ps5 came with one for “free”


Plus add your GOG games as non-Steam games to Steam and launch them from big screen mode as well.


Launch Steam in big screen mode. Done.


I'm aware of Big Picture Mode, and it doesn't address either of the scenarios I cited specifically because they can't be done from Big Picture Mode.


How many grams of gold has the PS cost at launch, using gold prices on launch day?


If I'm doing this right, then:

PS1: 24.32 grams at launch

PS5 (disc): 8.28 grams at launch

(So I guess that if what one uses for currency is a sock drawer full of gold, then consoles have become a lot cheaper in the past decades.)
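Roughly the arithmetic, in Python. The launch prices are the US ones and the launch-day gold prices are approximate assumptions, so treat the gram figures as ballpark:

  # Approximate gold prices near each console's US launch (USD per troy ounce) - assumed values.
  GRAMS_PER_TROY_OUNCE = 31.1035
  gold_usd_per_oz = {"PS1 (Sep 1995)": 383.0, "PS5 disc (Nov 2020)": 1875.0}
  launch_price_usd = {"PS1 (Sep 1995)": 299.0, "PS5 disc (Nov 2020)": 499.0}

  for console, price in launch_price_usd.items():
      usd_per_gram = gold_usd_per_oz[console] / GRAMS_PER_TROY_OUNCE
      print(f"{console}: {price / usd_per_gram:.1f} g of gold")  # ~24.3 g and ~8.3 g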


I'm still watching 720p movies and playing 720p video games.

Somewhere between 60 Hz and 240 Hz there's zero fundamental benefit. Same for resolution.

It isn't just that hardware progress is a sigmoid; our experiential value is too.

The reality is that exponential improvement is not a fundamental force. It's always going to find some limit.


On my projector (120 inch) the difference between 720p and 4k is night and day.


Screen size is pretty much irrelevant, as nobody is going to be watching it at nose-length distance to count the pixels. What matters is angular resolution: how much area does a pixel take up in your field of vision? Bigger screens are going to be further away, so they need the same resolution to provide the same quality as a smaller screen which is closer to the viewer.
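To put rough numbers on that, here is a sketch assuming a hypothetical 120-inch 16:9 screen viewed from 3 meters (around 60 pixels per degree is the commonly cited point where extra resolution stops being visible to 20/20 vision):

  import math

  # Hypothetical setup: 120-inch diagonal, 16:9 screen, viewed from 3 m.
  diagonal_m = 120 * 0.0254
  width_m = diagonal_m * 16 / math.hypot(16, 9)
  distance_m = 3.0

  horizontal_fov_deg = 2 * math.degrees(math.atan(width_m / 2 / distance_m))

  for name, horizontal_pixels in [("720p", 1280), ("4K", 3840)]:
      ppd = horizontal_pixels / horizontal_fov_deg
      print(f"{name}: ~{ppd:.0f} pixels per degree over a ~{horizontal_fov_deg:.0f} degree wide image")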

Resolution-wise, it depends a lot on the kind of content you are viewing as well. If you're looking at a locally-rendered UI filled with sharp lines, 720p is going to look horrible compared to 4k. But when it comes to video you've got to take bitrate into account as well. If anything, a 4k movie with a bitrate of 3Mbps is going to look worse than a 720p movie with a bitrate of 3Mbps.

I definitely prefer 4k over 720p as well, and there's a reason my desktop setup has had a 32" 4k monitor for ages. But beyond that? I might be able to be convinced to spend a few bucks extra for 6k or 8k if my current setup dies, but anything more would be a complete waste of money - at reasonable viewing distances there's absolutely zero visual difference.

We're not going to see 10,000 Hz 32k graphics in the future, simply because nobody will want to pay extra to upgrade from 7,500 Hz 16k graphics. Even the "hardcore gamers" don't hate money that much.


Does an increased pixel count make a bad movie better?


Does a decreased pixel count make a good movie better?


> I'm still watching 720p movies and playing 720p video games.

There's a noticeable and obvious improvement from 720p to 1080p to 4k (depending on the screen size). While there are diminishing gains, up to at least 1440p there's still a very noticeable difference.

> Somewhere between 60 Hz and 240 Hz there's zero fundamental benefit. Same for resolution.

Also not true. While the difference between 40fps and 60fps is more noticeable than, say, from 60 to 100fps, the difference is still noticeable enough. Add to that the reduction in latency, which is also very noticeable.


Is the difference between 100fps and 240fps noticeable though? The OP said "somewhere between 60hz and 240hz" and I agree.


Somewhere between a shoulder tap and a 30-06 there is a painful sensation.

The difference between 60 and 120hz is huge to me. I haven't had a lot of experience above 140.

Likewise, 4k is a huge difference in font rendering, and 1080p->1440p is big in gaming.


4K is big but certainly was not as big a leap forward as SD to HD


That would be a very obvious and immediately noticeable difference, but you need enough FPS rendered (natively, not with latency-increasing frame generation) and a display that can actually do 240hz without becoming a smeary mess.

If you have this combination and play with it for an hour, you would never want to go back to a locked 100hz game. It's rather annoying in that regard, actually.


Even with frame generation it is incredibly obvious. The latency for sure is a downside, but 100 FPS vs 240 FPS is extremely evident to the human visual system.


> Is the difference between 100fps and 240fps noticeable though?

Yes.

> The OP said "somewhere between 60hz and 240hz" and I agree.

Plenty of us don't. A 240hz OLED still provides a significantly blurrier image in motion than my 20+ year old CRT.


Surely that 20+ year old CRT didn't run at more than 240Hz? Something other than framerate is at play here.


> Surely that 20+ year old CRT didn't run at more than 240Hz?

It didn't have to.

> Something other than framerate is at play here.

Yes: sample-and-hold motion blur, inherent to all modern display types commonly in use, for the most part.

Even at 240hz, modern displays can not match CRT for motion quality.

https://blurbusters.com/faq/oled-motion-blur/


Lower latency between your input and its results appearing on the screen is exactly what a fundamental benefit is.

The resolution part is even sillier - you literally get more information per frame at higher resolutions.

Yes, the law of diminishing returns still applies, but 720p@60hz is way below the optimum. I'd estimate 4k@120hz as the low end of optimal maybe? There's some variance w.r.t the application, a first person game is going to have different requirements from a movie, but either way 720p ain't it.


Really strange that a huge pile of hacks, maths, and more hacks became the standard of "true" frames.


Consoles are the perfect platform for a proper pure ray tracing revolution.

Ray tracing is the obvious path towards perfect photorealistic graphics. The problem is that ray tracing is really expensive, and you can't stuff enough ray tracing hardware into a GPU which can also run traditional graphics for older games. This means games are forced to take a hybrid approach, with ray tracing used to augment traditional graphics.

However, full-scene ray tracing has essentially a fixed cost: the hardware needed depends primarily on the resolution and framerate, not the complexity of the scene. Rendering a million photorealistic objects is not much more compute-intensive than rendering a hundred cartoon objects, and without all the complicated tricks needed to fake things in a traditional pipeline any indie dev could make games with AAA graphics. And if you have the hardware for proper full-scene raytracing, you no longer need the whole AI upscaling and framegen to fake it...
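A back-of-the-envelope sketch of the "essentially fixed cost" claim: the ray count is set by resolution, samples and bounces, while scene complexity only enters through a roughly logarithmic acceleration-structure lookup. The numbers below are illustrative, not a real performance model:

  import math

  def estimated_rays(width, height, samples_per_pixel, bounces):
      # Rays per frame, driven purely by resolution and sampling settings.
      return width * height * samples_per_pixel * bounces

  def estimated_node_visits(num_rays, num_triangles):
      # Assumes a reasonably balanced BVH: ~log2(n) node visits per ray on average.
      return num_rays * math.log2(max(num_triangles, 2))

  rays = estimated_rays(3840, 2160, samples_per_pixel=1, bounces=2)
  print(f"{estimated_node_visits(rays, 100):.2e}")        # ~a hundred "cartoon" objects
  print(f"{estimated_node_visits(rays, 1_000_000):.2e}")  # a million objects: only ~3x the work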

Ideally you'd want a GPU which is 100% focused on ray tracing and ditches the entire legacy triangle pipeline - but that's a very hard sell in the PC market. Consoles don't have that problem, because not providing perfect backwards compatibility for 20+ years of games isn't a dealbreaker there.


> Rendering a million photorealistic objects is not much more compute-intensive than rendering a hundred cartoon objects

Increasing the object count by that many orders of magnitude is definitely much more compute intensive.


Only if you have more than 1 bounce. Otherwise it’s the same. You’ll cast a ray and get a result.


No, searching the set of triangles in the scene to find an intersection takes non-constant time.


I believe with an existing BVH acceleration structure, the average case time complexity is O(log n) for n triangles. So not constant, but logarithmic. Though for animated geometry the BVH needs to be rebuilt for each frame, which might be significantly more expensive depending on the time complexity of BVH builds.
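A minimal sketch of that traversal, to make the log-n intuition concrete: with a reasonably balanced BVH over mostly non-overlapping boxes, a ray descends past only a handful of nodes out of n triangles (this ignores the leaf intersection math and the rebuild/refit cost for animated geometry mentioned above):

  from dataclasses import dataclass
  from typing import Optional, Tuple

  @dataclass
  class BVHNode:
      box_min: Tuple[float, float, float]
      box_max: Tuple[float, float, float]
      left: Optional["BVHNode"] = None
      right: Optional["BVHNode"] = None
      triangles: tuple = ()  # only leaf nodes carry geometry

  def ray_hits_box(origin, inv_dir, box_min, box_max):
      # Standard slab test; inv_dir holds the per-axis reciprocals of the ray direction.
      t_near, t_far = 0.0, float("inf")
      for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
          t0, t1 = (lo - o) * inv, (hi - o) * inv
          t_near, t_far = max(t_near, min(t0, t1)), min(t_far, max(t0, t1))
      return t_near <= t_far

  def closest_hit(node, origin, inv_dir, intersect_triangle):
      # intersect_triangle returns a hit distance t (or None); we keep the nearest.
      # Only subtrees whose boxes the ray enters are visited, which is where the
      # average O(log n) behaviour comes from.
      if node is None or not ray_hits_box(origin, inv_dir, node.box_min, node.box_max):
          return None
      if node.triangles:
          hits = (intersect_triangle(origin, inv_dir, tri) for tri in node.triangles)
          return min((h for h in hits if h is not None), default=None)
      child_hits = (closest_hit(c, origin, inv_dir, intersect_triangle) for c in (node.left, node.right))
      return min((h for h in child_hits if h is not None), default=None)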


Yeah, this search is O(log n) and can be hardware-accelerated, but there's no O(1) way to do this.


It's also only O(log n) if the scene is static. Which is what is often missed in the quest for more photo-realistic graphics - it doesn't mean anything if what you are rendering only looks realistic in still frames but doesn't behave realistically if you try to interact with it.


What if we keep the number of triangles constant per pixel, independently of scene complexity, through something like virtualized geometry? Though this would then require rebuilding part of the BVH each frame, even for static scenes, which is probably not a constant operation.


For static geometry we could but for animated geometry or dynamic geometry it would have to be calculated during a mesh shader step.


> Rendering a million photorealistic objects is not much more compute-intensive than rendering a hundred cartoon objects

Surely ray/triangle intersection tests, brdf evaluation, acceleration structure rebuilds (when things move/animate) all would cost more in your photorealistic scenario than the cartoon scenario?


Matrix multiplication is all that it is, and GPUs are really good at doing that in parallel already.


So I guess there is no need to change any of the hardware, then? I think it might be more complicated than waving your hands around linear algebra.


Yes there is, to improve ray tracing…


Combining both ray tracing (including path tracing, which is a form of ray tracing) and rasterization is the most effective approach. The way it is currently done is that primary visibility is calculated using triangle rasterization, which produces perfectly sharp and noise free textures, and then the ray traced lighting (slightly blurry due to low sample count and denoising) is layered on top.

> However, full-scene ray tracing has essentially a fixed cost: the hardware needed depends primarily on the resolution and framerate, not the complexity of the scene.

That's also true for modern rasterization with virtual geometry. Virtual geometry keeps the number of rendered triangles roughly proportional to the screen resolution, not to the scene complexity. Moreover, virtual textures also keep the amount of texture detail in memory roughly proportional to the screen resolution.

The real advantage of modern ray tracing (ReSTIR path tracing) is that it is independent of the number of light sources in the scene.


>Consoles don't have that problem, because not providing perfect backwards compatibility for 20+ years of games isn't a dealbreaker there.

I'm not sure that's actually true for Sony. You can currently play several generations of games on the PS5, and I think losing that on PS6 would be a big deal to a lot of people.


Maybe they can pull the old console trick of just including a copy of the old hardware inside the new console.

However I suspect that this isn't as cost and space effective as it used to be.


So create a system with an RT-only GPU plus a legacy one, for the best of both worlds?


I think there are a few factors that are likely to slow the pace down quite a bit soon:

1. We realistically aren't going much past 4k anytime soon. Even the few 8k sets on the market are sort of tech demos, because there isn't really the content for it. Maybe 120/240/etc FPS will be a thing, but that's really a linear growth, not exponential, and it has a pretty short path to go (will 500Hz displays ever become a big seller? 1kHz?)

2. The triple-A games market itself is strained-- too many big-money flops, and those are the ones that have historically substituted more triangles for better storytelling.

So you're going to reach a point where the hardware isn't really limiting the designers' visions anymore.

GenAI seems like a questionable approach for game rendering, both because of inefficiency and non-repeatability. If the AI renders the same scene slightly differently on two machines, it could cause bugs or unfair competitive edges. At most, we'd see AI during the development process to build assets, and that doesn't require a bigger local GPU.


After raytracing, the next obvious massive improvement would be path tracing.

And while consoles usually lag behind the latest available graphics, I'd expect raytracing and even path tracing to become available to console graphics eventually.

One advantage of consoles is that they're a fixed hardware target, so games can test on the exact hardware and know exactly what performance they'll get, and whether they consider that performance an acceptable experience.


There is no real difference between "Ray Tracing" and "Path Tracing", or rather: the former is just the operation of intersecting a ray with a scene (not a rendering technique), while the latter is a way to solve the integral that approximates the rendering equation (hence, it could be considered a rendering technique). Sure, you can go back to the terminology used by Kajiya in his earlier works etc., but that was only an "academic terminology game" which is worthless today. Today, the former has been accelerated by HW for around a decade (I am counting the PowerVR Wizard). The latter is how most non-realtime rendering renders frames.

You can not have "Path Tracing" in games, not according to what it is. And it also probably does not make sense, because the goal of real-time rendering is not to render the perfect frame at any time, but to produce the best reactive, coherent sequence of frames possible in response to simulation and player inputs. This being said, HW ray tracing is still somehow game-changing, because it shapes a SIMT HW to make it good at inherently divergent computation (e.g. traversing a graph of nodes representing a scene): following this direction, many more things will be unlocked in real-time simulation and rendering. But not 6k samples unidirectionally path-traced per pixel in a game.


> You can not have "Path Tracing" in games

It seems like you're deliberately ignoring the terminology currently widely used in the gaming industry.

https://www.rockpapershotgun.com/should-you-bother-with-path...

https://gamingbolt.com/10-games-that-make-the-best-use-of-pa...

(And any number of other sources, those are just the first two I found.)

If you have some issue with that terminology, by all means raise that issue, but "You can not have" is just factually incorrect here.


> If you have some issue with that terminology, by all means raise that issue, but "You can not have" is just factually incorrect here.

It is not incorrect because, at least for now, all those "path tracing" modes do not compute multiple "paths" (each made of multiple rays cast) per pixel; they rasterize primary rays and then either fire 1 (on rare occasions, 2) rays for such a pixel, or, more often, read a value from a special local cache called a "reservoir" or from a radiance cache - which is sometimes a neural network. All of this goes against the definition your first article itself gives of path tracing :D

I don't have problems with many people calling it "path tracing" in the same way I don't have issues with many (more) people calling Chrome "Google" or any browser "the internet", but if one wants to talk about future trends in computing (or is posting on hacker news!) I believe it's better to indicate a browser as a browser, Google as a search engine, and Path Tracing as what it is.


You should state the definitions of terms you are using (which you still haven't done) in cases where they are disputed.


Not all games need horsepower. We're now past the point of good enough for a ton of them. Sure, tentpole attractions will warrant more and more, but we're turning back to mechanics, input methods, gameplay, storytelling. If you play 'old' games now, they're perfectly playable. Just like older movies are perfectly watchable. Not saying you have to play those (you should), but there's not much of a leap needed to keep such ideas going strong and fresh.


This is my take as well. I haven’t felt that graphics improvement has “wowed” me since the PS3 era honestly.

I’m a huge fan of Final Fantasy games. Every mainline game (those with just a number; excluding 11 and 14 which are MMOs) pushes the graphical limits of the platforms at the time. The jump from 6 to 7 (from SNES to PS1); from 9 to 10 (PS1 to 2); and from 12 to 13 (PS3/X360) were all mind blowing. 15 (PS4) and 16 (PS5) were also major improvements in graphics quality, but the “oh wow” generational gap is gone.

And then I look at the gameplay of these games, and it’s generally regarded as going in the opposite direction- it’s all subjective of course but 10 is generally regarded as the last “amazing” overall game, with opinions dropping off from there.

We’ve now reached the point where an engaging game with good mechanics is way more important than graphics: case in point being Nintendo Switch, which is cheaper and has much worse hardware, but competes with the PS5 and massively outsells Xbox by huge margins, because the games are fun.


FF12 and FF13 are terrific games that have stood the test of time.

And don't forget the series of MMOs:

FF11 merged Final Fantasy with old-school MMOs, notably Everquest, to great success.

FF14 2.0 was literally A Realm Reborn from the ashes of the failed 1.0, and was followed by the exceptional Heavensward expansion.

FF14 Shadowbringers was and is considered great.


> non-generative AI upscaling

I know this isn't an original idea, but I wonder if this will be the trick for a step-level improvement in visuals. Use traditional 3D models for the broad strokes and generative AI for texture and lighting details. We're at diminishing returns for adding polygons and better lighting, and generative AI seems to be better at improving from there - when it doesn't have to get the finger count right.


I'd hesitate to call the temporal hacks progress. I disable them every time.


There's likely still room to go super wide with CPU cores and much more RAM, but everyone is talking about neural nets, so that's what the press release is about.


Gaming using weird tech is not a hardware manufacturer or availability issue. It is a game studio leadership problem.

Even in the latest versions of unreal and unity you will find the classic tools. They just won't be advertised and the engine vendor might even frown upon them during a tech demo to make their fancy new temporal slop solution seem superior.

The trick is to not get taken for a ride by the tools vendors. Real-time lights, "free" anti-aliasing, and sub-pixel triangles are the forbidden fruits of game dev. It's really easy to get caught up in the devil's bargain of trading unlimited art detail for unknowns on the end customer's hardware.


Doubtful; they say this with every generation of consoles and even gaming PC systems. When its popularity decreases, profits decrease, and then maybe it will be "the last generation".


They can't move everything to the cloud because of latency.


It's not just technology that's eating away at console sales, it's also the fact that 1) nearly everything is available on PC these days (save Nintendo with its massive IP), 2) mobile gaming, and 3) there's a limitless amount of retro games and hacks or mods of retro games to play and dedicated retro handhelds are a rapidly growing market. Nothing will ever come close to PS2 level sales again. Will be interesting to see how the video game industry evolves over the next decade or two. I suspect subscriptions (sigh) will start to make up for lost console sales.


"Nothing will ever come close to PS2 level sales again."

The PS2 sales number is iffy, to say the least. Also, PS2 sales have been "dethroned" a few times, in scare quotes: whenever Nintendo's sales crept up, Sony announced a "few million" more sales, even though they had already stopped producing the console years earlier.


> Nothing will ever come close to PS2 level sales again.

The Switch literally has, and according to projections the Switch 1 will in fact have outsold the PS2 globally by the end of the year.


Welcome to the Age of the Plateau. It will change everything we know. Invest accordingly.


And what do you think one should invest in for such times?


Hard assets and things with finite supply. Anything real. Gold, bitcoin, small cap value stocks, commodities, treasuries (if you think the government won't fail).

https://portfoliocharts.com/2021/12/16/three-secret-ingredie...


> Anything real

> bitcoin

:D


Bitcoin hate is real here, at least.


It was in the context of the argument. What's more real about Bitcoin than bonds, options, or stock?

It felt like "real" referred to the physical: gold, food, energy, minerals, real physical things that people have a constant demand for.

It was funny to spot the odd entry, where bitcoins are fully digital and have a very socially dependent agreement and entire digital infrastructure required to keep them functioning.

Gold has a bit of a social agreement as well, but its social agreement dates back thousands of years, and even if that broke away it would still have value as a physical material.


But can I invest in bitcoin hate?



If the Internet goes away, Bitcoin goes away. That's a real threat in a bunch of conceivable societal failure scenarios. If you want something real, you want something that will survive the loss of the internet. Admittedly, what you probably want most in those scenarios is diesel, vehicles that run on diesel, and salt. But a pile of gold still could be traded for some of those.


Everyone always talk like societal collapse is global. Take a small pile of gold and use it to buy a plane ticket somewhere stable with internet and your bitcoin will be there waiting for you.


But worth less, because the demand from Internet starved countries went to zero.


It only requires a code change. Sure, you need 50% of the network to run the new code as well, but that's already starting to heavily concentrate, and it only takes that majority to agree that more needs to be printed.

But in any case, if OP meant deflationary, I don't think "real" is a good synonym.


Is your argument that not being able to "get some easily" makes a thing more real?


Moats. Government relationships. Simple and unsexy. Hard assets.


Beyond the PS6, the answer is very clearly graphics generated in real time via a transformer model.

I’d be absolutely shocked if in 10 years, all AAA games aren’t being rendered by a transformer. Google’s veo 3 is already extremely impressive. No way games will be rendered through traditional shaders in 2035.


The future of gaming is the Grid-Independent Post-Silicon Chemo-Neural Convergence, the user will be injected with drugs designed by AI based on a loose prompt (AI generated as well, because humans have long lost the ability to formulate their intent) of the gameplay trip they must induce.

Now that will be peak power efficiency and a real solution for the world where all electricity and silicon are hogged by AI farms.

/s or not, you decide.


Stanislaw Lem’s “The Futurological Congress” predicted this in 1971.


FYI it's got an amazing film adaptation by Ari Folman in his 2013 "The Congress". The most emotionally striking film I've ever watched.


There will be a war between these biogamers and smart consoles that can play themselves.


It's all about neural spores

https://youtu.be/NyvD_IC9QNw


Is this before or after fully autonomous cars and agi? Both should be there in two years right?

10 years ago people were predicting VR would be everywhere, it flopped hard.


I've been riding Waymo for years in San Francisco.

10 years ago, people were predicting that deep learning will change everything. And it did.

Why just use one example (VR) and apply it to everything? Even then, a good portion of people did not think VR would be everywhere by now.


> I've been riding Waymo for years in San Francisco.

Fully autonomous in select defined cities owned by big corps is probably a reasonable expectation.

Fully autonomous in the hands of an owner applied to all driving conditions and working reliably is likely still a distant goal.


Baidu Apollo Go completes millions of rides a year as well, with expansions into Europe and the Middle East. In China they've been active for a long time - during COVID they were making autonomous deliveries.

It is odd how many people don't realize how developed self-driving taxis are.


The future isn't evenly distributed.

I think most people will consider self driving tech to be a thing when it's as widespread as TVs were, 20 years after their introduction.


TV tech was ready; it just wasn't cheap enough. Self-driving is not widespread, though, not because of a cost issue, but because it is still not quite good enough for universal usage. Give it another 10 years and I think we should be close, especially in places like Japan.


And outside of a few major cities with relatively good weather, self driving is non existent


It did flop, but a hefty loaf of money was still sliced off in the process.

Those with the real vested interest don't care if it flops, while zealous worshippers of the next brand-new disruptive tech are just a free vehicle to that end.


VR is great industrial tech and bad consumer tech. It’s too isolating for consumers.


Just because it's possible doesn't mean it is clearly the answer. Is a transformer model truly likely to require less compute than current methods? We can't even run models like Veo 3 on consumer hardware at their current level of quality.


I’d imagine AAA games will evolve to hundreds of billions of polygons and full path tracing. There is no realistic way to compute a scene like that on consumer hardware.

The answer is clearly transformer based.


Transformer maybe not, but neural net yes. This is profoundly uncomfortable for a lot of people, but it's the very clear direction.

The other major success of recent years not discussed much so far is gaussian splats, which tear up the established production pipeline again.


Neural net is already being used via DLSS. Neural rendering is the next step. And finally, a full transformer based rendering pipeline. My guess anyway.


How much money are you willing to bet?


All my money.


Even in a future with generative UIs, those UIs will be composed from pre-created primitives just because it's faster and more consistent, there's literally no reason to re-create primitives every time.


Go short Nintendo and Sony today. I'm the last one who's going to let my technical acumen get in the way of your mistake.


Why would gaming rendering using transformers lead to one shorting Nintendo and Sony?


That's just not efficient. AAA games will use AI to pre-render assets, and use AI shaders to make stuff pop more, but on the fly asset generation will still be slow and produce low quality compared to offline asset generation. We might have a ShadCN style asset library that people use AI to tweak to produce "realtime" assets, but there will always be an offline core of templates at the very least.


It is likely a hell of a lot more efficient than path tracing a full ultra realistic game with billions of polygons.


This _might_ be true, but it's utterly absurd to claim this is a certainty.

The images rendered in a game need to accurately represent a very complex world state. Do we have any examples of Transformer based models doing something in this category? Can they do it in real-time?

I could absolutely see something like rendering a simplified and stylised version and getting Transformers to fill in details. That's kind of a direct evolution from the upscaling approach described here, but end to end rendering from game state is far less obvious.


Doesn’t this imply that a transformer or NN could fill in details more efficiently than traditional techniques?

I’m really curious why this would be preferable for a AAA studio game outside of potential cost savings. Also imagine it’d come at the cost of deterministic output / consistency in visuals.


> I could absolutely see something like rendering a simplified and stylised version and getting Transformers to fill in details. That's kind of a direct evolution from the upscaling approach described here, but end to end rendering from game state is far less obvious.

Sure. This could be a variation. You do a quick render that any GPU from 2025 can do and then make the frame hyper realistic through a transformer model. It's basically saying the same thing.

The main rendering would be done by the transformer.

Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games. I don't see why this wouldn't be the default rendering mode for AAA games in 2035. It's insanity to think it won't be.

Veo3: https://aistudio.google.com/models/veo-3


> Google Veo 3 is generating pixels far more realistic than AAA games

That’s because games are "realtime", meaning with a tight frame-time budget. AI models are not (and are even running on multiple cards each costing 6 figures).


I mistook Veo 3 for the Genie model. Genie is the Google model I should have referenced. It is real time.


I look forward to playing games where the map and scenery geometry just mutates as I move around, and can’t complete levels or objectives because the model forgot about their existence while it was adding a door midair at 0.25 fps.

I’m so excited to be charged AAA prices for said wonderful experience.


That already exists for Minecraft


Genie 3 is still low quality and low resolution, nowhere near current AAA graphics, while requiring hardware that exceeds current AAA graphics requirements.

For Genie to exceed AAA graphics in 2035 at 60 to 120 fps would require an efficiency breakthrough of at least an order of magnitude, and much more than that for it to be cost effective.

AAA titles take at least 3-4 years to make, which means AAA studios would need to start working on them in 2031. The possibility of all AAA games being made with such a model in 2031 is practically zero.


The point is that we can generate ultra realistic graphics.

We are talking 10 years from now.


We can already do that today given sufficient computing resources with current AI; doing it in real time is different. And I thought you mentioned earlier that all AAA games will be using it by 2035?

And unfortunately 10 years isn't that long a time in many industries. We are barely talking about 3 cycles.


Genie is real time.


Well you missed the point. You could call it prompt adherence. I need veo to generate the next frame in a few milliseconds, and correctly represent the position of all the cars in the scene (reacting to player input) reliably to very high accuracy.

You conflate the challenge of generating realistic pixels with the challenge of generating realistic pixels that represent a highly detailed world state.

So I don't think your argument is convincing or complete.


> Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games.

Traditional rendering techniques can also easily exceed the quality of AAA games if you don't impose strict time or latency constraints on them. Wake me up when a version of Veo is generating HD frames in less than 16 milliseconds, on consumer hardware, without batching, and then we can talk about whether that inevitably much smaller model is good enough to be a competitive game renderer.


Genie 3 is already a frontier approach to interactive generative world views no?

It will be AI all the way down soon. The models internal world view could be multiple passes and multi layer with different strategies... In any case; safe to say more AI will be involved in more places ;)


I am super intrigued by such world models. But at the same time it's important to understand where they are at. They are celebrating the achievement of keeping the world mostly consistent for 60 seconds, and this is 720p at 24fps.

I think it's reasonable to assume we won't see this tech replace game engines without significant further breakthroughs...

For LLMs agentic workflows ended up being a big breakthrough to make them usable. Maybe these World Models will interact with a sort of game engine directly somehow to get the required consistency. But it's not evident that you can just scale your way from "visual memory extending up to one minute ago" to 70+ hour game experiences.


For future readers, yes. It’s Google Genie.


Be prepared to be shocked. This industry moves extremely slow.


They'll have to move fast when a small team can make a graphically richer game than a big and slow AAA studio.

Competition works wonders.



