coder543's comments | Hacker News

The music industry needs to more widely use some kind of equivalent to the ISBN that the book industry uses. A simple "ISMN" list per playlist/library would be all that would be needed to move between services when both apps have the same songs.

One could also imagine a standardized ismn://<number> URL format that could open in your preferred music app, and this could work even without a streaming service if you already own that song in your personal music collection.

ISMN seems to exist: https://www.loc.gov/ismn/about.html

But, I've never actually seen it used for recordings; it seems to be focused solely on music notation. So, it would be nice to have some kind of recording-focused identifier for keeping track of specific performances between services.


This exists and is called ISRC. This metadata is embedded in a subchannel on CDs.

https://en.wikipedia.org/wiki/International_Standard_Recordi...
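
The tag often survives into downloaded files, too. A minimal sketch of reading it with the mutagen library (assuming it's installed and the files were actually tagged with an ISRC; plenty aren't):

  # Sketch: print the ISRC stored in local files' tags, assuming the
  # mutagen library is installed and the files carry the tag at all.
  import sys
  import mutagen

  for path in sys.argv[1:]:
      audio = mutagen.File(path, easy=True)   # EasyID3 maps "isrc" to the TSRC frame
      isrc = audio.get("isrc") if audio else None
      print(f"{path}: {isrc[0] if isrc else 'no ISRC tag found'}")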


Cool, I didn’t know that. I wish it were actually used by the various music apps.


It's a pity that compression is likely to provide subtly different data under different network conditions. If everything were lossless we could just use hashes as identifiers and proceed without the participation of the apps.

After all, they have no incentive to make this easy for us.
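
As a sketch of the idea (assuming lossless sources and ffmpeg on PATH): hash the decoded PCM rather than the file bytes, so tag and container differences don't change the identifier.

  # Sketch: content-address a lossless track by hashing its decoded PCM,
  # so tag/container differences don't change the ID. Assumes ffmpeg is on
  # PATH and that every service's lossless copy decodes to identical samples.
  import hashlib, subprocess, sys

  def pcm_hash(path: str) -> str:
      # Decode to a canonical raw format: 16-bit little-endian, stereo, 44.1 kHz.
      pcm = subprocess.run(
          ["ffmpeg", "-v", "error", "-i", path,
           "-f", "s16le", "-ac", "2", "-ar", "44100", "-"],
          check=True, capture_output=True,
      ).stdout
      return hashlib.sha256(pcm).hexdigest()

  for p in sys.argv[1:]:
      print(pcm_hash(p), p)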


This is actually a beautiful idea. It'd definitely be possible for music purchased through bandcamp since lossless is generally available for most releases. Commenting to bookmark this.


MusicBrainz has also been making a list of unique identifiers for music tracks: https://musicbrainz.org/
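
A quick sketch of resolving a track to a MusicBrainz recording MBID, assuming the musicbrainzngs package (the artist and title below are just placeholders):

  # Sketch: look up a MusicBrainz recording MBID (the portable identifier)
  # by artist and title, assuming the musicbrainzngs package is installed.
  import musicbrainzngs

  musicbrainzngs.set_useragent("playlist-portability-demo", "0.1", "you@example.com")

  result = musicbrainzngs.search_recordings(
      artist="Daft Punk", recording="Harder, Better, Faster, Stronger", limit=1
  )
  for rec in result["recording-list"]:
      print(rec["id"], "-", rec["title"])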


They’re called magnet links.


Yeah, but the next step is to make the identifier something like a hash so the media can be content addressable.

Wait, we can't have that. It's too convenient


Content addressable doesn't really work here... different apps may have the same recordings encoded in different formats and bitrates, but they are still the same recording. Unless you meant "content addressable" in the sense of a uniquely assigned identifier like I was already talking about, and not a computed identifier from the raw bytes of the file like a hash.


This sounds like an acoustic fingerprint, such as AcoustID[0]. I think AcoustIDs and XSPF[1] would be a good combination for shared playlists. It's a shame that development stopped on the Tomahawk music player[2], it would have been an ideal platform for shared playlists like this.

[0] https://musicbrainz.org/doc/AcoustID

[1] https://en.wikipedia.org/wiki/XML_Shareable_Playlist_Format

[2] https://github.com/tomahawk-player/tomahawk
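
A rough sketch of the fingerprint half of that combination, assuming the pyacoustid package (which wraps chromaprint/fpcalc) and a free AcoustID API key:

  # Sketch: fingerprint a local file and ask the AcoustID service which
  # MusicBrainz recording it is. Assumes the pyacoustid package, the
  # chromaprint/fpcalc tool it wraps, and an AcoustID API key.
  import acoustid

  API_KEY = "your-acoustid-api-key"  # placeholder

  for score, recording_id, title, artist in acoustid.match(API_KEY, "track.flac"):
      print(f"{score:.2f}  {artist} - {title}  (MusicBrainz recording {recording_id})")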


Now we just need to associate payment/attribution related metadata with those identifiers


I thought the OEMs liked the idea of being able to demand high profit margins on RAM upgrades at checkout, which is especially easy to justify when the RAM is on-package with the CPU. That way no one can claim the OEM was the one choosing to be anti-consumer by soldering the RAM to the motherboard, and they can just blame Intel.


OEMs like it when it's them buying the cheap RAM chips and getting the juicy profits from huge mark-ups, not so much when they have to split the pie with Intel. As long as Intel cannot offer integrated RAM at price equivalent to external RAM chips, their customers (OEMs) are not interested.


Intel would definitely try to directly profit from stratified pricing rather than letting the OEM keep that extra margin (competition from AMD permitting).


The current-gen Apple TV is already overpowered for what it does, and extremely nice to use. I can think of very few changes I would like to see, and most of them are purely software.


I really wish it had some way to connect USB storage directly.


Mine has 128GB of onboard storage... but Apple still bans apps from downloading video, which annoys me.

The streaming apps virtually all support downloading for offline viewing on iPhone, but the Apple TV just becomes a paperweight when the internet goes out, because I'm not allowed to use the 128GB of storage for anything.

If they're not going to let you use the onboard storage, then it seems unlikely for them to let you use USB storage. So, first, I would like them to change their app policies regarding internal storage, which is one of the purely software improvements I would like to see.


I use a dedicated NAS as a Plex server + Plex app on Apple TV itself for local streaming, which generally works fine. Infuse app can also index and stream from local sources.

But there are some cases like e.g. watching high-res high-FPS fractal zoom videos (e.g. https://www.youtube.com/watch?v=8cgp2WNNKmQ) where even brief random skipped frames from other things trying to use WiFi at the same time can be really noticeable and annoying.


I do strongly recommend using Ethernet, unless you have the WiFi-only model, but gotcha.


They could, at least, lower the price.


I wish we would just repeal the DMCA.

Under no circumstances should we need an exemption from the copyright office just to be able to repair an ice cream machine. It's not even a permanent exemption! The DMCA causes many weird problems.


It's still weird to me that we ended up in a world in which every bit of information can now be copied at zero cost, and instead of heralding and building upon that technological achievement we've somehow decided to make laws to protect and enforce rent seeking instead. I assume it's one of those things where a few corpos just outplayed 99% of the population; just like universal health care, or public education.


This seems like a very one-sided take. Just look at all the artists (actors, painters, musicians, etc.) that are fighting tooth and nail against AI, and for good reason. While there are plenty of issues with copyright, I don't agree at all that just because the marginal cost of copying is 0 that if someone puts a ton of time and effort in creating a piece of work that I should just get to copy it for free.


That's not the argument though. The argument is that rent seeking as a business strategy is a deeply flawed and counter productive economic practice that ultimately limits our species' technological advancements and societal progress. The sooner we move past it, the better off we will all be. I agree that you should be able to create whatever you want and keep it 100%. So, clearly if someone were to steal your creation from your home and publish it online that would be theft. However, if you decided to take advantage of your creation by creating an infinite number of copies at no cost to you at all (which is what publishing through digital media means) that's your decision as a creator to make, nobody forced that on you.


I don't like your redefinition of "rent seeking" at all. If I create an original piece of art, and publish it digitally, preventing other people from freely copying that art is not rent seeking - the thing I created didn't exist in any form before I created it, and I'm not trying to "extract rent" from you by preventing you from creating any of your own works.

Now, like I said previously, there are currently issues with copyright, and this can cross into rent seeking if I try to extract money from your own original works of art (see the family of Marvin Gaye), and there are issues with the length of copyright (i.e. I believe there is a fundamental difference in protecting the right of a creator while they're alive, vs. the rights of inheritors in perpetuity). But the whole concept of rent seeking is around using the power of government to extract money from others simply because you were there first, not around allowing unlimited copying of truly original works.


Before we invented copying machines there was no concept of copyright. It is a recent invention, and not how human culture evolved. And it was ok up until the internet was invented, but people want to return from passive consumption to interactive. We now prefer games, social networks and search to books, radio and TV. AI is just the latest stage in this move away from passivity.

Why should society not have the right to its traditional interactive way of exchanging culture? Extending the duration of copyright was a perverse move, and now blocking the right to repair is another perverse move. The DMCA put all publishers at the whim of agencies spamming takedowns with impunity, even for no reason at all. Artists more recently would like to copyright abstractions to block generative AI from reusing their ideas.

People need their traditional ways back. We started open source, made Wikipedia, we now have open scientific publication, teachers share prep materials. Clearly there is a sign that copyright is not essential for society. Copyleft or sharing is more important.


See, even the name is misleading because it's not a right - it's really a prohibition. We can agree or disagree about definitions, but I think it is self-evident that defending and enforcing a prohibition on creating zero-cost copies of digital media requires bending over backwards to undo the very properties of the technology that make it desirable to those who want to profit off of it. To me that sure sounds like rent seeking.


I don't understand this. Copyright law does not prevent people from sharing their information freely. It gives the option for "rent seekers" to do their thing. Enforcing your rights for return is optional for people that don't want to do it. I'm not talking about right-to-repair here, but the idea of copyright in general.

A lot of information is generated by taking some financial risk with the hopes of creating something of value and recouping that investment + some profit. Copyright makes that kind of venture possible. It doesn't prevent altruistic souls from putting in the same effort without any expectation of return. We always had this, by default. Copyright framework allows pursuit, generation and dissemination of huge swaths of valuable information that would otherwise not exist.


Uhh, it's because information can be easily copied that the laws were put in place. If anyone can "steal" your work then it would be a deterrent to invention.

If I'm a business that can make money on the service contract I can sell the unit at a lower price. Now I'm forced to make the unit cost higher.


   I assume it's one of those things where a few corpos just outplayed 99% of the population
"The key element of social control is the strategy of distraction that is to divert public attention from important issues and changes decided by political and economic elites"

-Chomsky


Everyone believes they need copyright, therefore it is the status quo.


It’s a reasonable stance to want copyright.

It’s an anti-consumer stance to force copyright to nearly 100 years and allow no format swapping under a hilariously broad set of normal transmission and format-swapping techniques.


Does everyone believe that we need copyright to be the exact way that it is though?

I'm pretty sure that the reason that copyright laws are the way they are is because certain industries in the US lobby the government to strong arm other countries into adopting onerous copyright restrictions as part of free trade agreements.

Whatever you feel about the merits of intellectual property laws the idea that they're wrapped up as 'free trade' when they in fact make things that would otherwise be free cost money is downright Orwellian.

Maybe countries that don't really have a film or TV industry don't want to see copyright on those products, and why would they? Why would they want to see their citizens paying American companies for something that would otherwise be free?


I'm fine going back to the old 14+14 rule copyright originally had. Having your creation protected for an entire generation seems appropriate. But opinions are all across the spectrum on this issue.

I think the primary reason the "spirit" of current copyright broke down is that it's been reduced to hoarding over protecting. The idea is that I can license out an idea if I really want to make use of it, so creations flow and the company makes their own cut out of it.

But I can't just walk up to Disney and pay 100 dollars, 1,000, maybe even 1 million to grab Mickey Mouse and work with something. Depending on their products, they may not want anyone using Mickey, period, even if there is no Mickey product cycle. You basically need to be EA or Mattel or Warner Bros. to even begin being considered for such a thing.

That's their right but it spoils the social contract. When everything by default is locked down, there is no creation flowing. Just broken dreams for abandoned franchises everyone else would love to make use of.


No one has put forth a good argument about why I don't need copyright.


There are definitely tiers of copyright to consider, which is part of the divisiveness on the issue. You wanting to protect your creation and get compensated for its IP for 10-20 years (so, a good portion of your career) is very different from Disney wanting to delay Mickey Mouse going into the public domain - an IP its creator and studio already reaped trillions from over the century.


No one should have to. If we're talking about putting/maintaining restrictions on people, the onus should be on the proponents to put forth a good argument why we need it.


Not when you are the one trying to change the status quo. Regardless of what you believe, if you want to change the default you need to explain to people why it should change. A self-righteous stance like yours will change nothing.


Well the argument is that the status quo is bad for all but a select few, so moving past it would be beneficial for basically nearly everybody.

Rent seeking limits innovation, needlessly drives up costs, creates barriers where there shouldn't be any, encourages predatory economic behaviors, suppresses competition, and ultimately leads to monopolistic and/or oligopolistic wealth and power structures.

It's universally bad practice that results in bad outcomes for society and we should move away from enabling and indeed incentivizing that kind of economic behavior.


Disney was willing to go to the ends of the Earth to protect Mickey Mouse...


Sure, but the anti-circumvention provisions in particular just inconvenience everyone. It's not like DVDs being "protected" prevented them from being ripped.


And Paraguay won


Reference for those who didn't see it:

https://news.ycombinator.com/item?id=41550417


I really have to wonder if the BoJack Horseman writers knew about this when they wrote the Disney trademark episode.


Well they didn’t literally go to every last square km on planet Earth… so it’s not that surprising.


Do you think people should face consequences for piracy? If not, should DRM be legal then?


I’m not a lawyer, but I think it’s pretty clear that piracy is not illegal because of the DMCA; it’s illegal because it violates normal copyright laws. Repealing the DMCA would not change the legal status of piracy.

Repealing the DMCA also wouldn’t make DRM illegal, but DRM would still be exactly as (in)effective as it has already proven to be countless times. DRM has done nothing to restrict piracy, as far as I can tell.

Repealing the DMCA would simply allow people to more freely break DRM in pursuit of lawful purposes, which are currently restricted unfairly, including activities that would fall strictly under Fair Use. I would argue the DMCA is infringing my legal rights for no benefit to society.

Distributing copies of copyrighted content without authorization was unlawful long before the DMCA, outside of Fair Use scenarios.


Piracy was just as Federally illegal prior to the DMCA. Think back to Street Fighter....


It’s a simple question. I know it’s illegal. Should regular people face consequences or not? The status quo is “no,” which is the first step to understanding why making consequences for circumventing DRM is a bitter compromise that is maybe the best option.


I'm not sure I follow. In the case where breaking DRM isn't illegal, but piracy still is illegal, what happens that you think is bad?


What is the difference between doing something illegal and having consequences and doing something illegal that has no consequences?


(not op) I think DMCA specifically should be repealed. We can still have DRM/Copyright/etc if enough people want it, we could look at other systems, but DMCA itself is awful. Repealing it doesn't make any statement about piracy.


> 1. Qualcomm develops a chip that competitive in performance to ARM

Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

If Qualcomm were motivated, I believe they could swap ISAs relatively easily on their flagship processors, and the rest of the core would be the same level of performance that everyone is used to from Qualcomm.

This isn’t the old days when the processor core was deeply tied to the ISA. Certainly, there are things you can optimize for the ISA to eke out a little better performance, but I don’t think this is some major obstacle like you indicate it is.

> 2. The entire software world is ready to recompile everything for RISC-V

#2 is the only sticking point. That is ARM’s only moat as far as Qualcomm is concerned.

Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1. With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

If Qualcomm stopped making ARM processors, what alternatives are you proposing? Everyone is switching to Samsung or MediaTek processors?

If Qualcomm were switching to RISC-V, that would be a sea change that would actually move the needle. Samsung and MediaTek would probably be eager to sign on! I doubt they love paying ARM licensing fees either.

But, all of this is a very big “if”. I think ARM is bluffing here. They need Qualcomm.


> Everyone is switching to Samsung or MediaTek processors?

Why not? MediaTek is very competitive these days.

It would certainly perform better than a RISC-V decoder slapped onto a core designed for ARM having to run emulation for games (which is pretty much the main reason why you need a lot of performance on your phones).

Adopting RISC-V is also a risk for phone producers like Samsung. How much of their internal tooling (e.g. diagnostics, build pipelines, testing infrastructure) is built for ARM? How much will performance suffer, and how much will customers care? Why take that risk (in the short/medium term) instead of just using their own CPUs (they did it in some generations) or using MediaTek (many producers have experience with them already)?

Phone producers will be happy to jump to RISC-V over the long term given the right incentives, but I seriously doubt they will be eager to transition quickly. All risks, no benefits.


> Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

You're talking essentially about microcode; this has been the case for decades, and isn't some new development. However, as others have pointed out, it's not _as_ simple as just swapping out the decoder (especially if you've mixed up a lot of decode logic with the rest of the pipeline). That said, it's happened before and isn't _impossible_.

On a higher level, if you listen to Keller, he'll say that the ISA is not as interesting - it's just an interface. The more interesting things are the architecture, micro-architecture and as you say, the microcode.

It's possible to build a core with comparable performance - it'll vary a bit here and there, but it's not that much more difficult than building an ARM core for that matter. But it takes _years_ of development to build an out-of-order core (even an in-order takes a few years).

Currently, I'd say that in-order RISC-V cores have reached parity. Out of order is a work in progress at several companies and labs. But the chicken-and-egg issue here is that in-order RISC-V cores have ready-made markets (embedded, etc) and out of order ones (mostly used only in datacenters, desktop and mobile) are kind of locked in for the time being.

> Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1.

That's actually true, but porting Android is a nightmare (not because it's hard, but because the documentation on it sucks). Work has started, so let's see.

> With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

I wonder what the percentage here is... Again, I don't think recompiling for a new target is necessarily the worst problem here.


> > Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

> You're talking essentially about microcode; this has been the case for decades, and isn't some new development.

Microcode is much less used nowadays than in the past. For instance, several common desktop processors have only a single instruction decoder capable of running microcode, with the rest of the instruction decoders capable only of decoding simpler non-microcode instructions. Most instructions on typical programs are decoded directly, without going through the microcode.

> However, as others have pointed out, it's not _as_ simple as just swapping out the decoder

Many details of an ISA extend beyond the instruction decoder. For instance, the RISC-V ISA mandates specific behavior for its integer division instruction, which has to return a specific value on division by zero, unlike most other ISAs which trap on division by zero; and the NaN-boxing scheme it uses for single-precision floating point in double-precision registers can be found AFAIK nowhere else. The x86 ISA is infamous for having a stronger memory ordering than other common ISAs. Many ISAs have a flags register, which can be set by most arithmetic (and some non-arithmetic) instructions. And that's all for the least-privileged mode; the supervisor or hypervisor modes expose many more details which differ greatly depending on the ISA.
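
As a toy illustration of the division point (nothing like a real core, just the edge-case rules the spec mandates): RISC-V's signed DIV/REM never trap, they return fixed values, whereas x86 raises #DE on division by zero.

  # Toy illustration of the RISC-V M-extension rules for signed DIV/REM:
  # division by zero and signed overflow never trap, they produce fixed
  # results. (x86, by contrast, raises #DE on divide-by-zero.)
  def riscv_div_rem(rs1: int, rs2: int, xlen: int = 64) -> tuple[int, int]:
      min_int = -(1 << (xlen - 1))
      if rs2 == 0:
          return -1, rs1              # DIV -> all ones, REM -> the dividend
      if rs1 == min_int and rs2 == -1:
          return min_int, 0           # signed overflow: DIV -> dividend, REM -> 0
      q = abs(rs1) // abs(rs2)        # truncate toward zero, as the spec requires
      if (rs1 < 0) != (rs2 < 0):
          q = -q
      return q, rs1 - q * rs2

  print(riscv_div_rem(42, 0))             # (-1, 42)
  print(riscv_div_rem(-(1 << 63), -1))    # (-9223372036854775808, 0)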


> Many details of an ISA extend beyond the instruction decoder. For instance, the RISC-V ISA mandates specific behavior for its integer division instruction, which has to return a specific value on division by zero, unlike most other ISAs which trap on division by zero; and the NaN-boxing scheme it uses for single-precision floating point in double-precision registers can be found AFAIK nowhere else. The x86 ISA is infamous for having a stronger memory ordering than other common ISAs. Many ISAs have a flags register, which can be set by most arithmetic (and some non-arithmetic) instructions. And that's all for the least-privileged mode; the supervisor or hypervisor modes expose many more details which differ greatly depending on the ISA.

All quite true, and to that, add things like cache hints and other hairy bits in an actual processor.


1. That doesn't mean you can just slap a RISC-V decoder on an ARM chip and it will magically work though. The semantics of the instructions and all the CSRs are different. It's going to be way more work than you're implying.

But Qualcomm have already been working on RISC-V for ages so I wouldn't be too surprised if they already have high performance designs in progress.


That is a good comment, and I agree things like CSR differences could be annoying, but compared to the engineering challenges of designing the Oryon cores from scratch… I still think the scope of work would be relatively small. I just don’t think Qualcomm seriously wants to invest in RISC-V unless ARM forces them to.


> I just don’t think Qualcomm seriously wants to invest in RISC-V unless ARM forces them to.

That makes a lot of sense. RISC-V is really not at all close to being at parity with ARM. ARM has existed for a long time, and we are only now seeing it enter into the server space, and into the Microsoft ecosystem. These things take a lot of time.

> I still think the scope of work would be relatively small

I'm not so sure about this. Remember that an ISA is not just a set of instructions: it defines how virtual memory works, what the memory model is like, how security works, etc. Changes in those things percolate through the entire design.

Also, I'm going to go out on a limb and claim that verification of a very high-powered RISC-V core that is going to be manufactured in high-volume is probably much more expensive and time-consuming than the case for an ARM design.

edit: I also forgot about the case with Qualcomm's failed attempt to get code size extensions. Using RVC to approach parity on code density is expensive, and you're going to make the front-end of the machine more complicated. Going out on another limb: this is probably not unrelated to the reason why THUMB is missing from AArch64.


> verification of a very high-powered RISC-V core that is going to be manufactured in high-volume is probably much more expensive and time-consuming than the case for an ARM design.

Why do you say this?


Presumably, when you have a relationship with ARM, you have access to things that make it somewhat less painful:

- People who have been working with spec and technology for decades

- People who have implemented ARM machines in fancy modern CMOS processes

- Stable and well-defined specifications

- Well-understood models, tools, strategies, wisdom

I'm not sure how much of this exists for you in the RISC-V space: you're probably spending time and money building these things for yourself.


There is a market for RISC-V design verification.

And there are already some companies specializing in supplying this market. They consistently present at the RISC-V Summit.


The bigger question is how much of their existing cores utilize Arm IP… and how sure are they that they would find all of it?


> That doesn't mean you can just slap a RISC-V decoder on an ARM chip and it will magically work though.

Raspberry Pi RP2350 already ships with ARM and RISC-V cores. https://www.raspberrypi.com/products/rp2350/

It seems that the RISC-V cores don't take much space on the chip: https://news.ycombinator.com/item?id=41192341

Of course, microcontrollers are different from mobile CPUs, but it's doable.


That's not really comparable. Raspberry Pi added entirely separate RISC-V cores to the chip, they didn't convert an ARM core design to run RISC-V instructions.

What is being discussed is taking an ARM design and modifying it to run RISC-V, which is not the same thing as what Raspberry Pi has done and is not as simple as people are implying here.


Nevertheless, several companies that originally had MIPS implementations did exactly this, to implement ARM processors.


I am a fan of the Jeff Geerling YouTube series in which he is trying to make GPUs (AMD/Nvidia) run on the Raspberry Pi. It is not easy - and they have the Linux kernel source code available to modify. Now imagine all Qualcomm clients having to do similar stuff with their third-party hardware, possibly with no access to the source code of drivers. Then debug and fix for 3 years all the bugs that pop up in the wild. What a nightmare.

Apple at least has full control of its hardware stack (Qualcomm does not, as they only sell chips to others).


Hardware drivers certainly can be annoying, but a hobbyist struggling to bring big GPUs’ hardware drivers to a random platform is not at all indicative of how hard it would be for a company with teams of engineers. If NVidia wanted their GPUs to work on Raspberry Pi, then it would already be done. It wouldn’t be an issue. But NVidia doesn’t care, because that’s not a real market for their GPUs.

Most OEMs don’t have much hardware secret sauce besides maybe cameras these days. The biggest OEMs probably have more hardware secret sauce, but they also should have correspondingly more software engineers who know how to write hardware drivers.

If Qualcomm moved their processors to RISC-V, then Qualcomm would certainly provide RISC-V drivers for their GPUs, their cellular modems, their image signal processors, etc. There would only be a little work required from Qualcomm’s clients (the phone OEMs) like making sure their fingerprint sensor has a RISC-V driver. And again, if Qualcomm were moving… it would be a sea change. Those fingerprint sensor manufacturers would absolutely ensure that they have a RISC-V driver available to the OEMs.

But, all of this is very hypothetical.


> If NVidia wanted their GPUs to work on Raspberry Pi, then it would already be done. It wouldn’t be an issue. But NVidia doesn’t care, because that’s not a real market for their GPUs.

It's weird af that Geerling ignores nVidia. They have a line of ARM based SBCs with GPUs from Maxwell to Ampere. They have full software support for OpenGL, CUDA, etc. For the price of an RPi 5 + discrete GPU, you can get a Jetson Orin Nano (8 GB RAM, 6 A78 ARM cores, 1024 Ampere cores.) All in a much better form factor than a Pi + PCIe hat and graphics card.

I get the fun of doing projects, but if what you're interested in is a working ARM based system with some level of GPU, it can be had right now without being "in the shop" twice a week with a science fair project.


> It's weird af that Geerling ignores nVidia.

“With the PCI Express slot ready to go, you need to choose a card to go into it. After a few years of testing various cards, our little group has settled on Polaris generation AMD graphics cards.

Why? Because they're new enough to use the open source amdgpu driver in the Linux kernel, and old enough the drivers and card details are pretty well known.

We had some success with older cards using the radeon driver, but that driver is older and the hardware is a bit outdated for any practical use with a Pi.

Nvidia hardware is right out, since outside of community nouveau drivers, Nvidia provides little in the way of open source code for the parts of their drivers we need to fix any quirks with the card on the Pi's PCI Express bus.”

Reference = https://www.jeffgeerling.com/blog/2024/use-external-gpu-on-r...

I’m not in a position to evaluate his statement vs yours, but he’s clearly thought about it.


I mean in terms of his quest for GPU + ARM. He's been futzing around with Pis and external GPUs and the entire time you've been able to buy a variety of SBCs from nVidia with first class software support.


AFAIK the new SiFive dev board actually supports AMD discrete graphics cards over PCIe


Naively, it would seem like it would be as simple as updating Android Studio and recompiling your app, and you would be good to go? There must be less than 1 in 1000 (probably less than 1 in 10,000) apps that do their own ARM specific optimizations.


Without any ARM specific optimizations, most apps wouldn’t even have to recompile and resubmit. Android apps are uploaded as bytecode, which is then AOT compiled by Google’s cloud service for the different architectures, from what I understand. Google would just have to decide to support another target, and Google has already signaled their intent to support RISC-V with Android.

https://opensource.googleblog.com/2023/10/android-and-risc-v...


I remember when Intel was shipping x86 mobile CPUs for Android phones. I had one pretty soon after their release. The vast majority of Android apps I used at the time just worked without any issues. There were some apps that wouldn't appear in the store but the vast majority worked pretty much day one when those phones came out.


I'm not sure how well it fits the timeline (i.e. x86 images for the Android emulator becoming popular due to better performance than the ARM images vs. actual x86 devices being available), but at least these days a lot of apps shipping native code probably maintain an x86/x64 version purely for the emulator.

Maybe that was the case back then, too, and helped with software availability?


Yep! I had the Zenfone with an Intel processor in it, and it worked well!


> Android apps are uploaded as bytecode, which is then AOT compiled by Google’s cloud service for the different architectures, from what I understand.

No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device. Though that doesn't change the result re compatibility.

However – a surprising number of apps do ship native code, too. Of course especially games, but also any other media-related app (video players, music players, photo editors, even my e-book reading app) and miscellaneous other apps, too. There, only the original app developer can recompile the native code to a new CPU architecture.


> No, Android apps ship the original bytecode which then gets compiled (if at all) on the local device.

Google Play Cloud Profiles is what I was thinking of, but I see it only starts “working” a few days after the app starts being distributed. And maybe this is merely a default PGO profile, and not a form of AOT in the cloud. The document isn’t clear to me.

https://developer.android.com/topic/performance/baselineprof...


Yup, it's just a PGO profile (alternatively, developers can also create their own profile and ship that for their app).


> Virtually all high performance processors these days operate on their own internal “instructions”. The instruction decoder at the very front of the pipeline that actually sees ARM or RISC-V or whatever is a relatively small piece of logic.

If that's true, then what is arm licensing to Qualcomm? Just the instruction set or are they licensing full chips?

Sorry for the dumb question / thanks in advance.


Qualcomm has historically licensed both the instruction set and off the shelf core designs from ARM. Obviously, there is no chance the license for the off the shelf core designs would ever allow Qualcomm to use that IP with a competing instruction set.

In the past, Qualcomm designed their own CPU cores (called Kryo) for smartphone processors, and just made sure they were fully compliant with ARM’s instruction set, which requires an Architecture License, as opposed to the simpler Technology License for a predesigned off the shelf core. Over time, Kryo became “semi-custom”, where they borrowed from the off the shelf designs, and made their own changes, instead of being fully custom.

These days, their smartphone processors have been entirely based on off the shelf designs from ARM, but their new Snapdragon X Elite processors for laptops include fully custom Oryon ARM cores, which is the flagship IP that I was originally referencing. In the past day or two, they announced the Snapdragon 8 Elite, which will bring Oryon to smartphones.


thank you for explaining


A well-designed (by Apple [1], by analyzing millions of popular applications and what they do) instruction set. One where there are reg+reg/reg+shifted_reg addressing modes, only one instruction length, and sane, useful instructions like SBFX/UBFX, BFC, BFI, and TBZ. All of that is much better than promises of a magical core that can fuse 3-4 instructions into one magically.

[1] https://news.ycombinator.com/item?id=31368681


1 - thank you

2 - thank you again for sharing your eink hacking project!


Note that these are just a person's own opinions, obviously not shared by the architects behind RISC-V.

There are multiple approaches here. There's this tendency for each designer to think their own way is the best.


I get that. I just work quite distantly from chips and find it interesting.

That said, licensing an instruction set seems strange. With very different internal implementations, you'd expect instructions and instruction patterns in a licensed instruction set to have pretty different performance characteristics on different chips leading to a very difficult environment to program in.


Note that this is not in any way a new development.

If you look at the incumbent ISAs, you'll find that most of the time ISA and microarchitecture were intentionally decoupled decades ago.


>Many Android apps don’t depend directly on “native” code, and those could potentially work on day 1. With an ARM emulation layer, those with a native dependency could likely start working too, although a native RISC-V port would improve performance.

This is only true if the application is written purely in Java/Kotlin with no native code. Unfortunately, many apps do use native code. In a CppCon talk, Microsoft identified that more than 70% of the top 100 apps on Google Play used native code.
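
You can spot-check this yourself: an APK is just a ZIP file with native libraries under lib/<abi>/, so something like this rough sketch works (the riscv64 directory name is, as far as I know, the NDK's ABI label):

  # Sketch: list which native ABIs an APK ships by looking at its lib/ folders.
  # Apps with no lib/ entries are pure bytecode and would not need a
  # per-architecture rebuild; the rest would need a riscv64 build (or emulation).
  import sys
  import zipfile

  def native_abis(apk_path: str) -> set[str]:
      with zipfile.ZipFile(apk_path) as apk:
          return {
              name.split("/")[1]
              for name in apk.namelist()
              if name.startswith("lib/") and name.endswith(".so")
          }

  for apk in sys.argv[1:]:
      abis = native_abis(apk) or {"none (pure bytecode)"}
      print(apk, "->", ", ".join(sorted(abis)))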

>I think ARM is bluffing here. They need Qualcomm.

Qualcomm's survival is dependent on ARM. Qualcomm's entire revenue stream evaporates without ARM IP. They may still be able to license their modem IP to OEMs, but not if their modem also used ARM IP. It's only a matter of time before Qualcomm capitulates and signs a proper licensing agreement with ARM. The fact that Qualcomm's lawyers didn't do their due diligence to ensure that Nuvia's ARM Architecture licenses were transferable is negligent on their part.


On the one hand, if OpenAI makes a bad choice, it’s still a bad choice to copy it.

On the other hand, OpenAI has moved to a naming convention where they seem to use a name for the model: “GPT-4”, “GPT-4 Turbo”, “GPT-4o”, “GPT-4o mini”. Separately, they use date strings to represent the specific release of that named model. Whereas Anthropic had a name: “Claude Sonnet”, and what appeared to be an incrementing version number: “3”, then “3.5”, which set the expectation that this is how they were going to represent the specific versions.

Now, Anthropic is jamming two version strings on the same product, and I consider that a bad choice. It doesn’t mean I think OpenAI’s approach is great either, but I think there are nuances that say they’re not doing exactly the same thing. I think they’re both confusing, but Anthropic had a better naming scheme, and now it is worse for no reason.


> Now, Anthropic is jamming two version strings on the same product, and I consider that a bad choice. It doesn’t mean I think OpenAI’s approach is great either, but I think there are nuances that say they’re not doing exactly the same thing

Anthropic has always had dated versions as well as the other components, and they are, in fact, doing exactly the same thing, except that OpenAI has a base model in each generation with no suffix before the date specifier (what I call the "Model Class" on the table below), and OpenAI is inconsistent in their date formats, see:

  Major Family  Generation  Model Class  Date
  claude        3.5         sonnet       20241022
  claude        3.0         opus         20240229
  gpt           4           o            2024-08-06
  gpt           4           o-mini       2024-07-18
  gpt           4           -            0613
  gpt           3.5         turbo        0125


But did they ever have more than one release of Claude 3 Sonnet? Or any other model prior to today?

As far as I can tell, the answer is “no”. If true, then the fact that they previously had date strings would be a purely academic footnote to what I was saying, not actually relevant or meaningful.


1.3MB seems perfectly reasonable in a modern web app, especially since it will be cached after the first visit to the site.

If you’re just storing user preferences, obviously don’t download SQLite for your web app just to do that… but if you’re doing something that benefits from a full database, don’t fret so much about 1MB that you go try to reinvent the wheel for no reason.

If the other comment is correct, then it won’t even be 1.3MB on the network anyways.


A megabyte here, a megabyte there, pretty soon you’re talking about a really heavyweight app.


Given how hefty images are, a full database doesn't seem too bad for the purpose of an "app" that would benefit from it, especially when compression can bring the size down even lower.


We are past the stage where every piece of JS has to be loaded upfront and delay the first meaningful paint. Modern JS frameworks and modules are chunked and can be eagerly/lazily loaded. Unless you make the SQLite DB an integral part of your first meaningful page load, preloading those 1.3MB in the background/upon user request is easy.


By the time you have a good reason to add this library, I think you're already in heavyweight app territory.


On another thread, someone linked to this online demo where you can try it out: https://huggingface.co/spaces/akhaliq/depth-pro


Hmm it doesn't crash or do anything weird but it just separates the background from the foreground. It looks like the whole foreground is at one distance and the background has a bit of a gradient (higher is further). I've never actually looked at the background that much before and it messes with you as well haha. So it's difficult to tell whether the model is right or wrong.


Adobe's did the same:

https://imgur.com/a/u87J9A9


Translation is difficult at the best of times. I thought it was interesting how Google Translate seemingly kept coming up with different translations for the name of the program. Under “Features”, it suddenly decides the name is “Takigami”, as one example. By the end, it even goes so far as to say: “When you start up the cooking program, the following screen will be displayed.”

I asked ChatGPT 4o to translate the README: https://chatgpt.com/share/6700bed9-1198-8004-8eed-07f5055d07...

The translation seemed largely consistent with what Google Translate provided, but some of ChatGPT’s translation differences seemed more plausible to me, and it certainly reads more coherently. It also doesn’t keep forgetting that it’s dealing with the proper name of the program.

I didn’t try Gemini for this, but I imagine it has to be decent at translation too, so I wonder if/when Google will use Gemini to assist, replace, or otherwise complement Google Translate.
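
If you wanted to script that kind of comparison instead of using the ChatGPT UI, a minimal sketch (assuming the openai Python package and an OPENAI_API_KEY in the environment) might look like:

  # Sketch: translate a README via the OpenAI API for side-by-side comparison
  # with Google Translate. Assumes the openai package and OPENAI_API_KEY are
  # set up; "gpt-4o" is the model discussed above.
  from openai import OpenAI

  client = OpenAI()

  with open("README.md", encoding="utf-8") as f:
      source = f.read()

  resp = client.chat.completions.create(
      model="gpt-4o",
      messages=[
          {"role": "system",
           "content": "Translate this Japanese README to English. "
                      "Keep the program's proper name consistent throughout."},
          {"role": "user", "content": source},
      ],
  )
  print(resp.choices[0].message.content)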


I think I would prefer a slightly worse translator program than one that potentially hallucinates new information onto the page


Google Translate hallucinates as well.

It’s particularly obvious if you translate between languages with vastly different grammar, e.g. Korean -> English, since Korean doesn’t require a subject per sentence but English does – so Google Translate then sometimes just inserts random subjects from its training data into the translated text. ChatGPT, by understanding more of the context before each sentence in a long text, seems to do this less.

For stuff like French -> English or German -> English, where no info is “missing” from each sentence and the tool doesn't need to rely on context to produce grammatically correct translations, Google Translate is great.


I stopped using GT for larger texts when (having finally achieved some rudimentary literacy in my target language) I noticed it was rather cavalier about inserting or deleting "not", changing the sense of sentences completely.

Sure, in translation one always has issues of sarcasm or irony, but I felt the tool was probably hallucinating more than being a useful work instrument.

Lagniappe: https://www.youtube.com/watch?v=CFCABnWlN8E

EDIT: and yes, I also prefer the older behaviour of translation programs, whose output was noticeably disfluent where it was poor instead of just bullshitting to stay fluent.


I've been talking to a lot of Chinese people using machine translation recently, and noticed that inserting and removing "not" is very common for all translation tools I've used, from Google Translate to DeepL to ChatGPT. I'm not sure if it's particular to Chinese ←→ English, or if it's a common problem across all languages.

A priori, it seems like a pretty huge issue, because it changes a sentence's meaning to its opposite. Fortunately, it's usually easy to notice. But then again, I obviously wouldn't know about any instances I haven't noticed.


Chinese ←→ English? Any chance you might be willing to recommend a better test for: https://news.ycombinator.com/item?id=41696289 ?


I don't have a good idea on how to objectively test this, but subjectively, my impression is that most regular people in China don't know a lot about any countries outside of East and Southeast Asia.

Even something like Halloween, which apparently triggered this discussion, is not something most people, including kids, seem to be particularly familiar with. When I mentioned to a Chinese friend that we were celebrating Halloween last year, she advised me to be careful when inviting ghosts into my home. She was unaware that it is mostly a fun children's holiday where they dress up and get candy.


Halloween is of Western, Christian origination and so knowledge of it wouldn't be something to expect in places not heavily influenced by the West or Christianity.


Despite already having our own (week-long, springtime) holiday involving dressing up in costumes, Halloween has taken a firm hold here over the last couple of decades. Kids have never yet showed up at our door, but they're definitely out trick or treating.

I'd blamed Chinese factories needing more places to sell their plastic Halloween gear, but now it sounds like it just comes down to US media saturation?

Then again, we should all be stealing more holidays from each other; a more syncretic world is a less boring one.

[My German teacher in high school said the best thing about growing up in southern Germany was that they got all the holidays (both Protestant and Catholic) off from school]


Google Translate frequently makes plenty of errors/hallucinations. I pointed out several above in this very thread!

When accuracy is absolutely critical, don’t depend on machine translation alone, and especially don’t depend on a single machine translation without cross checking it. As it is, I have anecdotally only had good experiences when comparing GPT-4o’s translation quality to Google Translate. I would love to see objective research into the topic, if someone were offering it, but not trite dismissals that imply Google Translate is somehow immune to hallucinations.


Human translators can "hallucinate" new bullshit too, sometimes deliberately.

Source: Was fansub translator, partook in many translator flame wars over translation disagreements and we all shook heads at the work of craplators.

Also, I will tell you most professional translators are shit at their job.

I can't wait until computer programs can practically take over translating, it's a thankless sweatshop line of work.


Why do you assume that google translate is anything other than an LLM in 2024?


As a note, for Japanese text DeepL is widely used, even by Japanese people. From English to Japanese it may not choose properly nuanced words, but it largely produces acceptable translations.


It's better than Google Translate but it still leaves out a lot. GPT is the best at translating.


Well, Nvidia has powered a much more popular console... the Nintendo Switch, and Nvidia looks set to power the Switch 2 when it launches next year. So, AMD is clearly not the only choice.


The problem with choosing Nvidia is that they can't make an x86 processor with an integrated GPU. If you're looking to maintain backward compatibility with the Playstation 5, you're probably going to want to stick with an x86 chip. AMD has the rights to make x86 chips and it has the graphics chips to integrate.

Nvidia has graphics chips, but it doesn't have the CPUs. Yes, Nvidia can make ARM CPUs, but they haven't been putting out amazing custom cores.

AMD can simply repackage some Zen X cores with RDNA X GPU and with a little work have something Sony can use. Nvidia would need to either grab off-the-shelf ARM Cortex cores (like most of their ARM CPUs use) or Sony would need to bet that Nvidia could and would give them leading-edge performance on custom designed cores. But would Nvidia come in at a price that Sony would pay? Probably not. AMD's costs are probably a lot lower since they're going to be doing all that CPU work anyway for the rest of their business.

For Nintendo, the calculus is a bit different. Nintendo is fine with off-the-shelf cores that are less powerful than smartphones and they're already on ARM so there's no backward incompatibility there. But for Sony whose business is different, it'd be a huge gamble.


I think changing from AMD GPUs to Nvidia GPUs by itself has a good chance of breaking backwards compatibility with how low level and custom Sony's GPU API apparently is, so the CPU core architecture would just be a secondary concern.

I was not saying Sony should switch to Nvidia, just pointing out that it is objectively incorrect to say that AMD is the only option for consoles when the most popular console today does not rely on AMD.

I also fully believe Intel could scale up an integrated Battlemage to meet Sony's needs, but is it worth the break in compatibility? Is it worth the added risk when Intel's 13th and 14th gen CPUs have had such publicly documented stability issues? I believe the answer to both questions is "probably not."


> incorrect to say that AMD is the only option for consoles

It's a bit of an apples to oranges comparison though, even if all 3 devices are technically consoles. The Switch is basically a tablet with controllers attached and a tablet/phone CPU, while the PS5/Xbox are just custom-built PCs.


The only reason I can see that it would matter that the Switch is a low-end console is if you think Nvidia is incapable of building something higher end. Are you saying that Nvidia couldn't make more powerful hardware for a high end console? Otherwise, the Switch just demonstrates to me that Nvidia is willing to form the right partnership, and reliably supply the same chips for long periods of time.

I'm certain Nvidia would have no trouble doing a high end console, customized to Microsoft and/or Sony's exacting specs... for the right price.


> Are you saying that Nvidia couldn't make more powerful hardware for a high end console?

Hard to say. It took Qualcomm years to make something superior to standard ARM designs. The GPU is of course another matter.

> I'm certain Nvidia would have no trouble doing a high end console,

The last mobile/consumer CPU (based on their own core) that they released came out in 2015, and they have been using off-the-shelf ARM core designs for their embedded and server stuff. Wouldn't they effectively be starting from scratch?

I'm sure they could achieve that in a few years, but do you think it would take them significantly less time than it did Apple or Qualcomm?

> Nvidia is incapable of building something higher end

I think it depends more on what Nintendo is willing to pay, I doubt they really want a "high-end" chip.


> I think it depends more on what Nintendo is willing to pay, I doubt they really want a "high-end" chip.

In this thread, we were talking about what Sony and Microsoft would want for successors to the PS5 and XSX, not Nintendo. Nintendo was just a convenient demonstration that Nvidia is clearly willing to partner with console makers like Sony and Microsoft.

> Hard to say. It took Qualcomm years to make something superior to standard ARM designs.

> The last mobile CPU

I wasn't talking about Nvidia custom designing an ARM core, although they have done that in the past, and again, this wouldn't be mobile hardware. Nvidia is using very powerful ARM cores in their Grace CPU today. They have plenty of experience with the off-the-shelf ARM cores, which are very likely good enough for modern consoles.


> Nvidia is using very powerful ARM cores in their Grace CPU today

I'm not sure Neoverse is particularly (or at all) suitable for gaming consoles. Having 60+ cores wouldn't be particularly useful and their single core performance is pretty horrible (by design).

> which are very likely good enough for modern consoles

Are they? Cortex-X4 has barely caught up with Apple's M1 (from 2020)? What other options are there? ARM just doesn't seem to care that much about the laptop/desktop market at all.


The Neoverse cores are substantially more powerful than something like Cortex-X4. Why would they not be suitable? It's hard to find benchmarks that are apples-to-apples in tests that would be relevant for gaming, but what little I've been able to find shows that the Neoverse V2 cores in Nvidia's Grace CPU are competitive against AMD's CPUs. I hate to draw specific comparisons, because it's very easy to attack when, as I already said, the numbers are hard to come by, but I'm seeing probably 20% better than Zen 3 on a clock-for-clock, single core basis. The current-generation PS5 and XSX are based on Zen 2. Zen 3 was already a 10% to 30% jump in IPC over Zen 2, depending on who you ask. Any hypothetical Nvidia-led SoC design for a next-gen console would be pulling in cores like the Neoverse V3 cores that have been announced, and are supposedly another 15% to 20% better than Neoverse V2, or even Neoverse V4 cores which might be available in time for the next-gen consoles.
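
Back-of-the-envelope, compounding those (admittedly rough, clock-for-clock) numbers:

  # Back-of-the-envelope compounding of the rough per-clock numbers above,
  # relative to the Zen 2 cores in the current consoles. Purely illustrative.
  zen3_vs_zen2 = (1.10, 1.30)   # "10% to 30%" Zen 3 over Zen 2
  v2_vs_zen3   = (1.20, 1.20)   # "probably 20% better than Zen 3"
  v3_vs_v2     = (1.15, 1.20)   # "another 15% to 20%"

  lo = zen3_vs_zen2[0] * v2_vs_zen3[0] * v3_vs_v2[0]
  hi = zen3_vs_zen2[1] * v2_vs_zen3[1] * v3_vs_v2[1]
  print(f"Neoverse V3 vs. Zen 2, per clock: roughly {lo:.2f}x to {hi:.2f}x")
  # -> roughly 1.52x to 1.87x, before any clock speed or core count differences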

These gains add up to be substantial over the current-gen consoles, and as an armchair console designer, I don't see how you can be so confident they wouldn't be good enough.

The CPU cores Nvidia has access to seem more than sufficient, and the GPU would be exceptional. AMD is clearly not the only one capable of providing hardware for consoles. Nvidia has done it, will do it again, and the evidence suggests Nvidia could certainly scale up to much bigger consoles if needed. One problem is certainly that Nvidia is making bank off of AI at the moment, and doesn't need to vie for the attention of console makers right now, so they aren't offering any good deals to those OEMs. The other problem is that console makers also don't want any break in compatibility. I've already addressed these problems in previous comments. It's just incorrect to say that the console makers have no other choices. They're just happy with what AMD is offering, and making the choice to stick with that. Nintendo will be happy using hardware made on a previous process node, so it won't interfere with Nvidia's plan to make insane money off of AI chips the way that next-gen console expectations from Sony or Microsoft would. I'm happy to admit that I'm being speculative in the reasons behind these things, but there seem to be enough facts to support the basic assertion that AMD is not the only option, which is what this sub-thread is about.

Since you seem so confident in your assertions, I assume you have good sources to back up the claim that Neoverse V2/V3/V4 wouldn't be suitable for gaming consoles?


> Nvidia's Grace CPU are competitive against AMD's CPUs

I don't think PS/Xbox are using AMDs 64+ core server chips like Milan etc.

> I assume you have good sources to back up the claim that Neoverse V2/V3/V4

These are data center CPUs designed for very different purposes. Neoverse is only used in chips that target very specific, highly parallelized workloads. The point is having a very high number (64-128+) of relatively slow but power-efficient cores and extremely high bandwidth.

e.g. Grace has comparable single-thread performance to a Ryzen 7 3700X (a 5-year-old chip). Sure, MT performance is 10x better, but how does that matter for gaming workloads?

I assume you could boost the frequency and build a SoC with several times fewer cores than all recent Neoverse chips (if ARM lets you). Nobody has done that or publicly considered doing it. I can't prove that it's impossible, but can you provide any specific arguments for why you think that would be a practical approach?

> substantially more powerful than something like Cortex-X4.

Of course it's just rumors, but Nvidia seems to be going with the ARM A78C, which is a tier below the X4. That is not particularly surprising, since Nintendo would rather spend money on other components / target a lower price point. As we've agreed, the GPU is the important part here; the CPU will probably be comparable to an off-the-shelf SoC you can get from Qualcomm or even MediaTek.

That might change in the future but I don't see any evidence that Nvidia is somehow particularly good at building CPUs or is close to being in the same tier as AMD, Intel, Qualcomm (maybe even Ampere depending if they finally deliver what they have been promising in the near future).

Same applies to Grace: the whole selling point is integration with their datacenter GPUs. For CPU workloads it provides pretty atrocious price/performance, and it would make little sense to buy it for that.


Emulating x86 would be an option - though given Sony's history, I doubt they'd consider it seriously.

For context...

- PS1 BC on PS2 was mostly hardware but they (AFAIK?) had to write some code to translate PS1 GPU commands to the PS2 GS. That's why you could forcibly enable bilinear filtering on PS1 games. Later on they got rid of the PS1 CPU / "IO processor" and replaced it with a PPC chip ("Deckard") running a MIPS emulator.

- PS1 BC on PS3 was entirely software; though the Deckard PS2s make this not entirely unprecedented. Sony had already written POPS for PS1 downloads on PS2 BBN[0] and PSP's PS1 Classics, so they knew how to emulate a PS1.

- PS2 BC on PS3 was a nightmare. Originally it was all hardware[1], but then they dropped the EE and emulated it in software (while keeping the GS in hardware), and then they dropped the PS2 hardware entirely and all backwards compatibility with it. Then they actually wrote a PS2 emulator anyway, which is part of the firmware, but it's only allowed to be used with PS2 Classics, not as BC. I guess they consider the purchase price of the games on the shop to also pay for the emulator?

- No BC was attempted on PS4 at all, AFAIK. PS3 is a weird basket case of an architecture, but even PS1 and PS2 games aren't supported.

At some point Sony gave up on software emulation and decided it's only worth it for retro re-releases where they can carefully control what games run on the emulator and, more importantly, charge you for each re-release. At least the PS4 versions will still play on a PS5... and PS6... right?

[0] A Japan-only PS2 application that served as a replacement for the built-in OSD and let you connect to and download software demos, game trailers, and so on. It also had an e-mail client.

[1] Or at least as "all hardware" as the Deckard PS2s are


> Then they actually wrote a PS2 emulator anyway, which is part of the firmware, but only allowed to be used with PS2 Classics and not as BC.

To be fair, IMO that was only 80-90% of a money grab; "you can now run old physical PS2 games, but only these 30% of our catalog" being a weird selling point was probably also a consideration.

> Sony had already written POPS for PS1 downloads on PS2 BBN[0] and PSP's PS1 Classics, so they knew how to emulate a PS1.

POPS on the PSP runs large parts of the code directly on the R4000 without translation/interpretation, right? I'd call this one closer to what they did for PS1 games on the (early/non-Deckard) PS2s.


> No BC was attempted on PS4 at all, AFAIK. PS3 is a weird basketcase of an architecture, but even PS1 or PS2 aren't BC supported.

To Be Faiiiirrrrrr, that whole generation was a basket case. Nintendo with the motion controls. Microsoft with a console that internally was more PC than "traditional" console (and HD-DVD). Sony with the Cell processor and OtherOS™.

I do have fond memories of playing around with Linux on the PS3. Two simultaneous threads! 6 more almost cores!! That's practically a supercomputer!!!


I remember the hype around the Cell processor being so high at the release of the PlayStation 3. It was novel for the application, but it still fizzled out even with the backing it had.


In what sense would you say the Xbox 360 was more "PC-like" than "console-like"?


I'll try to answer in the parent commenter's place.

Prior generations of consoles were true-blue, capital-E "embedded". Whatever CPU they could get, graphics hardware that was custom built for that particular machine, and all sorts of weird coprocessors and quirks. For example, in the last generation, we had...

- The PlayStation 2, sporting a CPU with an almost[0] MIPS-compatible core with "vertex units", one of which is exposed to software as a custom MIPS coprocessor, a completely custom GPU architecture, a separate I/O processor that's also a PS1, custom sound mixing hardware, etc.

- The GameCube, sporting a PPC 750 with custom cache management and vector instructions[1], which you might know as the PowerPC G3 that you had in your iMac. The GPU is "ATI technology", but that's because ATI bought out the other company Nintendo contracted to make it, ArtX. And it also has custom audio hardware that runs on another chip with its own memory.

- The Xbox, sporting... an Intel Celeron and an Nvidia GPU. Oh, wait, that's "just a PC".

Original Xbox is actually a good way to draw some red lines here, because while it is in some respects "just a PC", it's built a lot more like consoles are. All games run in Ring 0, and are very tightly coupled to the individual quirks of the system software. The "Nvidia GPU" is an NV2A, a custom design that Nvidia built specifically for the Xbox. Which itself has custom audio mixing and security hardware you would never find in a PC.

In contrast, while Xbox 360 and PS3 both were stuck with PPC[2], they also both had real operating system software that commercial games were expected to coexist with. On Xbox 360, there's a hypervisor that enforces strict code signing; on PS3 games additionally run in user mode. The existence of these OSes meant that system software could be updated in nontrivial ways, and the system software could do some amount of multitasking, like playing music alongside a game without degrading performance or crashing it. Y'know, like you can on a PC.

Contrast this again to the Nintendo Wii, which stuck with the PPC 750 and ArtX GPU, adding on a security processor designed by BroadOn[3] to do very rudimentary DRM. About the only thing Nintendo could sanely update without bricking systems was the Wii Menu, which is why we were able to get the little clock at the bottom of the screen. They couldn't, say, run disc games off the SD card or update the HOME Menu to have a music player or friends list or whatever, because the former runs in a security processor that exposes the SD card as a block device and the latter is a library Nintendo embedded into every game binary rather than a separate process with dedicated CPU time budgets.

And then the generation after that, Xbox One and PS4 both moved to AMD semicustom designs that had x86 CPUs and Radeon GPUs behind familiar APIs. They're so PC like that the first thing demoed on a hacked PS4 was running Steam and Portal. The Wii U was still kind of "console-like", but even that had an OS running on the actual application processor (albeit one of those weird designs with fixed process partitions like something written for a mainframe). And that got replaced with the Switch which has a proper microkernel operating system running on an Nvidia Tegra SoC that might have even wound up in an Android phone at some point!

Ok, that's "phone-like", not "PC-like", but the differences in systems design philosophy between the two is far smaller than the huge gulf between either of those and oldschool console / embedded systems.

[0] PS2 floating-point is NOWHERE NEAR IEEE standard, and games targeting PS2 tended to have lots of fun physics bugs on other hardware. Case in point: the Dolphin wiki article for True Crime: New York City, which is just a list of bugs the emulator isn't causing. https://wiki.dolphin-emu.org/index.php?title=True_Crime:_New...
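A tiny sketch of just one commonly cited difference, assuming an IEEE-754 host (ps2_like_div is a made-up helper for illustration; real emulators model far more than this): the EE/VU float units have no Inf or NaN, so divide-by-zero clamps to the largest finite value, and the follow-up math diverges from what the same code does elsewhere.

    #include <cmath>
    #include <cstdio>
    #include <limits>

    // Toy model of one PS2 FPU quirk: no Inf/NaN, so divide-by-zero
    // returns +/-FLT_MAX instead of +/-Inf. (Hypothetical helper,
    // not how an actual emulator implements it.)
    static float ps2_like_div(float a, float b) {
        if (b == 0.0f) {
            float sign = (std::signbit(a) != std::signbit(b)) ? -1.0f : 1.0f;
            return sign * std::numeric_limits<float>::max();  // stays finite
        }
        return a / b;
    }

    int main() {
        float zero = 0.0f;
        float ieee = 1.0f / zero;               // +Inf on IEEE-754 hardware
        float ps2  = ps2_like_div(1.0f, zero);  // FLT_MAX on PS2-style hardware

        // Inf - Inf is NaN, but FLT_MAX - FLT_MAX is 0: the same game code
        // takes different branches depending on which behavior it was tuned for.
        std::printf("IEEE:     %g -> %g\n", ieee, ieee - ieee);
        std::printf("PS2-like: %g -> %g\n", ps2, ps2 - ps2);
        return 0;
    }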

[1] PPC 750 doesn't have vector normally; IBM added a set of "paired single" instructions that let it do math on 32-bit floats stored in a 64-bit float register.

[2] Right after Apple ditched it for power reasons, which totally would not blow up in Microsoft's face

[3] Which coincidentally was founded by the same ex-SGI guy (Wei Yen) who founded ArtX, and ran DRM software ported from another Wei Yen-founded company, iQue.


Considering how the winds are blowing, I'm going to guess the next consoles from Sony and Microsoft are the last ones to use x86. They'll be forced to switch to ARM for price/performance reasons, with all x86 vendors moving upmarket to try to maintain revenues.


> Nvidia has graphics chips, but it doesn't have the CPUs. Yes, Nvidia can make ARM CPUs, but they haven't been putting out amazing custom cores.

Ignorant question - do they have to? The last time I was up on gaming hardware it seemed as though most workloads were GPU-bound and that having a higher-end GPU was more important than having a blazing fast CPU. GPUs have also grown much more flexible rendering pipelines as game engines have gotten much more sophisticated and, presumably, parallelized. Would it not make sense for Nvidia to crank out a cost-optimized design comprising their last-gen GPU architecture with 12 ARM cores on an affordable node size?

The reason I ask is because I've been reading a lot about 90s console architectures recently. My understanding is that back then the CPU and specialized co-processors had to do a lot of heavy lifting on geometry calculations before telling the display hardware what to draw. In contrast, most contemporary GPU designs take care of all of the vertex calculations themselves and therefore free the CPU up a lot in this regard. If you have an entity-based game engine and are able to split that object graph into well-defined clusters, you can probably parallelize the simulation and scale horizontally decently well. Given these trends, I'd think a bunch of cheaper cores could work about as well as, and for less than, a handful of higher-end ones.
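To make that concrete, here is a minimal sketch of the "split the object graph into clusters and scale horizontally" idea; Entity, simulate_cluster, and the cluster count are all made up for illustration, and a real engine would need a job system, dependency tracking between clusters, and careful data layout on top of this.

    #include <cstdio>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Entity {
        float x = 0.0f, vx = 1.0f;
    };

    // Stand-in for per-entity game logic; each cluster is independent,
    // so one modest core can own it for the whole frame.
    static void simulate_cluster(std::vector<Entity>& cluster, float dt) {
        for (Entity& e : cluster) {
            e.x += e.vx * dt;
        }
    }

    int main() {
        const float dt = 1.0f / 60.0f;
        // Pretend the engine already partitioned the world into 8 independent clusters.
        std::vector<std::vector<Entity>> clusters(8, std::vector<Entity>(10000));

        std::vector<std::thread> workers;
        for (auto& cluster : clusters) {
            workers.emplace_back(simulate_cluster, std::ref(cluster), dt);
        }
        for (auto& w : workers) {
            w.join();
        }

        std::printf("entity 0 of cluster 0 moved to x=%f\n", clusters[0][0].x);
        return 0;
    }

The point is just that once the per-frame work is split into mostly independent chunks, throughput scales with core count rather than single-core speed, which is exactly the bet a "many cheaper cores" console design would be making.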


I think a PS6 needs to play PS5 games, or Sony will have a hard time selling them until the PS6 catalog is big; and they'll have a hard time getting 3rd-party developers if console sales are going to be a struggle. I don't think you're going to play existing PS5 games on an ARM CPU unless it's an "amazing" core. Apple does pretty well at running x86 code on their CPUs, but they added special modes to make it work, and I don't know how timing-sensitive PS5 games are --- when there's only a handful of hardware variants, you can easily end up with tricky timing requirements.


I mean, the PS4 didn't play PS3 games and that didn't hurt it any. Backwards compatibility is nice but it isn't the only factor.


The first year of PS4 was pretty dry because of the lack of BC; it really helped that the competition was the Xbox One, which was less appealing for a lot of reasons.


At this point, people have loved the PS5 and Xbox Series for having full backwards compatibility. The Xbox goes even further through software emulation.

And Nintendo’s long chain of BC from the GB to the 3DS (though eventually dropping GB/GBC) was legendary.

The Switch was such a leap over the 3DS and Wii U that Nintendo got away with it. It's had such a long life that having no BC could be a big blow if the Switch 2 didn't have it.

I think all three intend to keep it going forward at this point.


Which is also the reason why many games on PS5 and Xbox Series are kind of lame, as studios want to keep PS4 and Xbone gamers in the sales loop, and why the PS5 Pro is more of a scam kind of thing for hardcore fans who will buy anything a console vendor puts out.


One data point: there was no chip shortage at the PS4 launch, but I still waited more than a year to get one because there was little to play on it.

While with the PS5 I got one as soon as I could (that still took more than a year since launch, but for chip shortage reasons) because I knew I could simply replace the PS4 with it under the TV and carry on.


We're not in 2012 anymore. Modern players don't only want a clean break to play the new AAA games every month; they also want access to a large indie marketplace, the games they play every day, and better performance in the games they already have.


PS5 had Zen 2, which was fairly new at the time. If the PS6 targets 120 fps, they'll want a CPU with double the per-thread performance of Zen 2. You could definitely achieve this with ARM, but I'm not sure how new of an ARM core you would need.
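For scale, the per-frame CPU budget halves:

    1000 ms / 60 fps  ≈ 16.7 ms per frame
    1000 ms / 120 fps ≈  8.3 ms per frame

so if the simulation work per frame stays the same, that's roughly 2x the per-thread throughput needed, or more of the work has to come off the critical path.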


Is there a need to ever target 120 fps? Only the best-of-best eyes will even notice a slight difference from 60.


Yes.

You say that, but you can absolutely notice. Motion is smoother, the picture is clearer (higher temporal resolution), and input latency is half what it is at 60.

Does every game need it? Absolutely not. But high-speed action games and driving games can definitely benefit. Maybe others. There’s a reason the PC world has been going nuts with frame rates for years.

We have 120 fps on consoles today on a few games. They either have to significantly cut back (detail, down to 1080p, etc) or are simpler to begin with (Ori, Prince of Persia). But it’s a great experience.


My eyes are not best-of-best but the difference between 60 and 120hz in something first-person is dramatic and obvious. It depends on the content but there are many such games for consoles. Your claim that it's "slight" is one that only gets repeated by people who haven't seen the difference.


Honestly, I can't even tell the difference between 30 and 60. Maybe I'm not playing the right games or something but I never notice framerate at all unless it's less than 10-20 or so.


I would guess it's partly the games you play not having a lot of fast motion and maybe partly that you're not really looking for it.


I don't think my TV can display 120 fps and I'm not buying a new one. But they promise 4K 60 (with upscaling) on the PS5 Pro, so they have to have something beyond that for PS6.


They have 120 today, it’s just not used much.

Even if people stick to 4K 60, which I suspect they will, the additional power means higher detail and more enemies on screen and better ray tracing.

I think of the difference between the PS3 games that could run at 1080p and PS4 games at 1080p. Or PS4 Pro and PS5 at 4K or even 1440p.


Nvidia has very little desire to make the kind of high-end, razor-thin-margin chip that consoles traditionally demand. This is what Jensen has said, and it makes sense when there are other areas the silicon can be directed to with much greater profit.


>The problem with choosing Nvidia is that they can't make an x86 processor with an integrated GPU

Can't and not being allowed to are two very different things.


That's not an apples-to-apples comparison. The Switch is lower price and lower performance by design, and it used, even originally, a mature NVIDIA SoC, not really a custom design.


> much more popular console

which isn't a useful metric, because "being a good GPU" wasn't at all why the Switch became successful; you could say it became successful even though it had a pretty bad GPU. Though bad only in the performance aspect: as far as I can tell, back then AMD wasn't competitive on an energy-usage basis, and maybe not on a price basis either, as the Nvidia chips were a byproduct of Nvidia trying to enter the media/TV add-on/handheld market with stuff like the Nvidia Shield.

But yes, AMD isn't the only choice. IMHO, contrary to what many people seem to think, Intel is a viable choice too for the price segment most consoles tend to target. But then we are missing the relevant insider information to properly judge that.


> the Nintendo Switch, and Nvidia looks set to power the Switch 2

Which runs a very old mobile chip that was already outdated when the Switch came out. Unless Nintendo is planning to go with something high-end this time (e.g. to compete with the Steam Deck and other more powerful handhelds), whatever they get from Nvidia will probably be more or less equivalent to a mid-tier off-the-shelf Qualcomm SoC.

It's interesting that Nvidia is going along with that, since it will just depress their margins. I guess they want to re-enter the mobile CPU market and need something to show off.


We already have a good sense of what SoC Nintendo will likely be going with for the Switch 2.

Being so dismissive of the Switch shows the disconnect between what most gamers care about, and what some tech enthusiasts think gamers care about.

The Switch 1 used a crappy mobile chip, sure, but it was able to run tons of games that no other Tegra device could have dreamed of running, due to the power of having a stable target for optimization, with sufficiently powerful APIs available, and a huge target market. The Switch 1 can do 90% of what a Steam Deck can, while using a fraction of the power, thickness, and cooling. With the Switch 2 almost certainly gaining DLSS, I fully expect the Switch 2 to run circles around the Steam Deck, even without a “high end chip”. It will be weaker on paper, but that won’t matter.

I say this as someone who owns a PS5, a Switch OLED, an ROG Ally, and a fairly decent gaming PC. I briefly had an original Steam Deck, but the screen was atrocious.

Most people I see talking about Steam Deck’s awesomeness seem to either have very little experience with a Switch, or just have a lot of disdain for Nintendo. Yes, having access to Steam games is cool… but hauling around a massive device with short battery life is not cool to most gamers, and neither is spending forever tweaking settings just to get something that’s marginally better than the Switch 1 can do out of the box.

The Switch 1 is at the end of its life right now, but Nintendo is certainly preparing the hardware for the next 6 to 8 years.


> Being so dismissive of the Switch shows the disconnect between what most gamers care about, and what some tech enthusiasts think gamers care about.

What makes you think I am? Hardware-wise it's the equivalent of an unremarkable, ancient Android tablet, yet it's pretty exceptional what Nintendo managed to achieve despite that.

> The Switch 1 can do 90% of what a Steam Deck can

That's highly debatable and almost completely depends on what games specifically you like/play. IMHO PC gaming and Nintendo have relatively little overlap (e.g. compared to PS and Xbox at least).

> Steam Deck’s awesomeness

I never implied that the Switch was/is/will be somehow inferior (besides potentially having a slower CPU & GPU).

> but Nintendo is certainly preparing the hardware for the next 6 to 8 years

It's not obvious that they were doing that the first time around, and they still did fine, so why would they change their approach this time? (Albeit there weren't necessarily that many options on the market back then, but it was still a ~2-year-old chip.)


> IMHO PC gaming and Nintendo have relatively little overlap (e.g. compared to PS and Xbox at least).

That was true back in the Wii era, because there was nothing remarkable about the Wii apart from its input method. It was "just another home console" to most developers, so why bother going through the effort to port their games from more powerful consoles down to the Wii, where they will just look bad, run poorly, and have weird controls?

With the Nintendo Switch, Nintendo found huge success in third party titles because everyone who made a game was enthusiastic about being able to play their game portably, and the Switch made that possible with hardware that was architecturally similar to other consoles and PCs (at least by comparison to previous handheld gaming consoles), which made porting feasible without a complete rewrite.

In my opinion, basically the only console games that aren't available on Switch at this point are the very most recent crop of high-end games, which the Switch is too old to run, as well as certain exclusives. If the Switch were still able to handle all the third party ports, then I don't even know if Nintendo would be interested in a Switch 2, but they do seem to care about the decent chunk of money they're making from third party games.

The overlap with PC is the same as the overlap between PC and other consoles... which is quite a lot, but doesn't include certain genres like RTSes. They've tried bringing Starcraft to console before, and that wasn't very well received for obvious reasons, haha

> It's not obvious that they were the first time and still did fine, why would they change their approach this time

I'm not sure I was saying they would change their approach... the Switch 1 is over 7 years old at this point. I was just saying they're preparing the next generation to last the same amount of time, which means finding sufficiently powerful hardware. The Switch 1 was sufficiently powerful, even for running lots of "impossible" ports throughout its lifetime: https://www.youtube.com/watch?v=ENECyFQPe-4

All of that to say, I am a big fan of the Switch OLED. I'm honestly ready to sell my ROG Ally, just because I never use it. But, being a bit of a contradiction, I am also extremely interested in the PS5 Pro. My PS5 has been a great experience, but I wish it had graphical fidelity that was a bit closer to my PC... without the inconvenience of Windows. For a handheld, I care a lot about portability (small size), battery life, and low hassle (not Windows, and not requiring tons of tweaking of graphics settings to get a good experience), and the Switch OLED does a great job of those things, while having access to a surprisingly complete catalog of both first-party and third-party games.


Actually, there is one remarkable thing about the Wii, though it hardly matters in this context: it was one of the very few consoles out there that actually had something related to OpenGL, namely the shading language and how the API was designed.

Many keep thinking that Khronos APIs have any use on game consoles, which is seldom the case.

