> Basically the vibe I get from it is that they think their users are dumb
Your point would have been much more convincing had you refrained from this sort of pejorative assigning of motives. It wasn't necessary.
I've been running the betas to the final release and there are a number of basic affordances and system improvements that are definitely worthwhile. I will not be going back.
Having said that, while I know they had good intentions with this whole design, and probably really thought they were pursuing a winner, what a massive, massive miss. This is such an aesthetic disaster that I'm just in awe. I feel like they had a huge push to do some seemingly substantial change, particularly on the mobile side, given the stumbles in the AI space, so they changed a lot of things maybe without quite enough thought.
Ugly as hell. More dead space. On the mobile side they released an update to iOS just today from the RC a few days ago that removes some of the particularly stupid animations (the app tray did some dumb thing where it expanded and shrank, and that and a few similar things are gone).
Yet the majority of businesses use dark patterns to avoid cancellations because it's hugely profitable to do so. Chargebacks are expensive, but the truth is that the majority of customers never leverage them, and often just endure years of paying for products they don't use. Maybe they tried to cancel, only to find that while they could sign up in seconds online, cancelling invariably requires a call (to a number that, wouldn't you believe it, has higher than normal call volumes!) into some maze of retention garbage.
What a world it would be if companies didn't want to bill customers who don't use their product. Imagine if companies automatically paused billing when you stopped using their product. Panacea.
Apple is a hugely greedy company, but it's one thing I like about subscribing to things in there -- I can cancel at any time with minimal effort.
They do, but the vast majority of fluids the average person consumes come in products made elsewhere, along with restaurants, etc. So you can RO-filter your home water, but unless you also eat nothing made elsewhere, water your own crops, etc., you need comprehensive protections to avoid them.
We posted the same thing, in essence, at the same time. This piece is completely nonsensical in every way, and I presume it is targeted at laymen who'll just go along with it. Anyone who reads that last bit about MLX and CoreML and doesn't realize the author seems to have no clue what they're talking about should understand that they're being duped.
Apple adopted a new cooling technique on their highest-end device to differentiate and give spec-sheet chasers something to be hyped about. It should help reduce throttling in the rare event that someone runs a mobile device at 100% continuously, which almost never happens in normal usage. It's already in the Pixel 9 Pro, for instance, and is a new "must have". It has nothing to do with whatever app these guys were building.
The rest of the nonsense is just silly. If you are building an app for a mobile device and it pegs the CPU and GPU, you're going to have a bad time. That's the moment you realize it's time to go back to the drawing board.
Our app wasn't running on CPU or GPU -- the actual software we built was running entirely on the Apple Neural Engine, and it was crazy fast because we designed the architecture explicitly to run on that specific chip.
We were just calling the iPhone's built-in face tracking system via the Vision Framework to animate the avatars. That's the thing that was running on GPU.
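For reference, the Vision call looks roughly like this -- a sketch, not our actual avatar code, and the landmark handling here is hypothetical:

```swift
import Vision
import CoreVideo

// Sketch: per-frame face landmark detection via the Vision framework.
// How the landmarks drive the avatar rig is app-specific and omitted.
func trackFace(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Normalized landmark points; a real app maps these onto the rig.
            if let points = face.landmarks?.allPoints?.normalizedPoints {
                print("tracked \(points.count) landmark points")
            }
        }
    }
    // Vision schedules this work itself, typically on the GPU or ANE.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```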
Okay, though I'm not sure what that has to do with my comment. I understood that from the post: you were concurrently maxing out multiple parts of the SoC and it was overheating as they all contributed to the thermal load. This isn't new or novel -- benchmarks that saturate both the CPU and GPU are legendary for throttling -- though the claim that somehow normal thermal management didn't protect the hardware is novel, albeit entirely unsubstantiated.
That is neither here nor there on CoreML -- which also uses the CPU, GPU, and ANE, and sometimes a combination of all of them -- or the weird thing about MLX.
I don't get what's so weird about MLX. Apple's focus is obviously on MLX / Metal going forward.
The only reason to use CoreML these days is to tap into the Neural Engine. When building for CoreML, if one layer of your model isn't compatible with the Neural Engine, it all falls back to the CPU. Ergo, CoreML is the only way to access the ANE, but it's a buggy, all-or-nothing gambit.
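To make that concrete, the only control you get is the compute-units hint on the model configuration. A minimal sketch (the model path is hypothetical):

```swift
import CoreML

// Minimal sketch: asking Core ML to prefer the Neural Engine.
func loadForANE() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine  // also: .all, .cpuOnly, .cpuAndGPU
    let url = URL(fileURLWithPath: "/path/to/Model.mlmodelc")  // placeholder path
    // If a layer can't run on the ANE, Core ML falls back silently;
    // this API gives you no per-layer control over that decision.
    return try MLModel(contentsOf: url, configuration: config)
}
```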
Have you ever actually shipped a CoreML model or tried to use the ANE?
>Apple's focus is obviously on MLX / Metal going forward.
This is nonsensical.
MLX and CoreML are orthogonal. MLX is about training models. CoreML is about running models, or ML-related jobs. They solve very different problems, and MLX patches a massive hole that existed in the Apple space.
Anyone saying MLX replaces CoreML, as the submission does, betrays that they are simply clueless.
>The only reason to use CoreML these days is to tap into the Neural Engine.
Every major AI framework on Apple hardware uses CoreML. What are you even talking about? CoreML, by the very purpose of its design, uses any of the available computation subsystems, which on the A19 will include the matmul units on the GPU. Anyone who thinks CoreML exists to use the ANE simply doesn't know what they're talking about. Indeed, the ANE is so limited in scope and purpose that it's remarkably hard to actually get CoreML to use it.
>Have you ever actually shipped a CoreML model or tried to use the ANE?
Literally a significant part of my professional life, which is precisely why this submission triggered every "does this guy know what he's talking about" button.
Yes, MLX is for research, but MLX-Swift is for production and it works quite well for supported models! Unlike CoreML, the developer community is vibrant and growing.
Maybe I am working on a different set of problems than you are. But why would you use CoreML if not to access the ANE? There are so many better, newer options like llama.cpp, MLX-Swift, etc.
What are you seeing here that I am missing? What kind of models do you work with?
I know what MLX is. MLX-Swift is just a more accessible facade, but it's still MLX. The entire raison d'être for MLX is training and research. It is not a deployment library. It has zero intention of being a deployment library. Saying MLX replaces CoreML is simply nonsensical.
> But why would you use CoreML if not to access ANE?
The whole point of CoreML is hardware-agnostic operations, not to mention higher-level operations for most model touchpoints. If you went into this thinking CoreML = ANE, that's just fundamentally wrong from the beginning. The ANE is one extremely limited path for CoreML models. The vast majority of CoreML models will end up running on the GPU -- using Metal, it should be noted -- aside from some hyper-optimized models for core system functions, but if/when Apple improves the ANE, existing models will just use that as well. Similarly, when you run a CoreML model on an A19-equipped unit, it will use the new matmul instructions where appropriate.
That's the point of CoreML.
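If you'd rather see that dispatch than take my word for it, recent SDKs let you inspect the compute plan. A rough sketch -- the model URL is a placeholder, and I'm going from memory on the exact MLComputePlan member names:

```swift
import CoreML

// Sketch: inspecting which device Core ML picks for each operation
// in an ML program. Requires a recent SDK exposing MLComputePlan.
func printDispatch(modelURL: URL) async throws {
    let plan = try await MLComputePlan.load(contentsOf: modelURL,
                                            configuration: MLModelConfiguration())
    guard case .program(let program) = plan.modelStructure,
          let main = program.functions["main"] else { return }
    for op in main.block.operations {
        // preferred is .cpu, .gpu(...), or .neuralEngine(...) per operation
        let preferred = plan.deviceUsage(for: op)?.preferred
        print(op.operatorName, String(describing: preferred))
    }
}
```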
Saying other options are "better, newer" is just weird and meaningless. Not only is CoreML rapidly evolving and able to support just about every modern model feature, but in most benchmarks of CoreML vs. people's hand-crafted Metal, CoreML smokes them. And then you run it on an A19 or the next M# and it leaves them crying for mercy. That's the point of it.
Can someone hand-craft some Metal and implement their own model runtime? Of course they can, and some have. That is the extreme exception, and no one in here should think that has replaced anything.
It sounds like your experience differs from mine. I oversaw teams trying to use CoreML in the 2020-2024 era who found it very buggy, as per the screenshots I provided.
More recently, I personally tried to convert Kokoro TTS to run on ANE. After performing surgery on the model to run on ANE using CoreML, I ended up with a recurring Xcode crash and reported the bug to Apple (as reported in the post and copied in part below).
What actually worked for me was using MLX-audio, which has been great as there is a whole enthusiastic developer community around the project, in a way that I haven't seen with CoreML. It also seems to be improving rapidly.
In contrast, I have talked to exactly one developer who has ever used CoreML since ChatGPT launched, and all that person did was complain about the experience and explain how it inspired them to abandon on-device AI for the cloud.
___
Crash report:
A Core ML model exported as an `mlprogram` with an LSTM layer consistently causes a hard crash (`EXC_BAD_ACCESS` code=2) inside the BNNS framework when `MLModel.prediction()` is called. The crash occurs on M2 Ultra hardware and appears to be a bug in the underlying BNNS kernel for the LSTM or a related operation, as all input tensors have been validated and match the model's expected shape contract. The crash happens regardless of whether the compute unit is set to CPU-only, GPU, or Neural Engine.
*Steps to Reproduce:*
1. Download the attached Core ML models (`kokoro_duration.mlpackage` and `kokoro_synthesizer_3s.mlpackage`)
2. Create a new macOS App project in Xcode. Add the two `.mlpackage` files to the project's "Copy Bundle Resources" build phase.
3. Replace the contents of `ContentView.swift` with the code from `repro.swift`.
4. Build and run the app on an Apple Silicon Mac (tested on M2 Ultra, macOS 15.6.1).
5. Click the "Run Prediction" button in the app.
*Expected Results:*
The `MLModel.prediction()` call should complete successfully, returning an `MLFeatureProvider` containing the output waveform. No crash should occur.
*Actual Results:*
The application crashes immediately upon calling `model.prediction(from: inputs, options: options)`. The crash is an `EXC_BAD_ACCESS` (code=2) that occurs deep within the Core ML and BNNS frameworks. The backtrace consistently points to `libBNNS.dylib`, indicating a failure in a low-level BNNS kernel during model execution. The crash log is below.
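For anyone skimming without the attachments, the failing call in `repro.swift` is essentially of this shape; the feature names and tensor shapes here are placeholders rather than the model's actual input contract:

```swift
import CoreML

// Rough shape of the failing call; "tokens" and the [1, 128] shape are
// placeholders, not the actual Kokoro contract from the attached repro.swift.
func runPrediction(modelURL: URL) throws {
    let model = try MLModel(contentsOf: modelURL)  // URL of the compiled model
    let tokens = try MLMultiArray(shape: [1, 128], dataType: .int32)
    let inputs = try MLDictionaryFeatureProvider(dictionary: ["tokens": tokens])
    let options = MLPredictionOptions()
    // The EXC_BAD_ACCESS fires inside this call, down in libBNNS.dylib:
    let outputs = try model.prediction(from: inputs, options: options)
    print(outputs.featureNames)
}
```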
I can't speak to how CoreML worked for you, or how the sharp edges cut. I triply wouldn't comment on the ANE, which is an extremely limited bit of hardware mostly targeted at energy-efficient running of small, quantized models with a subset of features. For instance, extracting text from images.
CoreML is pervasively used throughout iOS and macOS, and this is more extensive than ever in the 2025 versions. Zero percent of the system uses MLX for the runtime. The submission's incredibly weird and nonsensical contention that because the ANE doesn't work for them, Apple is therefore admitting something, is just laughable silliness.
And FWIW, people's impressions of the tech world from their own incredibly small bubble is often deeply misleading. I've read so many developers express with utter conviction that no one uses Oracle, no one uses Salesforce, no one uses Windows, no one uses C++, no one uses...
In my conversations with people at Apple, my understanding is that they do not use CoreML. Instead, they have access to lower level libraries that allow more direct programmatic control of the hardware.
CoreML is the crappy middleware they made for 3rd party devs, but never got much love and never took off.
Re: ANE -- as stated, the ANE is crazy fast when it works. Yes, it's also more power efficient, but the reason I think it's actually worth building on is being able to make consumer products where the entire experience depends on speed.
I think you can agree that 5 milliseconds to generate a 512 px resolution image is absolutely insane speed.
This is such an odd submission, and a lot of the claims are bizarre and seemingly nonsensical. Maybe I'm just misunderstanding. Exchanges in it seem remarkably...fictional. It reads like a tosser peacocking LinkedIn post by someone desperately trying to scam some rubes.
It also seems like one of those self-aggrandizing things that tries to spin everything as a reaction to themselves, instead of just technology progressing. No, vapour chamber cooling isn't some grand admission, it's something that a variety of makers have been adopting to reduce throttling as a spec-sheet item of their top end devices. It isn't all about you.
And given that the base 17 doesn't have VCC, I guess Apple isn't "admitting" it at all, no?
And the CoreML v MLX nonsense at the end is entirely nonsensical and technically ignorant. Like, wow.
No one should learn anything from this piece. The author might know what they're talking about (though I am doubtful), but this piece was "how to make an Apple event about ourselves" and it's pretty ridiculous.
> And given that the base 17 doesn't have VCC, I guess Apple isn't "admitting" it at all, no?
It will be fun to see how hot the iPhone Air gets since it has the same chip as the 17 Pro (w/ one fewer GPU core), but a less thermally conductive metal and no vapor chamber.
I imagine it will be a lot like the MacBook Air, in that it just thermal throttles faster. It has the same chip as the Pro but will never see the same _sustained_ performance.
This will be one of those situations where we'll really miss Anandtech. Still can't believe that site died.
In the real world I doubt anyone will ever notice the difference, VCC or not. VCC will only materially affect usage when someone is doing an activity that hits throttling, which is extraordinarily rare in normal use and usually only comes into play in benchmarking. The overwhelming majority of the time we peg those cores for a tiny amount of time and get a quick Animoji or text extraction from an image, and so on. Even the "AI" usage on a mobile device is extremely peaky.
Ultraviolet light is ionizing. Things oxidize and often whiten in the sun because UV light (the part of the UV spectrum below ~315 nm) ionizes and drives chemical reactions, in most cases by splitting O2 into reactive O atoms that then attack other molecules.
445 nm light isn't ionizing at any brightness, and shouldn't be catalyzing oxidation. I didn't look at it in detail, but what is their claimed mechanism?
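For reference, the photon-energy arithmetic is why I say that (the bond-energy figures below are textbook values, not from their page):

```latex
E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{\lambda}
\qquad\Rightarrow\qquad
E(315\,\text{nm}) \approx 3.9\ \text{eV}, \quad E(445\,\text{nm}) \approx 2.8\ \text{eV}
```

Typical organic single bonds sit around 3.5-4.5 eV, so a ~3.9 eV UV-B photon can break them while a 2.8 eV photon at 445 nm cannot, at any intensity -- photochemistry is a one-photon-per-event affair.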
> if you want to assassinate a culture warrior jerkwad at a public event
The root post's comparison was to someone beside you at the supermarket, rather than "sniper at a distance". The capacity to kill is almost universally distributed, it's just that the vast majority of us are not murderers.
But sure, it's actually one of the justifications for the 2nd Amendment. Firearms really are sort of an equalizer, and they do more equally distribute risk to even the most powerful.
You can't make a targeted killing at a supermarket any easier with your car or cleaning products either. Not sure how that changes the calculus. If you want to kill someone with non-gun products, it's very difficult: the evidence being the notably higher number of gun killings over poisonings or deliberate collisions.
With guns, it's literally just a button push kind of UI. That this is controversial is just insane to me. Every 2A nut knows that guns are effective killing machines, that's why they like guns. Yet we end up in these threads anyway watching people try to deny it.
Please try citing numbers if you want to make a numeric argument. The USA has four times as many guns per capita as Finland. And in fact Finland has a much higher gun death rate than the rest of industrial Europe (about 3-4x that of the UK or Germany, for example), which has fewer guns. Finland is, to be sure, safer than the US, with about half the per-capita-per-gun fatality rate. So sure, you can do better than the US without reducing guns.
But clearly guns are the obviously most important driving variable here, and to argue otherwise is just silly.
> The USA has four times as many guns per capita as Finland
42% of US households have one or more guns. 37% of Finland households have one or more guns. That US collectors are aficionados doesn't seem relevant. Access to guns is similar.
> And in fact Finland has a much higher gun death rate
This is an amazing claim given everything we've talked about. Finland's homicide rate is the same as Germany's, and significantly lower than the UK. Do you understand how catastrophic this is for your very argument?
There are more guns so murderous people use them, but murderous people have other methods otherwise, as seen by the UK having over 40% more murders despite having 1/7th the number of households with guns...
"objectively the best place to live in the world rn"
I feel like you were just patronizing the crowd and this is pablum, but the US is one of the angriest, most dissatisfied countries on the planet. It always does poorly on happiness metrics, doesn't do great on corruption indexes, and has a median lifespan and a child mortality rate closer to the developing-world range.
In no universe is there an objective reality where it's the best place to live.
But too much is made about deadly weapons. Every one of us has access to knives. Most of us drive 5000lb vehicles, with which a flick of the wrist could kill many. We all have infinite choices in our life that could take lives.
But we don't, because ultimately there are social issues at play that are simply more important than access to weapons. Loads of countries have access to weapons and it doesn't translate into murder rates at all.
While I understand that people like a base vocabulary of the common elements defined in a list, it has always seemed like a mistake that we keep adding to some massive list for every fringe demand, instead of just embedding tiny SVGs that can be perfectly tailored to every single platform, niche, industry, and so on.