alpha64's comments | Hacker News

Isn't this why the MNT Reform, and possibly others, use a DSI to eDP converter - specifically to avoid having to play this game?


Author here. Yes - the obvious way to wire up the internal display of the MNT Reform would have been to use eDP directly, but since that would have required using the blob, they used the i.MX8M's MIPI DSI interface instead, paired with a DSI to eDP converter.


Shouldn't you have mentioned this in your article? The way the article is written makes it sound like the MNT Reform suffers from this flaw, when your clarification shows it does not.


Not sure what you're referring to. As I state in the article:

>Therefore, it is impossible to ever replace the HDMI blob used by this device. The device could be used without this blob, but you then forego use of the HDMI (or DisplayPort) functionality.

If you use the MNT Reform without the blob you can never use certain features of the device, namely the external HDMI port, so it's not as though the MNT Reform is without flaws. In any case the article is about the i.MX8M, not any specific device.


You explicitly mention the MNT Reform and the Librem 5, which use this chip. I agree that you never say that any given device can't work around this, but that is what I understood from the way this was presented. I was vaguely familiar with the MNT Reform and went looking for that converter chip because I thought they had a solution to this, which they do.

Perhaps if the last bullet point in the article said something about not being able to use HDMI or DisplayPort without a converter chip, it would have been clearer to me.


I've updated the article to clarify this. Thanks for the feedback!


Should append [2015] to the submission title. This is practically ancient news due to the rate of changes in the Linux kernel in this domain.


Is there a 2023 version? What parts are obsolete now?


The CFS scheduler went into Linux in 2007 and is still the main scheduler, as reflected in this paper, so nothing major has changed in that regard. Most changes are minor.

The paper talks about process priority and niceness - a lot of how that works on Unix has not changed in almost fifty years.
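
For example, the classic niceness knob is still exposed the same way it always has been; a quick sketch using Python's os module on any Unix-like system:

    import os

    # os.nice(increment) adds increment to the process's niceness and
    # returns the new value; an increment of 0 just reads it. Higher
    # niceness means lower scheduling priority, as it has for decades.
    print("current niceness:", os.nice(0))
    os.nice(10)  # politely yield CPU time to other processes
    print("new niceness:", os.nice(0))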


They already know about people who are trying to access Signal without a proxy, so I don't think this would make a significant difference. Also note, from the Signal blog post above:

----

The Signal client establishes a normal TLS connection with the proxy, and the proxy simply forwards any bytes it receives to the actual Signal service. Any non-Signal traffic is blocked. Additionally, the Signal client still negotiates its standard TLS connection with the Signal endpoints through the tunnel.

This means that in addition to the end-to-end encryption that protects everything in Signal, all traffic remains opaque to the proxy operator.

----
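
To make the forwarding step concrete, here's a minimal sketch of that byte-shuttling loop in Python - not Signal's actual proxy implementation, and it omits the filtering that blocks non-Signal traffic. The upstream host and certificate paths are assumptions for illustration:

    import asyncio, ssl

    UPSTREAM_HOST, UPSTREAM_PORT = "chat.signal.org", 443  # assumed endpoint

    async def pipe(reader, writer):
        # Shuttle bytes one direction until EOF. The proxy never decrypts
        # the inner Signal TLS session; it only forwards its ciphertext.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_client(client_reader, client_writer):
        # By the time we get here, the *outer* TLS handshake with the
        # client is done; everything that follows is the client's *inner*
        # TLS stream to Signal, opaque to the proxy operator.
        up_reader, up_writer = await asyncio.open_connection(
            UPSTREAM_HOST, UPSTREAM_PORT)
        await asyncio.gather(pipe(client_reader, up_writer),
                             pipe(up_reader, client_writer))

    async def main():
        # The proxy's own certificate terminates the outer TLS connection.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # assumed paths
        server = await asyncio.start_server(handle_client, "0.0.0.0", 443,
                                            ssl=ctx)
        async with server:
            await server.serve_forever()

    asyncio.run(main())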


You sorted by single core performance, then compared multi core performance. Sort by multi core performance, and you will see that the i9-11900K is nowhere near the top spot.

For example, the Ryzen 9 5950X has single/multi core scores of 1,688/16,645 - which is higher in multi core score than the M1 Max, but lower in the single core.
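
To illustrate the mix-up (the 5950X scores are from this comment and the M1 Max scores from a sibling comment below; the i9-11900K numbers are placeholders, not real scores):

    # Geekbench 5 (single, multi) scores; i9-11900K values are placeholders.
    chips = [
        ("Ryzen 9 5950X", 1688, 16645),
        ("M1 Max",        1783, 12693),
        ("i9-11900K",     1800, 11000),  # placeholder figures
    ]

    by_single = max(chips, key=lambda c: c[1])
    by_multi  = max(chips, key=lambda c: c[2])
    print("top by single-core:", by_single[0])  # a different chip than...
    print("top by multi-core:",  by_multi[0])   # ...the multi-core winner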


Interestingly, the iPhone's A15 SoC did get a newer version of Apple's big core this year.

>On an adjacent note, with a score of 7.28 in the integer suite, Apple’s A15 P-core is on equal footing with AMD’s Zen3-based Ryzen 5950X with a score of 7.29, and ahead of M1 with a score of 6.66.

https://www.anandtech.com/show/16983/the-apple-a15-soc-perfo...

On floating point, it's slightly ahead. 10.15 for the A15 vs. 9.79 for the 5950X.


Which is still not that much higher. Of the "consumer" CPUs, only the 5900X and 5950X score higher - and their power draw under stress is about 2x the M1 Max's speculated draw.


That's maybe not a bad way to sort? Most of the time I'm interacting with a computer, I'm waiting for some single thread to respond, so I want to maximize that, then look over a column to see if it will be adequate for bulk compute tasks as well.


Perhaps they were referencing the highest-ranked 8-core chip. Certainly, a 5950X is faster, but it also has double the number of cores (counting only the performance cores on the M1; I don't know whether the 2 efficiency cores contribute anything on the multi-core benchmark). Not to mention the power consumption differences - one is in a laptop and the other is a desktop CPU.

Looking at 1783/12693 on an 8-core CPU shows about a 10% scaling penalty from 1 to 8 cores. Suppose a 32-core M1 came out for the Mac Pro that scaled at only 50% per core - it would still score over 28,000, compared to the real-world top scorer, the 64-core 3990X at 25,271.
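
Working through that arithmetic (scores as quoted in this thread; the 32-core part is hypothetical):

    single, multi, cores = 1783, 12693, 8          # M1 Max, Geekbench 5

    efficiency = multi / (single * cores)          # ~0.89, i.e. ~10% penalty
    print(f"1-to-8-core scaling efficiency: {efficiency:.0%}")

    # Hypothetical 32-core M1 scaling at only 50% per core:
    projected = single * 32 * 0.5                  # ~28,500
    print(f"projected: {projected:,.0f} vs 25,271 for the 64-core 3990X")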


M1 Max has 10 cores.


But the two efficiency cores are less than half a main core though, right?


1/3 the performance, but 1/10 the power. Not adding more was a mistake IMO. Maybe next time...


Really? I mean if it gets me 10-14h coding on a single charge that’s awesome…


The A15 efficiency cores will be in the next model. They offer A76-level performance (flagship level for Android in 2019-2020), but use only a tiny bit more power than the current efficiency cores.

At that point, their E-cores will have something like 80% the performance of a Zen 1 core. Zen 1 might not be the new hotness, but lots of people are perfectly fine with their Threadripper 1950X, which Apple could almost match with 16 E-cores and only around 8 watts of peak power.
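
A quick back-of-envelope on that claim, using only the rough figures above:

    e_core_vs_zen1 = 0.8   # assumed E-core performance relative to a Zen 1 core
    e_cores = 16
    watts = 8              # claimed peak power for all 16 E-cores

    print(f"{e_cores * e_core_vs_zen1:.1f} Zen 1 core equivalents "
          f"at ~{watts} W, vs 16 cores in a Threadripper 1950X")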

I suspect we'll see Apple joining ARM in three-tiered CPUs shortly. Adding a couple of in-order cores just for tiny system processes that wake periodically but don't actually do much makes a ton of sense.


Still 8 more than my desktop PC :p


The 3950X is also an AM4 part (non-Threadripper).

