akerl_'s comments | Hacker News

Is there no line, in your opinion? At this point, there are computers (many of which run variants of Linux) in my:

1. Laptop

2. Phone

3. Car

4. Washing machine

5. Handheld GPS

6. E-reader

7. TV

Is there some intrinsic difference between a device the manufacturer has programmed using an ARM/x86-based chip vs. a microcontroller vs. some other method, such that in the first case I have the right to install whatever I want? Because that feels like what's happened with cell phones: manufacturers started building them with more capable and powerful components to drive the features they wanted to include, and because those components overlapped with what we'd seen in desktop computers, we've decided that we have an intrinsic right to treat them like we historically treated those computers.


For everything on that list, I'd say that if you figure out how to run software of your choice on them the manufacturer shouldn't be able to legally stop you. (And specifically, the anti-circumvention clauses of the DMCA are terrible).

Phones get a lot of attention in this regard because they've replaced a large amount of PC usage, so locking them down has the effect of substantially reducing computing freedom.


This is sort of delightfully circular?

> I'd say that if you figure out how to run software of your choice on them the manufacturer shouldn't be able to legally stop you.

That's already the case. The manufacturer can't come after you for anything you do to your device. They can:

1. Set up their terms of service so that things you do to alter the device are grounds for blocking your access to cloud/connected services that they host on their infrastructure

2. Attempt to make it difficult to run software of your choice.

3. Use legal means to combat specific methods of redistributing tools to other people that circumvent the protections from number 2.


There is already a widespread notion of a "general computing" device.

For all intents and purposes, a laptop computer and a smart phone are both one. This is evidenced, for example, by the fact that we run general-purpose "applications" on them (not defined ahead of time), including the most general app of them all (a web browser).

For the other device types you bring up, I would go with a very similar distinction: when you can run an open-ended app platform like a browser, why not be able to install non-browser-based applications as well? Why require going through a vendor to do that?


"why not" isn't a compelling case for something to be a fundamental right.

I'm not saying I dislike the concept of being able to run my own code on my devices. I love it. I do it on several devices, some of which involve circumventing manufacturer restrictions or controls.

I just don't think that because manufacturers started using the same chips in phones as computers, they magically had new requirements applied to them. Phones had app stores before they were built using the same chips. My watch lets me install apps from an app store.


You've asked for an intrinsic difference between classes of devices: no, you are unlikely to want to run general purpose apps on your washing machine. Yes, you are likely to do so on your smart phone. Probable on your modern "smart TV". Low probability on your eReader.

Legislation like the EU Cybersecurity Act hopefully pushes things toward being a fundamental right by demanding that devices not go into the trash pile as soon as the vendor stops issuing security updates: it mandates an ability to keep operating these devices without negatively affecting the Internet at large (by, for example, becoming part of a botnet).

This is already possible with many general compute devices by putting an up-to-date version of GNU/Linux or FreeBSD or... on them. And for a smaller subset of GC smartphones, with AOSP-based Android.


I'm not asking for an intrinsic difference: I'm suggesting that if "I can install custom applications/code on this device I own" is a fundamental right, there would need to be an intrinsic difference. My personal opinion is that there is not an intrinsic difference, and that "I want to do it to these devices and not those" can't be the justification for it being a right.

To counter your claim, I've tried to explain what that intrinsic difference is in my previous comment.

I am not sure if you are disagreeing with me or ignoring my point :)


The only one that sounds potentially harmful is the car, and in that case I think it should have to meet emissions standards and prove you aren't running a defeat device, but like... yeah. I should be allowed to run my own infotainment system that doesn't crash and doesn't spy on me.

I'd like to be able to install my own software on all of these

I'm not asking what you'd like to do. I'd like to be able to customize all of those things too.

I'm asking why taking a device that uses a microcontroller and making a new model with an ARM chipset and a Linux-based OS seems to suddenly make people treat the ability to install custom software on it as a fundamental right.


If I own it, regardless of if it's Linux or ARM based, I should be able to install things on it.

Video game consoles?

Good catch. They follow a pattern similar to phones: there are all kinds of projects and tools built around making custom and modded games for the Game Boy, or hacking the NES, but there wasn't a movement saying Nintendo was violating our fundamental rights by not allowing users to overwrite or modify the code inside the actual console.

Then consoles started shipping with recognizable internals, and we had waves of people very frustrated at things like Sony's removal of OtherOS, or Nintendo's attempts to squash the exploits that enabled Wii Homebrew.


Yes, you absolutely should have the right to install (or uninstall) whatever software you want on any of those, assuming it contains writable program memory. The alternative is a nightmarish dystopian future where your washing machine company is selling its estimate of your political inclinations, sexual activities, and risk aversion to your car insurance company, your ex-husband, your trade union representative, and your homeowners' association.

I thought I had this line, but then I imagined my credit card having writable program memory: I'd be fine with a third party preventing me from using it for its intended purpose if it wasn't trusted there. There must be some purpose, for my own good, behind preventing me from writing to my own program memory, and I should be able to override that protection if I deem it worth it.

Likewise, I'd be fine with banking apps on phones requiring some level of trust, but it shouldn't affect how the rest of my phone works so drastically.


Why would your credit card need to act against your interests? The only thing it should be doing is signing transactions to signal that you approve. The credit card company has their own computers that can be consulted to ask them if they approve a transaction. They don't need one in your pocket. They can rent a rack in a data center. It's not that expensive.

Similarly, the banking app on your phone should be representing your interests, 100%. It may need to keep secrets, such as a private transaction signing key, from your bank or from your boyfriend, but not from you. And it definitely should not be collecting information on your phone against your will or without your knowledge. But that is currently common practice.
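As a rough sketch of that "the card only signs, the issuer decides" model (assuming Python with the cryptography package; the key handling and the transaction format here are made up for illustration):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The keypair lives on the card; the private half never leaves it.
    card_key = Ed25519PrivateKey.generate()
    issuer_copy_of_public_key = card_key.public_key()

    # The cardholder approves a transaction, so the card signs it.
    transaction = b"pay merchant=examplecorp amount=42.00 currency=USD nonce=7f3a"
    signature = card_key.sign(transaction)

    # The issuer verifies on its own servers (the rack in the data center)
    # and separately decides whether *it* approves the charge.
    # verify() raises InvalidSignature if the transaction was altered.
    issuer_copy_of_public_key.verify(signature, transaction)
    print("cardholder approval verified")

Nothing in that flow requires the issuer to control code on your phone; it only requires that the signing key stays secret.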


Why?

My washing machine could be programmed to do all of those things you're worried about without any writable memory. Why do the parts the manufacturer puts into it turn it from an appliance that washes my clothes into a computer that I have a right to install custom code on?


The principle is that the owner should have full control of their own device, because that's what defines private property. In particular, everything that the maker can make the device do must be something that the owner can make the device do. If the device is simply incapable of doing a certain thing, that might be bad for the owner, but it's not an abrogation of their right to their own property, and it doesn't create an ongoing opportunity for exploitation by the maker.

Maybe in theory your washing machine could be programmed to do those things without writable program memory. Like, if you fabricated custom large ROM chips with the malicious code? And custom Harvard-architecture microcontrollers with separate off-chip program and data buses? But then the functionality would in theory be detectable at purchase time (unlike, for example, Samsung's new advertising functionality: https://news.ycombinator.com/item?id=45737338), and you could avoid it by buying an older model that didn't have the malicious code. This would greatly reduce the maker's incentives to incorporate such features, even if it were possible. In practice, I don't think you could implement those features at all without writable program memory, even with the custom silicon designs I've posited here.

If you insist that manufacturers must not prevent owners from changing the code on their devices, you're insisting that they must not use any ROM, for any purpose, including things like the PLA that the 6502 used to decode instructions. It's far more viable, and probably sufficient, to insist that owners must be able to change any code on their devices that manufacturers could change.


Why?

What negative thing happens to the dnsmasq project if they just don’t argue about whether or not it’s a big deal?


Some product decides not to use it. Someone loses a contract supporting it. Someone doesn't get a job because their work isn't favored anymore.

I think you're trying to invoke a frame where, because dnsmasq is "open source", it isn't subject to market forces or doesn't define value in a market-sensitive way. And... it is, and it does.

Free software hippies may be communists at heart but they still need to win on a capitalist battlefield.


> The developer typically defines its threat model.

Is this the case? As we're seeing here, getting a CVE assigned does not require input or agreement from the developer. This isn't a bug bounty where the developer sets a scope and evaluates reports. It's a common database across all technology for assigning unique IDs to security risks.

The developer puts their software into the world, but how the software is used in the world defines what risks exist.


Maybe we should issue a CVE for company vulnerability response processes that blindly take CVSS scoring as input without evaluating the vulnerability.

> blindly take CVSS scoring as input without evaluating the vulnerability.

Evaluating the CVSS score in your own context is the work I'm talking about.

It does no one any good to have a CVE that says "may lead to remote code execution" when in fact it cannot. If the reporter did more work, you wouldn't need hundreds of people to independently do that work to determine the report is garbage.
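To make "evaluating the CVSS score in your own context" concrete, here's a minimal Python sketch of that triage step. The vector string, the environment flags, and both heuristics are hypothetical examples, not official CVSS scoring logic:

    def parse_cvss_vector(vector: str) -> dict:
        """Split a CVSS v3.1 vector string into its metric fields."""
        parts = vector.split("/")
        return dict(part.split(":") for part in parts[1:])  # drop the "CVSS:3.1" prefix

    def plausible_in_context(metrics: dict, env: dict) -> bool:
        """Crude triage: can the advertised attack even happen in this deployment?"""
        if env["vulnerable_feature_disabled"]:
            return False  # the affected code path isn't enabled here at all
        if metrics.get("AV") == "N" and not env["reachable_from_network"]:
            return False  # network attack vector, but the service isn't exposed
        return True

    # A hypothetical "may lead to remote code execution" report:
    vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    env = {"vulnerable_feature_disabled": False, "reachable_from_network": False}
    print(plausible_in_context(parse_cvss_vector(vector), env))  # -> False here

The point is that this check is cheap for one deployment but expensive when hundreds of teams have to repeat it because the report was vague.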


People being able to collectively analyze a vulnerability instead of having to all do it independently is pretty much the whole reason for having a CVE database, so I'm glad we agree.

I mean, I'm fine with the complaint about vulnerabilities that ambiguously refer to possible code execution, but that is a problem that long predates CVE.

It seems weird to take an inaccurate paraphrase from a commenter and then use it to paint the authors with your desired brush.

Not sure the replies to that comment help the cause at all.

There are tons of advent calendars commercially sold that have fewer than 25 slots.

It’s safe to say this ship has sailed.


If they’re released every day for 12 days, you can do a puzzle every other day.

If they were released every other day, people who wanted to do them for 12 straight days could not.


> If they were released every other day, people who wanted to do them for 12 straight days could not.

If they instead waited 12 days, they could start with the 6 puzzles already released, and then have enough puzzles to solve once a day for the next 12 days.
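A throwaway sanity check of that schedule in Python (the day numbering is just an assumption):

    # 12 puzzles released every other day: days 1, 3, 5, ..., 23.
    releases = [2 * i + 1 for i in range(12)]

    # Wait 12 days (6 puzzles are banked), then solve one per day on days 13-24.
    solved = 0
    for day in range(13, 25):
        available = sum(1 for r in releases if r <= day)
        assert available > solved, f"day {day}: nothing left to solve"
        solved += 1

    print(f"solved {solved} puzzles on 12 consecutive days")  # -> 12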


A quick skim of https://iverify.io/blog makes it seem pretty clear that iVerify’s audience is people who are interested in security, not just existing industry experts.

But then skim the submission article and try to evaluate which audience it seems written for.

Considering they have stuff like "Located within the Sysdiagnoses in the Unified Logs section (specifically, Sysdiagnose Folder -> system_logs.logarchive -> Extra -> shutdown.log)" in the article, my guess is that they're aiming for people who at least have a basic understanding of security, not general users, as those wouldn't understand an iota of that.


Considering there is actually not an iota of technically challenging security stuff (specifically, any computer user can understand from your quote that there is a log file located at some path; zero security understanding is required there), using your own logic we can deduce that the general audience was the target.

The typical/general computer user wouldn't even understand the ">" character. I think you either don't grasp the wide range of people who sit in front of computers daily, or you overestimate their ability to grasp computer concepts, because you'd say that sentence to the typical computer user and most of them wouldn't understand most of it.

That's fine; you don't need to understand the > character. It clearly says there is some log file located in some folder.

> because you'd say that sentence to the typical computer user and most of them wouldn't understand most of it.

Yeah, do try that, just not your cut version focusing on the irrelevance of a specific path and the meaning of >, but the whole paragraph. Do see how many people fail to understand that there was some file at some folder. You could even ask extra SAT-style questions ("what do you think a 'shutdown log' is? does it record activities during device shutdown?").


This argument seems neatly circular.

Any example where somebody says an article doesn’t do a great job defining its terms just becomes proof that the authors only wanted readers who already understand the terms.


I think it's fine for the magazine, but I would have liked to see it expanded in the HN submission title, since many of us are not cybersecurity specialists.

Some stuff is written for some people, other stuff is written for other people. This shouldn't be hard to understand, nor particularly novel either.

These are corporate roles, not retail staff.

> As for current NFA items holder, the constitution requires them to be compensated fair value if they are to be confiscated.

Where is that in the constitution?


5th amendment

I think you may have to check the text again? The 5th amendment says you get due process, and requires compensation if something is taken for “public use”.

Passing a law which you can challenge in court that says “machine guns are illegal now, turn them in so we can melt them down for scrap” is not public use.


Taking it to the public smelting furnace for the state to melt down under the auspices of public safety is a public use.

You can pretty clearly see this isn’t the case. Prior to the reversal of the bump stock ban, owners of bump stocks were required to surrender or destroy them.

That's because the state argued they were unregistered machine guns, thus never legally held property. It is not at all comparable to legal, stamped machine guns then being made illegal.

The EO couldn't have forced an uncompensated surrender of a registered bump stock, had one existed before the Hughes Amendment.


The case law I’m seeing does not seem to provide that level of certainty.

There’s plenty of flexibility in the case law for what counts as “public use”, but nearly all of it is about individual cases where the government takes a specific person’s specific property, or damages it in some way. There doesn’t appear to be much case law at all for the guardrails if the government declares an object to be illegal to possess writ large for safety purposes and requires owners to destroy or surrender those objects.

I’m not saying there’s no path where the courts would require compensation, but for the level of certainty you’re claiming, I’d expect there to be a more clear line you can draw to existing cases.


It's wild to claim with certainty that you can "pretty clearly see this isn't the case" and then say you're merely uncertain here.

My initial claim in any case was that the constitution requires compensation, not that there is a 0% chance the government would violate the constitution.


I’m saying: I am certain the constitution does not guarantee payment in this situation. I am not certain a court couldn’t find a way to connect the takings clause and expand current case law to apply to a case like the one you’re describing in the future.

None of the above has anything to do with the government violating the constitution.

