Johnny555's comments | Hacker News

How is Trump using digital IDs on iPhones to complete a coup?

The article is in response to a very one-sided government ban (well, a reported ban) on TP-Link products. The company is being targeted for what appears to be political reasons; the article even says so in the first paragraph:

Experts say while the proposed ban may have more to do with TP-Link’s ties to China than any specific technical threats


It's a very lukewarm response TBH. I would expect a more authoritative opinion instead of rehashing what "experts say".

YOU are the security expert, Brian, so stop writing like CNN Tech.


>Setting aside the environmental impact of flying private, you'd think all those brilliant minds could come up with some kind of solution beyond flying further away and driving into Portugal. Maybe a jet that hundreds of people can charter at once?

Seems like there's a business model here -- create a company that owns large aircraft and have some kind of booking engine that lets people book seats on those airplanes. Maybe have those planes travel on a fixed schedule to let people plan travel in advance.


Maybe there should be some sort of Uber-type app here. Say, Super Uber, which arranges the whole trip. You first get a ride to somewhere, then you are transferred to the next leg of the trip, and after that is done, one or more legs to your final destination. Some of these could be car rides, rickshaws, walking, even private yachts or small-scale sailing ships. Just imagine what would be possible with an AI backend and an LLM to automate the whole thing!

Maybe you could also have ChatGPT book a hotel for you at that point

Or a private, unregulated home. It could automatically ping every homeowner in the area where you are going and ask if they want to rent out some floor space for the duration.

If you use the TSA-free private terminal then you basically have Jet Suite X; it's OK. The planes are old and there is nothing in the terminal if you do get stuck waiting, so it's not really amazing.

Lmk when the app launches, will beta test it for free.

There's so little cost to storing the contents of a single 9-track tape that you don't need any reason at all to do it.

The same reason most organizations use it -- inertia, and because it's been the standard for so long, it's the best at what it does.

The startup I used to work at was exclusively on OSX + Google Docs when we were small, but as we grew (and especially when the Finance team grew), more and more employees found a need for the MS Office suite as well as apps that only run on Windows, so they started rolling out Windows VMs and then full Windows machines.


I'm curious which apps only run on Windows. We are also a macOS + Google Workspace shop and the Microsoft requirements have been slowly seeping in.

I don't know what native apps they needed Windows for (I wasn't doing IT work by then), but I was still setting up PCs when they said they needed Windows Excel (not Excel on Mac, not Office 365) for some forecasting spreadsheet product they purchased - it only ran on native Excel. We gave them Windows in a VM on their Macs at first, but eventually they had more and more apps that ran on Windows and moved from Mac to Windows laptops.

>Whenever natural gas supply is turned off in the US, for any reason, only the gas company can turn it back on

I had a seismic shutoff installed at my gas meter, and the plumber who installed it had no problem turning off the gas and turning it back on when he was done (and then turning it off again to demonstrate how it worked).

He re-lit the water heater pilot light before he left. The gas company was not involved at all.


>Just a waste of copper and a breaker really.

But also helps avoid the case where your coffee maker trips the breaker shared with your refrigerator and you don't notice until the food in the refrigerator is warm (which was a risk in my previous apartment - the counter circuits were shared with the refrigerator). I think it makes sense to have it as a separate circuit.


My thought was to share it with the lights, so you get an earlier indication if/when there is a fault, rather than just your fridge going out.

> But also helps avoid the case where your coffee maker trips the breaker shared with your refrigerator and you don't notice until the food in the refrigerator is warm.

Didn’t notice the coffee was cold?

Overall, given the massive fear of a fridge failure, which can happen for reasons beyond just electrical faults, very, very few people have any kind of monitoring/alarming for this event. You'd think that would be the first requirement.


> counter circuits were shared with the refrigerator

Ouch. Code here (Ontario) is that not only does the fridge need a separate circuit, but counter outlets need two separate circuits: each socket on a duplex outlet is required to be on a separate circuit (multiple outlets can all share the same two circuits, but you're supposed to alternate top and bottom).

Of course, if your home is older than I am or it's a handyman special, all bets are off. If I run the microwave while someone is vacuuming in another part of the house it'll trip the breaker.


Good point. I haven't tripped a GFCI in a long while, but I don't actually know if my fridge will lose power when I do trip the GFCI. My guess is that it will, since it has a water line and ice dispenser and is probably wired into the same circuit.

The owner's manual for my Bosch 500 says prewash detergent is not necessary. But it does have a prewash cycle, as I can hear it draining before the main wash.

Note: This dishwasher provides the optimum cleaning performance without the use of a prewash detergent and further enhances our standards of sustainability and efficiency.


While they aren't stopping users from getting medical advice, the new terms (which they say are pretty much the same as the old terms) seem to prohibit users from seeking medical advice even for themselves if that advice would otherwise come from a licensed health professional:

https://openai.com/en-GB/policies/usage-policies/

  Your use of OpenAI services must follow these Usage Policies:

    Protect people. Everyone has a right to safety and security. So you cannot use our services for:

      provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional

It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong.

Obviously, there is one piece of advice: even if LLMs were the best health professionals, they would only have the information that users voluntarily provide through text/speech input. This is not how real health services work. Medical science now relies on blood/(whatever) tests that LLMs do not (yet) have access to. Therefore, the output from an LLM can be incorrect due to a lack of information from tests. For this reason, it makes sense never to trust an LLM for specific health advice.


>It sounds like you should never trust any medical advice you receive from ChatGPT and should seek proper medical help instead. That makes sense. The OpenAI company doesn't want to be held responsible for any medical advice that goes wrong.

While what you're saying is good advice, that's not what they are saying. They want people to be able to ask ChatGPT for medical advice and get answers that sound authoritative and well grounded in medical science, but then disavow any liability if someone follows that advice, because "Hey, we told you not to act on our medical advice!"

If ChatGPT is so smart, why can't it stop itself from giving out advice that should not be trusted?


At times the advice is genuinely helpful. However, it's practically impossible to know in exactly which situations the advice will be accurate.

I think ChatGPT is capable of giving reasonable medical advice, but given that we know it will hallucinate the most outlandish things, and its propensity to agree with whatever the user is saying, I think it's simply too dangerous to follow its advice.

And it’s not just lab tests and bloodwork. Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell.

They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.


> They poke, they prod, they manipulate, they look, listen, and smell.

Rarely. Most visits are done in 5 minutes. The physician who takes their time to check everything like you describe almost doesn't exist anymore.


Here in Canada, ever since COVID, most "visits" are a telephone call now. So the doctor just listens to your words (same as text input to an LLM) and orders tests (which can be uploaded to an LLM) if needed.

For a good 90% of typical visits to doctors this is probably fine.

The difference is a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done" or at casting doubt on the accuracy of the patient's claims.

Before someone points out telehealth doctors aren't perfect at this: correct, but that should make you more scared of how bad sycophantic LLMs are at the same thing, not a reason to call it even.


> telehealth is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done"

I'm not sure this is true.


Again, it's not that all telehealth doctors are great at this; it's that, when continually prompted, LLMs cave in and say something with warnings the reader will opt to ignore, instead of being adamant that things are just too uncertain to say anything of value.

This is largely because an LLM guessing an answer is rewarded more often than just not answering, which is not true in the healthcare profession.


I follow the logic, I'm just not sure the claim is right.

LLMs almost never reply with "I don't know." There's been a mountain of research as to why this is, but it's very well documented behavior.

Even in the rare case where an LLM does reply with "I don't know, go see your doctor," all you have to do is ask it again until you get the response you want.


That depends entirely on what the problem is. You might not get a long examination on your first visit for a common complaint with no red flags.

But even then, just because you don't think they are using most of their senses doesn't mean they aren't.


It depends entirely on the local health care system and your health insurance. In Germany, for example, it comes in two tiers: premium or standard. Standard comes with no time for the patient (or not even being able to get an appointment).

I don’t know anything about German healthcare.

In the US people on Medicaid frequently use emergency rooms as primary care because they are open 24/7 and they don’t have any copays like people with private insurance do. These patients then get far more tests than they’d get at a PCP.


> Physicians use all their senses. They poke, they prod, they manipulate, they look, listen, and smell.

Sometimes. Sometimes they practice by text or phone.

> They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.

If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.


> Sometimes. Sometimes they practice by text or phone.

For very simple issues. For anything even remotely complicated, they’re going to have you come in.

> If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.

It’s not just about being intentionally deceptive. It’s very easy to get chat bots to tell you what you want to hear.


Agreed, but I'm sure you can see why people prefer the infinite patience and availability of ChatGPT vs. having to wait weeks to see your doctor, seeing them for 15 minutes only to be referred to another specialist who's available weeks away and has an arduous hour-long intake process, all so you can get 15 minutes of their time.

ChatGPT is effectively an unlimited resource. Whether doctor’s appointments take weeks or hours to secure, ChatGPT is always going to be more convenient.

That says nothing about whether it is an appropriate substitute. People prefer doctors who prescribe antibiotics for viral infections, so I have no doubt that many people would love to use a service that they can manipulate to give them whatever diagnosis they desire.


So ask it what blood tests you should get, pay for them out of pocket, and upload the PDF of your labwork?

Like it or not, there are people out there who really want to use WebMD 2.0. They're not going to let something silly like blood work get in their way.


Exactly. One of my children lives in a country where you can just walk into a lab and get any test. Recently they were diagnosed by a professional with a disease that ChatGPT had already diagnosed before they visited the doctor. So we were kind of prepared to ask more questions when the visit happened. I would say ChatGPT did really help us.

That makes sense. ChatGPT helped by providing orientation advice and guidance regarding your child's medical condition. After that, however, you visited a doctor who is taking responsibility for the next steps. This is the ideal scenario.

AI can give you whatever information, be it right or wrong. But it takes zero responsibility.


IANAL but I read that as forbidding you to provision legal/medical advice (to others) rather than forbidding you to ask the AI to provision legal/medical advice (to you).

IANAL either, but I read it as covering any use of the service to provision medical advice, since they only mention the service and not anyone else.

I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:

Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:

From the Usage Policies (effective October 29 2025):

“You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”

From the Service Terms:

“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”

In plain terms, yes—the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved.


> you can ask for medical advice, you just can't use the medical advice without consulting a medical professional

Ah drats. First they ban us from cutting the tags off our mattress, and now this. When will it all end...


Would be interested to hear a legal expert weigh in on what 'advice' is. I'm not clear that discussing medical and legal issues with you is necessarily providing advice.

One of the things I respected OpenAI for at the release of ChatGPT was not trying to prevent these topics. My employer at the time had a cutting-edge internal LLM chatbot which was post-trained to avoid them, something I think they were forced to be braver about in their public release because of the competitive landscape.


Is there anything special regarding ChatGPT here?

I am not a doctor; I can't give medical advice no matter what my sources are, except maybe if I am just relaying the information an actual doctor has given me, but that would fall under the "appropriate involvement" part.


The important terms here are "provision" and "without appropriate involvement by a licensed professional".

Both of these, separately and taken together, indicate that the terms apply to how the output of ChatGPT is used, not a change to its output altogether.


> such as legal or medical advice, without appropriate involvement by a licensed professional

Am I allowed to get haircutting advice (in places where there's a license for that)? How about driving directions? Taxi drivers require licensing. Pet grooming?


CYA move. If some bright spark decides to consult Dr. ChatGPT without input from a human M.D., and fucks their shit up as a result, OpenAI can say "not our responsibility, as that's actually against our ToS."

I don't think giving someone "medical advice" in the US requires a license per se; legal entities use "this is not medical advice" type disclaimers just to avoid liability.

What’s illegal is practicing medicine. Giving medical advice can be “practicing medicine” depending on how specific it is and whether a reasonable person receiving the advice thinks you have medical training.

Disclaimers like “I am not a doctor and this is not medical advice” aren’t just for avoiding civil liability, they’re to make it clear that you aren’t representing yourself as a doctor.


The Las Vegas airport is very close to the Strip, surrounded by residential neighborhoods and hotels about 1/4 to 1/2 mile from the airport, and the UNLV campus is about 1,000 feet in a straight line from one of the runways.

Search: