
> Recovery is beyond the scope of most small practices.

Seems like a business opportunity. It could probably work very similarly to other collections agencies, where they either buy the debt for pennies on the dollar or take a percentage of the collected amount.


Yeah, there's an industry of companies that insert themselves between the medical record and the insurance company to upcode claims and get better payments. This article is about the reverse process, where the insurance company looks at the claims and downcodes them to send worse payments.

IMHO, in-office care should be billed more on a time-and-materials basis than per procedure performed. Of course, then the doctor's billing office would aggressively track the time the doctor spent, and the insurance company would argue the doctor took too long for whatever was done.


It's much easier to treat it like identity theft, where the business's problem becomes the customer's problem to solve. In this case, the insurance company didn't pay what was required, so the patient pays instead. There's already a potential collections agency involved if the patient doesn't pay.

Who do you think is easier to squeeze the money from? A mega-insurance corporation or your sick grandma?


Sending your patient's 'debt' to collections promptly is very unpopular with the patients, and the insurance companies will 100% insist that the patient is responsible.


You'll notice the doctor's office in the article already has a team of billing experts. But instead of working on new claims, they are being forced to relitigate claims they already submitted that weren't accepted.


Humpback whales have been known to defend other animals from orcas. A "the food of my enemy is my friend" type of thing, I guess.


The humpback whale and orca beef is kind of hilarious.

Orcas are kind of assholes and it seems other animals care.


> With such a… fixed opinion, the hidebound government agencies can't allow themselves to think that the overall risk profile has increased

This is only true if policy makers are logically consistent. If they're not then whatever they feel like at the time goes. I don't think it takes much effort to see that logical consistency is not something that is highly valued by the people currently in charge.


I knew a guy who was extremely upset to find out that there isn't a lenticular garage-door product that would let him display an "animated" image as the door opened/closed.


> This means people are okay with whatever is happening

Or it means the game has been rigged, which is exactly the point of all the gerrymandering going on right now. Tons of people are not okay with what is happening, but their power to replace their government representatives has been, or is currently being, effectively stolen from them.


A lot of those things require you to give them your email to get the coupon. They could do that with a button as well, but then they couldn't follow up to let you know you haven't bought anything from them in 24 hours.


> Sure, you can scale it, but if an LLM takes, say, $1 million a year to run an AGI instance, but it costs only $500k for one human researcher, then it still doesn’t get you anywhere faster than humans do.

Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.
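
Back-of-the-envelope, using the quoted $1M/$500k figures (the 2,000 working hours per year and round-the-clock uptime are my assumptions, not anything from the thread):

    # Cost per hour of work, using the quoted figures. The 2,000
    # human hours/year (~40 h/week) and 24/7 AGI uptime are assumptions.
    agi_cost, agi_hours = 1_000_000, 24 * 365    # runs around the clock
    human_cost, human_hours = 500_000, 2_000     # sleeps, eats, takes weekends

    print(f"AGI:   ${agi_cost / agi_hours:,.0f}/h over {agi_hours} h/year")
    print(f"Human: ${human_cost / human_hours:,.0f}/h over {human_hours} h/year")
    # AGI:   $114/h over 8760 h/year
    # Human: $250/h over 2000 h/year
    # Twice the price buys ~4.4x the hours, i.e. ~2.2x the hours per dollar.

Whether those extra hours actually translate into extra results is what the replies below dispute.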


This assumes that all areas of research are bottlenecked on human understanding, which is very often not the case.

Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.

An LLM would not be able to do 24/7 work in this case, and would only save a few hours per day at most. Scaling up to many experiments in parallel may not always be possible, if you don't know what to do with additional experiments until you finish the previous one, or if experiments incur significant cost.
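
To put rough numbers on it (all figures assumed: a 3-day experiment, a 2-hour expert thinking step, and an LLM that thinks instantly):

    # Serial-bottleneck sketch: each iteration is one experiment plus
    # the thinking needed to design the next one. Numbers are assumed.
    experiment_h = 72.0      # 3-day experiment, fixed wall-clock time
    human_think_h = 2.0      # expert's analysis between experiments
    llm_think_h = 0.0        # generously assume the LLM thinks instantly

    human_cycle = experiment_h + human_think_h   # 74 h per iteration
    llm_cycle = experiment_h + llm_think_h       # 72 h per iteration
    print(f"speedup: {human_cycle / llm_cycle:.3f}x")  # ~1.028x
    # Even an infinitely fast thinker shaves only ~3% off each cycle,
    # because the experiment itself dominates the loop.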

So an AGI/expert LLM may be a huge boon for, e.g., drug discovery, which already makes heavy use of massively parallel experiments and simulations. But it may not be so useful for biological research (perfectly simulating even a fruit fly down to the genetic level likely costs more compute than the human race can presently provide), or for research that involves time-consuming physical processes, like climate science or astronomy, which both have to wait periodically to gather data from satellites and telescopes.


> Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.

With automation, one AI can presumably do a whole lab's worth of parallel lab experiments. Not to mention, they'd be more adept at creating simulations that obviate the need for some types of experiments, or at least reduce the likelihood of dead-end experiments.


Presumably ... the problem is that this argument has been made purely as a thought experiment, same as gray goo or the paper clip argument. It assumes any real-world hurdles to self-improvement (or to self-growth, for gray goo and paper-clipping the world) will be overcome by the AGI because it can self-improve, which doesn't explain how it overcomes those hurdles in the real world. It's a circular presumption.


What fields do you expect these hyper-parallel experiments to take place in? Advanced robotics aren't cheap, so even if your AI has perfect simulations (which we're nowhere close to) it still needs to replicate experiments in the real world, which means relying on grad students who still need to eat and sleep.


Biochemistry is one plausible example. DeepMind made huge strides in protein folding, satisfying the simulation part, and in vitro experiments can be automated to a significant degree. Automation is never about eliminating all human labour, but about how much of it you can eliminate.


Only if it’s economically feasible. If it takes a city-sized data center and five countries’ worth of energy, then… probably not going to happen.

There are too many unknowns to make any assertions about what will or won’t happen.


> ...the fact that the [AGI] can/will work on the issue 24/7...

Are you sure? I previously accepted that as true, but, without being able to put my finger on exactly why, I am no longer confident in that.

What are you supposed to do if you are a manically depressed robot? No, don't try to answer that. I'm fifty thousand times more intelligent than you, and even I don't know the answer. It gives me a headache just trying to think down to your level. -- Marvin to Arthur Dent

(...as an anecdote, not the impetus for my change in view.)


>Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.

Driving from A to B takes 5 hours; if we send five drivers, will we arrive in one hour or in five? Research has many steps like this, in the sense that the time is fixed and independent of the number of researchers (or even of how much better one researcher is than another), and adding something that neither sleeps nor eats isn't going to make those steps more efficient.
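
Put differently, it's latency vs. throughput; a minimal sketch, with the 5-hour trip taken as fixed:

    # Adding drivers never shortens one serial trip; it only lets you
    # run more independent trips at once (if independent trips exist).
    trip_hours = 5
    drivers = 5

    latency = trip_hours               # one A-to-B trip: still 5 hours
    throughput = drivers / trip_hours  # trips finished per hour, given
                                       # 5 independent trips to run
    print(f"latency: {latency} h, throughput: {throughput} trips/h")
    # A tireless worker raises throughput, but cannot compress the
    # fixed latency of a single incubation, approval, or experiment.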

I remember when I was an intern and my job was to incubate eggs and then inject the chicken embryos with a nanoparticle solution to look at under a microscope. In any case, neither incubating the eggs nor injecting the solution was limited by my need to sleep. Additionally, our biggest bottleneck was getting the process approved by the FDA, not the fact that our interns required sleep to function.


If the FDA was able to work faster/more parallel and could approve the process significantly quicker, would that have changed how many experiments you could have run to the point that you could have kept an intern busy at all times?


It depends so much on scaling. Human scaling is counterintuitive and hard to measure: mostly way sublinear, like log2 or so, but sometimes things are only possible at all by adding _different_ humans to the mix.
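
As a toy illustration of what "log2 or so" would mean (a sketch of the claim, not a measurement):

    # Under a log2 model, doubling headcount adds a constant increment
    # of output rather than doubling it.
    import math

    for n in (1, 2, 4, 8, 16, 32):
        print(f"{n:>2} people -> output ~ {math.log2(n) + 1:.0f} units")
    # 1 -> 1, 2 -> 2, 4 -> 3, 8 -> 4 ... each doubling buys one unit.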


My point is that “AGI has human intelligence” isn’t by itself enough of the equation to know whether there will be an exponential, or even greater-than-human, speed of increase. Far more factors in, including how quickly it can process, the cost of running it, the hardware and energy required, and so on.

My point here was simply that there is an economic factor that trivially could make AGI less viable over humans. Maybe my example numbers were off, but my point stands.


Did you read his followup Echopraxia? How would you say it compared to Blindsight?


I've read Blindsight multiple times, and weirdly couldn't finish Echopraxia. Maybe this time.


Mostly battery life, I would think.


> What jobs or opportunities were people posting Reddit comments or whatever getting that are now going to AI?

Content writing, product reviews (real & fake), creative writing, customer support, photography/art to name a few off the top of my head.


Now the astroturfing is done by AI agents instead of hard-working serfs in a call center. You hate to see it.

