I think the product was just too early for its time, and there is not much demand for it.
For what it's worth, the founder (Ren Ng) went back to academia and has been highly influential in computer vision research, e.g. as the PI on the NeRF paper: (https://dl.acm.org/doi/abs/10.1145/3503250)
I don't think it was quite too early; it just makes tradeoffs that are undesirable.
Lytro, as I understand it, trades a huge amount of resolution for the focusing capability. Some ridiculous amount; the user gets to see something like 1/8th of the pixels on the sensor.
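Back-of-the-envelope, the tradeoff is easy to see. The numbers below are illustrative, not Lytro's actual specs:

```python
# Rough plenoptic-camera tradeoff: each microlens image on the sensor
# contributes roughly one pixel to the refocusable output, so output
# resolution is sensor resolution divided by the pixels spent per
# microlens capturing directional (light-field) information.
sensor_px = 40_000_000    # hypothetical 40 MP sensor
px_per_microlens = 10     # assumed pixels captured under each microlens

output_px = sensor_px // px_per_microlens
print(f"~{output_px / 1e6:.0f} MP output from a {sensor_px / 1e6:.0f} MP sensor")
```

However you pick the numbers, the directional sampling that enables refocusing comes straight out of the spatial resolution budget.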
In a way, I'd say rather than too early it was too late. Because autofocus was already quite good and getting better. You don't need to sacrifice all that resolution when you can just have good AF to start with. Refocusing in post is a very rare need if you got the focus right initially.
And time has only made that even worse. Modern autofocus is darn near magic, and people love their high resolution photos.
I find it very useful for wildlife photos. Autofocus never seems to work well for me on e.g. birds in flight.
It's also possible to generate a depth map from a single shot, to use as a starting point for a 3D model.
They're pretty neat cameras. The relatively low output resolution is the main downside. They would also have greatly benefited from consulting with more photographers on the UI of the hardware and software. There's way too much dependency on using the touchscreen instead of dedicated physical controls.
I'd argue the opposite, consumers need more resolution than pros.
A pro will show up with a 300mm f/2.8, a tripod, a camera with good AF and high ISO, and the skills, plan and patience to catch birds in flight.
But all that stuff is expensive. The consumer way to approximate the lack of a good lens is a small, high res sensor. That only works in bright light, but you can get good results with affordable equipment in the right conditions. Greatly reducing the resolution is far from optimal when you can't have a big fancy lens to compensate.
And where is focus the hardest? Mostly where you want to have high detail. Wildlife, macro, sports.
I am sorry to hear that OP, I hope you fight the good fight and wish you all the best.
On a different note, a quick cursory glance at this company really makes me wonder who even gave them $160M? The company site is soulless and filled with corporate jargon, the whole company smells of bloat, and the leadership team is a long list of people in bullshit jobs. Is this where VC money goes these days? I am dumbfounded by the degree of mismatch between capital and utility.
Well, a cursory glance at the funding round shows an equity firm (Highland Europe) that moved one of its partners into a director position at Deepski. Could be the guy collecting "AI leadership experience" for his resume.
Another notable investor is a French public entity (bpifrance), which might very well have similar reasons but at the country level: having to allocate funds to "AI" to demonstrate France's leading role in future technology.
Note that this doesn't mean Deepski and its leadership can't be great, but the thought experiment of some well-networked people noticing over a glass of wine that they could all benefit doesn't seem too far off either.
Edit: Maybe there's an angle here for someone really serious about this FOSS dilemma. I hear public entities really hate bad PR; maybe ask bpifrance how they feel about this?
If they do some decarbonization real estate SaaS and need connections with both regulators (or whoever issues certificates) and REITs etc., then it makes sense. I don't think they need a lot of software for that.
Maybe that's what they meant: doctors can always switch into "drug discovery software, telehealth software, embedded software in medical devices" and resent those too!
Is the average person a truth seeker in this sense, someone who performs truth-seeking behavior? In my experience we prioritize sharing the same perspectives and getting along with others far more than a critical examination of the world.
In the sense I just expressed, of figuring out the intention behind a user's information query, that really isn't a tuned thing; it's inherent in generative models by virtue of their lossy, compressed representation of the training data, and it's also the kind of truth-seeking practiced by people who want to communicate.
You are right. I completely ignored the context in which the phrase "truth seeker" was used and gave it my own wrong interpretation. I in fact agree with the comment I was responding to that they "work with the lens on our reality that is our text output".
If ChatGPT claims arsenic to be a tasty snack, OpenAI adds a p0 eval and snuffs that behavior out of all future generations of ChatGPT. Viewed vaguely in faux genetic terms, the "tasty arsenic gene" has been quickly wiped out of the population, never to return.
Evolution is much less brutal and efficient. To you death matters a lot more than being trained to avoid a response does to ChatGPT, but from the point of view of the "tasty arsenic" behavior, it's the same.
It's difficult to ascertain the interests and intent of people, but I'm even more suspicious and uncertain of the goals of LLMs, which literally cannot care.
I keep seeing news articles that claim Grok is flawed or biased recently, but I've been unable to replicate any such behavior on my computer.
That being said, I don't ask any controversial or political questions; I use it to search for research papers. But if I try the occasional such question, the response is generally balanced and similar to that of any other LLM.
1. It’s generally difficult to quantify such risks in any meaningful manner
2. Providing any number adds liability, and puts you in a damned-if-it-does, damned-if-it-doesn't-work-out situation
3. The operating surgeon is not the best to quantify these risks - the surgeon owns the operation, and the anaesthesiologist owns the patient / theatre
4. Gamblers quantify risk because they make money from accurate assessment of risk. Doctors are in no way incentivised to do so
5. The returned chance of 1/3 probably had an error margin of +/-33% itself
Not a lawyer, but I do wonder if refusal to provide any number also adds liability, especially if it can be demonstrated to a court later that a reasonable estimate was known or trivial to look up, and that the deciding party would not have gone through with the action that ended in harm had they been given that number. I'm also not seeing how giving a number and then having the procedure work out results in increased risk; perhaps you can expand on that? Where's the standing for a lawsuit if everything turned out fine, whether you said the base rate for a knee replacement was around 1/1000 for death at the hospital and 1/250 for all-cause death within 90 days, or refused to quantify at all?
> It’s generally difficult to quantify such risks in any meaningful manner
According to the literature 33 out of 100 patients who underwent this operation in the US within the past 10 years died. 90% of those had complicating factors. You [ do / do not ] have such a factor.
Who knows if any given layman will appreciate the particular quantification you provide but I'm fairly certain that data exists for the vast majority of serious procedures at this point.
I've actually had this exact issue with the veterinarian. I've worked in biomed. I pulled the literature for the condition. I had lots of different numbers but I knew that I didn't have the full picture. I'm trying to quantify the possible outcomes between different options being presented to me. When I asked the specialist, who handles multiple such cases every day, I got back (approximately) "oh I couldn't say" and "it varies". The latter is obviously true but the entire attitude is just uncooperative bullshit.
> puts you in a damned-if-it-does, damned-if-it-doesn't-work-out situation
Not really. Don't get me wrong, I understand that a litigious person could use just about anything to go after you and so I appreciate that it might be sensible to simply refuse to answer. But from an academic standpoint the future outcome of a single sample does not change the rigor of your risk assessment.
> Doctors are in no way incentivised to do so
Don't they use quantifications of risk to determine treatment plans to at least some extent? What's the alternative? Blindly following a flowchart? (Honest question.)
> The returned chance of 1/3 probably had an error margin of +/-33% itself
What do you mean by this? Surely there's some error margin on the assessment itself but I don't see how any of us commenting could have any idea what it might have been.
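For illustration only (we have no idea what sample size, if any, was behind the "1/3" figure), here's how wide the uncertainty gets if such an estimate came from a handful of cases, using the standard Wilson score interval for a binomial proportion:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: a "1 in 3" estimate based on only 9 observed cases
lo, hi = wilson_interval(3, 9)
print(f"point estimate 0.33, 95% CI ({lo:.2f}, {hi:.2f})")
```

With 3 events out of 9 cases the 95% interval spans roughly 0.12 to 0.65, i.e. the "+/-33%" guess upthread is not even an exaggeration for small samples.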
> According to the literature 33 out of 100 patients who underwent this operation in the US within the past 10 years died. 90% of those had complicating factors. You [ do / do not ] have such a factor.
Everyone has complicating factors. Age, gender, ethnicity, obesity, comorbidities, activity level, current infection status, health history, etc. Then you have to factor in the doctor's own previous performance statistics, plus the statistics of the anaesthesiologist, nursing staff, the hospital itself (how often do patients get MRSA, candidiasis, etc.?).
And, of course, the more factors you take into account, the fewer relevant cases you have in the literature to rely on. If the patient is a woman, how do you correctly weight data from male patients that had the surgery? What are the error bars on your weighting process?
It would take an actuary to chew through all the literature and get a maximally accurate estimate based on the specific data that is known for that patient at that point in time.
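As a toy illustration of that weighting tradeoff (all numbers invented): pooling a small, directly relevant sample with a larger, less relevant one shifts both the estimate and its error bars.

```python
import math

def rate_and_se(deaths, n):
    """Observed event rate and its standard error (normal approximation)."""
    p = deaths / n
    return p, math.sqrt(p * (1 - p) / n)

p_f, se_f = rate_and_se(2, 20)     # hypothetical: 2/20 events among women
p_m, se_m = rate_and_se(15, 200)   # hypothetical: 15/200 among men

for w in (1.0, 0.5, 0.0):          # weight given to the female-only data
    p = w * p_f + (1 - w) * p_m
    se = math.sqrt((w * se_f) ** 2 + ((1 - w) * se_m) ** 2)
    print(f"weight {w:.1f}: estimate {p:.3f} +/- {1.96 * se:.3f}")
```

Relying only on the small relevant stratum gives huge error bars; leaning on the big stratum shrinks them but may bias the answer. That's exactly the judgment call being discussed, and it has no single "maximally accurate" resolution.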
No one said anything about a maximally accurate estimate. This is exactly the sort of obtuse attitude I'm objecting to.
By complicating factors I was referring to things that are known to have a notable impact on the outcome of this specific procedure. This is just summarizing what's known. It explicitly does not take into account the performance of any particular professional, team, or site.
Something like MRSA is entirely separate. "The survival rate is 98 out of 100, but in this region of the country people recovering from this sort of thing have been exhibiting a 10% risk of MRSA. Unfortunately our facility is no exception to that."
If the recipients of a procedure are predominantly female and the patient is male, then you simply indicate that to them. "The historical rate is X out of Y, but you're a bit unusual in that only 10% of past recipients were men. I'm afraid I don't know what the implications of that fact might be."
You provide the known facts and make clear what you don't know. No weasel words - if you don't know something then admit that you don't know it but don't use that as an excuse to hide what you do know. It's utterly unhelpful.
So, while you are correct, you are missing an important piece:
most people cannot think like this
I'm not talking about patients, I'm talking about everyone, including doctors. They just can't think in a probabilistic sense. And you'll counter that it's just reporting facts, but they don't even know which facts to report to you, or how to report them, none of it. It just doesn't seem to fit in many people's heads.
This is part of the mindset among doctors that makes some people want to "do their own research" rather than trust their physician. A medical intervention has to have positive expected value to be a good idea, and figuring out the expected value has to involve some quantification of risks. If doctors don't want to do that because they could get sued unless they give a maximally accurate estimate, and producing one would be too much work, then fine; it's a free country and I don't want to make doctors do anything they don't feel like doing. But they are creating a situation where parents who want to figure out whether something is a good idea have no choice but to start googling things themselves.
I've undergone some surgeries that were not without risks, and every time I've been stonewalled by doctors when asking for basic information like "in your personal practice, what is the success rate for this surgery?" Always something like "Oh, everyone is different, so there's no way to give any estimates." There are only two options: either they have some estimate they think is accurate enough that they're comfortable recommending the surgery but won't tell me (in which case they're denying me useful information for their own benefit), or they have no idea and are recommending the surgery for some other reason (a very concerning possibility lol). Either way, it instantly makes our relationship adversarial to some extent, and means I need to do my own research if I want to be able to make an informed decision.
I doubt doctors do: my guess would be most doctors follow a list of best practices devised by people like malpractice actuaries and by their sense of the outcomes from experience.
Thanks for sharing the realities you experience. The rest of this is picayune.
> Doctors are in no way incentivised to do so
Personal pride, care for the patient, and avoiding the mess of a bad outcome seem like powerful incentives. That said, I assume you mean they are not given explicit bonuses for good outcomes (though the best tend to attract business and command the highest salaries).
In Norway, a pregnant woman over forty is offered genetic counselling because of the risk of Down syndrome. These risks are definitely quantifiable, and no liability is generated by providing them. The counsellor (a doctor) explains the risks and the syndrome and, apart from this appointment, is not otherwise involved.
This could surely be done for other situations, especially surgical procedures as the statistics should be collected and associated not only with the procedure but also the hospital and surgeon.
On the off chance you're not being facetious: why? Isn't it part of their job description to weigh the ups and downs of any operation before conducting it? I'd imagine failure to do so would open them to liability.
1. Presumably, the surgeon has determined that this specific intervention is the best possible intervention of all the possible ones (fewest downsides, best outcome, etc). There are always alternatives - including #wontfix.
2. Once this decision has been made, I don't want them second guessing, I want them 100% confident in the decision and their abilities. If there's any lingering doubt - then return to step 1 and re-evaluate.
Reading the core paper discussed in the article, along with all its context, would take hours. This article is a great five-minute update on the latest research, with a well-written primer on the topic.
The kanji for Mitsubishi are 三菱, which literally means "three rhombi" (菱 is the water chestnut, whose diamond shape gives the character its rhombus sense). It is possible that the marks were independently invented, but the hypothesis of a family crest crossover still feels more likely.
The design is much older in east asia, I've seen it on 19th century textiles and pottery for sure but I suspect it goes back a lot more than that.
The shape is somehow associated with the name Mitsubishi, possibly through the visual or phonetic punning that is common in pictogram-based writing systems and tonal languages. The name Mitsubishi is more widespread than this one family or this group of companies, and the symbol appears to have long been associated with the name per se rather than with this specific Mitsubishi. Mitsu means three; I don't know what the rhombus connection is.
That shade of red has a specific proper name in Japanese (think alice blue in English) and has long been associated with Japan by the Japanese.
I don't think any of this is a coincidence; there's a connection between all this stuff. But I don't know what it is, and I don't think the article author does either.
> a specific proper name in japanese (think like alice blue in english)
I hadn't heard of that one [0]; the example that comes to mind is "canary yellow", but I suppose that's not so bound up with a specific cultural history.