It would stop in your country but not globally. That means your country loses all its expertise and ends up much worse positioned to defend itself.
Case in point: In 2007 Germany passed a "hacking law" (§202c). On its face, it was supposed to prevent black hat work. Except it very predictably also did enormous damage to security research.
> We evaluated the diagnostic power (...). We estimated a 94% accuracy for (our method), significantly higher than the (traditional method) (66% accuracy).
Both methods have "counting" in their name, but what is being compared is their diagnostic power.
The sensitivity of such a test would be 0. The test in the paper had a sensitivity of 91% versus 61% for the glass slide count method, which is a large improvement.
The sample size is pretty small here and the control group even smaller. The paper concludes that a larger study is necessary to confirm the result.
If you read the actual link, I don't think they're saying that using it as a covid test with some specific threshold of microclots has 94% accuracy, just that the raw microclot count has 94% accuracy.
The HN title, which implies that, seems to be inaccurate; it's not the original title of the article.
No, that does not seem to be what they are saying.
> We evaluated the diagnostic power of the device in a cohort of 45 LC patients and 14 healthy pediatric donors. We estimated a 94% accuracy for the microclot count using the devices, significantly higher than the traditional counting of microclots on slides (66% accuracy).
They are comparing the predictive power and using accuracy (instead of sensitivity, recall, F1, etc.). For their method "using the devices", the 94% figure is the accuracy of the resulting prediction, not of the count itself. For the previous method they report 66% accuracy.
Basic questions: Is accuracy even a good metric for this? Is 94% a good value or just the difference between bad and very bad?
It might very well be that their improvement is from bad to really good, but the point is that a raw stat of "94% accuracy" is useless without context and so is the headline.
OK, I looked at the actual paper, and the 94% is actually the 0.94 area under the receiver-operating characteristic (ROC) curve, i.e. the plot of the true positive rate (TPR) against the false positive rate (FPR) at each threshold setting, not the accuracy of a specific binary result (e.g. at a specific arbitrary threshold).
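To make the distinction concrete, ROC AUC is a rank statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (the microclot counts below are made up for illustration, not from the paper):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney statistic: fraction of
    (positive, negative) pairs the positive wins; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical microclot counts, NOT the paper's data:
patients = [12, 9, 15, 8, 11]   # positives
healthy = [3, 7, 2, 10]         # negatives
print(roc_auc(patients, healthy))  # 0.9
```

Note that no threshold appears anywhere: AUC summarizes how well the raw counts rank patients above healthy donors, across all possible thresholds at once.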
> In general, an AUC of 0.5 suggests no discrimination (i.e., ability to diagnose patients with and without the disease or condition based on the test), 0.7 to 0.8 is considered acceptable, 0.8 to 0.9 is considered excellent, and more than 0.9 is considered outstanding
That is exactly why I gave the trivial example of an "always No" test. It has perfect specificity (zero false positives) and an accuracy equal to the proportion of negatives (one minus the prevalence). Its sensitivity is zero, however, which is the point.
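A quick sketch of that trivial baseline, using the cohort sizes from the quoted abstract (45 patients, 14 healthy) purely for illustration:

```python
def metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# An "always No" test never flags anyone:
# TP = FP = 0, FN = 45 (all patients missed), TN = 14.
sens, spec, acc = metrics(tp=0, fp=0, tn=14, fn=45)
print(sens, spec, acc)  # 0.0, 1.0, ~0.24
```

So on a cohort where most subjects are positive, the useless test looks terrible by accuracy; flip the prevalence and the same useless test can score 90%+ accuracy while still detecting nothing.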
The paper explains what it actually means, so it's not nonsense. See my other comment (https://news.ycombinator.com/item?id=45558941): it's the area under the ROC curve, and 0.94 there is extremely good.
The primary conclusion of this research was basically just "this looks like it would be worth doing more research on." Which is a fair conclusion for a study this small.
Connect your phone to a display, mouse, keyboard and get a full desktop experience.
At the time smartphones were not powerful enough, cables were fiddly (adapters, HDMI, USB A instead of a single USB c cable) and virtualization and containers not quite there.
Today, going via pKVM seems like a promising approach. Seamless sharing of data, apps, etc. will take some work, though.
Much higher resource demands, which then require tricks like upscaling to compensate. You also get uneven competition between GPU vendors, because in practice it is not generic hardware ray tracing but Nvidia ray tracing.
On a more subjective note, you get less interesting art styles, because studios somehow have to cram ray tracing in there as a value proposition.
Will we? The hardware you want for AI and the hardware you want for supercomputing seem to have different priorities, e.g. concerning floating point precision.
Compute is compute, yes. Maybe some older models might have to be rewritten to take advantage of a GPU farm's parallel processing capabilities, but humanity will ultimately benefit from the hardware if the current AI boom fizzles.
The question is usually more about whether the inverse is also continuous, smooth, easy to compute....etc.
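For example, f(x) = x + eˣ is smooth and strictly increasing, so a continuous inverse exists, yet that inverse has no closed form and has to be computed numerically. A minimal bisection sketch (function and bracket chosen purely for illustration):

```python
import math

def f(x):
    # Strictly increasing and smooth, hence invertible on the reals.
    return x + math.exp(x)

def f_inverse(y, lo=-50.0, hi=50.0, tol=1e-12):
    """Invert f by bisection on [lo, hi]; no closed form exists."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

x = 1.2345
print(f_inverse(f(x)))  # recovers ~1.2345
```

Existence of the inverse is the easy part here; the practical question is exactly the one above, i.e. what it costs to evaluate it.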