smaddox's comments

> though we did not delve into the observation

Oh, the irony.


Gold / M2SL (billions of USD) is currently around 0.12. In 1980, it peaked around 0.45. The monthly average since 1960 is 0.11. In late 2011, it peaked around 0.18.

Gold / Global M2 would be a better metric, but I haven't analyzed that yet.
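
For anyone who wants to reproduce this, the metric is just the monthly gold price (USD per troy ounce) divided by the US M2 money stock in billions of USD (FRED series M2SL). A quick sketch with illustrative, approximate figures, not live data:

    # Gold / M2SL ratio described above.
    # Figures are rough illustrative assumptions, not live data.
    gold_usd_per_oz = 2600.0     # assumed spot gold price, USD per troy ounce
    m2sl_billions_usd = 21500.0  # assumed US M2 money stock (FRED: M2SL), billions of USD

    ratio = gold_usd_per_oz / m2sl_billions_usd
    print(f"Gold / M2SL: {ratio:.2f}")  # ~0.12 with these inputs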


I switched to Windsurf.ai when Cursor broke for me. Seems about the same but less buggy. Haven't used it in the last couple of weeks, though, so YMMV.


I found the Windsurf agent to be somewhat less capable, but it’s their inline tool (and the “Tab” feature they’re promoting so much) that has been extremely underwhelming compared to Cursor.

The only one in this class to be even worse in my experience is GitHub Copilot.


For those who are actually interested in this field, the proper way to measure this would be with a four-point probe. You do need a constant current source and a high-impedance voltmeter, though.

Also, you don't need to solder wires to the sample. But if you want to measure the Hall resistance of a thin semiconductor film, you can solder a glob of indium onto each of the four corners of a 1 cm x 1 cm wafer, put it in a magnetic field, and then do basically the same measurement as a four-point probe, except with the contacts not in a line.
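
For the in-line four-point probe case (equal probe spacing, film much thinner than the spacing, sample much larger than the probe head), the standard thin-film result is a sheet resistance of R_s = (pi / ln 2) * V / I. A minimal sketch with made-up example numbers:

    import math

    # In-line four-point probe on a thin, laterally large film:
    # sheet resistance R_s = (pi / ln 2) * V / I, in ohms per square
    def sheet_resistance(voltage_v, current_a):
        return (math.pi / math.log(2)) * voltage_v / current_a

    # Hypothetical example: force 1 mA, measure 4.5 mV across the inner probes
    rs = sheet_resistance(4.5e-3, 1e-3)
    print(f"R_s ~ {rs:.1f} ohm/sq")  # ~20.4 ohm/sq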


You can train a model of that size on ~1 billion tokens in ~3 minutes on a rented 8xH100 80GB node (~$9/hr on Lambda Labs, RunPod, etc.) using the NanoGPT speedrun repo: https://github.com/KellerJordan/modded-nanogpt

For a run that short, you'll spend more time waiting for the node to come up, downloading the dataset, and compiling the model than actually training, though.
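
Back-of-envelope on what those numbers imply (a sketch based on the figures above, not measured benchmarks):

    # ~1B tokens in ~3 minutes on an 8xH100 node at ~$9/hr
    tokens = 1e9
    minutes = 3
    gpus = 8
    node_usd_per_hr = 9.0

    tokens_per_sec = tokens / (minutes * 60)         # ~5.6M tokens/s for the node
    tokens_per_sec_per_gpu = tokens_per_sec / gpus   # ~0.7M tokens/s per GPU
    train_cost_usd = node_usd_per_hr * minutes / 60  # ~$0.45 of pure training time

    print(f"{tokens_per_sec:.2e} tok/s total, "
          f"{tokens_per_sec_per_gpu:.2e} tok/s per GPU, ~${train_cost_usd:.2f}")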


> That application was submitted in March 2024 and is on track for approval in December 2026

Huh? Is this something where there are multiple incremental steps in the process and that date is just the final approval stamp, or does it actually just take more than 1.5 years?


I'm generally pretty open to the idea that the NRC is bad and needs to be reformed, but a year and a half doesn't seem that unreasonable? Especially for a new reactor design.


I'm pretty sure this is extremely fast for the nuclear industry.


It's always fun seeing someone jump into NRC discourse for the first time.


I hope this involves much faster feedback/modification cycles, and that the process ends once all the feedback has been addressed.


Feds


Agreed, more levels seem like they could be beneficial. And another Meta paper, published a day later, shows how that might work: https://ai.meta.com/research/publications/large-concept-mode...


Apparently they're using this CMOS sensor: https://www.onsemi.com/products/sensors/image-sensors/ar0136...

It's not an event camera, so it's very much taking images, which are then being processed by computer vision algorithms.

Event cameras seem more viable than CMOS sensors for autonomous vehicle applications in the absence of LIDAR. CMOS dynamic range and response aren't as good as the human eye's. LIDAR+CMOS is considerably better in many ways.


Nah, that's old. They now use https://www.sony-semicon.com/files/62/pdf/p-15_IMX490.pdf and run multi-exposure on every frame. Stupid high dynamic range.


Oh, interesting. 120 dB is much better than what I thought was possible with CMOS sensors. That's competitive with the human eye.

Well, that definitely changes my opinion on how feasible/competitive camera-only autonomous driving can be.
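
For reference, converting that 120 dB figure into a linear contrast ratio (quick sketch; the comparison to the human eye is rough, since the eye's usable range depends heavily on adaptation):

    # Image-sensor dynamic range is typically quoted as 20 * log10(max_signal / noise_floor)
    def db_to_contrast_ratio(db):
        return 10 ** (db / 20)

    print(f"120 dB -> {db_to_contrast_ratio(120):,.0f}:1")  # 1,000,000:1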


Next time you’re facing blinding direct sunlight, pull out your iPhone and take a picture/video. It’s a piece of cake for it. And it has to do far more post-processing to make a compelling JPEG/HEIC for humans. Tesla can just dump the sensor data from the short and long exposures straight into the neural net.
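
Not Tesla's actual pipeline, obviously, but here's a minimal sketch (hypothetical numpy code) of the basic short/long exposure merge idea: keep the long exposure wherever it isn't clipped, and fall back to the short exposure rescaled by the exposure ratio where it is.

    import numpy as np

    def merge_exposures(short, long, exposure_ratio, clip=0.95):
        """Naive two-exposure HDR merge (illustrative only).

        short, long: float images in [0, 1]; exposure_ratio = long_time / short_time.
        Where the long exposure is clipped, substitute the short exposure
        rescaled into the long exposure's units.
        """
        hdr = long.copy()
        saturated = long >= clip
        hdr[saturated] = short[saturated] * exposure_ratio
        return hdr  # linear HDR values, no tone mapping

    # Toy example: the last two pixels clip the long exposure
    short = np.array([0.01, 0.10, 0.50])
    long = np.array([0.16, 0.99, 1.00])
    print(merge_exposures(short, long, exposure_ratio=16.0))  # [0.16, 1.6, 8.0]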


Humans can also decide they want to get a better look at something and move their head (or block out the sun with their hands), which cameras generally don't do.


> if you compare Intel's headcount with comparable companies (AMD, Nvidia), you can see Intel is really wasteful

AMD and NVIDIA are fabless. They are not comparable. It takes far more people to R&D a cutting-edge process node and run a dozen fabs 24/7, 365.25 days a year than it takes to design cutting-edge chips.


> AMD and NVIDIA are fabless. They are not comparable.

Which is why I said:

> You can compare just the Intel Products head count (excluding the fabs).

Both AMD and Nvidia have under 30K folks. Intel has, what, 115K employees? I can assure you that 85K of them are not working in Foundry. TSMC, BTW, has 76K employees, in case you want to do a foundry comparison. Any way you slice it (compare products or compare fabs), Intel is wasteful.


If both sides are frosted, then you will get an effect similar to subsurface scattering.


Yes, it looks similar. But it's still not subsurface scattering.

Light inside the frosted glass just travels in a straight line. It will not behave like this: https://blendamator.com/wp-content/uploads/2023/09/schema-ra...
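
A toy way to see the difference (hypothetical Monte Carlo sketch, not production rendering code): in a true subsurface-scattering medium, light takes a random walk inside the volume before exiting, whereas frosted glass only randomizes the direction once, at the rough surface, and the ray then travels in a straight line through the interior.

    import math, random

    def photon_random_walk(mean_free_path=0.1, slab_thickness=1.0, max_steps=1000):
        """Toy 2D random walk of a photon inside a scattering slab (illustrative only)."""
        x, z = 0.0, 0.0  # z is depth into the slab
        angle = 0.0      # start heading straight into the material
        for _ in range(max_steps):
            step = random.expovariate(1.0 / mean_free_path)
            x += step * math.sin(angle)
            z += step * math.cos(angle)
            if z <= 0.0:
                return "exited front face", x
            if z >= slab_thickness:
                return "exited back face", x
            angle = random.uniform(0.0, 2.0 * math.pi)  # volumetric scattering event
        return "absorbed/lost", x

    # Frosted glass, by contrast, would randomize `angle` once at z = 0 and then
    # travel in a straight line with no further scattering events inside.
    print(photon_random_walk())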

