Gold/M2SL (M2 in billions of USD) is currently around 0.12. In 1980 it peaked around 0.45; in late 2011 it peaked around 0.18. The monthly average since 1960 is 0.11.
Gold / Global M2 would be a better metric, but I haven't analyzed that yet.
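For clarity, here's how I'm reading that ratio, as a minimal Python sketch; the spot price and M2 level are illustrative placeholders I picked, not live data:

```python
# Reading "Gold/M2SL" as: USD gold price per troy ounce divided by
# M2 money stock in billions of USD (FRED series M2SL).
gold_usd_per_oz = 2300.0    # assumed spot gold price, $/oz
m2sl_billions   = 20_800.0  # assumed M2 money stock, $B

print(round(gold_usd_per_oz / m2sl_billions, 3))  # ~0.111
```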
I found the Windsurf agent to be relatively less capable, and their inline tool (and the “Tab” feature they’re promoting so much) has been extremely underwhelming compared to Cursor.
The only one in this class that's even worse, in my experience, is GitHub Copilot.
For those who are actually interested in this field, the proper way to measure this would be with a four-point probe. You do need a constant current source and a high-impedance voltmeter, though.
Also, you don't need to solder wires to the sample. But if you want to measure the Hall resistance of a thin semiconductor film, you can solder a glob of indium onto each of the four corners of a 1 cm x 1 cm wafer, put it in a magnetic field, and then do basically the same measurement as a four-point probe, except with the contacts at the corners rather than in a line.
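A minimal sketch of the arithmetic involved, assuming collinear, equally spaced probes on a film much larger than the probe spacing; function names and example numbers are mine, purely for illustration:

```python
import math

def sheet_resistance_collinear(voltage_v, current_a):
    """Sheet resistance (ohm/square) from a collinear four-point probe.

    Assumes equally spaced probes on a thin film whose lateral extent is
    much larger than the probe spacing, so the standard geometric factor
    pi/ln(2) ~= 4.532 applies.
    """
    return (math.pi / math.log(2)) * voltage_v / current_a

def hall_sheet_density(current_a, field_t, hall_voltage_v):
    """Sheet carrier density (1/m^2) from a Hall measurement.

    n_s = I * B / (e * V_H); the sign of V_H gives the carrier type.
    """
    e = 1.602176634e-19  # elementary charge, C
    return current_a * field_t / (e * hall_voltage_v)

# Made-up example: force 1 mA, measure 2.3 mV across the inner probes.
print(sheet_resistance_collinear(2.3e-3, 1e-3))  # ~10.4 ohm/sq
# Made-up example: 1 mA, 0.5 T field, 1.2 mV Hall voltage.
print(hall_sheet_density(1e-3, 0.5, 1.2e-3))     # ~2.6e18 m^-2
```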
You can train a model of that size on ~1 billion tokens in ~3 minutes on a rented 8xH100 80GB node (~$9/hr on Lambda Labs, RunPod, etc.) using the NanoGPT speedrun repo: https://github.com/KellerJordan/modded-nanogpt
For that short of a run, you'll spend more time waiting for the node to come up, downloading the dataset, and compiling the model, though.
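Back-of-the-envelope numbers implied by the figures above (all inputs are the rough figures from the comment, not measurements):

```python
tokens       = 1e9   # ~1B training tokens
minutes      = 3     # ~3 min wall clock on the 8xH100 node
usd_per_hour = 9.0   # rental rate for the node

print(f"{tokens / (minutes * 60):,.0f} tokens/s")    # ~5,555,556
print(f"${usd_per_hour * minutes / 60:.2f} per run")  # ~$0.45
```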
> That application was submitted in March 2024 and is on track for approval in December 2026
Huh? Is this something where there are multiple incremental steps in the process, and that date is just the final approval stamp, or does it actually just take nearly three years?
I'm generally pretty open to the idea that the NRC is bad and needs to be reformed, but under three years doesn't seem that unreasonable? Especially for a new reactor design.
It's not an event camera, so it's very much taking images, which are then processed by computer vision algorithms.
Event cameras seem more viable than CMOS sensors for autonomous vehicle applications in the absence of LIDAR. CMOS dynamic range and response aren't as good as the human eye's. LIDAR+CMOS is considerably better in many ways.
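For anyone unfamiliar with the distinction: an event camera reports per-pixel changes in log intensity rather than full frames. Here's a toy frame-based model of that behavior (the threshold and arrays are made up for illustration):

```python
import numpy as np

def events_from_frames(prev_frame, frame, threshold=0.2, eps=1e-6):
    """Toy event-camera model: emit +/-1 "events" wherever the change in
    log intensity between two frames exceeds a contrast threshold.

    Real event cameras do this asynchronously per pixel with microsecond
    latency; this frame-based version just illustrates why they handle
    high dynamic range well (log response, change-driven output).
    """
    dlog = np.log(frame + eps) - np.log(prev_frame + eps)
    events = np.zeros_like(dlog, dtype=np.int8)
    events[dlog > threshold] = 1    # brightness increased
    events[dlog < -threshold] = -1  # brightness decreased
    return events

# A dim pixel doubling in brightness fires an event; an unchanged
# bright pixel stays silent, no matter how bright it is.
prev = np.array([[0.01, 0.5]])
curr = np.array([[0.02, 0.5]])
print(events_from_frames(prev, curr))  # [[1 0]]
```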
Next time you’re facing blinding direct sunlight, pull out your iPhone and take a picture/video. It’s a piece of cake for it. And it has to do far more post-processing to make a compelling JPEG/HEIC for humans. Tesla can just dump the data from the sensor, from short and long exposures, straight into the neural net.
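To illustrate what combining short and long exposures buys you, here's a generic, textbook-style exposure merge with Debevec-style weighting; this is an assumption-laden sketch, not whatever Tesla's pipeline actually does:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Naive HDR merge: average each pixel's radiance estimate
    (pixel_value / exposure_time) across an exposure bracket, weighting
    out values near saturation or the noise floor.
    """
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # triangle weight, peak at 0.5
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)

short = np.array([[0.40, 0.02]])  # sun-lit pixel ok, shadow underexposed
long_ = np.array([[1.00, 0.30]])  # sun-lit pixel clipped, shadow ok
print(merge_exposures([short, long_], [1/4000, 1/250]))
```

The clipped pixel in the long exposure gets zero weight, so the merged result keeps detail in both the sun-lit and shadowed regions.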
Humans can also decide they want to get a better look at something and move their head (or block out the sun with their hands), which cameras generally don't do.
> if you compare Intel's headcount with comparable companies (AMD, Nvidia), you can see Intel is really wasteful
AMD and NVIDIA are fabless. They are not comparable. It takes far more people to R&D a cutting-edge process node and run a dozen fabs 24/7/365 than it takes to design cutting-edge chips.
> AMD and NVIDIA are fabless. They are not comparable.
Which is why I said:
> You can compare just the Intel Products head count (excluding the fabs).
Both AMD and Nvidia have under 30K folks. Intel has, what, 115K employees? I can assure you that 85K of them are not working in Foundry. TSMC, BTW, has 76K employees in case you want to do a foundry comparison. Any way you slice it (compare products or compare fabs), Intel is wasteful.
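The implied arithmetic, using the ballpark figures above (my reading of the commenter's numbers, not audited filings):

```python
intel_total = 115_000
tsmc_total  =  76_000  # pure-play foundry, for scale
fabless_max =  30_000  # AMD and Nvidia are each under this

# Even granting Intel a TSMC-sized foundry workforce, the implied
# products-side headcount still tops either fabless rival on its own.
intel_products_implied = intel_total - tsmc_total
print(intel_products_implied)  # 39,000 vs <30,000 at AMD or Nvidia
```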
Oh, the irony.