That's between Waymo and their investors at this point. They claim it's not, but it's not like there's any way for them to actually prove they aren't, much like the moon landing.
FSD on the other hand works fine without sleight of hand techniques, since I’ve taken it up to rural Maine without any cellular connectivity and it worked great, even in irregular rural traffic situations.
> active fluid exchangers operating at speed spanning kilometers of real estate, to get dissipation/area anywhere back near linear/area again
Could the compute be distributed instead? Instead of gathering all the power into a central location to power the GPUs there, stick the GPUs on the back of the solar panels as modules? That way, even if you need an active fluid exchanger, it doesn’t have to span kilometers, just meters.
I guess that would increase the cost of networking between the modules. Not sure if that would be prohibitive or not.
> He’s working with a company to develop nanosensors able to detect movement in the iceberg so he has advance warning of a flip
The "nanosensors" doesn't sound likely at all. If I were to tasked to create a "iceberg sudden flip detector" I would break the problem into two parts. Part 1 is monitoring the shape of the iceberg as it is changing. Part 2 is modelling how stable the iceberg is given the measured shape. Both sounds like a wicked hard problem even if you have a large team of engineers.
For the first, maybe you could do periodic ultrasounds from the inside out. Embedding an array of acoustic transducers and an array of microphones in the ice, then using signal processing black magic to pick out the shape from the echoes you get back from the ice-ocean boundary. Or just hang around with a ship-mounted side-scanning sonar and monitor the iceberg from the outside.
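To make the echo idea concrete, here is a rough sketch (all numbers and names invented) of how one transducer/microphone pair could give you a range to the ice-ocean boundary by cross-correlating the transmitted ping with the echo:

  import numpy as np

  SPEED_OF_SOUND_ICE = 3800.0  # m/s, ballpark for glacial ice
  FS = 100_000                 # sample rate in Hz (assumed)

  def echo_range(tx, rx):
      """One-way distance (m) to the strongest reflector for one sensor pair."""
      # The lag of the correlation peak is the round-trip travel time.
      corr = np.correlate(rx, tx, mode="full")
      lag = np.argmax(np.abs(corr)) - (len(tx) - 1)
      return SPEED_OF_SOUND_ICE * max(lag, 0) / FS / 2

Repeat that over many embedded pairs and over time and you get a cloud of range estimates to fuse into a rough hull shape; the hard part is everything I'm hand-waving as "signal processing black magic".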
The second one should be a "simple" Monte Carlo simulation. But to validate it you would need data recorded from the evolution of many icebergs, which I suspect would be expensive and lengthy to obtain.
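For flavour, the stability model could start as crude as this (a box-shaped cross-section, metacentric height, and Monte Carlo over measurement error; all dimensions and uncertainties are made up):

  import numpy as np

  rng = np.random.default_rng(0)
  N = 100_000

  # Dimensions (m) from the shape survey, with their measurement uncertainty.
  width  = rng.normal(300.0, 20.0, N)   # waterline width B
  height = rng.normal(250.0, 25.0, N)   # total height H
  rho    = rng.normal(0.88, 0.01, N)    # ice/seawater density ratio

  draft = rho * height                  # submerged depth d
  kb = draft / 2                        # centre of buoyancy above the keel
  kg = height / 2                       # centre of gravity above the keel (uniform ice)
  bm = width**2 / (12 * draft)          # metacentric radius for a box section
  gm = kb + bm - kg                     # metacentric height; <= 0 means it wants to flip

  print(f"flip probability estimate: {np.mean(gm <= 0):.1%}")

A real iceberg is nowhere near a box, so you would replace the closed-form terms with integrals over the measured hull, but the skeleton (sample the uncertainty, count the unstable fraction) stays the same.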
"Nanosensors" is useless technobabble. But I bet you could do it by carefully monitoring the rocking of the iceberg in waves. Watch the period of the berg's movements; as the melting brings it closer to instability, the period would get longer and longer, which could give you some warning. (You couldn't predict the consequence of some portion breaking off, but it might give you something.)
> traffic lights by design are very clearly red, or green
I suspect you feel this because you are observing the output of a very sophisticated image processing pipeline in your own head. When you are dealing with raw matrices of RGB values it all becomes a lot fuzzier, especially across different illuminations and exposures, and when the crop of the traffic light is noisy. Not saying it is some intractably hard machine vision problem, because it is not. But there is some variety and fuzziness there in the raw sensor measurements.
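As a toy illustration (thresholds entirely arbitrary), even a trivial "which lamp is lit" classifier already needs judgement calls about brightness, saturation and noise before it can call anything red or green:

  import numpy as np
  import colorsys

  def classify_light(crop):
      """crop: HxWx3 uint8 RGB patch roughly centred on the lamp."""
      pixels = crop.reshape(-1, 3) / 255.0
      hsv = np.array([colorsys.rgb_to_hsv(*p) for p in pixels])
      lit = hsv[(hsv[:, 1] > 0.4) & (hsv[:, 2] > 0.5)]  # bright, saturated pixels only
      if len(lit) < 10:
          return "unknown"          # underexposed, occluded, or mostly housing
      hue = np.median(lit[:, 0]) * 360
      if hue < 25 or hue > 340:
          return "red"
      if 70 < hue < 180:
          return "green"
      return "ambiguous"            # amber, sodium glare, LED colour shift...

Every one of those cut-offs moves around with the camera, exposure and weather, which is the fuzziness I mean.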
I once solved a machining problem using SVG and a bit of javascript and python.
I was prototyping an orrery. It involved cutting out a lot of ad-hoc gears and frame bits on my CNC out of a sheet of brass. It was relatively easy to generate the g-code for the individual parts using fusion360, but then it was a lot of faff to zero the machine so that it cut each part from a fresh part of the brass sheet without wasting too much metal between the parts. It involved a lot of guesswork and eyeballing, and even then a lot of brass was “wasted” between the parts, especially since you could only move your part in x-y but not easily rotate it.
As a solution I wrote a Python script which converted the g-code into SVG, and a simple one-page website where I could drag the SVG around and rotate it on a visual representation of the sheet. Once I found a good safe spot for it to be cut, the page told me the x, y, theta coordinates for it. Then, with a separate Python script, I could transform the g-code using those coordinates and the rotation. This way the SVG renderer was doing the heavy lifting of visualising the cutting paths, and I only needed to concentrate on the relatively easy transforms.
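The transform script boiled down to something like this (reconstructed from memory, so the exact g-code handling is approximate): rotate each X/Y word around the origin, then shift it to the chosen spot.

  import re, math

  def transform_gcode(lines, dx, dy, theta_deg):
      """Rotate by theta around the origin, then translate by (dx, dy)."""
      c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
      out = []
      for line in lines:
          xm = re.search(r"X(-?\d+\.?\d*)", line)
          ym = re.search(r"Y(-?\d+\.?\d*)", line)
          if xm and ym:   # straight moves only; arcs would need I/J handled too
              x, y = float(xm.group(1)), float(ym.group(1))
              nx, ny = c * x - s * y + dx, s * x + c * y + dy
              line = re.sub(r"X-?\d+\.?\d*", f"X{nx:.3f}", line)
              line = re.sub(r"Y-?\d+\.?\d*", f"Y{ny:.3f}", line)
          out.append(line)
      return out

Since all the parts were flat 2D profiles, a plain rotation plus translation of the X/Y words was all the maths needed.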
I don’t understand your point about UPS 2976. You make it sound as if people there were hurt by engine parts hitting them, when in actuality it was the airplane crashing into them that killed those unfortunate people.
Even aviation turbines are quite safe, and uncontained engine malfunctions are very rarely a problem. On top of that, there is every reason to think that ground-based power-generating applications can be even safer. Their weight is much less of a constraint, so you can easily armour the container to a much higher assurance level. A terrestrial turbine is not jostled around, so you have less of a concern about gyroscopic effects. And finally, you can install the power-generating turbine with a much larger keep-out zone. All three factors make terrestrial power-generating jets safer than the aviation ones.
The plane suffered an engine mount failure, which tore a hole in the wing and sprayed shrapnel into engine 2, causing a compressor stall that reduced thrust below the survivable level. Then it crashed into a fuel recycling plant with a full load of jet fuel.
The scary part of the mount failure is that the mounts cracked in an unexposed part where visual inspection did not reveal the damage. It wasn't due for a teardown and inspection until it had traveled another 25% (it was at roughly 80% of the maintenance window). That's why they grounded the entire fleet.
Takeoffs are dangerous because they run the engines hard, and parts are operating in the supersonic range.
I’m aware of the facts you cite. But they have nothing to do with terrestrial operations. If the same thing happened to an engine sitting next to a data center, the worst thing that could happen is that it knocks the neighbouring engines out too. And if you are worried about that, you can add more armouring between the engines, which you can do because they don’t need to fly. Heck, you can put a row of Hesco barriers between the engines in a terrestrial application. But either way, the data center is not going to suddenly fall on a fuel recycling plant.
The purpose of the zoomed-out comparison is to show the quality reduction from applying this tool. The purpose of the zoomed-in before picture is to show what a typical pixel misalignment looks like. Aligned pixels can be easily imagined.
> The purpose of zoomed out comparison is to show the quality reduction of applying this tool.
Reduction? Shouldn't the tool be improving the quality of the image? If it is reducing the quality then why do it?
> The purpose of zoomed in before picture is to show how a typical pixel misalignment.
Okay, but how does this supposed "misalignment" look in the picture? Would I even notice it? If not, does it matter? Did they just zoom in and draw a misaligned grid over the zoomed-in image? Or are the grid fault lines visible in the gestalt?
> Aligned pixels can be easily imagined.
Everything can be easily imagined. Misaligned pixels can be imagined. They could just write "our processed images look better" and let me imagine how much nicer they are. The purpose of a comparison is to prove that they are nicer/better/crisper whatever they want to claim.
The way I see it, converting something to pixel art is akin to lossy compression or quantization. The goal is to retain as much detail as possible given the constraints.
The exact way that pixels are misaligned is a feature of the specific AI models that generated the almost-pixel art.
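As a loose sketch of that quantization view (the cell size and the per-cell median rule are my assumptions), "fixing" almost-pixel art is basically snapping it onto a clean grid:

  import numpy as np

  def snap_to_grid(img, s):
      """img: HxWx3 array, s: pixel-art cell size in source pixels."""
      h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
      cells = img[:h, :w].reshape(h // s, s, w // s, s, 3)
      return np.median(cells, axis=(1, 3)).astype(img.dtype)  # one colour per cell

The part the AI models get wrong is the grid offset and size, which is exactly what a before/after crop is trying to show.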
There are more details in the fixed version too, e.g. an extra detailed dark line within the right leg (tibia) that is not present in the original; where do these details come from?
> But for all the complex parts, the eval function is basically one bit (you crashed/not crashed)
It is not like crashed/not crashed is the only possible eval function.
It can easily be much more nuanced than that. Having the driving system predict how everyone will move next is a good sub-goal. Checking whether, if you were in the position of another driver and seeing what they see, your code would be driving the same way as them is also a good sub-goal. (Obviously total alignment here is neither possible nor desirable.)
Another evaluation is to check whether you forced anyone to change speed or swerve to avoid you. And then you can have synthetic scenarios for every time you approached a lane which had priority over you. You can add conflicting vehicles approaching (with different timings and speeds) and see if your own vehicle notices and handles them correctly. (And “handles them correctly” is not a binary crashed/not crashed either; you can check whether the vehicle inconvenienced the simulated vehicle.)
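As a sketch of one of those softer signals (data structures and the threshold invented for illustration), you could score whether any simulated vehicle was forced to brake harder than a comfort limit:

  import numpy as np

  COMFORT_DECEL = 3.0  # m/s^2; harder braking than this counts as "inconvenienced"

  def inconvenience_score(other_speeds, dt):
      """other_speeds: (num_agents, num_steps) speeds of the non-ego vehicles."""
      decel = -np.diff(other_speeds, axis=1) / dt   # positive values = slowing down
      worst = decel.max(axis=1)                     # harshest braking per agent
      return float(np.mean(worst > COMFORT_DECEL))  # 0.0 is a clean run

Run the same scenario with a reference policy (or without the ego at all) and compare scores, so braking the agents would have done anyway doesn’t count against the system under test.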
They do follow hand signals from police. There are many videos documenting the behaviour. Here is one from Waymo: https://waymo.com/blog/2024/03/scaling-waymo-one-safely-acro...
Look for the embed next to the text saying “The Waymo Driver recently interpreting a police officer’s hand signals in a Los Angeles intersection.”
Or here is a video observing the behaviour in the wild: https://youtu.be/3Qk_QhG5whw?si=GCBBNJqB22GRvxk1
Do you want confirmation about something more specific?