Sometimes, late at night when I'm trying to sleep, and I hear the grumble of a Harley, or my neighbors staggering to their door, I wonder: why do we not have earflaps, like we do eyelids?
Fair enough, I thought what I'd originally written for that section was too wordy, so I asked Claude to rewrite it. I'll go a bit lighter on the AI editing next time. Here's most of the original with the code examples omitted:
Watching Tuomas' initial talk about Linear's realtime sync, one of the most appealing aspects of their design was the reactive object graph they developed. They've essentially made it possible for frontend development to be done as if it's just operating on local state, reading/writing objects in an almost Active Record style.
The reason this is appealing is that when prototyping a new system, you typically need to write an API route or RPC operation for every new interaction your UI performs. The flow often looks like:
- Think of the API operation you want to call
- Implement that handler/controller/whatever based on your architecture/framework
- Design the request/response objects and share them between the backend/frontend code
- Potentially, write the database migration that will unlock this new feature
Jazz offers the same benefits of sync plus an observable object graph. Write a schema in the client code, run the Jazz sync server or use Jazz Cloud, then just work with what feel like plain JS objects.
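To make the "feels like plain JS objects" idea concrete, here's a minimal sketch of a reactive object graph. This is NOT Jazz's (or Linear's) real API — just a Proxy-based toy showing the general pattern: reads and writes look like ordinary local state, while every write is observed, which is where a sync engine would hook in to replicate changes.

```typescript
// Hypothetical sketch of a reactive object, not Jazz's actual API.
type Listener = (key: string, value: unknown) => void;

function reactive<T extends object>(target: T, listeners: Listener[]): T {
  return new Proxy(target, {
    set(obj, key, value) {
      (obj as Record<string | symbol, unknown>)[key] = value;
      // In a real sync engine, this is where the change would be queued
      // for replication to the server and other clients.
      for (const l of listeners) l(String(key), value);
      return true;
    },
  });
}

// Usage: writing to the object looks like plain local state,
// but the listener (standing in for the sync layer) sees every change.
const listeners: Listener[] = [];
listeners.push((key, value) => console.log(`changed ${key} ->`, value));

const todo = reactive({ title: "Write schema", done: false }, listeners);
todo.done = true; // logs: changed done -> true
```

The appeal is that the frontend never has to think about request/response shapes — the observation layer carries the changes, which is the property both Linear's sync and Jazz are built around.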
There is a risk to DUI checkpoints and speeding checkpoints even if you are doing neither. Innocent people die at the hands of the police fairly often, but many more are wrongfully imprisoned. Wanting to limit your interactions with the police is a valid safety and risk management proposal.
If safety were the real goal, the police themselves would announce checkpoints and speed traps. This gives people a chance not to drink too much or speed in the first place. I've lived in places where DUI checkpoints were all announced ahead of time, and I think for many it was a serious reminder not to drink and drive.
But for many DUI checkpoints safety is not the goal. It's simply a pretext to check everyone's papers.
That only works if you actually have DUI checkpoints all the time, everywhere. The checks are random precisely so that people need to be careful all the time. If there's a DUI checkpoint twice a year in your area, you can just avoid driving drunk on those two days.
They do. DUI checkpoints are heavily advertised here in California for exactly that reason — to deter drunk drivers. The only thing they don't do is tell the exact intersection so drunks don't just drink and drive the other direction.
What about people who are on their way to work (or somewhere else time-sensitive) and want to be aware of places with a slowdown because of checkpoints?
1. I didn't say people can't have another opinion. I didn't say that because I don't believe it and never implied otherwise.
2. Supposing I did believe it and did say it, I would be well within my rights to say it. The First Amendment assures the right to say things like that, no matter how dumb and misguided those things are.
Doctors and teachers handle that, since they have regular contact with children. At least in my state they're required by law to report suspected child abuse.
As a side note, these laws are doing damage to organizations looking for volunteers that I don't think we have fully grasped yet.
People are willing to put a couple of weekends into making a middle school or high school competition happen. They're a lot less willing to do it if they have to go to an FBI station to get fingerprinted or produce a state and federal background check first. And I'm not talking about people with something to hide; I'm talking about people with a completely clean background who just don't want to be bothered.
NZ OP here. A few weeks ago there was a morning checkpoint to inspect everyone's child car seat installation.
A few years back I got chased by a cop and ticketed (and scolded) for not restraining my kiddo. Small town, and my clever 2yo had somehow learned how to unbuckle themselves; even that Houdini clip didn't help. I was warned I could get prosecuted for child neglect if it continued. I suspect the daycare had tipped him off.
Making slippery slope arguments like this is not discussing in good faith. I was providing the context of someone who lives in that geo-political area.
And check that every single one of your federal papers is present and in order. We'd hate to have someone so unbecoming as to refuse a full disclosure of themselves to officers on the road.
My guess (it would be nice if they actually said...) is that they were missing the required lang attribute on their HTML.
<html lang="en">
If it's not defined, the document language defaults to unknown (not to the user's locale), which makes Chrome guess. And there wasn't much text in the lightbox (which might be a different page?) for the browser to infer from.
That's probably true; however, I'd be really curious to know why Chrome's guess for "yes" is the Spanish word for "Y-junctions" instead of the English word "yes".
This was my immediate thought, but it doesn't sound like what they did. They also mention they still get the Google translate pop up - which suggests they didn't.
Though it sounds like they serve many languages, so they'd need to do each survey individually.
We used to have this problem in AWS Rekognition: a poorly detected face -- e.g. a blurry face in the background -- would match with high confidence against every other blurry face. We fixed that largely by adding specific tests against this [effectively] null vector. The same approach works for text or other image vectors.
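A sketch of what such a null-vector guard can look like, under stated assumptions — this is not Rekognition's actual implementation. The idea: embeddings from poorly detected faces carry almost no signal and cluster near one "null" direction, so they match each other with spuriously high cosine similarity. Estimating that direction from known-bad detections (here `NULL_VECTOR`, a toy 4-dimensional stand-in) and rejecting anything too close to it filters them out before matching. The threshold value is also an assumption.

```typescript
// Toy sketch of a null-vector test for degenerate embeddings.
function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Hypothetical centroid of "no signal" embeddings; in practice you'd
// estimate this by averaging embeddings of known blurry/failed detections.
const NULL_VECTOR = [1, 1, 1, 1];
const NULL_SIM_THRESHOLD = 0.95; // assumed cutoff, tuned per model

function isDegenerate(embedding: number[]): boolean {
  return cosine(embedding, NULL_VECTOR) > NULL_SIM_THRESHOLD;
}

function safeSimilarity(a: number[], b: number[]): number | null {
  // Refuse to score degenerate embeddings instead of letting them
  // match everything with high confidence.
  if (isDegenerate(a) || isDegenerate(b)) return null;
  return cosine(a, b);
}
```

The same gate applies unchanged to text or other image embeddings: anything whose vector sits too close to the "no signal" region gets excluded before similarity search rather than scored.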
And it was only parallax pseudo-3D. It wasn't anything like the 3DS lenticular stereo effect. Four (!) head-tracking cameras on the front tried to understand where you were looking, and then the UI would shift about in response. There was no convincing 3D effect.
The intention was that the UI would reveal extra info (like meta-data about a photo, or calendar details) when you shifted your head POV. Like a gesture that you could do one-handed while riding the bus. (But not discoverable, and twitchy, and tail-wagging-the-dog.)
Given how coarse the tracking had to be so that the UI wasn't constantly shifting, it would have been far simpler and more energy efficient just to use the accelerometer instead of the fancy camera setup. And just use that for some nice parallax effects in the launcher; not as a core UI interaction.
Thanks, AI blog writer. Did the AI product manager see too much customer pushback hitting your AI account managers?