With all these new APIs, I worry about what the web will look like in 10 years. Will we have this huge API surface with us forever? Will we start to see deprecations in the web API? I'm worried that XSLT is setting a precedent for deprecating "complex" and "hard to maintain" APIs.
EDIT: Think also about webgl1/2, webgpu, webxr, websocket, webrtc, webauthn ...
A decade or so ago, after an interview that didn't go that well, a candidate reached out asking for feedback. I gave him some algorithms and data structures advice, pointers on where to read what, and stuff like that. He responded really positively, then reached out to me months later to tell me he went and learned all the stuff and got a job at some now-famous startup (Airbnb? I don't remember). I was early in my career back then and was happy for him. Now, if I were to do that, I'd be like "Damn, this guy is capable of taking the feedback and acting on it. I should have somehow found a way to hire him!"
It's about "investment": People spend a lot of time, consciously on purpose or implicitly as a matter of consequence, on making up their plans & preferences.
They've been building up mental velocity to whatever they're going to do.
When you give them a contradictory opinion or advice, you're asking them to discard that investment and abruptly switch directions.
Instead of asking them to drive off their mental road and into the dirt or turn around, offer them something akin to a rail track that they can gradually/subtly switch onto.
I gave advice to a friend who is doing a startup. I told him it probably won't work out. But we each continue with our own line of thinking, because the outcome totally depends on reality. Also, my friend can tack on a lot of ifs later on ("if only this and this and this had happened, I would have been successful") to "prove" himself right. It's possible that, with no decisive outcome favoured by reality, we would both continue to be right in our own heads.
I can't wait to see how coding performance will start to drop with newer tools and versions, as people no longer discuss them in the same detail and quantity as they used to. People using LLMs will be stuck on pre-2023 tools; using new stuff is an uphill battle already (you have to give the model the correct docs manually).
It looks like Valve wants to avoid going down the road of an extremely locked down system like that. They even view the ability to load alternate OS's as a feature of their products.
They could offer both: locked-down, signed software on top of their hardware, plus a bypass for when the user wants to install their own thing. By default I'd prefer a locked-down, signed chain of software bootstrapping, but I also want the ability to run my own.
Good advice; someone should have told me this years ago. When you start, you need to know that this is not your game: work, watch, and learn. Don't even think "this is wrong", "they should do this instead", "they have no idea", "I would do it much better".
"Delivering this feature goes against everything I know to be right and true. And I will sooner lay you into this barren earth, than entertain your folly for a moment longer."
That’s the only way I would utter it: if I can then sit down and do it myself. If I'm asking someone else to do it, I would ask them how hard it would be, whether they need help, or whether they'd suggest a different approach.
I once worked at a place where one of the partners consistently claimed the engineering team over-built and over-thought everything (reality: almost everything was under-engineered and hanging on by a thread.)
His catch phrase was "all you gotta do is [insert dumb idea here.]"
It was anxiety inducing for a while, then it turned into a big joke amongst the engineering staff, where we would compete to come up with the most ridiculous "all you gotta do is ..." idea.
Similar to my experience doing low-level systems work, being prodded by a "manager" with a fifth of my experience. No, I'm not going to implement something you heard about from a candidate in an interview, an individual whom we passed on within the first 30 minutes. No, you reading out the AI overview of a google search to me for a problem that I've thought about for days ain't gonna work, nor will it get us closer to a solution. Get the fuck out of the way.
I'm there right now at my current job. It's always the same engineer, and they always get a pass because (for some reason) they don't have to do design reviews for anything they do, but they go concern-troll everyone else's designs.
Last week, after 3 near-misses, caused by a corner this engineer cut, that would have brought down our service for hours if not days, I chaired a meeting to decide how we were going to improve this particular component. This engineer got invited, and spent the entire allocated meeting time spreading FUD about all the options we gathered. Management decided on inaction.
People think management sucks at hiring good talent (which is sometimes true, but I have worked with some truly incredible people), but one of the most consistent and costly mistakes I’ve observed over my career has been management's poor ability to identify and fire nuisance employees.
I don’t mean people who “aren't rockstars” or people for whom some things take too long, or people who get things wrong occasionally (we all do).
I mean people who, like you describe, systemically undermine the rest of the team’s work.
I’ve been on teams where a single person managed to derail an entire team’s delivery for the better part of a year, despite the rest of the team screaming at management that this person was taking huge shortcuts, trying to undermine other people’s designs in bad faith, bypassing agreed-upon practices and rules and then lying about it, pushing stuff to production without understanding it, etc.
Management continued to deflect and defer until the team lead and another senior engineer ragequit over management’s inaction and we missed multiple deadlines, at which point they started to realize we weren’t just making this up for fun.
You could argue that, but I think a bug is the software failing to do what it was specified to do, or what it promised to do. If security wasn't promised, it's not a bug.
Which is exactly the case here. This CVE is for a hobby codec written to support digital preservation of some obscure video files from the ’90s that are used nowhere else. No security was promised.