> marquee tags were a missed opportunity to implement horizontal scrolling often used on shopping websites. Now it uses JS to achieve the same.
> I have been trying to find other, more commonly known UI patterns that could be done natively.
Are you sure you are talking about the functionality of the marquee tag? What exactly do you mean by "implement horizontal scrolling often used on shopping websites. Now it uses JS to achieve the same"?
For a banner-type text like "Sale: 50% off until New Year" I can imagine this. And it is possible with almost no JS, using a CSS animation on translateX. IIRC you need to pass the initial width of the content once from JS to CSS via a custom property, and update that variable on resize, for example using a ResizeObserver, to avoid a jump when reaching the "wrap around"...
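Roughly like this, as a from-memory sketch (untested; the marquee class names and the --marquee-width property are made up, and for a seamless loop you'd typically also duplicate the text once):

    <div class="marquee"><span class="marquee__inner">Sale: 50% off until New Year</span></div>

    .marquee { overflow: hidden; white-space: nowrap; }
    .marquee__inner {
      display: inline-block;
      animation: marquee 10s linear infinite;
    }
    @keyframes marquee {
      from { transform: translateX(0); }
      to { transform: translateX(calc(-1 * var(--marquee-width))); }
    }

    // Pass the content width to CSS once and keep it updated on resize:
    const inner = document.querySelector('.marquee__inner');
    const updateWidth = () =>
      inner.style.setProperty('--marquee-width', inner.offsetWidth + 'px');
    new ResizeObserver(updateWidth).observe(inner);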
But – sorry if this is a misunderstanding – I have a sneaking feeling that you might also have things in mind like swipeable horizontal lists etc.?
This is also possible using CSS in theory (scroll-snap, overflow: auto, and a bit of CSS to hide the scrollbars). But the last time I tried to use it, it simply wasn't good enough to sell, compared to a fat JS library that did the trick perfectly, on mobile and desktop.
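For reference, the CSS-only version I mean looks roughly like this (a sketch; hiding the scrollbar in particular needs vendor-specific rules):

    .slider {
      display: flex;
      overflow-x: auto;
      scroll-snap-type: x mandatory;
      scrollbar-width: none; /* Firefox */
    }
    .slider::-webkit-scrollbar { display: none; } /* Chrome/Safari */
    .slider > * {
      flex: 0 0 80%;
      scroll-snap-align: start;
    }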
When it comes to UX niceties, I feel it's pretty common to have HTML+CSS solutions that are good enough in theory but suck in practice.
For the horizontal scrolling with "snap", I would also like a good JS-free solution. But I feel that as UX patterns become more interactive and complex, it would be senseless bloat to build a specific implementation for each of them into the platform.
I think that "autocomplete" inputs are a good example for this, as discussed in another thread.
I once tried to implement a custom autocomplete using datalist (including dynamically updating the list with XHRs). In the end, the native element had so many issues that it was a waste of time and a JS solution was way better. At the time, that JS solution was Alpine.js, because it was a B2B app with minimal a11y requirements and JS-allergic developers.
Within an hour, I was polishing keyboard use while the "near-native" solution using datalist was broken in a thousand different ways in every browser and in the end didn't work well at all.
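For context, the approach I tried looked roughly like this (a from-memory sketch, using fetch instead of XHR; the /suggest endpoint is made up):

    <input list="suggestions" id="search">
    <datalist id="suggestions"></datalist>

    const input = document.getElementById('search');
    const list = document.getElementById('suggestions');
    input.addEventListener('input', async () => {
      // Fetch matching entries and rebuild the option list
      const res = await fetch('/suggest?q=' + encodeURIComponent(input.value));
      const items = await res.json();
      list.replaceChildren(...items.map((item) => new Option(item)));
    });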
Before I saw the URL, I read the title and expected a critique of digital platform enshittification, capitalism as a whole, or something like that.
But I also like the decrapulator idea.
In certain environments, it quickly becomes hard to follow these rules, especially when SEO and "best practices" don't really matter.
For example, you might want to avoid reaching into a React/Vue/whatever component's root element via CSS to style it, and instead prefer to add a wrapping HTML element.
From the markup consumer's perspective, this is terrible (div soup).
But as long as it does not make a difference to the user experience, this can be irrelevant for some types of web apps (especially B2B software, or anything that is not a public site).
And from the developer's perspective, the component preserves better isolation and changes are more predictable.
Still, I get the ick when looking at the unnecessarily deep DOM tree and totally understand objecting to that.
I try to pick a middle ground and apply things like flex (child) properties, margin, max-width etc. to root elements of components, but nothing else: basically all properties that only relate to the surroundings (see the sketch below).
This principle also makes sense to me when architecting regular CSS for public websites, e.g. when following approaches similar to BEM etc.
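As a sketch (class names are just examples):

    /* Allowed on a component's root: only properties that
       relate to the surroundings */
    .product-card {
      flex: 1 1 20rem;
      margin: 1rem;
      max-width: 24rem;
    }

    /* Everything else (colors, padding, typography, internal
       layout) stays on elements inside the component */
    .product-card__body { padding: 1rem; }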
Deno is very good at marketing: they also have a nice page about the history of JS.
But just like with this JS trademark thing, it feels like they present themselves as spokespeople and spearhead for the whole JS community, which feels kind of misleading and grandiose.
The mentioned timeline site (link below) also has this issue: it slowly shifts focus from things like the first JS version and the creation of XMLHttpRequest to Deno milestones, as if these events had comparable impact:
And that seems kind of dishonest, designed to nudge outsiders towards thinking Deno is the default server runtime now, which doesn't seem to be true.
Pun aside, my new hobby is using ChatGPT with a pre-prompt along the lines of
"Please reply to each of my prompts with the strongest possible counterarguments you can give. Do not output other text", and then feed it with Wikipedia articles or news headlines.
Goes a long way to demonstrate what false balance is and why AI chatbots rarely contribute anything towards having a more balanced opinion.
It will attack pretty much anything in a seemingly objective tone, doubting even basic historical facts or derailing the conversation.
For example, when prompted with a sentence about the date of Thatcher's election victory in the UK and the date she took office, it complained about implying causation between the election result and her tenure, because, formally, only the monarch decides who becomes PM.
That was also one of the more useful answers :)
But the quoted sentence didn't even say what it claimed, it just said she took office after that election result.
"Paris is the capital of France" is a coherent sentence, just like "Paris dates back to Gaelic settlements in 1200 BC", or "France had a population of about 97,24 million in 2024".
The coherence of sentences generated by LLMs is "emergent" from the unbelievable amount of data and training, just like the correct factoids ("Paris is the capital of France").
It shows that artificial neural networks using this architecture and training process can learn to use language fluently, which was the goal, wasn't it? And because language is tied to the real world, being able to make true statements about the world is, to some degree, part of being fluent in a language; fluency is never just syntax, but also semantics.
I get what you mean by "miracle", but your argument revolving around this doesn't seem logical to me, apart from the question: what is the "other miracle" supposed to be?
Zooming out, this seems to be part of the issue: semantics (concepts and words) neatly map the world, and have emergent properties that help to not just describe, but also sometimes predict or understand the world.
But logic seems to exist outside of language to a degree, being described by it. Just like the physical world.
Humans are able to reason logically, not always correctly, but language allows for peer review and refinement. Humans can observe the physical world. And then put all of this together using language.
But applying logic or being able to observe the physical world doesn't emerge from language. Language seems like an artifact of doing these things and a tool to do them in collaboration, but it only carries logic and knowledge because humans left these traces in "correct language".
> But applying logic or being able to observe the physical world doesn't emerge from language. Language seems like an artifact of doing these things and a tool to do them in collaboration, but it only carries logic and knowledge because humans left these traces in "correct language".
That's not the only element that went into producing the models. There's also the anthropic principle - they test them with benchmarks (that involve knowledge and truthful statements) and then don't release the ones that fail the benchmarks.
And there is Reinforcement Learning, which is essential to make models act "conversational" and coherent, right?
But I wanted to stay abstract and not go into too much detail outside my knowledge and experience.
With the GPT-2 and GPT-3 base models, you could easily produce "conversations" by writing fitting preludes (e.g. interview-style), but these went off the rails quickly, in often comedic ways.
Part of that surely is also due to model size.
But RLHF seems more important.
I enjoyed the rambling and even that was impressive at the time.
I guess the "anthropic principle" you are referring to works in a similar direction, although in a different way (selection, not training).
The only context in which I've heard details about selection processes post-training so far was this article about OpenAI's model updates from GPT-4o onwards, discussed earlier here:
The parts about A/B testing are pretty interesting.
The focus is on ChatGPT as an enticing consumer product and on maximizing engagement, not so much on the benchmarks and usefulness of the models. It briefly addresses the friction between usefulness and sycophancy, though.
Anyway, it's pretty clever to use the wording "anthropic principle" here, I only knew the metaphysical usage (why do humans exist).
Sounds dystopian to me, I'd want to reconcile it by not allowing "one-party consent" for people to record me.
Not sure if the state laws you're referencing are in reality limited to phone calls, but I strongly dislike unregulated public camera use.
Your vision (no pun intended) is the story of the Black Mirror episode "The Entire History of You", IMO from the show's golden age.
edit: I know that surveillance cameras cross this line already, but here they have to be announced with signs. And even when they aren't, to me state or police surveillance is different from potentially everyone stealthily recording me in private or public spaces.
It's possible the state laws in question (Tennessee) only apply to audio recordings, which would suit my preference. I also don't believe that a rolling buffer that normally discards its contents goes against the idea of notification of recording, or of seeking someone's consent.
I'd be fine with glasses that only record audio this way and that illuminate an LED once the "record" button has been pressed. If audio is being recorded into a buffer at all times but then discarded unless "recording" is actively triggered, then maybe that should not count as "recording" under the law.
As a practical matter, if one is in a situation where such recording is warranted, by the time you press the record button, you've already missed important information that's relevant to the context of the recording. Allowing a 60-second rolling buffer that then gets dumped to storage when "actual" recording starts should be allowed.
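In code terms, that's just a rolling window over volatile memory, roughly like this (a simplified sketch; saveToStorage and the chunk callback are hypothetical):

    // Keep only the last ~60 seconds of chunks in volatile memory.
    const BUFFER_MS = 60 * 1000;
    const buffer = []; // [{ time, chunk }, ...]

    function onAudioChunk(chunk) {
      const now = Date.now();
      buffer.push({ time: now, chunk });
      // Discard anything older than the window
      while (buffer.length && now - buffer[0].time > BUFFER_MS) {
        buffer.shift();
      }
    }

    function onRecordButtonPressed() {
      // Only now does anything reach non-volatile storage,
      // including the ~60 seconds before the button press.
      saveToStorage(buffer.slice());
    }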
1984? It's not the only surveillance state story. Everyone loves when you can dig up something from decades ago that is no longer representative.
Cameras everywhere just keeps everyone honest, right? Nothing to hide, nothing to fear, right? What's acceptable now will always be acceptable in the future, right? My mind never changes, whose does?
The point of this idea is that it would be under control of the individual wearing the glasses. I would most definitely not want it to be syncing to the cloud or some stupid shit like that. The buffer, and the storage, would need to be entirely contained within the glasses (or other device, if it turns out audio is a legally safer way to implement something like this).
As I mentioned in a sibling comment, I'm not against a visual notification of such recording once the "start saving to storage" button has been pressed. At the same time, I realize that the 60 seconds or so leading up to pressing that button is also often vital (otherwise dashcams wouldn't use a rolling buffer). And in such a situation where audio (or video, in applicable jurisdictions) is being recorded only in volatile memory and overwritten when the buffer is exhausted, I don't think a recording notification should be necessary unless the user has actively engaged non-volatile recording. In that sense, it's similar to the difference between streaming and downloading media. Both are technically the same, but the intention of "streaming" is to download the media and decode it without storing it in a non-volatile fashion.
I think you're thinking about this a bit naively, concentrating on the utility without considering the detriments.
Look at social media. WE are the ones who surveil ourselves. Yes, the big social media companies process all that data and use it against us, but we are the ones who give the pictures, videos, and words to them. There's really no good way around this either. I put those same things on my blog and they still get scraped.
So what ends up being the difference? It's not synced to the cloud, but we put it there anyways. Do you really think most people are just going to take the videos and not share them? Do you think most people are just going to run a NAS at home? In an ideal world, yes. But I don't think we're anywhere near that happening. So a good portion of those videos just get put online somewhere and bad actors have access.
Truly volatile recording doesn't really exist. We're on HN and I'd expect most people here to be familiar with how easy it is to download a streamed video. yt-dlp will do that for a lot more than YouTube.
As far as I know, the order of hook calls is important to link each call to the correct piece of state: the important point of the linked list is the call order, not an ordering of built-in hooks depending on their name.
And although it's true that useEffect runs its code after render, the picture places useReducer after useEffect, which would not even make sense under that interpretation.
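To illustrate what I mean by call order (a heavily simplified sketch, not React's actual implementation, which uses a linked list rather than an array):

    // State for the currently rendering component, matched purely by call order
    let hookStates = [];
    let hookIndex = 0;

    function useState(initial) {
      const i = hookIndex++;
      if (!(i in hookStates)) hookStates[i] = initial;
      const setState = (value) => { hookStates[i] = value; /* ...then re-render */ };
      return [hookStates[i], setState];
    }

    // Calling a hook conditionally shifts the indices, so on the next
    // render every later hook reads the wrong slot:
    function Component({ flag }) {
      if (flag) useState('a'); // breaks the index-based matching
      const [b] = useState('b');
      return b;
    }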
Could be that I'm misremembering details, but I find it more interesting to read, for example, a good description when a PR is large, dense, or hard to understand.