I can never decide if it's reasonable that they're off by default (they break more things, and they're not "ads") or if it's an obvious missed opportunity to improve millions of people's lives.
Just went into the uBlock Origin settings... and yep, there are 14 potential "annoyance" filters. Just turned them on and applied them. Let's see how it goes...
Most of those must be new additions. Whenever I set up a new browser I enable every list, and I only had 5 annoyance lists enabled. I'm glad I checked again!
Okay, I'll take a little time to brag. The word count challenge here got my interest at the time, so I whipped up an assembly language version of it and iterated several times, trying to figure out the value of switching different registers. In the end, I took second place:
However, there's an interesting story behind this story. David Stafford, who came in first, posted that he thought he had the fastest solution and bet $100 that nobody could beat it. I posted my code which was significantly faster, and David tweaked it further to eventually win the challenge. Like a true person of honor, he did pay the $100 and I cashed his check.
I remember this, Dave. I was confident I had the fastest possible algorithm and you proved me wrong. It was a humbling experience but it forced me to throw out my assumptions and start over. It taught me to assume there was always a faster or better way just waiting to be discovered. TANSTATFC
Awesome. Any day that you can take Dave and Terje to school is a good day at the office.
I have my own little footnote in that book, somewhere, and know/have hung out with most of those guys. The old saying goes, "If you ever find that you're the smartest person in the room, you need to find another room," but it doesn't say what to do if you're pretty sure you're the dumbest guy in the room. It's good that the book is still in circulation despite its age, as there's a lot of wisdom left in it.
A lot of the time I've found myself the dumbest guy in the room, but I'm getting used to it. As long as it's not a poker room or a trading room... it's a lot of fun to hang out with smart guys.
> If CSS was designed the way it is just to satisfy the constraints of 1996, then maybe that gives us permission 20 years later to do things a little differently.
Yeah, just like we can choose which side of the road to drive on or pick any arbitrary character encoding for an 8-bit byte.
Driving on one side of the road offers no obvious benefits compared to driving on the other side of the road whereas getting rid of HTML and CSS offers obvious benefits and is long overdue.
Everybody talks about replacing CSS, yet nobody comes up with a feasible solution.
Layout is hard. Very hard.
I mean, there's a reason why OOXML and docx ended up so similar to CSS, and why PDF is so broken that it's a candy store for exploits.
Both object-oriented and functional replacements have always led to a lot of redundancy compared to their compiled CSS equivalents. And usually you cannot model layouts as flexibly as with CSS's different flow models.
Everybody who says CSS can be replaced with something simple usually hasn't even thought about print stylesheets, media queries, or why the box model and flow model got so complicated.
The spec authors had very good reasons to make changes to the CSS spec(s).
Layout is hard only from a CSS perspective. We used TFrames and TAligns in Delphi, we used (sane!) boxing in GTK, we used Apple's constraint system. Entire operating systems and business packages were built exclusively on these for decades. And nothing was as hard as making damn CSS work on three devices without an explicit role or two to manage that.
An alternative is king. Keep your CSS if you like it, but please make it an intermediate layer, not the final one, and allow saner primitives on which people can build their UI. People are sick and tired of translating a geometric UI layout into an allegedly-text layout when there isn't a single character of text until the fifteenth level of divs.
Edit: it is not "CSS as a format" that is broken, if that wasn't clear. What's broken is the set of primitives under the "display" property.
Any given layout is pretty easy to make work. The problem is the sheer number of form factors, flexible resizing, and rendering methods (as you mentioned, print vs AMOLED vs monitors) that make designers and engineers cry.
When I talk to designers, they love love love paper for many reasons, but one of them is the total control they have over the medium and the fact that they only have to deal with one fixed form factor at a time.
Maybe instead of having stylesheets for print and media queries, different versions should just be written for each?
I've wondered this about a11y in HTML too. Instead of trying to torment the browser into understanding how a11y should work with a bunch of ARIA properties, what if a simple alternative was offered which was easier to understand for a11y but not for 'normal' users?
ARIA properties are really just the escape hatch. For the most part, people shouldn't be using them except to fill in gaps.
Semantic HTML is the simple alternative. And the reason HTML has worked for accessibility overall is because it forces developers to use the accessible interface.
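A quick illustration (just a sketch; save() is a made-up handler): the native element gives you focus, the correct role, and keyboard activation for free, while the div version has to rebuild all of that with ARIA and script.

```html
<!-- Semantic HTML: focusable, announced as a button, activates with Enter/Space -->
<button type="button" onclick="save()">Save</button>

<!-- Escape-hatch version: every accessibility detail has to be added back by hand -->
<div role="button" tabindex="0"
     onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">
  Save
</div>
```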
Compare that with image/video captions/descriptions, where most devs just don't do it. If you have a programming setup where the accessible and visual parts of your interface are two separate things, then by and large you will usually only get software with a visual interface.
The terminal is another good example of this. It turns out that forcing developers to code an interface that can reduce to pure text has some advantages for accessibility, extensibility, and portability.
Those suggestions would probably be better if everyone building a website had infinite manpower. But in the real world, adoption is going to suffer if you have to make a separate document for web, print, a11y, etc. Then someone will say "Why don't we have a unified markup language that we can use to produce each of these documents for us?" And then XSLT will be invented. And what a mess we've made.
But if people are already putting in extra effort to get these features working in their unified markup language, then it isn't different (from the perspective of effort) to write separate versions. In fact, it might be easier, because at that point you're working in a language custom-designed for the task at hand, not trying to shoehorn a spoken-word UI into a visual UI language.
Don't say this. The quotes don't make it any better - it's outright discrimination to consider that people who need a well-designed website are not normal. Every user of your product is a "normal user".
Remember that people's ability doesn't exist at two extremes - just take eyesight. It is simply a fact that people's eyesight starts to deteriorate, especially as you get older. This is incredibly normal, and just because you've gotten a bit older and can't see like a 12-year-old doesn't mean you deserve a segregated web experience.
GTK does use CSS, but not for layout. Its layout rests on the same good, understandable, no-bullshit primitives (the GtkContainer subtree) as before CSS-like theming was introduced. GTK CSS solved the problem of theme engines (Clearlooks, Redmond, etc.), which had to be written at a low level and were thus harder to create or modify.
What I honestly never understood is why CSS and HTML don't have UI-specific namespaces by default that could offer alternatives to div elements that would not be influenced by user-agent stylesheets.
I think this was the primary idea behind XHTML back then (looking at XForms et al.), but it somehow got lost in a pile of weird hacks to make everything "somehow" run on IE.
Now we have UI and semantics in the same namespace (section, aside, dialog, main, article, footer, header, et al.) and everybody is just more confused. How should I use dialog, for example? No matter how the code turns out, it's always a crappy JS-based solution.
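Here's roughly what you end up with today (a minimal sketch, not from any real codebase): the element carries the semantics and can close itself declaratively, but opening it as a modal still requires a script call.

```html
<dialog id="confirm">
  <p>Delete this item?</p>
  <!-- method="dialog" lets these buttons close the dialog without any JS -->
  <form method="dialog">
    <button value="cancel">Cancel</button>
    <button value="ok">Delete</button>
  </form>
</dialog>

<!-- ...but there is no declarative way to open it as a modal -->
<button onclick="document.getElementById('confirm').showModal()">Delete…</button>
```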
To be honest, I love the idea of web components, but I hate that there's no JS-free deserialization from HTML to DOM.
If you create a solution to advance the semantics of SGML, it's an architecture fail to implement it without the semantic aspects HTML and CSS were designed for.
> Driving on one side of the road offers no obvious benefits compared to driving on the other side
If that were true, then why did Sweden go to the trouble to change in 1967?
The fact that they did so, and that doing so was to improve interoperability with neighboring systems, might give you some clue why getting rid of CSS and HTML requires more than mere 'obvious benefits' to justify it.
You’re being deliberately obtuse. Driving on some side of the road offers no benefit, other than historical concerns arising from one’s neighbors’ decisions, and likewise with CSS. I’m sure you were aware of this, and I’m not sure why you bothered to raise such a pedantic and worthless point.
My experience is that people who tend to disparage web tech don't have much experience building clients in general, so they think building web clients is hard and annoying because it's the web without realizing it's because it's a client.
"My experience is that people who tend to disparage web tech don't have much experience building clients in general"
My mom should be able to build great frontends for Web software, but currently the technology requires her to know much more than she knows. As you say, she "don't have much experience building clients in general." Therefore it's important that we get rid of HTML, CSS, and Javascript.
This is an old debate, but to repeat the highlights:
The benefit would be the productivity gained from specialization. The work could be moved away from computer programmers. Beginners would find getting started as easy as building a HyperCard stack, and specialist UI/UX experts (not computer programmers) could be put in charge of advanced frontends.
The same argument that was made for Web Assembly also applies to the frontend: we now know what we need as a general compilation layer for frontend descriptive languages, things we did not know in 1996 when HTML/CSS/Javascript were coming together.
The crucial thing is to have the kind of serialization formats that software can write, thus opening the door to a version of Dreamweaver that actually works. In other words, something like Adobe InDesign would then be the correct way to create all frontends. I wrote about this in detail here:
HyperCard, Dreamweaver, InDesign, Photoshop, Flash and Illustrator do not use a markup language internally. As with Web Assembly, it's important we have the right primitive that we can compile to. There are configuration languages that can make it easy for beginners to hand-edit a frontend, and in my article I mention the configs that Ruby on Rails, Symfony (PHP), and Django (Python) offer to generate simple CRUD interfaces, and I mention that these configs could be much more powerful if they had the correct primitive to compile against, rather than rendering to HTML/CSS.
Aside from all that, I would raise the more urgent question for our industry, why did it seem like such an urgent task, all through the 1980s, 1990s, and early 00s, to create visual software that would make it easy for beginners to create software, and yet now this is no longer a priority? Is there some reason why we are moving away from the era when "Empower the masses to create software" seemed like an important goal for the industry?
You can't version-control whatever binary format InDesign uses. Markup languages benefit from precision, readability in an editor, and easy integration with other tools.
HTML and CSS have been evolving for over two decades (now maybe faster than ever) and have been battle tested by nearly 2 billion websites.
It seems crazy to me to think that we should throw it all out and do something new. I suspect contributing to the improvement of the existing spec is a much more pragmatic endeavor.
This has a lot of really serious implications. I built a form for a charity that allowed users to buy a subscription but include an additional donation amount. Chrome was sometimes filling that field with the two-digit year. The charity got a lot of complaints and it ruined the trust relationship with the donors who didn't understand what was happening and thought it was intentional.
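For anyone who hits the same problem: the HTML spec does define autofill field names that at least tell the browser what a field is, though browsers are free to second-guess them. A sketch (the field name and the /donate path are made up):

```html
<form method="post" action="/donate">
  <label for="donation">Additional donation</label>
  <!-- "transaction-amount" is a spec-defined autofill hint: it says this field
       is a money amount, not a card expiry year or some other date part -->
  <input id="donation" name="donation" type="number" min="0" step="0.01"
         inputmode="decimal" autocomplete="transaction-amount">
</form>
```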
Chrome has other behaviour that I think violates a sort of trust relationship. One example is that YouTube would ask you, "do you want to install Chrome?" Almost as if your current browser is not "what you need to access YouTube". This is especially a problem for elderly people, who often use the web but don't really understand how things fit together (the way 5-year-olds actually do).
> Almost as if your current browser is not "what you need to access YouTube"
This is not just confusing. It's intentionally misleading and unethical.
And it doesn't just affect old people, or YouTube wouldn't have come up with it. Lots of young people grew up with computers and understand how to do what they want to do, but they never develop a systematic understanding of what they're using.
It gets worse, though. While we tech-folk know that all modern browsers are supposed to have near-parity, Google optimizes its sites for Chrome, leading to additional confusion for both knowledgeable and lay users.
AMP is just a set of conventions and limitations that, when followed, make for a fast site. Anyone can make a fast site if they follow similar rules. Most sites don't do that because either the developers want to use something that's "nicer" to code but a lot bigger, or because the marketing department insists on loading 12 different tracking and analytics programs, when they probably only use one or two.
They could also just have mandated serving AMP with some random HTTP header, and the client would have cached a whitelist of the AMP scripts if it got that header. Nothing in the AMP design requires an AMP server; it's an arbitrary limitation from Google to control the web.
I didn't say, "they could guarantee the time it takes to load for every user on every connection on every browser".
I said "they could simply measure the load time of a site when they index it." Their indexer, running from their servers could be their "reference point" for this content.
It feels like AMP is Google nailing its own coffin to me. It probably felt like a winning move when Bell and AT&T made people buy their own products to use telephone systems, but it led directly to their disruption by the DOJ. Even though some people at Google probably realize that, it won’t matter if they cash out beforehand, if working on a project gets them promoted, and if institutional inertia is in control.
Google is hurtling toward an antitrust case they won’t win, and it will really be all their fault.
Throwaway Google search engineer here. People here really do care about making the web faster and moving metrics, both because that is how the company is set up to reward employees and because they believe it makes the product better.
Google has been penalizing slow sites forever. It stopped moving the needle (I suspect because it isn’t marketable). AMP, on the other hand, really is working: metrics show a faster, smoother experience, and user studies have been positive. That’s why Google is doubling down so much on it. Not because they have goals to control content providers or wall off the web, but because it makes a dramatic difference on the whole.
So if the "only" goal of AMP is to make things faster, why isn't the carousel based purely on "your result must load in < X ms"? If a company can "force" other companies to adopt AMP, it can surely "force" them to improve load times on their own.
Also, if speed is their goal, why does the mandatory AMP 'boilerplate' include a CSS-driven 8-second delay before content is shown, which is removed if the client loads the AMP JS?
Oh right. I know the answer. It's to give the impression that blocking third-party resources (such as AMP JS) via e.g. a content blocker, won't make the site faster. Which as we know, is a load of shit.
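For the curious, the mechanism is roughly this (a simplified sketch from memory, not the verbatim boilerplate): the required CSS hides the body for 8 seconds with an animation, the AMP runtime cancels it as soon as it runs, and the noscript override only applies when scripting is disabled entirely, so blocking just the AMP JS leaves you staring at a blank page for 8 seconds.

```html
<style amp-boilerplate>
  /* the body stays hidden until an 8-second animation finishes... */
  body { animation: -amp-start 8s steps(1, end) 0s 1 normal both; }
  @keyframes -amp-start { from { visibility: hidden } to { visibility: visible } }
</style>
<noscript>
  <!-- ...unless scripting is disabled entirely, which cancels the delay -->
  <style amp-boilerplate>body { animation: none; }</style>
</noscript>
<!-- the AMP runtime removes the hiding style as soon as it loads -->
<script async src="https://cdn.ampproject.org/v0.js"></script>
```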
You might have the best of intentions and donate your entire salary to homeless blind children - that doesn't mean for a second that I believe Google's actual goals with AMP are anything other than exerting more control over the web for their own purposes.
You should stop and do some serious research on why people don't like AMP. You're talking about the Web like it's a Google "product." You're hijacking the Web in a way that will eventually destroy it.
Publishers don't want their content restricted and hosted on Google's servers on Google's domain, but they are getting their arms twisted by reduced rankings if they don't implement it. They also don't fully understand the long term implications of what they are doing.
Sure, but one of those conventions and limitations is that the page is hosted by google. It's not something you can deploy on your own server. It's not as if it's just some list of recommendations that make sites faster - you're signing up to a Google service, and you're at their mercy from then on.
>It's not something you can deploy on your own server.
But you can. Often it doesn't matter because you aren't a globally distributed cache, but if you are Cloudflare[1] (or apparently Microsoft[2]), you can and do.
For example, you can't use an img tag on an AMP page. That's invalid AMP. You have to use an amp-img tag, which is rendered client-side with JS.
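Roughly (a sketch; the image URL and dimensions are placeholders):

```html
<!-- Invalid AMP: the validator rejects a plain img -->
<img src="/photo.jpg" alt="A photo">

<!-- Valid AMP: explicit dimensions so the runtime can reserve space before
     loading anything; the image is rendered client-side by the amp-img component -->
<amp-img src="/photo.jpg" alt="A photo"
         width="640" height="360" layout="responsive"></amp-img>
```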
Another example is with forms. It forces you to include the amp-form JS.
If it were just best practice, I'd be far happier to go along with it. Kind of the point here is that we were already using best practice, because the site was super fast.
This allows AMP to skip preloading images below the fold on viewports of arbitrary size. If you fired somebody for doing something you don't understand without asking for an explanation, that's your problem.
Impossible. For a site loaded from a SERP, the page will be preloaded if it is AMP. If you are creating an AMP page to be accessed directly (like the author), you fundamentally don't understand the problem that AMP solves and are using it wrong. PEBKAC.
It's about control of preloaded content, specifically. If you don't care about your content being easily preloaded from the SERP page, then sure, no need to use AMP. And if, as a user, you don't care about quick access to preloaded content (because e.g. you're on a low-latency connection), feel free to scroll down from the top-stories carousel.
Preloading is a waste of data on mobile. If you're on a metered connection then this costs more. Ranking sites by performance would already ensure a fast UX for users.
Also position on the SERP page is a major indicator of relevancy. Biasing results because they use AMP is unfair and inaccurate.
Yes, things are changing rapidly everywhere on the stack, but it seems that with front-end dev the same two things are left behind with each generation: performance and accessibility.
Each time we get close to making performant and accessible web sites easy for mortals to develop, some major disruption in the force occurs and well-crafted solutions no longer work in the new environment. So people reinvent them, poorly, and it takes another four or five years before things catch up.
I think many designers and developers prioritize their own needs or preferences and fail to advocate for what users really want. No user wants a web site or app to take 10 seconds to load on their phone, yet most do. Users don't care if your app is written in React, Angular, or served from a WordPress site. They just want to be able to use it.
I wasn't able to find any working examples from the homepage, even with the updated link. Can you link directly to a demo page or two that uses the library, rather than a Github repo with code that uses the library?
I don't think they want control over the browser market, just the browser users on Windows. People are less likely to ditch Edge if it is more compatible with the way Chrome and Safari work (including the ability to run Chrome extensions). That means they have more visibility and control over those users.
It's important to express the underlying reasoning for the policy: things with similar size and speed can be together, dissimilar things should be separated. Cars and bikes/scooters are not the same, and streets need to separate them via dedicated lanes. If not, those users are going to choose sidewalks as the lesser of two dangers.