
In 2016 maybe.


On a disc brake bike, calipers are ~100g, rotors ~120g, and then you’ve got ~150g of tubing/fluid. They’re certainly heavier, but the components don’t even add up to 1000g, nevermind when you offset the rim brake component weights.


And the frame reinforcements for caliper mounts, the extra spokes, the disc compatible hubs, the thru axles, the heavier brake levers, and probably some stuff I'm missing.

No need to get emotional over this stuff, I love my hydraulic discs but they are heavier at comparable price points. It's also not particularly important to have a bike weigh a bit more, I was just letting the OC know that he was wrong on that aspect.


Rim brakes need frame reinforcement at their mount points too, albeit not adapters, so that’s 20g. Hubs are 20g heavier each… I wouldn’t really go for a 20 spoke wheel myself but if you did I suppose that would save 20g. 50g for thru axles. 120g on the brifters.

So to add it all up you’re looking at 770g in components/additions, offset by 75g skewers, 310g calipers, and an added 25g or so per rim for the brake track. So discs are naively something like +335g. I obviously agree it’s a heavier system, but I don’t think ballparking the penalty as triple what it actually is is reasonable.


Gitlab does support this with “Merged Result pipelines”[0]. We use them extensively alongside their merge train functionality to sequentialize everything and it’s fantastic.

[0]: https://docs.gitlab.com/ee/ci/pipelines/merged_results_pipel...


I’d be particularly interested in something along these lines in a 1U form factor if anybody is familiar with such a thing. Obviously the actual fan-less-ness is less important in that use case but I have a hard time justifying high wattage hardware for my home server operation, and racking provides a lot of convenience.


Take a look at the Supermicro brochure. Not sure the height is 1U, but the E100 and E302 series look interesting and are fanless. The E300 series I believe is a compact 1U but does have some active cooling. I myself have been eyeing getting one of these but haven’t decided on which one yet.

https://www.supermicro.com/manuals/brochure/brochure_Embedde...


If you get a 1U mini-itx case, you can put pretty much whatever in there. Finding a heatsink that fits will be a challenge, but they're out there. Expect thermal throttling (or set a low power throttle, which is in line with your desire to reduce watts anyway).


Worth qualifying the comparisons here with core counts... these are mostly 8 core/16 thread parts, with a few 6 core/12 thread parts, vs the M1 4+4.


A TDP comparison would be more appropriate: if I have the same multicore performance with 4/6/8/n cores, then what is the core count worth?


A TDP comparison is certainly reasonable, though we don't actually have real power consumption metrics so it's a challenging thing to really evaluate. But more to your question, it depends on whether you're trying to evaluate _this chip_ or the _chip family_. Personally I'm not going to be buying anything with the M1 in it because they aren't machines that fit what I need, but I'm extremely interested in benchmarks of the M1 because of what they tell us about the hypothetical M1X (or whatever it ends up being called) in a body that I would, in fact, purchase.


If anybody else was curious, it appears that the performance win of use of this instruction looks to be about 1-2% in general javascript workloads: https://bugs.webkit.org/show_bug.cgi?id=184023#c24


I would argue a solid 1-2% can get you a promotion in HW companies. Put five of these improvements together and that’s a generational improvement.


1-2% for a single language at the cost of polluting an entire instruction set.


Would you say the same if the language were C or C++?

Yes, this is necessitated by some of JS's big warts, but the sheer amount of JavaScript in existence weighs heavily when considering the trade-offs. You cannot ignore HTML/JS if you're targeting UI applications - it is table stakes.


Yeah, but isn't the whole point of ARM to be a reduced instruction set? How reduced are we, really, if we're dedicating transistors to the quirks of a single language?


RISC is a misleading name, the concepts of its design are not really based around the idea of a "Reduced Instruction Set" as in "small" per se, nor are CISC machines necessarily a large size instruction set.

It is much more about the design of the instructions, generally RISC instructions take a small, fixed amount of time and conceptually are based on a sort of minimum unit of processing, with a weak to very weak memory model (with delay slots, pipeline data hazards, required alignment of data etc) with the compiler/programmer combining them into usable higher level operations.

CISC designs on the other hand happily encode large, arbitrarily complex operations that take unbounded amounts of time, and have very strong memory models (x86 in particular is infamous here: you can pretty much safely access memory, without any alignment, at any time, and even though the result will often be slow, it won't crash)

As an example, the PDP-8 has fewer than 30 instructions, but is still definitely a CISC architecture, some ARM variants have over 1000 instructions but are still definitely RISC.

RISC is about making building processors simpler, not about making instruction sets and programming with them necessarily simpler.


Well, it's not like x86 unaligned access was some cute thing they did just to make it extra complex. It made complete sense in 1976 when designing an ISA that might support implementations with an 8 bit memory bus and no cache. Why eat more bus cycles than you need?

Fast forward a decade or so and all processors had cache and multi-byte memory bus so the unaligned access and compact instruction streams are no longer necessary.

But processors these days are complex multi core beasts with IEEE fpu, simd units, mmu, memory controller, cache controller, pci-e northbridge, all kinds of voltage/thermal interplay, and even iGPU. ISA is over emphasized.


When the ARM instruction in question does something the software is perfectly capable of doing itself, just to score a 1-2% performance improvement, it is definitely CISC based on the definitions you listed above.


This JavaScript-related instruction is complex only in the sense of a bitwise operation on a single value; however you look at it, it is still a simple operation. CISC-ness generally relates more to accessing memory or interacting with the hardware in complex ways.

From the definition given:

CISC designs on the other hand happily encode large, arbitrarily complex operations that take unbounded amounts of time, and have very strong memory models


The operation it accelerates is very simple, it's just not really required much outside of javascript. It is already identical to an existing instruction, just with slightly different overflow behavior that javascript relies on.
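
For the curious, the overflow behavior in question is JavaScript's ToInt32 (what `x | 0` does): out-of-range doubles wrap modulo 2^32 instead of saturating the way a plain FCVTZS conversion does. A minimal sketch of the observable JS semantics (the instruction-level details are in the linked WebKit bug and the ARM docs for FJCVTZS):

    // JavaScript's ToInt32, as used by `x | 0` and Int32Array stores:
    // out-of-range values wrap modulo 2^32 rather than clamping.
    const big = 2 ** 32 + 5;            // 4294967301, does not fit in 32 bits
    console.log(big | 0);               // 5   (wrapped, not saturated)
    console.log((-(2 ** 31) - 1) | 0);  // 2147483647 (wraps around the other way)
    console.log(NaN | 0);               // 0   (NaN/Infinity map to 0)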


Considering the amount of phones and tablets in the world, ARM's primary purpose right now, by a large margin, is interpreting web pages.

Consumers and even manufacturers are both focused on performance as a primary metric, not the size of an instruction set.


I'd argue at this point ARM serves more value as an instruction set which isn't encumbered by the mass of x86 patents and historical legal baggage, thus meaning it's something that can reasonably be implemented by more than just two companies on the planet.

It being RISC is secondary to that.


> Yeah, but isn't the whole point of ARM to be a reduced instruction set?

It was the point at, say, the original ARMv1.


Yeah, the problem is not that it's javascript per se, it's that it's a quirk. FCVTZS would have been just fine.


I'd argue that the whole point of ARM (as in ARM holdings) is to make money for their shareholders! :-)


One of the most popular and fastest growing languages


What is the cost of this pollution?

I am genuinely curious, I don't know much about instruction set design.


This uses existing rounding modes with pre-set flags, so it costs 1 entry in the LUT and a small number of muxes, one per flag, assuming the worst case.


I guess it's however many bytes the instruction occupies in L1 cache.


The pollution would be in the chip complexity.


It's not a random esoteric programming language, it's JavaScript.


Is that significant enough to justify the instruction?


A recent AMD microarch was 10-20% faster than the previous one, despite running on the same physical process. No single component was responsible; there were changes to several components, each of which increased speed by only 1-4%.


It is if it also increases battery life by a similar amount ;-)


It increases the die area, power consumption, and instruction set size by a minuscule amount, so probably.


A great question for the folks who actually do CPU design! As somebody who writes JS rather frequently I’m not complaining.


It depends on the cost (in gates and timing); looks like this might be just a few gates.


Nothing justifies the prolonging of Javascript torture.


Nothing justifies the prolonging of C torture either, except for C's widespread use. Why do you think modern CPUs still expose a mostly C-abstract-machine-like interface instead of their actual out-of-order, pipelined, heterogeneous-memory-hierarch-ied internal workings?


>> Why do you think modern CPUs still expose mostly C-abstract-machine-like interface instead of their actual out-of-order, pipelined, heterogeneous-memory-hierarch-ied internal workings?

Because exposing that would be a huge burden on the compiler writers. Intel tried to move in that direction with Itanium. It's bad enough with every new CPU having a few new instructions and different timings; the compiler guys would revolt if they had to care how many virtual registers existed and all the other stuff down there.

But why C? If you want languages to interface with each other it always comes down to C as the lowest common denominator. It's even hard to call C++ libraries from a lot of things. Until a new standard down at that level comes into widespread use, hardware will be designed to run C code efficiently.


> Until a new standard down at that level comes into widespread use hardware will be designed to run C code efficiently.

Exactly this hinders any substantial progress in computer architecture for at least 40 years now.

Any hardware today needs to simulate a PDP-7, more or less, as otherwise the hardware is doomed to be considered "slow" should it not match the C abstract machine (which is mostly a PDP-7) closely enough. As there is no alternative hardware available, nobody invests in alternative software runtime models, which in turn makes investing in alternative hardware models unattractive, as no current software could profit from it. Here we have gone full circle.

It's a trap. Especially given that improvements in sequential computing speed are already difficult to achieve and it's known that this will become even more and more difficult in the future, but the computing model of C is inherently sequential and it's quite problematic to make proper use of increasingly more parallel machines.

What we would need to overcome this would be a computer that is built again like the last time many years ago: as a unit of hardware and software developed hand in hand from the ground up. Maybe this way we could finally overcome the "eternal PDP-7" and move on to some more modern computer architectures (embracing parallelism in the model from the ground up, for example).


I don't have words to describe how exciting that would be. The only way I could see it happen is if the existing legacy architecture would be one (or many) of the parallel processes so that efforts to make it run legacy software don't consume the entire project. I really do think it possible to make a "sane" machine language that doesn't need layers of abstraction or compilers and is easy to learn.


My favorite article on this: "C Is Not a Low-level Language: Your computer is not a fast PDP-11"

Hacker news post: https://news.ycombinator.com/item?id=16967675

and again: https://news.ycombinator.com/item?id=21888096


GPUs are pretty different, despite the exposed interface being about the same C


It is not the same C, it is a dialect full of extensions, and it only applies to OpenCL, which was one of the reasons why it failed and forced Khronos to come up with SPIR, playing catch-up with the polyglot PTX environment of CUDA.

OpenCL 3.0 is basically OpenCL 1.2, which is OpenCL before SPIR was introduced.


>> Especially given that improvements in sequential computing speed are already difficult to achieve and it's known that this will become even more and more difficult in the future...

That's perfect. As performance stops increasing just by shrinking transistors, other options will have a chance to prove themselves.

>> but the computing model of C is inherently sequential and it's quite problematic to make proper use of increasingly more parallel machines.

IMHO Rust will help with that. The code analysis and ownership guarantees should allow the compiler to decide when things can be run in parallel. Rust also forces you to write code that makes that easier. It's not a magic bullet but I think it will raise the bar on what we can expect.


> If you want languages to interface with each other it always comes down C as a lowest common denominator

Nope, "always" only applies to OS written in C and usually following POSIX interfaces as they OS ABI.

C isn't the lowest common denominator on Android (JNI is), on Web or ChromeOS (Assembly / JS are), on IBM and Unisys mainframes (language environments are), on Fuchsia (FIDL is), just as a couple of examples.


CPUs expose a "mostly-C-abstract-machine-like" interface because this allows chip designers to change the internal workings of the processor to improve performance while maintaining compatibility with all of the existing software.

It has nothing to do with C, specifically, but with the fact that vast amounts of important software tend to be distributed in binary form. In a hypothetical world where everybody is using Gentoo, the tradeoffs would be different and CPUs would most likely expose many more micro-architectural details.


> Why do you think modern CPUs still expose mostly C-abstract-machine-like interface

I don’t think that, because they don’t. Your premise is hogwash.

Modern RISC-derived CPUs for the most part expose a load-store architecture driven by the historical evolution of that microarch style, and if they are SMP, a memory model that C and C++ have only recently adapted to with standards. Intel's ISA most assuredly was not influenced by C. SIMD isn't reminiscent of anything in standard C either.

Also you might want to look into VLIW and the history of Itanium for an answer to your other question.


> Also you might want to look into VLIW and the history of Itanium for an answer to your other question.

There's only one question. What do you mean?

But Itanium wasn't out of order. How does that even come close to answering a question about exposed out-of-order machinery?


There is one CPU that exposes its out of order inner workings, the VIA C3 ("ALTINST"). The unintended effects are so bad that people that accidentally discovered it referred to it as a backdoor: https://en.wikipedia.org/wiki/Alternate_Instruction_Set


> [..] referred to it as a backdoor

Because it was implemented in a flawed way.

> "In 2018 Christopher Domas discovered that some Samuel 2 processors came with the Alternate Instruction Set enabled by default and that by executing AIS instructions from user space, it was possible to gain privilege escalation from Ring 3 to Ring 0.["


It's not like other languages are well-adapted to that either. That's a hard target to code for.


Why couldn’t Haskell compilers make good use of that?


The "sufficiently smart compiler" [1] has been tried often enough, with poor enough results, that it's not something anyone counts on anymore.

In this case, the most relevant example is probably the failure of the Itanium. Searching for that can be enlightening too, but heres a good start: https://stackoverflow.com/questions/1011760/what-are-the-tec... (For context, the essential Itanium idea was to move complexity out of the chip and into the compiler.)

Also, don't overestimate Haskell's performance. As much fun as I've had with it, I've always been a bit disappointed with its performance. Though for good reasons, it too was designed in the hopes that a Sufficiently Smart Compiler would be able to turn it into something blazingly fast, but it hasn't succeeded any more than anything else. Writing high-performance Haskell is a lot like writing high performance Javascript for a particular JIT... it can be done, but you have to know huge amounts of stuff about how the compiler/JIT will optimize things and have to write in a very particular subset of the language that is much less powerful and convenient than the full language, with little to no compiler assistance, and with even slight mistakes able to trash the performance hardcore as some small little thing recursively destroys all the optimizations. It's such a project it's essentially writing in a different language that just happens to integrate nicely with the host.

[1]: https://duckduckgo.com/sufficiently smart compiler


This is reminiscent of my experience designing SQL queries to be run on large MySQL databases.

I had to write my queries in such a way that I was basically specifying the execution plan, even though in theory SQL is practically pure set theory and I shouldn’t have to care about that.


You can stuff Haskell-derived Bluespec into an FPGA.


Well, it turned out that for running scalar code with branches and stack frames, exposing too much to the compiler was not helpful, especially as transistor budgets increased. So as long as we program with the usual functions and conditionals, this is what we have.

I can swap out a cpu for one with better IPC and hardware scheduling in 10 minutes but re-installing binaries, runtime libraries, drivers, firmware to get newly optimized code -- no way. GPU drivers do this a bit and it's no fun.


For a long time I thought the JS hate was just a friendly pop jab. From working with backend folks I’ve realized it comes from an at least somewhat patronizing view that JS should feel more like backend languages; except its power, and real dev audience, are in browsers, where it was shaped and tortured by the browser wars, not to mention it was created in almost as many days as Genesis says the world was built.


Huh? Python and powershell are all over the backend, and it’s hard to argue against JavaScript while you’re using those. At least in my opinion.

I think it has more to do with the amount of people who are bad at JavaScript but are still forced to sometimes work with it because it’s the most unavoidable programming language. But who knows, people tend to complain about everything.


So, there are excuses for Javascript's terribleness, but that doesn't stop it from being objectively terrible.


Sad to see the word "objective" become the next bullshit intensifier because people can't separate their own subjective opinions from the realm of verifiable fact.

"Terribleness" isn't an objective property.


You can compare Javascript to other languages and note that many of its notorious problems have no rational justification, and are unnecessary. That's what I call objectively terrible.


Yeah, that's not objective. Sorry.


What do you think "objective" means?

The comparative approach I mentioned can be used to eliminate personal feelings about such issues - not in all cases (types might be an example), but certainly in some.

Many users of Javascript, including myself, recognize that it has many weaknesses. There's even a book that acknowledges this in its title: "Javascript: The Good Parts."

Denying this seems to be denying objective reality.

You may be confusing "objective" with "universal," thinking that I'm claiming some unsituated universal truth. But that's not the case. Any statement is only ever true within some context - the language that defines it, the semantics of the statement, the premises that it assumes.

In this case, there is a shared context that crosses programming languages, that allows us in at least some cases to draw objective conclusions about programming language features. "The bad parts" implied by Crockford's title includes many such features. We can examine them and conclude that while they might have some historical rationale, they are not good features for a programming language to have.

In many cases this conclusion is possible because there's simply no good justification - the title of this post is an example. Having all numbers be floating point has many negative consequences and no significant positive ones - the only reason for it is historical. Such features end up having consequences, such as on the design of hardware like ARM chips. That is an objectively terrible outcome.

You can of course quibble with such a statement, based on a rigid application of a simplistic set of definitions. But you'd do better to try to understand the truth conveyed by such a statement.


What do YOU think "objective" means?

None of the properties you're talking about are objective. Objective doesn't mean Crockford wrote a book about it or "lots of people agree with me".

Objective means factual. You're putting the word "objective" in front of your own and others' opinions to arrogate the credibility of objectivity onto statements that are not based in observation of material reality.

More people holding an opinion doesn't make it a fact. "Terribleness" or "justifiableness" are not matters of fact, they are both matters of opinion.

Do you understand? You keep repeating your opinion and then using the word "objective" to claim that your opinion is fact. You think I am disagreeing with your opinion; rather, I am disagreeing with you stating your opinion as a fact. No matter how many people agree with you, it will never be a fact; it will always be an opinion, because "terribleness" is not a matter of fact! "Terribleness" is the result of a value judgement.

There are no such things as "objective conclusions", objectivity is not a manner of reasoning. You're looking for something more like "observations", "measurements", hard facts.. none of which apply to "terribleness" because it can't be materially observed--only judged.

"Objectively" isn't an intensifier unless used in the form "Objectively [something that isn't objective]." Why would actual facts need to be intensified? What kind of insane argument would anyone have where facts and opinions are compared directly?

I know it sounds stronger to say your opinions are facts but it is okay to have opinions. Just remember that the difference between opinions and facts is a difference of kind rather than a difference of degree. You can argue an opinion, you can attempt to persuade me to your way of thinking if you show your reasoning.


You can just look up some dictionary definitions, like "not influenced by personal feelings or opinions in considering and representing facts." I've explained how that applies in this case - we can use comparative analysis to draw factual conclusions.

Focusing on the specific word "terrible" is a bit silly. Sure, it's hyperbolic, but I used it as a way to capture the idea that Javascript has many features that are comparatively worse at achieving their goals than equivalent features in many other languages. This is something that can be analyzed and measured, producing facts.

Crockford's book title is simply an example of how even a strong advocate of Javascript recognizes its weaknesses. You may not understand how it's possible for it to objectively have weaknesses, but that doesn't mean it doesn't. In this case an objectively bad feature would be one that has negative consequences for programmers, and can be replaced by features that achieve the same goals more effectively, without those negative consequences.

If there's anyone who'll argue in favor of such features on technical rather than historical grounds, then it would certainly undermine the objectivity claim. But the point is that there are (mis)features in Javascript which no-one defends on technical grounds. That is a reflection of underlying objective facts.

I'm also not making some sort of ad populum argument. As I pointed out, any claim of objective fact has to be made in some context that provides it with semantics. In some languages, the expression "1"+"1" is a type error, in others it produces "11". Both of those outcomes are objective facts in some context. What your objection really amounts to is saying that there's no semantic context in which my claim could be true. That's clearly not the case.

Perhaps a different example would help: people don't write programs any more by toggling binary codes into machines via switches. That's because we've come up with approaches that are objectively more effective. We can factually measure the improvements in question. The features I was referring to fall into the same category.

I'm going to repeat the closing from my last comment, because you're still doing the same thing:

You can of course quibble with such claims, based on a rigid application of a simplistic set of definitions. But you'd do better to try to understand the truth conveyed by the claims, and engage with that.


Again, you think I am disagreeing with your opinion by pointing out that it is an opinion and not a matter of fact. You're only continuing to restate your opinion and insist it is fact.

Claims of objective facts? Objective facts in some context? Badness isn't a matter of fact--it's a matter of opinion, I say again, you're making a value judgement and asserting that as a fact. You may as well tell me an onion is a planet and I can live on it if I believe hard enough.

You think I am disparaging your argument by saying it is mere opinion, as though it isn't good enough to be a real, true fact. I am not; I am merely pointing out that your statement is actually an opinion which you are incorrectly claiming to be a fact.

> I'm also not making some sort of ad populum argument. As I pointed out, any claim of objective fact has to be made in some context that provides it with semantics. In some languages, the expression "1"+"1" is a type error, in others it produces "11". Both of those outcomes are objective facts in some context. What your objection really amounts to is saying that there's no semantic context in which my claim could be true. That's clearly not the case.

"Objective fact" isn't claimed. You seem to be missing that an opinion even if backed up by evidence still isn't itself a fact and thus isn't objective. This isn't a matter of context. The difference between opinion and fact is not like the difference between true and false.

I don't know how you're lost on this. "JS is bad" is an opinion. "JS is objectively bad" is still an opinion but claims to be a fact, because "badness" isn't an objective property. Whether or not something is bad is not a matter of fact, it's a matter of opinion.

The "+" operator performs both string concatenation and addition in JS. <-- That is a fact, anyone can fire up a JS interpreter and confirm this for themselves.

The "+" operator being overloaded to perform string concatenation and addition with implicit type coercion is bad. <-- That's an opinion. While anyone can observe the behavior, they have to make a judgement on whether or not it is desirable and desirability is not a matter of fact.


Normally JS devs don't really encounter these notorious problems, for many years now.


You sound like a complete novice, a fanboy, or someone who knows only JS. There are many issues; it is possible not to touch them or to work around them, and TS and Flow help. JS solved just a few — 'use strict', strict comparison, arrow functions, string literals. Core problems are still there — implicit types, prototype declarations, number is float, toString/inspect division, typeof null. Every JavaScript programmer has to walk that path of embarrassment.


Lol.

I've been programming for a decade in many languages including assembly, C#, Rust, Lisp, Prolog, F# and more, focusing on JS in the last 5 years.

Virtually no one writes plain JavaScript, most people including me write TypeScript, but Babel with extensions is normally used. Your reply exhibits your ignorance of the JS world.


I have written JavaScript occasionally since 2007, experimented a lot over the last 5 years, and read through the ES5 specification several times. I've worked as a C++, PHP, Python, and Ruby developer. Experimented with a few languages.

"JS" instead of "TypeScript" brings confusion. TS solves some issues and I've mentioned it, still

    typeof null
    //"object"
Template literal interpolation helps, but if a string (not a literal string) slips by, it is a mess

    1 - "2"
    //-1
    1 + "2"
    //"12"
Check out another comment [1]: Object, Function, etc. are defined as constructors. It is not solved by "class"; it is still a function with a bit of sugar:

    class Foo {}
    Foo instanceof Function
    //true
Globals with a few exceptions defined as constructors, DOM elements defined as constructors, inheritance defined as constructors

    class Bar extends Foo {}
You can internalize how it works and there are some good explanations [2], but the design is error prone and terrible.

[1] https://news.ycombinator.com/item?id=24815922

[2] https://yehudakatz.com/2011/08/12/understanding-prototypes-i...


C++ has WAY more spec footguns than JS (and that's without counting all the C undefined behaviors, which alone outweigh all the warts of JS combined). PHP also beats out JS for warts (and outright bad implementation like left-to-right association of ternaries). Ruby has more than its fair share of weirdness too (try explaining eigenclass interactions to a new ruby dev). Even python has weirdness like loops having an `else` clause that is actually closer to a `finally` clause.

`typeof null === "object"` is a mistake (like with most of the big ones, blame MS for refusing to ratify any spec that actually fixed them).

If you're having issues accidentally replacing `+` with `-` then you have bigger issues (eg, not unit testing). I'd also note that almost all the other languages you list allow you to overload operators which means they could also silently fail as well. In any case, garbage in, garbage out.

Foo being an instance of function is MUCH more honest than it being an instance of a class because the constructor in all languages is actually a function. This is even more true because you are looking at the primitive rather than the function object which contains the primitive.


I have not claimed JS is the weirdest. But I have not claimed "Normally C++/PHP/Ruby/Python devs don't really encounter these notorious problems, for many years now" either.

The eigenclass (singleton_class) is explained in another thread. I have not encountered Python's for/else [1] yet.

Right, typeof null exposed by Microsoft IE 2 (?). The web is many times bigger now, yet even such a small mistake is not fixed.

I have an issue with + being a concatenator; I prefer string interpolation and separate operators. Implicit type conversion often does not make sense and spoils a lot:

    [] * 2
    //0
    foo = {}
    bar = {}
    foo[bar] = 1  // just throw please
    Object.keys(foo)
    //["[object Object]"]
> they could also silently fail as well.

But they don't. If only these rules were defined as a library, I am sure it would have been ditched long ago. Actually, this may be an argument in favor of operator overloading in JavaScript, as a way to fix it.

> Foo being an instance of function is MUCH more honest

    class Foo
    end

    Foo.send(:initialize)
    TypeError (already initialized class)
    # wrong one
 
    Foo.instance_method(:initialize).call
    NoMethodError (undefined method `call' for #<UnboundMethod: Foo(BasicObject)#initialize()>)
    # does not allow unbound

    Foo.new
new constructs an object and calls initialize. Same in JavaScript

    function Foo () {
      console.log(this)
    }
    new Foo
    // Foo {}
    Foo()
    // Window
It kind of makes sense — new creates an object of constructor.prototype and calls constructor. I can't see how it is MUCH more honest than if new created an object of prototype and called prototype.constructor. By that logic Object.create is not honest

    Object.create(Object.prototype) // expects [[Prototype]] not constructor
    Object.create(null)
And even if it was

    foo = {}
    bar = Object.create(foo)
    bar.__proto__ === foo 
    //true
    bar.__proto__.__proto__ === Object.prototype
    //true
    bar.__proto__.__proto__.__proto__ === null 
    //true

    class Foo {}
    class Bar extends Foo {}
    bar = new Bar 
    bar.__proto__ === Bar.prototype 
    bar.__proto__.__proto__ === Foo.prototype
    bar.__proto__.__proto__.__proto__ === Object.prototype
    bar.__proto__.__proto__.__proto__.__proto__  === null
I don't need the constructor except in new; otherwise I use it only to access the prototype. The absence of languages adopting this approach confirms its usability issues.

> This is even more true because you are looking at the primitive rather than the function object which contains the primitive.

Could you please expand this part? "Primitive" has specific meaning in JavaScript.

[1] https://book.pythontips.com/en/latest/for_-_else.html


`__proto__` doesn't necessarily equal `.prototype`.

    var foo = Object.create(null)
    //now foo.prototype and foo.__proto__ are both undefined
    foo.prototype = {abc:123}
    //foo.__proto__ is still undefined. Need to use Object.setPrototypeOf()
In older JS code, I've seen people trying to abuse prototypes. One result of this kind of thing is often retaining references to those hidden `__proto__` objects, leading to memory leaks.

Also, `__proto__` is deprecated. If you're writing JS, you should be using `.getPrototypeOf()` instead.

> Could you please expand this part? "Primitive" has specific meaning in JavaScript.

    var fn = function () {}
    fn.bar = "abc"

    Object.keys(fn) //=> ["bar"]

    //likewise
    (1).__proto__ === Number.prototype //=> true
JS is torn on the idea of whether something is primitive or an object. You see this (for example) in Typescript with the primitive number being different from the Number type which represents a number object. To get at the primitive, you must actually call `.valueOf()` which returns the primitive in question. Meanwhile, you can attach your own properties to the function object -- a fact exploited by many, many libraries including modern ones like React. You can also add your own `.valueOf()` to allow your code to better interact with JS operators, but I believe that to pretty much always be a bad practice.
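
For illustration, a tiny (hypothetical) example of that `.valueOf()` hook; the `Money` class here is made up:

    // Hypothetical wrapper that opts into numeric coercion via valueOf().
    class Money {
      constructor(cents) { this.cents = cents; }
      valueOf() { return this.cents; }                             // used for arithmetic coercion
      toString() { return `$${(this.cents / 100).toFixed(2)}`; }   // used for string coercion
    }

    const price = new Money(250);
    console.log(price + 50);   // 300      (ToPrimitive tries valueOf first here)
    console.log(`${price}`);   // "$2.50"  (template literals use the string hint)

As the parent notes, this "better interaction" with operators only makes the coercion intentional; it is still easy to surprise a reader, which is why it is usually considered bad practice.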


Yes, I know about the null [[Prototype]] and who the owner of prototype and __proto__ is. I like how it is not a real property

    var foo = {
      __proto__: null
    }
    foo.__proto__
    undefined
As you can see, I do not store it and do not modify it; it is an assertion, kind of like instanceof.

> Also, `__proto__` is deprecated

Object.getPrototypeOf would be too verbose in this example. I could have defined

    Object.defineProperty(Object.prototype, "proto", {
      get() { return Object.getPrototypeOf(this) }
    })
but why bother? We both know that I meant [[Prototype]].

> Primitive

> fn.bar = "abc"

is syntactic sugar for

fn["bar"] = "abc"

I do not follow.

> (1).__proto__ === Number.prototype

number is primitive

    typeof 1
    "number"
    1 instanceof Number
    false
a number is wrapped in a Number when we access a property

    Number.prototype.foo = function () { return this }
    1..foo()
    //Number {1}
    typeof 1..foo()
    //"object"
    1..foo() instanceof Number 
    //true
Function is not primitive

    typeof Math.max
    "function"
    Math.max instanceof Function
    true
    Math.max instanceof Object 
    true
it is an object, we can attach properties to object.

As I remember, valueOf is complicated by hints, exposed in JavaScript [1]. I've played with removal:

    delete Array.prototype.toString
    delete Object.prototype.toString
    Uncaught TypeError: Cannot convert object to primitive value
Unfortunately it converts to a string and then converts the string to a number.

    delete Array.prototype.valueOf 
    delete Object.prototype.valueOf 
    +[]
    //0
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


The same thing. I work in the JS world, and interact with it nonstop.


Oh, JS has no issues, solved by ClojureScript, PureScript, Elm.


These languages are not used by virtually all JS programmers. Babel and TS are.

The other issues you mention are solved by using ESLint which flags code like this.

I do not encounter these issues in my life as a professional JS programmer, neither do my colleagues; and I'm not on my first project, don't worry. For all practical purposes they are non-existent.

anyways, we are all happy for wasm, it's not that we love JS so much.


I actually like JavaScript, with a few changes it can become a good language, safe for a novice programmer.


Agreed. Check out AssemblyScript, you might like that. It's early, though.


> Virtually no one writes plain JavaScript

Which is because plain Javascript is objectively terrible, as I pointed out.


That's like pointing out that C++ after template transformations are applied is terrible. Yeah, so what? Nobody writes it like that.


"C++ is C because nobody writes C", you can't be serious.


That's not what I said.


And that's your opinion. I find javascript quite enjoyable and easy to use, without producing errors. YMMV.


JavaScript has good parts, I write it a lot. But it is ignorant to close your eyes to its warts

    1 + '2'
    1 - '2'
    Number.MAX_SAFE_INTEGER + 2
and the entire WAT series stems from the "don't raise" ethos. JavaScript exposes the constructor instead of the prototype, which messed up a lot; in Ruby terms:

    Object.alias_method :__proto__, :class
    Object = Object.instance_method(:initialize)
    Class = Class.instance_method(:initialize)
    Class.__proto__.alias_method :prototype, :owner
    new = ->(constructor) { constructor.owner.new }
    
    Person = Class.prototype.new do
      def initialize
      end
    end.instance_method(:initialize)
    
    def Person.foo
      'foo'
    end
    puts Person.foo

    john = new.call Person
    
    def john.bar
      'bar'
    end
    puts john.bar
    
    def (Person.prototype).baz
      'baz'
    end
    puts john.__proto__.baz
Does anyone want to adopt this feature in their language?


Operator overloading can lead to ambiguities in dynamic languages. Ruby, python, and any number of other languages have it much worse because operators can be overloaded any way you want, while JS overloads are (currently at least) set in stone by the language.

If you could only choose one number type, would it be floats or ints? Crockford would say decimal, but the rest of us using commodity hardware would choose floats every time. It's not the language implementer's fault someone doesn't understand IEEE 754. This max safe integer issue exists in ALL languages that use IEEE 754 doubles. In any case, BigInt is already in browsers and will be added to the spec shortly.


Perl and Lua have separate arithmetic and concatenation operators

    1 + "2"
    #3
    1 - "2"
    #-1
    1 . "2"  | 1 ..  "2"
    #"12"
    1 . "-2" | 1 .. "-2"
    #"1-2"
Python and Ruby throw exception, use explicit type conversion and string interpolation

    1 + '2'
    Traceback (most recent call last):
    TypeError (String can't be coerced into Integer)
    1 + '2'.to_i
    #3
    "1#{'2'}"
    #"12"
Hardly any language is perfect. I have not encountered much operator overloading in Ruby (Nokogiri?), but I believe C++ got it bad.

One number type in Lua:

    > 0 | 0xffffffff
    4294967295
    > 0 | 0xffffffffffffffff
    -1
    > 0 | 0x7fffffffffffffff
    9223372036854775807
limited but better than JavaScript:

    0 | 0xffffffff
    //-1
BigInt is an improvement

    0n | 0xffffffffn
    //4294967295n
    0xffffffffffffffffn
    18446744073709551615n
it is strict

    1n - "2"
    1 + 1n
    Uncaught TypeError: Cannot mix BigInt and other types, use explicit conversions
works nice with string interpolation:

    `${1n}`
    "1"
Numbers are a sane example; one can argue it was for good. How about `{} + []`? I believe I could disable this part in a JavaScript engine and no one would notice. And the misleading `object[key]` where it calls toString: sure, I have not tried that in a decade, but it is stupid. UTF-16:

    ""[1] # there were emoji
    //"�"
You've said nothing about constructor-oriented programming. A unique feature; I have not heard of any other language adopting it yet. The post you've replied to contains a sketch for Ruby. Actually I got it partly wrong — every JavaScript function is a closure (a Ruby method is not a closure) and the Prototype method was a class method (not an instance method). Fixed, but ugly:

    def function(&block)
      Class.prototype.new.tap do |c|
        c.define_method(:initialize, block)
      end.instance_method(:initialize)
    end

    def function_(object, name, &block)
      object.class_eval do
        define_method(name, &block)
      end
    end

    Person = function { |name|
      @name = name
    }
    function_(Person.prototype, :name_) {
      @name
    }

    john = new.call Person, 'john'
    puts john.__proto__ == Person.prototype
    puts john.name_

    def function__(object, name, &block)
      object.singleton_class.class_eval do
        define_method(name, &block)
      end
    end

    function__(john, :name__) {
      @name
    }
    puts john.name__
By the way, you can say "Yes, I know JavaScript has some problems". It is not a secret, everyone knows.


I completely agree that separate operators are a MUST for dynamic languages.

Lua allows operator overloading with metatables as do ruby and Python with classes

http://lua-users.org/wiki/MetatableEvents

https://docs.python.org/3/reference/datamodel.html#emulating...

https://www.ruby-lang.org/en/documentation/faq/7/

> One number type in Lua:

Not quite true. Lua had only 64-bit floats like JS until version 5.3 and the blazing fast LuaJIT still only has floats. Well, to be honest, it has hidden 32-bit integers for sake of bitwise operations just like JS (well, JS uses 31-bits with a tag bit which is probably a lot faster).

> How about `{} + []`? I believe I can disable this part in JavaScript engine and no one would notice.

That's very simple. {} at the beginning of a line is an empty block rather than an object (yay C). "Disabling" that would break the entire language.

> UTF-16

UCS-2 actually. Back in those days, Unicode was barely a standard and that in name only. Java did/does use UCS-2 and JS for marketing reasons was demanded to look like Java. I don't want to go into this topic, but python, PHP, ruby, C/C++, Java, C#, and so on all have a long history not at all compatible with UTF-8.

> You've said nothing about constructor oriented programming. Unique feature, I have not heard any other language adopted it yet.

I'll give you that JS prototypal inheritance is rather complex due to them trying to pretend it's Java classes. Once again though, the deep parts of both Python and Ruby classes are probably more difficult to explain. Lua's metatables are very easy to understand on the surface, but because there's no standard inheritance baked in, every project has its own slightly different implementation with its own footguns.

Closures are almost always preferred over classes in modern JS. Likewise, composition is preferred over inheritance, and the use of prototype chains, while not necessarily a code smell, does bear careful consideration.

If someone insists on using deep inheritance techniques, they certainly shouldn't be using class syntax as it adds yet another set of abstractions on top. Object.create() and inheriting from `null` solves a ton of issues.

> By the way, you can say "Yes, I know JavaScript has some problems". It is not a secret, everyone knows.

I'd say if you take the top 20 languages on the tiobe index, it sits in the middle of the pack with regard to warts and weirdness. Maybe people are just attracted to weird languages.


I have no grudge against operator overloading when done consciously — complex numbers, matrix multiplication. I've tried to implement JavaScript arithmetic in Ruby, failed so far.

Sorry, I should have been clearer: in Lua, "one number type" means float, my bad. I meant that a Lua 5.3 integer still works like a JavaScript Number. In the end we have to know about ToInteger, ToInt32, ToUint32, Number.MAX_SAFE_INTEGER [1]. It is not one number type but an encoding of several number types, a union.

Prior to 5.3 and in LuaJIT it has different limitations

    > print(string.format("%18.0f",9007199254740991 + 1))
      9007199254740992
    > print(string.format("%18.0f",9007199254740991 + 2))
      9007199254740992
they have extended the original "one number type" without changing the interface. In any case, neither version converts to a Bignum like Ruby does.

    0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
    => 115792089237316195423570985008687907853269984665640564039457584007913129639935
Ruby unified Fixnum and Bignum as Integer in 2.4. I can't see the benefits of Number/BigInt against Float/Integer. I'd rather have a 3.6f literal.

Yes, I know how WAT works. I meant ToPrimitive [2]

    [] * {}
    //NaN
I've disabled this code in Firefox; I have not done extensive testing, but it looks like no one depends on it. We infer types with TypeScript and Flow, but the VM already knows them; it can report such cases without external tools. I think of it as an extension of Firefox Developer Edition — lint in the browser.

Object.prototype.toString is not as useful as in Ruby or Python

    class Foo {}
    `${new Foo}`
    //"[object Object]"

    class Foo end
    Foo.new
    #=> #<Foo:0x0000560b7a10df20>

    >>> class Foo:
    ...     pass
    >>> Foo()
    <__main__.Foo object at 0x7fb53aecf1f0>

And there is no separate inspect/__repr__

    Date.today
    #=> #<Date: 2020-10-20 ((2459143j,0s,0n),+0s,2299161j)>
    Date.today.to_s
    #=> "2020-10-20"
> UCS-2 actually

Oh, a DOM UTF-16 string broken by a UCS-2 JavaScript function. I understand it is not easy to fix; Ruby fixed it in 1.9, Python in 3.0, and new languages (Rust, Elixir) come with UTF-8. Microsoft Windows has code pages, UCS-2, UTF-16.

Maybe the Python way? b"binary", u"utf-8" (but together, not the Python fiasco); Ruby has "# Encoding: utf-8", and transformation tools can mark all unspecified strings as "b" or "u".

> Once again though, the deep parts of both Python and Ruby classes are probably more difficult to explain.

No, every Ruby object contains variables and has a link to a class which defines instance methods, we call it singleton_class

    foo = Object.new
    foo.class == Object
    #=> true
    foo.singleton_class == Object
    #=> false

    def foo.bar
    end
    foo.singleton_class.instance_method(:bar)
    #=> #<UnboundMethod: #<Class:#<Object:0x000055cb33808e10>>#bar() (irb):7>
There is ancestors chain

    Object.ancestors
    #=> [Object, Kernel, BasicObject]
There is a bit of syntactic sugar

    Foo = Class.new
There are few revelations with main (method defined in Object)

    def baz
    end
    Object.instance_method(:baz)
    => #<UnboundMethod: Object#baz() (irb):19>
Nothing like the audible "click" I had when I understood that a "function" is a "constructor"

    constructor Foo {}
    // you can call me as function too
and that, unlike in any other language, [[Prototype]] is hidden. I've read through ES5 to be sure there are no hidden traps left.

Every JavaScript programmer has to go through this list either beforehand or by experience. I do not want to undermine the TC39 effort — arrow functions, string interpolation in template literals, strict BigInt, Object.create — these are great advancements. I don't feel the same way about "class"; the underlying weirdness is still there.

Make [[Prototype]] visible

    Object = Object.prototype
    Function = Function.prototype
now it is easy to reason about

    typeof Object 
    //"object"

    Foo = class {}.prototype         // redefine with sweetjs macro
    Bar = class extends Foo.constructor {}.prototype

    new Foo.constructor              // redefine with sweetjs macro

    Object.constructor.create(Bar)   // redefine as Reflect.create
once redefined:

    Foo = class {}
    Bar = class extends Foo {}
    new Foo
    Reflect.create(Bar)
I've shown it in another comment [3].

Languages are weird. There are a lot of C++ developers, I've been there, and there is no way to know all its dark corners. Python's ideology hurts. Java took the EE way. C# was tied to Microsoft. K&R C is beautiful, hard to write safely, packs a lot into the code. PHP has its bag of problems. SQL is not composable; CTEs help. Go has its ideology. Ruby — performance. And JavaScript because browser; not bad when you know and avoid the skeletons in the closet.

Lua metatables looked like a proxy/method_missing for me.

[1] https://www.ecma-international.org/ecma-262/5.1/#sec-9.5

[2] https://www.ecma-international.org/ecma-262/5.1/#sec-9.1

[3] https://news.ycombinator.com/item?id=24815922


> Object.prototype.toString is not as useful as Ruby, Python

I don't know that returning that info would be good or secure in JS

> Oh, DOM UTF-16 string broken by UCS-2 JavaScript function. I understand it is not easy to fix, Ruby fixed in 1.9, Python in 3.0, new languages (Rust, Elixir) come with UTF-8. Microsoft Windows has code pages, UCS-2, UTF-16.

The best and most compatible answer is moving JS to UTF-32. JS engines already save space by encoding strings internally as latin1 instead of UCS when possible (even around 90% of text from Chinese sites is still latin1). IMO they should have made backtick strings UTF-32, but that doesn't exactly transpile well.
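
To make the code-unit vs code-point distinction concrete (standard JS behavior, nothing engine-specific):

    const s = "\u{1F4A9}";        // one astral code point (pile-of-poo emoji)
    console.log(s.length);        // 2   -- .length counts UTF-16 code units
    console.log(s[1]);            // lone trailing surrogate, renders as �
    console.log([...s].length);   // 1   -- the string iterator walks code points
    console.log(s.codePointAt(0).toString(16)); // "1f4a9"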

> No, every Ruby object contains variables and has a link to a class which defines instance methods, we call it singleton_class

https://mccraigmccraig.files.wordpress.com/2008/10/ruby-eige...

https://i.stack.imgur.com/FPPdI.png

or adding all the things with a few instances, .constructor, etc

https://i.stack.imgur.com/HvzDP.png

I'll let you decide which implementation is easier to work through, but I have a definite opinion that Ruby's system is more complex (and Python layers their OOP on top of what is basically a hidden prototypal system).

> I've red through ES5 to be sure there are no hidden traps left.

You'll love newer ES versions then. The upcoming private fields are an even bigger mess.

JS needs a "use stricter" mode which really slices away the bad parts. Even better, just add a `version="es20xx"` requirement to use newer features and have browsers ignore what they don't know, so you could even compile and link to multiple compilation levels of the same script and have the browser choose.

In truth, JS would be in the best place of any top-20 language if Eich had just been allowed to make a scheme variant as he had planned.


Yes, template strings was a clean start.

Of course a prototype-based language is simpler than a class-based one. Ruby's system is more complex. It provides more tools — Class, Module, class and instance methods, variables (as depicted in the picture). You asked about the eigenclass (singleton_class these days); that's Class:a -> A, a very simple concept.

And yet Ruby inheritance is much easier; it is all around and it just works. No one does this in JavaScript, it's too complex. There were many attempts at building OOP people could understand on top of JavaScript in the 2000s. No one does this for Ruby.

> https://i.stack.imgur.com/FPPdI.png

With fix above:

http://sergeykish.com/javascript-no-constructor.png

I'll let you decide which implementation is easier to work through, but I have a definite opinion that current JavaScript system is more complex.

> You'll love newer ES versions then. The upcoming private fields are an even bigger mess.

I follow.

Netscape and Mozilla tried the version approach. Modules are a clean start, 'use strict' by default. It is not Visual Basic; already good.


Sure, until you parachute into a code base where several generations of contractors added features that communicate over a shared global object. This is bad per se, but becomes worse when your language allows one to add fields on the fly and you end up with this container full of similar fields because eventually nobody knows exactly what’s in the object any more...


And that seems pretty simple to fix. "The same level of awareness that created a problem cannot be used to fix the problem" - and maybe that's why they hired you, to fix these problems. I've been that guy before. What was really fun was the codebase I had to fix that used mexican slang words as variable names, and I only speak English. So much fun. But I sucked it up, accepted the challenge, and I improved it.

It really doesn't take a super-genius to code javascript simply, efficiently, and without errors. Funny that a lot of programmers that think they're very smart are the ones that either shit on javascript, or make a lot of stupid errors with it, or both.


I've seen similar abuses with global maps in other languages (essentially the same). This is an architecture fault rather than a language fault.

As you say, that is a problem with any language and project with a revolving door of developers. Perhaps those companies should learn their lesson and hire at least one or two good, permanent senior devs to keep things on track.

Like always, the human factor outweighs almost everything else.


I'd love to have a way to take a `Date` object and set it as the value of a `datetime-local` input. Sure feels like that should be straightforward to do, without requiring any weird conversions like it does.
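
For reference, the "weird conversion" usually ends up being something like shifting by the local timezone offset and slicing the ISO string (a sketch using only standard Date APIs; the helper name is made up):

    // datetime-local wants local wall-clock time ("YYYY-MM-DDTHH:mm"),
    // but toISOString() is always UTC, so shift by the offset first.
    function toDatetimeLocalValue(date) {
      const shifted = new Date(date.getTime() - date.getTimezoneOffset() * 60000);
      return shifted.toISOString().slice(0, 16);
    }

    document.querySelector('input[type="datetime-local"]').value =
      toDatetimeLocalValue(new Date());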



Wow. `zonedDateTimeISO` looks like exactly what I was hoping for. I'm kind of surprised that there isn't anything analogous to `strftime`, though.


That’s not what “objective” means.


"When I use a word, it means just what I choose it to mean —neither more nor less."


It's not often that one comes across a quote from one of the greatest works of art:

https://www.goodreads.com/quotes/12608-when-i-use-a-word-hum...

That whole conversation is quite interesting and perhaps this part can also be thought of as alluding to "prescriptivism versus descriptivism".


I think it goes beyond prescriptivism/descriptivism.

Descriptivism refers generally to describing how a community uses language, so the most common usages end up being the primary definitions. Once documented, these tend to become prescriptive.

In that context, a single author who uses a word in an unusual way would likely not have any impact on the descriptive or prescriptive definitions of that word, unless of course their usage becomes common.

Humpty was arguing for the benefits of using words in unusual ways, which potentially violates both prescriptivism and descriptivism.

Impressionistic use of words is one example of this, where the words used might convey a certain feeling, perhaps via their connotations, even though their literal definitions may not be exactly appropriate. This kind of usage is generally found in literature where language is being used as an artistic tool rather than a prosaic tool of communication.

tl;dr: my HN comments is a art


So you're saying we should applaud reducing it by 1-2% right?

To answer the question of if it's worth it to add this specialized instruction, it really depends on how much die space it adds, but from the look of it, it's specialized handling of an existing operation to match an external spec; that can be not too hard to do and significantly reduce software complexity for tasks that do that operation. As a CE with no real hardware experience, it looks like a clear win to me.


Why whine about it? No matter your age, JavaScript will almost certainly still be in use until you retire.


This is like saying “nothing justifies the prolonging of capitalist torture”. On some level it’s correct, but it’s also being upset at something bordering a fundamental law of the universe.

There will always be a “lowest common denominator” platform that reaches 100% of customers.

By definition the lowest common denominator will be limited, inelegant, and suffer from weird compatibility problems.

If it wasn’t JavaScript it would be another language with very similar properties and a similar history of development.


That's not true at all. HTTP/2 is hugely impactful without using push at all—multiplexing beyond 4/8 streams enables radically different, performant bundling strategies, all "for free" with zero configuration, just by the act of using the protocol.


Wikipedia claims MTr/mm^2s of:

  Intel 14: 37.5
  Intel 10: 100.8
  TSMC 16/14/12: 28.88
  TSMC 10: 52.51
  TSMC 7: 96.5 to 113.9, depending on iteration.
  TSMC 5: 173


It really shows you how good Intel's IP is that they can even make competitive CPUs on a generation old process.


No, it's just that they use a different scale for vanity metric number. Intel's chips aren't competitive for the price; they just have customer inertia.


Even on a transistors-per-mm^2 measure, the 14nm++++ they are using is vastly inferior to TSMC’s 7nm.

It's only Intel's 10nm that is equivalent, and we don’t have desktop CPUs on that process yet (if we ever will).


this isn't an entirely fair representation of intel's 14nm. over the years, it has become extremely well optimized for high clocks. the 14nm parts actually have a slight disadvantage in IPC compared to amd, but an individual core is still faster due to the large difference in clock speed.

this has actually created an odd marketing dilemma for intel. despite having less IPC and worse power efficiency, intel's 14nm parts are actually faster than the new 10nm parts because the new process can't achieve such high clock speeds.


You can't really divorce IPC from the process used, the amount of nested logic in each pipeline stage is a direct function of gate delay and therefore the manufacturing process. On one process I may be able to fit 10 stages while on another I could fit 20. It's quite likely Intel's IPC would be much better on TSMC's 7nm vs their current 14nm.

This is also a problem when moving designs to a new generation FPGA. You may find your current level of pipelining is no longer optimal and you should do more each cycle.

People seem to have this weird idea that IPC is unrelated to how things are manufactured.


150% vs Nashorn is an important caveat here. At {employer} we started our SSRing via Nashorn and later moved to a separately deployed Node service for performance reasons... and saw a 4x drop in time per request for rendering tasks.


Nashorn is deprecated though, and GraalVM is its intended replacement. So why even evaluate Nashorn for new projects?


Browser vendor/version is hugely important information for triaging issues and implementing progressive enhancement approaches on large scale websites.


Triaging issues, sure. Progressive enhancement? If you’re parsing User-Agent to implement that, you’re doing it wrong. (Feature detection is the correct approach, when necessary.)
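
For example, a typical feature-detection check looks something like this (a sketch; the specific features are just illustrative):

    // Detect the capability directly instead of inferring it from the UA string.
    if ('IntersectionObserver' in window) {
      // lazy-load images with IntersectionObserver
    } else {
      // eagerly load everything as a fallback
    }

    // Canvas-based check for WebP encoding support (falls back to a PNG data URL).
    const supportsWebP = document.createElement('canvas')
      .toDataURL('image/webp')
      .startsWith('data:image/webp');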


There are cases like IndexedDB bugs in certain versions of Safari that are very hard to feature detect...


Can’t feature detect during SSR, so making an educated guess and falling back gracefully (when possible—many times it just isn’t, as with ES6 syntax) is important to get initial page load right by default.


Because so many people abused the user agent header for things that could gracefully fall back (or they just made invalid assumptions from it) it was made an unreliable indicator for times you actually want to send shimmed content on first load.


Here's an official Google page telling you to sniff UAs or their new feature will break everything: https://www.chromium.org/updates/same-site/incompatible-clie...

Edit: OK, this isn't progressive enhancement. But it's still a major problem you have to sniff the User-Agent for. I don't want to sniff UAs either but when Chrome is wont to change how fundamental parts of the web work like cookies it's sometimes necessary. I know UA hints will still allow you to do this, but requiring a second request is going to make things like redirect pages difficult to implement.
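
Concretely, the sniffing that page calls for ends up looking something like this (a rough sketch; treat the version ranges on the linked Chromium page as authoritative, not the ones from memory here):

    // Only mark cookies SameSite=None for clients believed to handle it.
    function shouldSendSameSiteNone(ua) {
      // Chrome 51-66 rejected cookies marked SameSite=None outright.
      const chrome = ua.match(/Chrom(?:e|ium)\/(\d+)/);
      if (chrome && +chrome[1] >= 51 && +chrome[1] <= 66) return false;

      // Safari on iOS 12 / macOS 10.14 treated SameSite=None as SameSite=Strict.
      if (/iP(hone|ad).+OS 12_/.test(ua)) return false;
      if (/Macintosh.+Mac OS X 10_14.+Version\/.+Safari/.test(ua)) return false;

      return true;
    }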


SameSite=None isn’t progressive enhancement.


You're right, I should have replied elsewhere. By the way, if anyone knows what this _is_ called I would be interested to know. As far as I can see it's basically feature detection with no other way of detecting it besides the UA.


Perhaps, but it invalidates most of their stated reasons for doing this. Which makes the post a confusing read.


> it invalidates most of their stated reasons

The "main advantages" section for client hints all seem to apply there.


IMO it’s explained better in the Client Hints specification.


Client hints are not any more useful than the traditional UA for privacy or progressive enhancement. It's the same content, just split up. Any browser not sending everything will have atrocious permission UX, nagging websites, or will just get the broken IE5 version.

UA hints solve nothing, even after reading the spec

