Hacker News | jmrm's comments

The response is ChatGPT.

They're going to add ads to their responses (if they haven't already), and people use it as a search engine.


Google is in that game too with "AI Mode", stealing traffic from ChatGPT.

But as OP and other threads here have highlighted, the other ~half of the gold sits in more walled-off communities like WhatsApp, IG, and Telegram, and in other messaging and non-digital communities that get their "news" and "information" from viral shorts on IG, TikTok, and YT Shorts.


Maybe I'm reading between the lines, but isn't it ridiculous to talk about license prices when the affected machines are $900 pro laptops?

I mean, I could understand it on a cheap single-board computer, but this is nonsense.


Someone buying one doesn't care if it's $898.54 or $898.84.

However, the price point is set to $899 regardless.

Then if someone can save just 10 cents each on 10 million units, that's $1m in "savings". Despite making it a $5 worse experience, they will do this, because the majority of buyers won't be swayed by this type of choice.

"Value engineering": it's how good things go bad, until eventually new products enter the market with consistent quality. It's one of the many problems of scale. No small company with a CEO who cares about his product is going to devalue it to save 0.1% of the cost. Once you get large, though, nobody personally cares about the product, only the financials, because the financials, if they lag the product at all, do so by years.


Who is to say that all the 'features' on a SoC won't have licensed variants coming out of the woodwork? If Intel and AMD didn't think they were worth paying for themselves, then they shouldn't have put them in silicon only to pass the cost on to the consumer a few times over: with a bundled copy, possibly buying it in the store anyway, maybe not even using Windows or multimedia, etc.

The best move would have been killing it in the crib, the next best is making no one certain the format will work with all their demographics.


Also keep in mind that, for a $999 laptop, Dell and HP aren't getting $999 in profit.

Most of the price of that laptop goes into components that other companies make. There's very little that's actually made by Dell (or even specifically for Dell).

I wouldn't be surprised if they make as much on kickbacks from McAfee subscriptions as they make on the laptops themselves.


>$5 worse experience

let's all calm down, it's about h.265, which nobody sane uses anyway


Looks at folder on ZFS array with ~16TB of video files, at least half of which by bytes-stored are h.265

Haha, yeah. Haha. Nobody sane.

Sweats


Yeah, just because some data hoarder on the internet has TBs of videos doesn't mean that's normal. So it's a weird call-out.

It is, however, a call-out of the GP as well, for not knowing how ubiquitous something can be without its use being shoved in your face. The GP is evidently unaware that most streaming services will offer an h.265 encode for those users who can use it, as the bandwidth savings make it very worthwhile. Mobile devices use HEVC by default now as well, with at least iOS also using a still-image variant of it. From reading elsewhere in these comments, clearly MS Teams uses it as well.

So just because you don't know it is being used does not mean it is not being used the way you might think.


Literally every decent video application uses h.265. What are you even talking about?

Is this some Linux bigot thing?


No. YouTube and Netflix both use h264 + AV1 as their codec options. Netflix seems to use h265 for a small subset (but it's somewhat unclear).


That's incorrect.

YouTube detects your capabilities and sets the codec automatically. Unless you're on an obsolete potato network or watching low-resolution stuff, you'll likely get h265.

https://support.google.com/youtube/answer/2853702?hl=en#:~:t...

Netflix is similar. It defaults to h265 for Netflix content (because they want it to look good). Partner/licensed content uses the inferior codecs that use more bandwidth to achieve worse quality.


YouTube has never supported h265 and never will; they even tried to block support in Chrome because they hate it that much. They support h264, VP8/VP9, AV1, and soon AV2. They literally started an entire organisation, AOMedia, to take on MPEG.


Ahh. You're right, sort of.

Users can choose h265 for live streams and they allow hevc uploads, but they then transcode it to worse codecs before broadcast.

I wonder how much they would save on bandwidth by switching to HEVC? I think it's something like 40% more efficient on average.

I guess av1 is even better, but what percentage of hardware supports it?
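For a rough sense of scale: "40% more efficient" is usually read as 40% lower bitrate at the same quality, so the saving scales linearly with traffic. A back-of-envelope sketch with entirely made-up numbers:

    #include <cassert>
    #include <iostream>

    int main() {
        // Hypothetical figures, purely for illustration: a service pushing
        // 10 PB/month of h.264 video, and the ~40% average bitrate saving
        // usually quoted for HEVC at equal quality.
        const double h264_pb_per_month = 10.0;
        const double hevc_saving = 0.40;

        const double hevc_pb_per_month = h264_pb_per_month * (1.0 - hevc_saving);
        assert(hevc_pb_per_month > 5.9 && hevc_pb_per_month < 6.1);

        std::cout << hevc_pb_per_month << " PB/month\n";  // roughly 6 PB/month
    }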


> what percentage of hardware supports it?

Pretty much everything modern except Apple. Intel since 11th gen (2021), AMD since Zen4 (2022), Samsung phones since 2021, Google phones since 2021, Mediatek since 2020.

With modern lifecycles the way they are, that's probably ~60-80% of everything out there.

Also software decoding works just fine.


Thanks! I guess I have some catching up to do.


Kind of. But where does it stop? Looking at a 24-cent license in isolation does sound silly. But what about when you add up Windows, h265, h264, mp3, aac, HDMI, ...? You can't throw every single feature into a laptop just because each one is individually cheap. Eventually they add up. Not to mention that in addition to per-unit fees, lots of these have required memberships and certifications, which may add up too, especially on lower-volume products.

IMHO the fact that this wasn't visible on any product page is pretty awful, especially when this was a near-globally included feature before. Maybe in an ideal world the customer would be able to pick which licenses they want individually when purchasing the device (or add them on at a later time). But that is beyond the knowledge of most consumers and has other downsides.

So while I do consider this choice pretty silly, I find it hard to draw the line at the point where it becomes clearly ridiculous.
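The adding-up effect is easy to make concrete. A quick sketch with entirely hypothetical per-unit royalty figures (the real fee schedules from the various licensing pools differ):

    #include <cassert>
    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        // Entirely hypothetical per-unit royalties, in cents.
        std::vector<std::pair<const char*, int>> fees = {
            {"h.265", 24}, {"h.264", 20}, {"AAC", 40}, {"HDMI", 15},
        };

        long long cents_per_unit = 0;
        for (const auto& [name, fee] : fees) cents_per_unit += fee;
        assert(cents_per_unit == 99);

        // Individually negligible, but across 10 million units...
        const long long total_cents = cents_per_unit * 10'000'000LL;
        std::cout << cents_per_unit << " cents/unit; 10M units: $"
                  << total_cents / 100 << "\n";
    }

Under a dollar per laptop, yet almost $10M at Dell-like volume, which is exactly the "value engineering" math from upthread.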


I really like how some good structures used by other languages, especially Rust and Zig, have been added to the newer C++ standards.

Result (std::expected), Optional, and Variant are really sweet for day-to-day use of the language, and the in-process standard libraries for SIMD operations, BLAS mathematical functions, and the execution library look really cool, especially as part of the standard.

I would like C++ to be a little more "batteries included" in some ways, like having a basic standard for signals, networking (just handling sockets would be a huge thing), and some basic system calls.


> I really like how some good structures used by other languages, especially Rust and Zig, have been added to the newer C++ standard. Result, Optional, and Variant are really sweet for day-to-day use of the language, and the in-process standard libraries of SIMD operations, BLAS mathematical functions, and the execution library look really cool, especially as standard.

For Optional and Variant: they were both basically standardized versions of boost.optional and boost.variant, which have existed since 2003 and 2002 respectively. Most of the time you can just change boost:: to std:: and it works exactly the same; for many years, software I develop could switch from one to the other with a simple #ifdef, due to platforms not supporting std::optional entirely (older macOS versions, pre-10.14 IIRC).


Often the std-flavored implementation is inferior to the Boost one. Support for optional references was only added to the draft standard recently, while Boost has had it since forever.


I agree, I was stuck on boost::optional for a long time for this reason, and I only use boost::variant2 for my variant needs, although code can still build with std:: in case a customer really does not want boost


Correct; they have been around for a lot longer than Rust.


I knew some changes (like the STL containers) came from Boost, but I didn't know those also came from there, and especially not that they'd been around for such a long time!

That means I need to look at more of the Boost documentation :)


"I would like C++ to be a little more 'batteries included' in some ways, like having a basic standard for signals, networking (just handling sockets would be a huge thing), and some basic system calls."

Besides basic handling of TCP sockets and the Unix-style "Ctrl-C" keyboard interrupt, none of the stuff you're asking for is portable across different platforms. I'm not saying it's a bad idea, just that there is no single universal standard for what an OS should do and what knobs and levers it should expose, or at least not one that everybody follows.

Linux has non-trivial deviations from the POSIX spec, and even FreeBSD and OpenBSD have deviations. POSIX has its own compliance test suite that it runs to award certification of compliance, but it's not open source and you need to pay a fee for it.

All of that, however, is a drop in the bucket compared to making an API that exposes all the knobs and levers you want in a way that behaves exactly the same on Windows, which barely has any architectural resemblance to UNIX. For example, NTFS is case-insensitive by default and has nothing resembling the UNIX style of file permissions. Or more importantly, signals do not exist on Windows; something resembling signals for keyboard interrupts exists, but stuff like SIGHUP and SIGBUS does not. I'm talking about the kind of known caveats that come with using a POSIX-compatibility layer on Windows, e.g. Cygwin.

I think if I get much deeper than that I'm just being pedantic, but even Python code behaves differently on Windows than it does on all the POSIX-like OSes out there.
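To make the portability point concrete: even the most minimal "just sockets" operation already splits into per-platform branches. A sketch of opening and closing a single TCP socket on both families of platforms:

    // Even "just open a TCP socket" needs an #ifdef:
    #ifdef _WIN32
      #include <winsock2.h>
    #else
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <unistd.h>
    #endif
    #include <cassert>
    #include <iostream>

    int main() {
    #ifdef _WIN32
        WSADATA wsa;                                // Windows needs global init first
        assert(WSAStartup(MAKEWORD(2, 2), &wsa) == 0);
        SOCKET s = socket(AF_INET, SOCK_STREAM, 0); // opaque handle, not an fd
        assert(s != INVALID_SOCKET);
        closesocket(s);                             // not close()
        WSACleanup();
    #else
        int s = socket(AF_INET, SOCK_STREAM, 0);    // a plain file descriptor
        assert(s >= 0);
        close(s);
    #endif
        std::cout << "ok\n";
    }

And this is before touching non-blocking modes, error reporting (errno vs WSAGetLastError), or anything signal-adjacent.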


Asio is a portable and efficient network abstraction. It was being tweaked for standardization for the last 15 years, before being suddenly voted out.


There's no universally adopted OS standard for a lot of the stuff already in the stdlib, but C++ without std::string, a large portion of the things under std::ios_base, almost all of concurrency (e.g. std::thread), std::filesystem, and so on would be relatively shit in comparison.

As much as possible in the stdlib should behave the same across as many targets as possible. That's about where the relevance ends in my mind.


I knew about the differences between UNIX-like OSs in their usage of different signals (and the System V vs BSD battles, among others), but I didn't know Windows didn't have a similar system (I haven't done much low-level work on Windows).

Thanks for the long comment!


A lot of those design decisions in OSes were influenced by hardware limitations of home computers in the 1970s and 1980s. UNIX may have been around since '71, but Windows NT was designed to expose an API similar to DOS-based Windows 95 (the Win32 API), which in turn was designed around backward compatibility with MS-DOS, which in turn was designed to mimic other "DOS" OSes from other companies in the late 70s. This ultimately traces back to a time when consumer-grade hardware just couldn't handle all the features we now take for granted, like virtual memory and preemptive multitasking, or even floating-point math.

However, Windows NT was also written after machines capable of running Unix cost less than a BMW, so a lot of the good folks in Redmond during the early 90s took some liberties to improve on some fundamental design flaws of UNIX.

1. "Everything is a file" is very flexible for writing server applications where the user is expected to know and trust every program, but it is potentially harmful to expose devices as files to non-technical users. Nowadays with UEFI, you can even pipe /dev/zero to /dev/mem or /dev/port and brick your motherboard. There was a patch for this, but there are old servers running old Linux versions in the wild that can be permanently bricked.

2. Arguably, exposing such a wide range of signals to a userland program for it to handle is a design flaw, like the memory fault signals SIGSEGV and SIGBUS. They were not designed for IPC or exception handling, but they ended up being used that way by a lot of developers over the years. I won't start a war to make the case because I can see both sides on that, but #3 below is not controversial at all.

3. NTFS ACLs are a big improvement over UNIX-style ugo-rwx permissions. FWIW, they're also easier to work with than POSIX ACLs.

Just something to think about: the Windows way is radically different because compatibility with ye-olde DOS running on 8088 CPUs ruined it in some ways, but in other ways its design was driven by learning from UNIX's mistakes.

Despite the confusing name, Win32 is not just a 32-bit libc; it's a 64-bit libc on 64-bit Windows.


Some of them date back to way before Rust and Zig; I am thinking of Qt and Boost.

Boost in particular is like a testing ground for future C++ standards, with many of the "batteries" you want included. And it is already C++.

Of course, Rust is a huge influence nowadays, and it sparks a lot of debates on the direction C++ should take. I think less so with Zig, which is more C than C++ in spirit, but every good idea is good to take.


> I would like C++ would be a little more "batteries included" in some ways, like having a basic standard for signals, networking (just handling sockets would be a huge thing), and some basic system calls.

Isn't the Boost library basically that? C++ has been slowly adopting features from it into its standard library.


C++ has been adopting features from a lot of different libraries into the stdlib. These libraries don't always do it "The Boost Way™", even when Boost has an equivalent library. Boost has a lot of good stuff though, it's just a little farther from being "std++, why care about std?" than commonly advertised.


I wish C++'s optional was less of a compromise. It would be great if it had specialization for sentinel values, like Rust. As-is it can be pretty wasteful in data structures.
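A minimal sketch of what that sentinel/niche trick looks like when done by hand, assuming INT_MIN can never be a valid payload (Rust applies this automatically, e.g. Option<&T> stays pointer-sized):

    #include <cassert>
    #include <climits>
    #include <iostream>
    #include <optional>

    // Hand-rolled sentinel optional: reserve one impossible value as
    // "empty" instead of carrying a separate bool discriminant.
    struct SentinelOptInt {
        static constexpr int kNone = INT_MIN;  // assumed never a valid payload
        int raw = kNone;

        bool has_value() const { return raw != kNone; }
        int value() const { assert(has_value()); return raw; }
    };

    int main() {
        static_assert(sizeof(SentinelOptInt) == sizeof(int),
                      "no space overhead over a plain int");
        // std::optional must store a separate flag, so it is strictly
        // larger (typically 8 bytes for optional<int> on 64-bit ABIs).
        static_assert(sizeof(std::optional<int>) > sizeof(int), "");

        SentinelOptInt empty;
        SentinelOptInt some{42};
        assert(!empty.has_value());
        assert(some.value() == 42);
        std::cout << sizeof(std::optional<int>) << " vs "
                  << sizeof(SentinelOptInt) << "\n";
    }

In an array of millions of these, the doubled footprint of std::optional (plus the wasted padding) is exactly the data-structure cost being complained about.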


std::variant is an abomination that should never be used by anyone ever. Everything about it is sooooo bad.


That's a bit hyperbolic. Sure, it's not exactly ergonomic, but that doesn't mean I can't use it. One thing that bugs me is that there is still no overload helper in the standard:

    // helper type for the visitor
    template<class... Ts>
    struct overloads : Ts... { using Ts::operator()...; };
    // deduction guide, only needed before C++20
    template<class... Ts> overloads(Ts...) -> overloads<Ts...>;
Of course, having true pattern matching would be much nicer. At least there's a proposal: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p26...
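For reference, a self-contained usage sketch of that helper (with the pre-C++20 deduction guide included):

    #include <iostream>
    #include <string>
    #include <variant>

    template<class... Ts>
    struct overloads : Ts... { using Ts::operator()...; };
    template<class... Ts> overloads(Ts...) -> overloads<Ts...>;

    int main() {
        std::variant<int, std::string> v = std::string("hello");
        std::visit(overloads{
            [](int i) { std::cout << "int: " << i << "\n"; },
            [](const std::string& s) { std::cout << "string: " << s << "\n"; },
        }, v);  // prints "string: hello"
    }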


I will boldly state that std::variant makes all code worse and it is always better to not use it.

I love sum types in Rust. They’re great. This stuff exists solely to add ergonomics. If it’s not ergonomic it’s just making your code worse!

std::variant is fundamentally broken because it uses types as its discriminant. Want a variant with two ints that represent two things? Fuck you, you can’t. Rust enums of course can do this.

std::visit is an abomination. As is the template overloaded bullshit you have to copy paste into projects.

And of course the error messages when you get when it’s wrong are atrocious even by C++ standards.

No. std::variant is awful and you should simply never use it. This could change in 2040 when you can use C++32. But for now just avoid it.


>Want a variant with two ints that represent two things? Fuck you, you can’t.

This is false, I also find that in general you have a tendency to make false claims about C++ in most of your posts about it. I would suggest you check out a resource like https://en.cppreference.com/ just as a sanity check before you make claims about the language in the future, and also because it's a very good resource for learning the ins and outs of the language.

As for your claim: std::variant supports index-based discrimination similar to std::tuple, so you can absolutely have a std::variant<int, int>, and access the second int along the lines of:

    auto foo = std::variant<int, int>();
    foo.emplace<1>(123);  // make the second alternative active first; assigning
                          // through std::get<1> would throw while index 0 is active
    std::cout << std::get<1>(foo);
This is all documented with examples:

https://en.cppreference.com/w/cpp/utility/variant/get.html


Touché.

If anyone ever submitted a diff that required you to know the difference between get<0> and get<1> I would reject it with a polite message that this is extremely unclear, unintuitive, and error prone. Don't do that.

I write C++ every day and use cppreference on the regular. If you’re going to scroll my post history you can also read my blog to help decide if I’m a dumbass or not!

Also, if you create a variant with multiple instances of the same type you now lose the ability to use visitor overloaded lambdas. So in practice you need to wrap the type. Which in some ways is what Rust does. But all that is to say that std::variant is extremely non-ergonomic and you’re better off just not using it.
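The wrapping workaround described above can be sketched like this; the wrapper names (Retries, PortNumber) are invented for the example:

    #include <iostream>
    #include <variant>

    // Give the two int-valued alternatives distinct types so that
    // overloaded visitor lambdas can tell them apart again.
    struct Retries    { int value; };
    struct PortNumber { int value; };

    template<class... Ts>
    struct overloads : Ts... { using Ts::operator()...; };
    template<class... Ts> overloads(Ts...) -> overloads<Ts...>;

    int main() {
        std::variant<Retries, PortNumber> v = PortNumber{8080};
        std::visit(overloads{
            [](Retries r)    { std::cout << "retries: " << r.value << "\n"; },
            [](PortNumber p) { std::cout << "port: " << p.value << "\n"; },
        }, v);  // prints "port: 8080"
    }

Which, as noted, is essentially re-deriving by hand what a Rust enum gives you for free: named, same-payload-typed alternatives.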


>If anyone ever submitted a diff that required you to know the difference between get<1> and get<2> I would reject it with a polite message that this is extremely unclear, unintuitive, and error prone. Don’t do that.

But that wasn't your argument. If you have a coding standard that prohibits magic numbers then that's great, use a named constant instead just like most coding standards require:

    constexpr std::size_t FOO = 0;
    constexpr std::size_t BAR = 1;
    std::get<FOO>(my_variant);
    std::get<BAR>(my_variant) = "hello world";  // assumes the BAR alternative is active
>If you’re going to scroll my post history you can also read my blog to help decide if I’m a dumbass or not!

I don't think you're a dumbass, I think you repeatedly express very strong opinions without taking just a small amount of time to verify that the argument you're making is correct. That's why I advised to just take like 1 or 2 minutes to quickly perform a sanity check and ensure that what you're claiming is factual.

Heck most people here complain about C++ constantly based on their personal experience with the language, and they have every right to do so. I don't take issue with that.

I take issue when people express very strong statements that would convince people who don't know any better simply on the basis of how confident the opinion is being expressed. Your original claim is simply too strong of a claim to make given that you are not properly informed on this subject.


> I think you repeatedly express very strong opinions without taking just a small amount of time to verify that the argument you're making is correct. That's why I advised to just take like 1 or 2 minutes to quickly perform a sanity check and ensure that what you're claiming is factual.

Are you sure this isn't projection?


Literally every single time I have ever used std::variant or worked with code that used std::variant I wish it was done without it. Every time. And that’s not an exaggeration or me being hyperbolic.

Named constants aren’t significantly better. Multiple types in a variant breaks many things with god awful error messages. And that is a fact.

I am hyperbolic on HN. That’s true. My sentiment is sometimes but rarely wrong!

std::variant is bad and no one should use it ever, imho. It sucks and is horribly ergonomic and doing certain things makes it even less ergonomic. Friends don’t let friends use std::variant.

Now ask me my opinion on global variables and how many times I have had to debug mysterious crashes that you’ll never guess the root cause!


Would you ever really use that? That just shows how bad it is, IMO. Any time I want a sum type I want names, not hardcoded integers.


I wouldn't use it (in any language), but the claim was that it is impossible. It isn't.


I treat std::variant the same way I treat std::tuple, which is that I use them internally/privately and don't expose them as part of a public API.

If I want to expose a std::variant publicly then I take the effort to emulate Rust, which I think everyone agrees has an incredibly useful and elegant enum type so it looks like this:

    int main() {
      auto s1 = ConnectionState::Disconnected();
      auto s2 = ConnectionState::Connecting(3);
      auto s3 = ConnectionState::Connected("192.168.1.5", 8080);

      for (const auto& state : {s1, s2, s3}) {
        state.visit(
          [](Disconnected) {
            std::cout << "Disconnected\n";
          },
          [](int retries) {
            std::cout << "Connecting (" << retries << " retries)\n";
          },
          [](const IpAddress& ip) {
            std::cout << "Connected to " << ip.host << ":" << ip.port << "\n";
          });
      }
    }
To implement that I currently do need to write out boilerplate like below, but with C++26 I will be able to use the upcoming reflection feature to automatically implement the bulk of this code by reflecting on the std::variant directly:

    class ConnectionState : private std::variant<std::monostate, int, IpAddress> {
      public:
        static auto Disconnected() { return ConnectionState(std::monostate{}); }
        static auto Connecting(int retries) { return ConnectionState(retries); }
        static auto Connected(std::string host, uint16_t port) {
          return ConnectionState(IpAddress{std::move(host), port});
        }

        template <typename... Fs>
        decltype(auto) visit(Fs&&... fs) const {
          auto visitor = Overload{std::forward<Fs>(fs)...};
          // std::visit can't access the private base from outside the class,
          // so cast to the variant type explicitly:
          using Base = std::variant<std::monostate, int, IpAddress>;
          return std::visit(visitor, static_cast<const Base&>(*this));
        }

      private:
        using std::variant<std::monostate, int, IpAddress>::variant;
    };

    using Disconnected = std::monostate;
...


> I will boldly state that std::variant makes all code worse and it is always better to not use it.

And I will boldly state that your comment is again completely hyperbolic.

> std::variant is fundamentally broken because it uses types as its discriminant.

In my experience, the typical use case for std::variant is static polymorphism. For this purpose it works just fine because the type is the discriminator.

Another popular use case is tagged unions for non-POD types. In this case, you probably access the active member with std::get. This even works with duplicate types, if you access by index.

Would proper sum types and pattern matching be nice? Of course! But does this mean that std::variant is fundamentally broken? I don't think so.


Kill me, but I never saw the point over abstract base classes with polymorphic behavior defined in the subclasses.

Now purists will scream "compile-time optimization!", but in reality std::variant is implemented very much like a vtable. There was a good talk at CppCon a few years ago on this issue. In particular, they found no difference in performance.


> Kill me, but I never saw the point over abstract base classes and polymorphic behavior defined in the subclasses.

They're different tools which can fit different situations better or worse. std::variant fits some situations better (e.g., expressing a known closed set of types which may or may not share a common interface - tree nodes, option/result types, state machines, etc.), while interfaces/virtual functions fit some other situations better (e.g., expressing a potentially-open set of types with a shared interface - widget hierarchies, type erasure, etc.).

> Now purists will scream “compile time optimization!”

I actually feel like under most circumstances it's how closely one option or the other matches your problem domain that influences the decision one way or the other, not performance.

In any case, it doesn't really help that std::variant is somewhat neutered in C++ by a rather awkward interface as well as the lack of generalized/ergonomic pattern matching such as that offered by Haskell/ML-likes/Rust.
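A minimal sketch of the closed-set case, with invented shape types: because the set of alternatives is known and closed, forgetting to handle one of them is a compile error rather than a missing override discovered at runtime:

    #include <cassert>
    #include <iostream>
    #include <variant>

    struct Circle { double r; };
    struct Rect   { double w, h; };
    using Shape = std::variant<Circle, Rect>;  // closed set, no common base needed

    double area(const Shape& s) {
        struct AreaVisitor {
            // Dropping either overload makes std::visit fail to compile.
            double operator()(const Circle& c) const { return 3.141592653589793 * c.r * c.r; }
            double operator()(const Rect& r) const { return r.w * r.h; }
        };
        return std::visit(AreaVisitor{}, s);
    }

    int main() {
        assert(area(Rect{3.0, 4.0}) == 12.0);
        std::cout << area(Circle{1.0}) << "\n";  // pi
    }

With virtual functions, by contrast, the set stays open: anyone can derive a new shape later, which is exactly what you want for widget hierarchies and exactly what you don't want for, say, AST nodes.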


out of curiosity, can you elaborate on that a bit? I've shipped std::variant in a few designs and it's been OK, but maybe I didn't use it deeply enough to see issues.


It doesn't have proper pattern matching (you have to use a visitor class or the weird overload trick), and it sucks for compile times. It should have been a language construct rather than adding it to std, but the committee is often too scared to touch the language because they are stubbornly unwilling to break backwards compatibility.


C++ is a proud New Jersey language. So the priority has been and continues to be ease of implementation. If it's tricky to make it work right, don't sweat it - the users (in this case that's us programmers) will put up with it not working right.

std::variant has valueless_by_exception - the C++ language wasn't able to ensure that your variant of a Dog or a Cat is always, in fact, a Dog or a Cat, in some cases it's neither so... here's valueless_by_exception instead, sorry.


Ease of implementation was not the issue here; on the contrary, the easier implementation would have guaranteed that std::variant<Dog, Cat> always contains a Dog or a Cat. The issue was performance, in that guaranteeing a Dog or a Cat requires dynamic storage duration if either Dog or Cat has a throwing constructor.

If neither Dog nor Cat has a throwing constructor, then std::variant<Dog, Cat> is guaranteed to always be a Dog or a Cat.
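The throwing-constructor case can be demonstrated directly; Dog and Cat here are stand-in types with the copy constructor deliberately made to throw:

    #include <cassert>
    #include <iostream>
    #include <stdexcept>
    #include <variant>

    struct Dog { };
    struct Cat {
        Cat() = default;
        Cat(const Cat&) { throw std::runtime_error("Cat copy failed"); }
    };

    int main() {
        std::variant<Dog, Cat> v{};  // holds a Dog
        try {
            // Switching alternatives: the old Dog is destroyed, then the Cat
            // construction throws, leaving the variant with neither value.
            v = Cat{};
        } catch (const std::runtime_error&) { }

        assert(v.valueless_by_exception());
        std::cout << "valueless: " << v.valueless_by_exception() << "\n";
    }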


> The issue was performance in that guaranteeing a Dog or a Cat requires dynamic storage duration if either of Dog or Cat has a throwing constructor.

Any need here for an allocator is due to a language defect. Rather than implement a fix for the C++ programming language they just punted this problem to every C++ programmer. That's the New Jersey style.


I wonder if some Apple-made software, like Final Cut, makes use of all of those "duplicated" instructions at the same time to get better performance...

I know the multitasking nature of the OS probably makes this happen across different programs anyway, but it would nonetheless be pretty cool!


We already have some USB-C flash drives. Isn't it more practical to have USB-C keyboards, mice, and other devices like that, instead of keeping USB-A around?


IIRC some countries, like the UK, have something similar to a credit score, mostly used to decide what types of loans they can give you, maximum credit card limits, and other financial products, but it won't affect buying a car in cash or renting an apartment at all.

For me it's crazy not being able to rent a place even when paying a whole year upfront.


You can avoid this problem easily by not letting students use phones or computers in the classroom, and by having them do more tasks in the classroom rather than at home.

If you're going to say "but in a working environment you use a computer", then teach them how to use word processors and spreadsheets in the computer room, a thing that doesn't happen today in most schools, btw.


At my uni we still had some coding tests done with pen and paper (2014-2018), and AFAIK they're still doing them. I even did part of an exam in assembly, with a provided list of Xilinx PicoBlaze assembly mnemonics.

I don't know why people demonize them. If you know the syntax you're asked for, you can write in that language, and if you're asked to write some algorithms in pseudo-code, you should be able to do so without any additional computerized help.


I remember how, in the mid-2000s, the Xara Designer illustration software was able to do that; I don't know why it isn't a more common thing (or maybe I just don't know enough specialized software in that area).

I remember that program well because it was available both for Windows and Linux, a really rare thing at the time.


I still refuse to use anything but Xara designer, and have made my children play with it.


IMHO yes. Sometimes, if the deviation is too big, you can get cooked, overcooked, and mostly raw pieces in the same pan, and that's heavily undesirable.


When cutting potatoes into chunks, for something like a stew, I often find myself thinking about this problem, and how I would write a program for a robot to do it.

They are fairly well approximated as ellipsoids of different sizes. Typically, I want pieces around half the volume of the smallest potatoes, but with the range of sizes, this means cutting the larger ones into at least 5 pieces. While it would be simple to make parallel slices giving equal volume, these would have very different shape to the halved smalls. Some can be quartered to give nice chunks, others into thirds with 2 perpendicular cuts...
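The plan described above can be sketched in a few lines; the ellipsoid semi-axes (in cm) are invented for illustration:

    #include <algorithm>
    #include <cassert>
    #include <cmath>
    #include <iostream>
    #include <vector>

    // Approximate each potato as an ellipsoid (V = 4/3*pi*a*b*c), target
    // pieces of half the smallest potato's volume, and count pieces needed.
    struct Potato { double a, b, c; };  // semi-axes

    double volume(const Potato& p) {
        return 4.0 / 3.0 * 3.141592653589793 * p.a * p.b * p.c;
    }

    int main() {
        const std::vector<Potato> batch = {{2, 2, 3}, {2.5, 2.5, 3.5}, {3, 3.5, 5}};

        double smallest = volume(batch.front());
        for (const auto& p : batch) smallest = std::min(smallest, volume(p));
        const double target = smallest / 2.0;  // half the smallest potato

        for (const auto& p : batch) {
            const int pieces = static_cast<int>(std::ceil(volume(p) / target));
            assert(pieces >= 2);
            std::cout << "pieces: " << pieces << "\n";
        }
        // smallest potato -> 2 pieces; largest (4.375x the volume) -> 9
    }

The piece count is the easy half; the hard part remains choosing cut planes so the 9 pieces of the big potato look like the halved small ones rather than like slices.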

