Hacker News | backslash_16's comments

In case anyone wants to point me in the right direction or give me some pointers: I'm a lifelong Windows developer who switched to Linux (Ubuntu 24.04 LTS) on my personal desktop and a laptop (I'm fully in on the switch), and it's not great.

I think we need to accurately represent the shortcomings so people who switch aren’t surprised.

So far those are:

  1. Laptop - Battery life is bad compared to Windows. It's about half.

  2. Laptop - Sleep doesn't work.

  3. All - Multi-monitor setups with different pixel scaling don't work for many applications, unless you dig into all the Wayland options and issues and figure out how to launch those apps under Wayland.

  4. All - Wayland vs. X issues in general. I can't screen share with Zoom.

  5. All - Bluetooth driver issues - my Bluetooth headset won't connect as an audio input and output device at the same time.

Now, to be fair, I think all of these are okay trade-offs, but they are a conscious choice. If you have anything beyond a standard one-monitor, wired-peripherals setup, you will probably hit issues you need to debug.

I started paying for Ubuntu Pro to put my money into it, so I'm hopeful these kinds of things improve in the long term.


This is why I chose a ThinkPad for my laptop: I knew I wanted to switch to Linux eventually, and Lenovo is very Linux-friendly. Many of these issues exist (or are exacerbated) because the hardware drivers don't support Linux the way they support Windows.

I absolutely agree, Linux advocates must be honest about the shortcomings. In my case, even on the ThinkPad I experience the multi-display scaling issue you mentioned, and Bluetooth can be a little finicky with my headphones (though this is much better than a couple of years back! Usually simply restarting the headphones solves everything).

I think it's very much worth it, and other than some of those minor issues, I think current Linux distributions are good enough to wholeheartedly recommend over Windows. That is, if you're not held hostage by some Windows-only software.

Edit: about screen sharing, I can't screen share from Teams on Firefox, but from Chrome it works fine; maybe it's the same for Zoom?


It’s not a perfect rule, but in addition to ThinkPads, generally any laptop that only has an Intel/AMD iGPU is going to fare better under Linux, and Intel for WiFi/Bluetooth is also very solid. The problems start to creep in with discrete GPUs and odd-brand/cheaper chipsets.

That, and don’t expect brand new hardware to work well unless you’re willing to deal with a cutting edge distribution and all the trouble those can bring. One gen back from current is usually enough of a lead time for things to catch up.


For the battery life: is your CPU frequency governor set to ondemand or performance? You can write an alias or function to switch from ondemand to performance for gaming, then switch back to save power. You can also cap the max CPU frequency, but that takes some experimentation to find the lowest frequency that's still usable with Zoom. When switching from Zoom to actually getting work done, you can use another alias or function to switch back to the max-frequency options.

    sudo cpupower frequency-info
    # or
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
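The alias idea can also be a small script. A minimal Python sketch of reading those same sysfs attributes (the preference order in `pick_powersave` is my own assumption, not a canonical list):

```python
import os

CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

def read_sysfs(name):
    """Return the stripped contents of a cpufreq attribute, or None if absent."""
    path = os.path.join(CPUFREQ, name)
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read().strip()

def pick_powersave(available):
    """Pick a battery-friendly governor from the space-separated list in
    scaling_available_governors (the preference order is an assumption)."""
    govs = available.split()
    for preferred in ("powersave", "ondemand", "schedutil", "conservative"):
        if preferred in govs:
            return preferred
    return govs[0]

if __name__ == "__main__":
    avail = read_sysfs("scaling_available_governors")
    if avail:  # only present on systems with cpufreq support
        print("current governor:", read_sysfs("scaling_governor"))
        print("suggested for battery:", pick_powersave(avail))
```

Actually writing the new governor back still needs root, e.g. `sudo cpupower frequency-set -g ondemand` as with the commands above.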
Also take a look at powertop; you will probably have to install it. It lets you set any devices you are not using to their optimal power settings. I avoid touching the USB ports used by my keyboard/mouse and the network interfaces I am actually using, to minimize lag. Powertop can output to a file, and that file can be used in a startup script to automate the optimizations you've chosen.

There is also a sysctl setting called "vm.laptop_mode", which defaults to 0. On a laptop it can be set to 5 to combine writes and minimize storage wake-ups. The caveat is that if the OS crashes, you can lose up to 10 minutes of work. Most developers should avoid this setting unless their code editor autosaves frequently and syncs/flushes storage write caches. If unsure, don't use it.
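If you do want to try it, a sketch of making it persistent (the drop-in filename here is arbitrary):

```
# /etc/sysctl.d/99-laptop.conf, read at boot by systemd-sysctl
vm.laptop_mode = 5
```

It can also be applied immediately with `sudo sysctl -w vm.laptop_mode=5` and checked with `sysctl vm.laptop_mode`.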

Another small gain is to ensure all daemons, desktop services, and widgets you don't need are disabled or even removed. Some of them are power hogs, some more than others. Powertop can sometimes expose this if left running for a while.

Another small gain can sometimes come from installing "tlp", but different laptops and usage patterns will see different amounts of power saving.

Oh, and keeping the laptop off your lap can sometimes save power: more heat means more fan usage and thus more power draw. When at a dedicated desk, a laptop cooling stand with multiple fans can extend battery life.

If you are feeling very adventurous, you can install the latest bleeding-edge kernel to net some small power savings, but it may not be worth it if the laptop is used for anything critical.


As a Linux user since Linux's entire life: yeah.

Simply facts that are true.

There are problems on Windows too, but they are not these problems, and the problems I have are mostly problems only I have, not problems the usual Windows user has.

The normal Windows user doesn't even try to log in without a Microsoft account, or try to remove Cortana/Bing/Copilot/whatever-this-week, remove Edge, prevent the "HP Smart" driver bundle that installs for every HP printer or scanner these days and find the old-style drivers without all the cloud shit, etc.

But I have not found scaling to be especially good on Windows either, even with a simple single monitor. My mother-in-law can't run Viber on her desktop because the app scales so bizarrely that some buttons are moved under other things, out of the window, or even off the screen. On top of that, the active area where a click is registered doesn't line up with where the buttons are displayed on screen. Maybe it's just an especially crappy app, but she only uses like 3 things, and two of those are Firefox and LibreOffice (which are there because I set them up, of course; she never asked for that).

Fonts look ridiculously, comically bad in browsers for some reason.

And of course the ads and notifications and onedrive nagging...


I agree it's totally worth it! I'm lucky that I have just enough free time to debug these things and I work with a few excellent Linux devs who have helped me with a few things.

Thanks for understanding the spirit of my point about the shortcomings above and I really like the way you phrased the "Windows has its issues as well, they're just different ones" - and I completely agree there.

With Windows you need to navigate the Microsoft account, files getting stored in OneDrive, updates happening outside your control (arguably a good thing for most users), and more that I'm sure I'm not thinking of.

I do think the Windows issues are more abstract (security, privacy, and default-on features), while the Linux ones tend to be more in-my-face usability ones. Again, agreeing that choosing your hardware and desk/laptop setup can alleviate many of these things. But that requires knowing ahead of time, and people switching in reaction to something Windows is doing don't get that benefit.

I guess I'm writing all this because the idea of a Linux distribution working perfectly on most/all laptops really excites me, and I think being candid about the shortcomings while providing support to the distributions is how we can ace these fit-and-finish issues.

Food for thought for anyone else reading this - the end goal of Linux for everyone is why I don't get too worked up about snaps. If they get to a point where I can tell my mom she can safely install apps X, Y, and Z by pointing and clicking in the app center, that's a great computing future.


I agree with all of these broadly, though FWIW I've never run into a case where sleep doesn't work. But people are also really blind to how many warts Windows has. Multi-monitor stuff is a shitshow there too, for instance, as is Windows Update, or... I haven't personally used Windows for well over a decade, but I have loved ones who do, and I'd say that as of recent years we really have crossed over to where Windows has more shit like this than Linux, I reckon.

I wish X supported mixed DPI per monitor, ugh.

I will say one notable difference is that Linux issues, as a rule, are at least debuggable, whereas Windows issues can be utterly intractable. It's not that rare for me to watch friends with computer science degrees frustratedly embark on the long misadventure that is "reinstalling Windows".


I agree that laptop hardware compatibility on Linux is not the best, but it can work if you buy the right device. ThinkPads are particularly well supported. You might also want to try a more up-to-date Linux distribution like Fedora. I never had problems with Fedora on my laptops, but openSUSE Tumbleweed, for example, wouldn’t sleep properly for me and had broken Thunderbolt support.


Hello,

long-time Ubuntu user here. It had been bulletproof on an i5 Panasonic Toughbook until 24.04, and now it’s not so stable. Sleep also stopped working correctly on an i5 Lenovo Yoga, and I downgraded that one back to 22.04.

However, that same distro runs smoothly (and the UI isn’t constantly glitching out) on an i7 ThinkPad that I don’t enjoy using because it runs red hot and the fan is always going… FWIW, that’s also the only system I have that’s even capable of running Win11 smoothly. But up until now, Linux was great on castaways that Windows had forgotten.

I have ACPI and charging issues on the stock 24.04 kernel tree with the Panasonic, which is a laptop that supports two batteries. If either battery gets pulled on that platform, it stops charging on AC.

This issue isn’t present after putting the Ubuntu packages for kernel 6.14 on it, which only came out two weeks ago.

It still wanders all over the place as to whether I can get 8 hours on a charge (or two), swapping the batteries still confuses the system, and I haven’t had the free time recently to nail down whether this is ACPI, the kernel, or Ubuntu specifically. I’ve mumbled a little about that one on Launchpad and ordered a second battery for a different laptop that has that capability, but I don’t have answers yet.

I’d need to know your Bluetooth chipset to speculate too much, because some bleeding-edge combo cards with WiFi 6 are also better supported by recent kernels. For example, my Intel BE200 worked fine for WiFi, but the Bluetooth didn’t work at all until either 24.04 or applying 6.14 to it. Not sure which; I just noticed it was there in the menu about a week ago.

With that said, my laptop still has a resource conflict I haven’t pinned down: when the WiFi and WWAN cards are both powered on and active, my WiFi speed is clipped down to about 2 Mb/s. I’m just powering the WWAN off when I don’t need it, and I’m inclined to think it’s still a driver issue, or the two cards don’t get along or are conflicting for resources somehow… I don’t have a solid enough theory to report it as a “bug” or know for sure whether it’s just my hardware yet.

Ubuntu with Wayland was the first distro where I went “hey, using Linux on the desktop finally isn’t *ss”, so I’ll give them that. But 24.04 has been the one that had me wondering if it’s time to get acquainted with another. Many are mentioned ITT; I just haven’t “distro hopped” and “tried them all” in almost two decades, and it may be time again.


Does point 5 mean that I can't join a virtual meeting with a Bluetooth headset and use the headset mic? That would actually be a major barrier to switching to Linux; this is a required feature for any laptop I use. So much so that I am shocked it could be broken in Ubuntu.


Re: Bluetooth, that's just how Bluetooth is. I've never seen a device that supports simultaneous HFP and A2DP. You typically get either a microphone plus shitty mono audio (HFP), or high-quality A2DP audio, but not both at once.


I like Pop!_OS from System76 quite a lot. They also peddle their own hardware (I use a custom desktop), so you can be reasonably sure their stuff will work with it. Quite excited about the new DE they're building, too.


Try EndeavourOS. I've had fewer issues with a "riskier" distro like this than with the recommended safer ones.


I won't pretend to downplay these issues; I do, however, absolutely screen share in the Zoom web client.


This inconsistency, where something works for one and not for another, is yet another problem.

The fact it works for you while not working for someone else is actually worse than if it didn't work for anyone.


I would love separate variable initialization vs assignment. I know that ship has sailed.

It's interesting, I never cared about that until I did "Crafting Interpreters" and realized how separating those two actions helps clarify scopes.

Being realistic, I know it's too large a breaking change and will never land in the language, but I can dream, right?


  s/separating/not stupidly conflating/
In computer science, binding a variable and assignment are not merely separated; they are divided by an ideological chasm.


I don't see it as such a bad thing; it ensures variables always have a defined value.


Thanks for the info. Using different containers for build and test vs the packaged app makes a lot of sense.

> easier if you don't have to worry about compatibility with the target environment.

I develop on Windows and deploy onto Windows right now, and as we get everything onto .NET Core (and maybe eventually Linux), I'm thinking the same thing. If I can build/test/run my app and service on the same target OS, it can only make life better.

Any good guides for this or am I overthinking it and anything on containers will work?


I like Black and Pylance, which is backed by pyright: Pylance and VS Code for dev-time analysis (even though it has a few annoying issues, like auto-importing the wrong paths), and pyright for "build"- and test-time analysis.

I set Black to a longer line length, like 100 or 110, depending on who I'm working with. Black makes sure everything has consistent formatting.
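For reference, a sketch of what that looks like in `pyproject.toml` (100 is just the example width from above):

```toml
[tool.black]
line-length = 100
```

The same thing can be done per-run with `black --line-length 100 .`, but putting it in the project config keeps everyone on the team consistent.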


Sorry for focusing on such a small part of your comment. I’m learning about language design (as much as I can) and I don’t really understand what you mean by “double dispatch like Python”.

I think (or maybe soon "thought") that Python has single dispatch. Since you've invented languages and work on them, I'm pretty much 100% sure I'm wrong and would love to learn why.

I read https://en.m.wikipedia.org/wiki/Multiple_dispatch and came to the conclusion that Python has "single dispatch polymorphism", because method resolution is based on the type of the calling object dynamically at runtime, and there is no method-signature overloading, which means the argument type(s) don't play a part in picking/resolving the method to be called.

If you have time, do you mind explaining or pointing me to some resources?


I think he meant that for Python operators, the method can be dispatched based on the first or the second argument.


Yes, this is a special case that Python implements for binary operators. When evaluating "a + b", if a.__add__(b) doesn't work, it'll try b.__radd__(a). See https://docs.python.org/3/reference/datamodel.html#object.__...
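A minimal sketch of that fallback (the `Meters` class is purely illustrative):

```python
class Meters:
    def __init__(self, n):
        self.n = n

    def __add__(self, other):
        if isinstance(other, (int, float)):
            return Meters(self.n + other)
        return NotImplemented  # tells Python to try the other operand

    # Reflected form: invoked for `2 + Meters(1)` after int.__add__ gives up.
    __radd__ = __add__

print((Meters(1) + 2).n)  # Meters.__add__ handles it: 3
print((2 + Meters(1)).n)  # int.__add__ returns NotImplemented, so __radd__ runs: 3
```

One subtlety from the data model: if the right operand's type is a subclass of the left's, Python tries that subclass's `__radd__` first.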


Yeah - `__add__` then `__radd__` isn't exactly double dispatch, but it's close. (There are corner cases involving inheritance where double-dispatch will work correctly but Python's approach will fail to pick the most-specific method.)


Yes, this is what I was referring to.


One way to tell whether it's a .NET client is to run ILDasm or dotPeek against the Windows binary and see if it decompiles.


Not an easy job, but fishing, especially crab fishing in Alaska pays well and you get more than the summer off if you want.

Here are the fishing seasons: https://www.adfg.alaska.gov/static/fishing/pdfs/commercial/c...

I know crabbing seasons are different but I can’t find the docs.


Met a CPA, or maybe just a bookkeeper. He worked through the April 15 tax day, then took a 90-day vacation in Costa Rica, leaving the returns with extended deadlines to others in the office.


I switched to using ripgrep: https://github.com/BurntSushi/ripgrep

It's native, really, really fast, supports regex, and has nice defaults. The only catch is that you need to understand its default ignores (it respects .gitignore, among other things) if you're working in a git repo.


Thanks, this makes a lot of sense. I'd like to think I was slowly closing in on this. Mine isn't as clean as what you posted, but I made the data within each of the enum variants a discrete struct.

  pub enum Expr {
      Binary(BinaryExpr),
      Grouping(GroupingExpr),
      LiteralBool(bool),
      LiteralNil,
      LiteralNumber(f32),
      LiteralString(String),
      Unary(UnaryExpr)
  }


Explaining my experience: I haven't done any hardware or firmware programming besides messing around with Arduinos. I have written a small amount of C (a tiny VM for a toy language) and a moderate amount of Rust (a couple thousand lines across various projects, mostly console apps that needed to run fast).

It sounds like you have some experience in this, and I have been curious for a long time why Rust can't, or doesn't, run on almost every microcontroller. Is it down to the compiler? I thought that because Rust has no runtime (it produces a binary that does not require one), as long as the Rust compiler emits the right backend instructions it should just work?

Adding to my confusion, I think rustc uses LLVM, which is a way of separating a language's frontend from the backend code emitted, so I thought any microcontroller supported by LLVM would work for Rust.


My day job is writing Rust code for microcontrollers.

What you're saying isn't wrong, it's just that... embedded development is incredibly diverse. LLVM doesn't support as many platforms as GCC does, and even once LLVM gets support for something, we need to do some work in Rust to get things working; support isn't automatic.

If you get to pick what your hardware is, Rust is fantastic. If you don't get to pick... it's a roll of the dice.


Thanks for the great and clear explanation, it makes more sense now! (a lot more sense)

I didn’t know that Rust support for an LLVM target isn’t free, and didn’t realize microcontroller targets are so diverse. Very cool.


I'm only about a third of the way through the Rust book myself, so I can't say with confidence, but I suspect that Rust's safe memory abstractions don't necessarily scale down to the lowest end of MCUs. The machine-code instructions that make the code efficient simply may not exist on an 8-bit CPU with segmented memory. That also touches on the gaps in LLVM support. I'll bet it's perfectly viable these days on any Cortex-M MCU, though.

Also, a lot of embedded code consists of peripheral drivers, and in Rust that likely means "unsafe" code. At that point, it's a bit of a wash which language you work in. That's especially true since Rust can interoperate with the C ABI.

I haven't familiarized myself with all of Rust's built-in data structures yet, and I'm not sure how much stuff necessitates allocating on a heap under the hood. That would be a compelling reason to carefully consider its use in some embedded systems. It's analogous to C++, where it's typical to avoid most of the STL data structures.


Most of Rust's memory safety comes from compile-time checks, not runtime code.

You do need some unsafe, but not as much as you may assume at first.

Rust doesn't really have any complex built-in data structures, and doesn't require a heap at all. That's a standard library thing, which you wouldn't be using in this context.

