Hacker News | kanbankaren's comments

Wayland still has no way to set the DPI of multiple monitors. The fonts look terrible on it. I had to move to KDE Plasma on X11 after GNOME started forcing Wayland on us.

I guess I have to buy a 4K monitor in the future.


If you haven't tried KDE on Wayland in a while, do try it. Fonts looking terrible on Wayland was a GNOME thing; KDE/KWin handles display scaling and mixed DPI fine, while GNOME/Mutter didn't until very recently.

KDE had the setting to allow X11 apps to scale themselves (no more blurry XWayland apps) years ahead of GNOME.


I thought this problem was Wayland's reason for existing?

Nah mate, it's all about the Wayland Trust model. No keylogging, consent-based screen recording, and no window spying. Isolation.

Sure it does. I have that set right now... Fonts looking terrible seems to happen only when using an X app on Wayland.

> Fonts looking terrible seems to happen only when using an X app on Wayland

I suspect the original commenter had an issue with GNOME specifically, as I've noticed it too, on Wayland native apps. GNOME handled fractional scaling poorly, and fonts didn't align to the grid right and looked fuzzy at anything that's not 1x or 2x scale.

KDE got this right from day 1.


How to do it? I am not talking about fractional scaling.

Display settings in kde

You can. KDE on Wayland even lets you set fractional scaling. I had 125% on one monitor and 100% on three others; all work like a charm.

How did you arrive at 125%? What is the formula? Just eyeballing?

I set the DPI so that a 15pt font occupies 15pt of physical space on screen. I'm not sure how to set DPI using fractional scaling.


The formula is DPI ÷ 96. 100% is 96 dpi, 125% is 120 dpi.

Doesn't that depend on how far you sit from the screen?

My monitor DPI is 70. 70/96 is 0.73, but there doesn't seem to be a way to set 73%?

You might be out of luck. I don't think it's possible to set the scaling lower than 100%. DPI scaling is primarily concerned with high-DPI.

That is why some of us still need X11 support.

I could set my screen to 75%. Not really 73%, but close enough, maybe?

How did you do it? Mine doesn't allow any value below 100%.

On KDE you can just type a number into the scale percentage field in the display configuration settings pane. I typed "73" and it snapped to 72.5 which is probably close enough.

I don't know whether GNOME supports anything similar; unlike KDE, they really don't like giving users very many configuration options.


I tried it on KDE Plasma (6.5.3) just now and it resets to 100%.

Weird, mine lets me go down to 50%: https://files.catbox.moe/gjuzl6.png . Out of curiosity, do you use an nVidia graphics card? I know in the past their drivers have had problems with scaling on Linux.

The other commenter is right. I have an AMD graphics card and I can do this as well. https://imgur.com/a/7mjm8O9

> Wayland has still no way to set DPI of multiple monitors.

It does, but not every DE exposes that functionality. There are DE-agnostic tools like wlr-randr that should let you do that.


The exFAT suggested below is really not resilient enough for me to trust GBs of data to it. It gets corrupted easily.

If you don't need to use it on Mac/Windows, use a filesystem like Btrfs with a checksumming feature.

Don't use any filesystem that lacks a checksum feature, as silent bitrot is real.


So, nothing? Unbelievable


Not true.

I was programming in the 90s when these languages emerged. Development environments were emacs, vi, Brief, the Borland IDE, etc. There were a few other IDEs available, but at about $200 per seat.

None of the scripting languages mentioned came by default on Unix or Windows. You had to download them from their own websites.

It was mostly Visual Basic, C, and COBOL that were popular.


I think that's what I mean. After the time you talk about ('90s), these languages matured, and they happened to mature around the same time binary package managers became a thing, i.e. in the early-to-mid '00s.


There was also ELK Scheme, the Extension Language Kit, a Scheme interpreter designed to be used as an extension language for other applications.

https://www.usenix.org/legacy/publications/compsystems/1994/...

>Elk, the Extension Language Kit, is a Scheme implementation that is intended to be used as a general, reusable extension language subsystem for integration into existing and future applications. Applications can define their own Scheme data types and primitives, providing for a tightly-knit integration of the C/C++ parts of the application with Scheme code. Library interfaces, for example to the UNIX operating system and to various X window system libraries, show the effectiveness of this approach. Several features of Elk such as dynamic loading of object files and freezing of fully customized applications into executables (implemented for those UNIX environments where it was feasible) increase its usability as the backbone of a complex application. Elk has been used in this way for seven years within a locally-developed ODA-based multimedia document editor; it has been used in numerous other projects after it could be made freely available five years ago.

Also Gnu Guile:

https://en.wikipedia.org/wiki/GNU_Guile

>GNU Ubiquitous Intelligent Language for Extensions[3] (GNU Guile) is the preferred extension language system for the GNU Project[4] and features an implementation of the programming language Scheme. Its first version was released in 1993.[1] In addition to large parts of Scheme standards, Guile Scheme includes modularized extensions for many different programming tasks.[5][6]

Also Winterp, which used XLisp:

https://dl.acm.org/doi/10.1145/121994.121998

>Winterp is an interactive, language-based user-interface and application-construction environment enabling rapid prototyping of applications with graphical user interfaces based on the OSF/Motif UI Toolkit. Winterp also serves as a customization environment for delivered applications by providing a real programming language as an extension language. Many existing user-interface languages only have the expressive power to describe static layout of user interface forms; by using a high-level language for extensions and prototyping, Winterp also handles the dynamic aspects of UI presentation, e.g. the use of direct manipulation, browsers, and dialog. Winterp makes rapid prototyping possible because its language is based on an interpreter, thereby enabling interactive construction of application functionality and giving immediate feedback on incremental changes. Winterp's language is based on David Betz's public domain Xlisp interpreter which features a subset of Common Lisp's functionality. The language is extensible, permitting new Lisp primitives to be added in the C language and allowing hybrid implementations constructed from interpreted Lisp and compiled C. Hybrid implementation gives Winterp-based applications the successful extension and rapid-prototyping capabilities of Lisp-based environments, while delivering the multiprocessing performance of C applications running on personal Unix workstations.

And TCL/Tk of course!

https://www.tcl-lang.org/

And on the commercial side, there was Visix Galaxy, which was extensible in PostScript, inspired by NeWS:

https://www.ambiencia.com/products.php

https://0-hr.com/Wolfe/Programming/Visix.htm

https://groups.google.com/g/comp.lang.java.programmer/c/LPkz...

https://donhopkins.com/home/interval/pluggers/galaxy.html

https://wiki.c2.com/?SpringsAndStruts

>The Visix Galaxy project was a ridiculously overpriced and overfeatured portable GraphicalUserInterface. You could do things like swivel an entire panel full of their custom widgets 32 degrees clockwise, and it would render all its text at this new angle without jaggies. The company went out of business after gaining only a handful of customers. For USD$ 10,000 a seat they sure didn't see the OpenSource movement coming. Their last attempt before going under was (guess what?) a Java IDE.

Galaxy competed with Neuron Data Systems in the "cross platform gui framework" space (which got steamrolled permanently by the web, and for a window of time by Java):

https://donhopkins.com/home/interval/pluggers/neuron.html

Here is a great overview of User Interface Software and Tools by Brad Myers:

https://www.cs.cmu.edu/~bam/uicourse/2001spring/lecture05too...

https://www.cs.cmu.edu/~bam/toolnames/

https://docs.google.com/document/d/1hQbMwK_iyjX-wpu_Xw_H-3zL...


Well, there are already multiple skin creams with Vitamin C. They have been available for a long time, but they are expensive for what they provide.

Just taking 500 mg of Vitamin C supplements twice a day should provide enough for skin repair.


Let's not engage in quackery and resort to knowledge instead.

Oral and transdermal (topical) application of Vitamin C (and other molecules in general) follow completely different routes with different absorption rates and accompanying nuances.

Oral intake. Absorption rate is dosage dependent:

  – At moderate doses (≤ 250 mg/day): 70–90 per cent of ascorbate is absorbed into the bloodstream. Bloodstream means just that – Vitamin C will be distributed throughout the entire body, which includes tissues, internal organs and skin. Active absorption takes place in the small intestine predominantly by SVCT1 and SVCT2 sodium-ascorbate co-transporters.

  – At high doses (≥ 1g a day): passive diffusion takes over and also takes place in the small intestine, although now via GLUT transporters that become saturated, and absorption efficiency drops to 50 per cent or lower.

The half-life of Vitamin C taken orally is approximately four hours anyway, after which any excess still circulating will be rapidly excreted via the renal route (kidneys). Studies report that significantly less than 0.1 per cent makes it into the epidermal (skin) layer.

Transdermal (topical) application. Depends on the concentration and several factors, but a 20% concentration serum (not a cream) can achieve a > 80% absorption rate through the skin into receptor fluid after 24 hours. Half-life of Vitamin C applied topically is approximately 4 days.

Recap: less than 0.1% reaching the skin with a 4-hour half-life for the oral route vs more than 80% with a 4-day half-life for the transdermal route.


Liposomal C will achieve higher concentrations in cells as it doesn't rely on GLUT/SVCT.

Otherwise, the absorption of high doses depends on stress level: when you are not healthy, your body will absorb A LOT more, as shown by the vitamin C bowel-tolerance method.

To be sure you have it where it counts, take all forms of C: liposomal, film, AA, and topical.


Ascorbyl palmitate («liposomal C»), when taken orally, is absorbed by the same active‐transport and passive‐diffusion mechanisms as plain vitamin C, with the same saturation thresholds. And it has the same problem as ascorbic acid, sodium ascorbate and calcium ascorbate – it gets distributed throughout the entire body with only minute amounts reaching the «skin».

Topical application of ascorbyl palmitate/«liposomal C», on the other hand, has very poor uptake because the molecule is too big to penetrate the skin layer[0]:

  L-ascorbic acid must be formulated at pH levels less than 3.5 to enter the skin. Maximal concentration for optimal percutaneous absorption was 20%. Tissue levels were saturated after three daily applications; the half-life of tissue disappearance was about 4 days. Derivatives of ascorbic acid including magnesium ascorbyl phosphate, ascorbyl-6-palmitate, and dehydroascorbic acid did not increase skin levels of L-ascorbic acid.

Key takeaway: «Derivatives of ascorbic acid including magnesium ascorbyl phosphate, ascorbyl-6-palmitate (a.k.a. «liposomal C»), and dehydroascorbic acid did not increase skin levels of L-ascorbic acid».

[0] Source: https://europepmc.org/article/MED/11207686


Liposomal C IS NOT ascorbyl palmitate. The point is about the liposome anyway, not the form of vitamin C. There are a number of research papers showing higher bioavailability; some even claim it's similar to IV.


Ah, so you are actually talking about the liposome encased ascorbic acid. I have seen a number of products that misrepresented ascorbyl palmitate as liposomal vitamin C, hence the enquiry.

Taking any form of vitamin C orally still confers statistically insignificant benefits for the skin due to having to propagate and get distributed throughout the entire body.

The article in question discusses the benefits of the topical application of vitamin C, the benefits of which have been extensively studied. Vitamin C (especially in combination with ferulic acid) is amongst very few skincare products that actually work – it has been known for a long time.


> Taking any form of vitamin C orally still confers statistically insignificant benefits for the skin due to having to propagate and get distributed throughout the entire body.

Maybe not if you take it in multiples of grams, i.e. you brute-force it to make up for the non-working GULO gene you have; a non-defective gene would produce it in that range.


Can you explain and cite a study reference? I'm not following. Taking vitamin C by the gram will give one diarrhoea, and a pretty violent one.

One of the common strategies to prolong the circulation of vitamin C is to recycle it by coupling it with, e.g. N-acetyl cysteine.


> Taking vitamin C by the gram will give one diarrhoea, and a pretty violent one.

Yes, when your body gets enough of it (it's called a Vitamin C flush and it's not harmful), which is dynamic. I take 10+ grams and do not get diarrhoea; I might get it at 20+ IF I am healthy. I don't get it at 100g when I have influenza, which is the state where the body requires more and the SVCT pumps are active like crazy. This is trivially easy to check for yourself; you don't need a study. I have never seen a better feedback system for any drug, really.

> Can you explain and cite a study reference?

There are no studies about it; you need to try it yourself. Vitamin C is non-toxic and doesn't produce kidney stones, contrary to popular belief.

There are medical hypotheses, and Linus Pauling wrote a few books about it a long time ago.

https://www.sciencedirect.com/science/article/abs/pii/030698...

Check out pharmacokinetics here:

https://www.tandfonline.com/doi/abs/10.1080/1359084080230542...

> One of the common strategies to prolong the circulation of vitamin C is to recycle it by coupling it with, e.g. N-acetyl cysteine.

Yes, I take NAC too. However, the worst offender is sugar, as GLUT2 transports both vitamin C and glucose, and since it's a passive transporter, C gets outcompeted given the levels of both.

I can explain 2 decades of experience with it, if you need some info send me a note.


Does the 80% of Vitamin C absorbed through the transdermal route cross the epidermis and dermis layers?


That is indeed correct.


What is wrong with C++?

With POSIX semaphores, mutexes, and shared pointers, it is very rare to hit upon a memory issue in modern C++.

Source: Writing code in C/C++ for 30 years.


> With POSIX semaphores, mutexes, and shared pointers, it is very rare to hit upon a memory issue in modern C++.

There is a mountain of evidence (two examples follow) that this is not true. Roughly two-thirds of serious security bugs in large C++ products are still memory-safety violations.

(1) https://msrc.microsoft.com/blog/2019/07/we-need-a-safer-syst... (2) https://www.chromium.org/Home/chromium-security/memory-safet...


I write high-performance backends in C++. They work approximately as described in the article, with all data in RAM, in structures specialized for the access patterns. Works like a charm and runs 24x7 without a trace of a problem.

I've never had a single complaint from my customers. Well, I do have bugs in logic during development, but those are found and eliminated after testing. And every new backend I write, I base on already battle-tested C++ foundation code. Why FFS would I ever want to change it (rewrite in Rust)? As a language, Rust has far fewer of the features I am accustomed to using, and the safety of Rust does not provide me any business benefit. Quite the opposite: I would just lose time and money and still have those same logical bugs to iron out.


How many other programmers have you trained up to that level of results? Can you get them to work on Windows, Chrome, etc. so users stop getting exposed to bugs which are common in C-like languages but not memory-safe languages?


I do not train programmers. I hire subcontractors when I need help. They're all at the same level as myself or better. Easy to find among East Europeans, and they do not cost much. Actually cheaper than some mediocre programmer from North America who can only program in a single language/framework and has no clue about architecture and how various things work together in general.


Show me a memory issue that was caused by proper usage of POSIX concurrency primitives.


Any reasonable meaning of “proper” would include not causing memory issues, so you’ve just defined away any problems. Note that this is substantially different from not having any problems.

The great lesson in software security of the past few decades is that you can’t just document “proper usage,” declare all other usage to be the programmer’s fault, and achieve anything close to secure software. You must have systems that either disallow unsafe constructs (e.g. rust preventing references from escaping at compile time) or can handle “improper usage” without allowing it to become a security vulnerability (e.g. sandboxing).

Correctly use your concurrency primitives and you won’t have thread safety bugs, hooray! And when was the last time you found a bug in C-family code caused by someone who didn’t correctly use concurrency primitives because the programmer incorrectly believed that a certain piece of mutable data would only be accessed on a single thread? I’ll give you my answer: it was yesterday. Quite likely the only reason it’s not today is because I have the day off.


> And when was the last time you found a bug in C-family code caused by someone who didn’t correctly use concurrency primitives because the programmer incorrectly believed that a certain piece of mutable data would only be accessed on a single thread? I’ll give you my answer: it was yesterday.

You answered my question. My original argument was using concurrency primitives "properly" in C++ prevents memory issues and Rust isn't strictly necessary.

I have nothing against Rust. I will use it when they freeze the language, publish an ISO spec, and multiple compilers are available.


> My original argument was using concurrency primitives "properly" in C++ prevents memory issues

Yes, I know, I addressed that. It's true by definition, and a useless statement. Improper usage will happen. If improper usage results in security vulnerabilities, that means you will have security vulnerabilities.

Note that I say this as someone who makes a very good living writing C++ and has only dabbled in rust. I like C++ and it can be a good tool, but we must be clear-eyed about its downsides. "It's safe if you write correct code" is a longer way to say "it's unsafe."


You're right, if you use the concurrency primitives properly you won't have data races. But the issue is when people don't use the concurrency primitives properly, which there is ample evidence for (posted in this thread) happening all the time.

But with this argument, the response is "well they didn't use the primitives properly so the problem is them", which shifts the blame onto the developer and away from the tools which are too easy to silently misuse.

This also ignores memory safety issues that aren't data races, like buffer overflows, UAF, etc.


Proper usage is fine. The problem is that it is easy to make mistakes. The compiler won't tell you and you may not notice until too late in production, and it will take forever to debug.


Here are two: CVE-2021-33574, CVE-2023-6705. The former had to be fixed in glibc, illustrating that proper usage of POSIX concurrency primitives does nothing when the rest of the ecosystem is a minefield of memory-safety issues. There are some good citations on page 6 of this NSA Software Memory Safety overview in case you're interested: https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI_SOFTWARE_MEMORY_SAFETY.PDF


Dozens caused by folks thinking pthread_cancel() was the right tool for the job.


Real Programmers use C++


What a terrifying statement.

Edit: to be less glib, this is like saying “our shred-o-matic is perfectly safe due to its robust and thoroughly tested off switch.” An off switch is essential but not nearly enough. It only provides acceptable safety if the operator is perfect, and people are not. You need guards and safety interlocks that ensure, for example, that the machine can’t be turned on while Bob is inside lubricating the bearings.

Mutexes and smart pointers are important constructs but they don’t provide safety. Safety isn’t the presence of safe constructs, but the absence of unsafe ones. Smart pointers don’t save you when you manage to escape a reference beyond the lifetime of the object because C++ encourages passing parameters by reference all over the place. Mutexes and semaphores don’t save you from failing to realize that some shared state can be mutated on two threads simultaneously. And none of this saves you from indexing off the end of a vector.

You can probably pick a subset of C++ that lets you write reasonably safe code. But the presence of semaphores, mutexes, and shared pointers isn’t what does it.

Source: also writing C and C++ for 30 years.


>"What a terrifying statement."

The statement may not be correct but calling it terrifying is way melodramatic.


I don’t think so. The fact that someone with extensive experience thinks modern C++ is safe because it has semaphores and mutexes and smart pointers is legitimately scary. It’s not merely wrong, it reflects a fundamental misunderstanding of what the problem even is. It’s like an engineer designing airliners saying that they can be perfectly safe without any redundant systems because they have good wheels. That should have you backing away slowly while asking which manufacturer they work for.


I think their statement amounts to something like: a subset of modern C++ with certain feature-usage patterns can be reasonably safe, and I am OK with that. Nothing is ever really safe, of course. One should weigh the trade-offs of quality/safety vs. cost and draw their own conclusion on where to lean more and where enough is enough.


There's an argument to be made that you can write safe C++ by using the right subset of the modern language. It might even be a decent argument. But that's not the argument that was made here. They mentioned two things that have only the most tangential connection to security and that aren't even part of C++, plus one C++ feature that solves exactly one problem.


> Safety isn’t the presence of safe constructs, but the absence of unsafe ones.

Exactly. Here is a data point: https://spinroot.com/spin/Doc/rax.pdf

TL;DR: This was software that ran on a spacecraft. Specifically designed to be safe, formally analyzed, and tested out the wazoo, but it nonetheless failed in flight because someone did an end-run around the safe constructs to get something to work, which ended up producing a race condition.


The worst code is usually written by someone who’s doing it for 30 years and can’t find a problem with their technology of choice.

Especially with shared pointers you can encounter pretty terrible memory issues.


Dude, provide examples of "terrible" memory issues. Otherwise, you are just repeating outdated folklore.


The Economist?

I just unsubscribed from the digital edition. It has a neoliberal, pro-globalization bias in its overall tone.


They've always been upfront about their bias, in no way are they trying to hide it.

Way back when I was in college 20 years ago, they ran a very funny article poking fun at all the PhDs doing "deconstruction" on The Economist. Like super post-modernist fluff. I could tell the writer had a great time responding to it.

Their punchline: "so there you have it - a newspaper to make you feel good about tomorrow by promoting capitalism today!"


Haha, that's so funny.


Agree, The Economist knows who they are and they're very happy to throw some acidic British humor into their writing for fun.


I gave up on The Economist once they supported GWB over Gore. I can barely understand the over-the-top devotion to neoliberalism and deregulation. But the shortcomings of GWB were sticking out in that campaign, so closing their eyes and singing "la la liberalism" was way too much for me.


> We still have 50,000 watt clear channel stations

On shortwave, we even have 250,000 W transmitters just blasting RF everywhere.

We call them flame throwers for a reason.


> current neighborhood

Context please? Which country and city?


I assume US, city doesn’t matter since this is the default opinion for most NIMBY suburban Americans in all US cities.


A suburb in southern Winter Garden, FL.


What parts of the world?

I have found Asian countries, Japan, and the UK the same as the US when it comes to the customs experience.


It is true that LuaJIT is stuck at 5.1, but you can write any performance-critical sections in C/C++ and call them from Lua.

The lack of LuaJIT support for anything past 5.1 isn't that big of a deal for desktop apps. The embedded world is still stuck on 5.1, but for them the benefits of the latest Lua are marginal.


And despite being stuck at 5.1, it still implements features from later versions. For example, there is the "LJ_52" macro, so you can compile "table.pack" and "table.unpack" into LuaJIT, which I do, because I use both at times.

As someone else has pointed out, they are cherry-picked: https://luajit.org/extensions.html


There is also the compat53 library, which polyfills most of the missing parts. The Teal compiler has --gen-target and --gen-compat flags which adapt the generated Lua code for different Lua versions and allow using the compat53 library behind the scenes if desired, so you can get a mostly-Lua-5.3+ experience on LuaJIT using Teal.


And if you use the LuaJIT FFI, those calls are actually just as fast as calls from a C program.

