>8K at jumbo TV size has relatively large pixels compared to an 8K desktop monitor. It’s easier to manufacture.
I don't think that's true.
I've been using an 8K 55" TV as my main monitor for years now. It was available for sub-800 USD before all such TVs vanished from the market. Smaller pixels were not more expensive even then; the 55" models were the cheapest.
4K monitors can be had for sub-200 USD, and selling 4x the area of the same panel should cost at most 4x that price. And it did, years ago.
So they were clearly not complicated or expensive to manufacture - but there was no compelling reason for 8K on a TV, so they didn't sell. However, there IS a compelling reason to have 8K on a desktop monitor!
That such monitors sell for 8,000 USD+ is IMO a very unfortunate situation caused by a weird incompetence in market segmentation by the monitor makers.
I firmly believe they could sell 100x as many if they cut the price to 1/10th, which they clearly could do. The market that never appeared for TVs is present among the world's knowledge workers, for sure.
I've been using an 8K 65" TV as a monitor for four years now. When I bought it, you could buy the Samsung QN700B 55" 8K, but at the time it cost 50% more than the 65" I bought (a TCL).
I wish the 55" 8K TVs still existed (or that the announced 55" 8K monitors had ever shipped). I make do with 65", but it's just a tad too large. I would never switch back to 4K, however.
Average bitrate from anything that isn't a Blu-ray is not good even for HD, so you don't benefit from more pixels anyway. Sure, you are decompressing and displaying 8K worth of pixels, but the actual resolution of your content is more like 1080p, especially in the color channels (chroma subsampling).
Normally, games are the place where arbitrarily high pixel counts could shine, because you could literally ensure that every pixel is calculated and make real use of it, but that's actually stupidly hard at 4K and above, so Nvidia just told people to eat smeary upscaling and AI garbage instead, throwing away the entire point of having a beefy GPU.
I was even skeptical of 1440p at higher refresh rates, but bought a nice monitor with those specs anyway and was happily surprised by the improvement - though the diminishing returns are obvious.
This is exactly why 8K TVs failed in the market, but the point here is that your computer desktop is _great_ 8K content.
The TVs that sold for sub-1,000 USD just a few years ago should be sold as monitors instead. Strip out the TV tuners, app support, network cards and such, and add a DisplayPort input.
Having a high-resolution desktop that basically covers your usable FOV is great, and is a way more compelling use case than watching TV in 8K ever was.
HDMI 2.1 is required, and the cables are not too expensive now.
For newer GPUs (Nvidia 3000 series or later, or equivalent) and high-end (or M4+) Macs, HDMI 2.1 works fine, but the Linux drivers have a licensing issue (the HDMI Forum doesn't allow an open-source HDMI 2.1 implementation) that makes HDMI 2.1 problematic.
It works with certain Nvidia drivers, but I ended up getting a DP-to-HDMI 8K cable, which was more reliable. I think that could work with AMD and Intel too, but I haven't tried.
In my case I have a 55" and sit a normal monitor distance away. I made a "double floor" on my desk with a cutout for the monitor, so the monitor legs sit some 10 cm below the actual desk and the screen starts basically at the level of the desk surface. The gap between the desk panels is nice for keeping USB hubs, drives, headphone amps and such. And the Mac mini.
I usually have reference material windows upper left and right, coding project upper center, coding editor bottom center, and 2 or 4 terminals, Teams, Slack and mail on either side of the coding window. The center column is about twice as wide as the sides. I also have other layouts depending on the kind of work.
I use layout arrangers like FancyZones (from PowerToys) on Windows and a similar mechanism in KDE, and manual window management on the Mac.
I run 2x scaling, so I get basically a 4K desktop area (3840x2160 logical) but at retina(-ish) resolution. 55" is a bit too big, but since I run doubling I can read stuff in the corners too. A 50" 8K would be ideal.
Basically the biggest problem with this setup is it spoils you and it was only available several years ago. :(
I managed to grab a 55" 8K LG before 8K went out of fashion. I run it at 4K@120Hz for games and 8K@60Hz with 2x scaling for productivity.
I've never had a better monitor, and if a better one exists it's not available in any store I know of. Monitors costing 2-3x what this TV did back then are worse. When it dies I will have to downgrade. :-/
Hmmm, I'm confused now: aren't 8K displays just becoming a thing? Your perspective sounds like they are a dying breed. In the meantime, for me, they are still prohibitively expensive.
>aren't 8K displays just becoming a thing? Your perspective sounds like they are a dying breed. In the meantime, for me, they are still prohibitively expensive.
I'd say yes and no - they are becoming a thing - again. And you're right that they are prohibitively expensive this time.
Some 5 years ago 8K TVs were heavily marketed and displayed in many electronics stores, but consumers apparently didn't bite - there was basically no 8K content available, and for "normal" TV use you can barely see a difference between 4K and 8K anyway.
So these TVs were very cheap for a short while before they basically disappeared.
And they make for great PC monitors. At normal working distance from a monitor you definitely notice the difference between 4K and 8K.
The screen area is basically the same as a 2x2 grid of 27" 4K monitors, but in one screen. For productivity work it's absolutely glorious; text is super-crisp.
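For anyone who wants to sanity-check that claim, here's a quick pixel-density sketch (plain geometry and the standard resolutions, nothing assumed beyond that):

    import math

    def ppi(diag_inches, w_px, h_px):
        # pixel density along the diagonal of a 16:9 panel
        return math.hypot(w_px, h_px) / diag_inches

    print(ppi(55.0, 7680, 4320))  # 8K at 55"   -> ~160 PPI
    print(ppi(27.5, 3840, 2160))  # 4K at 27.5" -> ~160 PPI

A 55" 16:9 panel has exactly twice the diagonal of a 27.5" one, so it really is a 2x2 grid of them at the same density.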
I think there is a market structure problem that blocks progress here. Most people who work all day at a monitor would love to have such a screen, but the people paying for screens buy what the producers are selling, based on price.
So we end up with dual or triple small-monitor setups even in the wealthiest companies. If a few of the FAAMGs started asking for a 50" 8K maybe something would happen, but it hasn't yet. :(
Storage is not needed. You can consume solar power as it's generated, and it is as useful then as if it came from oil, gas or coal.
When the sun goes down, you have saved tons of oil, gas and coal that didn't have to burn during the day. Which is very, very good. You don't have to "solve nighttime" before solar makes sense; it makes sense immediately.
Edit: And of course, nighttime is also being solved, in many ways, already.
Unless blackouts or brownouts are going to be allowed to increase in frequency from very rare at present, sources of immediately dispatchable electricity are needed. Large-scale hydro plants and battery storage are expensive to have sitting around. Coal can take several hours to ramp up generation; gas can be much faster, but still a few hours from a cold start.
Nighttime is predictable. I'm specifically including bad weather, which can mean zero solar/wind output - sometimes for days. Storage needs to be reserved for these events.
Solar "subsidies" are almost universally tax credits, meaning the only money involved is the money paid by the homeowners. So, for society, rooftop solar is by far the cheapest option. It costs the rest of us nothing; the homeowners pay for it.
Money is also not limited - it is in fact created by the banks when someone takes a loan, for example to put solar on their roof.
Meaning there is no money lost to society that could instead have been used to build utility solar, just because someone puts solar on their roof. If your county borrows money to build utility solar, that money is also created by the bank then and there.
Also note that you are quoting last year's Lazard report. Solar is way cheaper in this year's report. It will probably be even cheaper in the next one.
While technically correct, assuming a balanced budget means less tax revenue will have to be offset either by reducing government expenses or raising taxes for everyone, so in practice all taxpayers 'pay for' a tax credit.
But you assume that 100% of homeowners will buy solar at full price no matter what, with such certainty that it can even be included as a given in the budget before it happens.
That's an unreasonable assumption IMO. Solar is not food, homeowners can choose to skip it.
>Solar "subsidies" are almost universally tax credits, meaning the only money involved is the money paid by the homeowners.
Well, it means that the government will have less available money to spend on other priorities. There are often also state and utility subsidies, though those are usually not the largest part. Besides the direct subsidies, wealthier homeowners have often been paid the retail rate for the electricity they sell to the grid, which causes higher electricity bills for those who can't afford to put panels on their roof. As I said before, the whole thing is sort of a reverse Robin Hood scheme.
>...Solar is way cheaper in this year's report.
No it is not. I used last year's numbers because the 2025 report, for whatever reason, does not include consumer rooftop solar. The closest comparison would likely be the Solar PV—Community & C&I category: in 2024 the cost estimate was $54-$191; in 2025 the range was $81-$217.
>Well, it means that the government will have less available money to spend on other priorities.
That assumes people would buy solar at any price, but they don't have to. The money the government gets from people choosing solar because of the tax credit is money it wouldn't have gotten otherwise. So it's likely they get more money, not less.
If everyone scrambled to buy all available solar at full price already, then yes, of course nobody should give tax credits. But that's not where we are, tax credits cause an increase in installations.
I am convinced everyone benefits from wealthy homeowners installing solar, not just the homeowners.
Solar panels make electricity cheaper from first principles, despite whatever political schemes are employed right now. Once installed, the panels generate electricity for free.
I think that's often overlooked - all talk about subsidies for solar is just for the installations. Once they are done, solar electricity costs nothing.
>No it is not. I used last year's numbers because the 2025 report, for whatever reason, does not include consumer rooftop solar.
You're right, I made a mistake and didn't notice the rooftop category was gone. My bad.
>…I think that's often overlooked - all talk about subsidies for solar is just for the installations. Once they are done, solar electricity costs nothing.
No, the subsidies don't stop at installation - that's often where they really begin. Wealthier households have often been able to sell their electricity back to the grid at the retail rate. Providing the infrastructure and reliability of the grid is very expensive, so there is a huge difference between wholesale costs and the retail rates for delivered electricity.

In CA, it was estimated that all non-solar households paid an extra $115 to $245 per year to cover the subsidies given to their wealthier neighbors, and that as the number of consumer solar installations increases, that cost would grow to between $385 and $550 per year by 2030. Ignoring all the subsidies given to install the systems, that $115 per household per year adds up to a great deal of money.

Money is limited and fungible - a dollar spent subsidizing utility solar goes much, much further toward decarbonizing the grid than a dollar spent subsidizing rooftop residential solar. It is understandable that anyone getting free money thinks it is good. But if less well-off people (renters, etc.) learn that they are paying a great deal more for power to subsidize their wealthier neighbors - when that money could have gone MUCH further if spent on other solar projects - it isn't hard to imagine that might lower enthusiasm for government subsidizing the move away from fossil fuels.

This sort of wealth transfer to the more wealthy actually hurts everyone in the long run. The goal is to decarbonize the grid - not to implement some kind of reverse Robin Hood scheme.
>…You're right, I made a mistake and didn't notice the rooftop category was gone. My bad.
Yeah, it is unfortunate they removed that category - hopefully it will return in future versions.
>No, the subsidies don't stop at installation - that's often where they really begin. [..] Wealthier households have often been able to sell their electricity back to the grid at the retail rate.
It's not a subsidy to be allowed to sell a thing you produce at market price. If taxes were used to pay a guaranteed price above market rate to solar panel owners sure, but that's not the case (generally speaking, local political absurdities may exist of course).
>Providing the infrastructure and reliability of the grid is very expensive,
There is no additional infrastructure needed to cover rooftop solar though. It's just electricity being added to an existing grid.
If someone is increasing your power bill and blaming it on someone else's solar panels, I'd say you are being scammed and I would not take such claims at face value! "Yeah you have to pay because Jim got solar panels, so we had to, uhm, you know, we had to, err... well, you have to pay more anyway." ;-)
>a dollar spent subsidizing utility solar will go much, much further to decarbonizing the grid than a dollar spent subsidizing rooftop residential solar.
But this is a choice that doesn't exist. We are not talking about a bunch of money that has been collected and is being spent on people's rooftop solar instead of being spent on utility solar. There is no money except the homeowner's money that is being spent here, and they can only choose to get rooftop solar.
>This sort of wealth transfer to the more wealthy actually hurts everyone in the long run. The goal is to decarbonize the grid - not to implement some kind of reverse Robin Hood scheme.
Rooftop solar decarbonizes the grid faster than anything else at the moment, since lots of people get to decide for themselves instead of waiting for politicians. It transfers no money from anyone but from the homeowners to makers of solar panels.
It also lowers the production price of electricity, which should lower the purchase price too, unless you are in the hands of corrupt politicians and utility cos.
>...It's not a subsidy to be allowed to sell a thing you produce at market price.
The market price is not the retail price. Does a grocery store buy produce from a supplier at the price it sells it to the consumer? Of course not. The wholesale price for power in CA is variable but generally around 4 cents a kilowatt-hour, while the retail price that customers pay is generally 30 cents and above. If the market price for power is 4 cents and a supplier can effectively sell their power for 30 cents, they are not selling it at the market price. If consumer solar producers were treated as every other supplier of electricity and were paid the same rates, there would be no subsidy.
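To make the gap concrete, a quick sketch using the numbers above (the 4,000 kWh/year export figure is invented purely for illustration):

    wholesale = 0.04  # $/kWh, rough CA wholesale figure from this thread
    retail = 0.30     # $/kWh, rough CA retail rate from this thread
    exported = 4000   # kWh/year a household sells back - hypothetical

    credited = exported * retail         # $1,200/yr credited at retail rate
    market_value = exported * wholesale  # $160/yr worth at wholesale
    print(f"implicit subsidy: ${credited - market_value:,.0f}/yr")  # ~$1,040

That difference is what ends up being paid by the other ratepayers, which is the whole argument here.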
>...If someone is increasing your power bill and blaming it on someone else's solar panels, I'd say you are being scammed
No, that is the reality of how net metering works. That $115 to $245 cost estimate paid by every other household in CA was from the CA PUC. They have made some changes to this going forward, but the current beneficiaries are still grandfathered in.
>...But this is a choice that doesn't exist. We are not talking about a bunch of money that has been collected and is being spent on people's rooftop solar instead of being spent on utility solar.
This was a choice that was made by politicians, and this is money that is being spent every day. The CA legislature and PUC could have said that household rates will increase by $115 a year and the money will be used to build out grid solar and storage - if that had been done, there would be MUCH more solar power and grid storage being produced. You just need to look at the LCOE for utility vs. consumer rooftop solar to see the cost differences. The LCOE difference between grid batteries and home batteries is also dramatic.
>There is no money except the homeowner's money that is being spent here, and they can only choose to get rooftop solar. ... It transfers no money from anyone but from the homeowners to makers of solar panels.
If every household's electricity rates go up by $115 a year, then every household is spending an additional $115. If someone is paying an extra $115 a year, it doesn't make sense to tell them they are not paying extra.
> It's not a subsidy to be allowed to sell a thing you produce at market price.
It certainly is in this case. The market price includes transmission and distribution costs, as well as the fixed costs of the generators.
Domestic PV is really gaming the rate system, avoiding costs while still benefiting from the things those costs support. If enough people do it, the grid falls apart. See Pakistan, which may be getting close to this.
>Domestic PV is really gaming the rate system, avoiding costs while still benefiting from the things those costs support.
All powerlines, transformers and everything else are the same with or without solar panels. What new costs arise because of solar that solar owners are avoiding?
Tax reductions for voluntary things are not comparable to payouts. They are a method to get the rich to pay for solar so the non-rich don't have to, and in the past few years it has gotten us more carbon-neutral energy than anything else. Clearly a win-win good thing.
Trying to describe it as the opposite doesn't hold up.
Edit: Utility cos that sell electricity themselves may increase prices to make up what they lose when they can't sell as much gas, nuclear, coal etc, and blame it on solar. That's not a subsidy for solar though, it's a subsidy for corporations.
The powerlines, transformers, etc. have to be paid for. These are not new costs, they're existing, ongoing costs.
Previously, these were paid for by becoming part of the per-kWh price of electricity, under the assumption that all the electricity each consumer is using goes over the grid.
But with solar, consumers can largely (but not entirely) self-power, and use the grid only rarely. They are benefiting from the presence of the transmission/distribution infrastructure (and the power sources feeding it) but aren't paying the same amount for it.
>The powerlines, transformers, etc. have to be paid for. These are not new costs, they're existing, ongoing costs.
That's true, but powerlines exist to transport high-voltage power from power plants to consumers far away, and transformers convert that to usable voltages.
Residential solar does not require any such powerlines or transformers, because the low-voltage power is consumed locally, where it's produced.
There was a story here recently about balcony solar installations in Germany. Basically a small solar panel adds power to your house through the house's existing wiring.
Rooftop solar works the same way, but is large enough that sometimes your surplus can be consumed by your neighbor as well, and when more people get solar it expands to the next neighborhood, and so on.
This is handled by the utilities already, since the effect of adding solar to your house is basically the same as if you stop using your stove or stop charging your car.
So it's kind of perfect initially - no new technical solutions are needed and no heavy investments since it just works with existing infrastructure.
Of course this becomes a problem eventually and will require storage solutions first of all and eventually also high voltage transmission. But not yet.
The real problem here is that large corporations live off the profits of selling electricity. They will protect that profit at all costs, so when demand drops - due to solar or anything else - they will increase prices.
Note that they are not necessarily using that profit to properly maintain the infrastructure.
> Residential solar does not require any such powerlines or transformers, because the low-voltage power is consumed locally, where it's produced.
Of course it does. If it didn't, the residential solar user would just disconnect from the grid entirely.
The point here is that even if one uses the grid rarely, one is still depending on it being there. And that dependency means one is still exploiting things with fixed costs.
Large solar rollouts will force changes to rate structures, for example charging even residential users for the maximum power they use (or could use) rather than the total energy they consume. With such rates the financial benefit to the consumer of residential solar becomes much lower or nonexistent.
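A toy sketch of that rate-structure difference (every number here is invented for the example, not a real tariff):

    # The same solar household under two stylized tariffs: it draws only
    # 100 kWh/month from the grid, but still needs a 10 kW connection
    # available for dark winter evenings.
    energy_rate = 0.30   # $/kWh of energy drawn - hypothetical
    demand_rate = 12.00  # $/kW of peak draw - hypothetical

    grid_kwh = 100  # energy actually drawn from the grid per month
    peak_kw = 10    # maximum power the household can still demand

    print(grid_kwh * energy_rate)  # $30/month under per-kWh pricing
    print(peak_kw * demand_rate)   # $120/month under demand pricing

Under per-kWh pricing the solar household's contribution to the grid's fixed costs nearly vanishes; under demand pricing it doesn't.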
>The point here is that even if one uses the grid rarely, one is still depending on it being there. And that dependency means one is still exploiting things with fixed costs
Obviously the grid is needed and anyone using it should pay. But equally obviously, you should not have to pay when you are NOT using it! And residential solar is not using it.
Also obviously, we should not have to pay more than what is required to maintain the grid, in order to protect the profits of power companies that are being squeezed by free electricity. If they are increasing the prices due to solar, they should be held accountable for the actual reasons why.
We are deviating from the topic, I just wanted to protest against the notion that there is something wrong or sinister about encouraging rooftop solar. It's one of the best things that have happened in the efforts to reduce emissions, perhaps second only to the large-scale deployment of electric cars.
What is wrong is pretending that the current rate structure follows from some natural law, rather than being contingent on the structure of the grid and how consumers use it.
I've seen this repeatedly when net metering is scaled back. This is represented as some nefarious and unfair change when it is nothing of the sort.
One of the many surprising things on that list is that paperboys still exist!
I thought printed papers were all but dead even in the US and can't recall the last time I saw a stand or store where I could buy one. They faded away unceremoniously, like phone booths.
That there is still enough money in it today to motivate kids to get up in the morning, I would never have guessed.
Also astonishing are the size and prevalence of the various fees! Around here a 5-dollar fee for invoicing is the highest I've seen, and it raises eyebrows even among the Mercedes/BMW crowd because everyone knows it doesn't cover any actual additional costs, so it's basically a scam. A way for companies to say "interest free" while still collecting interest.
>I thought printed papers were all but dead even in the US
In its last financial statement, the New York Times reported 600,000 print subscribers. (Plus something like 20 million paid online subs.)
Newsstands are mostly gone (though there are a few), but my experience over the last few years is that outside of tourist areas, print newspapers are available at most chain drug stores, book stores, and some gas stations. The more urban you are, the more likely you are to find them. They also remain popular in ethnic communities. I recently picked up a monthly printed newspaper in Japanese that is distributed in the DC area.
I can think of five places within six blocks of me that sell newspapers (two drug stores, two bodegas, and a bookstore).
The latest Superman movie even did a PR stunt where the movie company printed up thousands of Gotham newspapers with Superman headlines and distributed them to newspaper racks. I saw them at Walgreens.
Even simpler, have a discussion with the group once and have cameras either be prohibited on all days except X, or allowed on all days except X, depending on the majority preference.
I agree with the GP. I like some aspects of tiling WMs but gave up after a while.
The main pain points for me were
1) I often end up with two windows each taking a side of the screen, leaving basically nothing of interest in the centre. So I end up jumping through some tetris-like hoops to get a window centred.
2) If I close any window, all the others move, often causing a repeat of problem 1.
3) Apps not supporting it properly, causing weird graphical glitches.
4) Some apps should never be small windows, others never large.
Basically I ended up spending more time managing windows with a tiling WM than I ever did before, which eventually outweighed the benefits.
>It isn't because your resting keys require NO finding, so will always be easier
You still need to find them, unless your hand is glued to the resting keys. And even if it were, the resting keys might not be hjkl - and even if they are, they will by default type hjkl rather than move the cursor in any other software you ever use except vim.
(It can't be objectively measured of course but I am convinced anyone who can use vim without thinking has spent more time learning vim than they gained from using vim for other things) ;-)
Just about any line of text I write daily uses symbols not reachable from the resting position, and once my hand has left that position the arrow keys are easier to find. (Inverted T of course, with home/end/pgdown/pgup cluster).
>there are other apps that do that, specifically, file managers
The point is that you ("you" in this case being a typical user, not you personally) will open applications every day where hjkl does nothing at all with the cursor, and you have to use the arrow keys anyway. This is mental friction that remains even after you spent years internalizing the hjkl cursor moving flaw.
Faced with this situation a user can choose to either use the arrow keys in vim, or go full Stockholm syndrome and change the default in every other piece of software to match vim.
If that seems like a good idea, it might be worth remembering that the creator of vi didn't choose hjkl because he thought it superior to arrow keys - he did it because his terminal (the ADM-3A) had no dedicated arrow keys; the arrows were printed on hjkl! ;-)
> Just about any line of text I write daily uses symbols not reachable from the resting position
Use a better setup! No-one forces you to use bad defaults and think everything must be bad.
Also, there are no such symbols even in a standard setup; all of the number-row keys/symbols are reachable with individual fingers, so you never lose the resting place - it's a couple of fingers moving back and forth.
> my hand has left that position the arrow keys are easier to find. (Inverted T of course, with home/end/pgdown/pgup cluster
It didn't, a couple of your fingers did. But also, why did you ignore the F6, numpad+ etc hand dance and only focus on the arrow keys?
> "you" in this case being a typical user
A typical user doesn't use vim. A real user using vim is perfectly capable of basic keyboard rebinding.
> This is mental friction that remains even after you spent years internalizing the hjkl cursor moving flaw.
No, it goes away after you spend minutes oiling your system to remove friction. (Of course, it may still take years of ignorance before that...)
> change the default in every other piece of software to match vim.
Or you change the default once system-wide. See, reality is much simpler than your fantasy!
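For the record, here's roughly what "once, system-wide" can look like on Linux with keyd (a sketch; it assumes keyd is installed, and the layer name "nav" is just my choice):

    # /etc/keyd/default.conf
    [ids]
    *

    [main]
    # hold capslock to turn ijkl into an inverted-T cursor cluster
    capslock = layer(nav)

    [nav]
    i = up
    j = left
    k = down
    l = right

After that, capslock+ijkl moves the cursor in every application, vim or not.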
> the creator of vi didn't choose hjkl because he thought it superior
So? You're the only one here stuck on hjkl because for some reason you can't comprehend that it's just a config, not a mandatory commandment passed down by the Vim prophet.
Even if you only use your own computer, customizing the basics is a bit of a trap in many ways.
First you need to find a different setup that is actually better, not just different. Then you need to build muscle memory for it, then you need to never use any other computer because they will not have your setup.
I think getting good at using the defaults is better than changing the defaults. Basically learn to play the guitar, even if it's hard.
I customize things too, but take care to make it additive, not transformative. Aliases, plugins, better software and such are fine, but messing with my muscle memory is just not worth it.
>Also, there are no such symbols even in a standard setup
That depends on the standard. In some countries you need two hands to type an @, just to take one example.
For US english the numpad is a good example though. Not so easy to find the home keys from the numpad, but your hand passes the arrow keys on the way. :)
>why did you ignore the F6, numpad+ etc hand dance and only focus on the arrow keys?
Because I'm mainly making a counterpoint to your claim that using hjkl was better than using the arrow keys.
It has admittedly grown to a more general anti-bikeshedding rant fuelled by my own bikeshedding regrets - so I better stop here. :)
> then you need to never use any other computer because they will not have your setup
Again, if you really had such a defeatist attitude toward changing things, you'd simply never use vim - or the MC file manager, for that matter - because they're not available everywhere, and you'd need to avoid other computers, etc. In reality all of this is false, of course; humans are flexible enough.
> but messing with my muscle memory is just not worth it.
You don't have any muscle memory for F6. But also, it's trivial for such common things as cursor-key alternatives; your ijkl is already an inverted T that your muscles are used to, just without the extra hand move.
> That depends on the standard. In some countries you need two hands to type an @, just to take one example
Wrong again, you'd need two fingers, but your hands stay near their resting place.
> Because I'm mainly making a counterpoint to your claim that using hjkl was better than using the arrow keys.
You're the one who brought these keys up! My point was "Don't repeat the ancient hjkl mistake", which you then couldn't argue with.
I honestly think Apple was interesting because of Steve Jobs.
Whatever you think of him, he was clearly a strong leader who was able to make a bunch of otherwise unruly geniuses cooperate toward a common goal. He also overruled every other ego in the hierarchy, so the goals being worked on actually shipped.
This is rare, and made Apple different from other corporations, for a while.
Without a person like that at the top, corporations are money-making machines that gravitate towards the bland and predictable. Egos will battle for resources for no purpose other than being the boss of the most resources, geniuses will grow increasingly bored and disillusioned, and whatever special sauce there was will go stale.
Corporations can exist in that state for a long time and sometimes still make nice products of course.