perceived downplaying of the damage. Popular soundbites (including "don't solve social problems with technology") have it generally backwards, and most people don't go beyond them.
Except that, for some reason, the recent trend is to release an episode per week even though they have the whole season filmed and could just drop it at once.
As a binge watcher, this irks me to no end; I usually end up delaying watching episode 1 until everything is released, and in the process forget about the show for half a year or something, at which point there's hardly any conversation happening about it anymore.
> when rewatching older Trek shows it is also a bit infuriating how nothing really has an impact
TNG: You get e.g. changes in political relationships between major powers in the Alpha/Beta quadrants, several recurring themes (e.g. Ferengi, Q, Borg), and continuous character development. However, this show does a much better job of exploring the Star Trek universe breadth-first rather than over time.
DS9: Had one of the most epic story arcs in all of sci-fi television, one that spanned multiple seasons. In a way, this is IMO the gold standard for how to do this: most episodes were still relatively independent of each other, but the long story arcs were also visible and pushed forward.
VOY: Different from DS9, with one overarching plot (coming home) that got pushed forward most episodes, despite individual episodes being mostly watchable in random order. They figured out a way to give events accumulating impact without strong serialization.
> Last season of TNG they introduced the fact that warp was damaging subspace. That fact was forgotten just a few episodes later.
True, plenty of dropped arcs in TNG in particular. But often for the better, as with the "damaging subspace" one - it was easy to explain away (fixing the warp engines) and was a bad ecological metaphor anyway; conceptually interesting, but it would hinder subsequent stories more than help.
> VOY: Different from DS9, with one overarching plot (coming home) that got pushed forward most episodes, despite individual episodes being mostly watchable in random order. They figured out a way to give events accumulating impact without strong serialization.
I wouldn't say they had any noticeable accumulating impact.
Kim was always an ensign, system damage never accumulated without a possibility of repair, they fired 123 of their non-replaceable supply of 38 photon torpedoes, the limited power reserves were quickly forgotten, …
Unless you mean they had a few call-back episodes, pretty much the only long-term changes were the Doctor's portable holo-emitter, the Delta Flyer, Seven replacing Kes, and Janeway's various haircuts.
> True, plenty of dropped arcs in TNG in particular. But often for the better, as with the "damaging subspace" one - it was easy to explain away (fixing the warp engines) and was a bad ecological metaphor anyway; conceptually interesting, but it would hinder subsequent stories more than help.
That, and beta-canon has it that this engine fix is why Voyager's warp nacelles moved.
The Doylist reason is of course "moving bits look cool".
The wildest dropped arc was the absolutely horrifying mind-control parasites. But like the warp speed limit, I see why: you'd have to change the whole tone of the show if you wanted to keep them as a consistent threat.
To be fair, there were a couple of times where they mentioned being allowed to exceed warp speed limits for an emergency. Otherwise, they were usually traveling under Warp 6.
> That was the principle many years ago, you had to leave the world exactly in the state you found it in.
This doesn't make sense; no show I know from that time followed that principle - and for good reason: a show gets boring the moment the viewer realizes that nothing ever happens on it, because everything gets immediately undone or rendered meaningless. Major structural changes get restored at the end (with exceptions), but characters and the world are gradually changing.
> If John dumped Jane at the beginning of the episode, they had to get back together at the end, otherwise the viewer who had to go to her son's wedding that week wouldn't know what was going on.
This got solved with "Last time on ${series name}" recaps at the beginning of the episode.
I remember when the slightest hint of a multi-episode story was revolutionary and everybody was talking about it as a great thing. By today's standards, nothing was happening.
> Major structural changes get restored at the end
This is the point. The persistent changes in these shows tended to be very minor. Nothing big ever happened that wasn't fully resolved by the time the credits rolled, unless it was a 2-part episode, and then it was reset by the end of the second episode.
How old are you? Because I promise you, that description was pretty much spot-on for most shows through most of the history of TV prior to the late 1990s. My memory is that the main exception was daytime soap operas, which did expect viewers to watch pretty much daily. (I recall a conversation explaining Babylon 5's ongoing plot arc to my parents, and one of them said, "You mean, sort of like a soap opera?") Those "Previously on ___" intro segments were quite rare (and usually a sign that you were in the middle of some Very Special 2-part story, as described in the previous comment).
Go back and watch any two episodes (maybe not the season finale) from the same season of Star Trek TOS or TNG, or Cheers, or MASH, or Friends, or any other prime time show at all prior to 1990. You won't be able to tell which came first, certainly not in any obvious way. (Networks didn't really even have the concept of specific episode orders in that era. Again looking back to Babylon 5 which was a pioneer in the "ongoing plot arc" space, the network deliberately shuffled around the order of a number of first-season episodes because they wanted to put stronger stories earlier to hook viewers, even though doing so left some character development a bit nonsensical. You can find websites today where fans debate whether it's best to watch the show in release order or production order or something else.)
By and large, we all just understood that "nothing ever happens" with long-term impact on a show, except maybe from season to season. (I think I even remember the standard "end of episode reset" being referenced in a comedy show as a breaking-the-fourth-wall joke.) Yes, you'd get character development in a particular episode, but it was more about the audience understanding the character better than about immediate, noticeable changes to their life and behavior. At best, the character beats from one season would add up to a meaningful change in the next season. At least that's my memory of how it tended to go. Maybe there were exceptions! But this really was the norm.
> Again looking back to Babylon 5 which was a pioneer...
Heh I was going to reply "B5 is better than TNG", but thought "better check all the replies first". Wherever there's discussion of extended plots there's one of us nerds. (If anyone hasn't seen it... yes half the first season is rough, but you get a season's worth of "The Inner Light"-quality episodes by the end and for all the major characters; TNG, while lovely, has just a few because there's so little character development besides Picard)
Babylon 5 was mostly in order; if you want to see something really screwed up, check out the spinoff Crusade. On top of what the network did, it was written more serially than Babylon 5 was.
Most shows were like that. Yes, there was some minor character growth and minor plot development over seasons, but most shows basically reset every episode. You almost have to when you're targeting syndication, because reruns don't always happen in order and they often run so frequently that viewers can't catch them all anyway.
> It would help if TV manufacturers would clearly document what these features do, and use consistent names that reflect that.
It would also help if there was a common, universal, perfect "reference TV" to aim for (or multiple such references for different use cases), with the job of the TV being to approximate this reference as closely as possible.
Alas, much like documenting the features, this would turn TVs into commodities, which is what consumers want, but TV vendors very much don't.
I wonder if there's a video equivalent to the Yamaha NS-10[1], a studio monitor (audio) that (simplifying) sounds bad enough that audio engineers reckon if they can make the mix sound good on them, they'll sound alright on just about anything.
Probably not, or they don't go by it, since there seems to be a massive problem with people being unable to hear dialogue well enough to not need subtitles.
It was a real eye(ear?)-opener to watch Seinfeld on Netflix and suddenly have no problem understanding what they're saying. They solved the problem before, they just ... unsolved it.
My favorite thing about Kodi is an audio setting that boosts the center channel. Since most speech comes through that, it generally just turns up the voices, and the music and sound effects stay at the same level. It's a godsend. Also another great reason to have a nice backup collection on a hard drive.
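Kodi aside, the underlying trick is just weighting the center (FC) channel higher during a stereo downmix. A minimal Python/numpy sketch, assuming the usual FL/FR/FC/LFE/BL/BR channel order (the gains here are my guesses, not Kodi's actual values):

    import numpy as np

    def downmix_boost_center(pcm, center_gain=2.0):
        # pcm: float array of shape (n, 6) holding 5.1 audio as FL, FR, FC, LFE, BL, BR
        fl, fr, fc, lfe, bl, br = pcm.T
        left  = fl + center_gain * fc + 0.7 * bl   # dialogue (FC) boosted relative
        right = fr + center_gain * fc + 0.7 * br   # to music and effects
        out = np.stack([left, right], axis=1)
        return out / max(1.0, np.abs(out).max())   # normalize to avoid clipping

Equivalently, you can leave FC at unity and attenuate everything else, which avoids the normalization step.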
It's a similar thing with movies from before the mid-2000s (I place the inflection point around Collateral in 2004): after that, you get overly dark scenes where you can't make out anything, while in anything earlier, the night scenes let you clearly make out the setting, with the featured actors/props clearly visible.
Watch An American Werewolf in London, Strange Days, True Lies, Blade Runner, or any other movie from the film era up to the start of digital, and you can see that the sets are incredibly well lit. On film they couldn't afford to reshoot and didn't have an immediate view of how everything in the frame came out, so they had to be conservative. They didn't have per-pixel brightness manipulation (dodging and burning were film techniques that could technically have been applied per frame, but good luck doing that at any reasonable expense or in any reasonable amount of time). They didn't have hyper-fast color film stock they could use (ISO 800 was about the fastest you could get, and it was a clear downgrade from anything slower).
The advent of digital film-making, once sensors reached ISO 1600/3200 with reasonable image quality, is when the allure of the time/cost savings of not lighting every scene heavily reared its ugly head, and by the 2020s you get the "Netflix look" from studios optimizing for "the cheapest possible thing we can get out the door" (the most expensive part of any production is filming on location; a producer will want to squeeze every minute out of that, with the smallest crew they can get away with).
Reference monitor pricing has never been anywhere near something mere mortals could afford. The price you gave of $21k for a 55" display works out to under $400 per inch, more than 50% off the $1k+ per inch average I'm used to seeing from Sony.
If you account for the wastage/insurance costs of using standard freight carriers, that seems reasonable to me as a proportion of value. I'm sure this is shipped insured, well packaged, and on a pallet.
Walmart might be able to resell a damaged/open box $2k TV at a discount, but I don’t think that’s so easy for speciality calibrated equipment.
My local hummus factory puts the product destined for Costco into a different sized tub than the one destined for Walmart. Companies want to make it hard for the consumer to compare.
Costco’s whole thing is selling larger quantities, most times at a lower per unit price than other retailers such as Walmart. Walmart’s wholesale competitor to Costco is Sam’s Club. Also, Costco’s price labels always show the per unit price of the product (as do Walmart’s, in my experience).
Often a false economy. My MIL shops at Sam's Club, and ends up throwing half her food away because she cannot eat it all before it expires. I've told her that those dates often don't mean the food is instantly "bad" the next day but she refuses to touch anything that is "expired."
My wife is the same way - the "best by" date is just a date they put for best "freshness". "Sell by" date is similar. It's not about safety.
My wife grew up in a hot and humid climate where things went bad quickly, so this tendency doesn't come from nowhere. Her whole family now lives in the US midwest, and there are similar arguments between her siblings and their spouses.
The ones I’m talking about were only subtly different, like 22 oz vs 24 oz. To me it was obvious what they were doing, shoppers couldn’t compare same-size units and they could have more freedom with prices.
There is no federal law requiring unit pricing, but NIST has guidelines that most grocery stores follow voluntarily. Nine states have adopted the guidelines as law.
I don't think that's correct. Prices for retail goods aren't usually even attached to the product in interstate commerce, and are shown locally on store shelving.
These exist, typically made by Panasonic or Sony, and cost upwards of 20k USD. HDTVtest has compared them to the top consumer OLED TVs in the past. Film studios use the reference models for their editing and mastering work.
Sony specifically targets the reference with their final calibration on their top TVs, assuming you are in Cinema or Dolby Vision mode, or whatever they call it this year.
There is! That is precisely how TVs work! Specs like BT.2020 and BT.2100 define the color primaries, white point, and how colors and brightness levels should be represented. Other specs define other elements of the signal. SMPTE ST 2080 defines what the mastering environment should be, which is where you get the recommendations for bias lighting.
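For concreteness, the "reference" is literally a handful of published numbers; e.g. the BT.2020 primaries and white point, in CIE 1931 xy chromaticity coordinates (values straight from Rec. ITU-R BT.2020, here just as a Python dict):

    # Chromaticity coordinates (CIE 1931 xy) from Rec. ITU-R BT.2020
    BT2020_PRIMARIES = {
        "red":   (0.708, 0.292),
        "green": (0.170, 0.797),
        "blue":  (0.131, 0.046),
        "white": (0.3127, 0.3290),  # D65, same white point as BT.709/sRGB
    }

A display that lands its primaries on those coordinates (and follows the spec's transfer function) is, at least in principle, reproducing what the mastering engineer saw; everything else is deviation.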
This is all out there -- but consumers DO NOT want it, because in a back-to-back comparison, they believe they want (as you'll see in other messages in this thread) displays that are over-bright, over-blue, over-saturated, and over-contrasty. And so that's what they get.
But if you want a perfect reference TV, that's what Filmmaker Mode is for, if you've got a TV maker that's even trying.
Those "coincidences" in Connections are really no coincidence at all, but path dependence. Breakthrough advance A is impossible or useless without prerequisites B and C and economic conditions D, but once B and C and D are in place, A becomes obvious next step.
Some of those really are coincidences, like "Person A couldn't find their left shoe and ended up in London at a coffee house, where Person B accidentally ended up when their carriage hit a wall, which led to them eventually coming up with Invention C", for example.
Although from what I remember of the TV show, most of what he investigates/talks about is indeed path dependence in one way or another, though not everything was like that.
Being related to neither software behavior nor the structure of the underlying problem, animations tend to obscure causal relationships and make it harder for the user to build a correct mental model.
I see where you're coming from: animations are overused and even when they make sense they are made too slow and flashy (because otherwise how would the implementors feel like they did something if it's barely noticeable?)
Animations are like bass in music: most people notice them only when they're missing or bad.
OTOH, it's good enough that a webapp I vibe-coded in 5 minutes on my phone is better at typesetting and aligning label stickers than Microsoft Word. Or at least easier, giving correct results on the first try, vs. Word, which gives me correct results approximately never; I've wasted close to a person-day fighting with it over the past year already.
Could be; after ~3 years, my Samsung Galaxy S7 would reset if I tried to make a call with the battery below ~20%. I immediately knew it was the battery, because I still remember noticing the same thing as a kid on a Nokia 3410 - calling would sometimes drop the battery indicator by one bar, which would come back moments after the call ended. That's how I learned about internal resistance and how battery capacity is measured :).
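The mechanism is plain Ohm's law: the loaded terminal voltage is V = EMF - I * R_internal, and R_internal grows as the cell ages. A toy calculation with made-up but plausible numbers:

    emf = 3.8         # open-circuit voltage of the cell at ~20% charge, volts
    r_internal = 0.5  # internal resistance, grown with age, ohms
    i_call = 1.2      # current spike when the radio transmits during a call, amps

    v_loaded = emf - i_call * r_internal  # 3.8 - 0.6 = 3.2 V under load
    # If the phone's brownout cutoff sits around ~3.3 V, this sag resets it even
    # though the gauge still says 20%; end the call and the voltage bounces back.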
As for fixes in software, it's either treating it as WAI, or secretly throttling down the phone, like Apple did, for which they got accused of planned obsolescence. Neither choice is good (though actually informing the users would go a long way).
Oral transmission of complex culture is one of the things that separates us from "the state of nature". As we lose it, we move further away from the beasts that conquered the planet and closer to the squirrel.
Transmission of complex culture is what separates us (and enables this complex culture in the first place); the oral medium is merely one way to do it. Being the first, it's probably not the best.