In the earlier days of the web, there were a lot more plugins you'd have to install to get around most websites: not just Flash, but things like PDF viewers, RealVideo, and so on. You'd regularly have to install new codecs, to say nothing of the days when there were some sites you'd have to use a different browser for. A movement towards a more standards-driven web (in the sense of de facto, not academic, standards) is what did away with most of this.
I don't think it's CPU-based, but I've always had an issue with audio crackling on my AirPods Max with my iPhone (my AirPods Pro work fine, and the Max works fine with my Mac).
While it may not be practical from a technical perspective, the current US president has suggested shutting down parts of the Internet to ostensibly combat terrorist recruiting.
Let's be honest about it: there is no political power on this planet that does not see information flow as a vector that needs to be controlled (and if they don't, sadly, they likely will not remain in power for long). If so, we are just very lucky it did not happen sooner. In a weird sense, it helps that corporate interests prevent it.
> In a weird sense, it helps that corporate interests prevent it.
As you may well be aware, ARPANET - the original internet - was designed to be resilient against the deliberate targeting of any of its infrastructure nodes. Of course, it had a military objective, but that design was genuinely useful to broader humanity too. We could have stuck with a uniformly resilient multilevel mesh design for the entire internet.
I'm sure that many people will object to this notion with multiple potential problems and several anecdotes. This is something the corporate world always does: it picks and popularizes inferior or suboptimal designs that serve its interests and then insists they are the only way to do it. But we have numerous individual experiments and projects that demonstrate how effective the original mesh design was - BitTorrent, wireless meshnets, IPv6 overlay networks, etc. We just had to put enough effort in to create a single, cohesive, resilient network.
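To make the resilience point concrete, here's a toy sketch in Python using networkx (the node counts and peer degree are numbers I made up, purely for illustration): a mesh where every node has several peers typically survives losing a few nodes, while a hub-and-spoke topology dies with its hub.

    import random
    import networkx as nx

    # 4-regular mesh: every one of 50 nodes peers with 4 others
    mesh = nx.random_regular_graph(d=4, n=50, seed=1)
    # hub-and-spoke: 49 spokes all hanging off hub node 0
    star = nx.star_graph(49)

    # knock out 5 random mesh nodes, and the single hub
    mesh.remove_nodes_from(random.Random(1).sample(list(mesh.nodes), 5))
    star.remove_node(0)

    print(nx.is_connected(mesh))  # almost certainly True
    print(nx.is_connected(star))  # False - 49 isolated spokes

Nothing rigorous, but it's the same property BitTorrent swarms and wireless meshnets rely on: no single node is load-bearing.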
We inherited the current mess that we call the internet because several layers of it were centralized to satisfy corporate interests. They are responsible for our current predicament in the first place.
You are right. I am not trying to rewrite history, but I also wonder: had the planners thought the internet would become as big as it is, would they have allowed it to be as unrestrained as it was at the beginning?
> We inherited the current mess that we call the internet because several layers of it were centralized to satisfy corporate interests. They are responsible for our current predicament in the first place.
Separately, it does open an interesting question. Right now the push is to centralize, but let's speculate: would they push for decentralization if it meant the network became useful for a different purpose (a solar-system internet, assuming private space exploration takes off)? I wonder if they would try to cooperate versus force 'their' own satellite communication standard (I am assuming a lot now).
> had the planners thought the internet would become as big as it is, would they have allowed it to be as unrestrained as it was at the beginning?
Interesting question. I think ARPANET took that design because it started as a research project; the corporations of today would have been unlikely to ever adopt such a design, and I don't know how the corporations back in the day were. As for the actual planners, the relevant question is whether they had any reason to believe that it wouldn't grow so big so fast. We know of so many examples where research labs and academia came up with revolutionary products. Perhaps they did imagine the possibility and were generous enough?
> Right now the push is to centralize, but let's speculate: would they push for decentralization if it meant the network became useful for a different purpose? I wonder if they would try to cooperate versus force 'their' own satellite communication standard.
That's a very tricky question too. Here's what I think: they would probably cooperate and create an open standard - but only because they would want to compete with the dominant player holding the first-mover advantage. And that standard would be so complex that it defeats the purpose of being open, since only they could practically set anything up with it. This is a trend we see widely today - web standards, Kubernetes, BIOS (or equivalent) firmware, many parts of the Linux software ecosystem, etc. They never go for the simplest, most logical, orthogonal, easy-to-implement designs.
They succeeded. You're linking to something from 2015, so it was about "ISIS", but in 2025 he did manage to censor TikTok so people wouldn't be "recruited" to "Hamas".
Don't remember the full context, but I heard a few years ago from Adobe that even if they never sold another license to the private sector, government licenses alone would be self-sustaining.
I was active in the ColdFusion/CFML community for a long time, and still run some production code in it. It certainly isn't popular, but it just carries on quietly, powering a lot of internal applications you'll never hear about. Many run the open-source implementation of it (Lucee).
Indeed it does. I maintain one such application while a rewrite is in progress. Gotta say, it's not been that bad, and the Lucee docs have served me well - but for whatever reason I tend to be pleased/impressed by all kinds of tech, even when popular opinion is negative about it.
It's actually not the hotend that's the largest power drain; it's heating the large heat bed. Bambu Lab is introducing firmware features to ramp up the heat more slowly, but I don't know if that could happen slowly enough not to drain a battery.
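For a rough sense of scale, here's a back-of-envelope sketch in Python (the mass, temperatures, and battery size are assumptions of mine, not Bambu specs):

    # Q = m * c * dT, illustrative numbers only
    m, c, dT = 1.0, 900.0, 35.0   # ~1 kg bed; J/(kg*K) for aluminum; 25 C -> 60 C
    q_wh = m * c * dT / 3600      # joules -> watt-hours
    print(q_wh)                   # ~8.75 Wh just to reach temperature, before losses
    # vs. a ~100 Wh power bank; the real strain is peak draw
    # (bed heaters commonly pull a few hundred watts, far more
    # than a hotend) plus steady losses while holding temperature

Worth noting that ramping slowly only caps the peak draw; the total energy spent actually goes up, since the bed is shedding heat the whole time it's warming.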