To summarize the Wikipedia article on linguistic relativity, the "strong" hypothesis that language determines thought has been debunked. But there are many things that a language influences. To use a computer analogy, all mainstream programming languages are Turing complete, so you can express any computation in them. In this sense the language does not determine what programs you can write. But in practice, as any computer person will tell you, different languages are good at different things. And that is kind of what this paper does: it cites a lot of examples where English has poor vocabulary or odd quirks, and shows by comparison with other languages that this measurably affects conclusions about certain cognitive abilities. The issue they're complaining about is like benchmarking Python programs and drawing conclusions about the speed limits of computing, without ever trying C++ or assembly.
Tell it to the papers: https://www.cs.drexel.edu/~csg63/publications/onward24/onwar... and https://arxiv.org/pdf/1808.03916 As vague analogies go, much more ridiculous and vague things have been published and peer reviewed and even gotten significant citations. Like ecological niches and invasive species, DNA as genetic blueprints, selfish genes, ... About all that can be said about these is that they are closer to the truth than what came before, and that if you actually learn the field then you can appreciate how they kind of get it right.
Yet what about aphasia? There seems to be direct evidence that thought, action, and language are in conflict rather than seamless, so language plays a weird role: some of us are programmed by it, while others are blissfully unconnected to its effects.
If aphasia is evidence that some of us don't use language to think, then language is nothing more than a programming language.
Whatever programming language it is, language is irrelevant to people with aphasia.
Typically people develop aphasia after a stroke. This provides a natural before/after comparison. When, for example, Mark Brodie had a stroke, he wasn't able to speak - and he also wasn't able to read or write computer programs. This indicates that natural and programming languages use overlapping regions of the brain.
Regarding language being in conflict with other modes of thinking, there is indeed a "savant" effect of aphasia, where after an aphasia-inducing stroke people suddenly develop amazing abilities in visual art, music, or mathematical thinking. But it is not consistent - most people don't develop these. And it comes with impairment of emotions, memory, etc.

So really what the evidence suggests is that some people suppress parts of their brain, and these injuries unlock that potential because the suppression mechanisms break. It's most likely a cultural thing - people act how they are "expected" to act. There is some evidence that females actually have more biological capability for math (in cultures with very high gender equality), but typically you see lower performance, so the conclusion is that the culture essentially "programs" in the lower performance.

It is probably the same with the supposed "conflict" between thought, action, and language - the culture treats these as distinct modes and inhibits cross-modal thinking like synesthesia.
Sure, but thought does not end or get impeded in aphasia, merely the ability to use language. People can still reason, gesture, and react, which tells us these are either divergent or entirely separate abilities. Any language is an external program that runs a cultural control system, not a communication system that directly connects mental states.
You can't generalize like that. Aphasia is a symptom - of course by definition it is merely the inability to use language. But stroke patients will typically show a lot of impairment, not only aphasia - they will have difficulties reasoning, gesturing, reacting, etc. Different people will have different abilities and different levels of impairment, but this tells you very little - it is not like each ability has its own little neuron in the brain; fMRI has confirmed that most activities involve several different parts of the brain. There are complex thoughts that require linguistic involvement to process, sign language and dancing combine gesturing with language, etc. The main thing the OP paper shows is that language is pervasive and intricately involved in how activities map to mental states.
Yeah, so what that paper shows is that 6-7 cognitive tasks show low involvement with the language areas. What it doesn't show is that all cognitive tasks are independent of the language areas. As the paper itself admits, some forms of reasoning seem to involve language.
Actually, it demonstrates how the brain divides the work - if you also read her other papers and her public statements for lay audiences, she claims we don't use "language to think", since mental events are specific while language is arbitrary.
There was a whole experiment with this "Tab Candy" thing a few years ago. It failed: Mozilla disabled it and it was quietly removed - well, almost, because a fair number of people complained. I wouldn't be surprised if today's tab groups go the same way. Browser innovation is hard, and at this point most of the innovation is in forks of Firefox rather than Firefox itself.
Sounds like a very restrictive tracker... but I guess the more restrictive, the more likely it has good stuff. Seems kind of strange though because most trackers I have seen just completely ban any sort of proxy or VPN.
They usually ban VPNs for website use but allow them for seedboxes (sometimes requiring approval). The rationale is to stop account sharing and ban evasion.
I don't know, but it's pretty suspicious to ban VPNs for a service that is illegal in many countries. I hope they don't keep records that could be leaked.
It's really not. Even if they aren't recruiting new members you can buy an account or invites. You could also prepare alternate accounts ahead of time.
It seems to be a combination of a lot of corporations using NixOS and a lot of community members using it as their personal distro. Nix is pretty flexible as a package manager but there are tensions that do crop up and you get arguments over really unimportant things that just sort of escalate because people bring in politics and all sorts of other stuff. And my understanding is that the moderation team was cooling off a lot of these disagreements, but now that there's no moderation team, I'm kind of curious to see what happens.
Disclaimer: I use NixOS but try not to participate in the community (I maintain a private fork), after seeing how they treat prospective contributors.
The Nixpkgs repo is a Git repository. I forked the repository and I merge in updates using the normal git workflow. I've tried flakes and stuff, but none of them are as convenient as directly modifying files.
Linux has become the dominant operating system for a wide range of devices, even though other options like FreeRTOS or the BSD family seem more specialized. The widespread adoption of Linux suggests that a single, versatile operating system may be more practical than several niche ones. However, the decision to drop support for certain hardware because it complicates maintenance, as seen here, would seem to contradict the benefit of a unified system. I wouldn't be surprised if it really just results in more Linux forks - Android is already at the point of not quite following mainline.
The RF spectrum is a public good in the US, and there are requirements placed on the winners of those auctions to demonstrate it provides some public benefit. A company can't just buy spectrum and sit on it, for example. They must start using it within a certain timeframe.
The RF spectrum is a common good, not a public good. Public goods are non-excludable and non-rivalrous. The RF spectrum is non-excludable (anyone can transmit on any frequency, given the right equipment) but rivalrous (transmitting on one frequency prevents others from using that frequency).
Requiring the winner of a spectrum auction to use it is a way to prevent anti-competitive tactics (since the government is granting a monopoly to the winner). The goal is to incentivize productive use of limited resources, not necessarily to benefit everyone. In theory, the winner could use the spectrum for entirely internal purposes. Though in real-world spectrum auctions, the government usually adds stipulations such as requiring interoperability or using open standards. This reduces the value that the government captures, but likely increases the value that is created overall.
Before spectrum auctions, the government simply mandated what frequency bands were used for what, and by whom. Getting access usually meant lobbying and back room deals. Sometimes the FCC used lotteries, which caused speculators to enter lotteries and then license access (basically capturing revenue that would have gone to the government had the spectrum been auctioned). In practice, auctions are the worst form of spectrum allocation, except for all the others.
All I can find on the Smithsonian is that they did press interviews, where various staff expressed opposition, and that they also sent some report to Congress. The press interviews are, quite naturally, public statements, and it could be argued they're unrelated to lobbying. As for the report, that's part of their normal duties - it would be a real catch-22 if such a report were considered lobbying. This feels like bluster from the politicians; they write dumb letters all the time for PR purposes.
The space shuttle situation, though, is a disaster.
So all articles will be open and free to read. The ACM Open subscription mainly includes publishing at a lower overall cost than the per-article rates, but also includes "AI-assisted search, bulk downloads, and citation management" and "article usage metrics, citation trends, and Altmetric tracking".