I'd agree with this, and I think it's more than just your reasons, especially once you venture outside the US, at least from what I've experienced. I've seen it most where there are no AI tech hubs around and no way to "get in on the action". Blue-collar workers, who have less to lose, ask me directly: why would anyone want to invent this? It's one of the reasons the average person on the street doesn't relate well to tech workers in general; there's a perceived lack of "street smarts" and self-preservation.
Anecdotally, it's almost like they see them as mad scientists who would happily blow up themselves and the world if they get to play with the new toy; almost childlike, usually convinced they are doing "good" in the process. Most people read that as a lack of a certain kind of intelligence or maturity.
ChatGPT is one of the most used websites in the world and it's used by the most normal people in the world, in what way is the opinion "generally negative"?
No, it's not. No one is forced to use ChatGPT; it got popular on its own. When millions use it voluntarily, that contradicts the "generally negative" claim, even if there are legitimate criticisms of other aspects of AI.
A big reason is relative advantage: the "I have to use it because it's there now and everyone else is using it, but I would rather no one had to use it at all" argument.
Let's say I'm a small business and I want a new logo for some marketing material. In the past I would have paid someone, either via a platform or some local business, to do it. That was just the cost of doing business.
Now that there is a lower-cost technology, and I know my competition is using it, I should use it too; all else equal, I'm losing margin to my competition if I don't.
It's happening in software development too. It's the reason people say "if you don't use AI you will be taken over by someone who does". That may be true, but that person may have wished the AI genie was never let out of the bottle.
We'll see how long that lasts with their new ad framework. Most normal people are probably put off by all the other AI being marketed at them. A useful AI website is one thing; AI forced into everything else is quite another. And then they hear on the news, or from their friends, how AI-everything is going to take all the jobs so a few controversial people in tech can become trillionaires.
You can use ChatGPT for minor stuff and still hold a negative view of AI. In fact, the non-tech white-collar workers I know use ChatGPT for things like business writing at work but are generally concerned.
Negative sentiment also comes through in opinion polling in the US.
Yes, and I made an argument supporting that "widely used" and "viewed as bad" are not mutually exclusive. You simply repeated what I responded to and asserted that your opinion is the right one.
I get your argument, but in this case it is that straightforward, because ChatGPT isn't a forced monopoly like, e.g., Microsoft Windows. Common folk decided to use ChatGPT because they think it is good. Think of Google Search: it earned its market position because it was good.
>Common folk decided to use ChatGPT because they think it is good.
Thinking it's good is not the only reason to use a tool. "Good enough" doesn't mean "good". If you think it's better to generate an essay due in an hour than to rush something by hand, that doesn't mean the generator is "good". If I throw together a toy app full of useless branches, no documentation, and tons of sleep calls, that doesn't make the program "good". It's just "good enough".
That's the core issue here: "good enough" varies with context, and not many people are using it the way the sales pitch promises, to boost the productivity of the already productive.
I don't agree with your comments, especially using PirateBay as an example. Calling either one "bad" is purely subjective. I find both PirateBay and ChatGPT to be good things; they both bring value to me personally.
I'd wager that most people would find both as "good" depending on how you framed the question.
Seemingly the primary economic beneficiaries of AI are people who own companies and manage people. What this means for the average person working for a living is probably a lot of change, additional uncertainty, and additional reductions in their standard of living. Rich get richer, poor get poorer, and they aren't rich.
I'm just trying to tell you what people outside your bubble think: that AI is VERY MUCH a class thing. Shoving AI images at people is seen as completely uncool; it makes one look like a corporate stooge.
Sure, I meant the anglosphere. But in most countries, the less aware of technology people are, and the less they use the internet, the less enthusiastic they are about AI.
I still don't see it. Look at some of the countries with relatively high "personal tech usage" as well as a high "percentage of workers/economy connected to tech": South Korea, Israel, Japan, the US, the UK, the Netherlands. The first three are on the positive end, the next two on the negative end, and the last is in the middle.
"Region of the world" correlation looks a lot stronger than that.
I think interest rates and shareholder pressure were the most immediate causes. In 2021 you could get headcount for trivial projects at many tech companies, and by the end of 2022 you had layoffs and hiring freezes.
What decisions were MBA decisions instead of engineering decisions? It seems like Intel has just made a lot of bad bets, or failed to put its mass behind good ones.
The heights Nvidia has achieved seem incidental and have depended heavily on the transformer/LLM market materializing.
Intel's biggest problem has been management remaining in denial about their serious engineering problems, and believing that they'll have things sorted out in another few quarters. They were years late to taking meaningful action to adjust their product roadmap to account for their ongoing 10nm failures. Putting all their eggs in the 10nm basket wasn't an engineering decision, and keeping them all there after years of being unable to ship a working chip wasn't an engineering decision.
Intel's in a somewhat better place today because, while they continue to insist that their new fab process is just around the corner and will put their foundry back on top, they've started shipping new chips again, using TSMC where necessary.
Stock buybacks and huge sums of capital wasted on mergers and acquisitions (that went nowhere) while not investing in the very expensive EUV fabrication equipment that TSMC had been using for years.
Everything risks aggravating NIMBYism. It's hard to see how housing costs can come down in a lot of cities, simply because housing is seen as an investment, and people won't stand idly by if its value decreases because of policy.
Are you a mathematician? I'm not an expert in the math field, but it seems like they are hitting the same issues everyone else has: current LLMs still more or less need to be supervised by an expert, and struggle to do something actually novel or build out a complicated proof correctly.
I work in a math-heavy applied setting. Randomly hired PhDs also need to be supervised, their end results monitored, and their code reviewed, or they will make lots of mistakes. My view is that if you throw out a problem like "build an optimization model for this kind of problem on this kind of data", LLMs may produce better results.
There's a limit to how much novelty you're going to get from an LLM, especially in areas like programming and math, where they've been heavily RL'd NOT to be novel, even to the extent that the base model supports it, and instead to generate much narrower, more prescribed outputs.
The limit to the novelty you are going to get from an LLM is essentially the "deductive/generative closure" of the training data. To be truly novel and move past the limits of your own past experience requires things like curiosity, continual learning, and the autonomy/agency to explore and learn.
I think it has a key advantage for China specifically, though: it consumes significantly less water, and they have a lot of water-poor territory.
The Oak Ridge experiment ended and not a lot of R&D has been done on molten salt reactors since. It makes sense that China is still basically in the research and testing phase for molten salts.
I think this is the crux of it. The article discusses Ukraine, but before the war they weren't making millions of drones; the private capital wasn't there, and the bureaucracy that coordinates it wasn't primed until the war.
Personally I never thought it was fine, but the alternative tools were all bad in some way that made a plain venv plus requirements files preferable. Poetry started to break that pattern, but I had issues with it. uv is the first one that actually feels good.
The API complexity really threw me when I last tried async Python. It's very different from other async systems, and wildly different from gevent or Twisted, which were popular when I was last writing server-side Python.
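For anyone who hasn't touched it since the gevent/Twisted days, here's a minimal asyncio sketch (the function names are made up for illustration, not from any real codebase). The contrast with gevent is that nothing is implicit: coroutines must be declared with `async def`, blocking points must be `await`ed explicitly, and nothing runs until you start an event loop with `asyncio.run`:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulate an I/O-bound operation; real code would await a socket or HTTP call.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # gather schedules both coroutines concurrently on the same event loop
    # and returns their results in order.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))

# The explicit entry point: unlike gevent's monkey-patching, you must
# hand control to the loop yourself.
results = asyncio.run(main())
print(results)  # → ['a done', 'b done']
```

With gevent, by contrast, you'd monkey-patch the stdlib and keep writing apparently synchronous code, which is exactly why the asyncio surface feels so foreign at first.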