For me at least, it's a general dislike of the wider educational system. My parents taught me to read, play chess, multiply, and write in cursive before elementary school. I didn't really learn anything at preschool or kindergarten, and I imagine daycare would have been worse for my educational development. Maybe it's useful for social development? But at least in my case, I was always pretty independent of the other kids (even in kindergarten). Not in an isolated way, I just preferred doing my own thing.
Preschools in the UK have curriculums they have to follow. That includes maths, reading and writing too.
I'm not going to comment on preschools in your country, but in the UK the kids who attended preschool are IN GENERAL the stronger students, socially, emotionally, and academically, when it comes to starting infants/elementary school. Particularly in the less affluent areas. Though there might be some selection bias here too, due to the kinds of parents who can send their child to daycare versus those who cannot.
In the less affluent areas, I'd expect children not attending daycare to just not be getting anything at home. Presumably their parents are both working and cannot afford daycare. In the more affluent areas, I'd expect children to skip daycare only when their parents prioritize their children over their jobs, and so they'd be getting much more positive attention than in a daycare. But, of course, we'd have to see a study differentiated by socioeconomic status to see what is actually the case.
We prioritized our kids. In the end, what worked better for our kids was for us to earn enough income to send them to really nice daycare/preschool for several hours a day.
Lots of two-parent working families do the maths, and realize they would pay more in childcare than the income from a second job. This incentivizes one of them to stay at home. Here, the incentive is gone. This is worse for the economy and probably the family.
Suppose childcare is $15k/year and you work minimum wage making less than $15k/year. Staying home means there's less wealth produced overall, just more of it in your pocket. But you probably don't take home all the wealth you create, so working can still be a net gain for the economy even when it's a net loss for your household. It is still worse for the economy, but not for that reason: probably because labor has a backward-bending supply curve, and most people are already working more hours than is optimal. As another commenter said, it would probably be better for the economy to move to a 30-hour work week.
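To make the household-vs-economy split concrete, here's a toy version of that arithmetic (all numbers made up; the 70% "capture rate" is purely an assumption about how much of their output workers are paid):

```python
# Made-up numbers: a second earner grossing $14k/yr, childcare at
# $15k/yr, and wages capturing only ~70% of the value the work creates.
wage = 14_000          # second earner's pay
childcare = 15_000     # cost of daycare
capture_rate = 0.70    # share of created value paid out as wages

value_created = wage / capture_rate        # ~$20k of total output
household_net = wage - childcare           # -$1k: the family loses money
economy_net = value_created - childcare    # +$5k: total output still rises

print(f"household: {household_net:+}, economy: {economy_net:+.0f}")
```

So keeping a parent home can be individually rational even in cases where both parents working would raise total output.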
Not the guy you replied to, but here are some improvements that feel obvious:
1. Memory indexing. It's a pain to avoid bank conflicts and to implement cooperative loading on transposed matrices. To improve this, (1) pop up a warning when bank conflicts are detected, (2) have the compiler solve cooperative loading. It wouldn't be too hard to have a second form of indexing, memory_{idx}, for which the compiler solves a linear programming problem to maximize throughput (do you spend more thread cycles cooperative loading, or are bank conflicts fine because you have other things to work on?). There's a sketch of the conflict check after this list.
2. Why is there no warning when shared memory is uninitialized? It isn't hard to check whether you're reading an index that might not have been assigned a value (second sketch below). The compiler should emit a warning and assign it 0.0, or maybe even just throw an error.
3. Timing - doesn't exist. Pretty much the gold standard is to run your kernel 10_000 times in a loop and subtract the time from before and after the loop (third sketch below). This isn't terribly important, I'm just getting flashbacks to before I learned `timeit` was a thing in Python.
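For item 1, a minimal sketch of what a conflict warning would have to compute, assuming 32 banks with one word per bank (NVIDIA-style); `word_indices` is a hypothetical trace of one warp's shared-memory access:

```python
from collections import Counter

NUM_BANKS = 32  # typical warp-width bank count on NVIDIA hardware

def bank_conflict_degree(word_indices):
    # How many serialized passes the access needs: the worst-case
    # number of addresses mapping to the same bank.
    banks = Counter(i % NUM_BANKS for i in word_indices)
    return max(banks.values())  # 1 means conflict-free

# Reading column 5 of a 32x32 tile: every thread hits the same bank.
print(bank_conflict_degree([row * 32 + 5 for row in range(32)]))  # 32
# Same column read from a padded 33-wide tile: conflict-free.
print(bank_conflict_degree([row * 33 + 5 for row in range(32)]))  # 1
```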
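For item 2, the check is basically reads-before-writes over the access order; here's a toy version over a hypothetical (op, index) event trace:

```python
def uninitialized_reads(trace):
    # trace: sequence of ("store", idx) / ("load", idx) events in
    # program order for one shared-memory buffer.
    written, warnings = set(), []
    for op, idx in trace:
        if op == "store":
            written.add(idx)
        elif idx not in written:
            warnings.append(f"read of shared[{idx}] before any write")
    return warnings

print(uninitialized_reads([("store", 0), ("load", 0), ("load", 1)]))
# ['read of shared[1] before any write']
```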
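And for item 3, the loop in question, written generically (`run_kernel` is a stand-in for launching your kernel; on a real GPU you'd also need to synchronize the device before and after the loop, or you mostly measure launch overhead):

```python
import time

def time_kernel(run_kernel, iters=10_000):
    run_kernel()  # warm-up, so compilation/caching isn't in the measurement
    start = time.perf_counter()
    for _ in range(iters):
        run_kernel()
    elapsed = time.perf_counter() - start
    return elapsed / iters  # average seconds per launch
```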
This is interesting to think about. It's basically just birds and primates. Birds have an ancient evolutionary tree as they are dinosaurs, which did actually walk on two legs. But the gap between bipedal dinos and bipedal primates is a couple hundred million years. So yeah, a pretty long time.
This makes me think something else, though. Once we were able to reason about the physics behind the way things can move, we invented wheels. From there it's a few thousand years to steam engines and a couple hundred more years to jet planes and space travel.
We may have needed a billion years of evolution from a cell swimming around to a bipedal organism. But we are no longer speed limited by evolution. Is there any reason we couldn't teach a sufficiently intelligent disembodied mind the same physics and let it pick up where we left off?
I like the notion of the LLM's understanding being "shadows on the wall" in Plato's cave metaphor, and language may be just that. But math and physics can describe the world much more precisely, and if you pair them with the linguistic descriptors, a wall shadow is not very different from what we perceive with our own senses and learn to navigate.
Note that wheels, steam engines, jet planes, spaceships wouldn't survive on their own in nature. Compared to natural structures, they are very simple, very straightforward. And while biological organisms are adapted to survive or thrive in complicated, ever-changing ecosystems, our machines thrive in sanitized environments. Wheels thrive on flat surfaces like roads, jet planes thrive in empty air devoid of trees, and so on. We ensure these conditions are met, and so far, pretty much none of our technology would survive without us. All this to say, we're playing a completely different game from evolution. A much, much easier game. Apples and oranges.
As for limits, in my opinion, there are a few limits human intelligence has that evolution doesn't. For example, intent is a double-edged sword: it is extremely effective if the environment can be accurately modelled and predicted, but if it can't be, it's useless. Intelligence is limited by chaos, and the real world is chaotic: every little variation will eventually snowball into large-scale consequences. "Eventually" is the key word here, as it takes time, and different systems have different sensitivities, but the point is that every measurement has a half-life of sorts. It doesn't matter if you know the fundamentals of how physics works; it's not like you can simulate physics, using physics, faster than physics. Every model must be approximate and therefore has a finite horizon in which its predictions are valid (a toy demonstration below). The question is how long. The better we are at controlling the environment so that it stays in a specific regime, the more effective we can be, but I don't think it's likely we can do this indefinitely. Eventually, chaos overpowers everything and nothing can be done.
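The "finite horizon" point shows up even in the simplest chaotic system. This sketch (my numbers, nothing rigorous) runs the logistic map from two initial conditions differing by one part in ten billion:

```python
# The logistic map at r=4 is chaotic: a tiny initial difference grows
# roughly exponentially, so prediction fails after a few dozen steps.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-10
for step in range(1, 100):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        print(f"trajectories diverge after {step} steps")
        break
```

Measuring the starting state ten times more precisely buys you only a handful of extra steps of valid forecast.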
Evolution, of course, having no intent, just does whatever it does, including things no intelligence would ever do because it could never prove to its satisfaction that it would help realize its intent.
Okay, but (1) we don't need to simulate physics faster than physics to make accurate-enough predictions to fly a plane, in our heads, or build a plane on paper, or to model flight in code. (2) If that's only because we've cleared out the trees and the Canada Geese and whatnot from our simplified model and "built the road" for the wheels, then necessity is also the mother of invention. "Hey, I want to fly but I keep crashing into trees" could lead an AI agent to keep crashing, or model flying chainsaws, or eventually something that would flatten the ground in the shape of a runway. In other words, why are we assuming that agents cannot shape the world (virtual, for now) to facilitate their simplified mechanical and physical models of "flight" or "rolling" in the same way that we do?
Also, isn't that what's actually scary about AI, in a nutshell? The fact that it may radically simplify our world to facilitate e.g. paper clip production?
> we don't need to simulate physics faster than physics to make accurate-enough predictions to fly a plane
No, but that's only a small part of what you need to model. It won't help you negotiate a plane-saturated airspace, or avoid missiles being shot at you, for example, but even that is still a small part. Navigation models won't help you with supply chains and acquiring the necessary energy and materials for maintenance. Many things can -- and will -- go wrong there.
> In other words, why are we assuming that agents cannot shape the world
I'm not assuming anything, sorry if I'm giving the wrong impression. They could. But the "shapability" of the world is an environment constraint; it isn't fully under the agent's control. To take the paper clipper example, it's not operating with the same constraints we are. For one, unlike us (notwithstanding our best efforts to do just that), it needs to "simplify" humanity. But humanity is a fast, powerful, reactive, unpredictable monster. We are harder to cut than trees. Could it cull us with a supervirus, or by destroying all oxygen, something like that? Maybe. But it's a big maybe. Such brute force requires a lot of resources, the acquisition of which is something else it has to do, and it has to maintain supply chains without accidentally sabotaging them by destroying too much.
So: yes. It's possible that it could do that. But it's not easy, especially if it has to "simplify" humans. And when we simplify, we use our animal intelligence quite a bit to create just the right shapes. An entity that doesn't have that has a handicap.
>Also, isn't that what's actually scary about AI, in a nutshell? The fact that it may radically simplify our world to facilitate e.g. paper clip production?
No, it's more about massive job losses and people left to float alone, mass increase in state control and surveillance, mass brain rot due to AI slop, and full deterioration of responsibility and services through automation and AI as a "responsibility shield".
Something that isn’t obvious when we’re talking about the invention of the wheel: we aren’t actually talking about the round shape thing, we’re actually talking about the invention of the axle which allowed mounting a stationary cart on moving wheels.
It wasn't actually just terrain. It was the availability of draft animals, climate conditions and, most importantly... economics.
Wheeled vehicles aren't inherently better in a natural environment unless they're more efficient economically than the alternatives: pack animals, people carrying cargo, boats, etc.
South America didn't have good draft animals, and lots of Africa didn't have the proper economic incentives: the Sahara had bad surfaces where camels were absolutely better than carts, and sub-Saharan Africa had climate, terrain, tsetse flies and whatnot that made standard pack animals economically inefficient.
Humans are smart and lazy; they will do the easiest thing that lets them achieve their goals. This sometimes leads them to local maxima. That's why many "obvious" inventions took thousands of years to create (the cotton gin, for example).
Yes, only humans, birds, sifakas, pangolins, kangaroos, and giant ground sloths. Only those six groups of creatures, and various lizards including the Jesus lizard which is bipedal on water, just those seven groups and sometimes goats and bears.
I get what you mean; that's why the "basically" is there. Most of the animals in your list, kangaroos and some lemurs being the exceptions, do not move around primarily as bipeds. The ability to walk on two legs occasionally is different from genuinely having two legs and two arms.
Talking about "time to evolve something" seems patently absurd and unscientific to me. All of nature evolved simultaneously. Nature didn't first make the human body and then go "that's perfect for filling the dishwasher, now to make it talk amongst itself" and then evolve intelligence. It all evolved at the same time, in conjunction.
You cannot separate the mind and the body. They are the same physiological and material entity. Trying anyway is of course classic western canon.
>Nature didn't first make the human body and then go "that's perfect for filling the dishwasher, now to make it talk amongst itself" and then evolve intelligence. It all evolved at the same time, in conjunction.
Nature didn't make decisions about anything.
But it also absolutely didn't all evolve "at the same time, in conjunction" (if by that you mean all features, regarding body and intelligence, at the same rate).
>You cannot separate the mind and the body. They are the same physiological and material entity
The substrate is. That doesn't mean the nature of abstract thinking is the same as the nature of the body, in the same way that software-as-algorithm is not the same as hardware, even if it can only run on hardware.
But to the point: this is not about separating the "mind and the body". It's about how you can have the humanoid form and all the typical human body functions for millions of years before you get human-level intelligence, which only arrived after much later evolution.
>Trying anyway is of course classic western canon.
It's also classic eastern canon, and several others besides.
> The substrate is. Doesn't mean the nature of abstract thinking is the same as the nature of the body, in the same way the software as algorithm is not the same as hardware, even if it can only run on hardware.
In this you are positing the existence of a _soul_ that exists separately from the body and is portable amongst bodies, analogous to how an algorithm (disembodied software) exists outside of the hardware and is portable amongst it (by embodying it as software).
I don't agree with that at all, but it's impossible to know if you're right. I can at least understand why you have a hard time with my argument, and with the east-west difference in tradition, if the existence of a soul is that "obvious" to you.
I think whether it's "portable amongst bodies" is orthogonal. A specific consciousness of person X could very well only exist within the specific body of person X, and my argument still remains the same (not saying it's right, just that it's not premised on a soul existing and being independent/portable).
The argument is that whether consciousness is independent of a specific body or not, it's still of a different nature.
The consciousness part uses the body (e.g. the nervous system, neurons, etc.), but its nature is the informational exchange, and its essence is not in the construction of the body as a physical machine (though that's its base) but in the stored "weights" encoding memories and world-knowledge.
Same as how, with a CPU, the specific program it runs is defined not by the CPU but by the memory contents (data, variables, and logic code). It might as well run on an abstract CPU, or one made of water tubes or billiard balls.
Of course in our case, the consciousness runs on a body - and only a specific body - and can't exist without one (the same way a program can't exist as a running program without a CPU). But that doesn't mean it's of the same nature as the body - just that the body is its substrate.
> The strategic thinking that goes into longer-horizon tasks may be something LLMs aren’t as good at, which aligns with why entry-level workers are more affected than experienced workers.
I think the article is talking in generalities, so on average entry-level software engineers have less experience with long-horizon tasks (e.g. months-long development), though there are definitely the exceptions that prove this rule.
I wonder if there's a way to do something similar to Rectangular Surface Parameterization[1] with voxels. It would allow you to get pretty even-volumed voxels, and also simplify vertex identification (same three coordinates, nonlinear connection).
They're used as a small regularization term in image/audio decoders. But GANs have a different learning dynamic (Z6 rather than Z1 or Z2) which makes them pretty unstable to train unless you're using something like Bayesian neural networks, so they fell out of favor for the entire image generation process.
Is the point of the warning to avoid liability or to actually inform the users? If you tell people everything causes cancer (instead of saying so only when you've verified it does), soon enough they're going to stop caring when you say stuff like "don't eat asbestos, that causes cancer". I think a "checkmark" system makes more sense: for verified accounts/developers, put a checkmark near their name, and for unverified ones, have nothing. There's no reason to cause alarm when 99% of the time the alarm is unfounded.