While the Twitter recommendation is strange, the assertion that we will suddenly have leisure time is demonstrably false. For decades, each new technological advance was supposed to let us work half as long because we could get twice as much done. That never happens. The cost of a person was never in how much they could produce but in how much they would demand to do so. If you can do twice as much, your work product becomes half as valuable. Many of the throwaway things we buy every day are cheap only because their production is so heavily automated. If we still had to cook food in a conventional kitchen instead of warming up precooked food, or build furniture with a hammer and hand plane, we would be paying far more than we do today. If anything, the people working those jobs are paid comparatively less now than before automation, because those jobs used to take skill and now anyone can do them. This is why the main advice of the article - do something that can't be automated and learn how to build the automation - is good advice.
Writing the paper is a very small part of the research. It's entirely likely that - like many of their students - they love the research but hate writing papers. They are very different skill sets.
Sadly, yes, it's true. New AI projects are getting funded and existing non-AI projects are getting mothballed. It's very disruptive and yet another sign of the hype being a bubble. Companies are pivoting entirely to it and neglecting their core competencies.
While I agree entirely about what Grok teaches us about alignment, I think the argument that "alignment was never a technical problem" is false. Everything I have ever read about AI safety and alignment has started by pointing out the fundamental problem of deciding what values to align to, because humanity doesn't have a consistent set of values. Nonetheless, there is a technical challenge: whatever values we choose, we need a way to get the models to follow them. We need both. The engineers are solving the technical problem; they need others to solve the social problem.
You assume it is a solvable problem. Chances are that you will have bots following laws (as opposed to moral statements), and each jurisdiction will essentially have a different alignment. So in a socially conservative country, for example, a bot will tell you not being hetero is wrong and report you to the police if you ask too many questions about it, while in a queer-friendly country a bot would not behave like this. A bit like how some movies can only be watched in certain countries.
I highly doubt alignment as a concept works beyond making bots follow laws of a given country. And at the end of the day, the enforced laws are essentially the embodiment of the morality of that jurisdiction.
People seem to live in a fictional world if they believe countries won't force LLM companies to embed the country's morality, whatever it is, into their LLMs. This is essentially what has happened with intellectual property and media, and LLMs likely won't be different.
They do ask. When you set it up, it presents 5 agreements to accept, only 2 of which are required. ACR, voice recognition, and a few other questionable things are covered under the optional agreements. I simply didn't accept them, and those features were disabled.
You can stop it much earlier than this. At setup time it gives you several policies to agree to. Only two of them are required; the rest are optional. The optional ones include Live Plus and several other systems for monitoring and advertising.
The process is reproducible even if the outcome isn't always identical. Outside of computing and mathematics, real-world processes never produce exactly the same output - small variations in size, density, concentration, etc. will occur.
I live in a province in Canada where the electrical system is owned and operated by a crown corporation. They are mandated to maintain a very high uptime and they do through several means including redundancy. Our electrical bills are cheaper than much of the US. It certainly can be done; there are other means than competition to ensure adequate service.
This is exactly where I find myself. I've been asked several times to take on management, but I have no interest in it. I got to be a principal after 18 years of experience by being good at engineering, not management. Like you said, I can and do help with leadership through mentorship, offering guidance and advice, giving presentations on technical topics, and leading technical projects.