Hacker News | g42gregory's comments

Isn’t it a bit like saying, “X% of startups are just writing code”?


In the US, free speech protections are very selective (depending on what you're planning to say). Much of the rest of the Western world doesn't even have laws protecting free speech. No need to worry.


Regardless of the legal protections of free speech, the general notion worldwide is that corporations are allowed to create and post whatever nonsense they feel like with utter impunity. It's one thing for a single person to write the Time Cube; it's another entirely to promulgate fact-checker robots which completely obscure the real facts with utter bullshit. We could all see this coming from miles away. I see a definite need to worry.


Unfortunately, this raises an obvious question:

If they sensor something like this, how could we trust platforms with the actually important subjects?


We can’t anymore. Simple as that.


I agree with this except for the "anymore" part. We never could trust them. It just wasn't as obvious before as it is now.


We will anyway, because the "put a camera directly on those in power" approach à la C-SPAN is boring.

Most Americans literally can’t imagine news as anything other than entertainment.


We put way too much faith in them. It's easy to fake authority when your substance is virtual.


Suppose we hadn't done so; what alternative method of disseminating information might we have used, that would have had within a few orders of magnitude of the same reach?


The implication here is that YouTube enabled the reach it got; whereas in reality the reach was induced because of the faith we put in it. Had we not done so, then whatever alternative method of communication we did put our faith in - like blog posts, or self-hosting videos - would have had the same reach.


lol, the evening news was always a laugh if you knew anything about the subject matter.


> trust platforms

Framing it in terms of trust is already problematic.

We don't trust the NYTimes or the Washington Post; they are sources of information that need to be taken with shovels of salt and require additional research to get to anything trustworthy. And we always understood that was their role.

We don't trust supermarkets or retailers to give us important pricing information, we do the research to get anything actionable.

Why is trust involved for YouTube?


Because unlike the NYT or the Washington Post, anybody can upload a video in seconds, which implies a reasonable level of freedom of speech.


How does freedom of speech lead to trust? It's more the opposite: when it's a free-for-all, anyone can lie and have their lie amplified.


Because in order to have freedom of speech, you also need freedom for people to say dumb and abhorrent things. There are some clearly bad things like hate speech, and grey areas like Covid conspiracies; maybe those should be banned. But removal of W11 bypass videos sends a clear message that Google is the lapdog of other big businesses.


Exactly.

And it is why total freedom of speech on a platform does not mean we can trust it. Maybe even the opposite, because people who tell a lie are more motivated (by money or whatever).

I am not justifying the W11 video removal; I'm just saying that thinking YouTube is trustworthy because it's open to everybody is a mistake.


Lies and other harmful speech are against freedom of speech.


That's an interesting way of looking at things. Now we just need to have some sort of arbiter who decides what is harmful and what is true eh?


Why not just a quick duel?


I'll go arm the ICBM


We can't. From COVID to wars, YouTube is like public access TV from the 80s with scam preachers. We have to take it with a bucket of salt.


We can't and we shouldn't. These people only care about making more money, even if it means teenagers contracting diseases in the process. They then use that money to shape public opinion about themselves. Societal norms should change so that these people become more miserable the more successful they are, IMHO.


I'm not even sure I know who Billie Eilish really is but she was all over Reddit for telling billionaires to donate their money.

That is, more or less, the charitable and responsible approach to being ultra-rich, one which has disappeared in this century.

I see the people in charge of these big corporations as lizards, given that every decision they make seems to be anti-humanity. We should cherish non-profits, small businesses, having a good and boring life, doing normal things. Instead we idolise being successful, rich, or famous. What a stupid system…


They removed hundreds of videos documenting Israel's human rights violations.

The answer is no, we can't.


*censor


This implies we could ever trust them.


Did we ever trust them?


You can’t.


like they did during COVID


You can't, and this was readily apparent in 2020 with Covid. Even doctors presenting factual information got censored and de-platformed by YouTube.

The only real competing video platform that promises no censorship is Rumble ( https://rumble.com ), but it has a very right-wing slant due to conservatives flocking to it during all the Covid-era social media censorship.


Yeah the moment they started I knew it was doomed to fail. Get it wrong once and your credibility is ruined. They should have never tried to censor content outside of what is legally required and therefore defined.


I kind of agree, but laws vary from country to country. It's quite a hassle to know what is legal in one country and not in another.

Take freedom of speech, for instance: half the things you can say in the USA would be deemed hate speech in Europe.


Society is doomed because we stopped silencing disinformation peddlers. We know what happens when Nazis are allowed to spread propaganda freely - because that happened one time in Germany, and we saw the results. We don't know what happens when antivaxxers are allowed to spread propaganda freely, but it's not hard to guess, and measles cases are on the rise. You can argue it's not YouTube's problem to solve, but nobody else is solving it, so it's hard for me to blame them for trying.

There's also this annoying pattern where 98% of the complaints about censorship are from people who are mad that the objectively stupid and dangerous stuff they were trying to profit from got censored, so it becomes a "boy who cried wolf" situation where any complaint about internet censorship is ignored on the assumption it's one of those. (What if there really is a Nigerian prince who needs my help, and I don't read his email?)

This time, though... Society is not being destroyed by people pirating Windows 11. That is entirely different from censoring things that destroy society, and they don't have a good excuse.


>Society is doomed because we stopped silencing disinformation peddlers. We know what happens when Nazis are allowed to spread propaganda freely - because that happened one time in Germany, and we saw the results.

That one time in Germany was actually an 80-year-long ongoing event in central Europe. Hitler didn't wake up one day with a novel idea about the Jews and the place of the German people; these were foundational ideas in the culture at least as far back as Wagner.

If anything, this pro-censorship argument is self-defeating, because the "disinformation" peddlers who were silenced in the Second Reich were generally those of the liberal, Anglophile, and Francophile variety, those who would seek to decenter the goal of a collective German destiny.

Censorship is only ever good if you find yourself part of the group that would be doing the censoring.


> promises no censorship... has a very right-wing slant

https://slatestarcodex.com/2017/05/01/neutral-vs-conservativ...

> The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.


If you want to avoid censorship, self-host PeerTube and have peace of mind.


That's just self-censorship, since no one will see your videos there.


You can do both.


I looked at the front page alone, and it's full of right-wing hot takes and neo-Nazis. If a platform wants to accept white supremacists, that's one thing. When they're right on its front page, though, they're being actively promoted.

Rumble isn't going to save the internet.


Right, it is explicitly a neo-Nazi platform


>Right, it is explicitly a neo-Nazi platform

We call those "free speech" platforms nowadays, because apparently the only free speech is Nazi speech.


It's because the only valid argument Nazis have for why they should be allowed to broadcast what they have to say is that (in most jurisdictions) it's not literally illegal to.


Odysee is similar, but maybe with more of an anarchist/conspiracy-theory slant than Rumble.


Does anybody know the memory bandwidth?


It is already illegal to use images of somebody's likeness for commercial purposes, or for purposes that harm their reputation, could be confusing, etc. Basically, the only times you could use these images are for some parodies, for public figures, and under fair use.

Now OpenAI will be lecturing their own users while expecting those users to make them rich. I suspect the users will find it insulting.

Generation for personal use is not illegal, as far as I know.


You can use the images to harm someone's reputation legally, as long as you don't represent them as real.


I would also add that many (most?) companies/entities do not sell software but have large IT departments that could write software for internal consumption. Think Exxon, BP, Caterpillar, airlines, government labs/agencies, the DOD, etc.

Internally, they could actually write 1,000x more software, and it would be absorbed by internal customers. They would buy less packaged software from tech firms (unless it's infrastructure); internally, they could keep the same headcount or more, as AI allows them to write more software.


I have to agree with this assessment. I am currently going at a rate of 300-400 lines of spec per 1,000 LOC with Claude Code. Specs are AI-assisted also, otherwise you might go crazy. :-) Plus 2,000+ lines of AI-generated tests. Pretty restrictive, but then it works just fine.


Yes, you could write the code yourself, but keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years.

I think better advice would be to learn to read/review an inordinate amount of code, very fast. Also, focus heavily on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc.

Kinda the opposite advice from the blog. :-)

Edit: Somebody pointed out that, in order to read/review code well, you have to write it. Very true. It raises the question of how you acquire/extend your skills in the age of AI coding assistance. Not sure I have an answer. Claude Code now has /output-style: Learning, which forces you to write part of the code. That's a good start.


> keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years

Sure thing. We've been "6 months" away from AI taking our jobs for years now.


Not saying AI will take anybody's job. It's just that the nature of the job is changing, and we have to acknowledge that. It will still be competitive. It will still require strong SE/CS knowledge and skills. It will still require/favor CS/EE degrees, which NVIDIA's CEO told us not to get anymore. :-)

Also, it looks like OpenAI and Anthropic have completed their fundraising cycles. So AGI "has been cancelled" for now. :-)


>Yes, you could write the code yourself, but keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years.

I'm not saying that it definitely isn't going to happen, but there is a loooong way to go for non-FAANG medium and small companies to let their livelihoods ride on AI completely.

>I think a better advice would be to learn reading/reviewing an inordinate amount of code, very fast. Also heavy focus on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc...

If we get to a point in 1-2 years where AI is vibe-coding at a high mostly error-free level, what makes you think that it couldn't review code as well?


I can't see into the future, but I think that AI, at any level, will not excuse people from the need to acquire top professional skills. Software engineers will need to know Software Engineering and CS, AI or not. Marketers will have to understand marketing, AI or not. And so on... I could be wrong, but that's what I think.

AI-assistance is a multiplier, not an addition. If you have zero understanding before AI, you will get zero capabilities with AI.


Nobody has any idea what will happen in 1–2 years. Will AI still be just as incompetent at writing code as it is today? Will AI wipe out biological humanity? Nobody has any idea.


Very true. One thing we could do is take a positive/constructive view of the future and drive towards it. People could all lose their jobs, OR we could write 1,000x more software. Let's give corporate developers tools to write 1,000x more software, instead of buying it from outside vendors, for example.


It might work!


Correct, but each may be using a different 20%.

In enterprise software (think ERP), each user may be using only 0.01% of the overall functionality, and the entire company may be using only 1%.


I ended up not using this option anyway. I am using B-MAD agents for planning, and they get into a long-running planning stream, where they need permission to execute steps. So you end up running the planning in "accept edits" mode.

I use Opus to write the planning docs for 30 min, then use Sonnet to execute them for another 30 min.

