Discrimination along protected attributes such as gender would be highly illegal though, so no doubt you’d have tons of evidence to present beyond “gossipy HR ladies”.
I didn't mention gossip at all. Are you pretending to quote something I never said, to just perfectly illustrate the bad faith nonsense that is ever-injected into even simple conversations about this topic?
It might be worth Googling James Damore as an early example of this chilling effect.
Yeah “society” had millennia of that. It’s quite telling that perhaps less than a decade of taking women seriously led to a vitriol-filled backlash full of Tates, Trumps and the manosphere.
It’s also quite telling that your main complaint is Disney superhero movies. It’s difficult to think of something more juvenile and unimportant.
> It’s quite telling that perhaps less than a decade of taking women seriously led to a a vitriol filled backlash full of Tates, Trumps and the manosphere.
1. It's been about 30 years since the "strong independent women" meme first started in popular media.
2. Where is the vitriol and backlash in my post to which you are referring?
Your response looks like a canned one that can be inserted into any discussion about males.
> It's been about 30 years since the "strong independent women" meme first started in popular media.
Much longer than that. While there was significant pre-war feminism, it really took off in the 1960s. Perhaps what people mean is a sort of post-"Bechdel test" world, where people will be sharply criticized if they make a piece of media that only has (properly characterized) male characters.
I see it as a co-existence problem. Trying to insist on male-only spaces or male-only values isn't going to fly any more. A lot of traditional masculinity is framed around being "not a woman", an inherently denigratory concept. It needs a programme that is (a) positive and (b) a concept of personhood and value that's not tied to gender.
lol Title IX was only in the 70s. Post-Bechdel whatever, it was only a handful of years ago that women could finally speak out en masse about being sexually assaulted on film and TV sets.
> it was only a handful of years ago that women could finally speak out en masse about being sexually assaulted on film and TV sets.
That wasn't a women-only problem, IIRC. The Hollywood casting couch (and similar problems) was used against both men and women. Some actors (like Kevin Spacey) were called out/blackballed for unwanted sexual attention/acts that they perpetrated against men.
As far as women being allowed to speak out - everyone is allowed to speak out, but the rich and influential silence people whom they have left aggrieved. These include both men and women.
To put things in perspective, you joined a thread discussing a singular male-only problem, and dragged female issues into it, which, on closer inspection, turned out to be not female-exclusive anyway.
Reuters reported that ByteDance (TikTok parent) in Q1 2025 had $48b in revenue.[0] They should surpass $200b for 2025 which would make them bigger than Meta.
In other words, Tiktok has already caught up with Instagram in terms of revenue.
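A rough back-of-envelope annualization of the Reuters figure. The flat projection is just Q1 times four; the 5% quarter-over-quarter growth rate is a hypothetical illustration, not a reported number:

```python
q1_revenue = 48e9  # ByteDance Q1 2025 revenue per Reuters (USD)

# Flat annualization: assume every quarter matches Q1
flat = q1_revenue * 4  # $192b

# With a hypothetical 5% quarter-over-quarter growth rate
growth = sum(q1_revenue * 1.05**q for q in range(4))

print(f"flat annualized: ${flat / 1e9:.0f}b")       # $192b
print(f"with 5% q/q growth: ${growth / 1e9:.0f}b")  # ~$207b
```

So even modest sequential growth is enough to clear the $200b mark mentioned above.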
Because TikTok is free, had no competitors, and has network effects given that it is a social media platform. ChatGPT already depends on subscription income, has to compete with companies that can offer the same service for free, and has no network effects because you're literally talking to a commodified bot.
> TikTok [..] had no competitors and network effects
TikTok, or rather ByteDance, acquired Musical.ly, a competitor, to absorb its user base and jump-start their network. There have also been a lot of short-form video platforms before (e.g., Vine) and during TikTok's growth (Instagram Reels, YT Shorts).
I agree with the gist of your statement, but the same could've been said for a number of new companies against entrenched players. Heck, I'm pretty sure Google was that company against the existing search engines.
You'll probably argue that this time it's different but no one knows what's different until it's already changed.
So few people understand how advertising on the internet works, and that, I guess, is why Google and Meta basically print money.
Even here the idea that it’s as simple as “just sell ads” is utterly laughable and yet it’s literally the mechanism by which most of the internet operates.
Anthropic have consistently shown they don’t know shit about anything but training LLMs. Why should we consider their political/sociological/ethical work to be anything other than garbage with no scholarly merit?
Yes, that is a serious skill. How many of the woes that we see are because people don't know what they want, or are unable to describe it in such a way that others understand it.
I believe "prompt engineer" properly conveys how complex communication can be when interacting with a multitude of perspectives, world views, assumptions, presumptions, etc.
I believe it works well to counter the over-confidence people have from not paying attention to the gaps between what is said and what is meant.
Yes, obviously a role involving complex communication while interacting with a multitude of perspectives, world views, assumptions, presumptions, etc needs to be called "engineer."
That is why I always call technical writers "documentation engineers," why I call diplomats "international engineers," why I call managers "team engineers," and why I call historians "hindsight engineers."
I believe you're joking here, but I do think it'd be useful to have some engineering background in each of these domains.
The number of miscommunications that happen in any domain, due to oversight, presumptions and assumptions is vast.
At the very least the terminology will shape how we engage with it, so having an aspirational title like prompt engineer, may influence the level of rigor we apply to it.
I don't think that's the right direction to go in.
Despite needing much knowledge of how a plane's inner workings function, a pilot is still a pilot and not an aircraft engineer.
Just because you know how human psychology works when it comes to making purchase decisions and you are good at applying that to sell things, you're not a sales engineer.
Giving something a fake name to make it seem more complicated or aspirational than it actually is makes you a bullshit engineer, in my opinion.
I think what you're describing is more commonly filed under epistemology, within philosophy, and I agree that it would be a useful background in each of those domains, but for some reason in the last few decades we have downgraded the humanities as less useful.
Most designers can't, either. Defining a spec is a skill.
It's actually fairly difficult to put to words any specific enough vision such that it becomes understandable outside of your own head. This goes for pretty much anything, too.
… sure … but also no. For example, say I have an image. 3 people in it; there is a speech bubble above the person on the right that reads "I'A'T AY RO HERT YOU THE SAP!"¹
I give it,
Reposition the text bubble to be coming from the middle character.
DO NOT modify the poses or features of the actual characters.
Now sure, specs are hard. Gemini removed the text bubble entirely. Whatever, let's just try again:
Place a speech bubble on the image. The "tail" of the bubble should make it appear that the middle (red-headed) girl is talking. The speech bubble should read "Hide the vodka." Use a Comic Sans like font. DO NOT place the bubble on the right.
DO NOT modify the characters in the image.
There's only one red-head in the image; she's the middle character. We get a speech bubble, correctly positioned, but with a sans-serif, Arial-ish font, not Comic Sans. It reads "Hide the vokda" (sic). The facial expression of the middle character has changed.
Yes, specs are hard. Defining a spec is hard. But Gemini struggles to follow the specification given. Whole sessions are like this, an absolute struggle to get basic directions followed.
You can even see here that I & the author have started to learn the SHOUT AT IT rule. I suppose I should try more bulleted lists. Someone might learn, through experimentation "okay, the AI has these hidden idiosyncrasies that I can abuse to get what I want" but … that's not a good thing, that's just an undocumented API with a terrible UX.
(¹because that is what the AI on a previous step generated. No, that's not what was asked for. I am astounded TFA generated an NYT logo for this reason.)
You're right, of course. These models have deficiencies in their understanding related to the sophistication of the text encoder and its relationship to the underlying tokenizer.
Which is exactly why the current discourse is about 'who does it best' (IMO, the flux series is top dog here. No one else currently strikes the proper balance between following style / composition / text rendering quite as well). That said, even flux is pretty tricky to prompt - it's really, really easy to step on your own toes here - for example, by giving conflicting(ish) prompts "The scene is shot from a high angle. We see the bottom of a passenger jet".
Talking to designers has the same problem. "I want a nice, clean logo of a distressed dog head. It should be sharp with a gritty feel". For the person defining the spec, they actually do have a vision that fits each criteria in some way, but it's unclear which parts apply to what.
at least then, we had hard overrides that were actually hard.
"This got searched verbatim, every time"
W*ldcards were handy
and so on...
Now, you get a 'system prompt' which is a vague promise that no really this bit of text is special you can totally trust us (which inevitably dies, crushed under the weight of an extended context window).
Unfortunately(?), I think this bug/feature has gotta be there. It's the price for the enormous flexibility. Frankly, I'd not be mad if we had less control - my guess is that in not too many years we're going to look back on RLHF and grimace at our draconian methods. Yeah, if you're only trying to build a "get the thing I intend done" machine I guess it's useful, but I think the real power in these models is in their propensity to expose you to new ideas and provide a tireless foil for all the half-baked concepts that would otherwise not get room to grow.
Case in point: the final image in this post (the IP bonanza) took 28 iterations of the prompt text to get something maximally interesting, which is why that one is very particular about the constraints it invokes, such as specifying "distinct" characters and specifying they are present from "left to right", because the model kept exploiting that ambiguity.
Hey! To the author: thank you for this post! QQ: any idea roughly how much this experimentation cost you? I'm having trouble parsing their image generation pricing; I may just not be finding the right table. I'm just trying to understand: if I do ~50 iterations at the quality in the post, how much is that going to cost me?
All generations in the post are $0.04/image (Nano Banana doesn't have a way to increase the resolution yet), so you can do the math and assume that you can generate about 24 images per dollar: unlike other models, Nano Banana does charge for input tokens, but it's negligible.
Discounting the testing around the character JSON which became extremely expensive due to extreme iteration/my own stupidity, I'd wager it took about $5 total including iteration.