
I know I'm an outlier on HN, but I really don't care if AI was used to write something I'm reading. I just care whether the ideas are good and clear. And if we're talking about work output, 99% of what people were putting out before AI wasn't particularly good. In my genuine experience, AI's output is better than things people I worked with would spend hours and days on.

I feel like more time is wasted trying to catch your coworkers using AI than just engaging with the plan. If it's a bad plan, say that and make sure your coworker is held accountable for presenting a bad plan. But it shouldn't matter if he gave 5 bullets to ChatGPT that expanded it to a full page with a detailed plan.



> But it shouldn't matter if he gave 5 bullets to ChatGPT that expanded it to a full page with a detailed plan.

The coworker should just give me the five bullet points they put into ChatGPT. I can trivially dump it into ChatGPT or any other LLM myself to turn it into a "plan."


I feel the same way. If all one is doing is feeding stuff into AI without doing any actual work themselves, just include the prompt and workflow that got the AI to spit this content out. It might be useful for others to learn how to use these LLMs, and it shows the train of thought.

I had a coworker schedule a meeting to discuss the technical design of an upcoming feature. I didn't have much time, so I only checked the research doc moments before the meeting. It was 26 pages long with over 70 references, of which 30+ were Reddit links. This wasn't a huge architectural decision, so I was dumbfounded; it seemed he had barely edited the document to his own preferences. The actual meeting was maybe the most awkward one I've ever attended: we were expected to weigh in on the options presented, but no one had opinions on the whole thing, not even the author. It was just too much of an AI document to even process.


If ChatGPT can make a good plan for you from 5 bullet points, why was there a ticket for making a plan in the first place? If it makes a bad plan, then the coworker submitted a bad plan, and there are already avenues for when coworkers do bad work.


How do you know the coworker didn't bully the LLM for 20 minutes to get the desired output? It often isn't trivial to one-shot a task unless it's very basic and you don't care about details.

Asking for the prompt is also far more hostile than your coworker providing LLM-assisted word docs.


Honestly, if you have a working relationship and communication norms where that's expected, I agree: just send the 5 bullets.

In most of my work contexts, people want more formal documents with clean headings and titles, and detailed risks, even if they're the same risks we've put on every project.


Agreed! I've reached the conclusion that a lot of people have completely misunderstood why we work.

It's all about the utility provided. That's the only thing that matters in the end.

Some people seem to think work is an exchange of suffering for money, and omg some colleagues are not suffering as much as they're supposed to!

The plan (or any other document) has to be judged on its own merits. Always. It doesn't matter how it was written. It really doesn't.

Does that mean AI usage can never be problematic? Of course not! If a colleague feeds their tasks to an LLM, never does anything to verify quality, and frequently submits poor-quality documents for colleagues to verify and correct, that's obviously bad. But think about it: a colleague who submits poor-quality work is problematic regardless of whether they wrote it themselves or had an AI do it.

A good document is a good document. And a bad one is a bad one. It doesn't matter if it was written using vim, Emacs, or Gemini 3.


Ever since some non-native-English-speaking people at my company started using LLMs, I've found it much easier to interact and communicate with them in Jira tickets. The LLM conveys what they intend to say more clearly and comprehensively. It's obviously an LLM that's writing, but I'm overall more productive and satisfied talking to the LLM.

If it's fiction writing or otherwise an attempt at somewhat artful prose, having an LLM write for you isn't cool (both due to stolen valor and the lame, trite style all current LLMs output), but for relatively low-stakes white collar job tasks I think it's often fine or even an upgrade. Definitely not always, and even when it's "fine" the slopstyle can be grating, but overall it's not that bad. As the LLMs get smarter it'll be less and less of an issue.


> I just care whether or not the ideas are good and clear

That's the thing: it actually really matters whether the ideas presented are coming from a coworker or from an LLM.

I've seen way too many scenarios where I ask a coworker whether we should do X or Y, and all I get is a useless wall of spewed text, with complete disregard for the project and circumstances at hand. I need YOUR input, from YOUR head, right now. If I could ask Copilot, I'd do that myself, thanks.


I would argue that's just your coworker giving you a bad answer. If you prompt a chatbot with the right business context, look at what it spits out, and layer in your judgement before you hit send, then it's fine that the AI typed it out.

If they answer your question with irrelevant context, then that's the problem, not that it was AI.



