That’s kinda the problem. We’re already careless with the things we do ourselves. It can’t be helped, nobody’s perfect. But once we start delegating tasks, we lose the direct experience. Priorities shift, attention moves to something else, and the chance of carelessness rises because it’s no longer a problem we have to concern ourselves with.
Meanwhile, the LLM “learns”. What it “learns”, nobody knows because it does so mechanically. There’s zero understanding.
It keeps “learning” every time it’s fed something, so you don’t have a static program that does what it’s told. Instead it’s a “living” program that applies what it “learns”. And that makes it unpredictable in the long run.
This turns the user into a glorified middle manager who has to hover over their employee and make sure they did their job as they should have. And how many middle managers do you know with that kind of dedication who aren’t spiteful at their core?
The push against this is that the people depending on it to do the work become less dependable themselves.
And unless you’re an independent developer without a profit-driven publisher breathing down your neck, this will be used in all the wrong ways as the standard instead of the exception.
I don’t think it’s important where the placeholder assets come from, or that mistakes will be more common when someone uses gen AI instead of an unlicensed stock image from a web search.
We don’t know the cause in this case. Not replacing placeholder assets was a common mistake even before AI tools.
You’re right. It’s an opinion and only as important as the one having the opinion decides it to be.
According to the article as cited in this comment, we do know the reason and a rush job to meet a deadline is precisely why.
I wouldn’t say “precisely”, as those are (plausible) speculations.