If you’re hiring humans just to use AI, why even hire humans? Either AI will replace them or employers will realize that they prefer employees who can think. In either case, being a human who specializes in regurgitating AI output seems like a dead end.
> If you’re hiring humans just to use AI, why even hire humans
You hire humans to help train the AI, and when it's done, you fire the humans.
“Prompt Engineer” as a serious job title is very strange to me. I can't see why it would be a learnable skill: there's a little, but not a lot of insight into why an LLM does what it does.
> there’s a little, but not a lot of insight into why an LLM does what it does.
That's a "black box" problem, and I think black-box problems are some of the most interesting problems the world has.
Outside of technology, the most interesting jobs in the world operate on a "black box". Salespeople and psychologists are trying to work on the human mind. Politicians and market makers are trying to predict the behavior of large populations. Doctors are operating on the human body.
Technology has been getting more complicated, and I think that distributed systems and high-level frameworks are starting to resemble a "black box" problem. LLMs even more so!
I agree that "prompt engineer" is a silly job title, but not because it isn't a learnable skill. It's just not accurate to call yourself an engineer when you're consuming an LLM.
It's an experience thing. It's not about knowing what LLMs/diffusion models specifically do, but rather about knowing the pitfalls of the models you use.
It's a bit like an audio engineer setting up compressors and other filters. It's not difficult to fiddle with the settings, but knowing which numbers to input is not trivial.
I think it's a kind of skill that we don't really know how to measure yet.
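Not part of the original comment, but to make the compressor analogy concrete: here's a minimal sketch of the static gain curve of a basic downward compressor. The threshold and ratio values are illustrative, not any particular preset.

```python
def compressor_gain_db(input_db, threshold_db=-18.0, ratio=4.0):
    """Gain reduction (in dB) applied by a simple downward compressor.

    For every `ratio` dB the input rises above the threshold, the
    output rises only 1 dB. Values here are illustrative defaults.
    """
    if input_db <= threshold_db:
        return 0.0  # below threshold: no gain reduction
    over = input_db - threshold_db          # dB above threshold
    compressed = over / ratio               # what the output exceeds threshold by
    return compressed - over                # negative: dB of attenuation

# A signal 12 dB over an -18 dB threshold at 4:1 is attenuated by 9 dB.
print(compressor_gain_db(-6.0))
```

Picking the threshold and ratio is exactly the "knowing which numbers to input" part: the math is trivial, the judgment isn't.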
When an audio engineer tweaks the pass band of a filter, there’s a direct causal relationship between inputs and outputs. I can imagine an audio engineer learning what different filters and effects sound like. Almost all of them are linear systems, so composing effects is easy to understand.
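Not in the original comment, but the "composing linear effects is easy" point can be shown in a few lines: cascading two FIR filters is just convolving their impulse responses, and superposition holds. (The filter taps below are arbitrary examples.)

```python
import numpy as np

# Two simple FIR filters (impulse responses): a 2-tap smoother and a differencer.
h1 = np.array([0.5, 0.5])
h2 = np.array([1.0, -1.0])

# Composing linear filters is itself a linear filter: convolve the responses.
h_cascade = np.convolve(h1, h2)

x = np.array([1.0, 2.0, 4.0, 8.0])
y_direct = np.convolve(np.convolve(x, h1), h2)  # apply one filter, then the other
y_cascade = np.convolve(x, h_cascade)           # apply the combined filter once

assert np.allclose(y_direct, y_cascade)

# Superposition: filtering a sum equals the sum of the filtered signals.
x2 = np.array([3.0, 0.0, -1.0, 5.0])
assert np.allclose(np.convolve(x + x2, h_cascade),
                   np.convolve(x, h_cascade) + np.convolve(x2, h_cascade))
```

There is no analogous identity for chaining prompts through an LLM, which is the contrast the comment is drawing.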
None of this is true of an LLM. I believe there’s a little skill involved, but it’s nothing like tuning the pass band of a filter. LLMs are chaotic systems (they kinda have to be to mimic humans); that’s one of their benefits, but it’s also one of their curses.
Now, what a human can definitely do is convince themselves that they can somewhat control the outputs of a chaotic system. Rain prognostication is perhaps a better model for the prompt engineer than the audio mixer.