What kind of things is it doing?
It wrote me:
- an SPI deserializer that sets a bit after 12 bits are read in, to trigger a prefetch (first sketch below)
- an SDC constraints file for the deserializer that correctly identified the SPI clock and the bus clock as separate domains, each requiring its own constraint statements (second sketch below)
- a testbench that validated both that the prefetch bit was being set and that it was being set at the proper time relative to the SPI clock (third sketch below)
- a makefile with targets for build, headless test, and debug by loading the VCD into a waveform viewer
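To give a flavor of what that first item looks like, here's a from-scratch sketch (not ChatGPT's verbatim output; the 16-bit frame width, SPI mode 0, and all module/signal names are my own assumptions):

    // SPI deserializer that raises a prefetch flag once 12 bits are in.
    module spi_deser (
        input  wire        sclk,      // SPI clock from the master
        input  wire        cs_n,      // active-low chip select
        input  wire        mosi,      // serial data in
        output reg  [15:0] rx_data,   // deserialized frame
        output reg         prefetch   // set once 12 bits have arrived
    );
        reg [3:0] bit_cnt;

        always @(posedge sclk or posedge cs_n) begin
            if (cs_n) begin
                bit_cnt  <= 4'd0;
                prefetch <= 1'b0;
            end else begin
                rx_data  <= {rx_data[14:0], mosi};  // MSB-first shift-in
                bit_cnt  <= bit_cnt + 4'd1;
                if (bit_cnt == 4'd11)               // 12th bit sampled on this edge
                    prefetch <= 1'b1;               // kick off the prefetch early
            end
        end
    endmodule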
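On the clock-domain point: the prefetch flag is set on the SPI clock but consumed on the bus clock, which is exactly why the SDC file needs a create_clock for each plus something like a set_clock_groups -asynchronous between them. On the RTL side the flag also wants a synchronizer before the bus domain acts on it; a minimal two-flop sketch (again, names are mine):

    // Standard two-flop synchronizer for the single-bit prefetch flag
    // crossing from the SCLK domain into the bus-clock domain.
    module prefetch_sync (
        input  wire clk_bus,        // bus-clock domain
        input  wire rst_n,
        input  wire prefetch_sclk,  // flag set in the SCLK domain
        output wire prefetch_bus    // safe to use in the bus domain
    );
        reg [1:0] sync_ff;

        always @(posedge clk_bus or negedge rst_n) begin
            if (!rst_n) sync_ff <= 2'b00;
            else        sync_ff <= {sync_ff[0], prefetch_sclk};
        end

        assign prefetch_bus = sync_ff[1];
    endmodule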
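And the testbench check boils down to something like this, paired with the module above and under the same assumptions: drive 16 SPI clocks and assert that the prefetch bit is low before the 12th rising edge and high from it onward.

    `timescale 1ns/1ps
    // Rough sketch of the described check: prefetch must be low before
    // the 12th rising SCLK edge and high from that edge onward.
    module spi_deser_tb;
        reg  sclk = 0, cs_n = 1, mosi = 0;
        wire [15:0] rx_data;
        wire        prefetch;
        integer     i;

        spi_deser dut (.sclk(sclk), .cs_n(cs_n), .mosi(mosi),
                       .rx_data(rx_data), .prefetch(prefetch));

        initial begin
            $dumpfile("spi_deser.vcd");   // feeds the waveform-viewer target
            $dumpvars(0, spi_deser_tb);
            cs_n = 0;
            for (i = 0; i < 16; i = i + 1) begin
                mosi = $random;
                #10 sclk = 1;             // DUT samples bit i on this edge
                #1;                       // let nonblocking updates settle
                if (i <  11 &&  prefetch) $display("FAIL: prefetch early at bit %0d", i + 1);
                if (i >= 11 && !prefetch) $display("FAIL: prefetch late at bit %0d",  i + 1);
                #9 sclk = 0;
            end
            cs_n = 1;
            $finish;
        end
    endmodule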
It always feels like LLMs can do lots of the easy stuff, but if they can't do everything, you still need the skilled engineer, who would have knocked out the easy things in a week anyway.
Nearly every part of the tool flow I just described I would consider “tricky to get right”. I’ve been doing this for ~15 years and it’s still tough to bootstrap something like this from scratch. ChatGPT-4o did it for me, from zero, in about 15 minutes.
I won’t lie: I love it. I can focus on the actual, bigger problems at hand, and not the tricky little details of HDLs.
People are either deluding themselves or ignorant of the capabilities of frontier models if they don’t believe LLMs offer a speedup in workflow.
I personally believe that most of the doubt and cynicism is due to:
1) a pretty big collective identity crisis among software professionals, and
2) a suspicion that LLMs make it so that anyone who is good at articulating the problem precisely no longer needs a software engineer as a translation specialist from specs to code.
I say this as an EE of ~15 years who’s always been able to articulate what I want, specifically, to a firmware counterpart, who then writes the code I need. I can turn years of practice in this skill into great prompts for an LLM, which effectively cuts out the middleman.
I really like it. It’s helped me take on a lot of projects that are just outside my innate level of capability. It’s also helped me learn a lot about these software-adjacent areas. ChatGPT is a great tutor!
Fair enough. I guess it's all about degrees - I work at a place where we use FPGAs for our hardware, and I find it really hard to imagine an LLM being remotely capable of solving the problems our FPGA guys do.
If the FPGA is mostly doing simpler stuff with lots of boilerplate, I can see current LLMs offering a lot to someone who doesn't regularly write FPGA code; I guess that's similar to the current case for software.
Using them to set up the initial flow is not a bad idea either - I know coworkers who use them to write the early code for a new system or driver, where it seems to work pretty well (probably because that's a huge part of the training set - loads of tutorials out there).
> I personally believe that most of the doubt and cynicism is due to:
> 1) a pretty big collective identity crisis among software professionals, and
> 2) a suspicion that LLMs make it so that anyone who is good at articulating the problem precisely no longer needs a software engineer as a translation specialist from specs to code.
... But "articulating the problem precisely" is a huge part of what software engineers do, and there's a mountain of evidence that other people are not very good at that.
I have a mountain of professional experience that indicates many software engineers are not very good at it either.
Why would I add a subpar translation layer into the process of achieving my goals? There’s no inherent value in that.
> Why would I add a subpar translation layer into the process of achieving my goals?
Because you don't have a choice. Your thoughts are not code.
I'd still take ChatGPT as that translation layer over all but the best SWEs I've worked with.