I just gave ChatGPT another chance at helping me write some code. I pasted an uncommented chunk of assembly (not trick code) and asked it to comment that code.
The most it was able to guess "intelligently" was that the code had to do with graphics. Beyond that, though, it was quite useless. It tried to comment instructions one by one with their literal meanings, and even got those wrong. It also couldn't group sequences of instructions into logical units.
(1/4)
It was specifically unable to understand how Z80 register pairs work, where some instructions operate on the entire 16-bit pair and others on a single 8-bit register. As a result, it misread the pointer arithmetic and commented it all wrong.
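For context (my own illustration, not anything ChatGPT produced): on the Z80, the same letters name both a 16-bit pair and its 8-bit halves, which is exactly the distinction it kept missing. A minimal sketch:

```z80
    ld   hl, $4000   ; 16-bit load: pair HL = $4000 (H = $40, L = $00)
    ld   h, $58      ; 8-bit load: only H changes, HL is now $5800
    ld   e, (hl)     ; 8-bit load through the 16-bit pointer in HL
    inc  hl          ; 16-bit increment of the whole pair
    inc  l           ; 8-bit increment of L only; wraps at $FF without touching H
    add  hl, de      ; 16-bit add: HL = HL + DE, typical pointer arithmetic
```

Misreading which of these forms is in play is enough to get every address computation in a routine wrong.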
I then asked it to guess which specific machine the code might be for, but, without understanding the pointer arithmetic, that's hard.
I tried to help it by giving it leading questions about verifiable facts (not about code).
(2/4)
It started off wrong, claiming that the ZX Spectrum is the only Z80-based machine that stores its framebuffer in main memory.
I then asked about the Amstrad CPC. It recognized that it also has its framebuffer in main memory, but got the address and size wrong. From that point on, it did mention that both the Spectrum and the CPC have their framebuffers in main memory.
When I asked about MSX, though, it said that the framebuffer is in main memory there too, which is not accurate.
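For the record (my correction, not something ChatGPT got right): a stock MSX keeps its framebuffer in dedicated VRAM behind the TMS9918-family VDP, which sits outside the Z80's address space and is reached only through I/O ports ($98 data, $99 control). A sketch of what a VRAM write looks like:

```z80
    ; set the VRAM write address to $0000: low byte first,
    ; then high byte OR'd with the $40 write flag
    xor  a
    out  ($99), a    ; address low byte = $00
    ld   a, $40      ; address high byte ($00) with write flag set
    out  ($99), a
    ld   a, $FF
    out  ($98), a    ; write one byte; the VDP auto-increments the address
```

That indirection is precisely why "the framebuffer is in main memory" is the wrong answer for the MSX.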
(3/4)
As it stands, for my use case, ChatGPT remains useless as a coding tool. It provides inaccurate high-level information, inaccurate technical data, and its understanding of code is too shallow to be useful, with critical mistakes that end up causing more harm than good.
I'd need to suffer from Gell-Mann Amnesia to trust anything from ChatGPT, given how wrong it is on topics where I do have expertise.
(4/4)
@jbqueru Yeah they're simply not trained on relevant datasets. Ask them to write an Atari ST fullscreen demo ;)
@troed Writing is easier for such systems as long as they have something to copy from.