Cool idea. Judging by the title, I was expecting abject failure. Maybe I'm too forgiving in my expectations of the current state of the art of LLMs. I have no idea what the context prompt was for this. Was it as elaborate as "act as an Atari 800 programmer, drawing on sources such as De Re Atari by Chris Crawford et al. and all of Tom Hudson's game and utility programming listings in ANALOG Computing for the years 1981-1989 - using this knowledge, please help me write a game that has the following...."?
I mean, the game shown was far better than some of the early games I wrote in the late 70s on a TRS-80 at computer camp, or on my own Atari 800 in the early 1980s during the lengthy Canadian winters.
Maybe I am soft with nostalgia ...
The problem being, if you don't know the answers already, the LLM doesn't fill any gaps in knowledge you may have. E.g., I don't know Atari BASIC (just that pesky – or is it PETSCII? – MS BASIC). Accordingly, I have no idea about PRINT #6 or LOCATE, and ChatGPT apparently doesn't either, so the two of us won't get anywhere.
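(For anyone equally in the dark – and with the caveat that I haven't actually run this – my rough understanding is that GRAPHICS opens IOCB #6 to the screen device, PRINT #6 writes to that graphics screen rather than the text window, and LOCATE reads the value at a screen position back into a variable. Something like:

    10 GRAPHICS 2:REM OPENS IOCB #6 TO THE SCREEN, BIG-TEXT MODE
    20 PRINT #6;"HELLO":REM GOES TO THE GRAPHICS SCREEN, NOT THE TEXT WINDOW
    30 GRAPHICS 3:COLOR 1:REM 40X20 FOUR-COLOR BITMAP MODE
    40 PLOT 10,5
    50 LOCATE 10,5,V:REM READS THE VALUE AT (10,5) INTO V
    60 PRINT V:REM SHOULD PRINT 1, THE COLOR SET ABOVE

The mode numbers and coordinates are arbitrary, just for illustration.)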
On a slight tangent, I'm kind of surprised and impressed to learn that Atari BASIC lets you use a keyword as a variable name with LET, something MS BASIC won't do, since it tokenizes the keyword anyway, LET or not – there's no such thing as context awareness in its parser.
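A minimal illustration of that claim (untested on real hardware or an emulator, and 42 is arbitrary, so treat it as a sketch):

    10 LET PRINT=42:REM ATARI BASIC ACCEPTS A KEYWORD AS A VARIABLE NAME AFTER LET
    20 REM MS BASIC WOULD TOKENIZE PRINT AS THE KEYWORD AND THROW A SYNTAX ERROR ON LINE 10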