AI on AI Action

You can read this take just about everywhere in dev circles at the moment: it's an amazing, paradigm-changing time to be involved in software development. In an obscenely short time, the technology that once (poorly) guessed our next word or line of code has progressed to what feels like a form of alien technology, one that can conjure functioning software from imprecise, conversational English requirements. This game exists solely because of this technology.

Yesterday I was working with Claude on our built-in, non-LLM-powered AI bots (who I like to refer to as the GOFAI-bots). The process went something like this:

A player commented on Too Big To Fail's thread on boardgamegeek.com about how stupidly the bots were playing, and made some very specific recommendations. I copied their comments and pasted them into Claude Code. I added a couple of my own observations on the bot gameplay (in plain English) and hit send.

Two minutes later Claude had a plan for all of the suggestions, asking me how to distribute the various improvements across the three bot difficulty levels.
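The game's actual code isn't shown here, but one common way to distribute improvements across difficulty levels is to gate each strategy behind a minimum difficulty. A minimal sketch, with hypothetical strategy names standing in for the real changes:

```python
# Hypothetical sketch -- the real bot code and strategy names are not public.
# Each strategy becomes available at some minimum difficulty level, so the
# hardest bots get every improvement and the easiest bots get only the basics.

from enum import IntEnum

class Difficulty(IntEnum):
    EASY = 1
    MEDIUM = 2
    HARD = 3

# Illustrative names; the real improvements came from the player's feedback.
MIN_DIFFICULTY = {
    "prefer_safe_merges": Difficulty.EASY,
    "block_majority_takeover": Difficulty.MEDIUM,
    "hoard_swing_stock": Difficulty.HARD,
}

def enabled_strategies(level: Difficulty) -> set[str]:
    """Return the strategies a bot at this difficulty is allowed to use."""
    return {name for name, minimum in MIN_DIFFICULTY.items() if level >= minimum}
```

With a table like this, answering Claude's question is just a matter of assigning each improvement a minimum tier.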

Two minutes after that, Claude had implemented, documented, and written unit tests for all of the proposed changes. It submitted a PR to GitHub, waited for the Copilot review, addressed four of the concerns while dismissing the fifth as unnecessary given our particular architecture (I agreed with its analysis). I then told Claude to add the new strategies to the LLM-bot prompts. It synthesized the strategies once more, this time creating text that an LLM would have an easy time parsing and following.

I tested the new code by running several all-bot games, watching for the new strategies to emerge.
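That kind of check can be automated as a smoke test, assuming the engine can run a bot-only game and report which strategies its bots actually used. A deterministic sketch with a stand-in for the real engine:

```python
# Hypothetical smoke test -- play_all_bot_game is a stand-in for the real
# engine, which would play a full bot-only game and report the strategies
# the bots exercised. Strategy names here are illustrative.

def play_all_bot_game(game_id: int) -> set[str]:
    """Stand-in: cycles through illustrative strategies deterministically."""
    pool = ["block_majority_takeover", "hoard_swing_stock", "prefer_safe_merges"]
    return {pool[game_id % len(pool)], pool[(game_id + 1) % len(pool)]}

def unseen_strategies(expected: set[str], n_games: int) -> set[str]:
    """Return the expected strategies that never emerged across n_games."""
    seen: set[str] = set()
    for game_id in range(n_games):
        seen |= play_all_bot_game(game_id)
    return expected - seen
```

If `unseen_strategies` comes back empty after a handful of games, every new behavior has shown up at least once.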

They did.

Then I had Claude close the PR, merge the branch to main, and watch the deploy. It told me when the smarter bots were online.

I hadn't written a line of traditional code. I had directed things with conversational, but admittedly precise, English. The whole change was started and deployed within an hour. And I can't help but feel that this is a toy example of the process by which this technology could advance at an even faster pace.

An AI set out to make another set of AIs smarter and did so easily. I see this as a version of recursive self-improvement, although an admittedly degenerate example from the toy department.

But it still gives me pause as I decide whether to merge Vultara into Ponzico, and whether the bot will notice that I'll be only four stocks away from the majority afterward.

I'm sure it'll be fine.