Previously, I wrote about writing GUIs for controlling and monitoring experiments. For ML this might be useful for tracking model learning (e.g. the popular Weights & Biases platform), while in the wet lab it is great for making experiments simpler and more reliable to run, monitor and record.
And as it turns out, AI is quite good at this!
I have been using VS Code Copilot in agent mode with Gemini 2.5 Pro to create simple GUIs that can control my experiments, which has proved pretty effective. Although there is clearly a concern when interfacing AI-generated code with real hardware (especially if you “vibe code”, that is, just run whatever it generates), in practice it has allowed me to quickly generate tools for testing purposes, cutting the time required to get a project started from hours to minutes.
As an example, I recently needed to hook up a Helmholtz coil to some custom electronics, centred around a Teensy micro-controller and designed to output a precisely controlled current.


The Teensy code was already written, and I had a python library for talking to the device (basically a wrapper around a bunch of serial commands), but using it involved writing a new python script each time I wanted something changed – not ideal for quick testing on the bench.
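For a sense of what that looked like, each tweak on the bench meant editing and rerunning a small one-off script along these lines (the SyncBoard class and its methods here are made-up names for illustration, not the real library’s API):

```python
# A made-up example of the kind of throwaway script this involved.
from syncboard import SyncBoard   # hypothetical import; the real wrapper differs

board = SyncBoard(port='/dev/ttyACM0')   # open the serial link to the Teensy
board.set_magnet_current(0.5)            # request 0.5 A through the coil
print('hall reading:', board.read_hall_sensor())
board.close()
```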
First, I asked Copilot to create a terminal user interface (TUI), thinking this might be the simpler task – in fact this proved very difficult! The model struggled to generate an interface, perhaps because of the obscurity of the TUI library, or perhaps because it is simply difficult to build a terminal interface that updates information live (the magnet sensor readings) while also allowing user input (it is certainly possible, of course – see any number of utils, my most commonly used being htop).
As a second attempt, I asked Copilot to write a GUI using NiceGUI – an easy-to-use python library for writing simple web apps. As guidance, I also told the model to use a queue to communicate with the electronics, since the serial interface means commands should be sent one at a time and in an orderly fashion.
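The idea behind that guidance is roughly the pattern below: the GUI only ever enqueues work, and a single worker thread drains the queue, so only one serial exchange is in flight at a time. This is a minimal sketch with a hypothetical `board` object standing in for the real library, not the generated code:

```python
import queue
import threading

command_queue = queue.Queue()

def serial_worker(board):
    """Send queued commands to the board one at a time, in order."""
    while True:
        func, args = command_queue.get()   # e.g. (board.set_magnet_current, (0.5,))
        try:
            func(*args)                    # a single serial exchange with the Teensy
        finally:
            command_queue.task_done()

# Usage (with `board` being the existing serial wrapper object):
# threading.Thread(target=serial_worker, args=(board,), daemon=True).start()
# command_queue.put((board.set_magnet_current, (0.5,)))
```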
Prompt: Write a simple web gui using the python library nicegui that (a) displays the hall sensor reading, updating every 0.1 seconds, and (b) allows for setting the current in amps using a text input and a button. Create a new file for this GUI. Remember that the syncboard1 can only handle one message at a time, so use a queue to communicate with it.
It almost worked! A few rounds of pasting error messages into the chat and letting the model do its thing got the app running, though not quite working.
Prompt: The syncboard is not changing the magnet current when it is set. I expect this is because the hall readings are getting in the way of sending the write magnet current command
Following this prompt, everything suddenly worked. In 10 minutes, I had a GUI implementing a pretty reasonable priority-queue-based structure that certainly served the purpose of testing the hardware. Even better, as I needed new features I simply asked for them (e.g. a shutdown button), and because the underlying structure was already decent, the model implemented these correctly first time.
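To give a feel for that kind of structure, here is a minimal sketch (not the generated file, and with hypothetical stand-in functions in place of the real serial wrapper): set-current requests go onto a priority queue ahead of the routine hall-sensor polls, so the 0.1 s readout never blocks a write.

```python
import itertools
import queue
import threading

from nicegui import ui

commands = queue.PriorityQueue()
order = itertools.count()        # tie-breaker so equal priorities stay FIFO
latest_hall = {'value': 0.0}

def serial_worker():
    """Single thread talking to the board, one command at a time."""
    while True:
        _, _, func, args = commands.get()
        try:
            func(*args)
        finally:
            commands.task_done()

def submit(func, args=(), priority=1):
    commands.put((priority, next(order), func, args))

# Hypothetical stand-ins for the real serial wrapper.
def read_hall_sensor():
    latest_hall['value'] = 0.0   # placeholder; the real code does a serial read here

def set_magnet_current(amps: float):
    pass                         # placeholder; the real code does a serial write here

threading.Thread(target=serial_worker, daemon=True).start()

hall_label = ui.label('Hall: ---')
current_input = ui.input(label='Current (A)', value='0.0')
ui.button('Set current',
          on_click=lambda: submit(set_magnet_current,
                                  (float(current_input.value),), priority=0))

def poll():
    submit(read_hall_sensor, priority=1)   # low-priority readout
    hall_label.set_text(f"Hall: {latest_hall['value']:.3f}")

ui.timer(0.1, poll)                        # update every 0.1 s

if __name__ in {'__main__', '__mp_main__'}:
    ui.run()
```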

In summary, using the agent to speed up tool development has proved very effective. Given some guidance, and using libraries that already handle the hard parts (e.g. rendering a GUI), the model generates perfectly good code for quickly getting started with testing hardware. I could imagine it proving just as good (or even better) at creating a GUI interface to a software tool. It is not necessarily great at reasoning about how everything works, but it allows you to focus more on the high level, communicating your thoughts to it, while it handles the nitty-gritty of actually implementing those ideas.
1. This is how the electronics board is referred to in the code. ↩︎
