Re: [Hamlib-developer] The use of LLM generated code in Hamlib (long)
From: Nate B. <n0...@n0...> - 2026-03-04 02:06:31
* On 2026 27 Feb 02:51 -0600, Mikael Nousiainen wrote:

> Hi everyone!
>
> Did we ever land on any decision regarding LLM usage in Hamlib?
>
> I see there is some form of acceptance to the general idea of
> accepting **thoroughly-reviewed** code generated by an AI agent in
> Hamlib, since the FTX-1 backend was already merged and it was
> partially generated using LLMs. Please see the discussions in the
> merged PR if you need more context:
> https://github.com/Hamlib/Hamlib/pull/1961
>
> But where do we draw the lines here? Who will decide? Is this a
> maintainer or a community-based decision?

I'm certainly not up to the task of trying to verify anything against
all the permutations of bots now available to everyone. As I see it,
the primary party responsible for assuring that any contribution is
legal, ethical, and unencumbered is the one submitting the code.

What I think we all want to avoid is someone prompting an LLM, "Write a
Hamlib backend for the Moon Melter 6000", and then issuing a PR of the
result without examination or testing.

I am beginning to soften just a bit (old age and all that) in that I
can recognize that an LLM can be a useful tool to assist, though I've
not tried it myself. What it can do is save hours of searching for some
bit of code that is buried in the bowels of Stack Overflow and the
like. In that regard, LLMs can serve very usefully as an
extraordinarily powerful search engine that provides context for the
problem being solved. In that sense, is it much different from hours
spent searching Stack Overflow, various blogs, or GitHub?

> Hypothetically, how can project maintainers even know some code has
> been generated by an LLM - especially if there's a skilled developer
> reviewing and cleaning up the code before sharing it? I think this
> goes to the same category as using StackOverflow or "whatever Google
> results give you" as a starting point.

That is really where I am willing to accept contributors using LLM
assistance.
The key here is the LLM being an assistant, not doing it all. The sense
I get of "vibe coding" is prompting an LLM until the offered program
does what the user asks for, and then the user turning around and
stating, "Look what I coded!" No, I don't want that.

> I think it's still up to the contributors (submitting code changes)
> and the project maintainers to take responsibility for what gets
> merged/published. And the fact is that the number of AI agent users
> and the amount of LLM-produced code around will only keep increasing
> rapidly - there's no way of avoiding it.

When a contributor is prompting an LLM, he needs to be savvy enough to
recognize when something clever is being offered and then stop and
investigate. It could be that the LLM is offering up something that is
either not licensed for Free Software distribution or is patent
encumbered. In reality, I'm not sure that is really an issue with
Hamlib, as we're not trying to solve deep problems or obscure
algorithms. A backend is mostly an exercise in translating the
physical radio to Hamlib. An LLM can likely provide analysis that
makes that task much easier and faster for a new contributor. That
seems reasonable to me.

> I'd rephrase the question as: How can Hamlib "keep up" and do it in a
> responsible and ethical way?

I think you might have expanded on this down thread.

73, Nate

-- 
"The optimist proclaims that we live in the best of all possible
worlds. The pessimist fears this is true."

Web: https://www.n0nb.us
Projects: https://github.com/N0NB
GPG fingerprint: 82D6 4F6B 0E67 CD41 F689 BBA6 FB2C 5130 D55A 8819