Re: [Hamlib-developer] The use of LLM generated code in Hamlib (long)
From: Greg T. <gd...@le...> - 2026-02-27 16:17:53
"Mikael Nousiainen" <mik...@fa...> writes: > On Fri, Feb 27, 2026, at 16:16, Greg Troxel wrote: >> "Mikael Nousiainen" <mik...@fa...> writes: >> >>> Did we ever land to any decision regarding LLM usage in Hamlib? >> >> This is an interesting opinion, and I think it has considerable merit. >> >> https://lists.debian.org/debian-vote/2026/02/msg00060.html > > Thanks for noting this. However, I read this particular opinion > saying: "there is no way genAI can be done right and we should > therefore abandon it" - which is a rather extreme viewpoint that I > don't share. The fact that many large AI companies are not doing a > particularly good job ethically (training their models) should _not_ > mean automatically that "all genAI is bad and will be like that > forever". Not sure if I understood something incorrectly? I think you're technically correct but not correct in practice. I agree that it is theoretically possible that genAI could be done properly. (Separating ethics from unsettled questions of copyright law.) Can you point to a single example of an LLM that has been trained solely with content where the author/copyright-holder has truly consented to such use? I don't mean technically clicked through something they don't understand, but true consent. > Exactly. How can we trust any contributor, since "faking it" (= > generating the code) is easier than ever and requires practically no > effort? Also, I'd argue not all LLM-generated code is automatically > proprietary - just like not all StackOverflow or "whatever Google > gives you" is proprietary. It's a rather complex issue. Sorry, I didn't mean to say LLM code was proprietary here. I just meant if there are rules, we currently trust people to follow them. > My intent was more like: How can we keep Hamlib alive and thriving > when we know masses (including many of Hamlib's current and potential > future contributors) are going for genAI? And: can we find > responsible/ethical ways of using genAI? The first part I think is the key question. I guess it's a real question how much is lost by saying no. The larger question is if it's a generational thing and that in 20 years will everyone who participates think it's fine to use LLM code. I think it's to early to presume a societal outcome. |