Though, per the OSI’s definition, your code probably would no longer be open source, since training an LLM is technically considered a field of endeavour:
OSD number 6:
The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
We need a new kind of licensing, something like an Ethical Source License. With AI now in the field (and honestly even before that), Open Source has alas become a paradigm of the past.
If we want to play semantics: the program (the compiled binary) can be used for anything, with no field restrictions. But the source code is not the program itself; it’s the recipe, and its usage could be restricted in some specific ways.
In my opinion, since free licenses already place restrictions on distribution, a better approach is to say that AI models trained on this code are derivative works and must be licensed compatibly (i.e. the training data set, the training methods, and the models themselves must be free).
I feel it’s a better middle ground, where the freedoms of users are neither violated nor restricted, and the code/knowledge stays free.
I found this, which adds additional text to the existing licenses to prohibit training an AI on the licensed code: https://github.com/non-ai-licenses/non-ai-licenses
This is exactly what I was looking for. Thanks!
Maybe you could say that AI training is not a use of the program, but a use of the source code.
The OSI is not a US court (or at least I hope not).
Playing on words isn’t going to get the license accepted.
On the other hand, why does it have to be accepted?
You are doing something different. Just do the different thing.