I saw a bunch of people complaining on Twitter about how GPT-OSS can't be customized or has no soul, and I noticed that none of them said what they were trying to accomplish.
"The main use-case for fine-tuning small language models is for erotic role-play, and there’s a serious demand."
Ah.
Want a good use case?
I am playing around with an interactive workflow where the model suggests what might be wrong with a particular chunk of code, the user selects one of the options, and the model immediately implements the fix.
Biggest problem? It's a total Wild West in terms of what the models try to suggest. Some models suggest short sentences, others spew out huge chunks at a time. GPT-OSS really likes using tables everywhere. Llama occasionally gets stuck in a loop of "memcpy() could be not what it seems and work differently than expected" followed by a handful of similar suggestions for other well-known library functions.
I mostly got it to work with some creative prompt engineering and cross-validation, but having a model fine-tuned for giving reasonable suggestions that are easy to understand at a quick glance would be way better.
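Roughly the kind of loop I mean (a sketch, not my actual implementation); ask_model is a hypothetical stand-in for whatever chat endpoint you run locally, and the prompts are purely illustrative:

    import json

    def ask_model(messages):
        """Hypothetical helper: send chat messages to your local model
        (llama.cpp, Ollama, vLLM, ...) and return the reply text."""
        raise NotImplementedError

    def suggest_and_fix(code_chunk):
        # 1. Ask for a short list of plausible problems, as JSON.
        raw = ask_model([
            {"role": "system", "content": "List at most 5 plausible bugs in this code as a JSON array of short strings."},
            {"role": "user", "content": code_chunk},
        ])
        suggestions = json.loads(raw)  # assumes the model actually returns valid JSON

        # 2. Let the user pick one of the options.
        for i, s in enumerate(suggestions, 1):
            print(f"{i}. {s}")
        choice = suggestions[int(input("Fix which issue? ")) - 1]

        # 3. Have the model implement the selected fix immediately.
        return ask_model([
            {"role": "system", "content": "Rewrite the code to fix the selected issue. Return only code."},
            {"role": "user", "content": f"Issue: {choice}\n\nCode:\n{code_chunk}"},
        ])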
Porn is always the frontier.
It's a well-understood, self-contained use case without many externalities, and with simple business models.
What's more, with porn, the medium is the product probably more than the content. Having it on home media in the '80s was the selling point. Getting it over 1-900 phone lines or accessing it over the internet ... these were arguably the actual product. It might have been a driver of early smartphone adoption as well. Adult content sees about 80% of its consumption on handheld devices, while the internet writ large is about 60%.
Private tunable multi-media interaction on-demand is the product here.
Also, it's a unique offer. Role-playing prohibited sexual acts can arguably be done victim-free.
There's a good fiction story there... "I thought I was talking to AI"
1. Porn. 2. Military.
There's something Freudian about the idea that the more you can customize porn, the more popular it is. That, despite the impression that "all men want one thing", it turns out that men all want very different and very oddly specific things. Imbuing something with a "magical" quality that doesn't exist is the origin of the term "fetish". It's not about the raw attractive preference for a particular hair color; it's a belief in the POWER of that hair color.
Oh, it's wildly different. About 15 years ago I worked on a porn recommendation system. The idea was that you'd follow a number of sites based on likes and recommendations and you'd get an aggregated feed with interstitial ads.
So I started with scraping and cross-referencing, FOAF data, doing analysis. People's preferences are ... really complex.
Without getting too lewd, let's say there are about 30-80 categories with non-marginal demand, depending on how you want to slice it, and some of them can stack, so you get a combinatorial explosion.
In early user testing people wanted the niche and found the adventurous (of their particular kind) to be more compelling. And that was the unpredictable part. The majoritarian categories didn't have stickiness.
Nor did these niches have high correlation. Someone could be into, say, specific topic A (let's say feet), and correlating that with topic B (let's say leather) was a dice roll. The probabilities were almost universally < 10% unless you went into majoritarian categories (e.g. fit people in their 20s).
People want adventure on a reservation with a very well-defined perimeter - one that is hard to map and different for every person.
So the value-add proposition went away since it's now just a collection of niche sites again.
Also, these days people have Reddit accounts reserved for porn where they do exactly this. So it was built after all.
You may be interested in the data surfaced by this large-scale survey[1]
[1] https://aella.substack.com/p/fetish-tabooness-and-popularity...
What's the problem with that? We have erotic texts dating back thousands of years, basically as old as the act of writing itself: https://en.wikipedia.org/wiki/Istanbul_2461
There's nothing wrong with it, but you have to understand the differences between different user groups to know which limitations are relevant to your own use cases. "It doesn't follow instructions" could mean "it won't pretend to be a horny elf" or "it hallucinates fields outside the JSON schema I specified"; the latter is much more of a problem for my uses.
I have no problem with it and I can understand why people don't want to say "I'm trying to pornify this model and it refuses to talk dirty!" in public. But if you're calling a model garbage maybe you should be honest about what the "problem" is.
The pro-porn side has zero PR because respectable public figures don't see pro-porn advocacy as a good career move. At most, you'll get some oblique references to it.
Meanwhile, the anti-porn side has a formidable alliance:
- Right-wing, religiously-motivated anti-porn activists.
- Left-wing, feminism-motivated anti-porn activists.
- Big corporate types with lots of $$$$ to spend who want their customer support chatbot to be completely SFW at all times.
- AI safety folk who think keeping the model on a tight leash is an ethical obligation, lest future iterations take over the world.
- AI vendors who are keen on the yes-it-might-take-over-the-world narrative.
- AI vendors who just don't want their developers having to handle NSFW stuff at work.
- Politicians who don't know a transformer from a diffusion model, but who've heard a chorus of worries about lost jobs and AI bias and deepfakes and revenge porn.
These people will speak up in public at the drop of a hat.
You don't understand! Every erotic chatbot service keeps getting censored, what happened to CharacterAI just keeps happening. There's a serious supply-shortage, do you really want people turning to Grok? The spice must flow!!!
I've found good uses for Phi-4 at home, and after a few tests of the GPT-OSS 20B version I'm quite impressed so far.
Particularly with one SQL question that has tripped up every other model of similar or smaller size that I've tried, like Devstral 24B, Falcon 3 7B, Qwen2.5-coder 14B and Phi-4 14B.
The question contains a key point which is obvious to most humans, and which all of the models I tried previously have failed to pick up on. GPT-OSS picked up on it, and made a reasonable assumption.
It's also much more thorough at explaining code compared to the other models, again including details the others miss.
Now if only I had a GPU that could run the whole thing...
Can you share the question? Or are you intentionally trying to keep it out of the training data pool?
Sadly no. I'd like to keep it untainted, but also because the tables involved are straight from my work, which is very much not OSS.
I can however try to paraphrase it so you get the gist of it.
The question asks for a SQL statement to update rows in table A based on related tables B and C, where table B is mentioned explicitly and C is implied through the foreign keys provided in the context.
The key point all previous models I've tested have missed is that the rows in A are many-to-one with B, and so the update should take this into account. This is implicit from the foreign key context and not mentioned directly in the question.
Think of distributing pizza slices among a group of friends. All previous models have completely missed this part and just given each friend the whole pizza.
GPT-OSS correctly identified this issue and flagged it in the response, but also included a sensible assumption of evenly dividing the pizza.
I should note some of the previous models also missed the implicit connection to table C, and thus completely failed to do something sensible. But at least several of them figured this out. Of course I forgot to write that part down, so I can't say offhand which did what.
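To make the pizza analogy concrete, here's a toy version with made-up tables (nothing like the real schema, which I can't share); the point is the division by the per-group row count, which is what the other models skipped:

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Made-up stand-ins: each row in a (slices) belongs to one row in b (pizzas),
    # and c links b to the amount that has to be distributed.
    cur.executescript("""
        CREATE TABLE b (id INTEGER PRIMARY KEY);
        CREATE TABLE c (b_id INTEGER REFERENCES b(id), total REAL);
        CREATE TABLE a (id INTEGER PRIMARY KEY,
                        b_id INTEGER REFERENCES b(id),
                        share REAL);
        INSERT INTO b VALUES (1);
        INSERT INTO c VALUES (1, 8.0);  -- one pizza, 8 slices total
        INSERT INTO a (b_id, share) VALUES (1, 0), (1, 0), (1, 0), (1, 0);
    """)

    # The step the other models missed: divide by the number of sibling rows
    # in a for the same b, instead of giving every row the full total.
    cur.execute("""
        UPDATE a
        SET share = (SELECT c.total FROM c WHERE c.b_id = a.b_id)
                  / (SELECT COUNT(*) FROM a AS sib WHERE sib.b_id = a.b_id)
    """)

    print(cur.execute("SELECT id, share FROM a").fetchall())
    # -> [(1, 2.0), (2, 2.0), (3, 2.0), (4, 2.0)]  each friend gets 2 slices, not 8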
As for the code: for example, I've coded a Y combinator in Delphi, using intentionally terse, non-descriptive names, and asked the models to explain how the code works and what it does. Most ~7B models and larger of the past year or so have managed to explain it fairly well. However, GPT-OSS was much more thorough and provided a much better explanation, showing a significantly better "understanding" of the code. It was also the first model smaller than Llama 3 70B that I've tried that correctly identified it as a Y combinator.
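For reference, the same kind of test sketched in Python rather than Delphi (terse names on purpose; this is not the actual prompt):

    # A strict-language Y combinator (technically the Z combinator), with
    # deliberately unhelpful names, as a stand-in for the Delphi original.
    def y(f):
        return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Asking a model "what does this do?" should ideally surface both the
    # fixed-point trick and the fact that g below computes factorials.
    g = y(lambda r: lambda n: 1 if n == 0 else n * r(n - 1))
    print(g(5))  # 120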
Does anyone know how synthetic data is commonly generated? Do they just sample the model randomly starting from an empty state, perhaps with some filtering? Or do they somehow automatically generate prompts, and if so, how? Do they have some feedback mechanism, e.g. do they maybe test the model while training and somehow generate data related to poorly performing tests?
It’s common to use rejection sampling: sample from the model and throw out the samples which fail some criteria like a verifiable answer or a judgement from a larger model.
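A minimal sketch of that idea; generate and accept are hypothetical stand-ins for the actual sampling and filtering steps:

    def generate(prompt, n=8):
        """Hypothetical: draw n completions from the current model."""
        raise NotImplementedError

    def accept(prompt, completion):
        """Hypothetical filter: a verifiable check (unit test, exact answer)
        or a judgement call from a larger model."""
        raise NotImplementedError

    def rejection_sample(prompts, per_prompt=8):
        kept = []
        for prompt in prompts:
            for completion in generate(prompt, n=per_prompt):
                if accept(prompt, completion):
                    kept.append({"prompt": prompt, "completion": completion})
        return kept  # the filtered pairs become synthetic training data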
I don't know about Phi-5, but earlier versions of Phi were trained on stories written by larger models trained on real-world data. Since it's Microsoft, they probably used one of the OpenAI GPT series.
> for instance, they have broad general knowledge about science, but don’t know much about popular culture
That seems like a good focus. Why learn details that can change within days of release? Instead, train the models to have good general knowledge and be really good at using tools, and you won't have to re-train models from scratch just because some JS library now has a different API; instead, the model goes out to fetch the latest APIs/gossip when needed.
Yeah, it always seemed like a sad commentary on our world that AIs are devoting their weights to encyclopedic knowledge of Harry Potter, Pokemon, and Reddit trolling.
Is that true that most small language models are fine tuned for erotic role-play?
If a model is trained only on synthetic data, is it still possible it will output things like this? https://x.com/elder_plinius/status/1952958577867669892
By definition, a model can't "know" things that are not somewhere in its training set, unless it can use a tool to query external knowledge.
The problem is that the size of the training set required for a good model is so large that it's really hard to make a good model without including almost all known written text available.
> all known written text available
If Phi-5 is trained only on synthetic data, then info on how to make drugs must be in the synthetic dataset.
Yeah, makes sense. Good observations regarding benchmarks vs. vibes in general, and I didn't know about / hadn't made the connection between the lead of the Phi models going to OpenAI and gpt-oss. It could very well be a similar exercise, plus their "new" prompt-level adherence (system > developer > user). In all the traces I've seen of refusals, the model "quotes" the policy quite religiously. A similar thing was announced for GPT-5.
I think the mention of the "horny people" is warranted; they are an important part of the open-model scene (and the first to explore the idea of "identities / personas" for LLMs, AFAIK). Plenty of fine-tuning know-how trickled from there into "common knowledge".
There's one thing I would have liked to see explored, perhaps: the idea that companies might actually want what -oss offers. While the local LLM communities might want freedom and a horny assistant, businesses absolutely do not want that. In fact they put a lot of effort into implementing (sometimes less than ideal) guardrails to keep the models on track. For very easy use cases like support chatbots and the like, businesses will always prefer something that errs on the side of being less than useful but "safe", rather than have the bot start going crazy with sex/slurs/insults/etc.
I do have a problem with this section though:
> Really open weight, not open source, because the weights are freely available but the training data and code is not.
This is factually incorrect. The -oss models are by definition open source. Apache 2.0 is open source (I think even the purists agree with this). The requirement of sharing "training data and code" is absolutely not a prerequisite for being open source (and historically it was never required; the craze surrounding LLMs suddenly made this a thing. It's not).
Here's the definition of source in "open source":
> "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
Well, for LLMs the weights are the "preferred form for making modifications". The labs themselves modify models the same way you are allowed to by the license! They might use more advanced tools, or better datasets, but in the end the definition still holds. And you get all the other stuff, like the right to modify, re-release, etc. I really wish people would stop proliferating this open-weight nonsense.
Models released under open source licenses are open source: gpt-oss, the Qwens and Mistrals (Apache 2.0), the DeepSeeks (MIT), etc.
Models released under non-open-source licenses also exist, and they're not open source because the licenses under which they're released aren't: Llamas, Gemmas, etc.
No, the preferred way of making modifications is the weights _together_ with training (or fine-tuning) scripts, and the entire evaluation pipeline to measure performance. And the data required to support all of this.
When someone joins your data science team you would give them all this code and data. Not just the weights and say: the weights are the source, modify that to improve the model, I look forward to seeing your MR next week.
EDIT: Heck, sometimes the way to make improvements (modifications) is just to improve the data, and not touch the training code at all. It is often one of the most powerful ways. You still need training code though, and evaluation to measure the impact.
The license gives you the right to modify the weights, how you do the modification is up to you. The rest is in the realm of IP, know-how, etc. Apples and oranges.
You also need the training data, so you can ensure you're not benchmarking on the training set, fine-tuning on the training set (overfitting with extra steps), or otherwise breaking things.
It's not about the preferred way. Else open source software would need to give you its IDE setup, CI/CD setup, access to all internal tools, etc. Software like SQLite doesn't release its full test suite. They paywall the preferred way of making changes, yet they are open source.
>The “source code” for a work means the preferred form of the work for making modifications
The GPL refers to a form of the artifact being released.
The key is whether you consider weights source code. I do not think this is a common interpretation.
> The labs themselves modify models the same as you are allowed to by the license
Do the labs not use source code?
It is a bit like arguing that releasing a binary executable is releasing the source code. One could claim developers modify the binary the same as you are allowed to.
> Do the labs not use source code?
The weights are part of the source code. When running inference on a model you use the architecture, config files and weights together. All of these are released. Weights are nothing but "hardcoded values". The way you reach those values is irrelevant in the license discussion.
Let's take a simple example: I write a chess program that consists of a source file with 10 "if" statements, a config file that maps the variables used in the if statements to entries in a "hardcoded values" file, and that values file storing the actual numbers. It would be a crappy chess program, but I hope you agree that I can release that as open source and no one would bat an eye. You would also be granted the right to edit those hardcoded values, if you wish. You'd perhaps make the chess bot better or worse. But you would be allowed to edit it, just like I would. That's the preferred way of modifying it. Me providing the methods that I used to reach those 10 hardcoded values has zero bearing on my crappy chess bot being open source or not. Do we agree on that?
Now instead of 10 values, make it 100 billion. Hey, that's an LLM!
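A throwaway sketch of what that toy program might look like (hypothetical, just to make the analogy concrete):

    # values.py: the "hardcoded values" file. How these numbers were obtained
    # (hand-tuning, self-play, whatever) is not part of what gets released.
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

    # engine.py: the source file with the "if" statements.
    def evaluate(board):
        """Score a position as my material minus the opponent's.
        `board` is just a list of (piece, is_mine) tuples here."""
        score = 0
        for piece, is_mine in board:
            if is_mine:
                score += PIECE_VALUES[piece]
            else:
                score -= PIECE_VALUES[piece]
        return score

    # Anyone receiving these files can edit PIECE_VALUES directly; whether my
    # tuning method ships alongside them is a separate question.
    print(evaluate([("Q", True), ("R", False), ("P", False)]))  # 9 - 5 - 1 = 3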
> It is a bit like arguing that releasing a binary executable is releasing the source code.
That's the misconception. Weights are not a binary executable. In other words, there isn't another level above weights that the labs use to "compile" the weights. The weights exist from the beginning to the end, and the labs edit the weights if they want to modify the models. And so can you. There isn't a "compilation" step anywhere in the course of training a model.
If you have 10 hardcoded values, you have a binary blob, a common feature particularly in hardware drivers that is opaque and commonly considered to not be fully free unless the instructions for deriving it are also included. It's frequently just an executable, occasionally just configuration information, but difficult to change while (assuming no signing shenanigans) still remaining technically possible.
The training data is the source code and the training process is the compiler. There's a fairly direct metaphor to be made there. Different compilers can have vastly different impacts on the performance of the compiled executable.
Training is obviously the compilation step.
I think source code really only exists in terms of the source code/object code dichotomy, so what "traditional" open source means for model weights is really not obvious if you only go off of traditional definitions. Personally I think the term "open source" shouldn't apply here any more than it would for art or binary code.
Consider the following: it is possible to release binaries under the Apache2 license. Microsoft has, at least at one point, released a binary under the BSD license. These binaries are not open source because they are not source.
This isn't the same argument as given in the article though, so I guess it is a third position.
> Consider the following: it is possible to release binaries under the Apache2 license. Microsoft has, at least at one point, released a binary under the BSD license. These binaries are not open source because they are not source.
Agreed. But weights are not binaries in the licensing context. For weights to be binaries it would imply another layer of abstraction, above weights, that the labs use as the preferred way of modifying the model, and then "compile" it into weights. That layer does not exist. When you train a model you start with the weights (randomly initialised, can be 0 can be 1, can be any value, whatever works best). But you start with the weights. And at every step of the training process you modify those weights. Not another layer, not another abstraction. The weights themselves.
In my opinion, though, they're also not really source code either. They're an artifact of a training process, not code that was written by someone.
> They're an artifact of a training process, not code that was written by someone.
If that were relevant to the licensing discussion, then you'd have to consider all the "generated" parts (interfaces, dataclasses, etc.) of every open source project to be artifacts. Historically, that was never the case. The license doesn't care if a hardcoded value was written by a person or "tuned" via a process. It's still source code if it's the preferred way of modifying said code. And it is. You can totally edit them by hand. It would not work as well (or at all), but you could do it.
There is actually a gray area about what code "counts" as source code to the point where you would consider it "open source" if it were licensed as such. I think if you had a repository consisting of only generated code and not the code used to generate it, it would definitely raise the question of whether it should be considered "source code" or "open source", and I think you could make arguments both ways.
On the other hand, I don't really think that argument then extends to model weights, which are not just some number of steps removed from source code, but just simply not really related to source code.
I mostly agree with your assessment of what we should/shouldn't call open source for models but there is enough grey area to make the other side a valid position and not worthy of being dismissed so easily. I think there is a fine line between model weights and, say, bytecode for an interpreter and I think if you released bytecode dumps under any license it would be called out.
I also believe the four freedoms are violated to some extent (at least in spirit) by just releasing the weights and for some that might be enough to call something not open source. Your "freedom to study how the program works, and change it to make it do what you wish" is somewhat infringed by not having the training data. Additionally, gpt-oss added a (admittedly very minimal) usage policy that somewhat infringes on the first freedom, i.e. "the freedom to run the program as you wish, for any purpose".
You are free to look at every single weight and study how it affects the result. You can see how the model is architected. And you don't need training data to be provided to be able to modify the weights. Software can still be open source even if it isn't friendly to beginners.
I think you could say something remarkably similar about just releasing bytecode as well and I think most people would call foul at that. I don't think it's so cut and dry.
This isn't entirely about being a beginner or not, either. Full fine-tuning without forgetting really does want the training data (or something that is a good replacement). You can do things like LoRA but, depending on your use case, it might not work.
"Good observations regarding the benchmark vs. vibes in general"
Most "vibes" people are missing that it as only has 5B active parameters.
They read 120B and expect way more performance than a 24B parameter model, even though empricaly a 120B model with 5B active parameters is expected to perform right around there.
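Assuming the usual geometric-mean rule of thumb for an MoE's dense-equivalent capacity, that's roughly where the 24B figure comes from:

    # Rough heuristic: dense-equivalent ~ sqrt(total_params * active_params)
    print((120e9 * 5e9) ** 0.5)  # ~2.45e10, i.e. roughly a 24B dense model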