
For example, I used the open model called GPT-OSS-Swallow, trained by a Japanese university. I used only the small variant, even though this machine can of course run the 120-billion-parameter one. But I found that for most of my daily tasks, the 20-billion-parameter one is more than good enough. So I think the point here is whether you're optimizing for some abstract maximum score, or whether you're satisficing, meaning that it's good enough. And I think we had already passed the "good enough" point by this year: for most of my daily use, models of 20 billion parameters or fewer are good enough.
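
To make this concrete, here is a minimal sketch of how one might run a model of that size locally with Hugging Face transformers. The repository ID below is an assumption for illustration, not the confirmed checkpoint name; substitute the actual GPT-OSS-Swallow repository from the model hub.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# Assumes a GPU (or enough RAM) to hold a ~20B-parameter model.
from transformers import pipeline

# Hypothetical repository ID -- replace with the real GPT-OSS-Swallow checkpoint.
model_id = "tokyotech-llm/gpt-oss-swallow-20b"

chat = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",   # spread layers across available GPU/CPU memory
    torch_dtype="auto",  # use the precision the checkpoint was saved in
)

# Chat-style input; the pipeline applies the model's chat template.
messages = [{"role": "user", "content": "Summarize these meeting notes in three bullet points."}]
out = chat(messages, max_new_tokens=256)

# The pipeline returns the full conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```

For everyday tasks like this, the point stands: the smaller model answers quickly on local hardware, and the quality gap to the 120-billion-parameter variant rarely matters.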