This was meant to be detached from commercial use, just on “training models”.
Do you think it’s a good idea to adapt licenses to be able to disallow training models on the source code? Do you think this could be enforced? If so, how?
If these were links to issues that could be reacted to, I’d totally do that.
Yes, exactly, they’re similar. I guess there’s a universal desire for tree-like data stores.
If you’re using a Linux distribution: are you familiar with gsettings or some equivalent?
Haha, and Reddit of course asks you the same. But I have the feeling it’s a more widespread practice nowadays. I haven’t seen it that strongly in Europe yet, though. Seems like there are more protective data-regulation laws there.
Why? I played it years ago but don’t remember tears.
Who would buy consumer grade hardware for machine learning?