Your “probably not” argument gets thinner with every major AI update.
Right, but I’m talking about whether they’re already using it, not whether they will in the future. It’s certainly interesting to speculate about, though. I don’t think we really know how good it will get, or how fast.
Something interesting that’s come up is scaling laws. So far, compute, dataset size, and parameter count each seem to set a floor on how low the error rate can go, regardless of the model’s architecture; whichever one is the bottleneck caps performance. And dataset size and model size appear to need scaling up roughly in tandem to avoid over- or under-fitting. It’s possible, although not guaranteed, that we’re discovering fundamental laws about pattern recognition. Or maybe it’s just a limitation of our current approach.
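To put rough numbers on it, here’s a minimal sketch of the kind of power-law fit people report (roughly the Chinchilla-style form L(N, D) ≈ E + A/N^α + B/D^β; the constants below are illustrative, not authoritative fits). It shows the two points above: growing the model while holding the data fixed eventually hits a floor set by the data term, while scaling both together keeps pushing the loss down toward the irreducible floor.

```python
# Sketch of a Chinchilla-style scaling law: loss as a function of
# parameter count N and dataset size D (tokens).
# All constants are illustrative placeholders, not fitted values.
def loss(n_params: float, n_tokens: float) -> float:
    E = 1.7                  # irreducible loss floor (assumed)
    A, alpha = 400.0, 0.34   # parameter-count term (assumed)
    B, beta = 410.0, 0.28    # dataset-size term (assumed)
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling parameters alone plateaus: the dataset term becomes the bottleneck.
for n in [1e8, 1e9, 1e10, 1e11]:
    print(f"N={n:.0e}, D=1e10 tokens -> loss {loss(n, 1e10):.3f}")

# Scaling data and parameters in tandem keeps improving toward the floor E.
for n, d in [(1e8, 2e9), (1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} tokens -> loss {loss(n, d):.3f}")
```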