Eisenhower waited until the very last days of his time in office to say anything about it. He took no action. He also neutered the speech by rewriting "military-industrial-congressional complex" down to "military-industrial complex". Yes, I appreciate that he said something at all, but he could have done much, much more.
It reminds me of employees leaving Facebook and saying that what the company is doing is so unethical, after having received massive pay for years.
If anyone could have killed the demon, surely it would have been a reelected president and 5-star "General of the Army". (It's not as though Truman the haberdasher would have had that sort of clout!) But that would have required years of work, and he was more interested in British Petroleum's plans for Iran.
He may have been thinking of his son John's safety and career in the military (somehow John became a general). If they can't get to you any other way, they can always put the screws to your kids. Look at how many presidential (and presidential candidate) children have been awarded bullshit media jobs for which they have no qualifications or aptitude.
Unless your model actually has trillions of parameters (and it doesn't; even GPT-3 only has 175 billion), it is not even possible to overfit on 1.4 trillion training inputs. You can't pigeonhole that many inputs into that few parameters.
Suppose that you train a neural network to predict the next number in an arithmetic sequence (a, a+b, a+2b, a+3b, a+4b, ...). As input it gets two numbers, the previous number and the current number, and it has to predict the next one.
Suppose you had 1.4 trillion examples in the following test set (using a model with 175 billion parameters):
(1,2)->3
(2,3)->4
(3,4)->5
...
Do you think it is possible to overfit and score perfectly on the test set, while failing to generalize?
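To make that concrete, here's a toy sketch in Python (the two-parameter "model" and the numbers are made up purely for illustration):

```python
# A "model" with just two parameters that scores 100% on a test set of this
# shape, while knowing nothing about arithmetic sequences in general.
def toy_model(prev, curr, w=0.0, b=1.0):
    # learned rule: next = w*prev + curr + b; with w=0, b=1 this is just "current + 1"
    return w * prev + curr + b

test_set = [((n, n + 1), n + 2) for n in range(1, 100_000)]   # (1,2)->3, (2,3)->4, ...
print(all(toy_model(p, c) == t for (p, c), t in test_set))    # True: perfect score

# ...and yet it fails on any sequence whose step isn't 1:
print(toy_model(2, 4))    # 5.0, but the next term of 2, 4, 6, ... is 6
print(toy_model(10, 7))   # 8.0, but the next term of 10, 7, 4, ... is 4
```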
I think you've specified this problem in a very strange way. But if you're saying that you're trying to train on the specific dataset where a = 1 and b = 1, then your model will fit the data perfectly with 175 billion parameters. It will also fit the data perfectly with, like, 15 parameters.
If you're trying to fit some more complex space where the starting value and the step are unknown, and each example is three consecutive numbers in the sequence (two given, one to predict), then what you're trying to fit is `f(a, b) = a + 2(b - a)` (or `2b - a`, however you want to represent it), where a and b are now the two given terms. That's a swell function, but if you only give it data that can be equally well represented by `f(a, b) = b + 1`, you're mis-training your model.
But you could once again do that with a model with a dozen parameters. In both cases, the issue isn't overfitting, but misrepresentative data.
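Here's roughly what I mean, as a sketch (numpy least squares, three parameters, toy numbers I made up):

```python
import numpy as np

# Fit next = w1*prev + w2*curr + bias by least squares, using only examples
# where the common difference happens to be 1.
prev = np.arange(1.0, 101.0)
curr = prev + 1.0
nxt  = prev + 2.0
X = np.column_stack([prev, curr, np.ones_like(prev)])

w, *_ = np.linalg.lstsq(X, nxt, rcond=None)   # data is rank-deficient, so this is the minimum-norm fit
print(w)                           # roughly [0, 1, 1]: it learned "current + 1", not "2*current - previous"
print(np.abs(X @ w - nxt).max())   # essentially zero error on the step-1 data...
print(w @ [3.0, 7.0, 1.0])         # ...but predicts about 8 for "3, 7, ?", where the arithmetic answer is 11
```

Both `2b - a` and `b + 1` go through every point of that data, so no amount of it will push the fit toward the more general rule.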
I didn't specify the training set, just the test set. It's possible that your model actually models an arithmetic sequence. Or that it simply overfits. The point is that it doesn't require trillions of parameters to overfit to a trillion-sized test set.
What you need is more parameters than the complexity of the underlying distribution calls for, not more than the number of examples. If the function you're modelling drops down to something linear, a couple of parameters is all it takes.
"Overfitting" is memorizing the training data instead of generalizing. The example you're providing isn't overfitting, it's just generalizing to the wrong function. Overfitting would be if the validation set was, say, 30 random values that you got right, but didn't get other values along the same lines correct.
> I didn't specify the training set, just the test set
Then unless you constructed the training set with the intent of mistraining the model, I think a model that got good accuracy on that validation set would generalize.
> The point is that it doesn't require trillions of parameters to overfit to a trillion-sized test set.
You can't "overfit" a validation set, unless you've done something wrong. Overfitting is, by definition, learning the training set too well such that you fail to generalize to a validation set.
Overfitting is, by definition, learning a model that doesn't generalize to the distribution of inputs you care about. If your validation set has the same distribution as the inputs you care about, then your definition holds. But that's definitely not true in practice. Usually the data you collect won't be exactly representative of the conditions you're looking to test, unless your problem is very simple.
> Overfitting is, by definition, learning a model that doesn't generalize to the distribution of inputs you care about.
No, that's just mis-modelling. Overfitting is specifically doing so in a way that learns the training data too well, at the cost of generalizing. If you try to have a single-layer perceptron classify a nonlinear function, it will fail to generalize. But it certainly isn't "overfitting".
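A quick sketch of that with scikit-learn's Perceptron, using XOR as the stock nonlinear example (sklearn assumed just for illustration):

```python
from sklearn.linear_model import Perceptron

# XOR is not linearly separable, so a single-layer perceptron can't learn it
# no matter how long it trains.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

clf = Perceptron(max_iter=10_000, tol=None).fit(X, y)
print(clf.score(X, y))   # never 1.0: a linear boundary gets at most 3 of these 4 points right
```

It can't even fit its own four training points, let alone generalize; whatever you want to call that, it isn't overfitting.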
Overfitting is not the only kind of mistake you can make when training a model. You've presented a different one: training on misrepresentative data. But that isn't "overfitting", it's just having bad data. Your model isn't "failing to generalize", it has nothing to generalize over.
The classic demonstration of this is that overfitting usually produces an accuracy curve that "frowns" on validation data. Your validation accuracy peaks, but then decreases as the model learns the particulars of the training data instead of the general structure. In your example that won't happen.
Training a model on the wrong problem isn't overfitting. In fact, your example is more like underfitting than overfitting: the model would fail to capture the full complexity of the structure, rather than, as in overfitting, making it more complicated than it really is.
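If you want to see both failure modes side by side, here's the usual toy demonstration (numpy polynomial fits on made-up noisy data; measured as error rather than accuracy, so the "frown" shows up as a valley):

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# 15 noisy samples of a simple curve, plus a held-out validation set.
x_tr = rng.uniform(-1, 1, 15)
y_tr = np.sin(3 * x_tr) + rng.normal(0, 0.3, 15)
x_va = rng.uniform(-1, 1, 500)
y_va = np.sin(3 * x_va) + rng.normal(0, 0.3, 500)

for deg in (1, 3, 5, 9, 14):
    p = Polynomial.fit(x_tr, y_tr, deg)
    tr = np.mean((p(x_tr) - y_tr) ** 2)
    va = np.mean((p(x_va) - y_va) ** 2)
    print(f"degree {deg:2d}: train MSE {tr:.3f}   val MSE {va:.3f}")

# Degree 1 underfits (high error on both sets).  As the degree grows, training
# error only goes down, but validation error bottoms out and then climbs back
# up, because the high-degree fits start chasing the noise in the 15 training
# points.  That climb is what overfitting looks like; underfitting never has it.
```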
And my point is that that's not what overfitting is. Overfitting is a specific problem where the network fails to recognize a commonality in the training set and instead interprets irrelevant details of some subset of training samples (in the extreme, individual samples) as distinct properties.
Your example training set is not filled with noise that the network is picking up on to its detriment. Your example training set is simply not representative of the function you are trying to teach.
I don't have an example training set. I don't have an example model.
My exact point is that if your test set isn't representative of the underlying distribution, then accuracy on the test set doesn't mean that your model isn't overfit.