I am new to ML, and I have an intuition question. To fit a linear regression model, we run the regression on the training data to obtain the model's parameters. Then we apply those parameters to the test data and compute our cost. Using GD, we can iterate until we find the best-fitting parameters, i.e. the ones that minimize the cost function.

My question is this: OLS, by definition, gives us the best-fitting line, right? But when we minimize the cost function, we find that the parameters obtained by the regression model weren't the best. Why is that? Is it because of estimation error, and is ML's job here to eliminate that estimation error? Did I even get the process right? Maybe I'm missing something, IDK, please LMK. Thanks.
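To make the question concrete, here is a minimal sketch (toy synthetic data, made up for illustration) comparing the OLS closed-form solution against plain gradient descent on the mean-squared-error cost. On the same training data, both should land on essentially the same parameters:

```python
import numpy as np

# Toy data: y = 2*x + 1 plus noise (synthetic, for illustration only)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.5, 100)

# OLS closed-form solution via the normal equations: (X'X) theta = X'y
X = np.column_stack([np.ones_like(x), x])      # design matrix [1, x]
theta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on the MSE cost J(theta) = (1/n) * ||X theta - y||^2
theta_gd = np.zeros(2)
lr = 0.01
for _ in range(20000):
    grad = (2 / len(y)) * X.T @ (X @ theta_gd - y)  # gradient of MSE
    theta_gd -= lr * grad

print(theta_ols)
print(theta_gd)
```

If the two results agree (up to numerical tolerance), that would suggest GD isn't finding "better" parameters than OLS on the training set; any gap I see would then come from somewhere else, e.g. evaluating on the test set instead.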