A continuing mystery in understanding the empirical success of deep neural networks is their ability to achieve zero training error and generalize well, even when the training data is noisy and there are more parameters than data points. We investigate this overparameterized regime in linear regression, where every solution that minimizes training error interpolates the data, including the noise. We lower-bound the fundamental generalization (mean-squared) error that any interpolating solution must incur in the presence of noise, and show that this bound decays to zero as the number of features grows. Thus, the unavoidable price of fitting the noise becomes negligible, and overparameterization can be beneficial in ensuring harmless interpolation of noise. We discuss two root causes of poor generalization that are complementary in nature: signal “bleeding” into a large number of alias features, and overfitting of noise by parsimonious feature-selecting interpolators. For the sparse linear model with noise, we provide a hybrid interpolating scheme that mitigates both of these issues and achieves order-optimal MSE over all possible interpolating solutions.
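
For concreteness, one minimal sketch of the setting above (the notation $n$, $d$, $X$, $\alpha^*$, $w$, and the choice of the minimum-$\ell_2$-norm interpolator are illustrative, not fixed by this abstract): given $n$ noisy observations $y = X\alpha^* + w \in \mathbb{R}^n$ of a $d$-dimensional linear model with $d > n$ features, every $\hat{\alpha}$ satisfying $X\hat{\alpha} = y$ attains zero training error while necessarily fitting the noise $w$. One such interpolator, assuming $X$ has full row rank, is
\[
\hat{\alpha}_{\ell_2} \;=\; \arg\min_{\alpha \in \mathbb{R}^d} \|\alpha\|_2 \quad \text{subject to } X\alpha = y,
\qquad
\hat{\alpha}_{\ell_2} \;=\; X^\top \left(X X^\top\right)^{-1} y,
\]
and the generalization (mean-squared) error referred to above is the test error $\mathbb{E}_{x}\big[(x^\top \hat{\alpha} - x^\top \alpha^*)^2\big]$ on a fresh feature vector $x$.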