

Nonlinear regression, like linear regression, assumes that the scatter of data around the ideal curve follows a Gaussian or normal distribution. This assumption leads to the familiar goal of regression: to minimize the sum of the squares of the vertical or Y-value distances between the points and the curve. Outliers can dominate the sum-of-the-squares calculation and lead to misleading results. However, we know of no practical method for routinely identifying outliers when fitting curves with nonlinear regression.
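In symbols, this goal can be written as follows (the notation $f(x_i, \theta)$ for the fitted curve and $\hat{\theta}$ for the estimated parameters is introduced here purely for illustration):

$$
\hat{\theta} = \arg\min_{\theta} \sum_{i=1}^{n} \bigl(y_i - f(x_i, \theta)\bigr)^2
$$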

We describe a new method for identifying outliers when fitting data with nonlinear regression. We first fit the data using a robust form of nonlinear regression, based on the assumption that scatter follows a Lorentzian distribution. We devised a new adaptive method that gradually becomes more robust as the method proceeds. To define outliers, we adapted the false discovery rate approach to handling multiple comparisons. We then remove the outliers and analyze the data using ordinary least-squares regression. Because the method combines robust regression and outlier removal, we call it the ROUT method.

When analyzing simulated data, where all scatter is Gaussian, our method falsely detects one or more outliers in only about 1–3% of experiments. When analyzing data contaminated with one or several outliers, the ROUT method performs well at outlier identification, with an average False Discovery Rate of less than 1%.
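The sketch below illustrates one way the robust-fit, FDR-based outlier flagging, and least-squares refit steps described above could be assembled in Python. The model function, the Cauchy loss used as a stand-in for the Lorentzian assumption, the 68.27th-percentile robust scale, and the Benjamini-Hochberg style cut at Q = 1% are illustrative assumptions, not the published ROUT implementation.

```python
# A rough sketch of the pipeline described above, assuming:
#   - a Cauchy ("Lorentzian") loss in scipy.optimize.least_squares for the robust fit,
#   - a robust residual scale from the 68.27th percentile of |residuals|,
#   - a Benjamini-Hochberg style FDR cut at Q to flag outliers,
#   - an ordinary least-squares refit on the remaining points.
# The model and all numeric choices are illustrative, not the published ROUT code.
import numpy as np
from scipy import optimize, stats

def model(x, span, rate, plateau):
    # Hypothetical example model: one-phase exponential decay. Any curve that
    # can be fit by nonlinear regression could be substituted here.
    return span * np.exp(-rate * x) + plateau

def rout_like_fit(x, y, p0, q=0.01):
    # 1. Robust nonlinear fit: the Cauchy loss down-weights large residuals,
    #    mimicking the assumption of Lorentzian scatter.
    robust = optimize.least_squares(lambda p: model(x, *p) - y, p0, loss="cauchy")
    resid = model(x, *robust.x) - y

    # 2. Robust standard deviation of the residuals (68.27th percentile of
    #    the absolute residuals), used to scale each residual.
    rsdr = np.percentile(np.abs(resid), 68.27)
    dof = max(len(x) - len(p0), 1)

    # 3. Two-tailed p value for each point from a t-like ratio, then a
    #    Benjamini-Hochberg style false discovery rate cut at Q.
    p = 2.0 * stats.t.sf(np.abs(resid) / rsdr, dof)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, len(p) + 1) / len(p)
    n_reject = passed.nonzero()[0].max() + 1 if passed.any() else 0
    outliers = np.zeros(len(x), dtype=bool)
    outliers[order[:n_reject]] = True

    # 4. Remove the flagged outliers and refit by ordinary least squares.
    popt, _ = optimize.curve_fit(model, x[~outliers], y[~outliers], p0=p0)
    return popt, outliers
```

In this sketch the robust fit is a single pass; the adaptive scheme described above instead becomes more robust gradually as the method proceeds, and the exact test used to define outliers differs in detail.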
