I recently read a very interesting thesis by two students at BTH in Sweden. The title is Predicting Fault Inflow in Highly Iterative Software Development Processes. They apply different predictive models to real projects and see how well each one approximates the number of faults that appear over time. You have probably heard enthusiastic measurement people who try to convince you that they have the best answer. I won't go into details regarding the actual measurements but will go directly to the conclusions. In brief, they found that the S-curve was the worst of the models compared. Their measurements showed that more complex models did not necessarily give more accurate results, and that a simple linear model was a valid alternative.
So IF you need to measure, use a simple linear model; it will be good enough. And in my experience, bug measurements can be one factor in measuring progress, but they are seldom very exact and never give you the full picture.
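To make the idea concrete, here is a minimal sketch of what such a linear fit could look like. The iteration numbers and fault counts are made up for illustration; they are not taken from the thesis, and the point is only that a straight-line trend is about as much machinery as you need.

```python
# A minimal sketch of the "simple linear model" idea: fit a straight line to
# cumulative fault counts per iteration and extrapolate one iteration ahead.
# The numbers below are hypothetical, purely for illustration.
import numpy as np

# Hypothetical cumulative fault counts after each of the first 8 iterations.
iterations = np.arange(1, 9)
cumulative_faults = np.array([12, 25, 41, 53, 68, 80, 96, 110])

# Ordinary least-squares fit of a first-degree polynomial (a straight line).
slope, intercept = np.polyfit(iterations, cumulative_faults, deg=1)

# Extrapolate to the next iteration -- a rough trend, not a precise forecast.
next_iteration = 9
predicted = slope * next_iteration + intercept
print(f"Fault inflow trend: ~{slope:.1f} new faults per iteration")
print(f"Predicted cumulative faults after iteration {next_iteration}: {predicted:.0f}")
```

A few lines of least squares is the whole model, which is rather the point: if this is good enough, the extra complexity of an S-curve buys you very little.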
Remember: it does not matter how exact your model is when the measurements are wrong to start with!