I'm a big fan of automating as much as you can, but blindly automating processes controlled by metrics may cause more problems than it solves. The idea behind capturing metrics (or, in the case I'm currently working on, writing a suite of unit tests) is to automate the testing so I can make changes at will, run my unit tests, and know everything is still working. If I come to rely on these unit tests (or really any sort of metric) to help me make the business decision of when I am code complete and ready to ship, then unless I'm 100% sure that the unit tests take into account 99.99% of the edge cases, I'm doing the recipients of the software a disservice.

As an efficient coder, I want to take what the customer tells me they need and turn it into the compiled bits that successfully solve their problem in as straight a line as possible. Unit testing and other automated software processes go a long way toward ensuring an efficient path, but as these automated processes are developed we need to pay special attention to making sure they are complete. In a larger sense, if you are capturing KPIs, or Key Performance Indicators, about your organization's performance, are you sure the core measures you capture are indeed complete and accurate? Remember, the purpose of capturing these metrics is to influence business decisions. If your metrics are not accurate, or your measures just can't be trusted, that causes three core problems.

First, it clouds the judgment of the people directing the efforts. They may know they need to do A, but the metrics tell them B really needs to happen; A is clearly the right thing to do, but with B out there as a potential solution, it may dilute the effect of implementing A. Second, it creates a false sense that you are indeed doing something to solve your problems, when all you are really doing is delaying the true solution, costing the most valuable asset in software development: time.
Finally, it takes time to capture metrics, and done properly they are a great tool. If your metrics are worthless, you waste people's time building the infrastructure to capture the metrics, you waste people's time capturing the metrics, and you may even unintentionally sabotage future metrics-capturing initiatives.
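To make the false-confidence problem concrete, here is a minimal sketch. The function and its tests are entirely hypothetical (my invention for illustration, not from any real project): the suite passes on every run, so the "all green" metric says ship it, yet an obvious edge case was never considered.

```python
# Hypothetical example: a shipping-cost function whose test suite is
# always green, yet still incomplete.

def shipping_cost(weight_kg):
    """Flat rate up to 5 kg, then a per-kg surcharge."""
    if weight_kg <= 5:
        return 10.0
    return 10.0 + (weight_kg - 5) * 2.0

# The suite covers the happy paths, so every run reports success...
def test_flat_rate():
    assert shipping_cost(3) == 10.0

def test_surcharge():
    assert shipping_cost(7) == 14.0

# ...but nothing checks invalid input. shipping_cost(-4) quietly
# returns 10.0, and the metric "all tests pass" tells us we're done.

if __name__ == "__main__":
    test_flat_rate()
    test_surcharge()
    print("all tests pass")
```

The metric itself (tests passing) is accurately reported; it is the *completeness* of what it measures that makes it trustworthy or not.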
In a previous post I discussed the race car analogy, where the drivers have a significant amount of safety gear in their cars. They need to be 99.99% sure that if they get in an accident the gear will do its job and protect them. If they can rely on their gear, they can take more risks to give themselves a competitive edge. Are you sure the test cases, scenarios, and metrics you are employing would help you prevent or survive a crash and give you a competitive edge?