Have you ever run into false positives from a static source code analysis tool? What’s the best way to identify those so devs can focus on fixing real issues?
Senior Data Scientist in Miscellaneous, 2 months ago
In my humble opinion, the question itself is misleading. Classical statistical tests assume a probability of being wrong (a "probability of error"). That's why one rejects a hypothesis only when that probability drops below a certain threshold (most often 5%). That probability covers two cases: either the hypothesis is correct but the data sample is not representative, or the hypothesis is wrong but the data sample suggests otherwise (like published studies claiming red wine or coffee supports one's health).

Chief Technical Officer in Software, 2 months ago
The best way is for the dev to review the finding and work out whether it is a false positive, then discuss with their team whether to mark it as such or to restructure the code so that it doesn't get flagged in the future. Even false positives can point to a code weakness that should be investigated. There will always be a low percentage of false positives; that is just life.
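The review-and-mark workflow described above can be sketched as a simple triage pass that separates findings the team has already dismissed from those still needing developer attention. This is only an illustration under assumptions: the finding fields and the "false-positive" status label here are hypothetical placeholders, not any particular analysis tool's output format.

```python
# Minimal triage sketch: split static-analysis findings into those needing
# review and those the team has already marked as false positives.
# Field names and status values are hypothetical, not tied to any real tool.

def triage(findings):
    """Return (to_review, dismissed) lists from a list of finding dicts."""
    to_review = [f for f in findings if f.get("status") != "false-positive"]
    dismissed = [f for f in findings if f.get("status") == "false-positive"]
    return to_review, dismissed

findings = [
    {"id": 1, "rule": "sql-injection", "status": "new"},
    {"id": 2, "rule": "unused-variable", "status": "false-positive"},
    {"id": 3, "rule": "hardcoded-secret", "status": "new"},
]

to_review, dismissed = triage(findings)
print(len(to_review), len(dismissed))  # 2 1
```

In practice the "dismissed" list is still worth a periodic audit, since, as noted above, even a false positive can flag a genuine weakness in how the code is structured.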