Security Policy in the Real World

With respect to reducing risk, managing costs, and overall timely effectiveness of an appsec policy, which approach is better?

Option 1: Scan for everything some tool is capable of finding, fix what’s found
Option 2: Scan for high-fidelity, high-criticality weaknesses only, fix what’s found, then, widen the scanning scope iteratively
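Option 2 amounts to a severity/fidelity filter over scanner output that you loosen on each iteration. A minimal sketch (the finding format and field names here are assumptions for illustration, not any real tool's schema):

```python
# Sketch of option 2: gate on high-severity, high-confidence findings first,
# then widen the filter in later iterations. Hypothetical finding records.

FINDINGS = [
    {"rule": "sql-injection", "severity": "critical", "confidence": "high"},
    {"rule": "hardcoded-secret", "severity": "high", "confidence": "high"},
    {"rule": "weak-hash", "severity": "medium", "confidence": "high"},
    {"rule": "possible-xss", "severity": "high", "confidence": "low"},
]

SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}
CONFIDENCE_RANK = {"high": 2, "medium": 1, "low": 0}

def in_scope(finding, min_severity="high", min_confidence="high"):
    """Keep only high-fidelity, high-criticality findings (iteration 1).

    Widening the scanning scope in later iterations just means lowering
    min_severity / min_confidence once the current backlog is fixed.
    """
    return (SEVERITY_RANK[finding["severity"]] >= SEVERITY_RANK[min_severity]
            and CONFIDENCE_RANK[finding["confidence"]] >= CONFIDENCE_RANK[min_confidence])

# Iteration 1: only critical/high severity at high confidence.
first_pass = [f for f in FINDINGS if in_scope(f)]

# Iteration 2: widen to medium severity, still high confidence only.
second_pass = [f for f in FINDINGS if in_scope(f, min_severity="medium")]
```

The point of the sketch is that "widening scope" is a one-line policy change, not a new program of work, which is what makes the iterative approach tractable.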

Follow-up question: assuming the first question was rhetorical, and we all agree option 2 is (much) better, how do you change the minds of security policy writers inside an organization who take the all-or-nothing approach to security?

"Scan for and fix everything or it’s not secure (aka security policy-compliant)."

As if that’s even possible…

Some security policies put so much faith in security tooling, but when we try to help their authors see the best way to manage software risk, they don’t care to listen.

It’s better to focus on finding and fixing critical flaws across all applications quickly than to go deep on every possible flaw, regardless of severity, which is unavoidably much slower, application by application. There’s also an element of threat modeling here: the criticality of each application and of the data it processes should govern security policy, instead of treating every application with the same security rigor.

My primary question remains: how do you evolve security policy into something smarter?

Any published research on the topic would be most welcome here.