I'm interested in arguments as to how said flak could qualify as valid criticism.
Security research is inherently adversarial in nature, and it seems fitting to have competing parties doing security research on one another's products.
Presumably, Android development involves some measure of security assessment?
If project zero never spent a day looking at Android, but all their competitors did, I don't see the issue.
If there aren't any/enough competitors, that seems very unlikely to be a security or security-research-related problem.
It goes back to the first time they publicised a Microsoft exploit after the 90-day disclosure window, but before Microsoft had patched it. Microsoft went into full media blitz mode about how terrible P0 was, how insecure they were making everyone, etc.
I basically stopped reading Peter Bright's articles on ArsTechnica because of how he spent a couple weeks writing uninformed articles about how responsible disclosure in general and P0 specifically were terrible.
> Security research is inherently adversarial in nature
Interesting. My perception is that it's often based on bragging rights, which is more about ego than about an adversary. According to that theory, what matters is how deeply you understand systems or how determined you are to go the extra mile to find issues.
This extends to organizations which want to bolster their image by being at the leading edge of research.
Anyway, having an adversary is part of the picture, but what you really care about is not the victory over that adversary but your superiority on the battlefield.
Perhaps I was taking "adversarial" too literally, but to me it normally suggests an antagonistic or hostile attitude toward the other side. For example, if two next door neighbors don't get along and one of them reports any little infraction to their homeowners' association, they are adversarial. It's sort of the opposite of cooperative.
And this is not how I see the motivation and attitude of most security people. For them it is mostly about the satisfaction of (or other inclination toward) understanding how and where something might be vulnerable to exploit. It is a particular type of thinking related to creativity, thinking outside the box, and seeing things from a different perspective. (So basically what Schneier's essay says. Which fits with my point.)
There is nothing sophisticated or clever about a neighbor calling the homeowners' association. What they're interested in is the effect their actions will have on their adversary. But a security researcher doesn't usually care to actually exploit vulnerabilities. Or if they do, it is only to prove that the vulnerability exists, not to gain from it.
So, getting back to the original point, I just don't follow the reasoning that security researchers would prefer to avoid finding holes in their own employer's systems. If they viewed everything as us vs. them, then yes, they would want to take sides and protect their employer. Instead, I think that because what they really care about is understanding vulnerabilities, they would want to understand them wherever they see them, own employer's systems included.
... which is what vulnerability research teams are supposed to do, of course. Finding bugs in other people's products is, effectively, a donation to those projects. Google pays consultants many tens of thousands of dollars a pop to get the same work done for their own products.
The narrative seems to be both that p0 doesn't do research into Google's stuff, and that p0 doesn't post bulletins about vulns in Google's stuff. Both are untrue, but if you point out one, people flip to the other.
The amount of work they put into finding issues isn't relevant, though; it's the benefit Google might be getting from other companies' bad PR. So this situation still seems like a counterexample to me.
It's only a donation if you assume the vulnerability is not cutting edge; otherwise it doesn't seem unfair to label the disclosure as corrosive to the current zeitgeist in terms of user safety. Just because you've managed to marry genuine research and politics doesn't mean that you've eliminated the more general bias produced by researching/disclosing information about opposing products more frequently than your own. At the same time, this also doesn't mean the system is bad; it could be entirely possible that the status quo of security without adversarial research is much worse for the end user.
I don't know what any of this means. The vulnerabilities exist no matter who discovers them, and vulnerability discovery is a service every major vendor pays dearly for. If the optics of discovery are bad for Apple or Microsoft, Apple or Microsoft should spend more to keep from being one-upped by P0.
From what you've said, it seems like the difference is more an issue of framing. You're focused on the effects for the end user, which in my opinion are the same no matter what, and I agree on that. At the same time, if you're publishing exploits that are cutting edge and primarily only apply to the competition, then, because you (in this hypothetical) are a company, the overall effects of the research are self-serving. Why not publish more vulnerabilities that apply to your own systems otherwise? Surely you (again, the hypothetical you) are in a better position than other entities to engage in introspective research that benefits the end user. Obviously there's a possibility that the explanation is "Because the competition has more flaws than our/my own product," but without evidence that such is the case, there isn't a lot of reason for outside critics to give the benefit of the doubt. Also, I'm not a security researcher, so I realize some of my points might not be accurate to the state of the industry. I'm more so trying to highlight the lines of thinking that lead to criticism of P0 and why I don't think they're easy to refute without a certain degree of burden on Google's/Totallyrelatedbutnot-google's part.
I don't understand really any of this. Finding vulnerabilities in your competitors products is in fact a donation to those competitors and, more importantly, their users.
It sounds like the idea that it's a donation is the part where we fundamentally disagree, then. If you expose an unpatched exploit and the number of people using that exploit goes up because of your publication, then you've contributed to worsening the security landscape for whoever the relevant user is. If an exploit is years old, or if there's some other relevant context, then sure, exposing a vulnerability mainly means that the parent company doesn't have to pay for the research. But it's not a favor they asked you to perform, and if you have a competing product with that company, then the question becomes "Why aren't you publishing more about yourself?", since surely no one is in a better position to do security research on your products than yourself or a contractor you hired. It's basically PR-motivated research, and when there's also a lack of transparency, it leads to easy speculation that maybe you're purposefully holding back research on yourself or simply not publishing the more damning research you discover about your own products.
There's no "agreement" or "disagreement" to be had here. Vulnerability researchers don't create vulnerabilities, they find and ultimately eliminate them. Vulnerability research at the level practiced by P0 commands huge daily rates, and Google's competitors actively pay those rates to other researchers. It is simply a fact that what Google is doing is a donation to other vendors and their users.
You can dispute their intentions, but I'm not all that interested in debating those. What you can't do is debate the effect, which is positive.
If you're not interested in debating their intentions, then why frame it as a donation? That by itself is a normative judgement in this context, and it's the reason I brought up framing, since normally two primary characteristics of a donation are that it can be refused and that it is offered in good faith.
This is the same general point that the original comment you replied to was making, which itself was an explanation of why people are distrustful of P0 even if the bottom line is positive for the end user. It's fine if you don't find that political side of it interesting, but it's not as simple as just being a donation.
If it's such great value, then why isn't that effort being directed entirely towards Google products? I get the argument you're making, I think: that it's free, valuable labor. But if that's true, then what explanation is there for primarily publishing about competing products? Even if the argument were purely user safety, surely there would be no reason to publish unpatched problems publicly, which has happened in the past and seems like the sort of thing that a group like P0 could have chosen to handle in a more graceful fashion. Somewhere in the chain of command is a reason for why P0 focuses where it does, and it doesn't seem likely that the reason is "Because we're rolling in gold and Apple and Microsoft could use the money."
There's no amount of axiomatic reasoning you're going to be able to generate to make your argument work. Vulnerability research is expensive. Google is giving it away to its competitors. Those competitors and, much more importantly and to a greater extent, their customers benefit directly. Those aren't so much statements of opinion as they are simple facts.
This is a perfect counterexample of a really nasty privilege escalation in Google's own OS.