* Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: George Dunlap @ 2012-07-06 16:46 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Lars Kurth, Matt Wilson,
	xen-devel@lists.xen.org

We've had a number of viewpoints expressed, and now we need to figure
out how to move forward in the discussion.

One thing we all seem to agree on is that with regard to the public
disclosure and the wishes of the discloser:
* In general, we should default to following the wishes of the discloser
* We should have a framework available to advise the discloser of a
reasonable embargo period if they don't have strong opinions of their
own (many have listed the oCERT guidelines)
* Disclosing early against the wishes of the discloser is possible if
the discloser's request is unreasonable, but should only be considered
in extreme situations.

The next thing to decide, it seems to me, concerns pre-disclosure:
are we going to have a pre-disclosure list (to whom we send details
before the public disclosure), and if so, who is going to be on it?
Then we can start filling in the details.

What I propose is this.  I'll try to summarize the different options
and angles discussed.  I will also try to synthesize the different
arguments people have made and make my own recommendation.  Assuming
that no creative new solutions are introduced in response, I think we
should take an anonymous "straw poll", just to see what people think
about the various options.  If that shows a strong consensus, then we
should have a formal vote.  If it does not show consensus, then we'll
at least be able to discuss the issue more constructively (by avoiding
solutions no one is championing).

So below is my summary of the options and the criteria that have been
brought up so far.  It's fairly long, so I will give my own analysis
and recommendation in a separate mail, perhaps in a day or two.  I
will also be working with Lars, over the next day or two, to form a
straw poll where members of the list can informally express their
preferences, so we can see where we stand in terms of agreement.

= Proposed options =

At a high level, I think we basically have six options to consider.

In all cases, I think that we can make a public announcement that
there *is* a security vulnerability, and the date we expect to
publicly disclose the fix, so that anyone who has not received a
private disclosure can be prepared to apply it as soon as possible.

1. No pre-disclosure list.  People are brought in only to help produce
a fix.  The fix is released to everyone publicly when it's ready (or,
if the discloser has asked for a longer embargo period, when that
embargo period is up).

2. Pre-disclosure list consists only of software vendors -- people who
compile and ship binaries to others.  No updates may be given to any
user until the embargo period is up.

3. Pre-disclosure list consists of software vendors and some subset of
privileged users (e.g., service providers above a certain size).
Privileged users will be provided with patches at the same time as
software vendors.  However, they will not be permitted to update their
systems until the embargo period is up.

4. Pre-disclosure list consists of software vendors and privileged
users. Privileged users will be provided with patches at the same time
as software vendors.  They will be permitted to update their systems
at any time.  Software vendors will be permitted to send code updates
to service providers who are on the pre-disclosure list.  (This is the
status quo.)

5. Pre-disclosure list is open to any organization (perhaps with some
minimal entrance criteria, like having some form of incorporation, or
having registered a domain name).  Members of the list may update
their systems at any time; software vendors will be permitted to send
code updates to anyone on the pre-disclosure list.

6. Pre-disclosure list open to any organization, but no one permitted
to roll out fixes until the embargo period is up.

= Criteria =

I think there are several criteria we need to consider.

* _Risk of being exploited_.  The ultimate goal of any pre-disclosure
process is to try to minimize the total risk for users of being
exploited.  That said, any policy decision must take into account both
the benefits in terms of risk reduction and the other costs of
implementing the policy.

To simplify things a bit, I think there are two kinds of risk.
Between the time a vulnerability has been publicly announced and the
time a user patches their system, that user is "publicly vulnerable"
-- running software that contains a public vulnerability.  However,
the user was vulnerable before that; they were vulnerable from the
time they deployed the system with the vulnerability.  I will call
this "privately vulnerable" -- running software that contains a
non-public vulnerability.
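
(As a purely illustrative aside -- not part of any proposal, and with
invented dates -- the two windows for a single user might be computed
like this:

    # Toy model with invented dates: the two exposure windows for a
    # single deployment.  Illustrative only; not actual Xen data.
    from datetime import date

    deployed  = date(2012, 1, 10)  # vulnerable system goes live
    disclosed = date(2012, 6, 12)  # vulnerability made public
    patched   = date(2012, 6, 14)  # this user applies the fix

    privately_vulnerable = (disclosed - deployed).days  # non-public bug
    publicly_vulnerable  = (patched - disclosed).days   # public, unpatched

    print("privately vulnerable for %d days" % privately_vulnerable)
    print("publicly vulnerable for %d days" % publicly_vulnerable)

In this example the private window dwarfs the public one; whether that
translates into more actual risk is exactly the question below.)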

Now at first glance, it would seem obvious that being publicly
vulnerable carries a much higher risk of exploitation than being
privately vulnerable.  After all, to exploit a vulnerability you need
to have malicious intent, the skills to leverage a vulnerability into
an exploit, and you need to know about the vulnerability.  By
announcing it publicly, a much greater number of people with malicious
intent and the requisite skills will now know about the vulnerability;
surely this increases the chances of someone actually being exploited.

However, one should not underestimate the risk of private
vulnerability.  Black hats prize and actively look for vulnerabilities
which have not yet been made public.  There is, in fact, a black
market for such "0-day" exploits.  If your infrastructure is at all
valuable, black hats have already been looking for the bug which makes
you vulnerable; you have no way of knowing whether they have found it
yet.

In fact, one could make the argument that publicly announcing a
vulnerability along with a fix makes the vulnerability _less_ valuable
to black hats.  Developing an exploit from a vulnerability requires a
significant amount of effort, and you know that security-conscious
service providers will be working as fast as possible to close the
hole.  Why would you spend your time and energy on an exploit that's
only going to be useful for a day or two at most?

Ultimately the only way to say for sure would be to talk to people who
know the black hat community well.  But we can conclude this: private
vulnerability is a definite risk which needs to be considered when
minimizing total risk.

Another thing to consider is how the nature of the pre-disclosure and
public disclosure affect the risk.  For pre-disclosure, the more
individuals have access to pre-disclosure information, the higher the
risk that the information will end up in the hands of a black-hat.
Having a list anyone can sign up to, for instance, may be little more
secure than a quiet public disclosure.

For public disclosure, the nature of the disclosure may affect the
risk, or the perception of risk, materially.  If the fix is simply
checked into a public repository without fanfare or comment, it may
not raise the risk of public vulnerability significantly; while if the
fix is announced in press releases and on blogs, the _perception_ of
the risk will undoubtedly increase.

* _Fairness_.  Xen is a community project and relies on the good-will
of the community to continue.  Giving one sub-group of our users an
advantage over another sub-group will be costly in terms of community
good will.  Furthermore, depending on what kind of sub-group we have
and how it's run, it may well be considered anti-competitive and
illegal in some jurisdictions.  Some might say we should never
consider such a thing.  At the very least, doing so should be very
carefully weighed to make sure the benefit is worth the risk.

The majority of this document will focus on the impact of the policy
on actual users.  However, I think it is also legitimate to consider
the impact of the policies on software vendors.  Regardless of
the actual risk to users, the _perception_ of risk may have a
significant impact on the success of some vendors over others.

It is in fact very difficult to achieve perfect fairness between all
kinds of parties.  However, as much as possible, unfairness should be
based on decisions over which the parties themselves have a reasonable
choice.  For instance, a slight advantage for those compiling their
own hypervisor directly from xen.org, rather than using a software
vendor, might be tolerable because 1) those receiving from software
vendors may have other advantages not available to those consuming
directly, and 2) anyone can switch to pulling directly from xen.org if
they wish.

* _Administrative overhead_.  This comprises a number of different
aspects: for example, how hard is it to come up with a precise and
"fair" policy?  How much effort will it be for xen.org to determine
whether or not someone should be on the list?

Another question has to do with robustness of enforcement.  If there
is a strong incentive for people on the list to break the rules
("moral hazard"), then we need to import a whole legal framework: how
do we detect breaking the rules?  Who decides that the rules have
indeed been broken, and decides the consequences?  Is there an appeals
process?  At what point is someone who has broken the rules in the
past allowed back on the list?  What are the legal, project, and
community implications of having to do this, and so on?  All of this
will impose a much heavier burden on not only this discussion, but
also on the xen.org security team.

(Disclaimer: I am not a lawyer.) It should be noted that because of
the nature of the GPL, we cannot impose additional contractual
limitations on the re-distribution of a GPL'ed patch, and thus we
cannot seek legal redress against those who re-distribute such a patch
(or resulting binaries) in a way that violates the pre-disclosure
policy.  For the purposes of this discussion, however, I am going to
assume that we can choose to remove them from the pre-disclosure list
as a result.

I think that covers the main points that have been brought up in the
discussion.  Please feel free to give feedback.  Next week I will
probably attempt to give an analysis, applying these criteria to the
different options.  I haven't yet come up with what I think is a
satisfactory conclusion.

 -George


* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Joanna Rutkowska @ 2012-07-08  7:30 UTC (permalink / raw)
  To: George Dunlap
  Cc: Lars Kurth, xen-devel@lists.xen.org, Matt Wilson, Jan Beulich,
	Stefano Stabellini


On 07/06/12 18:46, George Dunlap wrote:
> Another question has to do with robustness of enforcement.  If there
> is a strong incentive for people on the list to break the rules
> ("moral hazard"), then we need to import a whole legal framework: how
> do we detect breaking the rules? 

1) Realizing that somebody released patched binaries during the embargo
is simple.

2) Detecting that somebody patched their systems might be harder (after
all, we're not going to perform pen-tests on EC2 systems and the like,
right? ;)

3) Detecting that somebody sold info about the bug/exploit on the black
market might be prohibitively hard -- the only thing that might
*somehow* help is the use of some smart watermarking (e.g. of the proof
of concept code). Of course, if a person fully understands the
bug/exploit, she would be able to recreate it from scratch herself, and
then sell it to the bad guys.

On the other hand, #2 above seems like the least problematic for the
safety of others. After all, if the proverbial AWS folks patch their
systems quietly, it doesn't immediately give others (the bad guys)
access to the info about the bug, because nobody external should
(normally) have access to the (running) binaries on the providers'
machines. So, perhaps #3 is of biggest concern to the community.
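
To sketch what I mean by watermarking -- this is an illustration of
the idea only, not a concrete proposal; the function names and the
"secret" are made up:

    # Illustrative sketch: per-recipient watermarking of PoC code, so
    # that a leaked copy can be traced back to its recipient.
    import hashlib

    def watermark_poc(poc_source, recipient, secret):
        # Derive a stable per-recipient tag and embed it as a comment
        # that does not change the PoC's behaviour.
        tag = hashlib.sha256(secret + recipient.encode()).hexdigest()[:12]
        return poc_source + "\n/* build-id: %s */\n" % tag

    def identify_leaker(leaked_copy, recipients, secret):
        # Match a leaked copy back to whoever received it.
        for r in recipients:
            tag = hashlib.sha256(secret + r.encode()).hexdigest()[:12]
            if "build-id: %s" % tag in leaked_copy:
                return r
        return None

Again, anyone who fully understands the bug can rewrite the PoC from
scratch and drop the marker, so this would only catch careless leaks.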

joanna.



* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: George Dunlap @ 2012-07-09  9:23 UTC (permalink / raw)
  To: Joanna Rutkowska
  Cc: Stefano Stabellini, Lars Kurth, Jan Beulich, Matt Wilson,
	xen-devel@lists.xen.org

On Sun, Jul 8, 2012 at 8:30 AM, Joanna Rutkowska
<joanna@invisiblethingslab.com> wrote:
> On 07/06/12 18:46, George Dunlap wrote:
>> Another question has to do with robustness of enforcement.  If there
>> is a strong incentive for people on the list to break the rules
>> ("moral hazard"), then we need to import a whole legal framework: how
>> do we detect breaking the rules?
>
> 1) Realizing that somebody released patched binaries during the embargo
> is simple.
>
> 2) Detecting that somebody patched their systems might be harder (after
> all, we're not going to perform pen-tests on EC2 systems and the like,
> right? ;)
>
> 3) Detecting that somebody sold info about the bug/exploit on the black
> market might be prohibitively hard -- the only thing that might
> *somehow* help is the use of some smart watermarking (e.g. of the proof
> of concept code). Of course, if a person fully understands the
> bug/exploit, she would be able to recreate it from scratch herself, and
> then sell it to the bad guys.
>
> On the other hand, #2 above seems like the least problematic for the
> safety of others. After all, if the proverbial AWS folks patch their
> systems quietly, it doesn't immediately give others (the bad guys)
> access to the info about the bug, because nobody external should
> (normally) have access to the (running) binaries on the providers'
> machines. So, perhaps #3 is of biggest concern to the community.

The reason I brought up the issue above didn't have so much to do with
the risk of people leaking it as with evaluating the proposals that
had "No roll-out is allowed until the patch date".  There's probably
little incentive or ability for the average programmer / IT person to
sell the bug on the black market.  (I have no idea how I would begin
to go about it, for instance.)  However, if we had a "no roll-out
during embargo period" rule, there would be a huge incentive for
people or organizations to "cheat" by rolling the fix out early,
giving them an advantage both over those not on the list and over
those on the list who are not cheating.  So from a security
perspective, of course #3 is the most important; but as a community
project with a wide range of users (many of whom are both small and
active), #2 is what I am most concerned about.

BTW, Joanna, do you have any opinions / input on the argument that
disclosure does not significantly increase risk, because patched
systems mean that the vulnerability has reduced value to black hats?

 -George


* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Joanna Rutkowska @ 2012-07-09 11:31 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Lars Kurth, Jan Beulich, Matt Wilson,
	xen-devel@lists.xen.org


On 07/09/12 11:23, George Dunlap wrote:
> On Sun, Jul 8, 2012 at 8:30 AM, Joanna Rutkowska
> <joanna@invisiblethingslab.com> wrote:
>> [...]
> 
> The reason I brought up the issue above didn't have so much to do with
> the risk of people leaking it as with evaluating the proposals that
> had "No roll-out is allowed until the patch date".  There's probably
> little incentive or ability for the average programmer / IT person to
> sell the bug on the black market.  (I have no idea how I would begin
> to go about it, for instance.)

If you're in the security industry (going to conferences, etc.) you
certainly know the right people who would be delighted to buy exploits
from you, believe me ;) Probably most Xen developers don't fit into this
crowd, true, but then again, do you think it would be so hard for an
interested organization to approach one of the Xen developers on the
pre-disclosure list? How many would resist if they had a chance to cash
in some 7-figure number for this (I read in the press that hot
bugs/exploits actually sell for this amount)? The US gov apparently
invested lots of money in creating Stuxnet and Flame (and probably
other malware) -- do you think the Chinese would not be interested in
owning some of the AWS machines in return?

> However, if we had a "no roll-out
> during embargo period" rule, there would be a huge incentive for
> people or organizations to "cheat" by rolling the fix out early,
> giving them an advantage both over those not on the list and over
> those on the list who are not cheating.  So from a security
> perspective, of course #3 is the most important; but as a community
> project with a wide range of users (many of whom are both small and
> active), #2 is what I am most concerned about.
> 

Ah, I see, so you're concerned about an unfair advantage that a large
Xen-based service company (that happens to be on the list) might have
over the others? Sure, I agree.

But then, on the other hand, it seems to me that an attack against some
large service provider, such as AWS, might have quite fatal
consequences. So, letting them patch their systems (especially given
that this shouldn't "leak out" info about the bug to the world) might
be a reasonable compromise... As a normal citizen, I think I would
sleep better knowing that important players can patch before the others.

> BTW, Joanna, do you have any opinions / input on the argument that
> disclosure does not significantly increase risk, because patched
> systems mean that the vulnerability has reduced value to black hats?
> 

I'm not quite sure I understand this question -- can you elaborate?

joanna.



* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Joanna Rutkowska @ 2012-07-09 13:25 UTC (permalink / raw)
  To: George Dunlap
  Cc: Lars Kurth, xen-devel@lists.xen.org, Matt Wilson, Jan Beulich,
	Stefano Stabellini


On 07/09/12 13:31, Joanna Rutkowska wrote:
> [...]
> If you're in the security industry (going to conferences, etc.) you
> certainly know the right people who would be delighted to buy exploits
> from you, believe me ;) Probably most Xen developers don't fit into this
> crowd, true, but then again, do you think it would be so hard for an
> interested organization to approach one of the Xen developers on the
> pre-disclosure list? How many would resist if they had a chance to cash
> in some 7-figure number for this (I read in the press that hot
> bugs/exploits actually sell for this amount)?

(Correction: I meant a 6-figure number)



* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Keir Fraser @ 2012-07-09 13:35 UTC (permalink / raw)
  To: Joanna Rutkowska, George Dunlap
  Cc: Stefano Stabellini, Lars Kurth, Jan Beulich, Matt Wilson,
	xen-devel@lists.xen.org

On 09/07/2012 14:25, "Joanna Rutkowska" <joanna@invisiblethingslab.com>
wrote:

>> If you're in the security industry (going to conferences, etc.) you
>> certainly know the right people who would be delighted to buy exploits
>> from you, believe me ;) Probably most Xen developers don't fit into this
>> crowd, true, but then again, do you think it would be so hard for an
>> interested organization to approach one of the Xen developers on the
>> pre-disclosure list? How many would resist if they had a chance to cash
>> in some 7-figure number for this (I read in the press that hot
>> bugs/exploits actually sell for this amount)?
> 
> (Correction: I meant a 6-figure number)

Thought I was in the wrong end of the business there for a while. ;)


* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Joanna Rutkowska @ 2012-07-09 13:40 UTC (permalink / raw)
  To: Keir Fraser
  Cc: Matt Wilson, Stefano Stabellini, George Dunlap,
	xen-devel@lists.xen.org, Lars Kurth, Jan Beulich


On 07/09/12 15:35, Keir Fraser wrote:
> On 09/07/2012 14:25, "Joanna Rutkowska" <joanna@invisiblethingslab.com>
> wrote:
> 
>>> [...]
>>
>> (Correction: I meant a 6-figure number)
> Thought I was in the wrong end of the business there for a while. ;)

:) Yeah, I actually re-read my message when reading my 'xen-devel'
folder, and spotted the typo. A few hundred grand for an exploit --
still not bad IMHO...


joanna.



* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Tim Deegan @ 2012-07-09 13:51 UTC (permalink / raw)
  To: Joanna Rutkowska
  Cc: Jan Beulich, Stefano Stabellini, George Dunlap,
	xen-devel@lists.xen.org, Lars Kurth, Matt Wilson

At 13:31 +0200 on 09 Jul (1341840671), Joanna Rutkowska wrote:
> If you're in the security industry (going to conferences, etc.) you
> certainly know the right people who would be delighted to buy exploits
> from you, believe me ;) Probably most Xen developers don't fit into this
> crowd, true, but then again, do you think it would be so hard for an
> interested organization to approach one of the Xen developers on the
> pre-disclosure list? How many would resist if they had a chance to cash
> in some 7-figure number for this (I read in the press that hot
> bugs/exploits actually sell for this amount)?

I think the argument is that an exploit that's going to be public (and
patched) in the next couple of weeks would not fetch the same kind of
price as an unknown attack that can be kept for later.

OTOH, I'm sure it's worth something for the chance to get in early and
install a rootkit, or just crash your rivals' systems for the bad
publicity.

I'm not sure there's an enormous difference between a leaky
pre-disclosure list and full disclosure, but FWIW I'm in favour of
(a) having a list, and (b) keeping the embargo at no more than two weeks.

Tim.


* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Joanna Rutkowska @ 2012-07-09 14:08 UTC (permalink / raw)
  To: Tim Deegan
  Cc: Jan Beulich, Stefano Stabellini, George Dunlap,
	xen-devel@lists.xen.org, Lars Kurth, Matt Wilson


On 07/09/12 15:51, Tim Deegan wrote:
> At 13:31 +0200 on 09 Jul (1341840671), Joanna Rutkowska wrote:
>> [...]
> I think the argument is that an exploit that's going to be public (and
> patched) in the next couple of weeks would not fetch the same kind of
> price as an unknown attack that can be kept for later.

It depends on the type of exploit. For client-side exploits, perhaps
you're right. But for infrastructure attacks it's a different story --
with an exploit such as Rafal's, I could have *silently* exploited
lots of AWS machines and installed backdoors in their hypervisors/dom0.
The fact that they will patch the bug two weeks later might be
irrelevant then.

After all, how are you going to check whether your physical server has
been compromised? Most people don't use any form of trusted boot, but
even if they did, it's not a silver bullet, as we have demonstrated a
few times in a row. And if you don't have trusted boot, like most
people, you have very little chance of detecting a custom-made
backdoor. Even if you are allowed to reboot the machine and boot "known
good binaries", which often you cannot do, are you going to manually
audit all the firmware, ACPI tables, etc.? Not to mention the integrity
of the actual VMs, which might also have been compromised (and checking
the integrity of a running OS, such as Linux or Windows, is just
undoable).

That said, 2 weeks might be a bit short to prepare such an advanced
attack. In this respect, it would probably be beneficial to keep the
embargo period as short as possible (while still allowing important
players to patch before the others). 1 week perhaps?

joanna.



* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Stefano Stabellini @ 2012-07-12 16:34 UTC (permalink / raw)
  To: Joanna Rutkowska
  Cc: Jan Beulich, Stefano Stabellini, George Dunlap, Tim (Xen.org),
	xen-devel@lists.xen.org, Lars Kurth, Matt Wilson

On Mon, 9 Jul 2012, Joanna Rutkowska wrote:
> On 07/09/12 15:51, Tim Deegan wrote:
> > At 13:31 +0200 on 09 Jul (1341840671), Joanna Rutkowska wrote:
> >> [...]
> > I think the argument is that an exploit that's going to be public (and
> > patched) in the next couple of weeks would not fetch the same kind of
> > price as an unknown attack that can be kept for later.
> 
> It depends on the type of exploit. For client-side exploits, perhaps
> you're right. But for infrastructure attacks it's a different story --
> with an exploit such as Rafal's, I could have *silently* exploited
> lots of AWS machines and installed backdoors in their hypervisors/dom0.
> The fact that they will patch the bug two weeks later might be
> irrelevant then.
> 
> After all, how are you going to check whether your physical server has
> been compromised? Most people don't use any form of trusted boot, but
> even if they did, it's not a silver bullet, as we have demonstrated a
> few times in a row. And if you don't have trusted boot, like most
> people, you have very little chance of detecting a custom-made
> backdoor. Even if you are allowed to reboot the machine and boot "known
> good binaries", which often you cannot do, are you going to manually
> audit all the firmware, ACPI tables, etc.? Not to mention the integrity
> of the actual VMs, which might also have been compromised (and checking
> the integrity of a running OS, such as Linux or Windows, is just
> undoable).
> 
> That said, 2 weeks might be a bit short to prepare such an advanced
> attack. In this respect, it would probably be beneficial to keep the
> embargo period as short as possible (while still allowing important
> players to patch before the others). 1 week perhaps?

I agree on the short embargo period, and I am not against having a list.

However, I don't think that the list should be limited to the "important
players". How do we define an "important player"?
If we decide that important players are the big ones, suddenly big
players become the only ones that can be entrusted with sensitive
information.

Any distro can join linux-distros, no matter its size.


* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Joanna Rutkowska @ 2012-07-12 16:47 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Jan Beulich, George Dunlap, Tim (Xen.org),
	xen-devel@lists.xen.org, Lars Kurth, Matt Wilson


On 07/12/12 18:34, Stefano Stabellini wrote:
> On Mon, 9 Jul 2012, Joanna Rutkowska wrote:
>> [...]
> I agree on the short embargo period, and I am not against having a list.
> 
> However, I don't think that the list should be limited to the "important
> players". How do we define an "important player"?
> If we decide that important players are the big ones, suddenly big
> players become the only ones that can be entrusted with sensitive
> information.
> 
> Any distro can join linux-distros, no matter its size.

I didn't say the list should be limited to the important players -- I
said it likely does no harm, from the security standpoint, if we
allowed select "important" _service providers_ to patch before others
(because, again, the world doesn't see what Xen binaries are running on
their servers, and so this doesn't leak out info about the bug, at least
it shouldn't in most cases).

I think that the list should essentially contain only some core Xen
devels and all the software vendors that _are known_ to build and
distribute products on top of Xen (such as qubes-os.org ;) Perhaps the
"important" service providers could be brought in on a case-by-case basis.

joanna.



* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Stefano Stabellini @ 2012-07-12 17:00 UTC (permalink / raw)
  To: Joanna Rutkowska
  Cc: Jan Beulich, Stefano Stabellini, George Dunlap, Tim (Xen.org),
	xen-devel@lists.xen.org, Lars Kurth, Matt Wilson

On Thu, 12 Jul 2012, Joanna Rutkowska wrote:
> On 07/12/12 18:34, Stefano Stabellini wrote:
> > [...]
> > I agree on the short embargo period, and I am not against having a list.
> > 
> > However, I don't think that the list should be limited to the "important
> > players". How do we define an "important player"?
> > If we decide that important players are the big ones, suddenly big
> > players become the only ones that can be entrusted with sensitive
> > information.
> > 
> > Any distro can join linux-distros, no matter its size.
> 
> I didn't say the list should be limited to the important players -- I
> said it likely does no harm, from the security standpoint, if we
> allowed select "important" _service providers_ to patch before others
> (because, again, the world doesn't see what Xen binaries are running on
> their servers, and so this doesn't leak out info about the bug, at least
> it shouldn't in most cases).
> 
> I think that the list should essentially contain only some core Xen
> devels and all the software vendors that _are known_ to build and
> distribute products on top of Xen (such as qubes-os.org ;) Perhaps the
> "important" service providers could be brought in on a case-by-case basis.

I understand now. This is very similar to having two lists, one for
software vendors and another one for service providers.

While all the security issues would be discussed on the software vendors
list, only the most critical ones would be sent to the service providers
list?

But the key question remains: would we allow a small service provider to
join the service providers list? I think that we should.


* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Joanna Rutkowska @ 2012-07-12 17:22 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Jan Beulich, George Dunlap, Tim (Xen.org),
	xen-devel@lists.xen.org, Lars Kurth, Matt Wilson


On 07/12/12 19:00, Stefano Stabellini wrote:
> On Thu, 12 Jul 2012, Joanna Rutkowska wrote:
>> [...]
> I understand now. This is very similar to having two lists, one for
> software vendors and another one for service providers.
> 
> While all the security issues would be discussed on the software vendors
> list, only the most critical ones would be sent to the service providers
> list?
> 
Now that you mention it, I think that perhaps it would be a better
idea to have one list only for core Xen developers, and the other list
for Xen users, i.e. software vendors such as qubes-os.org, and perhaps
also for some providers.

Perhaps the users from the 2nd list could be informed only when a
specific vulnerability affects their product. As an example, say there
is a bug in qemu reported to the primary list. Now, a query can be sent
to all the members of the 2nd list asking whether any of them are
interested in getting info about the bug, something like:

"There's been a bug found in the qemu used by Xen X.X.X, if you believe
this might affect your product/infrastructure security, reply to this
message with YES, and you will get more info".

The catch is, of course, that if it could later be proven that some
vendors answered YES although their products were not affected (e.g. by
design), those vendors should somehow be banned from the list. E.g. if I
replied YES to the above query, you should know I had gone over to the
dark side of the force, as Qubes OS keeps qemu outside of the TCB, and
so we shouldn't care about such things :)

This is somewhat problematic with Xen providers, as opposed to software
vendors, as it might not be clear how Xen has been deployed by a
particular provider, and thus whether e.g. a bug in pygrub affects them
or not...
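
A rough sketch of how that query flow could work -- the member names
and shipped components below are invented for illustration, and this
is not proposal text:

    # Rough sketch (illustrative only) of the opt-in query flow
    # described above.  The entries are made-up examples, not real
    # pre-disclosure list members.
    members = {
        "vendor-a": {"xen", "qemu", "pygrub"},
        "qubes-os": {"xen"},  # keeps qemu outside its TCB
    }

    def run_query(affected_component, yes_replies):
        inform, review = [], []
        for name, ships in members.items():
            if name not in yes_replies:
                continue
            # A YES for a component the member is not known to ship
            # is exactly the "dark side" case described above.
            if affected_component in ships:
                inform.append(name)
            else:
                review.append(name)
        return {"inform": inform, "review": review}

    print(run_query("qemu", {"vendor-a", "qubes-os"}))
    # -> {'inform': ['vendor-a'], 'review': ['qubes-os']}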

> But the key question remains: would we allow a small service provider to
> join the service providers list? I think that we should.

It would probably be very easy for the proverbial Chinese to set up a
small service provider and join the list just to get prompt info about
bugs they can exploit in order to 0wn the proverbial AWS...

joanna.



* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
From: Stefano Stabellini @ 2012-07-13 18:15 UTC (permalink / raw)
  To: Joanna Rutkowska
  Cc: Jan Beulich, Stefano Stabellini, George Dunlap, Tim (Xen.org),
	xen-devel@lists.xen.org, Lars Kurth, Matt Wilson

On Thu, 12 Jul 2012, Joanna Rutkowska wrote:
> > But the key question remains: would we allow a small service provider to
> > join the service providers list? I think that we should.
> 
> It would probably be very easy for the proverbial Chinese to set up a
> small service provider and join the list just to get prompt info about
> bugs they can exploit in order to 0wn the proverbial AWS...

I think that there is no way around that: not only a small service
provider, but even a small Linux distro could easily be set up just to
get on the security list.

Even linux-distros has, as subscribers, distros that are developed
single-handedly by one person. It would certainly be possible, and
probably profitable, to pay a developer just to be on these lists.

And of course people in big companies can be corrupted and leak
information, even if the company is believed trustworthy.
The bigger the company, the more people will know about the
vulnerability, and the higher the chance that one of them will leak it.


* Re: Security discussion: Summary of proposals and criteria
From: Matt Wilson @ 2012-07-14  0:18 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Lars Kurth, Jan Beulich,
	xen-devel@lists.xen.org

On Fri, Jul 06, 2012 at 09:46:48AM -0700, George Dunlap wrote:
> We've had a number of viewpoints expressed, and now we need to figure
> out how to move forward in the discussion.

Hi George,

Thanks for summarizing the discussion thus far. I think that everyone
agrees that the end goal is to improve security for everyone. The
varied opinions show the careful thought that many people have put
into how best to accomplish that goal.

Our diverse experiences and backgrounds lead each of us to approaches
that I believe are fundamentally similar, but have nuanced
differences. It is a sign of a strong community that important issues
can be hashed out in a way that accommodates as many viewpoints as
possible, leading to a decision that unifies more than it alienates.
I'm hopeful that this will happen here.

> One thing we all seem to agree on is that with regard to the public
> disclosure and the wishes of the discloser:
> * In general, we should default to following the wishes of the discloser
> * We should have a framework available to advise the discloser of a
> reasonable embargo period if they don't have strong opinions of their
> own (many have listed the oCERT guidelines)
> * Disclosing early against the wishes of the discloser is possible if
> the discloser's request is unreasonable, but should only be considered
> in extreme situations.

I agree with the first two bullets. On the third, I think that it is a
very unusual circumstance for a discoverer to make a request that is
categorized as unreasonable by the security process. For example, I
think that it is reasonable for a discoverer to request that a
different group take over the coordination of an issue if it is
discovered that the vulnerability reaches beyond Xen projects. Should
this occur in the future, I think that Xen should honor the request
and work with the new coordinator.

I think that most Computer Security Incident Response Teams (CSIRTs)
that are likely to act as coordinator have default disclosure
timelines that are compatible with Xen's goal of providing timely
security updates.

On the other hand if a discoverer requests a disclosure date six
months in the future because they want to announce at a security
conference, it might be considered "unreasonable." I hope that this is
rare; I'd imagine that most security researchers prefer for issues to
be fixed sooner rather than later, just as Xen does.

> What next needs to be decided, it seems to me, is concerning
> pre-disclosure: Are we going to have a pre-disclosure list (to whom we
> send details before the public disclosure), and if so who is going to
> be on it?  Then we can start filling in the details.
> 
> What I propose is this.  I'll try to summarize the different options
> and angles discussed.  I will also try to synthesize the different
> arguments people have made and make my own recommendation.  Assuming
> that no creative new solutions are introduced in response, I think we
> should take an anonymous "straw poll", just to see what people think
> about the various options.  If that shows a strong consensus, then we
> should have a formal vote.  If it does not show consensus, then we'll
> at least be able to discuss the issue more constructively (by avoiding
> solutions no one is championing).

I think that we should attempt to come to some consensus before taking
a straw poll. There are too many options below, and I think that we
should be able to eliminate some of them through open discussion.
Also, I wonder why the straw poll should be anonymous. It seems that
we should be able to come to a quick lazy consensus on the actual
process text changes through email replies of -1 / 0 / +1. I think
that once we have new proposed text there should be a formal vote to
ratify it.

> So below is my summary of the options and the criteria that have been
> brought up so far.  It's fairly long, so I will give my own analysis
> and recommendation in a different mail, perhaps in a day or two.  I
> will also be working with Lars to form a straw poll where members of
> the list can informally express their preference, so we can see where
> we are in terms of agreement, sometime over the next day or two.
> 
> = Proposed options =
> 
> At a high level, I think we basically have six options to consider.
> 
> In all cases, I think that we can make a public announcement that
> there *is* a security vulnerability, and the date we expect to
> publicly disclose the fix, so that anyone who has not been disclosed
> to non-publicly can be prepared to apply it as soon as possible.

I've not seen this path taken by other open source projects, nor
commercial software vendors. I think that well written security
advisories that are distributed broadly (e.g., sent to bugtraq,
full-disclosure, oss-security, and regional CSIRTs if warranted) are
effective alert mechanisms for users. As an aside, JPCERT/CC published
some nice guidelines for software providers on how best to format
advisories:
  http://www.jpcert.or.jp/english/vh/2009/vuln_announce_manual_en2009.pdf
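
To make that concrete, here is a rough sketch of the kind of fields a
broadly-distributed advisory tends to carry. This is only my generic
illustration -- the identifier and field names below are invented, and
the real recommended structure is the one in the JPCERT/CC document:

# Hypothetical advisory skeleton; every field name here is invented
# for illustration, not taken from any published standard.
ADVISORY_TEMPLATE = """\
{ident}: {title}

ISSUE DESCRIPTION: {description}
IMPACT:            {impact}
VULNERABLE:        {affected}
MITIGATION:        {mitigation}
RESOLUTION:        {resolution}
"""

print(ADVISORY_TEMPLATE.format(
    ident="ADV-NNN",  # placeholder identifier
    title="short, specific summary of the flaw",
    description="what the bug is and how it arises",
    impact="what an attacker can achieve by exploiting it",
    affected="the releases and configurations affected",
    mitigation="workarounds available before patching",
    resolution="the patch or release that fixes the issue",
))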

> 1. No pre-disclosure list.  People are brought in only to help produce
> a fix.  The fix is released to everyone publicly when it's ready (or,
> if the discloser has asked for a longer embargo period, when that
> embargo period is up).
> 
> 2. Pre-disclosure list consists only of software vendors -- people who
> compile and ship binaries to others.  No updates may be given to any
> user until the embargo period is up.
> 
> 3. Pre-disclosure list consists of software vendors and some subset of
> privileged users (e.g., service providers above a certain size).
> Privileged users will be provided with patches at the same time as
> software vendors.  However, they will not be permitted to update their
> systems until the embargo period is up.
> 
> 4. Pre-disclosure list consists of software vendors and privileged
> users. Privileged users will be provided with patches at the same time
> as software vendors.  They will be permitted to update their systems
> at any time.  Software vendors will be permitted to send code updates
> to service providers who are on the pre-disclosure list.  (This is the
> status quo.)
> 
> 5. Pre-disclosure list is open to any organization (perhaps with some
> minimal entrance criteria, like having some form of incorporation, or
> having registered a domain name).  Members of the list may update
> their systems at any time; software vendors will be permitted to send
> code updates to anyone on the pre-disclosure list.
> 
> 6. Pre-disclosure list open to any organization, but no one permitted
> to roll out fixes until the embargo period is up.

I think that option 1 effectively abandons many of the benefits of a
coordinated/responsible security disclosure in an open source
context. I don't think that a patch from Xen.org provides an
immediately consumable remediation for a security issue that users
need.

Of the remaining options, to me it seems that a refinement of the
status quo is in order. I don't think that the current policy is
fundamentally flawed. If we approach the problem the same way as code,
wouldn't an iterative approach make sense here rather than a rewrite?

I'll say again that I think that the "software provider" versus
"service provider" distinction is artificial. Some software providers
will undoubtedly be using their software in both private and public
installations. Some service providers will be providing software as a
service.

> = Criteria =
> 
> I think there are several criteria we need to consider.
> 
> * _Risk of being exploited_.  The ultimate goal of any pre-disclosure
> process is to try to minimize the total risk for users of being
> exploited.  That said, any policy decision must take into account both
> the benefits in terms of risk reduction as well as the other costs of
> implementing the policy.

If one solution improves security for users more than another, but at
an unreasonable difference in cost, that trade-off deserves to be
called out.

> To simplify things a bit, I think there are two kinds of risk.
> Between the time a vulnerability has been publicly announced and the
> time a user patches their system, that user is "publicly vulnerable"
> -- running software that contains a public vulnerability.  However,
> the user was vulnerable before that; they were vulnerable from the
> time they deployed the system with the vulnerability.  I will call
> this "privately vulnerable" -- running software that contains a
> non-public vulnerability.
> 
> Now at first glance, it would seem obvious that being publicly
> vulnerable carries a much higher risk than being privately vulnerable.
> After all, to exploit a vulnerability you need to have malicious
> intent, the skills to leverage a vulnerability into an exploit, and
> you need to know about a vulnerability.  By announcing it publicly, a
> much greater number of people with malicious intent and the requisite
> skills will now know about the vulnerability; surely this increases
> the chances of someone being actually exploited.

Indeed, this is something that Bruce Schneier explored nearly twelve
years ago in his Crypto-Gram Newsletter on full disclosure:
  http://www.schneier.com/crypto-gram-0009.html#1

In all the time since Bruce wrote this article, I don't think that the
arguments have substantially changed. We end up rehashing the same
points, sometimes using different terminology.
"""
    The problem is that for the most part, the size and shape of the
    window of exposure is not under the control of any central
    authority. Not publishing a vulnerability is no guarantee that
    someone else won't publish it. Publishing a vulnerability is no
    guarantee that someone else won't write an exploit tool, and no
    guarantee that the vendor will fix it. Releasing a patch is no
    guarantee that a network administrator will actually install
    it. Trying to impose rules on such a chaotic system just doesn't
    work.
"""

However, one development during the past 12 years of arguing is the
idea of responsible/coordinated disclosure as a middle ground for
addressing vulnerabilities in a more controlled way. By and large I
think that coordinated disclosure is the best approach available, and
we should look to incorporate established best practices by other
organizations who have traveled this road before us.

> However, one should not under-estimate the risk of private
> vulnerability.  Black hats prize and actively look for vulnerabilities
> which have not yet been made public.  There is, in fact, a black
> market for such "0-day" exploits.  If your infrastructure is at all
> valuable, black hats have already been looking for the bug which makes
> you vulnerable; you have no way of knowing if they have found it yet
> or not.
> 
> In fact, one could make the argument that publicly announcing a
> vulnerability along with a fix makes the vulnerability _less_ valuable
> to black-hats.  Developing an exploit from a vulnerability requires a
> significant amount of effort; and you know that security-conscious
> service providers will be working as fast as possible to close the
> hole.  Why would you spend your time and energy for an exploit that's
> only going to be useful for a day or two at most?

I think that the only responsible approach is to assume that a
malicious actor will undoubtedly expend effort to take advantage of
any window of opportunity available to them, even if that window is
only minutes long.

> Ultimately the only way to say for sure would be to talk to people who
> know the black hat community well.  But we can conclude this: private
> vulnerability is a definite risk which needs to be considered when
> minimizing total risk.
> 
> Another thing to consider is how the nature of the pre-disclosure and
> public disclosure affect the risk.  For pre-disclosure, the more
> individuals have access to pre-disclosure information, the higher the
> risk that the information will end up in the hands of a black-hat.
> Having a list anyone can sign up to, for instance, may be very little
> more secure than a quiet public disclosure.

Right. This goes back to two points Bruce made back in 2000 on
attempts to reduce the window of exposure for a vulnerability: 1)
limit knowledge of the vulnerability and 2) limit the duration of the
window. He was speaking more to the secrecy approach versus full
disclosure, but I think that the points still apply here. His
conclusion is also as correct today as it was in 2000: "the debate has
no solution because there is no one solution."
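
To put those two levers in concrete terms, here is a toy model of my
own -- nothing from the article itself, and the numbers below are
invented purely for illustration:

# Toy model: "window of exposure" as attacker knowledge of the bug,
# integrated over the time systems remain unpatched.
def window_of_exposure(knowledge, unpatched):
    # knowledge[t] and unpatched[t] are per-day fractions in [0, 1].
    return sum(k * u for k, u in zip(knowledge, unpatched))

# Lever 1 (limit knowledge) keeps k small during an embargo; lever 2
# (limit duration) drives u down quickly once the fix is out.
embargoed = window_of_exposure([0.1, 0.1, 1.0], [1.0, 0.9, 0.3])  # ~0.49
open_bug  = window_of_exposure([1.0, 1.0, 1.0], [1.0, 0.9, 0.3])  # ~2.2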

> For public disclosure, the nature of the disclosure may affect the
> risk, or the perception of risk, materially.  If the fix is simply
> checked into a public repository without fanfare or comment, it may
> not raise the risk of public vulnerability significantly; while if the
> fix is announced in press releases and on blogs, the _perception_ of
> the risk will undoubtedly increase.

I do not think that silent check-ins provide adequate notice for users
that a security vulnerability has been addressed. You're right that
the perception of risk may increase, and to me that seems a reasonable
price for providing clear guidance to consumers of Xen projects.

> * _Fairness_.  Xen is a community project and relies on the good-will
> of the community to continue.  Giving one sub-group of our users an
> advantage over another sub-group will be costly in terms of community
> good will.  Furthermore, depending on what kind of sub-group we have
> and how it's run, it may well be considered anti-competitive and
> illegal in some jurisdictions.  Some might say we should never
> consider such a thing.  At the very least, doing so should be very
> carefully considered to make sure the risk is worth the benefit.
>
> The majority of this document will focus on the impact of the policy
> on actual users.  However, I think it is also legitimate to consider
> the impact of the policies on software vendors as well.  Regardless of
> the actual risk to users, the _perception_ of risk may have a
> significant impact on the success of some vendors over others.
> 
> It is in fact very difficult to achieve perfect fairness between all
> kinds of parties.  However, as much as possible, unfairness should be
> based on decisions that the party themselves have a reasonable choice
> about.  For instance, a slight advantage for those compiling their own
> hypervisor directly from xen.org rather than using a software vendor
> might be tolerable because 1) those receiving from software vendors
> may have other advantages not available to those consuming directly,
> and 2) anyone can switch to pulling directly from xen.org if they
> wish.

I think that any concerns about fairness should be raised now by the
parties that feel they are impacted, as part of the open discussion,
rather than as speculation.
 
> * _Administrative overhead_.  This comprises a number of different
> aspects: for example, how hard is it to come up with a precise and
> "fair" policy?  How much effort will it be for xen.org to determine
> whether or not someone should be on the list?

The transparent list request process for the distros email list seems
low impact.

> Another question has to do with robustness of enforcement.  If there
> is a strong incentive for people on the list to break the rules
> ("moral hazard"), then we need to import a whole legal framework: how
> do we detect breaking the rules?  Who decides that the rules have
> indeed been broken, and decides the consequences?  Is there an appeals
> process?  At what point is someone who has broken the rules in the
> past allowed back on the list?  What are the legal, project, and
> community implications of having to do this, and so on?  All of this
> will impose a much heavier burden on not only this discussion, but
> also on the xen.org security team.

I think that before we become too concerned about rules and
enforcement, we should have a better sense of what rules are in the
best interest of the largest population of users possible. I don't
feel that we've discussed this enough.

You're right that there's likely no solution that everyone is going to
feel is "fair." But we should be able to come up with a proposal that
everyone agrees improves security for the most users.

This is a trade-off that is made constantly in coordinated response
activities. If there's a commonly embedded bit of code, for example
zlib, that many software vendors ship in their products, it will be
impossible to do a coordinated disclosure among every single one of
them. Typically a response is coordinated among as many parties as
can reasonably be handled while preserving some confidence that leaks
will be minimized.

For a recent example of this, see the LWN article on the DES-based
crypt() coordination posted last month: http://lwn.net/Articles/500444/

> (Disclaimer: I am not a lawyer.) It should be noted that because of
> the nature of the GPL, we cannot impose additional contractual
> limitations on the re-distribution of a GPL'ed patch, and thus we
> cannot seek legal redress for those who re-distribute such a patch (or
> resulting binaries) in a way that violates the pre-disclosure policy.
> But for the purposes of this discussion, I am going to assume that we
> can, however, choose to remove them from the pre-disclosure list as a
> result.

I'm also not a lawyer. I don't see any reason why removing someone
from a mailing list would be prohibited by the GPL. But again, before
we decide that such rules should be in place, we need to consider
whether they are in the best interest of improving security.

> I think those cover the main points that have been brought up in the
> discussion.  Please feel free to give feedback.  Next week probably I
> will attempt to give an analysis, applying these criteria to the
> different options.  I have not yet come up with what I think is a
> satisfactory conclusion.

Thanks again for writing this up. I'm looking forward to your analysis.

By the way, I'll be at OSCON next week. I'd love to meet up and talk
in person.

Matt


* Re: Security discussion: Summary of proposals and criteria
  2012-07-14  0:18 ` Security discussion: Summary of proposals and criteria Matt Wilson
@ 2012-07-16 17:56   ` George Dunlap
  0 siblings, 0 replies; 18+ messages in thread
From: George Dunlap @ 2012-07-16 17:56 UTC (permalink / raw)
  To: Matt Wilson
  Cc: Lars Kurth, xen-devel@lists.xen.org, Jan Beulich,
	Stefano Stabellini

On Fri, Jul 13, 2012 at 5:18 PM, Matt Wilson <msw@amazon.com> wrote:
>> One thing we all seem to agree on is that with regard to the public
>> disclosure and the wishes of the discloser:
>> * In general, we should default to following the wishes of the discloser
>> * We should have a framework available to advise the discloser of a
>> reasonable embargo period if they don't have strong opinions of their
>> own (many have listed the oCERT guidelines)
>> * Disclosing early against the wishes of the discloser is possible if
>> the discloser's request is unreasonable, but should only be considered
>> in extreme situations.
>
> I agree with the first two bullets. On the third, I think that it is a
> very unusual circumstance for a discoverer to make a request that is
> categorized as unreasonable by the security process. For example, I
> think that it is reasonable for a discoverer to request that a
> different group take over the coordination of an issue if it is
> discovered that the vulnerability reaches beyond Xen projects. Should
> this occur in the future, I think that Xen should honor the request
> and work with the new coordinator.

It sounds to me like you fundamentally agree with the concept, but
just want to be clear about exactly what "extreme" and "unreasonable" mean.

WRT this particular situation: Realizing that the vulnerability
extended beyond the Xen community and wanting to pass it on to someone
else to coordinate all related projects is certainly an understandable
thing to do.  However, as I understand it, that's not actually what
happened in this case.  Originally the release date for Xen was set
for a certain date; the discloser asked for the date to be changed within
12 hours of the original disclosure date.  And the reason for the
change was not because they wanted to bring in other projects, but
because one particular member of the list mounted an intense campaign
to have it extended.  The reason that member mounted the campaign was
not because they were concerned with other projects, but only because
they themselves had (as I understand it) not paid attention to the
initial announcement (as an organization) and thus didn't have enough
time to patch their systems before the end of the embargo period.  If
that organization had been paying attention, they would not have
mounted the campaign, and the Xen vulnerability would have been
disclosed without reference to any other projects.

(Also, if that organization had been paying attention, we would not be
having this discussion right now, as no fault would have been found
with the status quo.)

> I think that we should attempt to come to some consensus before taking
> a straw poll. There are too many options below, and I think that we
> should be able to eliminate some of them through open discussion.

Well, it's clear that we do not have consensus.  What I had in mind
was actually a particular kind of poll designed to help focus the
discussion on points that are actually important.  It's called
"Identify the Champion", and I've seen it used really well on program
committee meetings.  (http://scg.unibe.ch/download/champion/)

The basic idea is this:  For each of the items below, people respond
in one of 4 ways:
* This is an excellent idea, and I will argue for it.
* I am happy with this idea, but I will not argue for it.
* I am unhappy with this idea, but I will not argue against it.
* This is a terrible idea, and I will argue against it.

If we find that there is one option no one will argue against, then we
can take a real vote on that option.  If not, we can discard ideas no
one is arguing for with no further discussion, and focus on the areas
where there is disagreement.
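
Concretely, the tallying might look something like this -- the
single-letter response codes and the helper below are just shorthand
I'm inventing for illustration:

# Hypothetical tally of "Identify the Champion" responses:
#   A = excellent idea, will argue for    a = happy, won't argue
#   d = unhappy, won't argue against      D = terrible, will argue against
from collections import Counter

def triage(responses):
    vote_now, discuss, drop = [], [], []
    for option, codes in responses.items():
        tally = Counter(codes)
        if tally["A"] >= 1 and tally["D"] == 0:
            vote_now.append(option)  # a champion, and no opposition
        elif tally["A"] == 0:
            drop.append(option)      # no one is arguing for it
        else:
            discuss.append(option)   # contested; focus discussion here
    return vote_now, discuss, drop

# e.g. triage({"option 4": ["A", "a", "d"], "option 1": ["d", "D"]})
# returns (["option 4"], [], ["option 1"])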

Does that make sense?

>> In all cases, I think that we can make a public announcement that
>> there *is* a security vulnerability, and the date we expect to
>> publicly disclose the fix, so that anyone who has not been disclosed
>> to non-publicly can be prepared to apply it as soon as possible.
>
> I've not seen this path taken by other open source projects, nor
> commercial software vendors. I think that well written security
>> advisories that are distributed broadly (e.g., sent to bugtraq,
> full-disclosure, oss-security, and regional CSIRTs if warranted) are
> effective alert mechanisms for users. As an aside, JPCERT/CC published
> some nice guidelines for software providers on how best to format
> advisories:
>   http://www.jpcert.or.jp/english/vh/2009/vuln_announce_manual_en2009.pdf

OK -- well this is a detail I think we can work out after we address
the main point.

> I think that option 1 effectively abandons many of the benefits of a
> coordinated/responsible security disclosure in an open source
> context. I don't think that a patch from Xen.org provides an
> immediately consumable remediation for a security issue that users
> need.
>
> Of the remaining options, to me it seems that a refinement of the
> status quo is in order. I don't think that the current policy is
> fundamentally flawed. If we approach the problem the same way as code,
> wouldn't an iterative approach make sense here rather than a rewrite?
>
> I'll say again that I think that the "software provider" versus
> "service provider" distinction is artificial. Some software providers
> will undoubtedly be using their software in both private and public
> installations. Some service providers will be providing software as a
> service.

Well, no, it's not at all artificial.  There is a *very big*
difference between software providers and service providers: namely,
who their customers are and what they want.  That has a very large
impact on their incentives, as well as on their actual costs.

What you want is not to blur the distinction between "software
provider" and "service provider"; what you want is to make an
additional distinction between users: "vendor-supplied" users, who
simply take binaries directly from a software provider, and
"self-supplied" users, who do their own development based on upstream
sources.

So while it's true that "software providers" and "self-supplied
users" both do development, the effect of the disclosure on each is
very different.  I had intended to make this clear in my analysis; but
there are so many factors now that I'm not sure the analysis won't
collapse under its own weight. :-)

> However, one development during the past 12 years of arguing is the
> idea of responsible/coordinated disclosure as a middle ground for
> addressing vulnerabilities in a more controlled way. By and large I
> think that coordinated disclosure is the best approach available, and
> we should look to incorporate established best practices by other
> organizations who have traveled this road before us.

Well one very large organization (Linux) has decided not to have any
pre-disclosure; so no matter what we choose, there are precedents. :-)

> I think that any concerns about fairness should be raised now by the
> parties that feel they are impacted, as part of the open discussion,
> rather than as speculation.

Well, many already have been raised, both on the list and in
discussions with individuals.

>> Another question has to do with robustness of enforcement.  If there
>> is a strong incentive for people on the list to break the rules
>> ("moral hazard"), then we need to import a whole legal framework: how
>> do we detect breaking the rules?  Who decides that the rules have
>> indeed been broken, and decides the consequences?  Is there an appeals
>> process?  At what point is someone who has broken the rules in the
>> past allowed back on the list?  What are the legal, project, and
>> community implications of having to do this, and so on?  All of this
>> will impose a much heavier burden on not only this discussion, but
>> also on the xen.org security team.
>
> I think that before we become too concerned about rules and
> enforcement, we should have a better sense of what rules are in the
> best interest of the largest population of users possible. I don't
> feel that we've discussed this enough.
>
> You're right that there's likely no solution that everyone is going to
> feel is "fair." But we should be able to come up with a proposal that
> everyone agrees improves security for the most users.
>
> This is a trade-off that is made constantly in coordinated response
> activities. If there's a commonly embedded bit of code, for example
> zlib, that many software vendors ship in their products, it will be
> impossible to do a coordinated disclosure among every single one of
> them. Typically a response is coordinated among as many parties as
> can reasonably be handled while preserving some confidence that leaks
> will be minimized.
>
> For a recent example of this, see the LWN article on the DES-based
> crypt() coordination posted last month: http://lwn.net/Articles/500444/
>
>> I think those cover the main points that have been brought up in the
>> discussion.  Please feel free to give feedback.  Next week probably I
>> will attempt to give an analysis, applying these criteria to the
>> different options.  I have not yet come up with what I think is a
>> satisfactory conclusion.
>
> Thanks again for writing this up. I'm looking forward to your analysis.
>
> By the way, I'll be at OSCON next week. I'd love to meet up and talk
> in person.

That would be great. :-)

 -George


* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
  2012-07-06 16:46 Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217) George Dunlap
  2012-07-08  7:30 ` Joanna Rutkowska
  2012-07-14  0:18 ` Security discussion: Summary of proposals and criteria Matt Wilson
@ 2012-08-03 17:31 ` George Dunlap
  2012-08-06  6:55   ` Jan Beulich
  2 siblings, 1 reply; 18+ messages in thread
From: George Dunlap @ 2012-08-03 17:31 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Lars Kurth, xen-devel@lists.xen.org, Matt Wilson,
	Stefano Stabellini

I said before that I was going to give an analysis, and I had a very
detailed one written out.  The result of that analysis looked very
clear-cut.  But a couple of new arguments have come to light, and they
make the whole thing much less clear to me.  So what I'm going to do
instead is describe the arguments that I think are pertinent, and then
my own recommendation.

Next week I plan on sending out a poll.  The poll won't be structured
like a vote; rather, the purpose of the poll is to help move the
discussion forwards by identifying where the sentiment lies.  The poll
will ask you to rate each option with one of the following selections:
* This is an excellent idea, and I will argue for it.
* I am happy with this idea, but I will not argue for it.
* I am not happy with this idea, but I will not argue against it.
* This is a terrible idea, and I will argue against it.

If we have some options which have at least one "argue for" and no
"argue against"s, then we can simply take a formal vote and move on to
the smaller points in the discussion.  Otherwise, we can eliminate
options for which there are no "argue for"s, and focus on the points
where there are both "argue for"s and "argue against"s.

Back to the discussion.  There are two additional points I want to bring out.

First, as Joanna and others have pointed out, closing a vulnerability
will not close a back-door that has been installed while the user was
vulnerable.  So it may well be worth an attacker's time to develop an
exploit based on a bug report.

Secondly, my original discussion had assumed that the risk during
"public vulnerability" for all users was the same.  Unfortunately, I
don't think that's true.  Some targets may be more valuable than
others.  In particular, the value of attacking a hosting provider may
be correlated to the value to an attacker of the aggregate of all of
their customers.  Thus it is simply more likely for a large provider
to be the target of an attack than a small provider.
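
As a back-of-the-envelope illustration of that correlation -- all of
the numbers here are invented:

# Crude proxy for what compromising a provider is worth to an
# attacker over the window in which it remains vulnerable.
def attack_value(customers, value_per_customer, days_exposed):
    return customers * value_per_customer * days_exposed

large = attack_value(50000, 1, 2)  # one large hosting provider
small = attack_value(500, 1, 2)    # one small hosting provider
# large == 100 * small: under this (admittedly crude) model, the
# large provider is by far the more attractive single target.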

Thus public vulnerability for a large provider may be very risky
indeed; and I tend to agree with the idea that large providers, and
other large potential users (such as large governments, &c) should be
given some pre-disclosure to minimize this risk.

However, as has been previously mentioned, being on the pre-disclosure
list is a very large advantage, and is unfair towards the majority of
users, who are also at significant risk during their own public
vulnerability period.

So right now I think the best option is to have a pre-disclosure list
that is fairly easy to join: if the security team has reason to
believe you are a hosting company, they can put you on the list.

Although I am unhappy with the idea of only large providers being on
the list, I still think it's a better option than giving them no
pre-disclosure.

 -George

On Fri, Jul 6, 2012 at 5:46 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> We've had a number of viewpoints expressed, and now we need to figure
> out how to move forward in the discussion.
>
> One thing we all seem to agree on is that with regard to the public
> disclosure and the wishes of the discloser:
> * In general, we should default to following the wishes of the discloser
> * We should have a framework available to advise the discloser of a
> reasonable embargo period if they don't have strong opinions of their
> own (many have listed the oCERT guidelines)
> * Disclosing early against the wishes of the discloser is possible if
> the discloser's request is unreasonable, but should only be considered
> in extreme situations.
>
> What next needs to be decided, it seems to me, is concerning
> pre-disclosure: Are we going to have a pre-disclosure list (to whom we
> send details before the public disclosure), and if so who is going to
> be on it?  Then we can start filling in the details.
>
> What I propose is this.  I'll try to summarize the different options
> and angles discussed.  I will also try to synthesize the different
> arguments people have made and make my own recommendation.  Assuming
> that no creative new solutions are introduced in response, I think we
> should take an anonymous "straw poll", just to see what people think
> about the various options.  If that shows a strong consensus, then we
> should have a formal vote.  If it does not show consensus, then we'll
> at least be able to discuss the issue more constructively (by avoiding
> solutions no one is championing).
>
> So below is my summary of the options and the criteria that have been
> brought up so far.  It's fairly long, so I will give my own analysis
> and recommendation in a different mail, perhaps in a day or two.  I
> will also be working with Lars to form a straw poll where members of
> the list can informally express their preference, so we can see where
> we are in terms of agreement, sometime over the next day or two.
>
> = Proposed options =
>
> At a high level, I think we basically have six options to consider.
>
> In all cases, I think that we can make a public announcement that
> there *is* a security vulnerability, and the date we expect to
> publicly disclose the fix, so that anyone who has not been disclosed
> to non-publicly can be prepared to apply it as soon as possible.
>
> 1. No pre-disclosure list.  People are brought in only to help produce
> a fix.  The fix is released to everyone publicly when it's ready (or,
> if the discloser has asked for a longer embargo period, when that
> embargo period is up).
>
> 2. Pre-disclosure list consists only of software vendors -- people who
> compile and ship binaries to others.  No updates may be given to any
> user until the embargo period is up.
>
> 3. Pre-disclosure list consists of software vendors and some subset of
> privileged users (e.g., service providers above a certain size).
> Privileged users will be provided with patches at the same time as
> software vendors.  However, they will not be permitted to update their
> systems until the embargo period is up.
>
> 4. Pre-disclosure list consists of software vendors and privileged
> users. Privileged users will be provided with patches at the same time
> as software vendors.  They will be permitted to update their systems
> at any time.  Software vendors will be permitted to send code updates
> to service providers who are on the pre-disclosure list.  (This is the
> status quo.)
>
> 5. Pre-disclosure list is open to any organization (perhaps with some
> minimal entrance criteria, like having some form of incorporation, or
> having registered a domain name).  Members of the list may update
> their systems at any time; software vendors will be permitted to send
> code updates to anyone on the pre-disclosure list.
>
> 6. Pre-disclosure list open to any organization, but no one permitted
> to roll out fixes until the embargo period is up.
>
> = Criteria =
>
> I think there are several criteria we need to consider.
>
> * _Risk of being exploited_.  The ultimate goal of any pre-disclosure
> process is to try to minimize the total risk for users of being
> exploited.  That said, any policy decision must take into account both
> the benefits in terms of risk reduction as well as the other costs of
> implementing the policy.
>
> To simplify things a bit, I think there are two kinds of risk.
> Between the time a vulnerability has been publicly announced and the
> time a user patches their system, that user is "publicly vulnerable"
> -- running software that contains a public vulnerability.  However,
> the user was vulnerable before that; they were vulnerable from the
> time they deployed the system with the vulnerability.  I will call
> this "privately vulnerable" -- running software that contains a
> non-public vulnerability.
>
> Now at first glance, it would seem obvious that being publicly
> vulnerable carries a much higher risk than being privately vulnerable.
> After all, to exploit a vulnerability you need to have malicious
> intent, the skills to leverage a vulnerability into an exploit, and
> you need to know about a vulnerability.  By announcing it publicly, a
> much greater number of people with malicious intent and the requisite
> skills will now know about the vulnerability; surely this increases
> the chances of someone being actually exploited.
>
> However, one should not under-estimate the risk of private
> vulnerability.  Black hats prize and actively look for vulnerabilities
> which have not yet been made public.  There is, in fact, a black
> market for such "0-day" exploits.  If your infrastructure is at all
> valuable, black hats have already been looking for the bug which makes
> you vulnerable; you have no way of knowing if they have found it yet
> or not.
>
> In fact, one could make the argument that publicly announcing a
> vulnerability along with a fix makes the vulnerability _less_ valuable
> to black-hats.  Developing an exploit from a vulnerability requires a
> significant amount of effort; and you know that security-conscious
> service providers will be working as fast as possible to close the
> hole.  Why would you spend your time and energy for an exploit that's
> only going to be useful for a day or two at most?
>
> Ultimately the only way to say for sure would be to talk to people who
> know the black hat community well.  But we can conclude this: private
> vulnerability is a definite risk which needs to be considered when
> minimizing total risk.
>
> Another thing to consider is how the nature of the pre-disclosure and
> public disclosure affect the risk.  For pre-disclosure, the more
> individuals have access to pre-disclosure information, the higher the
> risk that the information will end up in the hands of a black-hat.
> Having a list anyone can sign up to, for instance, may be very little
> more secure than a quiet public disclosure.
>
> For public disclosure, the nature of the disclosure may affect the
> risk, or the perception of risk, materially.  If the fix is simply
> checked into a public repository without fanfare or comment, it may
> not raise the risk of public vulnerability significantly; while if the
> fix is announced in press releases and on blogs, the _perception_ of
> the risk will undoubtedly increase.
>
> * _Fairness_.  Xen is a community project and relies on the good-will
> of the community to continue.  Giving one sub-group of our users an
> advantage over another sub-group will be costly in terms of community
> good will.  Furthermore, depending on what kind of sub-group we have
> and how it's run, it may well be considered anti-competitive and
> illegal in some jurisdictions.  Some might say we should never
> consider such a thing.  At the very least, doing so should be very
> carefully considered to make sure the risk is worth the benefit.
>
> The majority of this document will focus on the impact of the policy
> on actual users.  However, I think it is also legitimate to consider
> the impact of the policies on software vendors as well.  Regardless of
> the actual risk to users, the _perception_ of risk may have a
> significant impact on the success of some vendors over others.
>
> It is in fact very difficult to achieve perfect fairness between all
> kinds of parties.  However, as much as possible, unfairness should be
> based on decisions that the party themselves have a reasonable choice
> about.  For instance, a slight advantage for those compiling their own
> hypervisor directly from xen.org rather than using a software vendor
> might be tolerable because 1) those receiving from software vendors
> may have other advantages not available to those consuming directly,
> and 2) anyone can switch to pulling directly from xen.org if they
> wish.
>
> * _Administrative overhead_.  This comprises a number of different
> aspects: for example, how hard is it to come up with a precise and
> "fair" policy?  How much effort will it be for xen.org to determine
> whether or not someone should be on the list?
>
> Another question has to do with robustness of enforcement.  If there
> is a strong incentive for people on the list to break the rules
> ("moral hazard"), then we need to import a whole legal framework: how
> do we detect breaking the rules?  Who decides that the rules have
> indeed been broken, and decides the consequences?  Is there an appeals
> process?  At what point is someone who has broken the rules in the
> past allowed back on the list?  What are the legal, project, and
> community implications of having to do this, and so on?  All of this
> will impose a much heavier burden on not only this discussion, but
> also on the xen.org security team.
>
> (Disclaimer: I am not a lawyer.) It should be noted that because of
> the nature of the GPL, we cannot impose additional contractual
> limitations on the re-distribution of a GPL'ed patch, and thus we
> cannot seek legal redress for those who re-distribute such a patch (or
> resulting binaries) in a way that violates the pre-disclosure policy.
> But for the purposes of this discussion, I am going to assume that we
> can, however, choose to remove them from the pre-disclosure list as a
> result.
>
> I think those cover the main points that have been brought up in the
> discussion.  Please feel free to give feedback.  Next week probably I
> will attempt to give an analysis, applying these criteria to the
> different options.  I have not yet come up with what I think is a
> satisfactory conclusion.
>
>  -George
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)
  2012-08-03 17:31 ` Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217) George Dunlap
@ 2012-08-06  6:55   ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2012-08-06  6:55 UTC (permalink / raw)
  To: George Dunlap
  Cc: xen-devel@lists.xen.org, Lars Kurth, Matt Wilson,
	Stefano Stabellini

>>> On 03.08.12 at 19:31, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> Secondly, my original discussion had assumed that the risk during
> "public vulnerability" for all users was the same.  Unfortunately, I
> don't think that's true.  Some targets may be more valuable than
> others.  In particular, the value of attacking a hosting provider may
> be correlated to the value to an attacker of the aggregate of all of
> their customers.  Thus it is simply more likely for a large provider
> to be the targt of an attack than a small provider.

Not necessarily - if the same attack works universally (or can
be made to work with very little additional effort), using it against
many smaller providers may be just as worthwhile to the attacker.
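
In the same invented numbers as the sketch in George's mail above: one
exploit reused against 100 small providers of 500 customers each is
worth as much as the single 50,000-customer provider.

# 100 small providers, 500 customers each, 2 days of exposure:
aggregate_small = 100 * (500 * 1 * 2)  # == 100000
one_large       = 50000 * 1 * 2        # == 100000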

Jan


end of thread

Thread overview: 18+ messages
2012-07-06 16:46 Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217) George Dunlap
2012-07-08  7:30 ` Joanna Rutkowska
2012-07-09  9:23   ` George Dunlap
2012-07-09 11:31     ` Joanna Rutkowska
2012-07-09 13:25       ` Joanna Rutkowska
2012-07-09 13:35         ` Keir Fraser
2012-07-09 13:40           ` Joanna Rutkowska
2012-07-09 13:51       ` Tim Deegan
2012-07-09 14:08         ` Joanna Rutkowska
2012-07-12 16:34           ` Stefano Stabellini
2012-07-12 16:47             ` Joanna Rutkowska
2012-07-12 17:00               ` Stefano Stabellini
2012-07-12 17:22                 ` Joanna Rutkowska
2012-07-13 18:15                   ` Stefano Stabellini
2012-07-14  0:18 ` Security discussion: Summary of proposals and criteria Matt Wilson
2012-07-16 17:56   ` George Dunlap
2012-08-03 17:31 ` Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217) George Dunlap
2012-08-06  6:55   ` Jan Beulich
