* DRM security flaws and security levels.
@ 2014-04-11 12:42 Thomas Hellstrom
2014-04-11 20:31 ` David Herrmann
2014-04-14 12:41 ` One Thousand Gnomes
0 siblings, 2 replies; 7+ messages in thread
From: Thomas Hellstrom @ 2014-04-11 12:42 UTC (permalink / raw)
To: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Hi,
as was discussed a while ago, there are some serious security flaws in
the current drm master model that allow a user with previous or current
access to an X server terminal to access the GPU memory of the active X
server without being authenticated to the X server, and thereby to
access other users' secret information.
Scenario 1a)
User 1 uses the X server, then locks the screen. User 2 then VT
switches, perhaps using fast user-switching, opens a DRM connection and
authenticates with itself. It then starts to guess GEM names used by
the switched-away X server and opens the corresponding objects, then
mmaps those objects and dumps their data.
Scenario 1b)
As in 1a), but instead of mmapping the GEM objects, the user crafts a
command buffer that dumps all GPU memory to a local buffer and copies
it out.
Scenario 2
User 1 logs in on X. Starts a daemon that authenticates with X. Then
logs out. User 2 logs in. User 1's daemon can now access data in a
similar fashion to what's done in 1a and 1b.
I don't think any driver is immune to all of these scenarios. I think
all GEM drivers are vulnerable to 1a) and to scenario 2 in its a) form,
but that could easily be fixed by only allowing GEM open of shared
buffers from the same master. I'm not sure about 1b) and the b) form of
2), but according to the driver developers, radeon and nouveau should
be safe. vmwgfx should be safe against 1) but not currently against 2),
because the DRM fd is kept open across X server generations.
I think these flaws can be fixed in all modern drivers. For the a) type
scenarios, refuse open of shared buffers that belong to other masters,
and on new X server generations release the old master completely,
either by closing the FD or via a special ioctl that releases master
instead of merely dropping it.
For the b) type scenarios, either provide a command verifier or per-fd
virtual GPU memory, or, for simpler hardware, throw out all GPU memory
on master drop and block ioctls requiring authentication until the
master becomes active again.
In any case, before enabling render nodes for drm we discussed a sysfs
attribute stating the security level of the device, so that udev can
set up permissions accordingly. My suggestion is:
-1: The driver allows an authenticated client to craft command streams
that could access any part of system memory. These drivers should be
kept in staging until they are fixed.
0: Drivers that are vulnerable to any of the above scenarios.
1: Drivers that are immune to all of the above scenarios but allow any
authenticated client with an *active* master to access all GPU memory.
Any enabled render nodes will be insecure, while primary nodes are
secure.
2: Drivers that are immune to all of the above scenarios and can
protect clients from accessing each other's GPU memory. Render nodes
will be secure.
Thoughts?
Thomas
* Re: DRM security flaws and security levels.
2014-04-11 12:42 DRM security flaws and security levels Thomas Hellstrom
@ 2014-04-11 20:31 ` David Herrmann
2014-04-11 21:15 ` Thomas Hellstrom
2014-04-14 12:41 ` One Thousand Gnomes
1 sibling, 1 reply; 7+ messages in thread
From: David Herrmann @ 2014-04-11 20:31 UTC (permalink / raw)
To: Thomas Hellstrom
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Hi
On Fri, Apr 11, 2014 at 2:42 PM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
> as was discussed a while ago, there are some serious security flaws with
> the current drm master model, that allows a
> user that had previous access or current access to an X server terminal
> to access the GPU memory of the active X server, without being
> authenticated to the X server and thereby also access other user's
> secret information
1a) and 1b) are moot if you disallow primary-node access and require
clients to use render nodes with dma-buf. There are no GEM names on
render nodes, so there is no way to access other clients' buffers
(assuming the GPU does command-stream checking and/or has a per-process
VM).
2) There is no DRM-generic data other than buffers that is global. So
imho this is a driver-specific issue.
So I cannot see why this is a DRM issue. The only leaks I see are
legacy interfaces and driver-specific interfaces. The first can be
disabled via chmod() for clients, and the second is something driver
authors should fix.
Thanks
David
* Re: DRM security flaws and security levels.
2014-04-11 20:31 ` David Herrmann
@ 2014-04-11 21:15 ` Thomas Hellstrom
2014-04-11 22:05 ` Rob Clark
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Hellstrom @ 2014-04-11 21:15 UTC (permalink / raw)
To: David Herrmann
Cc: Thomas Hellstrom, linux-kernel@vger.kernel.org,
dri-devel@lists.freedesktop.org
On 04/11/2014 10:31 PM, David Herrmann wrote:
> Hi
>
> On Fri, Apr 11, 2014 at 2:42 PM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
>> as was discussed a while ago, there are some serious security flaws with
>> the current drm master model, that allows a
>> user that had previous access or current access to an X server terminal
>> to access the GPU memory of the active X server, without being
>> authenticated to the X server and thereby also access other user's
>> secret information
> 1a) and 1b) are moot if you disallow primary-node access but require
> clients to use render-nodes with dma-buf. There're no gem-names on
> render-nodes so no way to access other buffers (assuming the GPU does
> command-stream checking and/or VM).
Disallowing primary node access will break older user-space drivers
and non-root EGL clients. I'm not sure that's OK, even if the change is
done from user-space. A simple GEM fix would also do the trick.
>
> 2) There is no DRM-generic data other than buffers that is global. So
> imho this is a driver-specific issue.
>
> So I cannot see why this is a DRM issue. The only leaks I see are
> legacy interfaces and driver-specific interfaces. The first can be
> disabled via chmod() for clients, and the second is something driver
> authors should fix.
Yeah, but some driver authors can't or won't fix the drivers w.r.t.
this, hence the security levels.
Thanks,
/Thomas
>
> Thanks
> David
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
* Re: DRM security flaws and security levels.
2014-04-11 21:15 ` Thomas Hellstrom
@ 2014-04-11 22:05 ` Rob Clark
0 siblings, 0 replies; 7+ messages in thread
From: Rob Clark @ 2014-04-11 22:05 UTC (permalink / raw)
To: Thomas Hellstrom
Cc: David Herrmann, linux-kernel@vger.kernel.org,
dri-devel@lists.freedesktop.org
On Fri, Apr 11, 2014 at 5:15 PM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
> On 04/11/2014 10:31 PM, David Herrmann wrote:
>> Hi
>>
>> On Fri, Apr 11, 2014 at 2:42 PM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
>>> as was discussed a while ago, there are some serious security flaws with
>>> the current drm master model, that allows a
>>> user that had previous access or current access to an X server terminal
>>> to access the GPU memory of the active X server, without being
>>> authenticated to the X server and thereby also access other user's
>>> secret information
>> 1a) and 1b) are moot if you disallow primary-node access but require
>> clients to use render-nodes with dma-buf. There're no gem-names on
>> render-nodes so no way to access other buffers (assuming the GPU does
>> command-stream checking and/or VM).
>
> Disallowing primary node access will break older user-space drivers and
> non-root
> EGL clients. I'm not sure that's OK, even if the change is done from
> user-space.
> A simple gem fix would also do the trick.
>
>>
>> 2) There is no DRM-generic data other than buffers that is global. So
>> imho this is a driver-specific issue.
>>
>> So I cannot see why this is a DRM issue. The only leaks I see are
>> legacy interfaces and driver-specific interfaces. The first can be
>> disabled via chmod() for clients, and the second is something driver
>> authors should fix.
>
> Yeah, but some driver authors can't or won't fix the drivers w r t this,
> hence the security levels.
fwiw, I do think we want security level reporting for drivers that
don't have per-process pagetables (either the hw doesn't support, or
simply just not implemented yet) to avoid giving a false sense of
security with rendernodes. It might be useful to even be able to
request a security level.. ie. some hw might be able to support
process isolation of gpu buffers, but at a performance penalty.. Joe
Gamer might be ok with the tradeoff in return for moar fps. Ideally
you could request on a per process basis (via some sort of egl/glx
extension) to firewall off, say, your online banking session.
note, scenario 1a is, I think, only an issue for shared buffers (ie.
ones that have a flink name).. so, ok, another process can see the
video game you were playing. Ok, that is not quite true, since
browsers use gpu accel (but maybe they want to decide to
enable/disable that, at least for certain sites (like your online
banking) based on the max-security-level of the driver). And gl
compositing window managers.
I doubt any of that is worse than any closed src gpu driver. But
either way, for really classified/sensitive material you might want to
think about a computer with no gpu. I wouldn't be surprised if the
NSA knew as much or more about these GPUs as we do.
BR,
-R
> Thanks,
> /Thomas
>
>
>>
>> Thanks
>> David
* Re: DRM security flaws and security levels.
2014-04-11 12:42 DRM security flaws and security levels Thomas Hellstrom
2014-04-11 20:31 ` David Herrmann
@ 2014-04-14 12:41 ` One Thousand Gnomes
2014-04-14 12:56 ` Thomas Hellstrom
1 sibling, 1 reply; 7+ messages in thread
From: One Thousand Gnomes @ 2014-04-14 12:41 UTC (permalink / raw)
To: Thomas Hellstrom
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
> throw out all GPU memory on master drop and block ioctls requiring
> authentication until master becomes active again.
If you have a per driver method then the driver can implement whatever is
optimal (possibly including throwing it all out).
> -1: The driver allows an authenticated client to craft command streams
> that could access any part of system memory. These drivers should be
> kept in staging until they are fixed.
I am not sure they belong in staging even.
> 0: Drivers that are vulnerable to any of the above scenarios.
> 1: Drivers that are immune against all above scenarios but allows any
> authenticated client with *active* master to access all GPU memory. Any
> enabled render nodes will be insecure, while primary nodes are secure.
> 2: Drivers that are immune against all above scenarios and can protect
> clients from accessing each other's gpu memory:
> Render nodes will be secure.
>
> Thoughts?
Another magic number to read, another case to get wrong where the OS
isn't providing security by default.
If the driver can be fixed to handle it by flushing out all GPU memory
then the driver should be fixed to do so. Adding magic udev nodes is just
adding complexity that ought to be made to go away before it even becomes
an API.
So I think there are three cases
- insecure junk driver. Shouldn't even be in staging
- hardware isn't smart enough, or perhaps has a performance problem, so
  sometimes flushes all buffers away on a switch
- drivers that behave well
Do you then even need a sysfs node and udev hacks (remembering not
everyone even deploys udev on their Linux-based products)?
For the other cases
- how prevalent are the problem older user-space drivers nowadays?
- the fix for "won't fix" drivers is to move them to staging, and then
if they are not fixed or do not acquire a new maintainer who will,
delete them.
- if we have 'can't fix' drivers then it's a bit different and we need to
understand better *why*.
Don't screw the kernel up because there are people who can't be bothered
to fix bugs. Moving them out of the tree is a great incentive to find
someone to fix it.
[Rob Clark]
>fwiw, I do think we want security level reporting for drivers that
>don't have per-process pagetables (either the hw doesn't support, or
>simply just not implemented yet)
Agreed - and this is an example of where it can't really be hidden very
easily. Even better would be if asking for an isolation safe buffer was
required to be sure you got one. Policy is then in the X server (although
I'm not sure how you'd map it nicely - Xsecurity or something new ?)
This discussion is missing one other thing. If you have a desktop
spanning multiple hardware interfaces you need to get the right
information to the desktop. Do you just refuse to support the secure
buffer for the banking app, or does it merely get tied into the display
area rendered by a particular adapter ?
Alan
* Re: DRM security flaws and security levels.
2014-04-14 12:41 ` One Thousand Gnomes
@ 2014-04-14 12:56 ` Thomas Hellstrom
2014-04-14 13:09 ` Rob Clark
0 siblings, 1 reply; 7+ messages in thread
From: Thomas Hellstrom @ 2014-04-14 12:56 UTC (permalink / raw)
To: One Thousand Gnomes
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
On 04/14/2014 02:41 PM, One Thousand Gnomes wrote:
>> throw out all GPU memory on master drop and block ioctls requiring
>> authentication until master becomes active again.
> If you have a per driver method then the driver can implement whatever is
> optimal (possibly including throwing it all out).
>
>> -1: The driver allows an authenticated client to craft command streams
>> that could access any part of system memory. These drivers should be
>> kept in staging until they are fixed.
> I am not sure they belong in staging even.
>
>> 0: Drivers that are vulnerable to any of the above scenarios.
>> 1: Drivers that are immune against all above scenarios but allows any
>> authenticated client with *active* master to access all GPU memory. Any
>> enabled render nodes will be insecure, while primary nodes are secure.
>> 2: Drivers that are immune against all above scenarios and can protect
>> clients from accessing each other's gpu memory:
>> Render nodes will be secure.
>>
>> Thoughts?
> Another magic number to read, another case to get wrong where the OS
> isn't providing security by default.
>
> If the driver can be fixed to handle it by flushing out all GPU memory
> then the driver should be fixed to do so. Adding magic udev nodes is just
> adding complexity that ought to be made to go away before it even becomes
> an API.
>
> So I think there are three cases
>
> - insecure junk driver. Shouldn't even be in staging
> - hardware isn't smart enough, or perhaps has a performance problem, so
>   sometimes flushes all buffers away on a switch
> - drivers that behave well
>
> Do you then even need a sysfs node and udev hacks (remembering not
> everyone even deploys udev on their Linux based products)
>
> For the other cases
>
> - how prevalent are the problem older user-space drivers nowadays?
>
> - the fix for "won't fix" drivers is to move them to staging, and then
> if they are not fixed or do not acquire a new maintainer who will,
> delete them.
>
> - if we have 'can't fix' drivers then it's a bit different and we need to
>   understand better *why*.
>
> Don't screw the kernel up because there are people who can't be bothered
> to fix bugs. Moving them out of the tree is a great incentive to find
> someone to fix it.
>
On second thought I'm dropping this whole issue.
I've brought this and other security issues up before but nobody really
seems to care.
/Thomas
* Re: DRM security flaws and security levels.
2014-04-14 12:56 ` Thomas Hellstrom
@ 2014-04-14 13:09 ` Rob Clark
0 siblings, 0 replies; 7+ messages in thread
From: Rob Clark @ 2014-04-14 13:09 UTC (permalink / raw)
To: Thomas Hellstrom
Cc: One Thousand Gnomes, linux-kernel@vger.kernel.org,
dri-devel@lists.freedesktop.org
On Mon, Apr 14, 2014 at 8:56 AM, Thomas Hellstrom <thellstrom@vmware.com> wrote:
> On 04/14/2014 02:41 PM, One Thousand Gnomes wrote:
>>> throw out all GPU memory on master drop and block ioctls requiring
>>> authentication until master becomes active again.
>> If you have a per driver method then the driver can implement whatever is
>> optimal (possibly including throwing it all out).
>>
>>> -1: The driver allows an authenticated client to craft command streams
>>> that could access any part of system memory. These drivers should be
>>> kept in staging until they are fixed.
>> I am not sure they belong in staging even.
>>
>>> 0: Drivers that are vulnerable to any of the above scenarios.
>>> 1: Drivers that are immune against all above scenarios but allows any
>>> authenticated client with *active* master to access all GPU memory. Any
>>> enabled render nodes will be insecure, while primary nodes are secure.
>>> 2: Drivers that are immune against all above scenarios and can protect
>>> clients from accessing each other's gpu memory:
>>> Render nodes will be secure.
>>>
>>> Thoughts?
>> Another magic number to read, another case to get wrong where the OS
>> isn't providing security by default.
>>
>> If the driver can be fixed to handle it by flushing out all GPU memory
>> then the driver should be fixed to do so. Adding magic udev nodes is just
>> adding complexity that ought to be made to go away before it even becomes
>> an API.
>>
>> So I think there are three cases
>>
>> - insecure junk driver. Shouldn't even be in staging
>> - hardware isn't smart enough, or perhaps has a performance problem, so
>>   sometimes flushes all buffers away on a switch
>> - drivers that behave well
>>
>> Do you then even need a sysfs node and udev hacks (remembering not
>> everyone even deploys udev on their Linux based products)
>>
>> For the other cases
>>
>> - how prevalent are the problem older user-space drivers nowadays?
>>
>> - the fix for "won't fix" drivers is to move them to staging, and then
>> if they are not fixed or do not acquire a new maintainer who will,
>> delete them.
>>
>> - if we have 'can't fix' drivers then it's a bit different and we need to
>>   understand better *why*.
>>
>> Don't screw the kernel up because there are people who can't be bothered
>> to fix bugs. Moving them out of the tree is a great incentive to find
>> someone to fix it.
>>
>
> On second thought I'm dropping this whole issue.
> I've brought this and other security issues up before but nobody really
> seems to care.
I wouldn't say that.. render-nodes, dri3/prime/dmabuf, etc, wouldn't
exist if we weren't trying to solve these issues.
Like I said earlier, I think we do want some way to expose the range of
supported security levels, and, in case multiple levels are supported
by the driver, some way to configure the desired level.
Well, "range" may be overkill, I only see two sensible values, either
"gpu can access anyone's gpu memory (but not arbitrary system
memory)", or "we can also do per-process isolation of gpu buffers".
Of course the "I am a root hole" security level has no place in the
kernel.
BR,
-R
> /Thomas