Date: Mon, 24 Apr 2017 16:03:48 +0300
From: Ville Syrjälä
To: Michel Dänzer
Cc: Gerd Hoffmann, amd-gfx@lists.freedesktop.org, open list,
    dri-devel@lists.freedesktop.org, Daniel Vetter, Christian König
Subject: Re: [PATCH] drm: fourcc byteorder: brings header file comments in line with reality.
Message-ID: <20170424130348.GV30290@intel.com>
References: <20170421075825.6307-1-kraxel@redhat.com>
 <20170421092530.GE30290@intel.com>
 <1492768218.25675.33.camel@redhat.com>
 <20170421110804.GH30290@intel.com>
 <1492780323.25675.45.camel@redhat.com>
 <1492791271.25675.57.camel@redhat.com>
 <20170422100522.GS30290@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Apr 24, 2017 at 03:57:02PM +0900, Michel Dänzer wrote:
> On 22/04/17 07:05 PM, Ville Syrjälä wrote:
> > On Fri, Apr 21, 2017 at 06:14:31PM +0200, Gerd Hoffmann wrote:
> >> Hi,
> >>
> >>>> My personal opinion is that formats in drm_fourcc.h should be
> >>>> independent of the CPU byte order and the function
> >>>> drm_mode_legacy_fb_format() and drivers depending on that incorrect
> >>>> assumption be fixed instead.
> >>>
> >>> The problem is this isn't a kernel-internal thing any more.
> >>> With the
> >>> addition of the ADDFB2 ioctl the fourcc codes became part of the
> >>> kernel/userspace abi ...
> >>
> >> Ok, added some printk's to the ADDFB and ADDFB2 code paths and tested a
> >> bit. Apparently pretty much all userspace still uses the ADDFB ioctl.
> >> xorg (modesetting driver) does. gnome-shell in wayland mode does.
> >> Seems the big transition to ADDFB2 didn't happen yet.
> >>
> >> I guess that makes changing drm_mode_legacy_fb_format + drivers a
> >> reasonable option ...
> >
> > Yeah, I came to the same conclusion after chatting with some
> > folks on irc.
> >
> > So my current idea is that we change any driver that wants to follow the
> > CPU endianness
>
> This isn't really optional for various reasons, some of which have been
> covered in this discussion.
>
> > to declare support for big endian formats if the CPU is
> > big endian. Presumably these are mostly the virtual GPU drivers.
> >
> > Additionally we'll make the mapping performed by drm_mode_legacy_fb_format()
> > driver controlled. That way drivers that got changed to follow CPU
> > endianness can return a framebuffer format that matches CPU endianness. And
> > drivers that expect the GPU endianness to not depend on the CPU
> > endianness will keep working as they do now. The downside is that users
> > of the legacy addfb ioctl will need to magically know which endianness
> > they will get, but that is apparently already the case. And users of
> > addfb2 will keep on specifying the endianness explicitly with
> > DRM_FORMAT_BIG_ENDIAN vs. 0.
>
> I'm afraid it's not that simple.
>
> The display hardware of older (pre-R600 generation) Radeon GPUs does not
> support the "big endian" formats directly. In order to allow userspace
> to access pixel data in native endianness with the CPU, we instead use
> byte-swapping functionality which only affects CPU access.

OK, I'm getting confused. Based on our irc discussion I got the
impression you don't byte swap CPU accesses.
But since you do, how do you deal with mixing 8bpp vs. 16bpp vs. 32bpp?

> This means
> that the GPU and CPU effectively see different representations of the
> same video memory contents.
>
> Userspace code dealing with GPU access to pixel data needs to know the
> format as seen by the GPU, whereas code dealing with CPU access needs to
> know the format as seen by the CPU. I don't see any way to express this
> with a single format definition.

Hmm. Well that certainly makes life even more interesting.

-- 
Ville Syrjälä
Intel OTC