From: Adrian Cox <apc@agelectronics.co.uk>
To: Kumar Gala <kumar@chaos.ph.utexas.edu>
Cc: Jim Terman <terman@ddi.com>,
linuxppc-dev <linuxppc-dev@lists.linuxppc.org>
Subject: Re: question about altivec registers
Date: Wed, 27 Oct 1999 09:58:15 +0100
Message-ID: <3816BEA7.23AF7175@agelectronics.co.uk>
In-Reply-To: <Pine.LNX.4.10.9910261733160.10102-100000@chaos.ph.utexas.edu>

Kumar Gala wrote:
> The AltiVec registers have to be saved and restored explicitly. If you
> look at arch/ppc/kernel/head.S for load_up_fp, you will see how the
> floating point unit is handled on exceptions. Essentially, some checks
> are done, and a pointer is kept to the last process that used the FP
> unit (last_task_used_fp); if needed, the FP regs are saved into that
> process's context and the FP regs for the incoming process are
> restored.
Linux on PowerPC should end up doing a classic lazy save/restore for the
vector context, as it already does for the floating point registers. On
SMP systems this simple approach isn't possible, but a quick
approximation is to detect the first time a process uses AltiVec and
mark it to always save and restore vector context from then on.
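In rough C terms the lazy scheme looks like the sketch below. This is
illustrative only: the real code is PowerPC assembly in head.S, and the
names (load_up_altivec, last_task_used_vec, the two helpers) are
hypothetical stand-ins modelled on the FP path Kumar describes.

    /* Hypothetical per-task vector save area. */
    struct vec_context {
        unsigned char vr[32][16];   /* v0-v31, 16 bytes each */
        unsigned int vrsave;
    };

    struct task {
        struct vec_context vec;     /* vector save area */
        /* ... rest of the task state ... */
    };

    /* Last owner of the vector unit (the lazy-switch pointer). */
    static struct task *last_task_used_vec;

    void save_vector_regs(struct vec_context *ctx);    /* 32 stvx stores */
    void restore_vector_regs(struct vec_context *ctx); /* 32 lvx loads   */

    /* Called from the "AltiVec unavailable" exception, taken when a
     * task issues a vector instruction while MSR[VEC] is off. */
    void load_up_altivec(struct task *incoming)
    {
        /* If another task still owns the unit, flush its registers
         * out to its own save area first. */
        if (last_task_used_vec && last_task_used_vec != incoming)
            save_vector_regs(&last_task_used_vec->vec);

        /* Pull the incoming task's vector state back in. */
        restore_vector_regs(&incoming->vec);
        last_task_used_vec = incoming;

        /* The real handler would now set MSR[VEC] for the task and
         * retry the faulting vector instruction. */
    }

On SMP the last-owner pointer breaks down because the previous owner's
registers may live on another CPU, hence the first-use marking above.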
I'd recommend that compiler writers use the vrsave register to mark
which vector registers they use, as a precaution against future kernels
that may look at it. Note that the G4 is extremely fast at linear
sequences of cacheable stores (store-miss merging), so it is probably
cheaper for the kernel to ignore vrsave and avoid branches in the save
and restore sequence. Of course, it is correct to simply set every bit
in vrsave at the start of your application and never change it again.
That may be non-optimal on future systems, but it should remain correct.
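That one-time setup could be as small as this (a sketch using GCC
inline assembly; vrsave is SPR 256 on AltiVec-capable parts, and
claim_all_vrs is a made-up name):

    /* Set all 32 bits of vrsave (SPR 256) once at startup, telling
     * any vrsave-honouring kernel to preserve every vector register. */
    static inline void claim_all_vrs(void)
    {
        unsigned int mask = 0xffffffffU;
        __asm__ __volatile__("mtspr 256, %0" : : "r"(mask));
    }

    int main(void)
    {
        claim_all_vrs();    /* before any vector code runs */
        /* ... the rest of the application ... */
        return 0;
    }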
As for the cache thrashing effect, remember that 512 bytes (32
registers of 16 bytes each) going in and out of the L2 cache is not
very expensive, and there is probably 1 or 2MB of L2 fitted.
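To put a number on that (a back-of-the-envelope check, not from the
thread itself):

    #include <stdio.h>

    int main(void)
    {
        const int vrs = 32;            /* v0-v31 */
        const int bytes_per_vr = 16;   /* 128-bit registers */
        int ctx = vrs * bytes_per_vr;  /* 512 bytes per save or restore */

        printf("vector context: %d bytes = %.3f%% of a 1MB L2\n",
               ctx, 100.0 * ctx / (1024.0 * 1024.0));
        return 0;
    }

That prints 0.049%, i.e. the whole vector context is a rounding error
against the L2.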
- Adrian Cox, AG Electronics