From: "Brian D. Carlstrom" <bdc@carlstrom.com>
To: linas@austin.ibm.com (Linas Vepstas)
Cc: 'Paul Mackerras' <paulus@samba.org>,
'linuxppc-dev list' <linuxppc-dev@ozlabs.org>
Subject: Re: AltiVec in the kernel
Date: Thu, 20 Jul 2006 11:47:04 -0700
Message-ID: <le8xmoktav.wl%bdc@carlstrom.com>
In-Reply-To: <20060720174255.GP5905@austin.ibm.com>
At Thu, 20 Jul 2006 12:42:55 -0500,
Linas Vepstas wrote:
> > We found glibc sucked for that.
>
> Only because someone was asleep at the wheel, or there was a bug.
>
> When glibc gets ported to a new architecture, one of the earliest
> tasks is to create optimized versions of memcpy and the like.
> Presumably, on powerpc, this would have been done more than a
> decade ago; it's hard for me to imagine that there'd be a problem
> there. Now, I haven't looked at the code, but I just can't imagine
> how this would not have been found and fixed by now. Is there
> really a problem with glibc performance on powerpc? I mean,
> this is a pretty serious accusation, and something that should
> be fixed asap.
In the course of my work, I use powerpc architecture simulators. When
working on Mac OS X with a G5, I had to implement some of the basic
AltiVec instructions specifically for use by their libc memcpy
implementation. A quick grep for memcpy in the recent glibc sources on
my linux/ppc box seems to show nowhere near that level of optimization,
though I admit I could have missed something. However, I would not be
surprised if glibc avoided AltiVec-specific optimizations, since they
would add to the complexity of supporting various architectures with
one binary. On Mac OS X, libc actually delegated a small number of
libc calls, such as memcpy, through a kernel-managed page at the end of
the address space that selected which routines to use based on the
currently running architecture.
-bri
Thread overview: 31+ messages
2006-07-18 12:48 AltiVec in the kernel Matt Sealey
2006-07-18 13:53 ` Kumar Gala
2006-07-18 15:10 ` Matt Sealey
2006-07-18 17:56 ` Paul Mackerras
2006-07-19 18:10 ` Linas Vepstas
2006-07-19 18:19 ` Paul Mackerras
2006-07-19 18:38 ` Johannes Berg
2006-07-19 18:57 ` Linas Vepstas
2006-07-20 12:31 ` Matt Sealey
2006-07-20 13:23 ` Kumar Gala
2006-07-20 13:33 ` Matt Sealey
2006-07-20 17:42 ` Linas Vepstas
2006-07-20 18:47 ` Brian D. Carlstrom [this message]
2006-07-20 19:05 ` Olof Johansson
2006-07-20 21:56 ` Brian D. Carlstrom
2006-07-20 22:39 ` Daniel Ostrow
2006-07-21 6:35 ` Olof Johansson
2006-07-21 14:42 ` Matt Sealey
2006-07-21 16:51 ` Linas Vepstas
2006-07-21 18:08 ` Matt Sealey
2006-07-22 3:09 ` Segher Boessenkool
2006-07-23 13:28 ` Matt Sealey
2006-07-23 21:37 ` Benjamin Herrenschmidt
2006-07-21 18:46 ` Brian D. Carlstrom
2006-07-21 21:30 ` Hollis Blanchard
2006-07-21 22:21 ` Peter Bergner
2006-07-18 18:39 ` Benjamin Herrenschmidt
2006-07-18 17:43 ` Paul Mackerras