From: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Jon Masters <jcm@redhat.com>, Lee Revell <rlrevell@joe-job.com>,
linux-rt-users@vger.kernel.org,
LKML <linux-kernel@vger.kernel.org>,
williams <williams@redhat.com>,
"Luis Claudio R. Goncalves" <lgoncalv@redhat.com>
Subject: Re: [RT] [RFC] simple SMI detector
Date: Sun, 25 Jan 2009 12:07:45 -0200
Message-ID: <20090125140745.GC12776@khazad-dum.debian.net>
In-Reply-To: <alpine.LFD.2.00.0901251032060.3424@localhost.localdomain>
On Sun, 25 Jan 2009, Thomas Gleixner wrote:
> On Sat, 24 Jan 2009, Jon Masters wrote:
> > > The only reasonable thing you can do on a SMI plagued system is to
> > > identify the device which makes use of SMIs. Legacy ISA devices and
> > > USB are usually good candidates. If that does not help, don't use it
> > > for real-time :)
> >
> > Indeed. This is why I wrote an smi_detector that sits in kernel space
> > and can be reasonably sure measured discrepancies are attributable to
> > SMI behavior. We want to log and detect such things before RT systems
> > are deployed, not have users actively trying to work around SMI overhead
> > after the fact.
>
> Agreed. A tool to detect SMI disturbance is a good thing. It just
> needs to be documented that users should take the results and talk to
> their board vendor. I know that off-the-shelf hardware will not be
> fixed, but industrial-grade hardware vendors usually have an interest
> in getting such problems resolved.
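Speaking of Jon's detector: for anyone who has not read the patch, the
core of the technique is small enough to sketch in user space. The code
below is only a rough approximation, not the actual smi_detector
module. The real thing runs in kernel space with interrupts disabled,
so that a large gap between two consecutive clock reads can only come
from an SMI (or NMI); in user space, ordinary interrupts and the
scheduler produce gaps as well, and the 10us threshold here is an
arbitrary example:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    int main(void)
    {
        const uint64_t threshold_ns = 10000;      /* 10us, arbitrary */
        const uint64_t window_ns = 1000000000ULL; /* sample for 1s   */
        uint64_t start = now_ns(), last = start, max_gap = 0;

        /* Spin reading the clock; any gap between consecutive reads
         * is time stolen from this loop by something else. */
        while (last - start < window_ns) {
            uint64_t t = now_ns();
            if (t - last > max_gap)
                max_gap = t - last;
            last = t;
        }

        if (max_gap > threshold_ns)
            printf("largest gap: %llu ns (SMI suspect)\n",
                   (unsigned long long)max_gap);
        return 0;
    }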
"gamer enthusiast" hardware might also get fixed. You just need two or
three posts to the "enthusiasts" forums about how SMI steals CPU cycles and
slow down their framerates, and suddenly, benchmarks will start going on and
on about how MoBo x has a high number of SMIs per minute, where MoBo y
doesn't...
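As an aside, the newest Intel CPUs (Core i7) carry a model-specific
counter for exactly this: MSR_SMI_COUNT (MSR 0x34), which increments on
every SMI. Assuming the msr driver is loaded and you have root, a
minimal "SMIs per minute" reader could be sketched like this (older
CPUs simply lack the MSR):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    static uint64_t read_smi_count(int fd)
    {
        uint64_t v = 0;
        /* For /dev/cpu/N/msr, the file offset is the MSR number. */
        if (pread(fd, &v, sizeof(v), 0x34) != sizeof(v))  /* MSR_SMI_COUNT */
            perror("rdmsr 0x34");
        return v;
    }

    int main(void)
    {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }
        uint64_t before = read_smi_count(fd);
        sleep(60);
        printf("SMIs in the last minute: %llu\n",
               (unsigned long long)(read_smi_count(fd) - before));
        close(fd);
        return 0;
    }

Boxes whose CPUs lack the counter are exactly where a timing-based
detector like Jon's is needed.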
The people who want low-latency desktops for audio work will also pay
attention to such benchmarks, and they will vote with their wallets.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Thread overview: 20+ messages
2009-01-23 22:55 [RT] [RFC] simple SMI detector Jon Masters
2009-01-24 2:33 ` Lee Revell
2009-01-24 16:30 ` Thomas Gleixner
2009-01-25 0:57 ` Jon Masters
2009-01-25 2:12 ` Sven-Thorsten Dietrich
2009-01-25 4:02 ` Theodore Tso
2009-01-25 9:40 ` Thomas Gleixner
2009-01-25 11:49 ` Bastien ROUCARIES
2009-01-25 15:04 ` Clark Williams
2009-01-25 21:41 ` Jon Masters
2009-01-25 21:38 ` Jon Masters
2009-01-25 9:34 ` Thomas Gleixner
2009-01-25 14:07 ` Henrique de Moraes Holschuh [this message]
2009-01-25 22:52 ` Mike Kravetz
2009-01-26 17:51 ` Jon Masters
2009-01-27 2:23 ` Lee Revell
2009-01-27 2:48 ` Keith Mannthey
2009-01-27 11:22 ` Pavel Machek
2009-01-27 15:17 ` Jon Masters
2009-01-27 18:00 ` Len Brown