From: George Dunlap <george.dunlap@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [PATCH 3 of 3] IRQ: Introduce old_vector to irq_cfg
Date: Mon, 5 Sep 2011 12:45:27 +0100	[thread overview]
Message-ID: <1315223127.5679.9118.camel@elijah> (raw)
In-Reply-To: <4E64B5E9.6010606@citrix.com>

On Mon, 2011-09-05 at 12:43 +0100, Andrew Cooper wrote:
> >> diff -r cf93a1825d66 -r 1a244d4ca6ac xen/arch/x86/irq.c
> >> --- a/xen/arch/x86/irq.c        Fri Sep 02 17:33:17 2011 +0100
> >> +++ b/xen/arch/x86/irq.c        Fri Sep 02 17:33:17 2011 +0100
> >> @@ -211,15 +211,9 @@ static void __clear_irq_vector(int irq)
> >>
> >>     cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
> >>     for_each_cpu_mask(cpu, tmp_mask) {
> >> -        for (vector = FIRST_DYNAMIC_VECTOR; vector <= LAST_DYNAMIC_VECTOR;
> >> -                                vector++) {
> >> -            if (per_cpu(vector_irq, cpu)[vector] != irq)
> >> -                continue;
> >> -            TRACE_3D(TRC_HW_IRQ_MOVE_FINISH,
> >> -                     irq, vector, cpu);
> >> -            per_cpu(vector_irq, cpu)[vector] = -1;
> >> -             break;
> >> -        }
> >> +        ASSERT( per_cpu(vector_irq, cpu)[cfg->old_vector] == irq );
> >> +        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, vector, cpu);
> >> +        per_cpu(vector_irq, cpu)[vector] = -1;
> > Do you mean cfg->old_vector here, instead of vector?
> 
> No - the TRACE_3D and per_cpu lines are only diffs because of the change
> in whitespace when removing the loop (and this is the code which should
> actually remove the vector mapping).  You are correct however that
> cfg->old_vector should be set to IRQ_VECTOR_UNASSIGNED at the end of the
> for_each for consistency.  (In reality, you can't get to this bit of code
> without having a valid cfg->old_vector)

But you're also removing the for loop, which is what sets vector.  (I.e.,
the original code is somewhat confusingly written, in that the variable
"vector" means different things in different parts of the function.)

Before the patch, vector in that line will be any vector between
FIRST_DYNAMIC_VECTOR and LAST_DYNAMIC_VECTOR s.t. per_cpu(vector_irq,
cpu)[vector] == irq.

After the patch, vector at that line will be equal to cfg->vector (set
above).

Since we're looking through the cpus in cfg->old_cpu_mask, I would think
that we would be clearing cfg->old_vector, would we not?

In any case, it's certain that the ASSERT() should be checking the same
thing as the clearing line; i.e., either ASSERT(...[vector]==irq) and
then set ...[vector]=-1, or ASSERT(...[cfg->old_vector]==irq) and then
set ...[cfg->old_vector]=-1.
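
For what it's worth, here is roughly how I'd expect the hunk to read if
we settle on cfg->old_vector (just a sketch, untested):

    cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
    for_each_cpu_mask(cpu, tmp_mask) {
        /* This cpu still holds the old mapping, so it must point at irq. */
        ASSERT( per_cpu(vector_irq, cpu)[cfg->old_vector] == irq );
        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, cfg->old_vector, cpu);
        /* Tear down the stale vector-to-irq mapping on this cpu. */
        per_cpu(vector_irq, cpu)[cfg->old_vector] = -1;
    }
    /* Reset old_vector once the old mapping is gone, for consistency. */
    cfg->old_vector = IRQ_VECTOR_UNASSIGNED;

That keeps the ASSERT, the trace, and the clearing all indexed by the
same vector, and resets old_vector afterwards as you suggested.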

 -George

> 
> >>      }
> >>
> >>     if ( cfg->used_vectors )
> >> @@ -279,6 +273,7 @@ static void __init init_one_irq_desc(str
> >>  static void __init init_one_irq_cfg(struct irq_cfg *cfg)
> >>  {
> >>     cfg->vector = IRQ_VECTOR_UNASSIGNED;
> >> +    cfg->old_vector = IRQ_VECTOR_UNASSIGNED;
> >>     cpus_clear(cfg->cpu_mask);
> >>     cpus_clear(cfg->old_cpu_mask);
> >>     cfg->used_vectors = NULL;
> >> @@ -418,6 +413,7 @@ next:
> >>         if (old_vector) {
> >>             cfg->move_in_progress = 1;
> >>             cpus_copy(cfg->old_cpu_mask, cfg->cpu_mask);
> >> +            cfg->old_vector = cfg->vector;
> >>         }
> >>         trace_irq_mask(TRC_HW_IRQ_ASSIGN_VECTOR, irq, vector, &tmp_mask);
> >>         for_each_cpu_mask(new_cpu, tmp_mask)
> >> diff -r cf93a1825d66 -r 1a244d4ca6ac xen/include/asm-x86/irq.h
> >> --- a/xen/include/asm-x86/irq.h Fri Sep 02 17:33:17 2011 +0100
> >> +++ b/xen/include/asm-x86/irq.h Fri Sep 02 17:33:17 2011 +0100
> >> @@ -28,7 +28,8 @@ typedef struct {
> >>  } vmask_t;
> >>
> >>  struct irq_cfg {
> >> -        int  vector;
> >> +        s16 vector;                  /* vector itself is only 8 bits, */
> >> +        s16 old_vector;              /* but we use -1 for unassigned  */
> >>         cpumask_t cpu_mask;
> >>         cpumask_t old_cpu_mask;
> >>         unsigned move_cleanup_count;
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xensource.com
> >> http://lists.xensource.com/xen-devel
> >>
> 

Thread overview: 9+ messages
2011-09-02 16:35 [PATCH 0 of 3] IRQ: Part 1 of the irq code cleanup Andrew Cooper
2011-09-02 16:35 ` [PATCH 1 of 3] IRQ: Remove bit-rotten code Andrew Cooper
2011-09-05 10:10   ` George Dunlap
2011-09-02 16:35 ` [PATCH 2 of 3] IRQ: Fold irq_status into irq_cfg Andrew Cooper
2011-09-02 16:35 ` [PATCH 3 of 3] IRQ: Introduce old_vector to irq_cfg Andrew Cooper
2011-09-05 10:14   ` George Dunlap
2011-09-05 11:43     ` Andrew Cooper
2011-09-05 11:45       ` George Dunlap [this message]
2011-09-05 13:17         ` [PATCH 3 of 3] IRQ: Introduce old_vector to irq_cfg v2 Andrew Cooper
