cluster-devel.redhat.com archive mirror
* [Cluster-devel] bug reports
@ 2012-10-24 10:38 Heiko Nardmann
  2012-10-24 10:47 ` Fabio M. Di Nitto
  2012-10-24 11:00 ` Steven Whitehouse
  0 siblings, 2 replies; 5+ messages in thread
From: Heiko Nardmann @ 2012-10-24 10:38 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi all!

Since all (or almost all?) GFS2 developers (as far as I can tell) are
employed by Red Hat, I wonder whether it makes sense to also post bug
reports to this mailing list, besides reporting them to RH support.


Kind regards,

    Heiko




* [Cluster-devel] bug reports
  2012-10-24 10:38 [Cluster-devel] bug reports Heiko Nardmann
@ 2012-10-24 10:47 ` Fabio M. Di Nitto
  2012-10-24 11:00 ` Steven Whitehouse
  1 sibling, 0 replies; 5+ messages in thread
From: Fabio M. Di Nitto @ 2012-10-24 10:47 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On 10/24/2012 12:38 PM, Heiko Nardmann wrote:
> Hi all!
> 
> Since all (or almost all?) GFS2 developers (as far as I can tell) are
> employed by Red Hat, I wonder whether it makes sense to also post bug
> reports to this mailing list, besides reporting them to RH support.

No, please report bugs via RH support. This list is for development only.

Fabio




* [Cluster-devel] bug reports
  2012-10-24 10:38 [Cluster-devel] bug reports Heiko Nardmann
  2012-10-24 10:47 ` Fabio M. Di Nitto
@ 2012-10-24 11:00 ` Steven Whitehouse
  2012-10-24 11:46   ` Heiko Nardmann
  1 sibling, 1 reply; 5+ messages in thread
From: Steven Whitehouse @ 2012-10-24 11:00 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

On Wed, 2012-10-24 at 12:38 +0200, Heiko Nardmann wrote:
> Hi all!
> 
> Since all (or almost all?) GFS2 developers (as far as I can tell) are
> employed by Red Hat, I wonder whether it makes sense to also post bug
> reports to this mailing list, besides reporting them to RH support.
> 
> 
> Kind regards,
> 
>     Heiko
> 

It depends what the reports are, really... for Red Hat customers with
subscriptions, using our official support channels is the best way. That
is not to preclude discussing the issue on mailing lists too, if you want
to, but going through our official support channels does ensure that
issues are handled in a timely manner.

Using systems like our ticketing system and/or Bugzilla means that we
have bugs in a filing system where we can keep track of them, and where
we can look for common features (sometimes, being able to see several
different reports of the same bug on different configurations can be
very helpful in tracking things down). By contrast, issues posted to
mailing lists are more easily lost track of, even though they are likely
to reach a wider audience, which can be an advantage if the issue is
something another user already has experience of.

This particular list, however, is intended for development discussion, so
that's mostly at the level of proposed patches for both bugs and new
features. So if you have a patch which fixes a bug in the upstream code,
then please do feel free to post it here, whether or not you've opened a
bug for it.


Steve.





* [Cluster-devel] bug reports
  2012-10-24 11:00 ` Steven Whitehouse
@ 2012-10-24 11:46   ` Heiko Nardmann
  2012-10-24 11:50     ` Steven Whitehouse
  0 siblings, 1 reply; 5+ messages in thread
From: Heiko Nardmann @ 2012-10-24 11:46 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On 24.10.2012 13:00, Steven Whitehouse wrote:
> Hi,
>
> On Wed, 2012-10-24 at 12:38 +0200, Heiko Nardmann wrote:
>> Hi all!
>>
>> Since all (or almost all?) GFS2 developers (as far as I can tell) are
>> employed by Red Hat, I wonder whether it makes sense to also post bug
>> reports to this mailing list, besides reporting them to RH support.
>>
>>
>> Kind regards,
>>
>>     Heiko
>>
> It depends what the reports are, really... for Red Hat customers with
> subscriptions, using our official support channels is the best way. That
> is not to preclude discussing the issue on mailing lists too, if you want
> to, but going through our official support channels does ensure that
> issues are handled in a timely manner.
>
> Using systems like our ticketing system and/or Bugzilla means that we
> have bugs in a filing system where we can keep track of them, and where
> we can look for common features (sometimes, being able to see several
> different reports of the same bug on different configurations can be
> very helpful in tracking things down). By contrast, issues posted to
> mailing lists are more easily lost track of, even though they are likely
> to reach a wider audience, which can be an advantage if the issue is
> something another user already has experience of.
>
> This particular list, however, is intended for development discussion, so
> that's mostly at the level of proposed patches for both bugs and new
> features. So if you have a patch which fixes a bug in the upstream code,
> then please do feel free to post it here, whether or not you've opened a
> bug for it.
>
>
> Steve.
>
>
Hi Steven!

Since I am busy with other tasks in the current project, I have no time
to understand and check the code - so no fixes should be expected from
my side. It is just that I am experiencing bugs, and maybe someone has
an idea, or is experienced enough with the code to know almost
immediately what the reason might be.

Currently RH support has recommended running the RHEL debug kernel to
maybe get further details (I am using 6.1). My setup is a two-node HA
cluster using GFS2 to access a SAN. I have tried to run a worst-case
scenario for GFS2, i.e. (roughly sketched as shell commands after the
list):

1) create traffic on the active node (thus leading to traffic on the SAN)
2) run an endless loop of 'find /SAN-Storage -ls' on the passive node
for some minutes
3) stop the endless loop on the passive node
4) unmount /SAN-Storage on the passive node
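
Roughly, that is (a sketch, not the exact commands I typed; the traffic
generator on the active node varies, and the output redirection is just
illustrative):

  # on the passive node, while the active node keeps writing to the SAN:
  while true; do
      find /SAN-Storage -ls > /dev/null
  done
  # after a few minutes, interrupt the loop (Ctrl-C), then:
  umount /SAN-Storage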

This sequence has led to a crash of the passive node almost immediately
after typing 'umount /SAN-Storage' and pressing 'Enter'. First I get the
following on the console (being logged into the machine via SSH):

Message from syslogd....
 kernel:general protection fault: 0000 [#1] SMP

 kernel:last sysfs file:
/sys/devices/platform/host8/session2/target8:0:0/8:0:0:1/timeout
Write failed: Broken pipe

Then I see a kernel panic on the iDRAC6 console; I've captured a
screenshot of the stack trace if someone is interested. Sorry, no kdump
vmcore gets created in this situation.
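
(For completeness: kdump is enabled in the usual RHEL 6 way - a sketch
only, the actual crashkernel size and dump target on my nodes may
differ:

  # memory reserved for the crash kernel via the kernel command line in
  # /boot/grub/grub.conf, e.g. "crashkernel=128M";
  # dump target configured in /etc/kdump.conf, then:
  chkconfig kdump on
  service kdump start

- yet this panic still does not produce a vmcore.)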

Since I am currently considering switching away from GFS2 (as being too
unstable) and instead using ext4 on the SAN (handling the mounting
explicitly on our own), the problems I have experienced might otherwise
get lost if not reported here (IMHO).


Kind regards,

    Heiko




* [Cluster-devel] bug reports
  2012-10-24 11:46   ` Heiko Nardmann
@ 2012-10-24 11:50     ` Steven Whitehouse
  0 siblings, 0 replies; 5+ messages in thread
From: Steven Whitehouse @ 2012-10-24 11:50 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

On Wed, 2012-10-24 at 13:46 +0200, Heiko Nardmann wrote:
> On 24.10.2012 13:00, Steven Whitehouse wrote:
> > Hi,
> >
> > On Wed, 2012-10-24 at 12:38 +0200, Heiko Nardmann wrote:
> >> Hi all!
> >>
> >> Since all (or almost all?) GFS2 developers (as far as I can tell) are
> >> employed by Red Hat, I wonder whether it makes sense to also post bug
> >> reports to this mailing list, besides reporting them to RH support.
> >>
> >>
> >> Kind regards,
> >>
> >>     Heiko
> >>
> > It depends what the reports are, really... for Red Hat customers with
> > subscriptions, using our official support channels is the best way. That
> > is not to preclude discussing the issue on mailing lists too, if you want
> > to, but going through our official support channels does ensure that
> > issues are handled in a timely manner.
> >
> > Using systems like our ticketing system and/or Bugzilla means that we
> > have bugs in a filing system where we can keep track of them, and where
> > we can look for common features (sometimes, being able to see several
> > different reports of the same bug on different configurations can be
> > very helpful in tracking things down). By contrast, issues posted to
> > mailing lists are more easily lost track of, even though they are likely
> > to reach a wider audience, which can be an advantage if the issue is
> > something another user already has experience of.
> >
> > This particular list, however, is intended for development discussion, so
> > that's mostly at the level of proposed patches for both bugs and new
> > features. So if you have a patch which fixes a bug in the upstream code,
> > then please do feel free to post it here, whether or not you've opened a
> > bug for it.
> >
> >
> > Steve.
> >
> >
> Hi Steven!
> 
> Since I am busy with other tasks in the current project, I have no time
> to understand and check the code - so no fixes should be expected from
> my side. It is just that I am experiencing bugs, and maybe someone has
> an idea, or is experienced enough with the code to know almost
> immediately what the reason might be.
> 
> Currently RH support has recommended running the RHEL debug kernel to
> maybe get further details (I am using 6.1). My setup is a two-node HA
> cluster using GFS2 to access a SAN. I have tried to run a worst-case
> scenario for GFS2, i.e.
> 
> 1) create traffic on the active node (thus leading to traffic on the SAN)
> 2) run an endless loop of 'find /SAN-Storage -ls' on the passive node
> for some minutes
> 3) stop the endless loop on the passive node
> 4) unmount /SAN-Storage on the passive node
> 
> This sequence has led to a crash of the passive node almost immediately
> after typing 'umount /SAN-Storage' and pressing 'Enter'. First I get the
> following on the console (being logged into the machine via SSH):
> 
> Message from syslogd....
>  kernel:general protection fault: 0000 [#1] SMP
> 
>  kernel:last sysfs file:
> /sys/devices/platform/host8/session2/target8:0:0/8:0:0:1/timeout
> Write failed: Broken pipe
> 
> Then I see a kernel panic on the iDRAC6 console; I've captured a
> screenshot of the stack trace if someone is interested. Sorry, no kdump
> vmcore gets created in this situation.
> 
> Since I am currently considering switching away from GFS2 (as being too
> unstable) and instead using ext4 on the SAN (handling the mounting
> explicitly on our own), the problems I have experienced might otherwise
> get lost if not reported here (IMHO).
> 
> 
> Kind regards,
> 
>     Heiko
> 

I assume from the fact that you are in touch with our support team that
you must have opened a ticket. Can you upload that screenshot (if you
have not already) and let me know what the ticket number is?

Steve.




