cluster-devel.redhat.com archive mirror
From: teigland@sourceware.org <teigland@sourceware.org>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] cluster/group/gfs_controld main.c
Date: 31 Aug 2006 18:46:25 -0000
Message-ID: <20060831184625.6878.qmail@sourceware.org>

CVSROOT:	/cvs/cluster
Module name:	cluster
Changes by:	teigland at sourceware.org	2006-08-31 18:46:24

Modified files:
	group/gfs_controld: main.c 

Log message:
	tidy up a couple style things

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&r1=1.11&r2=1.12

--- cluster/group/gfs_controld/main.c	2006/08/31 18:17:32	1.11
+++ cluster/group/gfs_controld/main.c	2006/08/31 18:46:24	1.12
@@ -89,8 +89,8 @@
 {
 	int i;
 
-	while (1) { /* I hate gotos */
-		/* This is expected to fail the first time, with nothing allocated: */
+	while (1) {
+		/* This fails the first time with client_size of zero */
 		for (i = 0; i < client_size; i++) {
 			if (client[i].fd == -1) {
 				client[i].fd = fd;
@@ -98,22 +98,25 @@
 				pollfd[i].events = POLLIN;
 				if (i > *maxi)
 					*maxi = i;
-				/* log_debug("client %d fd %d added", i, fd); */
 				return i;
 			}
 		}
+
 		/* We didn't find an empty slot, so allocate more. */
 		client_size += MAX_CLIENTS;
+
 		if (!client) {
 			client = malloc(client_size * sizeof(struct client));
 			pollfd = malloc(client_size * sizeof(struct pollfd));
-		}
-		else {
-			client = realloc(client, client_size * sizeof(struct client));
-			pollfd = realloc(pollfd, client_size * sizeof(struct pollfd));
+		} else {
+			client = realloc(client, client_size *
+						 sizeof(struct client));
+			pollfd = realloc(pollfd, client_size *
+						 sizeof(struct pollfd));
 		}
 		if (!client || !pollfd)
 			log_error("Can't allocate client memory.");
+
 		for (i = client_size - MAX_CLIENTS; i < client_size; i++) {
 			client[i].fd = -1;
 			pollfd[i].fd = -1;
@@ -129,14 +132,6 @@
 	pollfd[ci].fd = -1;
 }
 
-static void client_init(void)
-{
-	int i;
-
-	for (i = 0; i < client_size; i++)
-		client[i].fd = -1;
-}
-
 int client_send(int ci, char *buf, int len)
 {
 	return write(client[ci].fd, buf, len);
@@ -586,7 +581,6 @@
 	prog_name = argv[0];
 	INIT_LIST_HEAD(&mounts);
 	INIT_LIST_HEAD(&withdrawn_mounts);
-	client_init();
 
 	decode_arguments(argc, argv);
 

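An aside for readers of the patch: client_add() now grows the client and
pollfd tables in lockstep and marks each newly allocated slot free with
fd == -1, which is why the separate client_init() pass (and its call in
main()) could be dropped. Below is a minimal, self-contained sketch of the
same grow-on-demand pattern, not the gfs_controld code: struct client is
reduced to its fd member, MAX_CLIENTS is given an arbitrary placeholder
value, and grow_tables() is a hypothetical helper introduced here only to
keep the failure handling readable.

#include <poll.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_CLIENTS 8			/* growth increment; placeholder value */

struct client {
	int fd;				/* simplified; the real struct has more */
};

static struct client *client;
static struct pollfd *pollfd;
static int client_size;

/* Grow both tables by MAX_CLIENTS and mark the new slots free.
   realloc(NULL, n) behaves like malloc(n), so the first call needs no
   special case; assigning through a temporary avoids losing the old
   block if realloc fails. */
static int grow_tables(void)
{
	int i, new_size = client_size + MAX_CLIENTS;
	struct client *c;
	struct pollfd *p;

	c = realloc(client, new_size * sizeof(struct client));
	if (!c)
		return -1;		/* old tables still intact */
	client = c;

	p = realloc(pollfd, new_size * sizeof(struct pollfd));
	if (!p)
		return -1;
	pollfd = p;

	for (i = client_size; i < new_size; i++) {
		client[i].fd = -1;	/* -1 marks an unused slot */
		pollfd[i].fd = -1;	/* poll(2) ignores negative fds */
	}
	client_size = new_size;
	return 0;
}

static int client_add(int fd, int *maxi)
{
	int i;

	while (1) {
		/* Fails the first time through, with client_size of zero */
		for (i = 0; i < client_size; i++) {
			if (client[i].fd == -1) {
				client[i].fd = fd;
				pollfd[i].fd = fd;
				pollfd[i].events = POLLIN;
				if (i > *maxi)
					*maxi = i;
				return i;
			}
		}
		if (grow_tables() < 0)
			return -1;
	}
}

int main(void)
{
	int maxi = 0;
	int ci = client_add(0, &maxi);	/* fd 0 (stdin) as a stand-in */

	printf("client %d added, table size %d\n", ci, client_size);
	free(client);
	free(pollfd);
	return 0;
}

Two C facts worth noting against the patch itself: since realloc(NULL, n)
is equivalent to malloc(n), the malloc/realloc branch in client_add()
could collapse into a single realloc() call; and the patched code's
"client = realloc(client, ...)" idiom loses the only pointer to the old
block when realloc returns NULL, so the log_error() path leaks and would
go on to index a null pointer unless log_error() terminates the process.
The sketch above sidesteps both.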

Thread overview: 18+ messages
2006-08-31 18:46 teigland [this message]
  -- strict thread matches above, loose matches on Subject: below --
2006-10-06 14:43 [Cluster-devel] cluster/group/gfs_controld main.c teigland
2006-10-20 19:32 teigland
2006-11-14 20:37 teigland
2006-11-14 21:06 teigland
2006-11-15 14:32 teigland
2006-11-27 22:42 teigland
2006-11-27 22:43 teigland
2006-11-27 22:43 teigland
2006-11-28 20:52 teigland
2006-11-28 20:52 teigland
2006-11-28 20:52 teigland
2006-12-01 15:28 teigland
2006-12-01 15:29 teigland
2006-12-01 15:29 teigland
2006-12-05 16:59 teigland
2006-12-05 17:26 teigland
2006-12-05 17:26 teigland
