Subject: [RFC] Removing VLA usage in l1oip_core
Date: 2018-03-08 22:50 UTC
From: Gustavo A. R. Silva
To: Karsten Keil
Cc: netdev, Gustavo A. R. Silva

Hi Karsten,

I'm trying to figure out the best way to fix the following VLA warning:

drivers/isdn/mISDN/l1oip_core.c:282:2: warning: ISO C90 forbids variable length array ‘frame’ [-Wvla]
  u8 frame[len + 32];
  ^~

While doing some research, I found the following.

Based on this definition at include/linux/mISDNhw.h:38:

	#define MAX_DFRAME_LEN_L1	300

and the following code at drivers/isdn/mISDN/l1oip_core.c:1115:


		if (skb->len > MAX_DFRAME_LEN_L1 || skb->len > L1OIP_MAX_LEN) {
			printk(KERN_WARNING "%s: skb too large\n",
			       __func__);
			break;
		}
		/* check for AIS / ulaw-silence */
		l = skb->len;
		if (!memchr_inv(skb->data, 0xff, l)) {
			if (debug & DEBUG_L1OIP_MSG)
				printk(KERN_DEBUG "%s: got AIS, not sending, "
				       "but counting\n", __func__);
			hc->chan[bch->slot].tx_counter += l;
			skb_trim(skb, 0);
			queue_ch_frame(ch, PH_DATA_CNF, hh->id, skb);
			return 0;
		}
		/* check for silence */
		l = skb->len;
		if (!memchr_inv(skb->data, 0x2a, l)) {
			if (debug & DEBUG_L1OIP_MSG)
				printk(KERN_DEBUG "%s: got silence, not sending"
				       ", but counting\n", __func__);
			hc->chan[bch->slot].tx_counter += l;
			skb_trim(skb, 0);
			queue_ch_frame(ch, PH_DATA_CNF, hh->id, skb);
			return 0;
		}

		/* send frame */
		p = skb->data;
		l = skb->len;
		while (l) {
			ll = (l < L1OIP_MAX_PERFRAME) ? l : L1OIP_MAX_PERFRAME;
			l1oip_socket_send(hc, hc->codec, bch->slot, 0,
					  hc->chan[bch->slot].tx_counter, p, ll);
			hc->chan[bch->slot].tx_counter += ll;
			p += ll;
			l -= ll;
		}


it seems that the maximum value 'len' can take at drivers/isdn/mISDN/l1oip_core.c:282 is 300: the send loop above always passes ll = min(l, L1OIP_MAX_PERFRAME), and l starts out as skb->len, which has already been checked against MAX_DFRAME_LEN_L1 (300).
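
For reference, the other two limits in that check both appear to be larger than 300 (I'm quoting these from drivers/isdn/mISDN/l1oip.h from memory, so please double-check), which is why MAX_DFRAME_LEN_L1 ends up being the binding constraint:

	/* drivers/isdn/mISDN/l1oip.h (values quoted from memory) */
	#define L1OIP_MAX_LEN		2048	/* max packet size from L2 */
	#define L1OIP_MAX_PERFRAME	1400	/* max data size in one frame */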


/*
 * send a frame via socket, if open and restart timer
 */
static int
l1oip_socket_send(struct l1oip *hc, u8 localcodec, u8 channel, u32 chanmask,
                  u16 timebase, u8 *buf, int len)
{
        u8 *p;
        u8 frame[len + 32];


If this is correct, I could send the following patch to fix the VLA warning:

diff --git a/drivers/isdn/mISDN/l1oip_core.c b/drivers/isdn/mISDN/l1oip_core.c
index 21d50e4..31e3cd5 100644
--- a/drivers/isdn/mISDN/l1oip_core.c
+++ b/drivers/isdn/mISDN/l1oip_core.c
@@ -279,7 +279,7 @@ l1oip_socket_send(struct l1oip *hc, u8 localcodec, u8 channel, u32 chanmask,
                  u16 timebase, u8 *buf, int len)
 {
        u8 *p;
-       u8 frame[len + 32];
+       u8 frame[332];
        struct socket *socket = NULL;
 
        if (debug & DEBUG_L1OIP_MSG)
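
If hard-coding 332 looks too magic, the same bound could also be spelled in terms of the existing macro; this is just a sketch of an equivalent variant, assuming the 300-byte analysis above is right (MAX_DFRAME_LEN_L1 is already visible in that file, since the caller quoted above uses it):

-       u8 frame[len + 32];
+       u8 frame[MAX_DFRAME_LEN_L1 + 32];       /* 300 + 32 = 332 */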

But I wanted to ask for your feedback first, in case I'm missing something.

Another solution is to use dynamic memory allocation, but if the maximum size for 'frame' is in the hundreds of bytes, it might not be worth the performance penalty.
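
In case it helps the discussion, here is roughly what that alternative would look like; just a sketch, assuming GFP_ATOMIC (I haven't verified whether this path is allowed to sleep) and that returning -ENOMEM is acceptable to the callers:

        u8 *p;
        u8 *frame;
        struct socket *socket = NULL;

        frame = kmalloc(len + 32, GFP_ATOMIC); /* GFP_ATOMIC is an assumption */
        if (!frame)
                return -ENOMEM;
        ...
        kfree(frame);   /* and every return path below would need this */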

What do you think?

Thanks!
--
Gustavo
