Message-ID: <4BF45B6C.8000908@codemonkey.ws>
Date: Wed, 19 May 2010 16:43:08 -0500
From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 0/6]: QMP: Fix issues in parser/lexer
References: <1274303733-3700-1-git-send-email-lcapitulino@redhat.com>
In-Reply-To: <1274303733-3700-1-git-send-email-lcapitulino@redhat.com>
List-Id: qemu-devel.nongnu.org
To: Luiz Capitulino
Cc: aliguori@us.ibm.com, qemu-devel@nongnu.org

On 05/19/2010 04:15 PM, Luiz Capitulino wrote:
> Hi Anthony,
>
> While investigating a QMP bug reported by a user, I've found a few issues
> in our parser/lexer.
>
> The patches in this series fix the problems I was able to solve, but we
> still have the following issues:
>
> 1. Our 'private extension' is open to the public
>
> E.g. the following input issued by a client is valid:
>
> { 'execute': 'query-pci' }
>
> I don't think it's a good idea to have clients relying on this kind of
> JSON extension.
>
> To fix this we could add an 'extension' flag to JSONLexer and set it to
> nonzero in internal functions (e.g. qobject_from_jsonf()); of course,
> the lexer code would have to handle this too.
The JSON specification explicitly says: "A JSON parser transforms a JSON
text into another representation. A JSON parser MUST accept all texts that
conform to the JSON grammar. A JSON parser MAY accept non-JSON forms or
extensions." IOW, we're under no obligation to reject extensions, and I
can't think of a reason why we should.

> 2. QMP doesn't check the return of json_message_parser_feed()
>
> Which means we don't handle JSON syntax errors. While the fix might seem
> trivial (i.e. just return an error!), I'm not sure what's the best way
> to handle this, because the streamer seems to return multiple errors for
> the same input string.
>
> For example, this input:
>
> { "execute": yy_uu }
>
> Seems to return an error for each bad character (yy_uu). Shouldn't it
> return only once and stop processing the whole string?

It probably should kill the connection.

> 3. The lexer enters the ERROR state when processing is done
>
> Not sure whether this is an issue, but I found it while reviewing the
> code, and maybe it is related to item 2 above.
>
> When json_lexer_feed_char() is finished scanning a string (i.e. ch='\0'),
> the JSON_SKIP clause will set lexer->state to ERROR, as there's no entry
> for '\0' in the IN_START array.
>
> Shouldn't we have a LEXER_DONE or something like it instead?

No, an error only occurs on malformed input:

    [IN_WHITESPACE] -> TERMINAL(JSON_SKIP)

JSON_SKIP is a terminal, so once you're in that state, you go back to
IN_START.

> 4. The lexer expects a 'terminal' char to process a token
>
> Which means clients must send a sort of end-of-line char so that we
> process their input.
>
> Maybe I'm missing something here, but I thought that the whole point of
> writing our own parser was to avoid this.

If the lexer gets:

"abc"

it has no way of knowing whether that's the complete token or whether
we're going to get:

"abcd"

as the token. You can fix this in two ways.
You can either flush() the lexer to signify end of input, or you can wait
until some other valid symbol arrives that causes the previous symbol to
be emitted. IOW, a client needs to either:

1) send the request followed by a newline or some other form of
   whitespace, or

2) close the connection to flush the request.

Regards,

Anthony Liguori