From: Jakub Narebski <jnareb@gmail.com>
To: "J.H." <warthog9@eaglescrag.net>
Cc: git@vger.kernel.org
Subject: Re: [PATCH 17/18] gitweb: Prepare for cached error pages & better error page handling
Date: Fri, 10 Dec 2010 21:33:31 +0100
Message-ID: <201012102133.31816.jnareb@gmail.com>
In-Reply-To: <4D01E5CC.6010301@eaglescrag.net>
On Fri, 10 Dec 2010, J.H. wrote:
> > There is no problem with capturing output of die_error, nor there is a
> > problem with caching error pages (perhaps transiently in memory).
> >
> > The problem is that subroutines calling die_error assume that it would
> > exit ending subroutine that is responsible for generating current
> > action; see "goto DONE_GITWEB" which should be "goto DONE_REQUEST",
> > and which was "exit 0" some time ago at the end of die_error().
> >
> > With caching error pages you want die_error to exit $actions{$action}->(),
> > but not exit cache_fetch(). How do you intend to do it?
>
> Well there's one bug in how that function ends in looking at it again,
> basically the return case shouldn't happen, and that function should
> end like you're suggesting in the first part of your question (with
> respect to DONE_GITWEB).
>
> In the second part, you're not thinking with the fork() going (though in
> thinking sans the fork this might not work right).
>
> It's the background process that will call die_error in such a way that
> die_error_cache will get invoked. die_error_cache will write the .err
> file out, and the whole thing should just exit.
>
> Though now that I say that there's an obvious bug in the case where
> forking didn't work at all, in that case you would get a blank page as
> the connection would just be closed. If you refreshed (say hitting F5)
> you'd get the error at that point.
>
> Need to fix that non-forked problem though.
Well, if you, the author, cannot follow the control flow of your own
code, how can anyone be sure that this code is bug-free? And what does
that say about the maintainability of this code?
That rant aside, error/exception handling in gitweb currently does not
fit well with output caching, at least not with the locking variant.
die_error() functions as a kind of exception handling; we rely on the
fact that calling die_error() ends the request, regardless of how deep
in the call stack we are. Originally die_error() ended with 'exit',
which ModPerl::Registry redefines so that it ends the request rather
than exiting the worker. Then 'exit' was replaced by 'goto DONE_GITWEB'
to jump out of several levels of calls; at the time I didn't know about
ModPerl::Registry redefining 'exit'... and actually it should be
'goto DONE_REQUEST', as in "[PATCH/RFC] gitweb: Go to DONE_REQUEST
rather than DONE_GITWEB in die_error"
http://permalink.gmane.org/gmane.comp.version-control.git/162156
It is precisely because die_error() is an exception mechanism which, in
its current incarnation, always ends the request that error pages
(generated by die_error) were not cached: we jump out of capturing and
out of caching. An additional reason is that we don't want to bloat the
cache with error pages, which IMHO usually get requested only once
(assumption: different clients make different errors).
Now, in most cases, modifying die_error() for caching simply by
explicitly turning off capturing at its beginning is enough.
1. Single client, no generation in background (note that if a given URL
is never cached, we would not invoke background generation to refresh
stale data being shown - there would be no stale data).

In this case, if there is an expected error, die_error() gets explicitly
invoked, turns off capturing, prints the error page to the client, and
ends the request.
In the case of an uncaught "die", it would be caught by the CGI::Carp
error handler and passed to the handle_errors_html() subroutine (thanks
to gitweb using set_message(\&handle_errors_html)), which runs
die_error() with options making it not print the HTTP header (which was
already printed by the CGI::Carp error handler) and not exit - the
CGI::Carp error handler would end the request instead. die_error() turns
off capturing, prints the error page, and the CGI::Carp error handler
ends the request.
2. Two clients arriving at exactly the same error (same link), at the
same time. This is quite unlikely.

In my rewrite there is a loop in the ->compute method of the rewritten
caching engine, which reads:
do {
...
} until (<received data to show to user> || <tried to generate data ourself>);
This means that one client acquires the writers lock, die_error() prints
the error page and exits; the second client notices that it didn't
receive anything but hasn't tried generating it itself yet, so it runs
die_error() itself.
Dealing with "die"-ing works the same as in the case described above, so
there is no problem in this area either.

An alternative solution would be to treat this as the case described
below.
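The do/until loop above can be sketched in full; this is a minimal,
self-contained illustration (not the actual rewrite code), with fetch(),
generate() and the lock-file path as stand-ins:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);
my ($cache_file, $lock_file) = ("$dir/entry", "$dir/entry.lock");

# stand-in for reading a previously cached page
sub fetch {
	return undef unless -s $cache_file;
	open my $fh, '<', $cache_file or return undef;
	local $/;
	return scalar <$fh>;
}

sub generate { "generated page\n" }    # stand-in for the real generator

sub compute {
	my ($data, $tried_generating);
	do {
		open my $lock_fh, '>', $lock_file or die "open $lock_file: $!";
		if (flock($lock_fh, LOCK_EX | LOCK_NB)) {
			# we hold the writers lock: generate and store the data
			$data = generate();
			open my $out, '>', $cache_file or die "open $cache_file: $!";
			print {$out} $data;
			close $out;
			$tried_generating = 1;
		} else {
			# somebody else is generating: wait for them, then re-read
			flock($lock_fh, LOCK_EX);    # blocks until the writer unlocks
			$data = fetch();             # may still be undef if writer failed
		}
		close $lock_fh;                      # releases the lock
	} until (defined $data || $tried_generating);
	return $data;
}

print compute();
```

If the writer dies on die_error() before writing anything, a waiting
client wakes up with $data still undef, has not yet tried generating
itself, and so loops once more and hits die_error() on its own.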
3. Gitweb runs cache entry generation in the background. Note that if
error pages are not cached, there would be no stale pages to serve while
regenerating data in the background - so entering the background process
can happen only via the "Generating..." page.
We can try to _prevent this from happening_, as I did in my rewrite by
introducing an initial/startup delay in "Generating..." (which has other
reasons to exist as well), or via the 'generating_info_is_safe'
protection. Otherwise we need to pass the error page from the background
process to the foreground process that is showing the "Generating..."
page; well, to be more exact, with the current workings of
"Generating..." it would be its successor (the next request, after a
reload / refresh).

Note that it is the *next request* that needs the error page (otherwise
we would show the "Generating..." page yet again).
So if die_error() finds itself in the background process (detected
perhaps via an explicit $background boolean variable, perhaps by
comparing $$ with $my_pid, perhaps by checking whether STDOUT is
closed), it needs to somehow write a cache entry, perhaps marking it in
a special way as an error page. The problem is doing this in a generic
way that would not make it impossible to use another caching engine, or
another capturing engine, in the future.

Note also that at the end of the background process (perhaps at the end
of die_error) we need to exit the process, and not just end the request,
so we should use 'CORE::exit(0);'.
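A sketch of that background branch; the helper names
(die_error_cache_background(), in_background()) are hypothetical
stand-ins for whatever die_error() would actually call, and the
$my_pid comparison is one of the detection methods mentioned above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $my_pid = $$;                       # recorded before any fork()
sub in_background { $$ != $my_pid }    # one possible detection method

my $dir = "/tmp/gitweb-errdemo-$$";
mkdir $dir or die "mkdir $dir: $!";
# clean up in the parent only; the child's CORE::exit also runs END blocks
END { if ($$ == $my_pid) { unlink "$dir/entry.err"; rmdir $dir } }

sub die_error_cache_background {
	my ($cache_path, $error_page) = @_;
	# mark the entry as an error page via a distinct suffix
	open my $fh, '>:utf8', "$cache_path.err" or CORE::exit(1);
	print {$fh} $error_page;
	close $fh;
	# really terminate the background process; plain exit() may be
	# redefined (e.g. by ModPerl::Registry) to end only the request
	CORE::exit(0);
}

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {                       # child acts as background generator
	die_error_cache_background("$dir/entry", "404 - No such project\n")
		if in_background();
	CORE::exit(1);                     # not reached
}
waitpid($pid, 0);
print -s "$dir/entry.err" ? "error page cached\n" : "missing\n";
```

The foreground successor request could then check for the ".err" entry
before showing "Generating..." again.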
The problem with the 3rd case makes me think that it is high time
die_error() used Perl 5's exception throwing and handling mechanism,
namely "die" (for throwing errors, to be used in die_error) and
"eval BLOCK" (for catching them).

As proposed on the #perl channel when I asked about this situation,
die_error() would use 'die \$DIE_ERROR' to throw a reference, or would
throw an object, to easily distinguish a handled error from die_error
from an unhandled error from Perl (where we assume that all errors are
strings).
run_request() or run() would then use 'eval { ... }', which has the
additional advantage that we can get rid of CGI::Carp::set_message,
which doesn't allow using a custom HTTP status and supposedly doesn't
work with mod_perl 2.0. Instead of adding capture_stop() to die_error(),
the capture mechanism should use 'eval { ... }' and just print the
response if there was an exception (like Capture::Tiny does)... or
return the captured error page to be cached when in the background
process.
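A minimal sketch of this 'die with a reference' scheme; the structure of
$DIE_ERROR and the body of run_request() here are illustrative, not
gitweb's actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

our $DIE_ERROR;                        # sentinel; we throw \$DIE_ERROR

sub die_error {
	my ($status, $message) = @_;
	$DIE_ERROR = { status => $status, message => $message };
	die \$DIE_ERROR;                   # throw a reference, not a string
}

sub run_request {
	eval {
		# imagine this call happens deep inside an action subroutine
		die_error(404, "No such project");
	};
	if (my $err = $@) {
		if (ref($err) && $err == \$DIE_ERROR) {
			# handled error thrown by die_error(): render its page
			return "Status: $DIE_ERROR->{status}\n\n$DIE_ERROR->{message}\n";
		}
		# unhandled Perl error (a plain string): generic 500 page
		return "Status: 500\n\n$err";
	}
	return "Status: 200\n\n...";
}

print run_request();
```

Because ordinary Perl errors are strings, ref($@) cheaply separates the
two cases, and no CGI::Carp handler is needed at all.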
Well, whichever way we choose to handle it, the code should be very
clear and handle all cases (other caching engines, perhaps also other
capture engines, non-persistent and persistent environments, redefined
'exit' as in the ModPerl::Registry case, non-redefined 'exit' as, I
think, in the FastCGI case, etc., etc.).
>>> This adds two functions:
>>>
>>> die_error_cache() - this gets back called from die_error() so
>>> that the error message generated can be cached.
>>
>> *How* die_error_cache() gets called back from die_error()? I don't
>> see any changes to die_error(), or actually any calling sites for
>> die_error_cache() in the patch below.
>>
>>> cacheDisplayErr() - this is a simplified version of cacheDisplay()
>>> that does an initial check, if the error page exists - display it
>>> and exit. If not, return.
>>
>> Errr... isn't it removed in _preceding_ patch? WTF???
>
> in breaking up the series it got included in the wrong spot, and
> apparently removed and re-added correctly, should be fixed in v9
[...]
>
> If you'd rather I can squash 17 & 18 into a single commit.
Yes, please. Splitting those changes into 17 & 18 didn't make them
clearer (usually a smaller commit == easier to review), but rather less
transparent.
>>> +sub die_error_cache {
>>> + my ($output) = @_;
>>> +
>>> + open(my $cacheFileErr, '>:utf8', "$fullhashpath.err");
>>> + my $lockStatus = flock($cacheFileErr,LOCK_EX|LOCK_NB);
>>
>> Why do you need to lock here? A comment would be nice.
>
> At any point when a write happens there's the potential for multiple
> simultaneous writes. Locking becomes obvious when you're trying to
> prevent multiple processes from writing to the same thing at the same
> time...
Or you can use the 'write to File::Temp::tempfile, then rename the file'
trick for atomic file updates. I use it in early commits of my rewrite
of gitweb caching; see e.g.:
http://repo.or.cz/w/git/jnareb-git.git/blob/refs/heads/gitweb/cache-kernel-v6:/gitweb/lib/GitwebCache/SimpleFileCache.pm#l362
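For illustration, the trick boils down to something like this
(atomic_write() is a hypothetical helper, not code from either series):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename qw(dirname);
use File::Temp qw(tempfile tempdir);

# Atomically replace $target with $data: write a tempfile in the same
# directory (rename(2) is only atomic within one filesystem), then
# rename it over the target.  Readers always see either the complete
# old file or the complete new one, with no locking needed.
sub atomic_write {
	my ($target, $data) = @_;
	my ($fh, $tmpname) =
		tempfile('tmpXXXXXX', DIR => dirname($target), UNLINK => 0);
	binmode $fh, ':utf8';
	print {$fh} $data;
	close $fh or die "close $tmpname: $!";
	rename $tmpname, $target
		or do { unlink $tmpname; die "rename $tmpname -> $target: $!" };
}

my $dir = tempdir(CLEANUP => 1);
atomic_write("$dir/entry.html", "cached page\n");
print "wrote $dir/entry.html\n";
```

With this scheme concurrent writers simply race to rename; the last one
wins, and no reader ever observes a half-written cache entry.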
>>> +
>>> + if (! $lockStatus ){
>>> + if ( $areForked ){
>>
>> Grrrr...
Global variables, action at considerable distance.
>> But if it is here to stay, a comment if you please.
>>
>>> + exit(0);
>>> + }else{
>>> + return;
>>> + }
>>> + }
>
> The exit(0) or return have been removed in favor of DONE_GITWEB; as
> we've already errored, if we are broken here we should just die.
Note the difference between exit and CORE::exit when running gitweb from
mod_perl using the ModPerl::Registry handler.
>>> +
>>> + flock($cacheFileErr,LOCK_UN);
>>> + close($cacheFileErr);
>>
>> Closing file will unlock it.
>
> Doesn't really hurt to be explicit though.
O.K., but please note that I have found LOCK_UN to be unreliable.
>>> +
>>> + if ( $areForked ){
>>> + exit(0);
>>> + }else{
>>> + return;
>>
>> So die_error_cache would not actually work like "die" here and like
>> die_error(), isn't it?
>
> that was ejected, it was a bug. DONE_GITWEB is more correct, though I
> might need to add a hook to display the error message in the case that
> the process didn't fork.
By the way, why do you fork indiscriminately (remember that forking is
not without performance cost), even when background generation is turned
off, or when you don't need background generation? Wouldn't falling back
to non-background generation when fork() fails, as in my rewrite of the
gitweb caching series, be a better solution?
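A sketch of such a fallback, with generate_cache_entry() as a stand-in
for the real generator:

```perl
#!/usr/bin/perl
use strict;
use warnings;

$| = 1;    # avoid duplicating buffered output across fork()

sub generate_cache_entry { return "page\n" }    # stand-in generator

# Generate a page, forking into the background only when requested, and
# falling back to foreground generation if fork() fails - the client
# then waits a bit longer instead of receiving a blank response.
sub generate {
	my ($want_background) = @_;
	if ($want_background) {
		my $pid = fork();
		if (defined $pid && $pid == 0) {        # child: do the work,
			generate_cache_entry();             # then really exit
			CORE::exit(0);
		}
		return "Generating...\n" if defined $pid;   # parent: progress page
		# fork() failed: fall through to foreground generation
	}
	return generate_cache_entry();    # no fork cost when background is off
}

print generate(0);
```

The fall-through on a failed fork() is what fixes the blank-page case
described earlier in this thread.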
>>> + while( <$cacheFileErr> ){
>>> + print $_;
>>> + }
>>
>> Why not 'print <$cacheFileErr>' (list context), like in insert_file()
>> subroutine?
>
> I've had buffer problems with 'print <$cacheFileErr>' in some cases.
> This is small enough it shouldn't happen, but I've gotten into the habit
> of doing it this way. I can change it if you like.
Perhaps

	print while <$cacheFileErr>;

(we already use this idiom, as "print while <$fd>;", in
git_blame_common())? Or, if we use File::Copy, perhaps
File::Copy::copy($cacheFileErr, \*STDOUT) or something like that.
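For comparison, the variants side by side, using a tempfile as a
stand-in for the cached error page:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy ();
use File::Temp qw(tempfile);

# Prepare a stand-in for the cached error page.
my ($wfh, $name) = tempfile(UNLINK => 1);
print {$wfh} "cached error page\n";
close $wfh;

open my $err, '<', $name or die "open $name: $!";
print while <$err>;                  # the "print while <$fd>;" idiom:
                                     # a line at a time, constant memory
seek $err, 0, 0;
File::Copy::copy($err, \*STDOUT);    # equivalent: block-at-a-time copy
close $err;

# print <$err>;  would also work, but list context slurps the whole
# file into memory first - the buffering concern mentioned above
```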
--
Jakub Narebski
Poland