Some systems don't allow installing prerequisite Perl modules in the
systemwide locations. They may have to be installed under the home directory
instead, such as by using local::lib (which is how the cPanel Perl-module
installer works, on systems that use it).
For that to work, the local::lib-defined value for PERL5LIB must be in
the environment when Perl starts up. The former handling of %config{ENV}
kicked in too late: it relied on the Perl code to unpack it from the storable
and put it into the environment.
The easy solution is to build the wrapper so that it repopulates the
environment from %config{ENV} before ever exec'ing Perl (and then remove it
from the storable, as there is nothing more the Perl code needs to do with it).
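A minimal sketch of the shape of that, assuming a generated C wrapper with the
%config{ENV} pairs baked in at build time; the variable names, values, and the
exec'd path below are placeholders, not the real generated code:

    /* Repopulate the environment from values captured out of %config{ENV}
     * at wrapper-generation time, before handing off to Perl. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static const struct { const char *name, *value; } saved_env[] = {
        { "PERL5LIB", "/home/wiki/perl5/lib/perl5" },   /* e.g. local::lib */
        { "PATH", "/usr/local/bin:/usr/bin:/bin" },
    };

    int main(int argc, char **argv) {
        size_t i;

        /* Put the saved variables into the environment so that things like
         * local::lib's PERL5LIB take effect as soon as Perl starts. */
        for (i = 0; i < sizeof(saved_env) / sizeof(saved_env[0]); i++) {
            if (setenv(saved_env[i].name, saved_env[i].value, 1) != 0) {
                perror("setenv");
                return 1;
            }
        }

        /* Hand off to the wrapped Perl program (placeholder path). */
        execl("/usr/lib/ikiwiki/ikiwiki-wrapped", "ikiwiki-wrapped",
              (char *) NULL);
        perror("exec");
        return 1;
    }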
In the test suite, set up the post-commit hook for more realism (and bugs!).
To make wrappers work in test, set PERL5LIB, and allow the wrappee's
path to be overridden. Meta-test that post-commit is really hooked
up by verifying that content is getting generated in destdir.
About the longstanding bug, which as far as I know was harmless:
CVS can't operate outside a srcdir, so we're always setting $CWD.
"local $CWD" restores the previous value when we go out of scope.
Usually that's correct. But if we're removing the last file from a
directory, the post-commit hook will exec in a working directory
that's about to not exist (CVS will prune it).
The fix: chdir() manually in cvs_runcvs(), so we can selectively
not chdir() back.
Try to avoid a situation in which so many ikiwiki cgi wrapper programs are
running, all waiting on some long-running thing like a site rebuild, that
it prevents the web server from doing anything else. The current approach
only avoids this problem for GET requests; if multiple CGIs run GETs on a
site at the same time, one will display a "please wait" page for a
configurable number of seconds, and then redirect to retry. To enable
this protection, set cgi_overload_delay to the number of seconds to wait.
This is not enabled by default.
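A hedged sketch of how that could look in the generated CGI wrapper, before it
ever starts Perl. The wiki_is_busy() test, the delay value, and the page text
are placeholders for whatever the real wrapper bakes in:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CGI_OVERLOAD_DELAY 10    /* stand-in for the configured value */

    /* Placeholder: the real check would be something like a non-blocking
     * attempt on the wiki's CGI lock. */
    static int wiki_is_busy(void) {
        return 0;
    }

    int main(void) {
        const char *method = getenv("REQUEST_METHOD");

        /* Only GETs get the holding page; other requests proceed as before. */
        if (method != NULL && strcmp(method, "GET") == 0 &&
            CGI_OVERLOAD_DELAY > 0 && wiki_is_busy()) {
            printf("Refresh: %d\r\n", CGI_OVERLOAD_DELAY);
            printf("Content-Type: text/html\r\n\r\n");
            printf("<p>Please wait, the wiki is busy; retrying shortly.</p>\n");
            return 0;
        }

        /* ... otherwise take the lock and run the real CGI as usual ... */
        return 0;
    }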
This kind of change is scary, but this particular lock is used very simply, so
it seems ok to make the change even if it's only for better portability to
SunOS. (People still use that?)
Needed for attachment to return json when requested.
I think some browsers send "Accept: *", so I made sure to check that json was
explicitly listed as an accepted type, and given a high priority.
To review: tcc does not really use environ, so you have to use clearenv
there. But POSIX, in its wisdom, hasn't standardised clearenv yet, so on
FreeBSD one still needs to manipulate environ by hand.
(If you use tcc on FreeBSD, this may leave you unsatisfied.)
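A small sketch of that portability dance, assuming the build defines a
HAVE_CLEARENV macro (or similar) when the function is available; that
detection mechanism is an assumption here, not part of the original change:

    #include <stdlib.h>

    extern char **environ;

    static char *empty_env[] = { NULL };

    static void clear_environment(void) {
    #ifdef HAVE_CLEARENV
        /* Where clearenv() exists, use it; simply reassigning environ is
         * reportedly not enough under tcc. */
        clearenv();
    #else
        /* No clearenv() (e.g. FreeBSD): point environ at an empty vector. */
        environ = empty_env;
    #endif
    }

    int main(void) {
        clear_environment();
        return 0;
    }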
Loading and use of IkiWiki::Receive can all be pushed into the git plugin,
rather than scattered around.
I had at first wanted to make a receive plugin and move it there, but a
plugin was not a good fit: you don't want users to have to load it manually,
and making the git plugin load the receive plugin at the right times would
need more, and uglier, code.
* In Wrapper.pm, add a new hook "wrapperargcheck" to examine argc/argv
and return success or failure. In the failure case, the wrapper
terminates.
* In cvs.pm, implement the new hook to return failure if a directory is
being cvs added.
Since the wrapper sees its arguments before any Perl code can evaluate them,
check them in the wrapper right off the bat.
This doesn't prevent the deadlock in web commits that need to cvs
add directories, but I'm committing so Joey can take a look if he
wants.
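For illustration, here's roughly what that check could look like on the C side
of the wrapper. The cvs.pm hook supplies the actual test; the "New directory"
heuristic below is a hypothetical stand-in for however CVS announces a
directory add to its commit hooks:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in for the check generated by cvs.pm's
     * wrapperargcheck hook: does argv look like a directory being added? */
    static int looks_like_cvs_directory_add(int argc, char **argv) {
        int i;
        for (i = 1; i < argc; i++)
            if (strstr(argv[i], "New directory") != NULL)
                return 1;
        return 0;
    }

    int main(int argc, char **argv) {
        /* wrapperargcheck: on failure, terminate without ever running the
         * wrapped Perl program. */
        if (looks_like_cvs_directory_add(argc, argv))
            exit(1);

        /* ... normal wrapper behaviour (set up environment, exec Perl) ... */
        return 0;
    }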
This is necessary so that things that fork to the background, like pinger and
inline ping, don't block other CGIs from running.
Note that websetup also calls unlockwiki, before refreshing / rebuilding
the wiki. It makes perfect sense for that not to block other cgis.
Fixed by making the cgi wrapper wait on a cgilock.
If you had to set apache's MaxClients low to avoid ikiwiki thrashing
your server, you can now turn it up to a high value.
The downside to this is that a CGI call that doesn't need to call lockwiki
will also be serialised, so only one can run at a time (for example,
do=search). There are few such calls, and all of them call loadindex, so each
still eats gobs of memory; serialising them still seems ok.
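A minimal sketch of that serialisation, assuming the CGI wrapper takes an
exclusive flock on a dedicated cgilock file before exec'ing the Perl CGI; the
paths below are placeholders. Because the wait happens in the tiny C wrapper,
queued requests cost almost nothing until it is their turn:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        int fd = open("/home/wiki/.ikiwiki/cgilock", O_CREAT | O_RDWR, 0600);
        if (fd == -1) {
            perror("open cgilock");
            return 1;
        }

        /* Block until no other CGI holds the lock.  The fd stays open
         * across exec, so the lock is released when the Perl CGI exits. */
        if (flock(fd, LOCK_EX) != 0) {
            perror("flock cgilock");
            return 1;
        }

        execl("/usr/lib/ikiwiki/ikiwiki.cgi", "ikiwiki.cgi", (char *) NULL);
        perror("exec");
        return 1;
    }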
This speeds up web commits by 1/4th of a second or so, since perl does
not have to start up for the post commit hook.
perl's locking is completely FuBar, since it's impossible to tell what perl
flock() really does, and thus difficult to write code in other languages
that interoperates with perl's locking. (Let alone interoperating with
existing fcntl locking from perl...)
In this particular case, I think I was able to find a way to avoid the
insanity, mostly. The C code does a true flock(2), and if perl is using an
incompatible lock method that does not use the same locking primitive at
the kernel level, then the C code's test will fail, and it will go ahead
and run the perl code. Then the perl code's test will test the right thing.
On Debian, at least lately, perl's flock() does a true flock(2), so the
optimisation does work.
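A sketch of that fast path, under the assumption that the post-commit wrapper
probes the wiki's lock file with a non-blocking flock(2) before paying the
cost of starting perl; the lock file name and the exec'd command are
placeholders:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        int fd = open("/home/wiki/.ikiwiki/lockfile", O_RDWR);

        if (fd != -1) {
            /* If the shared, non-blocking flock fails, a web commit holds
             * the lock and will do the refresh itself; nothing to do, so
             * skip starting perl entirely. */
            if (flock(fd, LOCK_SH | LOCK_NB) != 0 && errno == EWOULDBLOCK)
                return 0;

            /* Not locked (or the lock isn't visible to flock because perl
             * used a different primitive): let the Perl code decide. */
            flock(fd, LOCK_UN);
            close(fd);
        }

        execl("/usr/lib/ikiwiki/ikiwiki", "ikiwiki", "--post-commit",
              (char *) NULL);
        perror("exec");
        return 1;
    }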