That resulted in double encoded display when using perl's stub
readline module. Apparently that module unconditionally upgrades
text to utf8, in a quite braindead way.
(Term::ReadLine::Gnu::Perl worked ok.)
It will set up an ikiwiki instance tuned for use in blogging.
As part of this change, move the example sites into /usr/share/ikiwiki so
they are available even if docs are not installed.
Asking for only the head worked in my tests, but I've found a site where it
didn't -- apparently ikiwiki didn't get a chance to do or finish the
refresh when HEADed. Getting the whole url, waiting for ikiwiki to finish,
avoided the update problem.
* repolist: New plugin to support the rel=vcs-* microformat.
* goodstuff: Include repolist by default. (But it does nothing until
configured with the repository locations.)
It seems to be a failing of i18n in unix that the translation stops at the
commands and the parameters to them, and ikiwiki is no exception with its
currently untranslated directives. So the little bit that's translated sticks
out like a sore thumb. It also breaks building of wikis if a different locale
happens to be set.
I suppose the best thing to do is either give up on the localisation of this
part completely, or make it recognise English in addition to the locale. I've
tentatively chosen the latter.
(Also accept 1 and 0 as input.)
inline has a format hook that is an optimisation hack. Until this hook
runs, the inlined content is not present on the page. This can prevent
other format hooks, that process that content, from acting on inlined
content. In bug #509710, we discovered this happened commonly for the
embed plugin, but it could in theory happen for many other plugins (color,
cutpaste, etc) that use format to fill in special html after sanitization.
The ordering was essentially random (hash key order). That's kind of a good
thing, because hooks should be independent of other hooks and able to run
in any order. But for things like inline, that just doesn't work.
To fix the immediate problem, let's make hooks able to be registered as
running "first". There was already the ability to make them run "last".
Now, this simple first/middle/last ordering is obviously not going to work
if a lot of things need to run first, or last, since then we'll be back to
being unable to specify ordering inside those sets. But before worrying about
that too much, and considering dependency ordering, etc, observe how few
plugins use last ordering: Exactly one needs it. And, so far, exactly one
needs first ordering. So for now, KISS.
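For illustration, registering a hook that must run first looks something like
this (the plugin id and hook body here are made up, not the real inline or
embed code):

    hook(type => "format", id => "myplugin", first => 1, call => sub {
        my %params = @_;
        # fill in special html before other format hooks see the content
        return $params{content};
    });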
Another implementation note: I could have sorted the plugins with
first/last/middle as the primary key, and plugin name secondary, to get a
guaranteed stable order. Instead, I chose to preserve hash order. Two
opposing things pulled me toward that decision:
1. Since hash order is randomish, it will ensure that no accidental
ordering assumptions are made.
2. Assume for a minute that ordering matters a lot more than expected.
Drastically changing the order a particular configuration uses could
result in a lot of subtle bugs cropping up. (I hope this assumption is
false, partly due to #1, but can't rule it out.)
I see that this plugin's lists of safe content are already well out of
date, and htmlscrubber_skip offers a non-whitelist-based approach, so let's
deprecate this plugin for 3.0.
People seem to expect to be able to enter www.foo.com and get away with it.
The resulting my.wiki/www.foo.com link was not ideal.
To fix it, use URI::Heuristic to expand such things into a real url. It
even looks up hostnames in the DNS if necessary.
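Roughly (a sketch, not the exact call site in ikiwiki):

    use URI::Heuristic;

    my $input = "www.foo.com";
    # expands to "http://www.foo.com" instead of a bogus my.wiki/www.foo.com link
    my $url = URI::Heuristic::uf_uristr($input);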
Since ikiwiki uses open :utf8, perl assumes that files contain valid utf-8.
If it turns out to be malformed it may later crash while processing strings
read from them, with 'Malformed UTF-8 character (fatal)'.
As at least a quick fix, use utf8::valid as soon as data is read, and if
it's not valid, call encode_utf8 on the string, thus clearing the utf-8
flag. This may cause follow-on encoding problems, but will avoid this
crash, and the input file was broken anyway, so GIGO is a reasonable
response. (I looked at calling decode_utf8 after, but it seemed to cause
more trouble than it was worth. BTW, use open ':encoding(utf8)' avoids
this problem, but the corrupted data later causes Storable to crash when
writing the index.)
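A sketch of the guard described above ($content stands for data just read
from a file opened with :utf8):

    use Encode;

    if (! utf8::valid($content)) {
        # malformed utf-8: clear the flag and treat it as raw octets (GIGO)
        $content = encode_utf8($content);
    }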
This is a quick fix, clearly imperfect:
- It might be better to explicitly call decode_utf8 when reading files,
rather than using the IO layer.
- Data read other than by readfile() can still sneak in bad utf-8. While
ikiwiki does very little file input not using it, stdin for the CGI
would be one way.
This is necessary so that things that fork to the background,
like pinger, and inline ping, don't block other cgis from running.
Note that websetup also calls unlockwiki, before refreshing / rebuilding
the wiki. It makes perfect sense for that not to block other cgis.
* Stop busy-waiting in lockwiki, as this could delay ikiwiki from waking up
for up to one second. The bailout code is no longer needed.
* Remove support for unused optional wait parameter from lockwiki.
This fixes a problem exposed by the recent change to tags
(a2839de936). That recorded tag links as
absolute by including a leading slash in the link. The same could also be
done with an absolute wikilink.
In either case, link() would not match such links, unless the leading slash
was included in the link to match. But that's not right, because pagespecs
match absolute by default. So strip the leading slash.
Note that to keep any existing `link(/foo)` pagespecs working after this
change, the leading slash is removed from there, too.
The syslog value from the setup file is purposefully ignored when doing
ikiwiki -setup, so that it will output to stdout (while generating wrappers
that do use the syslog). But that caused -dumpsetup to not preserve
the syslog value from the setup file.
This speeds up web commits by 1/4th of a second or so, since perl does
not have to start up for the post commit hook.
perl's locking is completely FuBar, since it's impossible to tell what perl
flock() really does, and thus difficult to write code in other languages
that interoperates with perl's locking. (Let alone interoperating with
existing fcntl locking from perl...)
In this particular case, I think I was able to find a way to avoid the
insanity, mostly. The C code does a true flock(2), and if perl is using an
incompatible lock method that does not use the same locking primitive at
the kernel level, then the C code's test will fail, and it will go ahead
and run the perl code. Then the perl code's test will test the right thing.
On Debian, at least lately, perl's flock() does a true flock(2), so the
optimisation does work.
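On the perl side, the test boils down to a non-blocking flock; this sketch is
illustrative only ($lockfile and the exit-0 behaviour are assumptions, not the
exact wrapper code):

    use Fcntl qw(:flock);

    open(my $fh, '>>', $lockfile) || die "cannot open $lockfile: $!";
    if (! flock($fh, LOCK_EX | LOCK_NB)) {
        # the lock is already held; the wrapper's C code makes the same
        # flock(2) test before bothering to start perl at all
        exit 0;
    }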
Add an inject function, that can be used by plugins that want to replace
one of ikiwiki's functions with their own version. (This is a scary thing
that grubs through the symbol table, and replaces all exported occurrences
of a function with the injected version.)
external: RPC functions can be injected to replace exported functions.
Removed the stupid displaytime hook, and use injection instead.
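The symbol table grubbing amounts to something like this (simplified sketch;
my_displaytime is a made-up replacement, and the real inject() also walks
every package that imported the function):

    no warnings 'redefine';
    no strict 'refs';
    # overwrite IkiWiki::displaytime (and, in the real inject(), every
    # exported copy of it) with our replacement
    *{"IkiWiki::displaytime"} = \&my_displaytime;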
The html links already went there, but internally the links were not
recorded as absolute, which could cause confusing backlinks etc.
For example, with tagbase=tags, if blog/tags/bar existed and blog/foo was
tagged bar, it would link to /tags/bar. But, the link would be recorded
simply as a link to tags/bar, and so later blog/tags/bar would appear to
have the backlink.
* Add an underlay for javascript, and add ikiwiki.js containing some utility
code.
* toggle: Stop embedding the full toggle code on each page using it, and
move it to toggle.js in the javascript underlay.
This is the easy part of supporting foo/index.mdwn sources for page foo.
Note that if foo.mdwn exists too, there will be a warning about multiple
sources for the same page, and which is used is indeterminate.
indexpages should also cause web-based editing to create index source pages
by default; this and other fallout of the option is not yet implemented.
Upgrades to the new index format should be transparent.
The version field is 3, because 1 was the old textual index, 2 was the
pre-versioned format.
This also includes some efficiency improvements to index loading, by
not copying a hash and using a reference.
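The loading tweak is simply this sort of change (illustrative variable names):

    my %d = %{$index->{$page}};   # before: copies each page's hash
    my $d = $index->{$page};      # after: just keeps a reference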
* htmltidy: Avoid returning undef if tidy fails. Also avoid returning the
untidied content if tidy crashes. In either case, it seems best to tidy
the content to nothing.
* htmltidy: Avoid spewing tidy errors to stderr.
Seems that the problem is that once the \nnn coming from git is converted
to a single character, decode_utf8 decides that this is a standalone
character, and not part of a multibyte utf-8 sequence, and so does nothing.
I tried playing with the utf-8 flag, but that didn't work. Instead, use
decode("utf8"), which doesn't have the same qualms, and successfully
decodes the octets into a utf-8 character.
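So the decoding step becomes something like (sketch; $file is a filename
parsed out of git-log output):

    use Encode;

    # after converting git's \nnn escapes to octets (see the snippet below),
    # decode them explicitly; decode_utf8() left them untouched
    $file = decode("utf8", $file);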
Rant:
Think for a minute about fact that any and every program that parses git-log,
or git-show, etc output to figure out what files were in a commit needs to
contain this snippet of code, to convert from git-log's wacky output to a
regular character set:
    if ($file =~ m/^"(.*)"$/) {
        ($file=$1) =~ s/\\([0-7]{1,3})/chr(oct($1))/eg;
    }
(And it's only that "simple" if you don't care about filenames with
embedded \n or \t or other control characters.)
Does that strike anyone else as putting the parsing and conversion in the
wrong place (ie, in gitweb, ikiwiki, etc, etc)? Doesn't anyone who actually
uses git with utf-8 filenames get a bit pissed off at seeing \xxx\xxx
instead of the utf-8 in git-commit and other output?
I saw this in the wild, apparently a page was not present on disk, but was
in the aggregate db, and not marked as expired either. Not sure how that
happened, but such pages should get marked as expired since they have an
effectively zero ctime.
* edittemplate: Default new page file type to the same type as the template.
(willu)
* edittemplate: Add "silent" parameter. (Willu)
* edittemplate: Link to template, to allow creating it. (Willu)
Setup output is once again output to stdout in this case.
Implemented by stashing the verbose/syslog values set in the setup file,
and using those values in the generated wrappers, but not allowing them to take
effect during the setup operation itself, so that command-line options,
appearing before or after -setup, are honored.
Also, some cleanups to how %config is generated for wrappers, removing some
fields that do not need to be recorded inside the wrapper.
* goodstuff: Remove otl plugin from the bundle since it needs a significant
external dependency and is not commonly used. If you use otl, make sure
you explicitly enable it now.
* goodstuff: Add more, progress, and table plugins to the bundle.
* teximg: The prefix is configurable, and has changed to not include the
nonstandard mhchem by default. (willu)
* teximg: dvipng is used if available to render images. Its output is
antialiased and better than dvips. If not available, the old dvips+convert
chain will be used. (willu)
* Drop suggests on texlive-science, add suggests on dvipng.
The use of $dummy was not sufficient, because it only stuck around for the
first element after a dummy parent, and was then lost. Instead, use a
$addparent that contains the actual dummy parent, so it can be compared
with the new item to see if we're still under that parent or have moved to
another one.
The locked pages configuration is moving to a locked_pages option in the
setup file, and the allowed attachments configuration to
allowed_attachments. The admin prefs page can still be used for these, but
that's deprecated and will only be shown if there's currently a value.
Implemented for git and svn so far.
Note that rcs_commit_staged does assume that the rcs has the ability to
"stage" multiple changes for a later commit. Support for this varies, but
all we really care about is staging removals and renames, which, AFAIK, all
modern rcs's support.
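For git, a sketch of what rcs_commit_staged could look like (the parameter
names and error handling are assumptions, not the exact ikiwiki code):

    sub rcs_commit_staged {
        my ($message, $user, $ipaddr) = @_;
        # commit whatever removals and renames have already been staged
        if (system('git', 'commit', '--quiet', '-m', $message) != 0) {
            warn "git commit failed";
        }
        return undef;   # undef means no conflict
    }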
can be used to avoid a security check that is a good safe default, but
problematic overkill in some situations.
I decided to underdocument this, because the option looks ugly, and I don't
want people randomly turning it on because it looks like a good idea. So if
you need it, you'll get an error message mentioning how to fix it.
The committer's email address is not used (because leaking email addresses
is not liked by many users). Closes: #451023
A "Web-commit" trailer is added, to allow telling the difference between
web commits and direct commits.
Smileys need to be double-escaped to work, since the smiley plugin runs as
a sanitize hook, and markdown helpfully removes one level of escapes first.
There were some bugs in the smiley handling code that made escaped smileys
still be expanded. After unescaping a smiley, it needed to move pos forward
past it or the next pass would expand it.
Also, once the m//g got to the end, it seemed to loop back through and make
one more pass (a difference in perl 5.10's regexp engine?). I observed that
pos was undefined when this happened, so added a `last unless defined pos`.
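Roughly, the loop now works like this (an illustrative regexp and handling,
not the plugin's full smiley table):

    while ($content =~ m/(\\?)(:-\)|:-\()/g) {
        last unless defined pos $content;
        next unless $1;        # unescaped smileys get expanded as usual
        my $end = pos $content;
        # strip the escape, then move pos forward past the smiley so the
        # next pass does not expand it
        substr($content, $end - length($1) - length($2), length($1), '');
        pos($content) = $end - length($1);
    }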
This is truly horribly disgusting. CGI::tmpFileName, in current perls, is
an undocumented function (which should be a clue..) that takes the original
filename of an uploaded attachment, and returns the name of the tempfile
that CGI has stored it in.
In old perls, though, CGI::tmpFileName does not take a filename. It takes
a key from the object's {'.tmpfiles'} hash. This key is something
crazy like '*Fh::fh00001group' -- apparently the stringification of a
filehandle object.
Just to add to the fun, tmpFileName doesn't take the key, it expects a
reference to the key. Argh?!
But the fun doesn't stop there, because in perl 5.8, CGI.pm is also broken
in two other ways. The upload() method is supposed to return a filehandle
to the temp file. It doesn't. The param() method is supposed to return
a filehandle to the temp file, that stringifies to the original filename.
It returns just the original filename, no filehandle.
Combine all these bugs, and you end up with this disgusting commit. Since
I have no way to get the filehandle, I *need* to get the tempfile name.
If I had the filehandle, I could probably pass it into tmpFileName, and
it might stringify to the right key name. But I don't, so the only way to
determine the key is to grub through the .tmpfiles hash ourselves.
And finally, once the temp file name is discovered, a filehandle can finally
be obtained by (re)opening it.
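The resulting grubbing looks roughly like this (sketch based on the
description above; $q is the CGI object):

    my $tmpfile;
    foreach my $key (keys %{$q->{'.tmpfiles'}}) {
        # old CGI.pm wants a reference to its internal key, not a filename
        $tmpfile = $q->tmpFileName(\$key);
        last if defined $tmpfile && length $tmpfile;
    }
    open(my $fh, '<', $tmpfile) || die "failed to reopen attachment tempfile: $!";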
I recommend that this commit be reverted when perl 5.8 is a mercifully
faded memory.
I'm really, really, really glad I'm actually being paid for working on
this right now!
So the problem is that ikiwiki would generate a relative link like
href="colon:problem", which web browsers treat as being in the "colon:"
uri scheme.
The best fix seems to be to make url beautification fix this, by slapping
a "./" in front.
* The editpage form now uses the raw page name, not the page title, in its
'page' cgi parameter. Using the title was ambiguous and made it
impossible to tell between some pages, like "foo/bar" and "foo__47__bar",
sometimes causing the wrong page to be edited.
* This change means that some edit links need to be updated.
Force a rebuild on upgrade to this version.
* Above change also allowed really fixing escaped slashes from the blogpost
form.
* toc: Revert change in 2.45 that made it run at sanitize time. This breaks
use of toc in a sidebar.
* Call format hooks when generating page previews, thus fixing toc display
there, as well as fixing inlines to again display in page previews, since
inline has started using format hooks. This also allows several other things,
like embed, that use format hooks, to work during page preview time.
* Format hooks should not rely on getting an entire html document, as they
will only get the body during page preview.
* toggle: Deal with preview mode when adding javascript.
This special case crops up when generating the parentlink to the toplevel
index page. urlto("") had been generating a link to "./" (or "../" etc)
for that, which is fine, if the web server redirects that to the toplevel
index.html. It's less fine if there is no web server.
I actually ran into the problem first when using gopher. (Yes, yes, don't
laugh.. see upcoming tip.) But it also crops up when browsing local wiki
files.
Of course, the index.html is stripped back off if usedirs is enabled.
Because the search plugin needed it, also because it's one of the few
plugins that didn't already have it.
I also considered adding it to htmlize, but I really cannot imagine caring
what the destpage is when htmlizing. (I'll probably be proven wrong later.)
This fixes a problem sgran saw on alioth. Apparently nss-db sets errno to
ENOENT as a side effect trying to read an optional file, but succeeds
anyway. Then, somehow, errno remains set across the library calls made by
$).
So unset it first as a workaround; there's probably a nss-db, libc, and/or
perl bug underneath.
Something has changed in CGI.pm in perl 5.10. It used to not care
if STDIN was opened using :utf8, but now it'll mis-encode utf-8 values
when used that way by ikiwiki. Now I have to binmode(STDIN) before
instantiating the CGI object.
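So the CGI setup now amounts to (sketch):

    use CGI;

    binmode(STDIN);    # undo the :utf8 layer before CGI reads the request body
    my $q = CGI->new;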
In 57bba4dac1, I changed from decoding
CGI::Formbuilder fields to utf-8, to decoding cgi parameters before setting
up the form object. As of perl 5.10, that approach no longer has any effect
(reason unknown). To get correctly encoded values in FormBuilder forms,
they must once again be decoded after the form is set up.
As noted in 57bba4da, this can cause one set of problems for
formbuilder_setup hooks if decode_form_utf8 is called before the hooks, and
a different set if it's called after. To avoid both sets of problems, call
it both before and after. (Only remaining problem is the sheer ugliness and
inefficiency of that..)
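In outline (sketch; the hook dispatch is simplified and omits some parameters):

    decode_form_utf8($form);
    run_hooks(formbuilder_setup => sub {
        shift->(form => $form, cgi => $q, session => $session);
    });
    decode_form_utf8($form);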
I think that these changes will also work with older perl versions, but I
haven't checked.
Also, in the case of the poll plugin, the cgi parameter needs to be
explicitly decoded before it is used to handle utf-8 values. (This may have
always been broken, not sure if it's related to perl 5.10 or not.)
* Add a Bundle::Ikiwiki to the source for use with CPAN to install *all*
the modules ikiwiki can use.
* Add a cpan directory containing a CPAN::MyConfig that can ease use of
CPAN to install in a home directory on shared hosting providers.
* With these changes, it's pretty easy to install onto nearlyfreespeech.net
and probably other shared hosting providers like dreamhost. Added
a tip page documenting the process for nearlyfreespeech.