This better defines what the filter hook is passed: only the raw,
complete text of a page, not some snippet or data read in from an
unrelated template.
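For illustration, a minimal sketch of a filter hook under this model (the
plugin id and the transformation are invented; the hook is handed the usual
page/destpage/content named parameters and returns the filtered text):

    package IkiWiki::Plugin::examplefilter;

    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        hook(type => "filter", id => "examplefilter", call => \&filter);
    }

    sub filter (@) {
        my %params = @_;
        # $params{page} is the page the text belongs to;
        # $params{content} is its raw, complete source text.
        my $content = $params{content};
        $content =~ s/\r\n/\n/g;    # e.g. normalise line endings
        return $content;
    }

    1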
Several plugins that filtered text originating from an (already
filtered) page were modified not to do that. Note that this was not
done very consistently before; other plugins that receive text from a
page called preprocess on it without first calling filter.
The template plugin gets text from elsewhere, and was also changed not to
filter it. That leads to one known regression -- the embed plugin cannot
be used to embed stuff in templates now. But that plugin is deprecated
anyway.
Later we may want to increase the coverage of what is filtered. Perhaps
a good goal would be to allow writing a filter plugin that filters
out unwanted words, from any input. We're not there yet; not only
does the template plugin load unfiltered text from its templates now,
but so can the table plugin, and other plugins that use templates (like
inline!). I think we can cross that bridge when we come to it. If I wanted
such a censoring plugin, I'd probably make it use a sanitize hook instead,
for the better coverage.
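As a hedged sketch of what such a plugin might look like (plugin id and word
list are invented; a sanitize hook also receives page/destpage/content and
returns the modified content):

    package IkiWiki::Plugin::censor;

    use warnings;
    use strict;
    use IkiWiki 3.00;

    my @badwords = qw(foo bar);    # placeholder word list

    sub import {
        hook(type => "sanitize", id => "censor", call => \&sanitize);
    }

    sub sanitize (@) {
        my %params = @_;
        my $content = $params{content};
        foreach my $word (@badwords) {
            $content =~ s/\Q$word\E/[censored]/gi;
        }
        return $content;
    }

    1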
For now I am concentrating on the needs of the two non-deprecated users
of filter. This should fix bugs/po_vs_templates, and it probably fixes
an obscure bug around txt's use of filter for robots.txt.
Renamed usershort => nickname.
Note that this means existing user login sessions will not have the nickname
recorded, and so it won't be used for those.
Using named parameters for these is overdue. Passing the session in a
parameter instead of passing username and IP separately will later allow
storing other session info, like username or part of the email.
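The functions themselves are not named above, so purely as a hedged
illustration (an rcs_commit-style call; the exact names and parameters here
are assumptions):

    # Hypothetical "before": username and IP passed positionally.
    rcs_commit($file, $message, $rcstoken,
        $session->param("name"), $ENV{REMOTE_ADDR});

    # "After": named parameters, with the session carrying the user info.
    rcs_commit(
        file    => $file,
        message => $message,
        token   => $rcstoken,
        session => $session,
    );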
Note that these functions are not part of the exported API,
and the prototype change will catch (most) skew, so I am not changing
API versions. Any third-party plugins that call them will need to be
updated, though.
Everywhere that REMOTE_ADDR was used, a session object is available, so
instead use its remote_addr method.
In IkiWiki::Receive, stop setting a dummy REMOTE_ADDR.
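Schematically (variable names are illustrative):

    # Before: the client address came straight from the environment.
    my $ip = $ENV{REMOTE_ADDR};

    # After: ask the CGI::Session object, which recorded the address
    # when the session was created.
    $ip = $session->remote_addr();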
Note that it's possible for a session cookie to be obtained using one IP
address, and then used from another IP. In this case, the first IP will now
be used. I think that should be ok.
A short story:
Once there was a unicode string, let's call him Srcdir.
Along came a crufty old File::Find, who went through a tree and pasted each
of the leaves in turn onto Srcdir. But this 90's relic didn't decode the
leaves -- despite some of them using unicode! Poor Srcdir, with these
leaves stuck on him, tainted them with his nice unicode-ness. They didn't
look like leaves at all, but instead garbage.
(In other words, perl's unicode support sucks mightily, and drives
us all to drink and bad storytelling. But we knew that..)
So, srcdir is not normally flagged as unicode, because typically it's pure
ascii. And in that case, things work ok; File::Find finds filenames, which
are not yet decoded to unicode, and appends them to the srcdir, and then
decode_utf8 happily converts the whole thing.
But, if the srcdir does contain utf8 characters, that breaks. Or, if a YAML
setup file is used, YAML::Syck's ImplicitUnicode sets the unicode flag of
*all* strings, even those containing only ascii. In either case, srcdir
has the unicode flag set; a non-decoded filename is appended, and the flag
remains set; and decode_utf8 sees the flag and does *nothing*. The result
is that the filename is not decoded, so looks valid and gets skipped.
File::Find only sticks the directory and filenames together in no_chdir
mode .. but we need that mode for security. In order to retain the
security, and avoid the problem, I made it not pass srcdir to File::Find.
Instead, chdir to the srcdir, and pass ".". Since "." is ascii, the problem
is avoided.
Note that chdir srcdir is safe because we check for symlinks in the srcdir
path.
Note that it takes care to chdir back to the starting location, because
the user may have specified relative paths, and staying in the srcdir
could break them. A relative path could even be specified for an underlay
dir, so it chdirs back after each one.
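A sketch of the approach (error handling and the real per-file work trimmed;
$srcdir and the callback body are placeholders):

    use File::Find;
    use Cwd;
    use Encode;

    my $origdir = getcwd();    # remember the starting location
    chdir($srcdir) || die "chdir $srcdir: $!";

    find({
        no_chdir => 1,    # needed for the security checks
        wanted => sub {
            # $File::Find::name is now "./path/to/file"; since "." is
            # plain ascii, decode_utf8 decodes the whole thing cleanly.
            my $file = decode_utf8($File::Find::name);
            # ... process $file ...
        },
    }, '.');

    chdir($origdir) || die "chdir $origdir: $!";    # relative paths keep working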
Problem is that by the time rendering calls render_dependent, %pagesources
has had deleted files removed from it. So match_comment's lookup of
files in there to see if they had the _comment extension failed.
I had to introduce a hash that temporarily holds filenames of deleted pages
to fix this.
Note that unlike comment(), internal() had avoided this pitfall by being
defined to match both internal and non-internal pages.
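Roughly, the idea (the hash and variable names here are invented for
illustration, not necessarily the ones used):

    # Filled in early in the refresh with the sources of pages being
    # deleted, and cleared again once rendering is done.
    our %deleted_pagesources;

    # In match_comment-style code, fall back to it when the page has
    # already been dropped from %pagesources:
    my $source = exists $IkiWiki::pagesources{$page}
        ? $IkiWiki::pagesources{$page}
        : $deleted_pagesources{$page};
    my $type = defined $source ? IkiWiki::pagetype($source) : undef;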
If the site is configured to allow comments on *, then the comment post
interface was being added to cgi pages like signin and prefs. This fixes it
without requiring more page.tmpl changes. The pagetemplate hook is called by
misctemplate with an empty page name for dynamic pages.
Instead, add a custom do=commentsignin, that calls cgi_signin.
This allows a plugin to inject a custom cgi_signin, that uses a different
do= parameter, and have it be used consistently. (This was the only
place to hardcode a link to do=signin.)
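A sketch of that wiring (simplified; cgi_signin is the real function named
above, the rest is illustrative):

    # In the plugin's import:
    hook(type => "sessioncgi", id => "comments", call => \&sessioncgi);

    sub sessioncgi ($$) {
        my $cgi = shift;
        my $session = shift;

        if (($cgi->param('do') || '') eq 'commentsignin') {
            # Show the normal signin form; a plugin that injects its own
            # cgi_signin gets used here automatically.
            IkiWiki::cgi_signin($cgi, $session);
            exit;
        }
    }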
Test isinternal first, because match_glob with internal => 1 also returns
non-internal pages that match. This order should also be faster.
Remove test to see if pagesources is set. isinternal will not succeed if it
is not.
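Schematically (simplified; the real pagespec function does more):

    # Cheap check first; it also rules out the non-internal pages that
    # match_glob with internal => 1 would have let through.
    return IkiWiki::FailReason->new("$page is not internal")
        unless IkiWiki::isinternal($page);

    # Only now is %pagesources consulted; no separate "is it set?" test
    # is needed, since isinternal already fails when it is not.
    my $type = IkiWiki::pagetype($IkiWiki::pagesources{$page});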
* comments: Comments pending moderation are now stored in the srcdir
alongside accepted comments, but with a `._comment_pending` extension.
* This allows easier by-hand moderation, as the "_pending" need
only be stripped off and the comment be committed to version control.
* The `comment_pending()` pagespec can be used to match such unmoderated
comments, which makes it easy to add a feed of them, or a counter
indicating how many there are (see the example after this list).
* Belatedly added a `comment()` pagespec.
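For instance, with the inline and pagecount plugins enabled, a moderation
queue page could use something along these lines (directive parameters are
just examples):

    There are [[!pagecount pages="comment_pending(*)"]] comments awaiting
    moderation.

    [[!inline pages="comment_pending(*)" archive=yes show=10]]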
Note that I put comment-header in a <header> despite it being
below the comment. Using a <footer> would be confusing given
the class name. Also, the content is semantically closer to
a header than a footer.
This entailed changing template_params; it no longer takes the template
filename as its first parameter.
Add template_depends to the API and replace calls to template() with
template_depends() in appropriate places, where a dependency should be
added on the template.
Other plugins don't use template(), so will need further work.
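For instance, in a plugin whose output depends on a template's content, the
change looks roughly like this (the template name is just an example):

    # Before: no dependency is added; editing the template does not
    # rebuild pages that use it.
    my $template = template("comment.tmpl");

    # After: $page now also depends on the template file.
    $template = template_depends("comment.tmpl", $page);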
Also, includes are disabled for security. Enabling includes only when using
templates from the templatedir would be nice, but would add a lot of
complexity to the implementation.
Many calls to file_pruned were incorrectly passing it 2 parameters.
In cases where the filename being checked is relative to the srcdir,
that is not needed.
Absolute filenames are now pruned as well. (This won't work with the
2-parameter call style.)
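That is, roughly:

    # $file is relative to the srcdir; the one-parameter form suffices,
    # and it also prunes absolute filenames (which the two-parameter
    # call style does not handle).
    next if file_pruned($file);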
This makes them consistent with the rest of the meta keys. A wiki rebuild
will be needed on upgrade to this version; until the wiki is rebuilt,
double-escaping will occur in the titles of pages that have not changed.
The meta title data set by comments needs to be encoded the same way that
meta encodes it. (NB The security implications of the missing encoding
are small.)
Note that meta's encoding of title, description, and guid data, and not
other data, is probably a special case that should be removed. Instead,
these values should be encoded when used. I have avoided doing so here
because that would mean forcing a wiki rebuild on upgrade to have the data
consistently encoded.
This prevented comments containing some utf-8, including the euro sign, from
being submitted. Since md5_hex is a C implementation, the string has to be
converted from perl's internal encoding to utf-8 when it is called. Some
utf-8 happened to work before, apparently by accident.
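The fix follows the usual pattern for XS digest functions (sketch):

    use Digest::MD5 qw(md5_hex);
    use Encode;

    # md5_hex operates on bytes; hand it utf-8 bytes rather than a string
    # in perl's internal encoding, or it dies on wide characters.
    my $digest = md5_hex(Encode::encode_utf8($content));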
Note that this will change the checksums returned.
unique_comment_location is only used when posting comments, so the checksum
does not need to be stable there.
I only changed page_to_id for completeness; it is passed a comment page
name, and they can currently never contain utf-8.
In teximg, the bug could perhaps be triggered if the tex source contained
utf-8. If that happens, the checksum will change, and some extra work might
be performed on upgrade to rebuild the image.
This was not doable before, but when I added transitive dependency handling
in the big dependency rewrite, it became possible to include a comment
count when inlining.
This also improves the action link when a page has no comments. It will
link directly to the cgi to allow posting the first comment. And if the page
is locked to prevent posting new comments, the link is no longer shown.
Rationale: Comments need to be user-editable so that they can be posted
via git commit etc.
The _comment directive is still supported, for back-compat.
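For reference, a by-hand comment file looks roughly like this (field values
are made up; the exact set of fields may vary):

    [[!comment format=mdwn
     username="somebody"
     subject="An example comment"
     date="2010-01-01T12:00:00Z"
     content="""
    Body of the comment, in the format named above.
    """]]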
The munged ids were looking pretty nasty, and were not completely guaranteed
to be unique. So an md5sum seems like a better approach. (Would have used
sha1, but md5 is in perl core.)
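Roughly (a sketch; the id prefix and function shape are illustrative):

    use Digest::MD5 qw(md5_hex);
    use Encode;

    # Derive a stable, unique html id from a comment page name.
    sub page_to_id ($) {
        my $page = shift;
        return "comment-".md5_hex(Encode::encode_utf8($page));
    }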