The intention was that signed-in users (for instance via httpauth,
passwordauth or openid) are already adequately identified, but
there's nothing to indicate who an anonymous commenter is unless
their IP address is recorded.
I want to write my blog posts in a convenient format (Emacs org mode)
but do not want commenters to be able to use this format for security
reasons. This patch allows configuring which formats are allowed for
writing comments.
Effectively, it restricts the formats enabled with add_plugin to those
mentioned in comments_allowformats. If this is empty, all formats are
allowed, which is the behavior without this patch.
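For example, to allow only Markdown and plain text in comments, a setup
file might contain something like this (a sketch; the exact value syntax
depends on the setup file format):

    # formats allowed for writing comments; leave empty to allow all
    comments_allowformats => 'mdwn txt',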
Previously, prune("wiki/srcdir/sandbox/test.mdwn") could delete srcdir
or even wiki, if they happened to be empty. This is rarely what you
want: there's usually some base directory (destdir, srcdir, transientdir
or another subdirectory of wikistatedir) beyond which you do not want to
delete.
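A sketch of the shape of the fix, with a new optional second argument
(the parameter name is illustrative):

    use File::Basename qw(dirname);

    # delete $file, then remove newly-empty parent directories, but
    # never walk up to or past the optional $up_to base directory
    sub prune ($;$) {
        my ($file, $up_to) = @_;
        unlink($file);
        my $dir = dirname($file);
        while ((! defined $up_to || $dir =~ m{^\Q$up_to\E/}) && rmdir($dir)) {
            $dir = dirname($dir);
        }
    }

Called as prune("$config{srcdir}/$file", $config{srcdir}), this cleans up
empty directories inside srcdir but stops before deleting srcdir itself.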
Technically, when the user does this, a passwordless account is created
for them. The notify mails include a login url, and once logged in that
way, the user can enter a password to get a regular account (although
one with an annoying username).
All of this requires the passwordauth plugin to be enabled. A future
enhancement could be to split the passwordless user concept out into a
separate plugin.
strftime is a C function; it does not return decoded utf8.
Several places in ikiwiki manually decoded its output, but at least two
forgot to.
Also, strftime might not return even encoded utf8 if LC_TIME is set
to a non-utf8 value, so went ahead and supported decoding from whatever
encoding it uses.
The remaining direct calls to strftime() are all ones that first set
LC_TIME=C, in order to get times that are not for human display.
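A sketch of the decoding wrapper (assuming I18N::Langinfo is available
to report the locale's codeset):

    use POSIX qw(strftime);
    use I18N::Langinfo qw(langinfo CODESET);
    use Encode qw(decode);

    # strftime returns bytes in the LC_TIME locale's encoding; decode
    # from that codeset, whatever it is (utf8 or otherwise)
    sub strftime_utf8 {
        return decode(langinfo(CODESET()), strftime(@_));
    }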
There is a tension between looking up the avatar at post time
and build time. I have not yet decided which is better.
Lookup at build time has the benefit that if a user changes their
email address, or sets up their own federated libravatar
server, on rebuild their new avatar will show up.
It also allows getting a https version of the avatar easily if
the site was using http but was changed to use https.
And it can look up avatars for posts that have already been made.
Which is a nice thing, especially as we roll this out, eh?
But it has a drawback: it depends on the sessiondb contents
for emails, and so rebuilding a site w/o that will lose info.
And, it means dns lookups every time a comment is rendered. A page
with a lot of comments on it would render them all whenever another is
posted or the page is changed, and that could significantly slow things
down. (This could be ameliorated by caching the lookups.)
Since I'm undecided, I have moved it into a function that could be called
either way. Currently looking up only at post time.
Don't fail if libravatar fails for some reason; a tolerant lookup is
sketched after this list. Reasons I can think of:
* too old version to do openid lookups (fall back to email lookup)
* network problem perhaps
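A sketch of a lookup that tolerates such failures (the helper name and
session fields are assumptions; libravatar_url is Libravatar::URL's real
interface):

    use Libravatar::URL;

    # return an avatar url, or undef if lookup fails for any reason
    sub getavatar ($) {
        my $session = shift;
        my $url = eval {
            # openid lookups need a new enough Libravatar::URL
            libravatar_url(openid => $session->param("name"), https => 1);
        };
        $url = eval {
            libravatar_url(email => $session->param("email"), https => 1);
        } unless defined $url;
        return $url;    # on undef, the caller simply shows no avatar
    }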
cgitemplate is a modified misctemplate that takes an optional cgi object
and uses it to set the baseurl, and also optionally the forcebaseurl,
if a page is provided.
If no cgi object is provided, it will fall back to using $config{url}.
I expect this will only be needed in exceptional cases where
that doesn't much matter, such as cgierror().
showform uses cgitemplate, so there is no more need for showform_preview.
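A hedged sketch of usage (the argument order is an assumption, modeled
on misctemplate):

    # with a cgi object: baseurl, and forcebaseurl if a page is given,
    # come from the request
    print cgitemplate($cgi, "editing $page", $form_output, page => $page);

    # without one, e.g. from cgierror(): falls back to $config{url}
    print cgitemplate(undef, "Error", "<p>$message</p>");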
This way, do=goto will go to the page relative to
the current location, while the permalinks in feeds
will be absolute (unless no url is configured at all).
Now that page.tmpl is used for cgi, the parentlinks can be displayed
even when creating or editing a page. So it's redundant to include the
path to the page in the title; remove it.
This better defines what the filter hook is passed: only the raw,
complete text of a page. Not some snippet, or data read in from an
unrelated template.
Several plugins that filtered text that originates from an (already
filtered) page were modified not to do that. Note that this was not
done very consistently before; other plugins that receive text from a
page called preprocess on it w/o first calling filter.
The template plugin gets text from elsewhere, and was also changed not to
filter it. That leads to one known regression -- the embed plugin cannot
be used to embed stuff in templates now. But that plugin is deprecated
anyway.
Later we may want to increase the coverage of what is filtered. Perhaps
a good goal would be to allow writing a filter plugin that filters
out unwanted words, from any input. We're not there yet; not only
does the template plugin load unfiltered text from its templates now,
but so can the table plugin, and other plugins that use templates (like
inline!). I think we can cross that bridge when we come to it. If I wanted
such a censoring plugin, I'd probably make it use a sanitize hook instead,
for the better coverage.
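For the record, a minimal sketch of such a censoring plugin using the
sanitize hook (the plugin name and censored word are made up):

    package IkiWiki::Plugin::censor;
    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        hook(type => "sanitize", id => "censor", call => \&sanitize);
    }

    # sanitize sees the full htmlized content of every page, including
    # text that arrived via templates, tables, inlines, etc.
    sub sanitize (@) {
        my %params = @_;
        my $content = $params{content};
        $content =~ s/\bunwantedword\b/[censored]/gi;
        return $content;
    }

    1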
For now I am concentrating on the needs of the two non-deprecated users
of filter. This should fix bugs/po_vs_templates, and it probably fixes
an obscure bug around txt's use of filter for robots.txt.
Renamed usershort => nickname.
Note that this means existing user login sessions will not have the nickname
recorded, and so it won't be used for those.
Using named parameters for these is overdue. Passing the session in a
parameter instead of passing username and IP separately will later allow
storing other session info, like username or part of the email.
Note that these functions are not part of the exported API,
and the prototype change will catch (most) skew, so I am not changing
API versions. Any third-party plugins that call them will need to be
updated, though.
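A sketch of the shape of the change (the function and parameter names
here are illustrative, not the exact API):

    # before: username and IP passed as separate positional arguments
    rcs_commit($file, $message, $token, $session->param("name"), $ipaddr);

    # after: named parameters; the session object carries the username,
    # IP, and whatever else later proves useful (e.g. part of the email)
    rcs_commit(
        file => $file,
        message => $message,
        token => $token,
        session => $session,
    );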
Everywhere that REMOTE_ADDR was used, a session object is available, so
instead use its remote_addr method.
In IkiWiki::Receive, stop setting a dummy REMOTE_ADDR.
Note that it's possible for a session cookie to be obtained using one IP
address, and then used from another IP. In this case, the first IP will now
be used. I think that should be ok.
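The replacement is mechanical wherever a session object is in scope:

    my $ip = $session->remote_addr();   # was: $ENV{REMOTE_ADDR}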
A short story:
Once there was a unicode string, let's call him Srcdir.
Along came a crufty old File::Find, who went through a tree and pasted each
of the leaves in turn onto Srcdir. But this 90's relic didn't decode the
leaves -- despite some of them using unicode! Poor Srcdir, with these
leaves stuck on him, tainted them with his nice unicode-ness. They didn't
look like leaves at all, but instead garbage.
(In other words, perl's unicode support sucks mightily, and drives
us all to drink and bad storytelling. But we knew that..)
So, srcdir is not normally flagged as unicode, because typically it's pure
ascii. And in that case, things work ok; File::Find finds filenames, which
are not yet decoded to unicode, and appends them to the srcdir, and then
decode_utf8 happily converts the whole thing.
But, if the srcdir does contain utf8 characters, that breaks. Or, if a
YAML setup file is used, YAML::Syck's ImplicitUnicode sets the unicode
flag of *all* strings, even those containing only ascii. In either case,
srcdir
has the unicode flag set; a non-decoded filename is appended, and the flag
remains set; and decode_utf8 sees the flag and does *nothing*. The result
is that the filename is not decoded, so it looks invalid and gets skipped.
File::Find only sticks the directory and filenames together in no_chdir
mode .. but we need that mode for security. In order to retain the
security, and avoid the problem, I made it not pass srcdir to File::Find.
Instead, chdir to the srcdir, and pass ".". Since "." is ascii, the problem
is avoided.
Note that chdir srcdir is safe because we check for symlinks in the srcdir
path.
Note that it takes care to chdir back to the starting location, because
the user may have specified relative paths, and staying in the srcdir
would break those. A relative path could even be specified for an
underlay dir, so it chdirs back after each.
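A sketch of the resulting pattern (assuming ikiwiki's usual
$config{srcdir}):

    use File::Find;
    use Encode qw(decode_utf8);
    use Cwd qw(getcwd);

    my @files;
    my $origdir = getcwd();
    chdir($config{srcdir}) || die "chdir: $!";
    find({
        no_chdir => 1,  # kept for security; wanted() sees whole paths
        wanted => sub {
            # $File::Find::name is now "./leaf": pure ascii plus the
            # not-yet-decoded leaf, so decode_utf8 really decodes it
            push @files, decode_utf8($File::Find::name);
        },
    }, '.');
    chdir($origdir) || die "chdir: $!";     # restore for relative paths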