plovs reported a crash when templates were not installed properly,
with an unhelpful error about the template object not being defined.
I've audited all uses of template_depends() and template(), and it makes
sense for them to throw an error if the template cannot be found. All code
with a user-supplied template catches errors already, to handle template
parse failures.
It did not make sense for template_file to throw errors, as some code uses
it to probe if a template file is available.
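As a rough sketch of the two idioms (template names here are just examples, and
the assumption that template_file() returns a false value for a missing
template is inferred from the above, not checked against the code):

    # Probing: template_file() does not throw, so callers can check
    # whether a template is installed before deciding what to do.
    if (IkiWiki::template_file("page.tmpl")) {
        # the template exists; safe to go ahead and use it
    }

    # User-supplied template: template() now throws if the template is
    # missing, just as it already did on a parse failure, so wrap it.
    my $template = eval { IkiWiki::template("mycustom.tmpl") };
    if ($@) {
        # report the problem to the user instead of crashing
    }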
The HTML::Tree changelog says:
[THINGS THAT MAY BREAK YOUR CODE OR TESTS]
...
* Attribute names are now validated in as_XML and invalid names will
cause an error.
and indeed the regression tests do get an error.
With a relative xrds-location, the openid perl client module will fail.
I haven't checked the specs to see if it needs to be absolute, but all
examples I've seen are absolute, so making it absolute seems a very good idea.
I also tried setting RPC::XML::ENCODING but that did not prevent the crash,
and it seems that blogspam.net doesn't like getting xml encoded in unicode,
since comments that are normally allowed through got mis-flagged as spam
when encoded that way.
If I am not mistaken, all source files in ikiwiki are encoded in UTF-8.
Adding `\usepackage[utf8]{inputenc}` enables LaTeX to deal with the encoding.
As a consequence, some special characters like umlauts can be used in the
source code, which is useful for foreign languages.
[[!teximg code="a = b \text{ für alle } b \neq 2"]]
But some characters, »≠« for example, still cannot be used in LaTeX. One has
to use other TeX systems like XeTeX or LuaTeX, which feature native UTF-8
support, or additional nonstandard packages like uniinput [1].
I used the package `inputenc` (`texdoc inputenc`) and not `inputenx` (`texdoc
inputenx`), because I have not used `inputenx` much, and its `math` option is
not supported in Debian (and, I guess, other distributions) since `inpmath` is
not included in CTAN.
[1] http://wiki.neo-layout.org/browser/latex/Standard-LaTeX
Signed-off-by: Paul Menzel <paulepanter@users.sourceforge.net>
A missing smileys.mdwn caused the plugin to error out, interrupting the
build process. Instead, we check for the file's presence and warn if it is
missing, without erroring out, similar to what is currently done for the
shortcut plugin.
This reverts commit 3ef8864122.
Most aggregators block javascript, so it would display badly.
Need to find a way to fall back to static buttons instead.
This adds the javascript to rss feeds, which allows the buttons
to be displayed by aggregators. At least, if the aggregator does not
sanitize javascript.
The po rescan hook re-runs the scan hooks, and runs the preprocess ones in scan
mode, both on the po-to-markup converted content. This way, plugins such as meta
are given a chance to gather correct information, rather than the ugly/buggy
escaped data they would otherwise gather from unconverted PO files.
This is needed for the po plugin vs. e.g. meta titles.
In order to get rid of the ugly "rebuilding all pages to fix meta titles" thing,
Joey suggested making "po, at scan time, re-run the scan hooks, passing them
modified content (either converted from po to mdwn or with the escaped stuff
cheaply de-escaped)". This would unfortunately not work, as the meta plugin
gathers its data using the preprocess hook in scan mode: the correct data we
would have forced it to gather in po's scan hook would then be overwritten
with buggy data.
We then need a hook that runs *after* the preprocess hook has been run in scan
mode, but *before* any page rendering is started. Hence this one.
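Registering it looks like any other plugin hook. A sketch, assuming the hook
type is called "rescan" (as the first paragraph suggests) and that it is passed
the page name; the exact parameters may differ:

    package IkiWiki::Plugin::po;
    use IkiWiki 3.00;

    sub import {
        hook(type => "rescan", id => "po", call => \&rescan);
    }

    sub rescan (@) {
        my %params = @_;
        # re-run the scan hooks, and the preprocess hooks in scan mode,
        # on the po-to-markup converted content of $params{page}
    }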
The idea here is that <meta name="foo" description="bar">
can be written as [[!meta name="foo" description="bar"]].
Of course, [[!meta foo=bar]] is still supported; this new feature
provides some DWIM when trying to directly convert a meta tag into
a meta directive.
This reverts commit 4cf185e781.
That commit broke t/po.t (probably the test case is just testing too
close to the old implementation and needs correcting).
Also, we have not decided how we want to represent it yet, so I'm not
ready for this change.
Conflicts:
IkiWiki/Plugin/po.pm
doc/plugins/po.mdwn
Probably best to store it unsanitized and sanitize as needed on use.
And it already was for comments, leaving only the need to sanitize the
nickname when git committing, to ensure the email address is legal.
... after having audited the po4a Xml and Xhtml modules for security issues.
Signed-off-by: intrigeri <intrigeri@boum.org>
(cherry picked from commit a128c256a5)
This better defines what the filter hook is passed, to only be the raw,
complete text of a page. Not some snippet, or data read in from an
unrelated template.
Several plugins that filtered text that originates from an (already
filtered) page were modified not to do that. Note that this was not
done very consistently before; other plugins that receive text from a
page called preprocess on it w/o first calling filter.
The template plugin gets text from elsewhere, and was also changed not to
filter it. That leads to one known regression -- the embed plugin cannot
be used to embed stuff in templates now. But that plugin is deprecated
anyway.
Later we may want to increase the coverage of what is filtered. Perhaps
a good goal would be to allow writing a filter plugin that filters
out unwanted words, from any input. We're not there yet; not only
does the template plugin load unfiltered text from its templates now,
but so can the table plugin, and other plugins that use templates (like
inline!). I think we can cross that bridge when we come to it. If I wanted
such a censoring plugin, I'd probably make it use a sanitize hook instead,
for the better coverage.
For now I am concentrating on the needs of the two non-deprecated users
of filter. This should fix bugs/po_vs_templates, and it probably fixes
an obscure bug around txt's use of filter for robots.txt.
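To illustrate the contract, a censoring plugin like the one imagined above
would look roughly like this (plugin name and word list are made up; as noted,
a sanitize hook would actually give better coverage):

    package IkiWiki::Plugin::censor;

    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        hook(type => "filter", id => "censor", call => \&filter);
    }

    sub filter (@) {
        my %params = @_;
        # content is now always the raw, complete text of $params{page},
        # never a snippet or text read in from an unrelated template
        $params{content} =~ s/\bsomebadword\b/[censored]/g;
        return $params{content};
    }

    1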
Set it to true every time IkiWiki::filter is called on a full page's content.
This is a much nicer solution, for the po plugin, than previous whitelisting
using caller().
The protection against processing loops (i.e. the alreadyfiltered stuff) was
playing against us: the template plugin triggered a filter hook run with the
very same ($page, $destpage) arguments pair that we use to identify an already
filtered page. Processing an included template could then mark the whole
translation page as already filtered, which prevented po_to_markup from being
called on the PO content.
This commit only runs the whole PO filter logic when our filter hook is run by
IkiWiki::render, which only happens when the full page needs to be filtered.
Renamed usershort => nickname.
Note that this means existing user login sessions will not have the nickname
recorded, and so it won't be used for those.
There was some confusion about whether the filename was
relative to the srcdir or not. Some test cases, and the bzr
plugin, assumed it was relative to the srcdir; almost everything else
assumed it was absolute.
Changed it to relative, for consistency with the rest
of the rcs_ functions.
Using named parameters for these is overdue. Passing the session in a
parameter instead of passing username and IP separately will later allow
storing other session info, like username or part of the email.
Note that these functions are not part of the exported API,
and the prototype change will catch (most) skew, so I am not changing
API versions. Any third-party plugins that call them will need updated
though.
In the process, commits of changed po files lost their special
usernames. Instead of trying to dummy up a session
object for the special username, I just don't pass one, and the commit will
appear to be from whatever user ikiwiki runs as.
Everywhere that REMOTE_ADDR was used, a session object is available, so
instead use its remote_addr method.
In IkiWiki::Receive, stop setting a dummy REMOTE_ADDR.
Note that it's possible for a session cookie to be obtained using one IP
address, and then used from another IP. In this case, the first IP will now
be used. I think that should be ok.
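The session objects come from CGI::Session, whose remote_addr() method returns
the address the session was created from; a minimal sketch of the switch, with
the session setup reduced to a bare minimum:

    use strict;
    use warnings;
    use CGI;
    use CGI::Session;

    my $cgi = CGI->new;
    my $session = CGI::Session->new(undef, $cgi, { Directory => "/tmp" });

    my $before = $ENV{REMOTE_ADDR};       # what the code used to look at
    my $after  = $session->remote_addr(); # the IP the session was created from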
Now the git plugin supports commits with author fields that look like:
Author: http://my.openid/ <me@web>
Then in recentchanges, the short username will be displayed, linking
to the openid.
Particularly useful for the horrible google openids, of course.
This way, an email-like link will be a mailto until a matching page
is created, then it will link to the page. And removing the page will
convert it back to a mailto.
At least two bugfixes in here. First, an old bug;
\[[foo#0]] was displayed as [[foo]], losing the anchor
as the anchor text was false. Secondly, a new bug;
an email like foo#bar@baz should not check bestlink("foo@baz").
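The old bug is the classic Perl pitfall that the string "0" is false; a
minimal illustration (not the actual wikilink code):

    my ($page, $anchor) = ("foo", "0");

    # Buggy: "0" is false, so the anchor silently disappears.
    my $bad  = $anchor ? "$page#$anchor" : $page;   # "foo"

    # Fixed: test for a defined, non-empty anchor instead.
    my $good = (defined $anchor && length $anchor)
             ? "$page#$anchor" : $page;             # "foo#0"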
The following ways to create a link are supported now:
  [[url]]
  [[text|url]]
url can be one of the following:
- an internal wikilink: will be handled as before
- any other kind of URL, including mailto:, for which proper links will be created:
    <a href="url">url</a>
    <a href="url">text</a>
- an email address:
    <a href="mailto:url">url</a>
    <a href="mailto:url">text</a>
For now, a rebuild is the only way to ensure the changed theme is used.
Ikiwiki normally will not realize style.css has changed, since themes
tend to have the same timestamp for the file.
A short story:
Once there was a unicode string, let's call him Srcdir.
Along came a crufty old File::Find, who went through a tree and pasted each
of the leaves in turn onto Srcdir. But this 90's relic didn't decode the
leaves -- despite some of them using unicode! Poor Srcdir, with these
leaves stuck on him, tainted them with his nice unicode-ness. They didn't
look like leaves at all, but instead garbage.
(In other words, perl's unicode support sucks mightily, and drives
us all to drink and bad storytelling. But we knew that..)
So, srcdir is not normally flagged as unicode, because typically it's pure
ascii. And in that case, things work ok; File::Find finds filenames, which
are not yet decoded to unicode, and appends them to the srcdir, and then
decode_utf8 happily converts the whole thing.
But, if the srcdir does contain utf8 characters, that breaks. Or, if a YAML
setup file is used, YAML::Syck's ImplicitUnicode option sets the unicode flag of
*all* strings, even those containing only ascii. In either case, srcdir
has the unicode flag set; a non-decoded filename is appended, and the flag
remains set; and decode_utf8 sees the flag and does *nothing*. The result
is that the filename is not decoded, ends up garbled, and the file gets skipped.
File::Find only sticks the directory and filenames together in no_chdir
mode .. but we need that mode for security. In order to retain the
security, and avoid the problem, I made it not pass srcdir to File::Find.
Instead, chdir to the srcdir, and pass ".". Since "." is ascii, the problem
is avoided.
Note that chdir srcdir is safe because we check for symlinks in the srcdir
path.
Note that it takes care to chdir back to the starting location, because the
user may have specified relative paths, and staying in the srcdir could break
them. A relative path could even be specified for an underlay dir, so it
chdirs back after each.
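A minimal sketch of the resulting traversal (simplified; the real code also
does its pruning and security checks):

    use strict;
    use warnings;
    use File::Find;
    use Cwd qw(getcwd);
    use Encode qw(decode_utf8);

    my $srcdir  = shift @ARGV;   # stand-in for the configured srcdir
    my $origdir = getcwd();      # remember where we started

    chdir($srcdir) || die "chdir $srcdir: $!";
    find({
        no_chdir => 1,           # needed for the security checks
        wanted   => sub {
            # $File::Find::name now starts with "./", which is pure
            # ascii, so decode_utf8 reliably decodes the filename part.
            my $file = decode_utf8($File::Find::name);
            print "$file\n";
        },
    }, '.');                     # pass ".", not $srcdir
    chdir($origdir) || die "chdir $origdir: $!";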
Removing a plugin from add_plugins is not always enough to disable it.
It may have been redundantly added there and also pulled in via goodstuff.
Always add disabled plugins to disable_plugins.
* calendar: Shorten day names, and improve styling of month calendar.
* style.css: Reduced sidebar width back to 20ex from 30; the month calendar
will now fit in the smaller width, and 30 was feeling too large.
I've seen user(http://*) confuse someone who didn't know pagespecs into
thinking that just http://* would moderate all comments to every page, or
something like that.
The problem is that by the time rendering calls render_dependent, %pagesources
has had deleted files removed from it, so match_comment's lookup of
files in there to see if they had the _comment extension failed.
To fix this, I had to introduce a hash that temporarily holds the filenames
of deleted pages.
Note that unlike comment(), internal() had avoided this pitfall by being
defined to match both internal and non-internal pages.
If the site is configured to allow comments on *, then the comment post
interface was being added to cgi pages like signin and prefs. This fixes it
w/o requiring more page.tmpl changes. The pagetemplate hook is called by
misctemplate with an empty page name for dynamic pages.
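A rough sketch of how a pagetemplate hook can take advantage of that (the
check is illustrative; the comments plugin's actual test may look different):

    sub pagetemplate (@) {
        my %params = @_;
        # misctemplate passes an empty page name for dynamic pages
        # such as signin and prefs, so skip those.
        return unless length $params{page};
        # ... add the comment post interface only for real wiki pages
    }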
On second thought, misctemplate can use pagetemplate hooks to provide
it, so it's better to keep back-compat, and allow full customisation
of how it's displayed via the template.
So that RecentChanges shows on the action bar there,
convert recentchanges to use the new pageactions hook,
with compatibility code to avoid breaking old templates.
If po is imported twice, bad things happen. Guard against that.
I'm not sure what causes the double import; I saw it when websetup did a
wiki rebuild. Carp failed to show a backtrace for the second call to
import.
* openid: Incorporated a fancy openid-selector signin form.
(http://code.google.com/p/openid-selector/)
* openid: Use "openid_identifier" as the form field, as required
by OpenID Authentication v2.0 spec.
Instead, add a custom do=commentsignin that calls cgi_signin.
This allows a plugin to inject a custom cgi_signin that uses a different
do= parameter, and have it be used consistently. (This was the only
place that hardcoded a link to do=signin.)
Test isinternal first, because match_glob with internal => 1 also returns
non-internal pages that match. This order should also be faster.
Remove the test to see if pagesources is set; isinternal will not succeed if it
is not.
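The resulting shape, roughly (a sketch only, assuming the usual IkiWiki
pagespec helpers; the real function in the comments plugin differs in detail):

    package IkiWiki::PageSpec;

    sub match_comment_sketch ($$;@) {
        my ($page, $glob, @params) = @_;
        # isinternal is the cheap test, and it also rules out the
        # non-internal pages that match_glob with internal => 1 accepts.
        return IkiWiki::FailReason->new("$page is not internal")
            unless IkiWiki::isinternal($page);
        return match_glob($page, $glob, @params, internal => 1);
    }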
Since all forms are wrapped in a template that defines the actual
stylesheets, formbuilder just has to be told to turn on stylesheet mode,
not which file is the stylesheet.
* comments: Comments pending moderation are now stored in the srcdir
alongside accepted comments, but with a `._comment_pending` extension.
* This allows easier by-hand moderation, as the "_pending" suffix need
only be stripped off and the comment committed to version control.
* The `comment_pending()` pagespec can be used to match such unmoderated
comments, which makes it easy to add a feed of them, or a counter
indicating how many there are.
* Belatedly added a `comment()` pagespec.
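For example, a moderation page in the wiki could use something along these
lines (the directive parameters shown are the usual inline/pagecount ones):

    [[!inline pages="comment_pending(*)" feeds=yes archive=yes]]

    There are [[!pagecount pages="comment_pending(*)"]] comments awaiting moderation.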