The HTML::Tree changelog says:

    [THINGS THAT MAY BREAK YOUR CODE OR TESTS]
    ...
    * Attribute names are now validated in as_XML and invalid names
      will cause an error.

and indeed the regression tests do get an error.
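For illustration, a minimal sketch of the failure mode (the attribute
name here is made up; the exact validation rules are HTML::Tree's):

    use HTML::Element;

    my $p = HTML::Element->new('p');
    $p->attr('0bad' => 'x');   # '0bad' is not a valid attribute name
    print $p->as_XML;          # recent HTML::Tree versions error out here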
With a relative xrds-location, the openid perl client module will fail.
I haven't checked the specs to see whether it needs to be absolute, but all
examples I've seen are absolute, so requiring that seems a very good idea.
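For reference, discovery markup with an absolute xrds-location looks
roughly like this (example.com is a placeholder):

    <meta http-equiv="X-XRDS-Location"
          content="http://example.com/id/xrds" />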
I also tried setting RPC::XML::ENCODING, but that did not prevent the crash,
and it seems that blogspam.net doesn't like getting XML encoded in Unicode,
since comments that are normally allowed through were mis-flagged as spam
that way.
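Roughly what was tried (a sketch; it did not prevent the crash):

    use RPC::XML;

    # declare outgoing XML as UTF-8 instead of the default US-ASCII
    $RPC::XML::ENCODING = 'UTF-8';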
If I am not mistaken, all source files in ikiwiki are encoded in UTF-8.
Adding `\usepackage[utf8]{inputenc}` enables LaTeX to deal with the encoding.
As a consequence, some special characters like umlauts can be used in the
source code, which is useful for foreign languages.
[[!teximg code="a = b \text{ für alle } b \neq 2"]]
But, for example, »≠« cannot be used in LaTeX right now. One has to use other
TeX systems like XeTeX or LuaTeX, which feature native UTF-8 support, or use
additional nonstandard packages like uniinput [1].
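For reference, a minimal standalone document showing what `inputenc` makes
possible (a sketch; the preamble teximg actually generates may differ):

    \documentclass{article}
    \usepackage[utf8]{inputenc}
    \usepackage{amsmath}
    \begin{document}
    $a = b \text{ für alle } b \neq 2$
    \end{document}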
I used the package `inputenc` (`texdoc inputenc`) and not `inputenx` (`texdoc
inputenx`), because I have not used `inputenx` much, and its `math` option is
not supported in Debian (and, I guess, in other distributions too) since
`inpmath` is not included in CTAN.
[1] http://wiki.neo-layout.org/browser/latex/Standard-LaTeX
Signed-off-by: Paul Menzel <paulepanter@users.sourceforge.net>
Avoid the generic "you are not allowed to change" message,
and instead allow check_canedit to propagate out useful error messages.
Went back to calling check_canedit in fatal mode, but added a parameter to
avoid calling the troublesome subs that might cause a login attempt.
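A sketch of the idea only, not the actual implementation; "nonlogin" is a
hypothetical name for the new parameter:

    sub check_canedit {
        my ($page, $q, $session, $nonlogin) = @_;
        run_hooks(canedit => sub {
            # (the real change uses the extra parameter to skip
            # hooks that might cause a login attempt)
            my $ret = shift->($page, $q, $session);
            # fatal mode: propagate the hook's own message instead
            # of a generic "you are not allowed to change" one
            error($ret) if defined $ret && length $ret;
        });
        return 1;
    }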
A missing smileys.mdwn caused the plugin to error out, interrupting the
build process. Instead, we check for the file's presence and warn without
erroring out in case it's missing, similar to what is currently done for
the shortcut plugin.
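The check is roughly of this shape (a sketch, assuming srcfile()'s optional
nonfatal second argument as documented for plugins):

    # look the file up without erroring out if it is missing
    my $srcfile = srcfile("smileys.mdwn", 1);
    if (! defined $srcfile) {
        print STDERR "smiley plugin will not work without smileys.mdwn\n";
        return;
    }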
This reverts commit 3ef8864122.
Most aggregators block javascript, so it would display badly.
Need to find a way to fall back to static buttons instead.
This makes the javascript be added to rss feeds, which allows the buttons
to be displayed by aggregators. At least, if the aggregator does not
sanitize javascript.
The po rescan hook re-runs the scan hooks, and runs the preprocess ones in scan
mode, both on the po-to-markup converted content. This way, plugins such as meta
are given a chance to gather correct information, rather than the ugly/buggy
escaped data they would otherwise gather from unconverted PO files.
This is needed for the po plugin vs. e.g. meta titles.
In order to get rid of the ugly "rebuilding all pages to fix meta titles" thing,
Joey suggested making "po, at scan time, re-run the scan hooks, passing them
modified content (either converted from po to mdwn or with the escaped stuff
cheaply de-escaped)". This would unfortunately not work, as the meta plugin
gathers its data using the preprocess hook in scan mode: it would overwrite
the correct data we had forced it to gather in po's scan hook with buggy data.
We then need a hook that runs *after* the preprocess hook has been run in scan
mode, but *before* any page rendering is started. Hence this one.
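As a sketch, registration looks roughly like this (the hook body is elided
to its intent):

    hook(type => "rescan", id => "po", call => \&rescan);

    sub rescan {
        my %params = @_;   # page name and content
        # re-run the scan hooks, and the preprocess hooks in scan
        # mode, on the po-to-markup converted content
    }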
The idea here is that <meta name="foo" description="bar">
can be written like [[!meta name="foo" description="bar"]].
Of course, [[!meta foo=bar]] is still supported; this new feature
provides some DWIM when trying to directly convert a meta tag into
a meta directive.
This reverts commit 4cf185e781.
That commit broke t/po.t (probably the test case is just testing too
closely against the old implementation and needs correcting).
Also, we have not yet decided how we want to represent it, so I'm not
ready for this change.
Conflicts:
IkiWiki/Plugin/po.pm
doc/plugins/po.mdwn
Probably best to store it unsanitized and sanitize as needed on use.
And it already was sanitized for comments, leaving only the need to sanitize
the nickname when committing to git, to ensure the email address is legal.
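The kind of sanitization meant here, as a sketch (the character set is
illustrative only, not the actual one):

    my $nickname = "evil <name>\n";   # example input
    # strip anything that could make the git author field or the
    # fake email address built from the nickname illegal
    (my $clean = $nickname) =~ s/[^-\w .@]//g;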
... after having audited the po4a Xml and Xhtml modules for security issues.
Signed-off-by: intrigeri <intrigeri@boum.org>
(cherry picked from commit a128c256a5)
This better defines what the filter hook is passed, to only be the raw,
complete text of a page. Not some snippet, or data read in from an
unrelated template.
Several plugins that filtered text originating from an (already
filtered) page were modified not to do that. Note that this was not
done very consistently before; other plugins that receive text from a
page called preprocess on it without first calling filter.
The template plugin gets text from elsewhere, and was also changed not to
filter it. That leads to one known regression -- the embed plugin cannot
be used to embed stuff in templates now. But that plugin is deprecated
anyway.
Later we may want to increase the coverage of what is filtered. Perhaps
a good goal would be to allow writing a filter plugin that filters
out unwanted words, from any input. We're not there yet; not only
does the template plugin load unfiltered text from its templates now,
but so can the table plugin, and other plugins that use templates (like
inline!). I think we can cross that bridge when we come to it. If I wanted
such a censoring plugin, I'd probably make it use a sanitize hook instead,
for better coverage.
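For concreteness, the shape of a filter hook under the new semantics (the
id and the regex are made up):

    hook(type => "filter", id => "censor", call => \&filter);

    sub filter {
        my %params = @_;   # page, destpage, and content
        # content is always the raw, complete text of a page now
        $params{content} =~ s/\bsomebadword\b/[censored]/g;
        return $params{content};
    }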
For now I am concentrating on the needs of the two non-deprecated users
of filter. This should fix bugs/po_vs_templates, and it probably fixes
an obscure bug around txt's use of filter for robots.txt.
Set it to true every time IkiWiki::filter is called on a full page's content.
This is a much nicer solution, for the po plugin, than the previous
whitelisting using caller().