There seems to be no need to allow selecting a location when creating a page
this way; the user will always want it to appear in the inline whose form
they submitted.
The lack of $from will probably hurt setups using po_link_to = current,
but at least we can fix the blocker bug that prevents any wiki using the po
plugin from building.
Use the included page name rather than the including page name. This
lets us allow feeds in nested inlines without duplicating feeds
with the same content under different (and stupid) names.
Support all elements that HTML::Tagset knows about.
(Which doesn't include HTML5 elements just yet, but then the old version didn't either.)
Bonus: 4 times faster than the old regexp method.
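As a rough illustration (not the plugin's actual code), HTML::Tagset exposes a
hash of every element it knows about, so an allowed-element set can be derived
from it instead of being hard-coded in a regexp:

    use HTML::Tagset;

    # Build the allowed-element set from everything HTML::Tagset knows.
    my %allowed = map { $_ => 1 } keys %HTML::Tagset::isKnown;

    # A tag seen while parsing is then a simple hash lookup away.
    print "div is allowed\n" if $allowed{div};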
Formbuilder has an annoying glitch: setting the value of a checkbox, even
without force, will override the value currently on the form. Hence the
guards against changing checkbox values once a form has been submitted.
But those guards also prevented the checkboxes for advanced items from
getting the right value when going into advanced mode.
Note that if the user makes changes to advanced mode stuff and then leaves
advanced mode, those changes are lost. That seems reasonable, so I didn't
change it -- and it made this fix simple.
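A minimal sketch of the kind of guard described above, using CGI::FormBuilder
with a hypothetical field name:

    use CGI::FormBuilder;

    my $form = CGI::FormBuilder->new(fields => ["show_advanced"]);

    # Only push a default onto the checkbox when the form has not been
    # submitted; otherwise CGI::FormBuilder would override the value the
    # user just submitted. "show_advanced" is a made-up field name.
    if (! $form->submitted) {
        $form->field(name => "show_advanced", value => 1, type => "checkbox");
    }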
I now understand the need to avoid chdir when running git_parse_changes
for receive: at that point, the changes have not been pushed to
the srcdir's repo yet. When running the same code for preprevert,
chdir to the srcdir is ok, and necessary.
plovs reported a crash when templates were not installed properly,
with a non-useful error about the template object not being defined.
I've audited all uses of template_depends() and template(), and it makes
sense for them to throw an error if the template cannot be found. All code
with a user-supplied template already catches errors, to handle template
parse failures.
It did not make sense for template_file to throw errors, as some code uses
it to probe if a template file is available.
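A rough sketch of the resulting contract; the probing pattern below is
illustrative, not the audited code:

    # template() and template_depends() die if the template is missing;
    # code that only wants to know whether a template exists probes with
    # template_file() and checks for undef instead.
    my $file = IkiWiki::template_file("comment.tmpl");
    if (defined $file) {
        my $template = IkiWiki::template("comment.tmpl");
        # ... use the template ...
    }
    else {
        # degrade gracefully; nothing was thrown
    }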
The HTML::Tree changelog says:
    [THINGS THAT MAY BREAK YOUR CODE OR TESTS]
    ...
    * Attribute names are now validated in as_XML and invalid names will
      cause an error.
and indeed the regression tests do get an error.
With a relative xrds-location, the openid perl client module will fail.
I haven't checked the specs to see whether it needs to be absolute, but all
examples I've seen are absolute, so making it absolute seems a very good idea.
I also tried setting RPC::XML::ENCODING, but that did not prevent the crash,
and it seems that blogspam.net doesn't like getting XML encoded in unicode:
that way it mis-flagged comments as spam that are normally allowed through.
If I am not mistaken, all source files in ikiwiki are encoded in UTF-8.
Adding `\usepackage[utf8]{inputenc}` enables LaTeX to deal with that encoding.
As a consequence, special characters like umlauts can be used in the source
code, which is useful for foreign languages.
[[!teximg code="a = b \text{ für alle } b \neq 2"]]
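For illustration, the directive above boils down to something along these
lines (an assumed minimal preamble; amsmath is pulled in here only so that
\text works):

    \documentclass{article}
    \usepackage[utf8]{inputenc} % the added line; lets the UTF-8 "ü" below work
    \usepackage{amsmath}        % for \text
    \begin{document}
    $a = b \text{ für alle } b \neq 2$
    \end{document}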
But, for example, »≠« still cannot be used in LaTeX. One has to use other TeX
engines like XeTeX or LuaTeX, which have native UTF-8 support, or additional
nonstandard packages like uniinput [1].
I used the package `inputenc` (`texdoc inputenc`) and not `inputenx` (`texdoc
inputenx`), because I have not used `inputenx` much, and its `math` option is
not supported in Debian (and, I guess, other distributions too) since
`inpmath` is not included in CTAN.
[1] http://wiki.neo-layout.org/browser/latex/Standard-LaTeX
Signed-off-by: Paul Menzel <paulepanter@users.sourceforge.net>
Avoid the generic "you are not allowed to change" message,
and instead allow check_canedit to propagate out useful error messages.
Went back to calling check_canedit in fatal mode, but added a parameter to
avoid calling the troublesome subs that might cause a login attempt.
A missing smileys.mdwn caused the plugin to error out, interrupting the
build process. Instead, we now check for the file's presence and warn without
erroring out if it is missing, in a similar fashion to what is currently done
for the shortcut plugin.
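A sketch of that check, assuming srcfile()'s optional "don't die" second
argument; the warning text is made up:

    # Probe for smileys.mdwn instead of letting srcfile() die and abort
    # the build; warn and skip smiley processing if it is missing.
    my $list = srcfile("smileys.mdwn", 1);
    if (! defined $list) {
        warn "smiley plugin will not work without smileys.mdwn\n";
        return;
    }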
This reverts commit 3ef8864122.
Most aggregators block javascript, so it would display badly.
Need to find a way to fall back to static buttons instead.
This makes the javascript get added to rss feeds, which allows the buttons
to be displayed by aggregators. At least, if the aggregator does not
sanitize javascript.
The po rescan hook re-runs the scan hooks, and runs the preprocess ones in scan
mode, both on the po-to-markup converted content. This way, plugins such as meta
are given a chance to gather correct information, rather than the ugly/buggy
escaped data they would otherwise gather from unconverted PO files.
This is needed for the po plugin vs. e.g. meta titles.
In order to get rid of the ugly "rebuilding all pages to fix meta titles" thing,
Joey's suggestion was to make "po, at scan time, re-run the scan hooks, passing
them modified content (either converted from po to mdwn or with the escaped
stuff cheaply de-escaped)". This would unfortunately not work, as the meta
plugin gathers its data using the preprocess hook in scan mode: it would
overwrite the correct data we had forced it to gather in po's scan hook with
buggy data.
We then need a hook that runs *after* the preprocess hook has been run in scan
mode, but *before* any page rendering is started. Hence this one.
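For reference, a sketch of how po would register such a hook, following
ikiwiki's usual hook() registration style (the parameters passed to the
callback are assumed to mirror the scan hook's):

    # The hook fires after the preprocess hooks have run in scan mode for
    # all pages, but before any page rendering is started.
    hook(type => "rescan", id => "po", call => \&rescan);

    sub rescan {
        my %params = @_;    # assumed: page => ..., content => ...
        # re-run the scan hooks on the po-to-markup converted content
    }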
The idea here is that <meta name="foo" description="bar">
can be written as [[!meta name="foo" description="bar"]].
Of course, [[!meta foo=bar]] is still supported; this new feature
provides some DWIM when trying to directly convert a meta tag into
a meta directive.
This reverts commit 4cf185e781.
That commit broke t/po.t (probably the test case is just testing too
close to the old implementation and needs correcting).
Also, we have not decided how we want to represent it yet, so I'm not
ready for this change.
Conflicts:
	IkiWiki/Plugin/po.pm
	doc/plugins/po.mdwn