- Migrate the set of deletions to the {autofile} set, since it has
more or less the same effect. This affects the "deleted" case in the
test.
- If a page has just been deleted, add it as an autofile anyway: by
the time gen_autofile is called, it'll be in the list of deleted files,
so it'll just be added to {autofile}. This affects the "gone" case
in the test.
- Behaviour change: we don't forget that a page with no reason to be
re-created was deleted. This affects the 'expunged' and 'reinstated'
cases in the test.
This does cause a minor regression: index pages are now committed
individually rather than in a single commit per rebuild.
This also means the autoindex regression test needs to trigger the
autofile generation pass.
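For reference, registering a file through the shared autofile machinery
looks roughly like this (a sketch; $dir and the generator body are
illustrative, and the generator is called with no arguments as I recall):

    my $file = newpagefile($dir."/index", $config{default_pageext});
    add_autofile($file, "autoindex", sub {
            # not called for files the user has deleted; that
            # bookkeeping now lives in the shared {autofile} set
            writefile($file, $config{srcdir}, "");
    });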
The default templates are also updated to make use of this information.
The rel="alternate" attribute is also inserted, for completeness.
(cherry picked from commit 618ade535e6a7967a510d9e210edaef3d37cc9bc)
cgitemplate is a modified misctemplate that takes an optional cgi object
and uses it to set the baseurl, and also optionally the forcebaseurl,
if a page is provided.
If no cgi object is provided, it will fall back to using $config{url}.
I expect this will only be needed in exceptional cases where
that doesn't much matter, such as cgierror().
showform uses cgitemplate, so there is no more need for showform_preview.
This way, do=goto will go to the page relative to
the current location, while the permalinks in feeds
will be absolute (unless no url is configured at all).
Since tag names are now retrieved from the file names, we must reverse
the escaping process that sanitizes the file names. Solve this by adding a
`pagetitle()` call at the end of tagname().
(cherry picked from commit 0ee0612b1ab11d76eb3790c8db7a2ba992c54f6b)
The use of typed links for tags and some of the consequent changes
introduced some unwanted functionality variations in the tag system. Two
problems in particular could be observed, when compared to the use of
tags in older versions of IkiWiki:
* tags in feeds (both rss and atom) would use the file path as their
name (e.g. you would have <category term="tags/sometag" /> in an atom
item for a page tagged sometag with a tagbase of tags), whereas before
they appeared as the plain tag name
* tags containing a slash character would appear without the slash
character but be used with the slash character in other circumstances
(effect visible by tagging a page with a name such as "with/slash")
Both of these issues are fixed by introducing a tagname() function that
takes a tag link and effectively reverses (as well as possible) the
effects of taglink().
A possible alternative route would have been the reintroduction of the
global %tags hash, but the new approach has the (arguable) benefit of
introducing a small layer of sanitation for tag names.
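Sketchwise, tagname() ends up shaped like this (reconstructed from
memory, not the verbatim code):

    sub tagname ($) {
            my $tag = shift;
            # strip the path prefix that taglink() prepended
            if (defined $config{tagbase}) {
                    $tag =~ s!^/\Q$config{tagbase}\E/!!;
            }
            else {
                    $tag =~ s!^/!!;
            }
            # undo the file name escaping, restoring e.g. slashes
            return pagetitle($tag, 1);
    }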
Note that in particular calling initTheme with an empty file no longer
works.
Use of initLanguage was replaced by loadLanguage, which seems to work
in both places.
I tried to make it a bit more robust against a missing highlight package.
There are lots of warnings, but it no longer crashes.
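The adjusted setup is roughly (a sketch from memory of the highlight
bindings; the theme path and $langfile are illustrative):

    eval q{use highlight};
    if ($@) {
            warn "highlight perl module not available";
            return $input; # pass the source through unhighlighted
    }
    my $gen = highlight::CodeGenerator::getInstance($highlight::XHTML);
    $gen->setFragmentCode(1);
    # initTheme() can no longer be fed an empty file; give it a real
    # theme (path illustrative)
    $gen->initTheme("/usr/share/highlight/themes/seashell.theme");
    # loadLanguage() replaces the old initLanguage() calls
    $gen->loadLanguage($langfile);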
Now that page.tmpl is used for cgi, the parentlinks are able to be
displayed even when creating or editing a page. So it's redundant to
include the path to the page in the title; remove it.
There seems no need to allow selecting a location when creating a page this
way; the user should always want it to appear in the inline whose form they
submitted.
The lack of $from will probably hurt setups using po_link_to = current,
but at least we can fix the blocker bug that prevents any wiki using the po
plugin from building.
Use the included page name rather than the including page name. This
lets us allow feeds in nested inlines without duplicating feeds
with the same content under different (and stupid) names.
and support all elements that HTML::Tagset knows about.
(Which doesn't include html5 just yet, but then the old version didn't either.)
Bonus: 4 times faster than the old regexp method.
So formbuilder has an annoying glitch: setting the value of a
checkbox, even without force, will override the value currently on the
form. Hence the guards against changing checkbox values when a form has been
submitted.
But those guards also prevented the checkboxes for advanced items from
getting the right value when going into advanced mode.
Note that if the user makes changes to advanced mode stuff and leaves
advanced mode, those changes are lost. That seems reasonable so I didn't
change it -- and it made this fix simple.
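The guard ends up shaped something like this (a sketch; field and
button names are made up):

    # skip setting checkbox values once the form has been submitted,
    # except when the submission is the advanced-mode toggle itself
    if (! $form->submitted ||
        $form->submitted eq "Advanced Mode") {
            $form->field(name => "banned_users_enable",
                    type => "checkbox", value => $value, force => 1);
    }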
Now I understand the need to avoid chdir when running git_parse_changes
for receive. At that point, the changes have not been pushed to
the srcdir's repo yet. When running the same code for preprevert,
chdir to the srcdir is ok, and necessary.
plovs reported a crash when templates were not installed properly,
with a non-useful error about the template object not being defined.
I've audited all uses of template_depends(), and template(), and it makes
sense for them to throw an error if the template cannot be found. All code
with a user-supplied template catches errors already, to handle template
parse failures.
It did not make sense for template_file to throw errors, as some code uses
it to probe if a template file is available.
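The resulting division of labor, roughly (a sketch; template names
illustrative):

    # template() and template_depends() now throw on a missing
    # template; callers with user-supplied templates already wrap them
    my $tmpl = eval { template("mytemplate.tmpl") };
    error("failed to process template") if $@;

    # template_file() stays non-fatal, so it can be used as a probe
    my $file = IkiWiki::template_file("optional.tmpl");
    if (defined $file) {
            # only use the optional template when it exists
    }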
The HTML::Tree changelog says:
[THINGS THAT MAY BREAK YOUR CODE OR TESTS]
...
* Attribute names are now validated in as_XML and invalid names will
cause an error.
and indeed the regression tests do get an error.
With a relative xrds-location, the openid perl client module will fail.
I haven't checked the specs to see if it needs to be absolute, but all
examples I've seen are absolute, so it seems a very good idea.
I also tried setting RPC::XML::ENCODING but that did not prevent the crash,
and it seems that blogspam.net doesn't like getting xml encoded in unicode,
since that way it mis-flagged comments as spam that would normally be
allowed through.
If I am not mistaken, all source files in ikiwiki are encoded in Unicode UTF-8.
Adding `\usepackage[utf8]{inputenc}` enables LaTeX to deal with the encoding.
As a consequence, some special characters like umlauts can be used in the
source code, which is useful for foreign languages.
[[!teximg code="a = b \text{ für alle } b \neq 2"]]
But, for example, »≠« cannot be used in LaTeX right now. One has to use other
TeX systems like XeTeX or LuaTeX, which feature native UTF-8 support, or use
additional nonstandard packages like uniinput [1].
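A minimal standalone document showing the effect (\usepackage{amsmath}
assumed for \text):

    \documentclass{article}
    \usepackage[utf8]{inputenc} % lets LaTeX read the UTF-8 source
    \usepackage{amsmath}        % provides \text
    \begin{document}
    $a = b \text{ für alle } b \neq 2$ % the literal ü now works
    \end{document}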
I used the package `inputenc` (`texdoc inputenc`) and not `inputenx` (`texdoc
inputenx`), because I have not used `inputenx` that much and using the option
`math` is not supported in Debian (and I guess other distributions too) since
`inpmath` is not included in CTAN.
[1] http://wiki.neo-layout.org/browser/latex/Standard-LaTeX
Signed-off-by: Paul Menzel <paulepanter@users.sourceforge.net>
Avoid the generic "you are not allowed to change" message,
and instead allow check_canedit to propagate out useful error messages.
Went back to calling check_canedit in fatal mode, but added a parameter to
avoid calling the troublesome subs that might cause a login attempt.
A missing smileys.mdwn caused the plugin to error out, interrupting the
build process. Instead, we check for the file's presence and warn without
erroring out in case it's missing, in a similar fashion to what is
currently done for the shortcut plugin.
This reverts commit 3ef8864122.
Most aggregators block javascript, so it would display badly.
Need to find a way to fall back to static buttons instead.
This makes the javascript be added to rss feeds, which allows the buttons
to be displayed by aggregators. At least, if the aggregator does not
sanitize javascript.
The po rescan hook re-runs the scan hooks, and runs the preprocess ones in scan
mode, both on the po-to-markup converted content. This way, plugins such as meta
are given a chance to gather correct information, rather than the ugly/buggy
escaped data they would otherwise gather from unconverted PO files.
This is needed for the po plugin vs. e.g. meta titles.
In order to get rid of the ugly "rebuilding all pages to fix meta titles" thing,
Joey suggested to make "po, at scan time, re-run the scan hooks, passing them
modified content (either converted from po to mdwn or with the escaped stuff
cheaply de-escaped)". This would unfortunately not work, as the meta plugin
gathers its data using the preprocess hook in scan mode: it would overwrite with
buggy data the correct data we would have forced it to gather in po's scan hook.
We then need a hook that runs *after* the preprocess hook has been run in scan
mode, but *before* any page rendering is started. Hence this one.
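In hook terms, something like this (a sketch; the exact parameters
passed to rescan hooks are assumed here):

    hook(type => "rescan", id => "po", call => \&rescan);

    sub rescan (@) {
            my %params = @_; # assumed: page and content, as for scan
            my $content = po_to_markup($params{page}, $params{content});
            # re-run the scan hooks on the converted content
            IkiWiki::run_hooks(scan => sub {
                    shift->(page => $params{page}, content => $content);
            });
    }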
The idea here is that <meta name="foo" description="bar">
can be written like [[!meta name="foo" description="bar"]].
Of course, [[!meta foo=bar]] is still supported; this new feature
provides some DWIM when trying to directly convert a meta tag into
a meta directive.
This reverts commit 4cf185e781.
That commit broke t/po.t (probably the test case is just testing too
close to the old implementation and needs correcting).
Also, we have not decided how we want to represent it yet, so I'm not
ready for this change.
Conflicts:
IkiWiki/Plugin/po.pm
doc/plugins/po.mdwn
Probably best to store it unsanitized and sanitize as needed on use.
And it already was sanitized for comments, leaving only the need to sanitize the
nickname when git committing, to ensure the email address is legal.
... after having audited the po4a Xml and Xhtml modules for security issues.
Signed-off-by: intrigeri <intrigeri@boum.org>
(cherry picked from commit a128c256a5)
This better defines what the filter hook is passed, to only be the raw,
complete text of a page. Not some snippet, or data read in from an
unrelated template.
Several plugins that filtered text that originates from an (already
filtered) page were modified not to do that. Note that this was not
done very consistently before; other plugins that receive text from a
page called preprocess on it w/o first calling filter.
The template plugin gets text from elsewhere, and was also changed not to
filter it. That leads to one known regression -- the embed plugin cannot
be used to embed stuff in templates now. But that plugin is deprecated
anyway.
Later we may want to increase the coverage of what is filtered. Perhaps
a good goal would be to allow writing a filter plugin that filters
out unwanted words, from any input. We're not there yet; not only
does the template plugin load unfiltered text from its templates now,
but so can the table plugin, and other plugins that use templates (like
inline!). I think we can cross that bridge when we come to it. If I wanted
such a censoring plugin, I'd probably make it use a sanitize hook instead,
for the better coverage.
For now I am concentrating on the needs of the two non-deprecated users
of filter. This should fix bugs/po_vs_templates, and it probably fixes
an obscure bug around txt's use of filter for robots.txt.
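For illustration, such a censoring plugin under the new filter contract
might look like this (hypothetical plugin, not shipped code):

    package IkiWiki::Plugin::censor;
    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
            hook(type => "filter", id => "censor", call => \&filter);
    }

    sub filter (@) {
            my %params = @_; # page, destpage, content
            # under the new contract this is always the raw, complete
            # text of a page, never a snippet or template output
            $params{content} =~ s/\bdarn\b/[elided]/g;
            return $params{content};
    }

    1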
Set it to true every time IkiWiki::filter is called on a full page's content.
This is a much nicer solution, for the po plugin, than previous whitelisting
using caller().
The protection against processing loops (i.e. the alreadyfiltered stuff) was
playing against us: the template plugin triggered a run of the filter hooks
with the very same ($page, $destpage) arguments pair that we use to identify
an already filtered page. Processing an included template could then mark the
whole translation page as already filtered, which prevented po_to_markup from
being called on the PO content.
This commit only runs the whole PO filter logic when our filter hook is run by
IkiWiki::render, which only happens when the full page needs to be filtered.
Renamed usershort => nickname.
Note that this means existing user login sessions will not have the nickname
recorded, and so it won't be used for those.
There was some confusion about whether the filename was
relative to srcdir or not. Some test cases, and the bzr
plugin assumed it was relative to the srcdir. Most everything else
assumed it was absolute.
Changed it to relative, for consistency with the rest
of the rcs_ functions.
Using named parameters for these is overdue. Passing the session in a
parameter instead of passing username and IP separately will later allow
storing other session info, like username or part of the email.
Note that these functions are not part of the exported API,
and the prototype change will catch (most) skew, so I am not changing
API versions. Any third-party plugins that call them will need updating,
though.
In the process, the special usernames were lost from commits of changed po
files. Instead of trying to dummy up a session
object for the special username, I just don't pass one, and the commit will
appear to be from whatever user ikiwiki runs as.
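The new calling convention is roughly (a sketch; parameter names from
memory):

    # before: rcs_commit($file, $message, $rcstoken, $user, $ipaddr)
    # after (named parameters, session object instead of user/IP):
    rcs_commit(
            file => $file,
            message => $message,
            token => $rcstoken,
            session => $session,
    );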
Everywhere that REMOTE_ADDR was used, a session object is available, so
instead use its remote_addr method.
In IkiWiki::Receive, stop setting a dummy REMOTE_ADDR.
Note that it's possible for a session cookie to be obtained using one IP
address, and then used from another IP. In this case, the first IP will now
be used. I think that should be ok.
Now the git plugin supports commits with author fields that look like:
Author: http://my.openid/ <me@web>
Then in recentchanges, the short username will be displayed, linking
to the openid.
Particularly useful for the horrible google openids, of course.
This way, an email-like link will be a mailto until a matching page
is created, then it will link to the page. And removing the page will
convert it back to a mailto.
At least two bugfixes in here. First, an old bug:
\[[foo#0]] was displayed as [[foo]], losing the anchor,
because the anchor text "0" evaluated as false. Secondly, a new bug:
an email like foo#bar@baz should not check bestlink("foo@baz").
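The first fix amounts to the usual perl truthiness correction (a sketch):

    # wrong: a "0" anchor tests false and is silently dropped
    # if ($anchor) { $url .= "#$anchor"; }
    # right: test definedness and length instead
    if (defined $anchor && length $anchor) {
            $url .= "#".$anchor;
    }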
The following ways to create a link are supported now:
[[url]]
[[text|url]]
url can be one of the following:
- an internal wikilink: will be handled as before
- any other kind of URL, including mailto: proper links will be created:
<a href="url">url</a>
<a href="url">text</a>
- an email address:
<a href="mailto:url">url</a>
<a href="mailto:url">text</a>
For now, a rebuild is the only way to ensure the changed theme is used.
Ikiwiki normally will not realize style.css has changed, since themes
tend to have the same timestamp for the file.
A short story:
Once there was a unicode string, let's call him Srcdir.
Along came a crufty old File::Find, who went through a tree and pasted each
of the leaves in turn onto Srcdir. But this 90's relic didn't decode the
leaves -- despite some of them using unicode! Poor Srcdir, with these
leaves stuck on him, tainted them with his nice unicode-ness. They didn't
look like leaves at all, but instead garbage.
(In other words, perl's unicode support sucks mightily, and drives
us all to drink and bad storytelling. But we knew that..)
So, srcdir is not normally flagged as unicode, because typically it's pure
ascii. And in that case, things work ok; File::Find finds filenames, which
are not yet decoded to unicode, and appends them to the srcdir, and then
decode_utf8 happily converts the whole thing.
But, if the srcdir does contain utf8 characters, that breaks. Or, if a Yaml
setup file is used, Yaml::Syck's implicitunicode sets the unicode flag of
*all* strings, even those containing only ascii. In either case, srcdir
has the unicode flag set; a non-decoded filename is appended, and the flag
remains set; and decode_utf8 sees the flag and does *nothing*. The result
is that the filename is not decoded, so looks valid and gets skipped.
File::Find only sticks the directory and filenames together in no_chdir
mode .. but we need that mode for security. In order to retain the
security, and avoid the problem, I made it not pass srcdir to File::Find.
Instead, chdir to the srcdir, and pass ".". Since "." is ascii, the problem
is avoided.
Note that chdir srcdir is safe because we check for symlinks in the srcdir
path.
Note that it takes care to chdir back to the starting location, because
the user may have specified relative paths, and staying in the srcdir
might break them. A relative path could even be specified for an underlay
dir, so it chdirs back after each.
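The resulting scan is shaped like this (simplified sketch; the per-file
checks and underlay loop are elided):

    use File::Find;
    use Cwd;
    use Encode;

    my $origdir = getcwd();
    chdir($config{srcdir}) || die "chdir $config{srcdir}: $!";
    find({
            no_chdir => 1, # keep whole paths; needed for security checks
            wanted => sub {
                    # "." is pure ascii, so the assembled name carries
                    # no stray unicode flag and decode_utf8 really decodes
                    my $file = decode_utf8($File::Find::name);
                    $file =~ s!^\./!!;
                    # ... verify and record $file, relative to srcdir ...
            },
    }, '.');
    chdir($origdir) || die "chdir $origdir: $!";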
Removing a plugin from add_plugins is not always enough to disable it.
It may have been redundantly added there and also pulled in via goodstuff.
Always add disabled plugins to disable_plugins.
The bug here was that disabling a plugin included thru goodstuff, like
htmlscrubber, caused it to be added to disable_plugins, and those plugins
were never loaded, so could not be re-enabled. Fix by allowing them to be
force loaded when appropriate. (Also that allows disabled plugins to still
record their setup options when dumping a setup file.)
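In setup file terms (illustrative):

    # goodstuff pulls htmlscrubber back in even if it is removed from
    # add_plugins, so the disable must be explicit
    add_plugins => [qw{goodstuff}],
    disable_plugins => [qw{htmlscrubber}],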
* calendar: Shorten day names, and improve styling of month calendar.
* style.css: Reduced sidebar width back to 20ex from 30; the month calendar
will now fit in the smaller width, and 30 was feeling too large.
In particular, perl warns if a qw{} contains a #, but openids can contain one.
If the setup file has 'use warnings', it will turn warning messages back
on, so it seems reasonable to squelch them by default.
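For example (openid illustrative):

    # perl emits "Possible attempt to put comments in qw() list" here,
    # although the # is part of a legitimate openid
    banned_users => [qw{http://spammer.example/#fragment}],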
I've seen user(http://*) confuse someone who didn't know pagespecs into
thinking that just http://* would moderate all comments to every page, or
something like that.
Problem is that by the time rendering calls render_dependent, %pagesources
has had deleted files removed from it. So match_comment's lookup of
files in there to see if they had the _comment extension failed.
I had to introduce a hash that temporarily holds filenames of deleted pages
to fix this.
Note that unlike comment(), internal() had avoided this pitfall by being
defined to match both internal and non-internal pages.
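So match_comment consults both hashes now, roughly (reconstructed from
memory; the hash name %delpagesources is as I recall it, details may
differ):

    sub match_comment ($$;@) {
            my $page = shift;
            my $glob = shift;

            if (! IkiWiki::isinternal($page)) {
                    # fall back to the deleted-files hash, since the
                    # page has already been dropped from %pagesources
                    my $source = exists $IkiWiki::pagesources{$page}
                            ? $IkiWiki::pagesources{$page}
                            : $IkiWiki::delpagesources{$page};
                    my $type = defined $source
                            ? IkiWiki::pagetype($source) : undef;
                    if (! defined $type || $type ne "_comment") {
                            return IkiWiki::FailReason->new(
                                    "$page is not a comment");
                    }
            }

            return match_glob($page, "$glob/*", internal => 1, @_);
    }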