Recommend the LWPx::ParanoidAgent module where appropriate.
It is particularly important for openid, since unauthenticated users
can control which URLs that plugin will contact. Conversely, it is
non-critical for blogspam, since the URL to be contacted is under
the wiki administrator's control.
Signed-off-by: Simon McVittie <smcv@debian.org>
The simple implementation of this, which I'd prefer to use, would be:
if we can import LWPx::ParanoidAgent, use it; otherwise, use
LWP::UserAgent.
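In code, that would be roughly the following (a sketch, not the final
implementation):

    my $ua;
    eval {
        # Prefer LWPx::ParanoidAgent, which refuses to fetch
        # private/loopback addresses and non-web schemes.
        require LWPx::ParanoidAgent;
        $ua = LWPx::ParanoidAgent->new;
    };
    if (! defined $ua) {
        # Fall back to plain LWP::UserAgent if it isn't installed.
        require LWP::UserAgent;
        $ua = LWP::UserAgent->new;
    }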
However, aggregate has historically worked with proxies, and
LWPx::ParanoidAgent quite reasonably refuses to work with proxies
(because it can't know whether those proxies are going to do the same
filtering that LWPx::ParanoidAgent would).
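One way to reconcile the two is to fall back to LWP::UserAgent
whenever a proxy is configured (a sketch; the helper name is
illustrative):

    sub useragent {
        # LWPx::ParanoidAgent refuses to use proxies, so honour a
        # configured proxy with plain LWP::UserAgent instead.
        if ($ENV{http_proxy} || $ENV{https_proxy}) {
            require LWP::UserAgent;
            my $ua = LWP::UserAgent->new;
            $ua->env_proxy;    # obey http_proxy/https_proxy/no_proxy
            return $ua;
        }
        require LWPx::ParanoidAgent;
        return LWPx::ParanoidAgent->new;
    }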
Signed-off-by: Simon McVittie <smcv@debian.org>
This prevents the aggregate plugin from being used to read the contents
of local files via file:/// URLs.
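Even when falling back to LWP::UserAgent, much of the same protection
can be had by restricting the allowed schemes, along these lines
(sketch; protocols_allowed is standard LWP::UserAgent API):

    require LWP::UserAgent;
    my $ua = LWP::UserAgent->new(
        # Refuse file://, and any other non-web scheme, outright.
        protocols_allowed => [qw(http https)],
    );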
Signed-off-by: Simon McVittie <smcv@debian.org>
The input to filter hooks is meant to be the content of a source file
on disk. If we only filter once per (page, destpage) pair, and a page
is inlined into the same destpage more than once, then the second
occurrence will render as the result of htmlizing .po source as if it
were Markdown (or whatever the type of the corresponding master page
is), which is never going to end well.
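For reference, the guard being removed here was essentially a
per-(page, destpage) memoization of the po filter hook, along these
lines (simplified sketch, not the exact code):

    my %alreadyfiltered;

    sub filter (@) {
        my %params = @_;
        my $key = $params{page} . "/" . $params{destpage};
        # Returning early means the second inlined copy is never
        # converted from .po, and gets htmlized as raw .po source.
        return $params{content} if $alreadyfiltered{$key};
        $alreadyfiltered{$key} = 1;
        return po_to_markup($params{page}, $params{content});
    }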
The alreadyfiltered mechanism was added in commit 1e874b3f to avoid
preprocessing loops, but it is not clear how a loop could arise here:
filter hooks are only called from IkiWiki::filter, which is only called
on page content from disk or on proposed content being previewed.
According to <https://bugs.debian.org/911356#41>, deleting the
alreadyfiltered mechanism resolves the problem, as well as simplifying
the code.
Closes: #911356
Tested-by: intrigeri
JavaScript resources should be presented to browsers after CSS, and
"after the fold" (ATF), according to best practices:
https://developers.google.com/speed/docs/insights/mobile#PutStylesBeforeScripts
This change allows the browser to download JavaScript files in
parallel, by including JavaScript at the *closing* </body> tag instead
of at the opening tag.
We also improve the regex to tolerate spaces before the body tag, as
some templates have (proper) indentation for the tag.
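The substitution now looks something like this, as it might appear in
a format hook ($javascript stands in for the generated <script> tags):

    # \s* lets the match succeed when the template indents </body>,
    # and keeping it in $1 preserves that indentation.
    $params{content} =~ s!(\s*</body[^>]*>)!$javascript$1!i;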
My previous attempt to reproduce this bug used a non-alphanumeric
ASCII character, which is not currently accepted as a valid value
for rootpage, although for a "do what I mean" approach, perhaps we
should accept it and pass it through titlepage() or linkpage().
Using Chinese characters (which are considered to match [[:alnum:]]
even though the Chinese script is not, strictly speaking, an alphabet),
as in the original bug report, reproduces the bug.
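This is easy to confirm from Perl (assuming a UTF-8 source encoding):

    use utf8;
    # CJK ideographs count as alphabetic in Unicode, so POSIX-style
    # [[:alnum:]] accepts them, while "%" does not match.
    my $han = "中文";
    print $han =~ /^[[:alnum:]]+$/ ? "alnum\n" : "not alnum\n";  # alnum
    print "%" =~ /[[:alnum:]]/ ? "alnum\n" : "not alnum\n";      # not alnum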
Signed-off-by: Simon McVittie <smcv@debian.org>
By processing the page names through linkpage(), we let users specify
page names that contain non-alphanumerics in a more natural way.
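For instance (illustrative input; linkpage() is the existing IkiWiki
helper for turning link text into a page name):

    use IkiWiki;
    # linkpage() escapes any characters that page names may not
    # contain, so the user can write the name as they would in a
    # wikilink.
    my $rootpage = IkiWiki::linkpage("sandbox/ideas & notes");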
Signed-off-by: Simon McVittie <smcv@debian.org>
This is one of several possible bug reports on
"doc/bugs/About %2F problem" (I'm not sure what the actual bug being
reported is).
Signed-off-by: Simon McVittie <smcv@debian.org>