When building ikiwiki from a tarball, the mtime (conceptually, the
last modification date of the file) is preserved by tar, but the inode
change time (creation/metadata-change date of *this copy* of the file)
is not. This seems to lead to unstable sort ordering and
unreproducible builds.
The page can't possibly have been modified before it was created, so
we can assume that the modification date is an upper bound for the
creation date.
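A minimal sketch of the resulting clamp (the real code differs in detail;
the path handling is illustrative):

    # ctime from a tarball extraction is the extraction time, which is
    # useless for stable sorting; a file cannot have been modified before
    # this copy of it existed, so clamp ctime down to mtime.
    my $srcfile = shift @ARGV;                     # path is illustrative
    my ($mtime, $ctime) = (stat $srcfile)[9, 10];
    $ctime = $mtime if $ctime > $mtime;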
This doesn't prevent memory from being used to track what we have
and haven't scanned, but it does make it temporary.
Unlike the earlier version of this optimization, this one only applies to
rebuilds, which avoids breaking the templatebody plugin.
This reverts commit c04a26f3e7, which
turns out to break the templatebody directive: readtemplate() relies
on scan() populating %templates, but if scan() is a no-op after
leaving the scan phase, we can't rely on that.
The assumption made by skipping scan() after the end of the render phase
is that everything that comes from a scan is already in the index.
However, we don't really want to put template bodies in the index:
that would force us to load and save them on every refresh, and
redundantly persist them to disk.
Test-case:
% make clean
% ./Makefile.PL
% make
% grep -E '<div class="notebox">|Use this template to' html/sandbox.html
% touch doc/sandbox/New_blog_entry.mdwn # sandbox inlines this
% make
% grep -E '<div class="notebox">|Use this template to' html/sandbox.html
Good result: html/sandbox.html contains <div class="notebox"> both times
Bad result: html/sandbox.html contains "Use this template to..." the
second time
This avoids nasty surprises on upgrade if a site is using httpauth,
or passwordauth with an account_creation_password, and relying on
only a select group of users being able to edit the site. We can revisit
this for ikiwiki 4.
This was needed due to emailauth, but I've also wrapped all IP address
exposure in cloak(), although the function doesn't yet cloak IP addresses.
(One IP address I didn't cloak is the one that appears on the password
reset email template. That is expected to be the user's own IP address,
so ok to show it to them.)
Thanks to smcv for the pointer to
http://xmlns.com/foaf/spec/#term_mbox_sha1sum
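For reference, the cloaking idea boils down to publishing a SHA-1 of the
mailto: URI instead of the address itself; a sketch (the address is
illustrative):

    use Digest::SHA qw(sha1_hex);

    # FOAF-style mbox_sha1sum: hash the mailto: URI so the page never
    # contains the raw address for spammers to harvest.
    my $email   = 'user@example.com';
    my $cloaked = sha1_hex('mailto:' . $email);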
There's no real problem if they do change it, except they may get confused
and expect to be able to log in with the changed email and get the same
user account.
This makes the email not be displayed on the wiki, so spammers won't find
it there.
Note that the full email address is still put into the comment template.
The email is also used as the username of the git commit message
(when posting comments or page edits). May want to revisit this later.
This includes some CSS changes to names of elements.
Also, added Email login button (doesn't work yet of course),
and brought back the small openid login buttons. Demoted yahoo and verisign
to small buttons. This makes the big buttons be the main login types, and
the small buttons be provider-specific helpers.
[[forum/refresh_and_setup]] indicates some confusion between --setup
and -setup. Both work, but it's clearer if we stick to one in
documentation and code.
A 2012 commit to [[plugins/theme]] claims that "-setup" is required
and "--setup" won't work, but I cannot find any evidence in ikiwiki's
source code that this has ever been the case.
Commit feb21ebfac added a
safe_decode_utf8 function that avoids double decoding on Perl 5.20.
But the Perl behavior change actually happened in Encode.pm 2.53
(https://github.com/dankogai/p5-encode/pull/11). Although Perl 5.20
is the first Perl version to bundle an affected version of Encode.pm,
it’s also possible to upgrade Encode.pm independently; for example,
Fedora 20 has Perl 5.18.4 with Encode.pm 2.54. On such a system,
editing a non-ASCII file still fails with errors like
Error: Cannot decode string with wide characters at
/usr/lib64/perl5/vendor_perl/Encode.pm line 216.
There doesn’t seem to be any reason not to check Encode::is_utf8 on
old versions too, so just remove the version check altogether.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Bug-Debian: https://bugs.debian.org/776181
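Roughly, the check without the version gate looks like this (a sketch, not
the exact code):

    use Encode;

    # Only decode when the scalar still holds raw octets, regardless of
    # which Perl or Encode.pm version is in use.
    sub safe_decode_utf8 {
        my $octets = shift;
        return Encode::is_utf8($octets) ? $octets : decode_utf8($octets);
    }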
Mobile browsers typically assume that arbitrary web pages are
designed for a "desktop-sized" browser window (around 1000px)
and display that layout, zoomed out, in order to avoid breaking
naive designs that assume nobody will ever look at a website on
a phone or something. People who are actually doing "responsive
design" need to opt-in to mobile browsers rendering it at a
more normal size.
We're running under "use strict" here, so if CGI->param's array-context
misbehaviour passes an extra non-ref parameter, it shouldn't be executed
anyway... but it's as well to be safe.
[commit message added by smcv]
CGI->param has the misfeature that it is context-sensitive, and in
particular can expand to more than one scalar in function calls.
This led to a security vulnerability in Bugzilla, and recent versions
of CGI.pm will warn when it is used in this way.
In the situations where we do want to cope with more than one parameter
of the same name, CGI->param_fetch (which always returns an
array-reference) makes the intention clearer.
[commit message added by smcv]
When CGI->param is called in list context, such as in function
parameters, it expands to all the potentially multiple values
of the parameter: for instance, if we parse query string a=b&a=c&d=e
and call func($cgi->param('a')), that's equivalent to func('b', 'c').
Most of the functions we're calling do not expect that.
I do not believe this is an exploitable security vulnerability in
ikiwiki, but it was exploitable in Bugzilla.
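A self-contained sketch of the hazard and the safer alternatives (the query
string and func() are illustrative):

    use CGI;

    sub func { printf "called with %d argument(s)\n", scalar @_ }  # stand-in

    my $q = CGI->new('a=b&a=c&d=e');

    # List context: param() expands to every value, so this is func('b', 'c').
    func($q->param('a'));

    # Forcing scalar context passes a single value ('b').
    func(scalar $q->param('a'));

    # When multiple values really are wanted, param_fetch() says so explicitly.
    my @all = @{ $q->param_fetch('a') };   # ('b', 'c')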
According to caniuse.com, a significant fraction of Web users are
still using Internet Explorer versions that do not support HTML5
sectioning elements. However, claiming we're XHTML 1.0 Strict
means we can't use features invented in the last 12 years, even if
they degrade gracefully in older browsers (like the role and placeholder
attributes).
This means our output is no longer valid according to any particular
DTD. Real browsers and other non-validator user-agents have never
cared about DTD compliance anyway, so I don't think this is a real loss.
checksessionexpiry's signature changed from
(CGI::Session, CGI->param('sid')) to (CGI, CGI::Session) in commit
985b229b, but editpage still passed the sid as a useless third
parameter, and this was later cargo-culted into remove, rename and
recentchanges.
The intention was that signed-in users (for instance via httpauth,
passwordauth or openid) are already adequately identified, but
there's nothing to indicate who an anonymous commenter is unless
their IP address is recorded.
srcfile_stat got called on a file from the underlay that no longer existed.
I am not 100% sure of the circumstances of that; I was able to reproduce
the bug but neglected to snapshot the tree, and then accidentally
got it to stop crashing. I know that a transient tag page got deleted using
the web interface to trigger the crash.
It seems that process_changed_files must have returned the file despite it
being deleted. And since the file was not checked into git, it seems it
cannot have been in @IkiWiki::underlayfiles either, since being listed there
would have caused process_changed_files not to return it.
I do not know why a transient tag page would not be in
@IkiWiki::underlayfiles. There is a bug here that I don't understand.
This is just a workaround -- run srcfile_stat such that it won't crash,
and if it is unable to stat a file, find_changed knows it's not changed,
so it's ok to skip it.
Also made find_new_files run srcfile_stat such that it won't crash, just
because I was there.
A site owner may not be able to install prerequisite Perl modules in the
systemwide locations. They may have to be installed under the home directory,
such as by using local::lib (which is how the cPanel Perl-module installer
works, on systems that use it).
For that to work, the local::lib-defined value for PERL5LIB must be in
the environment when Perl starts up. The way %config{ENV} was formerly
handled was too late: it depended on the Perl code to unpack it from the
storable and put it into the environment.
The easy solution is to build the wrapper so that it repopulates the
environment from %config{ENV} before ever exec'ing Perl (and then removes it
from the storable, as there is nothing more that the Perl code will need to
do with it).
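For context, the setup-file side looks roughly like this (the path is
illustrative); with this change the wrapper exports these variables itself
before ever exec'ing Perl:

    # Excerpt from an ikiwiki setup file: extra environment for the wrappers.
    ENV => {
        PERL5LIB => '/home/user/perl5/lib/perl5',   # path is illustrative
    },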
The old name still works, if its value is numeric.
This name allows a non-numeric "show" to mean the same thing
it does for [[!map]] (show title, show description, etc.).
The `time` variable contains a fixed-format time, guaranteed suitable
for parsing by timedate.
The `formatted_time` variable contains the same time formatted by
IkiWiki::formattime.
I want to make GUIDs for my RSS feeds that don't change when I move
pages around. To that end, I've used UUID::Tiny to generate a
version 4 (random) UUID that is presented in a `uuid` variable in
the template.
At that point, you can do something like this:
[[!meta guid="urn:uuid:<TMPL_VAR uuid>"]]
In recent versions of highlight there can be more than one langdefdir.
This patch fixes the ensuing hilarity when the user adds a single
highlight lang definition and highlight.pm expects all definitions to
be in the same place.
in analogy to sparklines, this renders scaled imgs to
data:image/...;base64,... urls in preview mode.
if the image is already present on the server (eg because it was not
just inserted), the already rendered image is referenced instead.
there is now a size calculating part (which chooses a final size) and a
scaling part (which triggers if the sizes calculated by the former
indicate a downscaling).
this solves the issue of disproportionate upscaling
(bugs/image_rescaling_distorts_with_small_pictures).
also, "small" pdf files (or pdf files without explicit size settings),
which would not be converted under the old mechanism, now get rendered
to pngs.
this commit affects a unit test: while svgs were previously
unconditionally rendered to pngs, this now only happens on downscaling.
this is intentional -- while a small version of an svg graphic is
likely to be more compact when rendered (eg as a preview), a large
version would not have that benefit, and there is no reason to convert
something browsers can basically show natively, or to be inconsistent
with how other images are handled. the new unit test simply makes the
original svg larger to check for the same behaviour as before.
pagespec_match_list() makes the current page depend on the pagespec
being matched, so if you use [[!trailoptions sort="..."]] to force
a sort order, the trail ends up depending on internal(*) and is
rebuilt whenever anything changes. Add a new sort_pages() and use that
instead.
If someone has explicitly disabled the postform, it seems reasonable
from a least-astonishment point of view for that to take precedence
over rootpage, even though that makes rootpage useless.
Also add a regression test; so far, this is all it tests.
After this change, autoindex also creates index pages for empty directories
included in underlays, but only if it isn't going to commit them to the
srcdir ($config{autoindex_commit} = 0).
Inspired by a patch from Tuomas Jormola.
Bug-Debian: http://bugs.debian.org/611068
It's not clear that the transient underlay should be excluded from
indexing; see [[bugs/transient autocreated tagbase is not transient
autoindexed]].
In any case, the code that checks what directories might need indexes
specifically checks for the srcdir anyway, so the only effect this extra
check can have is negative (it could fail to notice files in the
transient underlay and attempt to recreate them unnecessarily).
this allows picking a page from a pdf. also, this enhances performance
greatly when rendering pdfs, as only the first page is rasterized.
(otherwise, imagemagick would treat the pdf as a list of images, work
with all of them, and only save the first page at the end). the default
parameter of 0 will select the single image contained in typical image
files anyway, so no special-casing between single- and multi-file
containers is needed.
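the underlying Image::Magick behaviour this relies on is the "[n]" index
suffix when reading; a sketch (filenames are illustrative):

    use Image::Magick;

    my $im  = Image::Magick->new;
    # reading "document.pdf[0]" rasterizes only the first page instead of
    # treating the pdf as a whole list of images.
    my $err = $im->Read('document.pdf[0]');
    die "$err" if "$err";
    $err = $im->Write('document.png');
    die "$err" if "$err";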
imagemagick, when reading an image, sets its magick parameter to
indicate the file type, overriding an output file type that was
explicitly set when the object was created.
as a result, previously (with graphicsmagick-libmagick-dev-compat
1.3.18-1 providing Image::Magick), svg output files were neither png
nor svg, but mvg (imagemagick vector graphics).
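one way to pin the output coder regardless of what Read() detected is to
name it explicitly when writing; a sketch, not necessarily the exact fix
(filenames are illustrative):

    use Image::Magick;

    my $im  = Image::Magick->new;
    my $err = $im->Read('image.svg');
    die "$err" if "$err";
    # reading set the object's magick to the input type; prefixing the
    # output filename with the intended coder overrides that for the write.
    $err = $im->Write('png:image.png');
    die "$err" if "$err";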
This doesn't prevent memory from being used to track what we have
and haven't scanned, but it does make it temporary. The existing
%rendered hash, which is filled afterwards, will be larger than %scanned
in practice anyway: %scanned will contain an entry for each page
that changed, plus an entry for each template used by templatebody,
whereas %rendered will contain an entry for each page that changed
plus an entry for each page rendered due to links or dependencies.
In the scan phase, it's too early to match pagespecs or sort pages;
in the render phase, both of those are OK.
It would be possible to add phases later, renumbering them if necessary
to maintain numerical order.
If it does nothing when a page has already been scanned, we can use it
at any time to force a page to be scanned. In particular, the
templatebody plugin is going to need this.
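A sketch of the idempotence guard (the real scan() does much more than this):

    my %scanned;

    sub scan {
        my $file = shift;
        # Doing nothing for already-scanned pages means callers such as
        # templatebody can call scan() whenever they need to be sure a
        # page's metadata has been gathered, without duplicating work.
        return if $scanned{$file};
        $scanned{$file} = 1;
        # ... the actual scanning work happens here ...
    }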
Previously, if a page like `plugins/trail` contained a conditional like
[[!if test="backlink(plugins/goodstuff)" all=no]]
(which it gets via `templates/gitbranch`), then the
[[plugins/conditional]] plugin would give `plugins/trail` a dependency on
`(backlink(plugins/goodstuff)) and plugins/trail`. This dependency is
useless: that pagespec can never match any page other than
`plugins/trail`, but if `plugins/trail` has been modified or deleted,
then it's going to be rendered or deleted *anyway*, so there's no point
in spending time evaluating match_backlink for it.
Conversely, the influences from the result were not taken into account,
so `plugins/trail` did not have the
`{ "plugins/goodstuff" => $DEPEND_LINKS }` dependency that it should.
Invert that, depending on the influences but not on the test.
Bug: http://ikiwiki.info/bugs/editing_gitbranch_template_is_really_slow/
As noted in the Try::Tiny man page, eval/$@ can be quite awkward in
corner cases, because $@ has the same properties and problems as C's
errno. While writing a regression test for definetemplate
in which it couldn't find an appropriate template, I received
<span class="error">Error: failed to process template
<span class="createlink">deftmpl</span> </span>
instead of the intended
<span class="error">Error: failed to process template
<span class="createlink">deftmpl</span> template deftmpl not
found</span>
which turned out to be because the "catch"-analogous block called
gettext before it used $@, and gettext can call define_gettext,
which uses eval.
This commit alters all current "catch"-like blocks that use $@, except
those that just do trivial things with $@ (string interpolation, string
concatenation) and call a function (die, error, print, etc.).
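A self-contained sketch of the failure mode and the fix (the gettext, error
and render stand-ins are illustrative; IkiWiki's real gettext can call
define_gettext, which uses eval):

    sub gettext { my $s = shift; eval { 1 }; return $s }   # stand-in: resets $@
    sub error   { print STDERR "Error: @_\n" }             # stand-in
    sub render  { die "template deftmpl not found\n" }     # stand-in failure

    eval { render() };
    if ($@) {
        # Fragile: gettext() runs before $@ is interpolated, so by the
        # time the message is built, $@ is already the empty string.
        error(gettext("failed to process template")." ".$@);
    }

    eval { render() };
    if (my $err = $@) {
        # Safe: $@ was copied into a lexical before anything could clobber it.
        error(gettext("failed to process template")." ".$err);
    }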
git's behaviour when doing "git push origin" is configurable, and the
default is going to change in 2.0. In particular, if you've set
push.default to "nothing", the regression test will warn:
fatal: You didn't specify any refspecs to push, and push.default
is "nothing".
'git push origin' failed: at .../lib/IkiWiki/Plugin/git.pm line 220.
Package: ikiwiki
Version: 3.20140125
Severity: wishlist
By default, LWP::UserAgent used by IkiWiki to perform outbound HTTP
requests sends the string "libwww-perl/<version number>" as User-Agent
header in HTTP requests. Some blogging platforms have blacklisted the
user agent and won't serve any content for clients using this user agent
string. With the new IkiWiki configuration option "useragent" it is now
possible to define a custom string to be used as the value of the
User-Agent header.
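In the setup file this is just another option; a sketch (the value is
illustrative):

    # Excerpt from an ikiwiki setup file: override the default
    # "libwww-perl/<version>" User-Agent string.
    useragent => 'ikiwiki/3.20140125 (+http://wiki.example.org/)',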
While here, mollify http://validator.w3.org/feed/ and
s/dcterms:creator/dc:creator/g, which happens to make rss2email see
and do nice things with authors.
Not sure if this is needed to avoid it trying to run an editor. Probably
there is never a controlling terminal and probably git notices and does
nothing. But I'm just copying what I have in git-annex assistant here.
(Although with a much worse git version comparison, that only really works
due to luck.)
I saw this happen with calendar, when it wanted to update a page that
had a calendar on it, but the page had just been deleted. This caused
srcfile_stat to crash.
RPC::XML uses ascii as its default encoding, so we have to tell it to use
utf8. Without this, ikiwiki returns "failed to get response from blogspam
server" every time a non-ascii character appears in content that needs
checking.
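A sketch of the client setup (the server URL is illustrative):

    use RPC::XML;
    use RPC::XML::Client;

    # RPC::XML encodes requests as us-ascii unless told otherwise, which
    # breaks as soon as the submitted content contains non-ascii text.
    $RPC::XML::ENCODING = 'utf-8';

    my $client = RPC::XML::Client->new('http://rpc.example.org/');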