* aggregate: Fix a bug introduced by the conversion of the filter hook to
the needsbuild hook. This resulted in feeds not being removed when pages
were updated, and probably other bugs.
* aggregate: Avoid uninitialised value warning when removing a feed that
has an expired guid.
* meta: Remove support for "meta link", since supporting it for internal
links required meta to be run during scan, which complicated its data
storage: it had to clear data stored during the scan pass to avoid
duplicating it during the normal preprocessing pass.
* If you used "meta link", you should switch to either "meta openid" (for
openid delegations), or tags (for internal, invisible links). I assume
that nobody really used "meta link" for external, non-openid links, since
the htmlscrubber ate those. (Tell me differently and I'll consider bringing
back that support.)
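For reference, the replacements look like this in a page (a sketch; the
OpenID URL and tag name are illustrative):

    [[meta openid="http://yourname.example.com/"]]
    [[tag mytopic]]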
* meta: Improved data storage.
* meta: Drop the hackish filter hook that was used to clear
stored data before preprocessing; this hack was ugly and broken (cf.
liw's disappearing openids).
* aggregate: Convert filter hook to a needsbuild hook.
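For plugin authors, the change amounts to registering a needsbuild hook
instead of a filter hook; a minimal sketch (the hook body is illustrative):

    hook(type => "needsbuild", id => "aggregate", call => \&needsbuild);

    sub needsbuild {
        my $needsbuild=shift; # ref to the list of source files to be built
        # inspect or modify the list here
    }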
* aggregate: Fix an expiry bug: removing a feed could cause a
page name to be expired and reused for several distinct guids. When this
happened, the expiry code counted each past guid that had used that page
name as a currently existing page, and thus expired too many pages.
* aggregate: ikiwiki now avoids loading state
until the wiki is building and already locked, unless it's aggregating.
When aggregating, it does not wait for the lock if it cannot get it, and
instead exits, to prevent aggregating processes from piling up.
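A sketch of that non-blocking lock, using flock (the lockfile name and
exit behaviour are illustrative):

    use Fcntl q{:flock};

    open(my $lockfh, '>', "$config{wikistatedir}/lockfile")
        or error("cannot open lockfile: $!");
    if (! flock($lockfh, LOCK_EX | LOCK_NB)) {
        # another process holds the lock; exit rather than pile up
        exit 0;
    }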
* The calling convention for functions in the IkiWiki::PageSpec namespace
has changed so they are passed named parameters. This allows
for extended pagespecs. The old calling convention will still work for
back-compat for now.
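Under the new convention, a match function takes the page, its argument,
and then named parameters; a sketch (match_foo is illustrative; match_glob
is the existing glob matcher):

    package IkiWiki::PageSpec;

    sub match_foo {
        my $page=shift;
        my $glob=shift;
        my %params=@_; # named parameters, e.g. location => the page containing the spec
        # custom match logic would go here; fall back to a glob match
        return match_glob($page, $glob, %params);
    }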
* Plugin interface version increased to 2.00 since I don't anticipate any
more interface changes before 2.0.
* Add a "usedirs" option that makes ikiwiki use foo/index.html instead
of foo.html as output page names. It is not yet enabled by default.
Thanks to everyone who worked on and supported creating it (especially
Tumov).
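To try it, turn the option on in the setup file (a sketch):

    # in ikiwiki.setup
    usedirs => 1,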
* Improve error handling when writing files, including out of disk space
situations. ikiwiki should never leave truncated files, and if the error
occurs during a web-based file edit, the user will be given an
opportunity to retry.
Inspired by the many ways Moin Moin destroys itself when out of disk. :-)
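The general shape of such careful writes, checking every step (a sketch of
the idea, not necessarily ikiwiki's exact code):

    open(my $out, '>', $file) or error("failed to write $file: $!");
    print $out $content or error("failed writing to $file: $!");
    close($out) or error("failed saving $file: $!"); # catches ENOSPC on flush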
* Fix syslogging of errors.
* Use precalculated backlinks info when determining if files need an update
due to a page they link to being added/removed. Mostly significant if
there are lots of pages.
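Precalculating backlinks means inverting %links once, instead of rescanning
it for every page; a sketch using ikiwiki's globals (the %backlinks hash is
illustrative):

    my %backlinks;
    foreach my $page (keys %links) {
        foreach my $link (@{$links{$page}}) {
            my $best=bestlink($page, $link);
            push @{$backlinks{$best}}, $page if length $best;
        }
    }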
* Remove duplicate link info when saving index. In some cases it could
pile up rather badly. (Probably not the best way to deal with this
problem.)
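De-duplicating the stored links is a short pass per page; a sketch:

    foreach my $page (keys %links) {
        my %seen;
        $links{$page}=[grep { ! $seen{$_}++ } @{$links{$page}}];
    }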
* Reworked the plugin interface:
  - Plugins should not need to load IkiWiki::Render to get commonly
    used functions, so moved some functions from there to IkiWiki.
  - Picked out the set of functions and variables that most plugins
    use, documented them, and made IkiWiki export them by default,
    like a proper perl module should.
  - Use the other functions at your own risk.
  - This is not quite complete; I still have to decide whether to
    export some other things.
* Changed all plugins included in ikiwiki to not use "IkiWiki::" when
referring to stuff now exported by the IkiWiki module.
* Anyone with a third-party ikiwiki plugin is strongly encouraged
to make similar changes to it and avoid use of non-exported symbols from
"IkiWiki::"; see the plugin skeleton sketched below.
* Link debian/changelog and debian/news to NEWS and CHANGELOG.
* Support hyperestraier version 1.4.2, which adds a new required phraseform
setting.
* Add --version.
* Man page format fixups.
* Add a %pagecase hash that maps lower-case page names to the actual case
used in the filename. Use this in the bestlink calculation instead of
forcing the link to lowercase.
* Also use %pagecase in various other places that want to check if a page
with a given name exists.
* This means that links to pages with mixed case names will now work,
even if the link is in some other case mixture, and mixed case pages
should be fully supported throughout ikiwiki.
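%pagecase is just a lower-case index over the known pages; roughly (a
sketch, using the %pagesources global):

    my %pagecase;
    foreach my $page (keys %pagesources) {
        $pagecase{lc $page}=$page;
    }
    # case-insensitive existence check / link resolution:
    my $target=$pagecase{lc $link};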
* Recommend rebuilding wikis on upgrade to this version.
* Add permalink and author support to meta plugin, affecting RSS feeds
and blog pages.
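Usage in a page looks like this (values illustrative):

    [[meta permalink="http://example.com/blog/posts/hello/"]]
    [[meta author="A. N. Author"]]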
* Change titlepage() to encode utf-8 alnum characters. This is necessary
to avoid UTF-8 creeping into filenames in urls. (There are still
some other ways that it can get in.)
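A sketch close to titlepage()'s approach after the change (the exact set of
safe characters may differ):

    sub titlepage {
        my $title=shift;
        # pass through plain ASCII alphanumerics and a few safe
        # characters; spaces become underscores, everything else
        # (including utf-8 alnums) becomes __<ord>__
        $title=~s/([^-A-Za-z0-9_:+\/.])/$1 eq ' ' ? '_' : '__'.ord($1).'__'/eg;
        return $title;
    }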