Now aggregation will not lock the wiki. Any changes made during aggregation
are merged with the changed state accumulated while aggregating. A separate
lock file prevents multiple concurrent aggregators. Garbage collection
of orphaned guids is much improved. loadstate() is only called once
per process, so tricky support for reloading wiki state is not needed.
(Tested fairly thoroughly.)
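A minimal sketch of the merge-on-save idea in Perl (illustrative only, not
ikiwiki's actual code; the state file path, the Storable format, and the
%dirty bookkeeping are all assumptions):

    use strict;
    use warnings;
    use Storable qw(retrieve nstore);      # stand-in for ikiwiki's own state format

    my $statefile = ".ikiwiki/aggregate";  # hypothetical location
    my %dirty;                             # guids this aggregation run modified

    sub mergesave {
        # Re-read whatever changed on disk while we were aggregating, then
        # overlay only the guids we touched, so neither side's changes are lost.
        my $ondisk = -e $statefile ? retrieve($statefile) : {};
        $ondisk->{$_} = $dirty{$_} foreach keys %dirty;
        nstore($ondisk, $statefile);
    }
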
gettext choked on a Unicode apostrophe in the aggregate plugin; the
apostrophe appeared in a new error message added in commit 4f872b5633.
Replace it with an ASCII apostrophe.
the code, since that process can change internal state as needed, and
it will automatically be cleaned up for the parent process, which proceeds
to render the changes.
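A minimal sketch of that fork-a-child structure (generic Perl, not ikiwiki's
actual code; the elided steps are marked in comments):

    use strict;
    use warnings;

    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {
        # Child: free to change internal state however it likes while
        # aggregating; everything it did in memory vanishes when it exits.
        # ... fetch feeds, write aggregated state to disk ...
        exit 0;
    }
    waitpid($pid, 0);
    die "aggregation failed" if ($? >> 8) != 0;
    # Parent: its in-memory state was never touched, so it simply proceeds
    # to render the changes the child left on disk.
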
the needsbuild hook. This resulted in feeds not being removed when pages
were updated, and probably other bugs.
* aggregate: Avoid uninitialised value warning when removing a feed that
has an expired guid.
links required meta to be run during scan, which complicated its data
storage, since it had to clear data stored during the scan pass to avoid
duplicating it during the normal preprocessing pass.
* If you used "meta link", you should switch to either "meta openid" (for
openid delegations), or tags (for internal, invisible links). I assume
that nobody really used "meta link" for external, non-openid links, since
the htmlscrubber ate those. (Tell me differently and I'll consider bringing
back that support.)
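Roughly, the migration looks like the following (directive spellings use the
pre-3.0 syntax without the "!" prefix; the URLs, page names, and the old
rel= form are illustrative assumptions):

    old, no longer supported:    [[meta link="http://example.com/openid" rel="openid.delegate"]]
    openid delegation now:       [[meta openid="http://example.com/openid"]]
    internal, invisible link:    [[tag somepage]]
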
* meta: Improved data storage.
* meta: Drop the hackish filter hook that was used to clear
stored data before preprocessing; this hack was ugly and broken (cf:
liw's disappearing openids).
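A rough sketch of the storage approach (illustrative Perl, not the meta
plugin's actual code; the hash layout and the simplified directive regex are
assumptions):

    use strict;
    use warnings;

    my %metadata;    # $metadata{$page}{$field} = $value, gathered at scan time

    sub scan {
        my ($page, $content) = @_;
        delete $metadata{$page};           # forget data left over from the last build
        while ($content =~ /\[\[meta\s+(\w+)="([^"]*)"\]\]/g) {
            $metadata{$page}{$1} = $2;     # stored once, during the scan pass
        }
        # The preprocess pass can read %metadata instead of re-storing it,
        # so no filter hook is needed to clear stale data first.
    }
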
* aggregate: Convert filter hook to a needsbuild hook.
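For reference, a needsbuild hook registration looks roughly like this (the
plugin name here is hypothetical; hook() and the needsbuild hook type are
part of the documented plugin interface, and the hook receives the list of
source files about to be rebuilt):

    package IkiWiki::Plugin::aggregate_sketch;   # hypothetical plugin name
    use warnings;
    use strict;
    use IkiWiki 2.00;

    sub import {
        hook(type => "needsbuild", id => "aggregate_sketch", call => \&needsbuild);
    }

    sub needsbuild {
        my $needsbuild = shift;   # array ref of source files due to be rebuilt
        # ... load state once per refresh and react to the pages listed ...
        return $needsbuild;
    }

    1;
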
page name to be expired and reused for several distinct guids. When this
happened, the expiry code counted each past guid that had used that page
name as a currently existing page, and thus expired too many pages.
until the wiki is building and already locked, unless it's aggregating.
When aggregating, it does not wait for the lock if it cannot get it, and
instead exits, to prevent aggregating processes from piling up.
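That amounts to a non-blocking lock attempt, roughly like this (generic Perl;
the lock file name is an assumption):

    use Fcntl qw(:flock);

    open(my $lock, '>', '.ikiwiki/aggregatelock') or die "open lock: $!";
    if (! flock($lock, LOCK_EX | LOCK_NB)) {
        # Someone else is already aggregating; exit rather than pile up.
        exit 0;
    }
    # ... aggregate ...
    flock($lock, LOCK_UN);
    close $lock;
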
for extended pagespecs. The old calling convention will still work for
back-compat for now.
* The calling convention for functions in the IkiWiki::PageSpec namespace
has changed so they are passed named parameters.
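A minimal sketch of a match function under the new convention (the function
name match_sketch is hypothetical, and delegating to match_glob assumes
IkiWiki is loaded and that helper keeps its behaviour):

    package IkiWiki::PageSpec;

    sub match_sketch {
        my $page   = shift;   # page being tested against the pagespec
        my $arg    = shift;   # argument given to sketch() in the pagespec
        my %params = @_;      # named parameters, e.g. location => "index"
        return match_glob($page, $arg, %params);
    }
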
* Plugin interface version increased to 2.00 since I don't anticipate any
more interface changes before 2.0.
on and supported creating it (especially Tumov). This adds a "usedirs"
option that makes ikiwiki use foo/index.html instead of foo.html as
output page names. It is not yet enabled by default.
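Enabling it is a one-line addition to the setup file; a hypothetical excerpt
(wiki name and paths are placeholders):

    use IkiWiki::Setup::Standard {
        wikiname => "mywiki",
        srcdir   => "/home/me/wiki",
        destdir  => "/var/www/wiki",
        usedirs  => 1,    # write foo/index.html instead of foo.html
    };
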
including out of disk space situations. ikiwiki should never leave
truncated files, and if the error occurs during a web-based file edit,
the user will be given an opportunity to retry.
Inspired by the many ways MoinMoin destroys itself when out of disk. :-)
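The usual way to guarantee that is to write to a temporary file, check every
step, and rename into place only on success; a minimal sketch (not ikiwiki's
actual writefile(), and the temporary file naming is an assumption):

    use strict;
    use warnings;

    sub careful_write {
        my ($dest, $content) = @_;
        my $tmp = "$dest.new";
        open(my $fh, '>', $tmp) or die "open $tmp: $!";
        print $fh $content      or die "write $tmp: $!";
        close $fh               or die "close $tmp: $!";  # catches ENOSPC at flush time
        rename($tmp, $dest)     or die "rename $tmp -> $dest: $!";
        # A failure anywhere above leaves $dest untouched, never truncated.
    }
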
* Fix syslogging of errors.
* Use precalculated backlinks info when determining if files need an update
due to a page they link to being added/removed. Mostly significant if
there are lots of pages.
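The precalculation is essentially an inversion of the per-page link table,
roughly (illustrative Perl, not ikiwiki's exact data structures):

    my %links;        # $links{$page} = [ pages $page links to ]
    my %backlinks;    # $backlinks{$dest}{$src} = 1

    sub calculate_backlinks {
        %backlinks = ();
        foreach my $page (keys %links) {
            $backlinks{$_}{$page} = 1 foreach @{$links{$page}};
        }
    }
    # "Which pages link to $page?" is now keys %{$backlinks{$page}},
    # instead of a scan over every page's link list.
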
* Remove duplicate link info when saving index. In some cases it could
pile up rather badly. (Probably not the best way to deal with this
problem.)
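One way to strip the duplicates while saving (illustrative only, not
necessarily how ikiwiki does it): keep the first occurrence of each link and
drop the rest:

    sub uniq_links {
        my %seen;
        return grep { ! $seen{$_}++ } @_;
    }
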
- Plugins should not need to load IkiWiki::Render to get commonly
used functions, so moved some functions from there to IkiWiki.
- Picked out the set of functions and variables that most plugins
use, documented them, and made IkiWiki export them by default,
like a proper perl module should.
- Use the other functions at your own risk.
- This is not quite complete; I still have to decide whether to
export some other things.
* Changed all plugins included in ikiwiki to not use "IkiWiki::" when
referring to stuff now exported by the IkiWiki module.
* Anyone with a third-party ikiwiki plugin is strongly encouraged
to make similar changes to it and avoid use of non-exported symbols from
"IkiWiki::"; see the example below.
* Link debian/changelog and debian/news to NEWS and CHANGELOG.
* Support hyperestraier version 1.4.2, which adds a new required phraseform
setting.