Made add_autofile take a generator function, and simply register the
autofile for possible later creation. The testing is moved into Render,
which allows some related code to be cleaned up.
* Automatically run --gettime the first time ikiwiki is run on
a given srcdir.
* Optimise --gettime for git, so it's appropriately screamingly
fast. (This could be done for other backends too.)
* However, --gettime for git no longer follows renames.
* Use the above to fix up timestamps on docwiki, as well as ensure that
timestamps on basewiki files shipped in the deb are sane.
* Rename --getctime to --gettime. (The old name still works for
backwards compatibility.)
* --gettime now also looks up last modification time.
* Add rcs_getmtime to plugin API; currently only implemented
for git (see the sketch below).
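A sketch of how the optimised rcs_getmtime can work for git (illustrative,
not ikiwiki's exact code): run git log just once, newest commit first, and
cache the most recent commit time seen for each file, instead of running
git once per file.

    my %mtime_cache;
    sub rcs_getmtime {
        my $file = shift;
        if (! %mtime_cache) {
            my $date;
            open(my $log, "-|", "git", "log",
                "--pretty=format:%ct", "--name-only")
                or die "git log failed: $!";
            while (my $line = <$log>) {
                chomp $line;
                if ($line =~ /^(\d+)$/) {
                    $date = $1;    # commit timestamp line
                }
                elsif (length $line) {
                    # a file touched by that commit; the first time
                    # seen is the newest, ie, the mtime
                    $mtime_cache{$line} = $date
                        unless exists $mtime_cache{$line};
                }
            }
            close $log;
        }
        return $mtime_cache{$file};
    }

Since the log is walked for the whole repository rather than per-file with
--follow, this is also consistent with renames no longer being followed.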
This can be a lot faster, since huge numbers of pages are no longer
sorted only for most of them to be thrown away. It sped up a build of my
blog by at least 5 minutes.
Both markdown and tidy add paragraph tags around text; these need to be
stripped when the text is a short, one-line fragment that is being inserted
into a larger page. tidy also adds several newlines to the end, which
broke removal of the paragraph tags.
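The stripping amounts to something like this (a sketch; the exact regexp
may differ):

    # remove a single wrapping <p>...</p>, tolerating the trailing
    # newlines that tidy appends
    $content =~ s{^<p>(.*)</p>\n*$}{$1}s;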
The reason to do this is basically a user interaction design decision.
It is achieved by adding an entry, associated to the creating plugin, to
%pagestate. To find out if files were deleted, a new global hash
%del_hash is introduced.
add_autofile has to check whether to create the file anyway, so this
will make things more consistent.
More correct check for the result of verify_src_file().
Cosmetic rename of a variable $addfile to $autofile.
pagespec_translate may set $@ if it fails to parse a pagespec, but
due to memoization, this is not reliable. If a memoized call is repeated,
and $@ is already set for some other reason previously, it will remain
set through the call to pagespec_translate.
Instead, just check if pagespec_translate returns undef.
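So callers now look something like this (a sketch; the message text is
illustrative):

    my $sub = IkiWiki::pagespec_translate($spec);
    if (! defined $sub) {
        # do not trust $@ here; memoization can leave a stale
        # value from an unrelated earlier failure
        error(sprintf(gettext("syntax error in pagespec %s"), $spec));
    }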
Finally removed the last hardcoding of IkiWiki::Setup::Standard.
Take the first "IkiWiki::Setup::*" in the setup file to define the
setuptype, and remember that type to use in dumping later. (But it can be
overridden using --set, etc.)
Also, support setup file types that are not evaled.
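Roughly, the type detection amounts to this (a sketch; the real parsing
in IkiWiki::Setup may differ, and the config key name here is assumed):

    # the first IkiWiki::Setup::* module named in the setup file
    # wins, and is remembered for dumping later
    my ($setuptype) = $content =~ /\bIkiWiki::Setup::(\w+)\b/;
    $setuptype = "Standard" unless defined $setuptype;
    $config{setuptype} = $setuptype;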
As I was adding ngettext support, I realized I could optimize the gettext
functions by memoizing the creation of the gettext object. Note that
the object creation is still deferred until a gettext function is called,
to avoid unnecessary startup penalties on code paths that do not need
gettext.
A side benefit is that separate stub functions are no longer needed to
handle the C language case.
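A simplified sketch of the pattern, using Locale::gettext's object
interface:

    my $gettext_obj;
    sub _gettext_obj {
        # created on first use only; 0 records "no gettext" (C/POSIX
        # locale, or Locale::gettext missing), so the fallthrough
        # below covers the C language case without separate stubs
        if (! defined $gettext_obj) {
            my $locale = $ENV{LC_ALL} || $ENV{LANG} || 'C';
            $gettext_obj = $locale =~ /^(?:C|POSIX)/ ? 0 : (eval {
                require Locale::gettext;
                Locale::gettext->domain('ikiwiki');
            } || 0);
        }
        return $gettext_obj;
    }
    sub gettext {
        my $obj = _gettext_obj();
        return $obj ? $obj->get($_[0]) : $_[0];
    }
    sub ngettext {
        my $obj = _gettext_obj();    # args: singular, plural, count
        return $obj ? $obj->nget(@_) : ($_[2] == 1 ? $_[0] : $_[1]);
    }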
The objective is to provide a sensible way to let plugins add files during the
"scan stage" of the build.
Currently does a little verification and adds the file to the global array
@add_autofiles.
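From a plugin's point of view, usage looks roughly like this (the file
name and plugin name are hypothetical):

    # register a file that may be created later; the generator is
    # only called if ikiwiki decides the file should exist
    add_autofile("tags/life.mdwn", "tag", sub {
        # generator: write the file's initial content here
    });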
bestlink was looking at whether %links existed for a page in order to tell
if the page exists, but just-deleted pages still have entries in there (for
reasons it may be best not to explore). So bestlink would return
just-deleted pages. Instead, make bestlink use %pagesources.
Also, when finding a deleted page, %pagecase was not cleared of that page.
This, again, made bestlink return just-deleted pages. Now that is cleared.
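The existence test in bestlink becomes, roughly:

    # inside bestlink's candidate loop (a sketch): %pagesources,
    # unlike %links, holds no leftover entries for just-deleted pages
    if (exists $pagesources{$l}) {
        return $l;
    }
    elsif (exists $pagecase{lc $l}) {
        return $pagecase{lc $l};
    }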
Fixing bestlink exposed another issue, though. The backlink calculation
code uses bestlink, so when a page was deleted, no backlinks to it were
found, and pages that really did backlink to it were not updated and were
left with broken links.
To fix that, the code that actually removes deleted pages had to be split
out from find_del_files, so it can run a bit later. It is run just after
backlinks are calculated. This way, backlink calculation still sees the
deleted pages, but everything afterwards does not.
However, it does not address the original bug report that started this
whole thing, [[bugs/bestlink_returns_deleted_pages]], because there
bestlink is run in the needsbuild hook, which happens before backlink
calculation, so bestlink still returns deleted pages then. The same
applies to the scan hook.
If bestlink needs to work consistently during those hooks, a more involved
fix will be needed.
This avoids unnecessary influences being recorded from pagespecs
such as "link(done) and bugs/*", when a page cannot ever possibly
match.
A pagespec term that returns a value without influence is an influence
blocker. If such a blocker has a false value (possibly due to being
negated) and is ANDed with another term, it blocks that term's influence
from propagating out.
If the term is ORed, or has a true value, it does not block influence.
(Consider "link(done) or bugs/*" and "link(done) and !nosuchpage")
In the implementation in merge_influence, I had to be careful to never
negate $this or $other when testing if they are an influence blocker,
since negation mutates the object. Thus the slightly weird if statement.
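In outline, the merge works like this (a simplified sketch over a result
object holding a boolean and a hash of influences; the real objects are
overloaded, which is why truth is tested directly rather than via !):

    sub merge_influences {
        my ($this, $other, $anded) = @_;
        # a blocker is a false value carrying no influences of its own
        if (! $anded || (($this->{ok} || %{$this->{influences}}) &&
                         ($other->{ok} || %{$other->{influences}}))) {
            # normal case: union the influences of both sides
            $this->{influences}{$_} |= $other->{influences}{$_}
                foreach keys %{$other->{influences}};
        }
        else {
            # ANDed with a blocker: no influences propagate out
            $this->{influences} = {};
        }
    }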
match_* functions whose influences can vary depending on the page
matched now set a special "" influence to indicate this.
Then add_depends can try just one page, and if static influences are found,
stop there.
Thought of a cleaner way to accumulate all influences in
pagespec_match_list, using the pagespec_match result object as an
accumulator.
(This also accumulates all influences from failed matches, rather than just
one failed match. I'm not sure if the old method was correct.)
Benchmarking refresh of a wiki with 25 thousand pages showed
file_pruned() using most of the time. But, when refreshing, ikiwiki already
knows about nearly all the files. So we can skip calling file_pruned() for
those it knows about. While tricky to do, this sped up a refresh (that
otherwise does no work) by 10-50%.
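The shape of the optimisation (variable names illustrative):

    foreach my $file (@files_found) {
        # files known from the last run were vetted then; only new
        # files pay the file_pruned() cost
        if (! exists $known{$file}) {
            next if file_pruned($file);
        }
        push @files, $file;
    }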
If a pagespec fails to match, I had been throwing the influences away, but
that is not right. Consider `backlink(foo)`, where foo does not exist.
It still needs to be added as an influence, because if it is created, it
will influence the pagespec to match.
But with that fix, `link(bar)` had as influences all pages, whether they
linked to bar or not. That is not necessary, because modifying a page to
add a link to bar will directly cause the pagespec to match.
So, in match_link (and all the match_* functions for page metadata),
only return an influence if the match succeeds.
match_backlink had been implemented as the inverse of match_link, but that
is no longer completely true. While match_link does not return an influence
on failure, match_backlink does.
match_created_before/after also return the influence on failure; this
way, if created_after(foo) currently fails because foo does not exist,
the page with the pagespec will still be updated if foo is created.
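For example, match_created_before now looks roughly like this; note that
the influence is returned in every branch, including failure (a sketch,
not the exact code):

    sub match_created_before {
        my ($page, $testpage) = @_;
        if (exists $IkiWiki::pagectime{$testpage}) {
            if ($IkiWiki::pagectime{$page} < $IkiWiki::pagectime{$testpage}) {
                return IkiWiki::SuccessReason->new(
                    "$page created before $testpage",
                    $testpage => $IkiWiki::DEPEND_PRESENCE);
            }
            return IkiWiki::FailReason->new(
                "$page not created before $testpage",
                $testpage => $IkiWiki::DEPEND_PRESENCE);
        }
        # testpage does not exist yet: still an influence, so creating
        # it later causes the pagespec to be rechecked
        return IkiWiki::ErrorReason->new(
            "$testpage does not exist",
            $testpage => $IkiWiki::DEPEND_PRESENCE);
    }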
The hash will be used to record a set of pages that influenced the
result of a pagespec match.
The influences are merged together when boolean and/or are encountered
in a pagespec. That means using a non-short-circuiting OR operator. And
so I use & and | when translating pagespecs, since those bitwise operators
can be overloaded. ("and" and "or" cannot, apparently).
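A toy demonstration of the trick (not ikiwiki's actual classes): unlike
&& and ||, the overloaded bitwise operators evaluate both operands, so
neither side's influences are lost.

    package Result;
    use overload
        'bool' => sub { $_[0]{ok} },
        '&' => sub { Result->new($_[0]{ok} && $_[1]{ok},
                        %{$_[0]{i}}, %{$_[1]{i}}) },
        '|' => sub { Result->new($_[0]{ok} || $_[1]{ok},
                        %{$_[0]{i}}, %{$_[1]{i}}) };
    sub new {
        my ($class, $ok, %influences) = @_;
        return bless {ok => $ok, i => \%influences}, $class;
    }

    package main;
    # both operands run; the result is true and carries both influences
    my $r = Result->new(0, foo => 1) | Result->new(1, bar => 1);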
Involved some code refactoring so that the same code that detects
link changes for backlinks updating can be used for link dependency
checking. The nice thing is that link dep checking is thus
completely free!
When adding a contentless dependency, the pagespec also needs to be one
that does not look at any page content information.
As a first approximation of that, only allow glob-based pagespecs in
contentless dependencies. While there are probably a few other types of
pagespecs that can match contentless, this will work for most of them.
Dependency types are represented by bits in the values of the %depends
and %depends_simple hashes.
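The bits and their use, roughly (constant values as in IkiWiki.pm):

    # $DEPEND_CONTENT = 1, $DEPEND_PRESENCE = 2, $DEPEND_LINKS = 4
    $depends{$page}{$pagespec} |= $deptype;        # full pagespec deps
    $depends_simple{$page}{lc $dep} |= $deptype;   # simple name deps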
Change the dependslist array saved to the index to a depends hash.
depends_simple is also converted from an array to a hash.
Note that the depends field used to be a string, and we still
have compat code to handle upgrades from that, as well as from the arrays.
I didn't use ikiwiki-transition because I don't want ikiwiki to break if
users forget to run it; also we're going to recommend a full rebuild on
upgrade to this version to get the improved dependency handling. So
this compat code can be removed or moved to ikiwiki-transition later.
Here I was bitten by perl's aliasing of foreach variables
to the loop array contents, and match_link accidentally changed
the contents of %links.
In Jon's testcase, a tag added an absolute link, which was
made relative by the above bug, and then the link was added
again in preprocess, and turned into a duplicate.
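A minimal illustration of the pitfall:

    my @links = ('/tag/foo');
    foreach my $link (@links) {
        # $link aliases the array element, so this "local" edit
        # rewrites @links itself
        $link =~ s{^/}{};
    }
    # @links is now ('tag/foo')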
I weakened the regexp, so this matches ipv6 addresses too. It does not
ensure that the address is valid, but that should not matter here.
Note that addresses ending in "::" are not matched, so eg, the unspecified
address will not match -- but should never appear here anyway.
It's not "exact" since case munging has to be done, and I think
"simple" captures the optimisation better.</pedant>
With apologies to smcv, who probably has to rebuild his wiki now.
Let E be the number of dependencies per page of the form "A depends on B and
nothing else", let D be the number of other dependencies per page,
let P be the total number of pages, and let C be the number of changed
pages in a refresh.
This patch should speed up a refresh from O(E*C*P + D*C*P) to
O(C + E*P + D*C*P), assuming that hash lookups are O(1).
In practice, plugins like inline and map produce a lot of these very simple
dependencies, and my album plugin's combination of inline with a large
number of pages causes it to suffer particularly badly.
In testing on a wiki with about 7000 objects (3500 full pages, 3500
images), a full rebuild continued to take about 5:30, and a refresh
after touching about 350 pages and 350 images reduced from 5:30 to 1:30.
As with my previous optimizations, this change will result in downgrades not
working correctly until the wiki is rebuilt.
Now that dependencies are a list of pagespecs with an implicit "or"
operation, there's no need to try to merge pagespecs under normal use.
ikiwiki-transition contains the only use of the function, so move
it there rather than deleting it entirely (it's used to concatenate all
admins' lists of locked pages).