* Add setup and teardown methods, called before and after every test sub.
* In setup, make a fresh repo; in teardown, throw it out.
* Extract runtests method and define default test methods at top.
* Move reflection routines near the xUnit-style subs they support.
Adapt existing test subs to run independently:
* In test_manual_add_and_commit(), assume a fresh repo.
While here, plan a bit better:
* Check for all modules used by cvs.pm.
* Check for program existence more generally.
* Check that we can rmdir after mkdir.
* Run all subs matching /^test_/ (for which we can plan)...
* Unless TEST_METHOD is set, in which case run matching subs (sans plan);
a sketch of this runner follows the list.
* Define total number of tests very near 'use Test::More', where expected.
* Define test tempdir where it's declared, no longer any reason why not.
* Move most comments from TODO.cvs into t/cvs.t.
* Add a whole bunch more comments describing the needed test cases.
XXX existing tests are order-dependent, but currently happen to pass
* Call readfile() directly from writefile().
* Parameterize commit message for the web-commit case.
* Describe intent of test cases.
* Rename test subs to match what they actually do.
* To prove extra path slashes don't cause trouble, instead of running
the same tests a second time, just assert that checkconfig()
strips the slashes.
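
A minimal sketch of the intended runner, with invented helper names and a
one-assertion-per-sub plan (the real t/cvs.t surely differs in detail):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Temp qw(tempdir);
    use File::Path qw(remove_tree);
    use Test::More;

    my $dir;
    sub setup    { $dir = tempdir(); }   # stands in for "make a fresh repo"
    sub teardown { remove_tree($dir); }  # throw it out

    sub test_example_one { ok(defined $dir, 'repo dir was created'); }
    sub test_example_two { ok(-d $dir,      'repo dir is a directory'); }

    sub runtests {
        no strict 'refs';
        for my $name (@_) {
            setup();
            &{$name}();
            teardown();
        }
    }

    # reflection: find every sub in this package matching /^test_/
    my @subs = sort grep { /^test_/ && defined &{$main::{$_}} } keys %main::;

    if (defined $ENV{TEST_METHOD}) {
        # run only matching subs, without computing a plan up front
        runtests(grep { /$ENV{TEST_METHOD}/ } @subs);
        done_testing();
    }
    else {
        plan(tests => scalar @subs);  # one assertion per sub in this sketch
        runtests(@subs);
    }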
Compute the test plan at runtime. Use IkiWiki unconditionally too (as
that's not what I'm testing here) to avoid the TAP error of printing a
test result before having printed the plan.
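
The working shape, sketched (the module probed for is an assumption):

    use Test::More;
    use IkiWiki;  # unconditional: IkiWiki itself is not under test here

    # decide the plan at runtime, before any test result is printed
    if (eval { require Text::Markdown::Discount; 1 }) {
        plan(tests => 2);
    }
    else {
        plan(skip_all => 'Text::Markdown::Discount not installed');
    }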
In the first test, discount returns the html attributes in a different
order, which broke the test. Test only for the important text, not the
exact html output.
In the second test, discount applies some encoding of its own to the
partially encoded url, again resulting in different output.
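
For instance, a loose match of this shape (hypothetical data):

    use Test::More tests => 1;

    # stand-in for the rendered output; attribute order may vary
    my $html = '<a class="link" href="x.html">the important text</a>';
    # assert only on the text that matters, not the exact markup
    like($html, qr/the important text/, 'output contains the expected text');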
This is such a pity. smcv had these great dates, but squeeze's Date::Parse
cannot parse them.
Oh well, at least it makes for a great bug closure title.
- Migrate the set of deletions to the {autofile} set, since it has
more or less the same effect. This affects the "deleted" case in the
test.
- If a page has just been deleted, add it as an autofile anyway: by
the time gen_autofile is called, it'll be in the list of deleted files,
so it'll just be added to {autofile}. This affects the "gone" case
in the test.
- Behaviour change: we no longer forget that a page was deleted even
when there is no reason to re-create it. This affects the 'expunged'
and 'reinstated' cases in the test (see the sketch below).
This does cause a minor regression: index pages are now committed
individually rather than in a single commit per rebuild.
This also means the autoindex regression test needs to trigger the
autofile generation pass.
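
A hypothetical sketch of the resulting logic (names invented; ikiwiki's
real gen_autofile and state storage differ):

    use strict;
    use warnings;

    my %deleted;   # files deleted during this refresh
    my %autofile;  # pages ever auto-generated; being recorded here
                   # is what prevents re-creation

    sub gen_autofile {
        my ($file) = @_;
        return 0 if $autofile{$file};  # auto-created once: never again
        $autofile{$file} = 1;          # record it, even for deleted pages
        return 0 if $deleted{$file};   # just deleted: remember, don't re-create
        return 1;                      # caller may generate the file
    }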
As index.{es,fr} don't exist, po::refreshpofiles copies them from the basewiki
underlay before running msgmerge. msgmerge marks as obsolete the translation
strings that came from the basewiki po files, but the link plugin
does not distinguish between obsolete and up-to-date links.
$links{'index.fr'} and $links{'index.es'} are therefore expected to contain
SandBox and ikiwiki.
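
So the test's expectation looks roughly like this (a sketch, assuming the
usual %links page-to-links table):

    use Test::More tests => 4;

    our %links;  # assumed populated by the refresh under test

    foreach my $page ('index.fr', 'index.es') {
        my @targets = @{ $links{$page} || [] };
        ok((grep { $_ eq 'SandBox' } @targets), "$page links to SandBox");
        ok((grep { $_ eq 'ikiwiki' } @targets), "$page links to ikiwiki");
    }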
There are two sub-cases. If both source files still exist, which one
wins and renders the destination file is undefined. If one source file
is deleted and the other added, then on a refresh the new file takes
over the destination file.
Using named parameters for these is overdue. Passing the session in a
parameter instead of passing username and IP separately will later allow
storing other session info, like username or part of the email.
Note that these functions are not part of the exported API,
and the prototype change will catch (most) skew, so I am not changing
API versions. Any third-party plugins that call them will need to be
updated, though.
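
A call under the new convention might look like this (parameter names are
assumptions based on the description, not the definitive API):

    use strict;
    use warnings;

    sub rcs_commit {  # stub standing in for the real backend function
        my %params = @_;
        return "committing $params{file}: $params{message}";
    }

    # the whole session rides along instead of separate username/IP
    my $session = {};  # placeholder session object
    print rcs_commit(
        file    => 'index.mdwn',
        message => 'web commit',
        token   => 'rcstoken',
        session => $session,  # can later carry username, email, etc.
    ), "\n";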
* openid: Incorporated a fancy openid-selector signin form.
(http://code.google.com/p/openid-selector/)
* openid: Use "openid_identifier" as the form field, as required
by OpenID Authentication v2.0 spec.
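
The only spec-relevant detail is the field name; in a CGI::FormBuilder-style
form (surrounding setup assumed, as a sketch) that is:

    use CGI::FormBuilder;

    my $form = CGI::FormBuilder->new(name => 'signin', method => 'post');
    $form->field(
        name => 'openid_identifier',  # required by OpenID Authentication 2.0
        type => 'text',
        size => 40,
    );
    print $form->render;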
Many calls to file_prune were incorrectly passing it 2 parameters; in
cases where the filename being checked is relative to the srcdir, the
second parameter is not needed.
Also made absolute filenames be pruned. (This won't work with the
2-parameter call style.)
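
The intended one-parameter style, sketched with a stand-in matcher:

    use strict;
    use warnings;

    sub file_prune { return $_[0] =~ m{(^|/)\.|~$}; }  # stand-in matcher

    foreach my $file ('page.mdwn', '.git/config', 'note~') {
        # $file is relative to the srcdir, so one parameter suffices; a
        # second base-directory parameter is only for absolute filenames
        next if file_prune($file);
        print "keep $file\n";
    }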
This can be a lot faster, since huge numbers of pages are no longer
sorted only to be mostly thrown away. It sped up a build of my blog by
at least 5 minutes.
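
The general shape of the win, as an illustrative contrast (not ikiwiki's
actual code):

    use strict;
    use warnings;

    my @pages  = map { +{ name => "page$_", mtime => int(rand 1000) } } 1 .. 10_000;
    my $wanted = sub { $_[0]{name} =~ /42/ };  # stand-in pagespec match

    # slow: sort everything, then throw most of it away
    my @slow = grep { $wanted->($_) }
               sort { $b->{mtime} <=> $a->{mtime} } @pages;

    # fast: match first, sort only the survivors
    my @fast = sort { $b->{mtime} <=> $a->{mtime} }
               grep { $wanted->($_) } @pages;

    printf "kept %d of %d pages\n", scalar @fast, scalar @pages;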