Build links the right way.
This also involved dropping that leading slash on the osm_default_icon.
And since it would require changing the old osm_tag_icons too,
I just removed that relic.
It just didn't work, and it also didn't use writefile, which is
undesirable for security reasons. Fixed both issues.
Also removed some unnecessary debug messages.
Add an underlay for the osm plugin.
Update links to the right path to the icon. Note that the osm plugin has a
pervasive bug in how it links to icons; it assumes the site is at /.
I did not attempt to fix that; it should be using urlto() to make a correct
relative link.
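For reference, the urlto()-based fix would look roughly like this (not
applied in this commit; $icon_file and $page are hypothetical names, not
the plugin's actual variables):

    # build the icon link relative to the page being rendered,
    # instead of assuming the site is served from /
    my $icon_link = urlto($icon_file, $page);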
When the wiki is in a subdir of the git repo, a web revert would show
in recentchanges as eg, doc/index, instead of just index.
This happened because decode_git_file caches a $prefix that is dependent
on the $git_dir setting, and the revert code runs with a different
$git_dir, which polluted the $prefix for later.
Fix this by adding a with_git_dir that juggles the variables properly.
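A rough sketch of what such a helper could look like, assuming $git_dir
and $prefix are the file-scoped variables referred to above (the real
code may differ in detail):

    sub with_git_dir ($$) {
        my ($dir, $code) = @_;
        # save the cached state, run the code against the other repo,
        # then restore, so the temporary $git_dir cannot pollute $prefix
        my ($saved_dir, $saved_prefix) = ($git_dir, $prefix);
        ($git_dir, $prefix) = ($dir, undef);
        my @ret = $code->();
        ($git_dir, $prefix) = ($saved_dir, $saved_prefix);
        return @ret;
    }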
* Test that adding a text file under a name formerly tracked as
binary (and vice versa) gets the right keyword-substitution
behavior.
* Explicitly set -kkv for text files to make the tests pass.
* CVS warns in these cases about "changing keyword expansion mode",
but this is correct behavior, so filter it from stderr. Filter
stdout the same way in case we ever want to keep any of it.
* In rcs_add(), replace comments with obviousness.
strftime is a C function, it does not return decoded utf8.
Several places in ikiwiki manually decoded it, but at least two
forgot to.
Also, strftime might not even return encoded utf8, if LC_TIME is set
to a non-utf8 value. Went ahead and supported decoding whatever encoding
it uses.
The remaining direct calls to strftime() are all ones that first set
LC_TIME=C, in order to get times that are not for human display.
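A hedged illustration of the decoding described above (formattime_decoded
is a made-up name; ikiwiki's own helper may be structured differently):

    use POSIX ();
    use Encode;
    use I18N::Langinfo qw(langinfo CODESET);

    sub formattime_decoded {
        my ($format, @tm) = @_;
        my $str = POSIX::strftime($format, @tm);
        # decode using whatever charset LC_TIME actually selected
        return decode(langinfo(CODESET), $str);
    }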
A diff was already truncated after 200 lines. But it could still be
arbitrarily enormous, if a spammer or other random noise source likes long
lines. That could use a lot of memory to html encode etc the diff and fill
it into the template. Truncating after 100kb seems sufficient; it allows
for 200 lines of up to 512 characters each.
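The two limits amount to something like this (a sketch only; variable
names are illustrative):

    my $maxlines = 200;
    my $maxbytes = 100 * 1024;  # 100kb
    my @lines = split(/^/m, $diff);
    splice @lines, $maxlines if @lines > $maxlines;
    $diff = join("", @lines);
    $diff = substr($diff, 0, $maxbytes) if length($diff) > $maxbytes;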
In the code:
* general plugin API calls (in plugins/write order),
* VCS plugin API calls (in plugins/write order), then
* internal support routines (in alphabetical order).
In the tests:
* general meta-behavior (in no particular order, yet),
* general plugin API calls (in plugins/write order),
* VCS plugin API calls (in plugins/write order), then
* internal support routines (in semi-logical order).
mdwn: Can use the discount markdown library, via the
Text::Markdown::Discount perl module.
This is preferred if available since it's the fastest currently supported
markdown library, speeding up markdown rendering by a factor of 40.
That is to say, when only rendering a lot of markdown, discount is 40x
faster. When building an ikiwiki site, ikiwiki's other overhead gets in the
way, but I still see significant speedups. Building the ikiwiki docwiki
dropped from 62 to 45 seconds, for example.
However, when multimarkdown is enabled, Text::Markdown::Multimarkdown is
still used.
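Roughly, the module selection comes down to something like this (a
hedged sketch that leaves out the multimarkdown case; not mdwn.pm
verbatim):

    my $html;
    if (eval { require Text::Markdown::Discount }) {
        # fastest currently supported implementation
        $html = Text::Markdown::Discount::markdown($text);
    }
    else {
        require Text::Markdown;
        $html = Text::Markdown::markdown($text);
    }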
While discount contains some nonstandard markdown extensions,
including tables and footnotes, AFAICS most of them are not
enabled by default in the perl bindings.
I consider sticking to non-extended markdown a desirable thing, since this
is probably not the last markdown engine. In particular, sundown is waiting
in the wings to get packaged and get a perl binding.
----
Reviewing all the discount extensions, here are the ones that are enabled:
* centered paragraphs: ->centered<-
* image sizes: [dust mite](http://dust.mite =150x150)
* <style>..</style> blocks are eaten. The perl binding does not provide
  access to the gathered CSS. This is not legal html anyway, so unlikely
  to cause breakage.
Using a file was sorta not right.
Note that when previewing, %pagestate is not saved, so
it has to rebuild the graph every time until that graph is saved;
then previews can use the cached data until the next time the graph
is changed.
Also note that it's stored in the destpage's pagestate. The imagemap
could vary between a page and an inlined page if wikilinks were supported.
Also, I let preview mode write real files, rather than using data: uri.
Which is ok these days, since ikiwiki tracks files created during
previewing, and cleans them up later.
In 875d550f12 I for some reason
made $page be changed when creating a discussion page, which
broke the link on the edit page. Changing $page seems unnecessary,
so reverted that part of the change.
Involved dropping some checks for .svn which didn't add anything, since if
svn is enabled and you point it at a non-svn checkout, you get both pieces.
The tricky part is add and rename, in both cases the new file can be in
some subdirectory that is not added to svn.
For add, turns out svn has a --parents that will deal with this by adding
the intermediate directories to svn as well.
For rename though, --parents fails if the directories exist but are not
yet in svn -- which is exactly the case, since ikiwiki makes them
by calling prep_writefile. So instead, svn add the parent directory,
recursively.
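Roughly, the two cases come down to something like this (a hedged
sketch; $file and $dest are illustrative, dirname is from
File::Basename, and the real code also checks for errors):

    use File::Basename qw(dirname);

    # add: let svn create any missing intermediate directories itself
    system("svn", "add", "--parents", "--quiet", "$config{srcdir}/$file");

    # rename: --parents fails on directories that exist but are untracked,
    # so svn add the parent directory (recursively) before the move
    system("svn", "add", "--quiet", dirname("$config{srcdir}/$dest"));
    system("svn", "mv", "--quiet", "$config{srcdir}/$file", "$config{srcdir}/$dest");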
tldr; svn made a reasonable change in dropping the .svn directories from
everywhere, but the semantics of other svn commands, particularly their
pickiness about whether parent directories are in svn or not, means
that without the easy crutch of checking for those .svn directories,
code has to tiptoe around svn to avoid pissing it off.
There's a nice message if the plugin is loaded and used and highlight is
not available, and a nice fallback. So no need for this other warning,
which can happen any time all plugins are loaded to generate a setup file.
* mercurial: openid nicknames are now used when committing. (Daniel Andersson)
* mercurial: implement rcs_commit_staged so comments, attachments, etc
can be used. (Daniel Andersson)
* mercurial: fix viewing of a diff containing non-utf8 changes.
(Daniel Andersson)
* rename: Fix logic error that broke renaming pages when the attachment
plugin was disabled.
* rename: Fix logic error that bypassed the usual pagespec checks.
If a page that looks like an email address exists, it can't be linked to.
But that's unlikely. Better to be consistent; before this change, a
wikilink with an email address in it could link to the email address or a
page, depending on when the page was created and when the page with the
link was updated.
Imagemagick does not generate svg images very well, but it can convert
them to png quite well.
For browsers that don't yet support displaying svg, this also provides a
workaround; just scale the svg down to get a png. But the workaround is
partial, since scaling the image larger, or leaving it the same size will
cause the original svg to be displayed. Since browsers are actively
improving svg support, this is good enough for me.
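For example, an img directive along these lines (the filename is
hypothetical), with a size smaller than the original, causes a png to be
generated and displayed:

    [[!img diagram.svg size="400x300"]]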
Firefox sent an accept header for application/xml, not application/json,
and also weakened the priority to 0.8. So that stuff is not to be trusted;
instead I found a better way: When an ajax upload is *not* being made,
the Upload Attachment button will be used, so enable ajax if an upload
is being made without that button having been used.
Also, testing with firefox revealed it refused to process a response that
was type application/json, and checking the demo page for the jquery file
upload plugin, it actually returns the json with type text/html. Ugh.
Followed suit.
Now tested with: chromium, chromium (w/o js), firefox, firefox (w/o js),
and w3m.
Needed for attachment to return json when requested.
I think some browsers send Accept: * , so I made sure to check that json
was explicitly listed as to be accepted, as well as having a high
priority.
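A hedged sketch of that check, assuming $cgi is the CGI.pm request
object:

    # only return json when the client explicitly lists it, with a high
    # priority; a bare "Accept: *" should not be enough to trigger it
    my $accept = $cgi->http('Accept') // '';
    my $wants_json = index($accept, 'application/json') >= 0
        && ($cgi->Accept('application/json') // 0) >= 1.0;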
Left out confirmation of removal for held attachments because
a) they're not in the wiki yet, so confirmation is a bit unnecessary
b) it would be hard
c) eases later integration of jquery file upload interface
Also changed where attachments of index are held (to match where they're
stored in the srcdir).
Note that the attachment formbuilder hook was made to run last, so that
the list of attachments is not generated before removal, in the fast path
w/o confirm.
Note that it's possible for an attachment in the holding area to be older
than an attachment in the wiki with the same name. I intentionally
show the one in the holding area in this (unlikely) case, since saving the
page will overwrite the wiki's file with the held attachment. It does not
seem worth the bother of doing something more intelligent, since in this
case two people have basically conflicted with one another.. and both
attachment contents will be stored in revision control in case it needs to
be sorted out.
I had to remove the hyperlink for attachments in the holding area, since
they're not yet live on the web. This could be annoying/confusing. Added
a mouseover notice instead.
This makes uploading a lot of attachments somewhat faster, because
the user does not need to wait for a long website refresh after each
upload. Still probably somewhat slow, since ikiwiki has to run for each
upload.
More importantly, this opens the door for integration of things like
the jquery file upload interface, which allow drag-n-drop and multiple
file uploads to be queued and then run.
It uses rcs_commit_staged, which leaves out tla and mercurial, which lack
that, but since rename, remove, autoindex, etc also use that, I think it's
fine for attachments to also depend on it.
The attachment list is currently broken; it does not look in the holding
area yet, and its links to the attached files won't work since they're not
yet in the wiki. Previewing is also currently broken.
Work sponsored by TOVA.
Two problems fixed:
1. Files are written with a .ikiwiki-new suffix, which has to be taken into
account.
2. Need to count length of bytes, not of unicode characters.
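Point 2 amounts to measuring what actually lands on disk, e.g. (a small
sketch):

    use Encode;
    # bytes on disk, not unicode character count
    my $size = length(encode_utf8($content));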
Arguably, the real bug is in the interface to add_autofile, but since
that does take a filename, not a page name, it cannot really do case
handling on its own. The only other user of add_autofile in ikiwiki proper
is autoindex, and it always uses one case. Other third party plugins might
also need to add similar workarounds though.
There is a tension between looking up the avatar at post time
and build time. I have not yet decided which is better.
Lookup at build time has the benefit that if a user changes their
email address, or sets up their own federated libravatar
server, on rebuild their new avatar will show up.
It also allows getting a https version of the avatar easily if
the site was using http but was changed to use https.
And it can look up avatars for posts that have already been made.
Which is a nice thing, especially as we roll this out, eh?
But it has a drawback, that it depends on the sessiondb contents
for emails and so rebuilding a site w/o that will lose info.
And, it means dns lookups every time a comment is rendered. A page
with a lot of comments on it would render them all whenever another is
posted or the page is changed, and that could significantly slow things
down. (This could be ameliorated by caching the lookups.)
Since I'm undecided, I have moved it into a function that could be called
either way. Currently looking up only at post time.
Don't fail if libravatar fails for some reason. Reasons I can think
of:
* too old version to do openid lookups (fall back to email lookup)
* network problem perhaps
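Something along these lines keeps a lookup failure from breaking
anything (a hedged sketch; libravatar_url and its parameters come from
the Libravatar::URL module, the rest is illustrative):

    my $avatar = eval {
        require Libravatar::URL;
        Libravatar::URL::libravatar_url(email => $email, https => 1);
    };
    # on any failure $avatar stays undef and the comment simply
    # gets no picture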
Oddly, this hadn't caused any visible breakage. Possibly inline,
which is the only thing to use targetpage, resolves the function
to the "real" one before po gets loaded?
If the inline plugin is not being loaded, or is perhaps loaded after po
(when IkiWiki::Setup::getsetup loads all the plugins, for example),
po should not inject its custom rootpage sub, as that will lead to a
redefinition error message when inline loads.
Since the plugin abuses the checkconfig hook to launch aggregation when in
--aggregate mode, it should give other plugins that have checkconfig hooks
a chance to run before they are possibly used in rendering the aggregated
content.
This allows per-form/feedlink group customization without having to
resort to counting.
(cherry picked from commit b134feb0dc2d9a8ff7ae447537fa8bc02811aabd)
With the previous logic, same-level items would go down one level and
then again up one level closing and re-opening UL tags each time. The
resulting redundant lists caused whitespace layout issues in the
rendered pages.
Adjust the "moving up?" logic to check if the current item base is
different from the previous item _base_. Adjust the "going down?" logic
by moving it to an earlier phase and checking for (1) parent item not being
what it should be and (2) remaining bits; the root is grown unconditionally as
long as (2) is verified.
Problem was this: websetup loads all plugins, but does not checkconfig
them. So, htmltidy's recently added configurable command setting was unset;
this resulted in its sanitize hook failing; the sanitize hook is called
when a sidebar is enabled, and this caused the sidebar to not display.
I put in a fix, but the underlying problem is that websetup loads all
plugins but leaves them in an unconfigured and possibly broken state while
trying to display its forms.
Probably the long-term fix is to have it cache the original hook states from
before loading the plugins, and restore them after getting their configuration.
Or, even to get the configuration using a subprocess, as plugins may do things
outside the hook system.
- Migrate the set of deletions to the {autofile} set, since it has
more or less the same effect. This affects the "deleted" case in the
test.
- If a page has just been deleted, add it as an autofile anyway: by
the time gen_autofile is called, it'll be in the list of deleted files,
so it'll just be added to {autofile}. This affects the "gone" case
in the test.
- Behaviour change: we don't forget that a page with no reason to be
re-created was deleted. This affects the 'expunged' and 'reinstated'
cases in the test.
This does cause a minor regression: index pages are now committed
individually rather than being a single commit per rebuild.
This also means the autoindex regression test needs to trigger the
autofile generation pass.
The default templates are also updated to make use of this information.
The rel="alternate" attribute is also inserted, for completeness.
(cherry picked from commit 618ade535e6a7967a510d9e210edaef3d37cc9bc)
cgitemplate is a modified misctemplate that takes an optional cgi object
and uses it to set the baseurl, and also optionally the forcebaseurl,
if a page is provided.
If no cgi object is provided, it will fall back to using $config{url}.
I expect this will only be needed in exceptional cases where
that doesn't much matter, such as cgierror().
showform uses cgitemplate, so there is no more need for showform_preview.
This way, do=goto will go to the page relative to
the current location, while the permalinks in feeds
will be absolute (unless an url is not configured at all).
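In terms of urlto(), the difference is roughly this (assuming urlto's
optional third argument requests an absolute url):

    my $link      = urlto($page);           # relative; fine for do=goto
    my $permalink = urlto($page, undef, 1); # absolute; needed in feeds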
Since tag names are now retrieved from the file names, we must revert
the escaping process that sanitizes the file names. Solve by adding a
`pagetitle()` call at the end of tagname().
(cherry picked from commit 0ee0612b1ab11d76eb3790c8db7a2ba992c54f6b)
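The resulting tagname() then undoes both the tagbase prefixing and the
filename escaping, roughly like this (a hedged sketch; the real tag.pm
code may differ in detail):

    sub tagname ($) {
        my $tag = shift;
        if (defined $config{tagbase}) {
            $tag =~ s!^\Q$config{tagbase}\E/!!;  # strip the tagbase prefix
        }
        return pagetitle($tag);  # undo the filename escaping
    }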
The use of typed links for tags and some of the consequent changes
introduced some unwanted functionality variations in the tag system. Two
problems in particular could be observed, when compared to the use of
tags in older versions of IkiWiki:
* tags in feeds (both rss and atom) would use the file path as their
name (e.g. you would have <category term="tags/sometag" /> in an atom
item for a page tagged sometag with a tagbase of tags), whereas they
appeared pure before
* tags containing a slash character would appear without the slash
character but be used with the slash character in other circumstances
(effect visible by tagging a page with a name such as "with/slash")
Both of these issues are fixed by introducing a tagname() function that
takes a tag link and effectively reverses (as well as possible) the
effects of taglink().
A possible alternative route would have been the reintroduction of the
global %tags hash, but the new approach has the (arguable) benefit of
introducing a small layer of sanitation for tag names.
Note that in particular calling initTheme with an empty file does not
work anymore.
Use of initLanguage was replaced by loadLanguage, which seems to work
in both places.
I tried to make it a bit more robust against a missing highlight package.
There are lots of warnings, but it no longer crashes.
Now that page.tmpl is used for cgi, the parentlinks are able to be
displayed even when creating or editing a page. So it's redundant to
include the path to the page in the title, remove it.
There seems no need to allow selecting a location when creating a page this
way; the user should always want it to appear in the inline whose form they
submitted.
The lack of $from will probably hurt setups using po_link_to = current,
but at least we can fix the blocker bug that prevents any wiki using the po
plugin from building.
Use the included page name rather than the including page name. This
allows us to allow feeds in nested inlines without duplicating feeds
with the same content under different (and stupid) names.
and support all elements that HTML::Tagset knows about.
(Which doesn't include html5 just yet, but then the old version didn't either.)
Bonus: 4 times faster than old regexp method.
So formbuilder has an annoying glitch, that setting the value of a
checkbox, even without force, will override the value currently on the
form. Thus the guards against changing checkbox values when a form has been
submitted.
But those guards also prevented the checkboxes for advanced items getting
the right value when going into advanced mode.
Note that if the user makes changes to advanced mode stuff and leaves
advanced mode, those changes are lost. That seems reasonable so I didn't
change it -- and it made this fix simple.
I understand the need to avoid chdir when running git_parse_changes
for receive now. At that point, the changes have not been pushed to
the srcdir's repo yet. When running the same code for preprevert,
chdir to the srcdir is ok, and necessary.
plovs reported a crash when templates were not installed properly,
with a non-useful error about the template object not being defined.
I've audited all uses of template_depends(), and template(), and it makes
sense for them to throw an error if the template cannot be found. All code
with a user-supplied template catches errors already, to handle template
parse failures.
It did not make sense for template_file to throw errors, as some code uses
it to probe if a template file is available.
The HTML::Tree changelog says:
[THINGS THAT MAY BREAK YOUR CODE OR TESTS]
...
* Attribute names are now validated in as_XML and invalid names will
cause an error.
and indeed the regression tests do get an error.
With a relative xrds-location, the openid perl client module will fail.
I haven't checked the specs to see if it needs to be absolute, but all
examples I've seen are absolute, so it seems a very good idea.
I also tried setting RPC::XML::ENCODING but that did not prevent the crash,
and it seems that blogspam.net doesn't like getting xml encoded in unicode,
since that way it mis-flagged comments as spam that are normally allowed
through.
If I am not mistaken, all source files in ikiwiki are encoded in Unicode UTF-8.
Adding `\usepackage[utf8]{inputenc}` enables LaTeX to deal with the encoding.
As a consequence some special characters like umlauts can be used in the source
code, which is useful for foreign languages.
[[!teximg code="a = b \text{ für alle } b \neq 2"]]
But for example »≠« cannot be used in LaTeX right now. One has to use other TeX
systems like XeTeX or LuaTeX featuring native UTF-8 support or use additional
nonstandard packages like uniinput [1].
I used the package `inputenc` (`texdoc inputenc`) and not `inputenx` (`texdoc
inputenx`), because I have not used `inputenx` that much and using the option
`math` is not supported in Debian (and I guess other distributions too) since
`inpmath` is not included in CTAN.
[1] http://wiki.neo-layout.org/browser/latex/Standard-LaTeX
Signed-off-by: Paul Menzel <paulepanter@users.sourceforge.net>
A missing smileys.mdwn caused the plugin to error out, interrupting the
build process. Instead, we check for the file's presence and warn without
erroring out in case it's missing, similar to what is currently done for
the shortcut plugin.
This reverts commit 3ef8864122.
Most aggregators block javascript and so it would display uglily.
Need to find a way to fallback to static buttons instead.
This makes the javascript be added to rss feeds, which allows the buttons
to be displayed by aggregators. At least, if the aggregator does not
sanitize javascript.
The po rescan hook re-runs the scan hooks, and runs the preprocess ones in scan
mode, both on the po-to-markup converted content. This way, plugins such as meta
are given a chance to gather correct information, rather than the ugly/buggy
escaped data they would otherwise gather from unconverted PO files.
This is needed for the po plugin vs. e.g. meta titles.
In order to get rid of the ugly "rebuilding all pages to fix meta titles" thing,
Joey suggested to make "po, at scan time, re-run the scan hooks, passing them
modified content (either converted from po to mdwn or with the escaped stuff
cheaply de-escaped)". This would unfortunately not work, as the meta plugin
gathers its data using the preprocess hook in scan mode: it would overwrite,
with buggy data, the correct data we would have forced it to gather in po's
scan hook.
We then need a hook that runs *after* the preprocess hook has been run in scan
mode, but *before* any page rendering is started. Hence this one.
The idea here is that <meta name="foo" description="bar">
can be written like [[!meta name="foo" description="bar"]].
Of course, [[!meta foo=bar]] is still supported; this new feature
provides some DWIM when trying to directly convert a meta tag into
a meta directive.
This reverts commit 4cf185e781.
That commit broke t/po.t (probably the test case is only testing too
close to the old implementation and needs correcting).
Also, we have not decided how we want to represent it yet, so I'm not
ready for this change.
Conflicts:
IkiWiki/Plugin/po.pm
doc/plugins/po.mdwn
Probably best to store it unsanitized and sanitize as needed on use.
And it already was for comments, leaving only the need to sanitize the
nickname when git committing, to ensure the email address is legal.