<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div>I have a question for you about the PDF: would it also be possible to build an automated emailer? If X number of pages get updated, email the updated PDF out to a mailing list. People go in and request to be added to the list; then, when updates occur, we have a copy to download as soon as we get an Internet connection. <br><br><div>_____________________________</div><div><br></div>Willie Pierce</div><div><br>On Sep 12, 2013, at 9:25 AM, marco ardito <<a href="mailto:ardito@apiform.to.it">ardito@apiform.to.it</a>> wrote:<br><br></div><blockquote type="cite"><div>
Hi all, thanks for replying. <br>
<br>
Further details that could add discussion points:<br>
1) on a good internet line (the 10 Mbit/s fiber I have here at my
job) the whole process takes half an hour to complete, download
included. If run locally (or on a LAN) it could take much less. I
can't use wiki markup directly; I still need the wiki to generate
HTML (but cached static HTML pages could work...).<br>
<br>
2) a PDF version, I think, could be useful in many ways:<br>
- of course when you're offline (I know of people who need this:
eg, see
<a class="moz-txt-link-freetext" href="http://blenderartists.org/forum/showthread.php?290425-Blender-Wiki-Offline-Manual&p=2419755&viewfull=1#post2419755">http://blenderartists.org/forum/showthread.php?290425-Blender-Wiki-Offline-Manual&p=2419755&viewfull=1#post2419755</a>
and other threads there)<br>
- it's easier to print, and could look better.<br>
- it has page numbers, so if you're teaching in a classroom you can
have all students refer to a fixed PDF page, without needing an
internet connection<br>
- it's a kind of "snapshot" (I can see what's changed simply by
running a diff tool :-) )<br>
- it could be rendered in different formats (<a href="http://archive.org">archive.org</a> kind of helps
you by automatically "deriving" other formats, like an online version,
many mobile formats, DjVu, etc., but, eg, honestly their EPUB
conversion really sucks...)<br>
- a PDF can be annotated (someone told me he needs this)<br>
- it was once also hosted at <a class="moz-txt-link-freetext" href="http://pdf.letworyinteractive.com/">http://pdf.letworyinteractive.com/</a>, a
mini site Nathan Letwory kindly provided me for free (before
<a href="http://archive.org">archive.org</a>). I've lost the admin credentials to check the download
count, but I remember it was huge!<br>
I even created a logo with Inkscape:
<a class="moz-txt-link-freetext" href="http://pdf.letworyinteractive.com/templates/ja_purity/images/logo.png">http://pdf.letworyinteractive.com/templates/ja_purity/images/logo.png</a>
to replace the standard Joomla one.<br>
<br>
3) "the process" is actually a PHP script, but it could be
translated into other scripting languages if better suited (Python?):
the requirements are pretty basic: <br>
=> to get/adapt the wiki pages' HTML: remote file downloading, string
manipulation, local file creation <br>
=> to create the PDF: <a class="moz-txt-link-freetext" href="http://code.google.com/p/wkhtmltopdf">http://code.google.com/p/wkhtmltopdf</a>, which
should run pretty much anywhere, and is invoked by the script with
some (CLI) arguments.<br>
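To make that second requirement concrete, here is a minimal Python sketch of the wkhtmltopdf invocation (the margin and footer values are just illustrative options, not the exact ones my script passes):<br>

```python
import subprocess

def build_wkhtmltopdf_cmd(html_path, pdf_path):
    # Typical formatting options: page margins, plus page numbers in
    # the footer ("[page]/[topage]" is wkhtmltopdf's substitution syntax).
    return [
        "wkhtmltopdf",
        "--margin-top", "15mm",
        "--margin-bottom", "15mm",
        "--footer-center", "[page]/[topage]",
        html_path,
        pdf_path,
    ]

def render_pdf(html_path, pdf_path):
    # Runs the external wkhtmltopdf binary; raises CalledProcessError
    # on a non-zero exit, or FileNotFoundError if it is not installed.
    subprocess.run(build_wkhtmltopdf_cmd(html_path, pdf_path), check=True)
```

Keeping the command assembly separate from the subprocess call makes it easy to print or tweak the arguments before actually rendering.<br>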
<br>
4) @Mirek: in short, the script <br>
=> downloads the "wiki manual" TOC page and collects all the (640+)
wiki page links, in the same order, in a list.<br>
=> then it downloads each link as HTML, one by one.<br>
=> for each downloaded page, the script:<br>
- cuts off all the stuff not needed (headers/footers, bars, etc.)<br>
- scans the HTML tags for images/files and downloads them locally<br>
- converts the image/file links in the HTML to point at the locally
stored copies<br>
- fixes or strips HTML that is not useful in "printed" docs (eg:
embedded videos are replaced by a visible link to the same video,
where possible)<br>
- appends the resulting HTML to a big, big HTML file (which ends up
containing the whole wiki manual...)<br>
=> at the end of the 640+ "download & adapt" cycle, I have one
single HTML file and a folder full of images/files, and the big, big
HTML file (~6 MB in the latest release) has links to the local
images/files, of course. You could open it in a browser, I think!<br>
=> at last, it calls wkhtmltopdf, passing it the big HTML file along
with some "formatting" arguments and options. A few seconds later,
you have an (arguably) nice PDF. <br>
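The first steps of that loop could be sketched in Python roughly like this (just a sketch using the stdlib: the real script is PHP and more involved, and the regex-based image rewriting is simplified):<br>

```python
import re
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects hrefs from <a> tags, preserving document order
    (so the PDF chapters come out in the same order as the TOC)."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                # Resolve relative wiki links against the site root.
                self.links.append(urljoin(self.base_url, href))

def extract_links(toc_html, base_url):
    parser = LinkCollector(base_url)
    parser.feed(toc_html)
    return parser.links

def localize_images(page_html, img_dir="images"):
    """Rewrite src="..." attributes to point at locally saved copies,
    keeping only the original file name."""
    def repl(match):
        filename = match.group(1).rsplit("/", 1)[-1]
        return 'src="%s/%s"' % (img_dir, filename)
    return re.sub(r'src="([^"]+)"', repl, page_html)
```

Each adapted page would then be appended to the one big HTML file before the final wkhtmltopdf call.<br>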
<br>
Hope this helps you understand how I did it.<br>
<br>
OK, enough boring stuff...<br>
Let me know what you think, if you wish... any suggestion is
welcome!<br>
<br>
Marco<br>
<br>
<blockquote cite="mid:522F13FB.9060908@apiform.to.it" type="cite">
Hi all,<br>
in the past (2.4x) I already did a Blender wiki > PDF
conversion, ...<br>
</blockquote>
<br>
<br>
</div></blockquote><blockquote type="cite"><div><span>_______________________________________________</span><br><span>Bf-docboard mailing list</span><br><span><a href="mailto:Bf-docboard@blender.org">Bf-docboard@blender.org</a></span><br><span><a href="http://lists.blender.org/mailman/listinfo/bf-docboard">http://lists.blender.org/mailman/listinfo/bf-docboard</a></span><br></div></blockquote></body></html>