Importing from Etherpad to Federated Wiki #424
Comments
Have you tried @interstar's Quick Paste Converter at http://project.thoughtstorms.info? The resulting output can then be saved in Wiki's data folder, and will be loaded when the page that decodes to its filename is requested.
@almereyda I just tried the Quick Paste Converter, and see that it converts from plaintext or wikish to JSON. This suggests the next step would be to FTP the file to the server, which isn't a good path for non-programmers. @WardCunningham had said that he might consider writing a quick translator of some sort. As an alternative to an input being HTML with breaks in it, perhaps translation from the Dokuwiki markup would be better.

I was wondering why Etherpad Lite would specifically choose Dokuwiki formatting over any other wiki creole, but then found an EtherDoku project at http://sourceforge.net/projects/etherdoku/ (last updated 2013-04-18), and then "This is a guide to building your own hybrid etherpad + wiki" at http://canidu.com/etherwiki-howto.html, not to mention an integration plug-in at https://www.dokuwiki.org/plugin:etherpadlite .

The combined use of Etherpad with any wiki is probably worth discussing. So far, I've been positioning federated wiki as a multiple-perspectives inquiring systems tool, whereas traditional wiki is an inductive, consensual inquiring systems tool. (For those unfamiliar with inquiring systems, see http://coevolving.com/blogs/index.php/archive/the-meta-design-of-dialogues-as-inquiring-systems/ .)
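For reference, here is a minimal sketch of the kind of page JSON such a converter would drop into the data folder, assuming the usual layout of a title, story items, and a journal. The slug rule and the item fields here are assumptions based on that description, not the converter's actual output:

```python
import json
import re
import uuid

def slug(title):
    # Assumed rule: lower-case the title and replace runs of
    # non-alphanumerics with dashes, so "Digest 2014-07-09"
    # becomes "digest-2014-07-09".
    return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

def make_page(title, paragraphs):
    # One story item per paragraph keeps the content editable
    # piece by piece instead of as a single block.
    story = [{'type': 'paragraph',
              'id': uuid.uuid4().hex[:16],
              'text': text}
             for text in paragraphs]
    return {'title': title, 'story': story, 'journal': []}

page = make_page('Digest 2014-07-09', ['First note.', 'Second note.'])
with open(slug(page['title']), 'w') as f:
    json.dump(page, f, indent=2)
```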
Yeah. QuickPaste is basically a solution for geeks converting one-off […]. If you want to write a script to do the conversion and post it, you can start from https://github.com/interstar/ThoughtStorms/blob/master/scripts/SFWTools.py and modify it to your purpose. Though I think that's never going to be part […].

Phil
This is a mock-up of a programmable site scraper that came out of conversations at the recent Indie Web Camp. The idea would be that the federation could keep track of formatting on important sites while end users could exploit this to import specific articles at their convenience. http://ward.fed.wiki.org/view/bbc-world-service

Most scrapers that I've written so far just run at the command line. Another alternative would be to make a plugin that could be smarter about specific formats found in a cut-and-paste. If there were a hundred variations, there could be a site that cataloged them, one per page, with instructions for use. The advantage of cut and paste is that it bypasses cross-origin restrictions in the browser.
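A hedged sketch of that command-line style of scraper, using only the Python standard library: fetch a page, collect the text of its paragraph elements, and print them so they can be fed to whatever converter builds the wiki page. The URL is a placeholder, and the choice of plain `<p>` tags stands in for the site-specific formatting knowledge described above:

```python
import json
import urllib.request
from html.parser import HTMLParser

class ParagraphCollector(HTMLParser):
    # Collects the text of every <p> element; a real scraper would
    # use a site-specific rule instead of plain paragraph tags.
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.current = []
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.in_p, self.current = True, []

    def handle_endtag(self, tag):
        if tag == 'p' and self.in_p:
            text = ''.join(self.current).strip()
            if text:
                self.paragraphs.append(text)
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.current.append(data)

url = 'http://example.org/'  # placeholder for the article to import
with urllib.request.urlopen(url) as response:
    html = response.read().decode('utf-8', errors='replace')

collector = ParagraphCollector()
collector.feed(html)
print(json.dumps(collector.paragraphs, indent=2))
```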
As a side channel for taking notes during a meeting, Etherpad Lite has proved to be a helpful complement to Federated Wiki. (The main site is at http://etherpad.org/; I installed the version at https://github.com/wshearn/etherpad-example.)
For the Federated Wiki hangout on July 9, we took notes at http://pad.s2t.org/p/2014-07-09 . The export options are HTML, plain text, or Dokuwiki (i.e. the wiki markup recognized by that implementation).
To post a snapshot of the digest for the day at http://fed.coevolving.com/view/digests-from-sfw-meetings/view/digest-2014-07-09 , the only way to retain the formatting is to export as HTML, and then copy the HTML source as content in the federated wiki. This unfortunately makes the entire 60+ minutes of content one single paragraph, as opposed to multiple paragraphs.
Should we just take HTML source as the best way to handle this content, or would it be possible to build some sort of converter so that the content is broken down into smaller paragraphs?
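One possible converter along those lines, as a rough sketch rather than anything definitive: split the Etherpad HTML export on its line breaks and emit one paragraph item per non-empty line, so the notes arrive as many small paragraphs instead of one block. It assumes the export separates lines with `<br>` tags and that a story of paragraph items (as in the page JSON sketched earlier in this thread) is what the wiki expects; both assumptions would need checking against a real export.

```python
import json
import re
import sys
import uuid
from html import unescape

def etherpad_html_to_story(html):
    # Assumes the export is one body with <br> between lines:
    # strip the remaining tags, split on the breaks, and turn
    # each non-empty line into its own paragraph item.
    match = re.search(r'<body[^>]*>(.*?)</body>', html,
                      flags=re.DOTALL | re.IGNORECASE)
    body = match.group(1) if match else html
    lines = re.split(r'<br\s*/?>', body, flags=re.IGNORECASE)
    lines = [unescape(re.sub(r'<[^>]+>', ' ', line)).strip() for line in lines]
    return [{'type': 'paragraph', 'id': uuid.uuid4().hex[:16], 'text': line}
            for line in lines if line]

if __name__ == '__main__':
    # Hypothetical usage: python etherpad2fedwiki.py < digest-export.html
    page = {'title': 'Digest 2014-07-09',
            'story': etherpad_html_to_story(sys.stdin.read()),
            'journal': []}
    json.dump(page, sys.stdout, indent=2)
```

Saving the output under the page's slug in the data folder would still require server access, so whether this actually helps non-programmers is the same open question raised earlier in the thread.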