Wikipedia periodically publishes full data dumps of the encyclopedia’s content. If you wanted to make your own copy of the Wikipedia site for offline viewing, you’d typically convert and import that content into MySQL using MediaWiki’s importDump.php utility. The initial import process can take over a day. Building the indexes for searching articles takes even longer.
Thanassis Tsiodras came up with a better way of using the Wikipedia dump for offline reading:
Wouldn’t it be perfect if we could use the Wikipedia “dump” data JUST as they arrive after the download? Without creating a much larger (space-wise) MySQL database? And also be able to search for parts of title names and get back lists of titles with “similarity percentages”?
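To give a feel for the “similarity percentages” idea, here’s a minimal sketch of fuzzy title lookup using Python’s standard difflib module; the sample titles, the scoring threshold, and the function name are my own assumptions, not Tsiodras’s code:

```python
# Sketch: score candidate titles against a query and report similarity percentages.
import difflib

def search_titles(query, titles, limit=10):
    """Return (title, similarity %) pairs for titles resembling the query."""
    query = query.lower()
    scored = []
    for title in titles:
        ratio = difflib.SequenceMatcher(None, query, title.lower()).ratio()
        scored.append((title, round(ratio * 100, 1)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:limit]

if __name__ == "__main__":
    sample = ["Python (programming language)", "Monty Python", "Pythonidae"]
    for title, score in search_titles("python language", sample):
        print(f"{score:5.1f}%  {title}")
```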
The end result is a Wikipedia reader that indexes the entire dump in under 30 minutes, keeps the data in its original bz2-compressed format (split into smaller segments), and comes with a lightweight web interface for searching and reading entries.
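A rough sketch of the segmented-bz2 approach (not the author’s actual code): stream the XML dump, write every N pages into its own bz2 chunk, and keep a plain-text index mapping each title to its chunk number, so an article can later be read by decompressing only one small file. The file names and chunk size below are assumptions:

```python
# Sketch: split a Wikipedia XML dump into bz2 chunks and build a title index.
import bz2
import re

DUMP = "enwiki-latest-pages-articles.xml.bz2"   # assumed dump file name
PAGES_PER_CHUNK = 1000                          # assumed chunk size

def split_dump(dump_path=DUMP, pages_per_chunk=PAGES_PER_CHUNK):
    chunk_no, pages_in_chunk = 0, 0
    chunk = bz2.open(f"chunk-{chunk_no:05d}.xml.bz2", "wt", encoding="utf-8")
    index = open("titles.idx", "w", encoding="utf-8")
    page_lines = []
    with bz2.open(dump_path, "rt", encoding="utf-8") as dump:
        for line in dump:
            if "<page>" in line:
                page_lines = []          # start collecting a new article
            page_lines.append(line)
            if "</page>" in line:
                text = "".join(page_lines)
                m = re.search(r"<title>(.*?)</title>", text)
                if m:
                    # Record which chunk this title ends up in.
                    index.write(f"{m.group(1)}\t{chunk_no}\n")
                chunk.write(text)
                pages_in_chunk += 1
                if pages_in_chunk >= pages_per_chunk:
                    # Rotate to a fresh bz2 chunk.
                    chunk.close()
                    chunk_no, pages_in_chunk = chunk_no + 1, 0
                    chunk = bz2.open(f"chunk-{chunk_no:05d}.xml.bz2",
                                     "wt", encoding="utf-8")
    chunk.close()
    index.close()

if __name__ == "__main__":
    split_dump()
```

Looking up an article then means scanning the small index for the title, opening the matching chunk, and decompressing just that segment instead of the whole multi-gigabyte dump.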
Building a (fast) Wikipedia offline reader – Link
Related:
WikipediaFS – a Linux MediaWiki file-system – Link