
Experiment with supporting eXist-db #1570

Open · benjamingeer opened this issue Jan 6, 2020 · 33 comments

@benjamingeer:

No description provided.

@benjamingeer (Author):

Find paragraphs containing the word "paphlagonian" marked as a noun:

xquery version "3.1";

for $par in collection("/db/books")//p[.//noun[starts-with(lower-case(.), "paphlagonian")]]
return <result doc="{util:document-name(root($par))}">{$par}</result>

But this returns two <result> elements for the same document. How can I return one <result> (possibly containing multiple matching paragraphs) per matching document?

@tobiasschweizer (Contributor) commented Jan 6, 2020:

I had a similar query:

<regions>
    <h1>{ count(collection("...?select=Med_*.xml")//region) } regions defined</h1>
    <ul> {
        for $med in collection("...?select=Med_*.xml")/meditatio
        order by number($med/@id)
        return
            <li>Meditatio {data($med/@id)}
                <ul> {
                    for $reg in $med//region
                    return <li id="{data($reg/@id)}">{data($reg/@name)}</li>
                } </ul>
            </li>
    } </ul>
</regions>

I just iterate over the documents' root elements (one element per document). However, a single root element could still contain several occurrences of the same element.

@tobiasschweizer (Contributor):

@benjamingeer (Author):

Thanks, group by does it!

xquery version "3.1";

for $par in collection("/db/books")//p[.//noun[starts-with(lower-case(.), "paphlagonian")]]
group by $doc := util:document-name(root($par))
return <result doc="{$doc}">{$par}</result>

@benjamingeer (Author):

Search for paphlagonian as an adjective and soul as a noun, in the same paragraph (dhlab-basel/knora-large-texts#2 (comment)):

xquery version "3.1";

for $par in collection("/db/books")//p[.//adj[starts-with(lower-case(.), "paphlagonian")] and .//noun[starts-with(lower-case(.), "soul")]]
group by $doc := util:document-name(root($par))
return <result doc="{$doc}">{$par}</result>

@benjamingeer (Author):

Uploading books to eXist-db:

[info] Uploaded wealth-of-nations (6135800 bytes, 7885 ms)
[info] Uploaded the-count-of-monte-cristo (8071112 bytes, 10667 ms)
[info] Uploaded sherlock-holmes (1793512 bytes, 2358 ms)
[info] Uploaded federalist-papers (3189160 bytes, 5162 ms)
[info] Uploaded pride-and-prejudice (1933853 bytes, 2134 ms)
[info] Uploaded the-idiot (4063305 bytes, 5206 ms)
[info] Uploaded jane-eyre (2974229 bytes, 3780 ms)
[info] Uploaded king-james-bible (12533982 bytes, 15746 ms)
[info] Uploaded anna-karenina (5898594 bytes, 7536 ms)
[info] Uploaded annals-of-the-turkish-empire (3204177 bytes, 4245 ms)
[info] Uploaded madame-bovary (1978766 bytes, 2373 ms)
[info] Uploaded hard-times (1807815 bytes, 2160 ms)
[info] Uploaded a-tale-of-two-cities (2291963 bytes, 2671 ms)
[info] Uploaded the-city-of-god-vol-1 (3716365 bytes, 4989 ms)
[info] Uploaded ulysses (4662436 bytes, 6224 ms)
[info] Uploaded complete-works-of-shakespeare (16305050 bytes, 21421 ms)
[info] Uploaded the-canterbury-tales (4565606 bytes, 6269 ms)
[info] Uploaded the-city-of-god-vol-2 (3928595 bytes, 5195 ms)
[info] Uploaded mysterious-island (3362922 bytes, 4662 ms)
[info] Uploaded the-adventures-of-huckleberry-finn (1786403 bytes, 2184 ms)
[info] Uploaded notre-dame-de-paris (3239650 bytes, 4552 ms)
[info] Uploaded maupassant-stories (7934518 bytes, 10532 ms)
[info] Uploaded twenty-years-after (4393927 bytes, 5841 ms)
[info] Uploaded war-and-peace (9535752 bytes, 12614 ms)
[info] Uploaded don-quixote (6812100 bytes, 8834 ms)
[info] Uploaded ivanhoe (3327831 bytes, 4356 ms)
[info] Uploaded gullivers-travels (1671263 bytes, 1966 ms)
[info] Uploaded rizal (2940565 bytes, 3873 ms)
[info] Uploaded plato-republic (3387799 bytes, 4281 ms)
[info] Uploaded from-the-earth-to-the-moon (1599378 bytes, 1823 ms)
[info] Uploaded plutarch-lives (11621868 bytes, 14582 ms)
[info] Uploaded our-mutual-friend (5671376 bytes, 7301 ms)
[info] Uploaded little-dorrit (5675385 bytes, 7285 ms)
[info] Uploaded moby-dick (3645883 bytes, 4744 ms)
[info] Uploaded great-expectations (2980590 bytes, 3832 ms)
[info] Uploaded les-miserables (9701154 bytes, 12816 ms)
[info] Uploaded swanns-way (3029412 bytes, 4114 ms)
[info] Uploaded the-iliad (3366140 bytes, 4433 ms)
[info] Uploaded mahabarata-vol-3 (13768426 bytes, 18576 ms)
[info] Uploaded dracula (2469943 bytes, 2851 ms)
[info] Uploaded mahabarata-vol-2 (11663472 bytes, 15427 ms)
[info] Uploaded little-women (2977054 bytes, 3855 ms)
[info] Uploaded mahabarata-vol-1 (10869925 bytes, 14494 ms)
[info] Uploaded emma (2568815 bytes, 2827 ms)
[info] Uploaded grimms-fairy-tales (1648818 bytes, 1928 ms)
[info] Uploaded mahabarata-vol-4 (7486990 bytes, 10005 ms)
[info] Uploaded the-scarlet-letter (1437007 bytes, 1619 ms)
[info] Uploaded wuthering-heights (1855275 bytes, 1994 ms)
[info] Uploaded the-brothers-karamazov (5869995 bytes, 7730 ms)

@benjamingeer (Author):

With all the books uploaded, the query in #1570 (comment) takes 8 seconds. Knora did it in 1 second, using Lucene to optimise the query. I'm going to see if I can do a similar optimisation in eXist-db.

@benjamingeer (Author):

It looks like eXist-db can use Lucene to optimise the query, but there's a limitation: you have to configure the Lucene index (in a configuration file), specifying the names of the XML elements whose content you want to index:

It is important to make sure to choose the right context for an index, which has to be the same as in your query.

http://exist-db.org/exist/apps/doc/lucene.xml

So for example, I could create an index on all <p> elements. This would mean that we would need to update eXist's Lucene configuration for each project.
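
For illustration, a minimal collection.xconf sketch for that case (the analyzer choice is an assumption, not something stated in the thread). In eXist it would be stored as /db/system/config/db/books/collection.xconf so that it applies to the /db/books collection:

<collection xmlns="http://exist-db.org/collection-config/1.0">
    <index>
        <lucene>
            <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
            <!-- Index the text content of every <p> element. -->
            <text qname="p"/>
        </lucene>
    </index>
</collection>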

@benjamingeer (Author):

Or I guess I could just make an index on the root XML element, which is in effect what Knora does.

@benjamingeer (Author):

Ah, wait a minute, these configuration "files" are actually XML documents. So it wouldn't be a problem to create one per project.

http://exist-db.org/exist/apps/doc/indexing#idxconf

@benjamingeer (Author) commented Jan 7, 2020:

With Lucene indexes on <noun>, <verb>, and <adj>, uploading is somewhat slower:

[info] Uploaded wealth-of-nations (6135800 bytes, 9534 ms)
[info] Uploaded the-count-of-monte-cristo (8071112 bytes, 13063 ms)
[info] Uploaded sherlock-holmes (1793512 bytes, 2314 ms)
[info] Uploaded federalist-papers (3189160 bytes, 4280 ms)
[info] Uploaded pride-and-prejudice (1933853 bytes, 2348 ms)
[info] Uploaded the-idiot (4063305 bytes, 6440 ms)
[info] Uploaded jane-eyre (2974229 bytes, 4022 ms)
[info] Uploaded king-james-bible (12533982 bytes, 17571 ms)
[info] Uploaded anna-karenina (5898594 bytes, 8371 ms)
[info] Uploaded annals-of-the-turkish-empire (3204177 bytes, 5129 ms)
[info] Uploaded madame-bovary (1978766 bytes, 2417 ms)
[info] Uploaded hard-times (1807815 bytes, 2229 ms)
[info] Uploaded a-tale-of-two-cities (2291963 bytes, 2831 ms)
[info] Uploaded the-city-of-god-vol-1 (3716365 bytes, 5511 ms)
[info] Uploaded ulysses (4662436 bytes, 6785 ms)
[info] Uploaded complete-works-of-shakespeare (16305050 bytes, 26064 ms)
[info] Uploaded the-canterbury-tales (4565606 bytes, 6487 ms)
[info] Uploaded the-city-of-god-vol-2 (3928595 bytes, 5798 ms)
[info] Uploaded mysterious-island (3362922 bytes, 4635 ms)
[info] Uploaded the-adventures-of-huckleberry-finn (1786403 bytes, 2410 ms)
[info] Uploaded notre-dame-de-paris (3239650 bytes, 4527 ms)
[info] Uploaded maupassant-stories (7934518 bytes, 11952 ms)
[info] Uploaded twenty-years-after (4393927 bytes, 6188 ms)
[info] Uploaded war-and-peace (9535752 bytes, 14286 ms)
[info] Uploaded don-quixote (6812100 bytes, 10358 ms)
[info] Uploaded ivanhoe (3327831 bytes, 5505 ms)
[info] Uploaded gullivers-travels (1671263 bytes, 2096 ms)
[info] Uploaded rizal (2940565 bytes, 4087 ms)
[info] Uploaded plato-republic (3387799 bytes, 4579 ms)
[info] Uploaded from-the-earth-to-the-moon (1599378 bytes, 2489 ms)
[info] Uploaded plutarch-lives (11621868 bytes, 16800 ms)
[info] Uploaded our-mutual-friend (5671376 bytes, 8805 ms)
[info] Uploaded little-dorrit (5675385 bytes, 9305 ms)
[info] Uploaded moby-dick (3645883 bytes, 5393 ms)
[info] Uploaded great-expectations (2980590 bytes, 4152 ms)
[info] Uploaded les-miserables (9701154 bytes, 14922 ms)
[info] Uploaded swanns-way (3029412 bytes, 4086 ms)
[info] Uploaded the-iliad (3366140 bytes, 4902 ms)
[info] Uploaded mahabarata-vol-3 (13768426 bytes, 21575 ms)
[info] Uploaded dracula (2469943 bytes, 3168 ms)
[info] Uploaded mahabarata-vol-2 (11663472 bytes, 17800 ms)
[info] Uploaded little-women (2977054 bytes, 4300 ms)
[info] Uploaded mahabarata-vol-1 (10869925 bytes, 16502 ms)
[info] Uploaded emma (2568815 bytes, 3275 ms)
[info] Uploaded grimms-fairy-tales (1648818 bytes, 2074 ms)
[info] Uploaded mahabarata-vol-4 (7486990 bytes, 11513 ms)
[info] Uploaded the-scarlet-letter (1437007 bytes, 1790 ms)
[info] Uploaded wuthering-heights (1855275 bytes, 2255 ms)
[info] Uploaded the-brothers-karamazov (5869995 bytes, 8914 ms)
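
The index configuration for this run isn't shown in the thread; presumably it was a collection.xconf like the sketch above, with one entry per indexed element:

<lucene>
    <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
    <text qname="noun"/>
    <text qname="verb"/>
    <text qname="adj"/>
</lucene>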

@benjamingeer (Author):

Searches optimised with project-specific Lucene indexes

Search for <adj>Paphlagonian</adj>: 1 result in 0.05 seconds:

for $par in collection("/db/books")//p[.//adj[ft:query(., "paphlagonian")]]
group by $doc := util:document-name(root($par))
return <result doc="{$doc}">{$par}</result>

Search for <noun>soul</noun>: 48 results in 1 second:

for $par in collection("/db/books")//p[.//noun[ft:query(., "soul")]]
group by $doc := util:document-name(root($par))
return <result doc="{$doc}">{$par}</result>

Search for <adj>Paphlagonian</adj> and <noun>soul</noun> in the same paragraph: 1 result in 1 second:

for $par in collection("/db/books")//p[.//adj[ft:query(., "paphlagonian")] and .//noun[ft:query(., "soul")]]
group by $doc := util:document-name(root($par))
return <result doc="{$doc}">{$par}</result>

Search for <adj>full</adj>: 49 results in 800 ms:

for $par in collection("/db/books")//p[.//adj[ft:query(., "full")]]
group by $doc := util:document-name(root($par))
return <result doc="{$doc}">{$par}</result>

@tobiasschweizer (Contributor):

So is it more efficient than Knora?

@benjamingeer (Author):

Search for <adj>full</adj> and <noun>Euchenor</noun> in the same paragraph: query never terminates, uses 100% CPU, and results in an OutOfMemoryError:

for $par in collection("/db/books")//p[.//adj[ft:query(., "full")] and ..//noun[ft:query(., "Euchenor")]]
group by $doc := util:document-name(root($par))
return <result doc="{$doc}">{$par}</result>
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "db.exist.scheduler.quartz-scheduler_QuartzSchedulerThread"

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Scanner-0"

The shutdown.sh script and Control-C have no effect. After stopping the process with kill -9 and restarting it, eXist displays a progress meter that stays stuck like this for a while:

Redo  [================================================= ] (98 %)

@benjamingeer (Author):

eXist then eventually restarts. Aha, now I see that there is a bug in my query (.. instead of .).

@benjamingeer (Author):

Search for full and Euchenor in the same paragraph (fixed query): 1 result in 1.6 seconds:

for $par in collection("/db/books")//p[.//adj[ft:query(., "full")] and .//noun[ft:query(., "Euchenor")]]
group by $doc := util:document-name(root($par))
return <result doc="{$doc}">{$par}</result>

This is the query that was very slow in Knora here: dhlab-basel/knora-large-texts#2 (comment)

@benjamingeer (Author):

> So is it more efficient than Knora?

Yes, in this use case:

  • Very large texts (up to 13 MB of XML per document) with very dense markup
  • Queries that search for particular words in particular tags (what Gravsearch does with knora-api:matchInStandoff)

To make this efficient in eXist, you have to configure a Lucene index specifically for each tag that you want to search for. Even so, it's not lightning-fast: a typical query takes 1-2 seconds. But this seems OK to me.

If I split up each book into many small fragments (a few pages each) and stored the texts in the triplestore that way, Gravsearch could probably do these kinds of queries more efficiently. In practice, I think there are other good reasons for doing that anyway (e.g. it makes it easier to display and edit the texts).

@benjamingeer (Author):

If I split each book into small fragments (e.g. 1000 words each), I can make a structure where each Book resource has hundreds of links to BookFragment resources (via a hasFragment property), each with a seqnum. But then the API would need to provide a way to page through the values of hasFragment (sorted by seqnum) when getting the Book resource.
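
A rough sketch of that paging in plain SPARQL (the IRIs here are hypothetical placeholders, not Knora's actual property names); each page of 25 fragments advances OFFSET by 25:

SELECT ?fragment ?seqnum
WHERE {
    <http://example.org/books/war-and-peace> <http://example.org/hasFragment> ?fragment .
    ?fragment <http://example.org/seqnum> ?seqnum .
}
ORDER BY ?seqnum
LIMIT 25
OFFSET 0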

@benjamingeer (Author) commented Jan 8, 2020:

Also, it takes a lot longer to import all this data into Knora than into eXist. I'm importing the same books as in #1570 (comment) into Knora (fragmented), and it looks like it's going to take about 3 hours.

@benjamingeer (Author):

To make the import faster, I added a config option to prevent Knora from verifying the data after it's written.

@benjamingeer (Author):

I did some tests with the books split into 1000-word fragments, and it doesn't make knora-api:matchInStandoff any faster: if you search for a common word, the query still has to check every occurrence, whether the occurrences are in a single text value or spread across many. I think the only way to make it faster would be to create a custom Lucene index for each tag, as you can in eXist.

@tobiasschweizer (Contributor):

Would the Lucene connector allow for more flexibility?

@benjamingeer (Author):

Looking at that now.

@benjamingeer (Author):

It looks like the GraphDB Lucene connector can't do this. It just indexes whatever strings you give it, but in this case we would want to index substrings at specific positions, and it doesn't seem to have that feature.

http://graphdb.ontotext.com/documentation/standard/lucene-graphdb-connector.html#adding-updating-and-deleting-data

The only way I can see to do this would be to store the substrings themselves in the standoff tags. But this would mean duplicating a lot of text in the triplestore, and would make importing data even slower.

@benjamingeer (Author):

A drawback of XQuery is that I don't see any way to search within overlapping hierarchies. For example, given this document from one of our tests:

<?xml version="1.0" encoding="UTF-8"?>
<lg xmlns="http://www.example.org/ns1" xmlns:ns2="http://www.example.org/ns2">
 <l>
  <seg foo="x" ns2:bar="y">Scorn not the sonnet;</seg>
  <ns2:s sID="s02"/>critic, you have frowned,</l>
 <l>Mindless of its just honours;<ns2:s eID="s02"/>
  <ns2:s sID="s03"/>with this key</l>
 <l>Shakespeare unlocked his heart;<ns2:s eID="s03"/>
  <ns2:s sID="s04"/>the melody</l>
 <l>Of this small lute gave ease to Petrarch's wound.<ns2:s eID="s04"/>
 </l>
</lg>

I can't figure out any way to find the ns2:s span containing "key" or "Shakespeare".

@benjamingeer (Author):

That function seems to be buggy:

eXist-db/exist#2316

@benjamingeer (Author):

More implementations here:

https://wiki.tei-c.org/index.php/Milestone-chunk.xquery
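
The general idea, sketched roughly for the document above (this is not one of the linked implementations, and the document path is a hypothetical placeholder): collect the text nodes lying between a start milestone and its matching end milestone, using document-order comparisons.

xquery version "3.1";

declare namespace ns2 = "http://www.example.org/ns2";

(: Hypothetical location of the test document shown earlier. :)
let $doc := doc("/db/tests/sonnet.xml")
for $start in $doc//ns2:s[@sID]
(: The end milestone whose eID matches this start milestone's sID. :)
let $end := $doc//ns2:s[@eID = $start/@sID]
(: All text nodes strictly between the two milestones, in document order. :)
let $span := string-join($doc//text()[. >> $start][. << $end], " ")
where contains(lower-case($span), "shakespeare")
return <span sID="{$start/@sID}">{$span}</span>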

@benjamingeer (Author):

In any case, I would expect this to be very slow, because you can't make a Lucene index for the content between two milestones, only for the content of an ordinary XML element.

@benjamingeer (Author) commented Jan 10, 2020:

Full-text search in different open-source XML databases:

| XML database | Full-text search |
| --- | --- |
| eXist-db | implementation-specific full-text search feature based on Lucene |
| BaseX | W3C XQuery and XPath Full Text 1.0 |
| Zorba | W3C XQuery and XPath Full Text 1.0 |

This means that to support multiple XML databases, we would need to develop something like Gravsearch for XQuery, and generate the appropriate syntax for each database.

There is also an XQuery and XPath Full Text 3.0 spec, but I haven't found any implementations.
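
For comparison, the same search in the two dialects (hedged sketches based on the documented syntax, reusing the earlier example):

(: eXist-db: implementation-specific Lucene binding :)
collection("/db/books")//p[.//noun[ft:query(., "soul")]]

(: W3C XQuery and XPath Full Text, as implemented by BaseX and Zorba :)
collection("/db/books")//p[.//noun contains text "soul"]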

@benjamingeer (Author):

MarkLogic is a commercial database server that supports both RDF and XML. You can mix SPARQL and XQuery in a single query:

https://docs.marklogic.com/guide/semantics/semantic-searches#id_77935
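
A hedged sketch of what that mixing looks like in MarkLogic's XQuery dialect, per the linked docs (the triple pattern is a made-up example):

xquery version "1.0-ml";
import module namespace sem = "http://marklogic.com/semantics"
    at "/MarkLogic/semantics.xqy";

(: Run a SPARQL query from within XQuery. :)
sem:sparql('
    SELECT ?title
    WHERE { ?book <http://example.org/title> ?title }
')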

@subotic added this to the Backlog milestone Feb 7, 2020
@adamretter:

@benjamingeer there is also an RDF+SPARQL plugin for eXist-db that integrates Apache Jena - https://github.com/ljo/exist-sparql

You might also be interested in FusionDB as an alternative to eXist-db; it is 100% API-compatible with eXist-db.

@lrosenth (Contributor) commented Mar 2, 2020 via email

@irinaschubert removed this from the Backlog milestone Dec 9, 2021