Support invisible XML #48
Comments
Should -- and if so, how would -- iXML support be an optional feature of an implementation?
Definitely. An implementation does not need to support any particular converter at all. (Indeed, they could "implement" it by just failing with a schema error if the Schematron schema has a sch:conversion element.) And, despite the heading, the markup suggested is a general mechanism, not tied to iXML specifically: iXML is just the motivating example. The sch:conversion element etc. can be used for anything that takes a resource (i.e. a string) and converts it to XML. For example, the same mechanism could be used to read encrypted files into a variable (decrypted and parsed as XML), where the content of the sch:conversion is a public key or whatever. So the markup does two things:
For example, say some HL7 schema uses information that is stored in JSON: our schema specifies that it requires ixml to read this document into a variable as XML. Any developer charged with developing the validation system looks at the schema, and sees that it needs an iXML converter. They select or develop their system accordingly.
So an implementation would register a converter for some common name: e.g. for "ixml". If the schema comes along using some different name (e.g. "InvisibleXML") then the implementation needs configuration for that. (N.b. I think using URLs or MIME types for naming the converter would be over-engineering: the problem URLs solve is name clashes, not what a name or URL corresponds to.)
I think doing this would be to extend Schematron beyond its core purpose, and there are other existing tools, such as XProc, which would be better placed to serve this need by orchestrating conversion to XML and supplying the result to Schematron. The scenarios described seem to me to be clear cases for pipelining, which other tools such as XProc are designed specifically to handle already. If this enhancement were approved, I agree it should definitely be optional in implementations. @rjelliffe , your three "other scenarios" (descriptions beginning "We want...") - can you elaborate on exactly why these are wanted, with concrete examples?
@AndrewSales : I have updated the original post and added examples for 3) and 2) as requested.
Many thanks, @rjelliffe .
Just as a note of interest (only to me): the schema language I was working on in 1999 immediately before Schematron was called "XML Notation Schemas". https://schematron.com/document/3046.html?publicationid= This current proposal for converters of non-XML to allow validation is a way, finally, to implement what XML Notation Schemas was proposing then! Instead of vaguely talking of "BNF" we now have the more concrete iXML. I toyed with the idea in Schematron v1.1 of adding something like this, but de-prioritized it when no obvious method sprang to mind, and because XSLT 1 was not very capable. The idea of this language was to support specification and validation of embedded notations (as distinct from external files with some different notation) and to link them to validators/generators. So you could specify the lexical model for your notation (e.g. in regular expressions or BNF), then the notation would be tokenized by it, and these tokens could then be validated as if they were element names, e.g. by a content model. The XML Notation Schema allowed these complex data to be named and validated in an extensible way. I tried to get the XML Schema WG interested in the idea, but a corporate member there thought it was a competitor to types, which were really good, while notations were an SGML idea and therefore really bad. "Without types you can do nothing" he said: utter nonsense. So XSD supported regexes but not grammars, and it did not allow testing constraints within the regexes, or between some "captured text" of the regex and the rest of the document: so even with its regexes XSD again managed to extract the least bang-per-buck.
So how does Schematron talk to the XProc which invoked it?
For example, say I have an instance document to validate:
```
<info>
  ...
  <one-of-multiple-arbitrary-nested-element data="some string I want to parse with ixml 123"/>
  ...
</info>
```
In my schema proposal, I can read the attribute each time it is encountered into a variable, convert it to XML, and access that.
```
<sch:schema ...>
  <sch:conversion id="OOMANE-data" .../>
  ...
  <sch:rule context="one-of-multiple-arbitrary-nested-element">
    <sch:let name="data-as-xml" value="@data" using-conversion="OOMANE-data"/>
    <sch:assert test="$data-as-xml//thing = '123'">The data attribute
      should have a "123" thing</sch:assert>
  </sch:rule>
  ...
</sch:schema>
```
I don't see how XProc fits in. Is it supposed to duplicate the phase/pattern/rule/variable logic to identify strings, then pass them in somehow to Schematron at invocation time? And the schema would have to be written knowing what names the XProc was being written for.
I can see that XProc could fit in for scenario 1 (the main document is non-XML) only. But for the other scenarios, I don't see how.
Cheers
Rick
Preprocessing and 'in'-processing (for want of a better term) would both have their place, but I'm inclined to agree with @AndrewSales that Schematron doesn't necessarily need to be extended to do preprocessing when there are other, well-known or even standardised, ways to do preprocessing. If I wanted to validate CSS as a whole using Schematron, then I would probably preprocess the CSS into XML using something like iXML and validate the XML. If I wanted to validate the CSS that applies to particular elements, then I would probably preprocess the HTML+CSS using something like Transpect's CSS tools (https://github.com/transpect/css-tools) to annotate the HTML with attributes for individual properties and validate those. For validating the syntax of individual attribute values, compare:
and:
P.S. It turns out that email replies on issues can't contain Markdown, so the Markdown above isn't rendered as Markdown, and can't be edited to be recognised as such.
I agree that supporting something like iXML can be convenient, but I'm not sure this outweighs the complexity added to Schematron. Validation of non-XML inputs can already be done either by preprocessing or by calling functions that turn non-XML syntax into XML, e.g.:
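One example of such a function is XPath 3.1's json-to-xml(), which can already be called from an ordinary rule under an XSLT 3 query binding, with no extension needed. A minimal sketch (the record element, its data attribute, and the expected "thing" value are assumed):
```
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron" queryBinding="xslt3">
  <sch:pattern>
    <sch:rule context="record[@data]">
      <!-- json-to-xml() returns the standard XPath 3.1 XML representation of the JSON string -->
      <sch:let name="data-as-xml" value="json-to-xml(@data)"/>
      <sch:assert test="$data-as-xml//*:number[@key = 'thing'] = 123">
        The data attribute should contain a "thing" of 123.</sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```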
Background
Invisible XML is a simple system for a deterministic context-free transducer (specified with a non-deterministic context-free attribute grammar) that is worth supporting.
IXML can be considered useful both in itself and as a good example of a class of processing.
Scenarios
Obviously, a non-XML document converted into XML using an iXML grammar can be validated with Schematron. And a Schematron engine could have its own method to detect a non-XML document, run the conversion, and present the result to the Schematron validation.
However, there are three other scenarios.
1. We want to be able to validate a non-XML document directly, and we want the grammar to be used to be part of the Schematron schema, either directly or by name.
2. We want an sch:pattern/@document reference, if it downloads a non-XML resource, to convert that document to XML.
3. We want to be able to take some node value (such as an attribute's value), convert it to XML, and have that XML available in a variable.
3a. We want to take that variable and validate patterns in it.
Also, SVRL needs to be adjusted to cope.
Proposal
SVRL
As an initial minimal approach that leaves as much flexibility for implementers as possible, I propose to augment SVRL with svrl:active-pattern/svrl:conversion-failure, a container element that can contain any message from the parser. (As with URL retrieval failures, we are rather at the mercy of the library and implementation for the quality and user-targeting of the error message.)
See #47 for info.
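For example, a failed conversion might be reported along these lines (the attributes shown and the message text are only a sketch; the message content is whatever the parser emits):
```
<svrl:active-pattern name="vcard-checks">
  <!-- the conversion requested by the schema failed, so no assertions were evaluated -->
  <svrl:conversion-failure conversion="vcard-ixml">
    ixml: could not parse input at line 12, column 3
  </svrl:conversion-failure>
</svrl:active-pattern>
```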
Schematron
1) Main document from ixml
Schematron is augmented by a top-level element, sch:schema/sch:conversion, which registers a converter name for a MIME type or extension. The conversion can be given inline or by a reference.
As well as, or in addition to, @mime-type, we allow @filename to match on the filename by regex, e.g. *.ixml. Perhaps we can, for UNIXy reasons, allow @magic to look at the initial bytes of the file.
The sch:schema element is augmented by an attribute @use-conversion, which provides the conversion to use, e.g. as sketched below.
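A sketch of the markup being proposed (sch:conversion, @mime-type, @filename and @use-conversion are from the text above; the converter and href attributes, and the vCard vocabulary, are assumptions):
```
<sch:schema queryBinding="xslt2" use-conversion="vcard-ixml">
  <!-- register a named conversion: an iXML grammar, included here by reference -->
  <sch:conversion id="vcard-ixml" converter="ixml"
                  mime-type="text/vcard" filename=".*\.vcf"
                  href="grammars/vcard.ixml"/>
  <sch:pattern>
    <!-- rules address the XML that the converter produces from the non-XML main document -->
    <sch:rule context="vcard">
      <sch:assert test="fn and email">Every card needs a formatted name and an email address.</sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```
2) Pattern on external document from ixml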
The sch:pattern element is augmented by an attribute @use-conversion, which provides the conversion to use. @use-conversion can only be used if @document is present. If the retrieved resource is of MIME type */*xml then no conversion is performed (and an implementation-determined warning is generated). If there are two patterns with the same URL and conversion, the document should be re-used, not re-retrieved.
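A sketch for this case (the csv-to-xml conversion and the row/cell vocabulary it produces are assumptions; the documents attribute and its XPath value follow ISO Schematron 2016):
```
<!-- the retrieved resource is not XML, so the named conversion is applied before the pattern runs -->
<sch:pattern documents="'data/catalog.csv'" use-conversion="csv-to-xml">
  <sch:rule context="row">
    <sch:assert test="count(cell) = 5">Every row of the catalogue CSV should have five cells.</sch:assert>
  </sch:rule>
</sch:pattern>
```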
3) Parse node value into variable
To read some text from a node into a variable and convert it, the sch:let element is augmented by an attribute @use-conversion, which provides the conversion to use.
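A minimal sketch (the route attribute, the route-grammar conversion, and the waypoint vocabulary it produces are assumptions):
```
<sch:rule context="leg[@route]">
  <!-- @use-conversion names a conversion registered by a top-level sch:conversion -->
  <sch:let name="route-as-xml" value="@route" use-conversion="route-grammar"/>
  <sch:assert test="count($route-as-xml//waypoint) ge 2">
    A route should contain at least two waypoints.</sch:assert>
</sch:rule>
```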
3a) Validate variables using patterns
However, there is no obvious mechanism to make patterns validate a variable's value. That is a more general facility that would be a separate proposal, probably only needed if this proposal is accepted.
Examples for 3) Parse node value into variable
There are numerous examples of complex data formats used for attributes and data content: URLs, and even CSV. There are many cases where it is not practical or desirable to represent the atomic components of some complex data using elements: because of verbosity, for example, or because there is an industry-standard idiom or notation that is what is being marked up.
Currently, Schematron fails in its core task of finding patterns in documents whenever the document contains these complex field values.
ISO 8601
Our document is a large book catalog, where each book has a date using ISO 8601. This is not the subset used by XSD, but the full ISO 8601 date format. So, we have an element like the one sketched below.
(For ISO 8601, the % means approximate, the ? means uncertain, the X is a wildcard, and the / indicates a date range; it also allows omitting the day. Things like timezones etc. are not shown.)
We want to validate that the author-active-date range fits in the author-life-span range, that the creation date fits in the author-active range, and that the publication-date is later than the creation-date. We have a converter from the complete ISO 8601 date format to XML (whether this is iXML or some regex converter is not material), so we can have the complex expressions sitting as nice sets of XDM nodes.
And we can go on making the tests better, without having to worry about how to parse the data.
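A sketch of how this could look. The attribute names come from the constraints above; the date values, the iso8601 conversion name, and the date/@earliest/@latest shape of the converted values are assumptions about what the converter would emit:
```
<book author-life-span="1854/1900"
      author-active-date="1871?/1899"
      creation-date="188X-06%"
      publication-date="1890-01"/>
```
and the corresponding rule:
```
<sch:rule context="book">
  <!-- each variable holds something like <date earliest="1871-01-01" latest="1899-12-31"/> -->
  <sch:let name="life"      value="@author-life-span"   use-conversion="iso8601"/>
  <sch:let name="active"    value="@author-active-date" use-conversion="iso8601"/>
  <sch:let name="created"   value="@creation-date"      use-conversion="iso8601"/>
  <sch:let name="published" value="@publication-date"   use-conversion="iso8601"/>
  <sch:assert test="$active/date/@earliest ge $life/date/@earliest
                and $active/date/@latest   le $life/date/@latest">
    The author-active-date range should fall within the author-life-span range.</sch:assert>
  <sch:assert test="$created/date/@earliest ge $active/date/@earliest
                and $created/date/@latest   le $active/date/@latest">
    The creation date should fall within the author-active range.</sch:assert>
  <sch:assert test="$published/date/@earliest ge $created/date/@latest">
    The publication-date should be later than the creation-date.</sch:assert>
</sch:rule>
```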
Example: XPaths
For Schematron itself, we have many XPaths. Schematron validation has been held back because validators do not check the XPaths.
The Schematron schema for Schematron could invoke the converter for the XPaths and do various kinds of validation. For example, we could check that we are not using XPath 3 novelties when our schema's query language binding advertises that it only requires XSLT 1 or XSLT 2.
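For instance (a sketch: the xpath-grammar conversion and the element names of the parsed expression tree are assumptions), a rule over every test attribute might read:
```
<sch:rule context="sch:assert/@test | sch:report/@test">
  <sch:let name="parsed" value="." use-conversion="xpath-grammar"/>
  <!-- arrow and let expressions exist only in XPath 3 -->
  <sch:assert test="not(/sch:schema/@queryBinding = ('xslt', 'xslt2'))
                    or empty($parsed//(arrow-expr | let-expr))">
    This schema advertises XSLT 1/2 only, so its tests should not use XPath 3 constructs.</sch:assert>
</sch:rule>
```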
Example: Land Points
A mapping system specifies areas of land by surfaces bounded by some number of points, where the points have a northerly, easterly, and elevation value.
These are specified in a whitespace-separated list: N0 E0 H0 N1 E1 H1 ... Nn En Hn
We want to make sure that none of the points in the polygon overlap. We want to do this by exposing the data as tuples, rather than hiding it behind some complex function.
Method: again, we define an iXML grammar that converts the P element into a variable as sketched below, which is very explicit for validation.
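A sketch (the point-list conversion and the point/@n/@e/@h vocabulary it produces are assumptions):
```
<sch:rule context="P">
  <!-- "N0 E0 H0 N1 E1 H1 ..." becomes something like <point n="3021.5" e="440.2" h="12.0"/> per point -->
  <sch:let name="points" value="." use-conversion="point-list"/>
  <sch:assert test="every $p in $points//point satisfies
                    count($points//point[@n = $p/@n and @e = $p/@e and @h = $p/@h]) = 1">
    No two points in the polygon should coincide.</sch:assert>
</sch:rule>
```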
(I note that in fact using Schematron to validate geometry is a real application: the intersection of flight routes over Europe being the example I was informed of.)
Validate styles from CSS stylesheet
We are validating an XHTML document. It has a linked CSS stylesheet. We want to confirm that the CSS has selectors for all the stylenames used in the XHTML.
So we have a CSS parser in iXML (or whatever), and we read the stylesheet in as a string (if XPath does not support this, a standard function should be made, presumably).
So we have our CSS file as a top-level variable, as XML. The Schematron rules then handle looking up in that data.
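A sketch (the css-grammar conversion and the selector/class shape of its output are assumptions; unparsed-text() and resolve-uri() are the standard XPath functions for reading a resource as a string, available under an XSLT 2 or 3 binding):
```
<!-- read the linked stylesheet as a string, then convert it with the registered CSS grammar -->
<sch:let name="css"
         value="unparsed-text(resolve-uri((/*:html/*:head/*:link[@rel = 'stylesheet'])[1]/@href, base-uri(/)))"
         use-conversion="css-grammar"/>
<sch:pattern>
  <sch:rule context="*[@class]">
    <sch:assert test="every $c in tokenize(@class, '\s+') satisfies
                      $css//selector[class = $c]">
      Every class name used in the document should have a selector in the linked CSS.</sch:assert>
  </sch:rule>
</sch:pattern>
```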
(Of course, wild CSS has other issues: included stylesheets and so on. Being able to parse a stylesheet means that such things can start to be addressed, rather than us being stymied at the start.)
Example 2) Pattern on external document from ixml
Most of the Schematron projects I have been involved in over the years have involved AB testing: either testing that the information that was in the input document is also in the transformed document, mutatis mutandis, or that when a document is converted and then round-tripped back, it has the equivalent information as far as can be managed.
Database migration validation
Recently, I had a variation on this AB testing. A large, complex organization web-publishes large, complex XML dumps of its databases, produced by a large, complex pipeline. They had lost confidence with the passage of years and rust and moth, and decided that prudence dictated they make smaller chunks of data available using JSON and CSV (as well as XML).
However, for a particular reason, they did not have access to the code that produced the big XML. So they wanted to cross-check their new JSON/CSV API against the XML data dumps. For a particular reason, they were not interested in backward compatibility (for all the data in the XML, does it match the JSON/CSV API?) but in forward compatibility (for all the data in the new JSON/CSV API, does it match the XML?).
With the current proposal, this could be handled in Schematron along these lines:
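(Everything in this sketch is assumed for illustration: the API URL, the products-json conversion, and the record and product vocabularies.)
```
<!-- the instance being validated is the authoritative XML dump; keep a handle on its root -->
<sch:let name="dump" value="/"/>
<!-- forward compatibility: every record served by the new JSON API must also appear in the dump -->
<sch:pattern documents="'https://example.org/api/products.json'" use-conversion="products-json">
  <sch:rule context="record">
    <sch:let name="id" value="@id"/>
    <sch:assert test="$dump//product[@id = $id]/price = price">
      Each product in the JSON API should appear in the XML dump with the same price.</sch:assert>
  </sch:rule>
</sch:pattern>
```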