Hi @romainruaud
I am configuring Elasticsearch within the Smile ElasticSuite module and have run into an issue with the `content` field mapping. The `content` field is defined as `type: "text"`, but its top-level analyzer is `keyword`, so the data is not tokenized as expected for full-text search.
Current mapping (for the `content` field):

```json
"content" : {
  "type" : "text",
  "fields" : {
    "standard" : {
      "type" : "text",
      "analyzer" : "standard"
    },
    "untouched" : {
      "type" : "keyword",
      "ignore_above" : 256,
      "normalizer" : "untouched"
    }
  },
  "copy_to" : [ "search" ],
  "norms" : false,
  "analyzer" : "keyword"
}
```
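For comparison, this is the kind of mapping I would expect instead: the top-level `analyzer` set to a tokenizing analyzer such as `standard`, with the existing sub-fields left unchanged. This is only a sketch of the intended result, not something taken from the ElasticSuite codebase:

```json
"content" : {
  "type" : "text",
  "analyzer" : "standard",
  "fields" : {
    "standard" : {
      "type" : "text",
      "analyzer" : "standard"
    },
    "untouched" : {
      "type" : "keyword",
      "ignore_above" : 256,
      "normalizer" : "untouched"
    }
  },
  "copy_to" : [ "search" ],
  "norms" : false
}
```

Note that Elasticsearch does not allow changing the analyzer of an existing field in place, so applying a mapping like this would require creating a new index and reindexing.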
Issue
Questions
File: `elasticsuite_indices.xml`
NOTE: The `content` field holds a large amount of data.
I would appreciate any guidance on how to apply these changes and ensure minimal disruption to our existing setup.
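For context, the usual way to apply an analyzer change with minimal disruption is to create a new index with the corrected mapping, copy the data over with the Reindex API, and then swap an alias. The index and alias names below are placeholders, not the actual ElasticSuite index names:

```
PUT /content_index_v2
{
  "mappings": {
    ...corrected mapping with the new analyzer...
  }
}

POST /_reindex
{
  "source": { "index": "content_index_v1" },
  "dest":   { "index": "content_index_v2" }
}

POST /_aliases
{
  "actions": [
    { "remove": { "index": "content_index_v1", "alias": "content_index" } },
    { "add":    { "index": "content_index_v2", "alias": "content_index" } }
  ]
}
```

Given the volume of data in the `content` field, reindexing may take a while, so guidance on whether ElasticSuite's own full-reindex process handles this, or whether a manual reindex is needed, would be especially helpful.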
Thanks in advance for your assistance!