This repository has been archived by the owner on May 27, 2020. It is now read-only.
I'm using the following config to read data with SplitKeyMin and SplitKeyMax, but the whole collection gets loaded into the DataFrame. I'm using the latest version of Spark-MongoDB with Spark 2.0.0.
import com.stratio.datasource.mongodb._
import com.stratio.datasource.mongodb.config._
import com.stratio.datasource.mongodb.config.MongodbConfig._

val mongoConfig = MongodbConfigBuilder(
  Map(
    Credentials -> List(slaveCredentials),
    Host -> mongoHost,
    Database -> mongoDatabase,
    Collection -> mongoCollection,
    SamplingRatio -> 1.0,
    WriteConcern -> "normal",
    SplitSize -> "10",
    // Partition the read on created_at, restricted to a three-day window
    SplitKey -> "created_at",
    SplitKeyMin -> "2016-11-20T10:01:32.239Z",
    SplitKeyMax -> "2016-11-23T10:01:32.239Z",
    SplitKeyType -> "isoDate"
  )
).build()

val mongoDF = spark.sqlContext.fromMongoDB(mongoConfig)
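For reference, here is what those SplitKeyMin/SplitKeyMax strings correspond to as dates. This is just standard Java 8 time parsing to illustrate the ISO-8601 format, not the connector's internal handling:

import java.time.Instant
import java.util.Date

// The bound strings are ISO-8601 UTC timestamps; parsed as instants
// they describe a three-day window on created_at.
val minBound: Date = Date.from(Instant.parse("2016-11-20T10:01:32.239Z"))
val maxBound: Date = Date.from(Instant.parse("2016-11-23T10:01:32.239Z"))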
It is not working because there is no index on the created_at field in my MongoDB collection.

It would be very helpful if this were included in the documentation, along with the format we must use in SplitKeyMin/SplitKeyMax for the isoDate data type.
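In case it helps others hitting this, a minimal sketch of creating such an index with Casbah (the MongoDB Scala driver Spark-MongoDB builds on); the host, database, and collection names here are hypothetical placeholders:

import com.mongodb.casbah.Imports._

// Hypothetical connection details; substitute your own.
val client = MongoClient("localhost", 27017)
val collection = client("myDatabase")("myCollection")

// An ascending index on created_at lets range queries on the split key
// use the index instead of scanning the whole collection.
collection.createIndex(MongoDBObject("created_at" -> 1))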