How to set configuration to access files on S3 bucket #90
Comments
Try using:
Hi, I've added spark-core:2.0.0 as a hadoop-dependency, but it's not able to work with s3n because it doesn't include the necessary packages. So, like you said, I've gone back to the old S3 protocol: I'm using an s3:// URL and providing the access key and secret key as properties. Now I get this:

So the REST API gets the access and secret key but still refuses to access the file.
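For reference, wiring the keys for the old s3:// scheme looks roughly like this (a minimal sketch: the fs.s3.* names are the standard Hadoop properties, while the app name, bucket path, and key values are placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: credentials for the old s3:// scheme go into the
// job's Hadoop configuration. Placeholder values, not real keys;
// the master URL is expected to come from spark-submit.
val sc = new SparkContext(new SparkConf().setAppName("s3-smoke-test"))
sc.hadoopConfiguration.set("fs.s3.awsAccessKeyId", "YOUR_ACCESS_KEY")
sc.hadoopConfiguration.set("fs.s3.awsSecretAccessKey", "YOUR_SECRET_KEY")

// Quick check: fails fast if the credentials or the S3 filesystem
// classes (jets3t + the Hadoop S3 support) are the problem.
println(sc.textFile("s3://your-bucket/some/path").count())
```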
By the way, I checked the bug you opened on Druid regarding the classpath. I still get com.fasterxml.jackson.databind.JsonMappingException: Jackson version is too old 2.4.6. In fact, Druid's own dependency still comes first on the classpath, so it's still not working.
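A quick way to confirm which jackson-databind actually wins at runtime (a small debugging sketch, not specific to Druid or Spark; run it inside the job):

```scala
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.cfg.PackageVersion

// Debugging sketch: print the jackson-databind version and the jar
// it was loaded from, to see which copy is first on the classpath.
// getCodeSource can be null for bootstrap classes, but not for an
// application jar.
println("Jackson version: " + PackageVersion.VERSION)
println("Loaded from: " +
  classOf[ObjectMapper].getProtectionDomain.getCodeSource.getLocation)
```

If it prints 2.4.6 from a Druid jar, the Druid dependency is still shadowing the newer Jackson that Spark 2.0 expects.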
I'm still trying to get Druid 0.9.2 + Spark 2.0 working internally. Whenever it does work, the branch will be here: https://github.com/metamx/druid/tree/0.9.2-mmx

Right now I'm on https://github.com/metamx/druid/tree/0.9.2-mmx-fix3148, which is under very active modification. I have had to modify the Jackson version in the master Druid POM to accommodate Spark 2.0. A _2.10 release of this repository should still be compatible with a Spark 1.6.x deployment (assuming Scala 2.10), though.

In short, Spark 2.0 is causing various problems for me as well, and I haven't had a stable version running on Druid 0.9.2-rc yet.
Thanks, I'll switch back to 1.6.2.
OK, still nothing here. I ran pull-deps to get it. In fact, there is already a version of JetS3t in the Hadoop distribution. How can I tell whether the job failed on the Druid side or on the Spark cluster?
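The same classpath trick as the Jackson check above works for JetS3t (a sketch; the manifest version entry can be absent, hence the null guard):

```scala
// Sketch: locate the JetS3t jar on the classpath and read its
// manifest version. Class.forName is used so this compiles even
// without a compile-time JetS3t dependency.
val s3ServiceClass = Class.forName("org.jets3t.service.S3Service")
println("JetS3t loaded from: " +
  s3ServiceClass.getProtectionDomain.getCodeSource.getLocation)
println("JetS3t version: " +
  Option(s3ServiceClass.getPackage.getImplementationVersion)
    .getOrElse("unknown"))
```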
Hi
I managed to start the job, but my files are located in an S3 bucket, so I always get this error:

I've tried to set the keys every way I know; nothing helps.
I tried to put them in the properties, like this:
or in the context, like this:
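Roughly, the two usual variants look like this (a sketch assuming the s3n:// scheme; the app name and key values are placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Variant 1: as Spark properties. Spark copies any setting prefixed
// with "spark.hadoop." into the job's Hadoop configuration.
val conf = new SparkConf()
  .setAppName("s3-keys-test")
  .set("spark.hadoop.fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
  .set("spark.hadoop.fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")
val sc = new SparkContext(conf)

// Variant 2: directly on the context's Hadoop configuration.
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")
```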
I've also tried to set the S3 keys directly on the Spark cluster, but it seems that's not working anymore.
Do you have any idea what I can do?
Thanks