java - S3 bucket missing with spark-redshift


I am trying to read data from Redshift using spark-redshift and ran into the error below. I have created the buckets in S3 and am able to access them with sufficient credentials.
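For context, the read is set up roughly like this. This is a sketch, not the asker's actual code: the JDBC URL, table name, and credentials are placeholders, and only the `tempdir` bucket name (`redshift-spark`) is taken from the error message below.

```java
// Sketch of a spark-redshift read (spark-redshift 0.6.0, Spark 1.x API).
// URL, table, and credentials are placeholders, not real values.
import org.apache.spark.SparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class RedshiftRead {
    public static void main(String[] args) {
        SparkContext sc = new SparkContext();
        SQLContext sqlContext = new SQLContext(sc);

        DataFrame df = sqlContext.read()
            .format("com.databricks.spark.redshift")
            .option("url", "jdbc:redshift://<cluster>:5439/<db>?user=<user>&password=<password>")
            .option("dbtable", "<schema.table>")
            // Redshift UNLOADs the query results here, so this bucket
            // must already exist and be reachable from the cluster.
            .option("tempdir", "s3n://redshift-spark/s3redshift/")
            .load();

        df.show();
    }
}
```

The `tempdir` bucket is what the `S3ServiceException` in the trace is complaining about: Redshift itself lists and writes that bucket during UNLOAD, independently of the driver's own credentials.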

java.sql.SQLException: Amazon Invalid operation: S3ServiceException:The specified bucket does not exist,Status 404,Error NoSuchBucket,Rid aa6e01bf9bced7ed,ExtRid 7tqkpowu5lmdj9av3e0ehzdgg+e0yrrnyab5q+wcef0jpm134xheisnk1mx4cdzp,CanRetry 1
Details:
error:  S3ServiceException:The specified bucket does not exist,Status 404,Error NoSuchBucket,Rid aa6e01bf9bced7ed,ExtRid 7tqkpowu5lmdj9av3e0ehzdgg+e0yrrnyab5q+wcef0jpm134xheisnk1mx4cdzp,CanRetry 1
code:      8001
context:   Listing bucket=redshift-spark.s3.amazonaws.com prefix=s3redshift/3a312209-7d6d-4d6b-bbd4-c1a70b2e136b/
query:     0
location:  s3_unloader.cpp:200
process:   padbmaster [pid=4952]
-----------------------------------------------;
    at com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(ErrorResponse.java:1830)
    at com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(PGMessagingContext.java:804)
    at com.amazon.redshift.client.PGMessagingContext.handleMessage(PGMessagingContext.java:642)
    at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(InboundMessagesPipeline.java:312)
    at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(PGMessagingContext.java:1062)
    at com.amazon.redshift.client.PGMessagingContext.getErrorResponse(PGMessagingContext.java:1030)
    at com.amazon.redshift.client.PGClient.handleErrorsScenario2ForPrepareExecution(PGClient.java:2417)
    at com.amazon.redshift.client.PGClient.handleErrorsPrepareExecute(PGClient.java:2358)
    at com.amazon.redshift.client.PGClient.executePreparedStatement(PGClient.java:1358)
    at com.amazon.redshift.dataengine.PGQueryExecutor.executePreparedStatement(PGQueryExecutor.java:370)
    at com.amazon.redshift.dataengine.PGQueryExecutor.execute(PGQueryExecutor.java:245)
    at com.amazon.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source)
    at com.amazon.jdbc.common.SPreparedStatement.execute(Unknown Source)
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:101)
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:101)
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$2.apply(RedshiftJDBCWrapper.scala:119)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

I do have the bucket created in S3.

The issue was the spark-redshift version conflicting with the AWS Java SDK version. Updating the pom resolved the issue.

Updated pom.xml:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.10.22</version>
    <!--<version>1.7.4</version>-->
</dependency>
<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>spark-redshift_2.10</artifactId>
    <version>0.6.0</version>
</dependency>
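The commented-out 1.7.4 is the SDK version that commonly arrives transitively (for example via Hadoop's AWS dependencies; that provenance is an assumption, not something the answer states). If you want to confirm which `com.amazonaws` version actually wins on your classpath, Maven's standard dependency plugin can show it:

```shell
# List every artifact that pulls in com.amazonaws, making version conflicts visible
mvn dependency:tree -Dincludes=com.amazonaws
```

Any entry still resolving to 1.7.4 after the pom change points at a module that needs the same exclusion or version override.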
