flink-issues mailing list archives

From "Simya Jose (Jira)" <j...@apache.org>
Subject [jira] [Created] (FLINK-14955) Not able to write to swift via StreamingFileSink.forBulkFormat
Date Tue, 26 Nov 2019 18:07:00 GMT
Simya Jose created FLINK-14955:
----------------------------------

             Summary: Not able to write to swift via StreamingFileSink.forBulkFormat
                 Key: FLINK-14955
                 URL: https://issues.apache.org/jira/browse/FLINK-14955
             Project: Flink
          Issue Type: Bug
            Reporter: Simya Jose


Not able to use StreamingFileSink to write to Swift file storage.

 

*Environment*:

Flink version: 1.9.1
Scala version: 2.11
Build tool: Maven

*Code* (main part):

import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.core.fs.Path
import org.apache.flink.formats.parquet.ParquetWriterFactory
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink

// env (StreamExecutionEnvironment) and capHadoopPath are defined elsewhere;
// EligibleItem is a case class whose definition is omitted in this report.
val eligibleItems: DataStream[EligibleItem] = env.fromCollection(Seq(
  EligibleItem("pencil"),
  EligibleItem("rubber"),
  EligibleItem("beer")))(TypeInformation.of(classOf[EligibleItem]))

val factory2: ParquetWriterFactory[EligibleItem] =
  ParquetAvroWriters.forReflectRecord(classOf[EligibleItem])

val sink: StreamingFileSink[EligibleItem] = StreamingFileSink
  .forBulkFormat(new Path(capHadoopPath), factory2)
  .build()

eligibleItems.addSink(sink)
  .setParallelism(1)
  .uid("TEST_1")
  .name("TEST")

*Scenario*: when the path points to Swift (capHadoopPath = "swift://<path>"), the job fails with the following exception:

_java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only supported for HDFS and for Hadoop version 2.7 or newer
 at org.apache.flink.fs.openstackhadoop.shaded.org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.<init>(HadoopRecoverableWriter.java:57)_
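For context: StreamingFileSink's bulk format requires a RecoverableWriter, and the exception message says the Hadoop-based writer only provides one for HDFS paths on Hadoop 2.7 or newer, so a swift:// scheme is rejected regardless of Hadoop version. A self-contained sketch of that kind of guard (illustrative only, not Flink's actual source; the object and method names are made up for this example):

```scala
// Illustrative guard mirroring the constraint stated in the exception:
// recoverable writes require the "hdfs" scheme and Hadoop >= 2.7.
object RecoverableWriterCheck {
  def supportsRecoverableWrites(scheme: String, hadoopVersion: String): Boolean = {
    // Parse at most "major.minor" from a version string such as "2.8.3".
    val parts = hadoopVersion.split("\\.").take(2).map(_.toInt)
    val major = parts(0)
    val minor = if (parts.length > 1) parts(1) else 0
    scheme == "hdfs" && (major > 2 || (major == 2 && minor >= 7))
  }
}
```

Under this check, "swift" fails for any Hadoop version, which matches the behavior reported above.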



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
