flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-1753) Add more tests for Kafka Connectors
Date Thu, 16 Apr 2015 07:29:59 GMT

    [ https://issues.apache.org/jira/browse/FLINK-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497667#comment-14497667 ]

ASF GitHub Bot commented on FLINK-1753:
---------------------------------------

Github user mbalassi commented on a diff in the pull request:

    https://github.com/apache/flink/pull/603#discussion_r28488904
  
    --- Diff: flink-staging/flink-streaming/flink-streaming-connectors/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaITCase.java
---
    @@ -709,4 +834,72 @@ private static KafkaServer getKafkaServer(int brokerId, File tmpFolder) throws U
     		private static final long serialVersionUID = 1L;
     	}
     
    +/*	@Test
    +	public void testKafkaWithoutFlink() {
    +
    +		// start consumer:
    +		new Thread(new Runnable() {
    +			@Override
    +			public void run() {
    +				LOG.info("Starting consumer");
    +				// consume from "testtopic"
    +
    +				Properties cprops = new Properties();
    +				cprops.put("zookeeper.connect", zookeeperConnectionString);
    +				cprops.put("group.id", "mygroupid");
    +				cprops.put("zookeeper.session.timeout.ms", "400");
    +				cprops.put("zookeeper.sync.time.ms", "200");
    +				cprops.put("auto.commit.interval.ms", "1000");
    +				ConsumerConnector jcc = Consumer.createJavaConsumerConnector(new ConsumerConfig(cprops));
    +				Map<String,Integer> tcm = new HashMap<String, Integer>(1);
    +				tcm.put("testtopic", 1);
    +				Map<String, List<KafkaStream<byte[], byte[]>>> ms = jcc.createMessageStreams(tcm);
    +				LOG.info("message streams "+ms);
    +				Assert.assertEquals(1, ms.size());
    +
    +				List<KafkaStream<byte[], byte[]>> strList = ms.get("testtopic");
    +				Assert.assertEquals(1, strList.size());
    +
    +				KafkaStream<byte[], byte[]> str = strList.get(0);
    +				LOG.info("str = "+str);
    +
    +				ConsumerIterator<byte[], byte[]> it = str.iterator();
    +				while(it.hasNext()) {
    +					MessageAndMetadata<byte[], byte[]> msg = it.next();
    +					LOG.info("msg = "+msg);
    +				}
    +				LOG.info("Finished consumer");
    +			}
    +		}).start();
    +
    +		// start producer:
    +		Properties props = new Properties();
    +		String brks = "";
    +		for(KafkaServer broker : brokers) {
    +			SocketServer srv = broker.socketServer();
    +			String host = "localhost";
    +			if(srv.host() != null) {
    +				host = srv.host();
    +			}
    +			brks += host+":"+broker.socketServer().port()+",";
    +		}
    +		LOG.info("Using broker list "+brks);
    +		props.setProperty("metadata.broker.list", brks);
    +		props.put("serializer.class", StringEncoder.class.getCanonicalName());
    +
    +		ProducerConfig config = new ProducerConfig(props);
    +		Producer p = new Producer<String, String>(config);
    +
    +		for(int i = 0; i < 10; i++) {
    +			p.send(new KeyedMessage<String, String>("testtopic", "What a message"));
    +		}
    +
    +		LOG.info("+++++++++++ FINISHED PRODUCING ++++++++++++++++ ");
    +
    +		try {
    +			Thread.sleep(1000000);
    +		} catch (InterruptedException e) {
    +			e.printStackTrace();
    +		}
    --- End diff --
    
    Why do you have this as a comment?


> Add more tests for Kafka Connectors
> -----------------------------------
>
>                 Key: FLINK-1753
>                 URL: https://issues.apache.org/jira/browse/FLINK-1753
>             Project: Flink
>          Issue Type: Improvement
>          Components: Streaming
>    Affects Versions: 0.9
>            Reporter: Robert Metzger
>            Assignee: Gábor Hermann
>
> The current {{KafkaITCase}} is only doing a single test.
> We need to refactor that test so that it brings up a Kafka/Zookeeper server and then performs various tests:
> Tests to include:
> - A topology with non-string types MERGED IN 359b39c3
> - A topology with a custom Kafka Partitioning class MERGED IN 359b39c3
> - A topology testing the regular {{KafkaSource}}. MERGED IN 359b39c3
> - Kafka broker failure MERGED IN cb34e976
> - Flink TaskManager failure
> - Test with large records (up to 30 MB). (A hedged sketch of this case follows below the quoted issue.)
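
For illustration only, not code from the PR or the issue: a minimal sketch of how the "large records" case might be exercised with the same Kafka 0.8 client APIs used in the diff above (old Scala producer plus the high-level consumer). The topic name, group id, size limits, and the way the broker list and ZooKeeper connection string are passed in are assumptions; a real test would also need the embedded broker's message.max.bytes raised above the default.

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    import org.junit.Assert;

    public class LargeRecordSketch {

        // Sketch only. brokerList e.g. "localhost:9092", zookeeperConnect e.g. "localhost:2181";
        // in the real test these would come from the embedded broker/ZooKeeper setup.
        public static void roundTripLargeRecord(String brokerList, String zookeeperConnect) {
            byte[] payload = new byte[30 * 1024 * 1024]; // ~30 MB, as mentioned in the issue

            // Producer side: old Scala producer, byte[] payload via DefaultEncoder.
            Properties pprops = new Properties();
            pprops.setProperty("metadata.broker.list", brokerList);
            pprops.setProperty("serializer.class", "kafka.serializer.DefaultEncoder");
            Producer<byte[], byte[]> producer = new Producer<byte[], byte[]>(new ProducerConfig(pprops));
            producer.send(new KeyedMessage<byte[], byte[]>("largeRecordTopic", payload));
            producer.close();

            // Consumer side: high-level consumer. fetch.message.max.bytes must be at least
            // the record size, and the broker's message.max.bytes must allow it (assumption).
            Properties cprops = new Properties();
            cprops.put("zookeeper.connect", zookeeperConnect);
            cprops.put("group.id", "large-record-group");
            cprops.put("auto.offset.reset", "smallest");
            cprops.put("fetch.message.max.bytes", Integer.toString(32 * 1024 * 1024));
            ConsumerConnector consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(cprops));

            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    consumer.createMessageStreams(Collections.singletonMap("largeRecordTopic", 1));
            ConsumerIterator<byte[], byte[]> it = streams.get("largeRecordTopic").get(0).iterator();

            // The record should come back in one piece with the original size.
            Assert.assertEquals(payload.length, it.next().message().length);
            consumer.shutdown();
        }
    }
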



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
