flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-10356) Add sanity checks to SpillingAdaptiveSpanningRecordDeserializer
Date Tue, 16 Oct 2018 04:44:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651108#comment-16651108 ]

ASF GitHub Bot commented on FLINK-10356:

zhijiangW commented on a change in pull request #6705: [FLINK-10356][network] add sanity checks
to SpillingAdaptiveSpanningRecordDeserializer
URL: https://github.com/apache/flink/pull/6705#discussion_r225395730

 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/SpillingAdaptiveSpanningRecordDeserializer.java
 @@ -137,7 +162,16 @@ else if (remaining == 0) {
 		// spanning record case
 		if (this.spanningWrapper.hasFullRecord()) {
 			// get the full record
-			target.read(this.spanningWrapper.getInputView());
+			try {
+				target.read(this.spanningWrapper.getInputView());
+			} catch (EOFException e) {
+				Optional<String> deserializationError = this.spanningWrapper.getDeserializationError(1);
 Review comment:
   I do not quite understand why we set `addToReadBytes` to 1 here.
   If `target.read` succeeds, we call `spanningWrapper.getDeserializationError(0)` in the
following `moveRemainderToNonSpanningDeserializer`, which makes sense; otherwise we call
`spanningWrapper.getDeserializationError(1)`.
   Wouldn't `spanningWrapper.getDeserializationError(0)` also be suitable for the exception
case? Since we only want to show some internal information during exceptions for debugging,
we would then need only one check.
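
For context, the pattern discussed in the diff can be sketched roughly as follows. The names `deserializationError` and the extra "1" for the failed read mirror the diff, but the error-tracking logic below is a simplified, hypothetical illustration, not Flink's actual implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.Optional;

// Simplified illustration (NOT Flink's actual code) of the pattern in the
// diff: on a failed read, surface a diagnostic describing how many bytes
// were consumed versus how many the record header promised.
public class EofDiagnosticSketch {

    // Hypothetical stand-in for spanningWrapper.getDeserializationError(...):
    // empty if the consumed byte count matches the promised record length.
    static Optional<String> deserializationError(int readBytes, int addToReadBytes, int recordLength) {
        int consumed = readBytes + addToReadBytes;
        return consumed == recordLength
            ? Optional.empty()
            : Optional.of("consumed " + consumed + " bytes, record length was " + recordLength);
    }

    static int readIntChecked(byte[] data, int recordLength) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        try {
            return in.readInt();
        } catch (EOFException e) {
            // as in the diff, the failed read attempt is accounted for here
            Optional<String> err = deserializationError(data.length, 1, recordLength);
            throw new IOException(err.orElse("unexpected EOF"), e);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readIntChecked(new byte[]{0, 0, 0, 42}, 4)); // prints 42
    }
}
```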

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

> Add sanity checks to SpillingAdaptiveSpanningRecordDeserializer
> ---------------------------------------------------------------
>                 Key: FLINK-10356
>                 URL: https://issues.apache.org/jira/browse/FLINK-10356
>             Project: Flink
>          Issue Type: Improvement
>          Components: Network
>    Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.6.0, 1.6.1, 1.7.0, 1.5.4
>            Reporter: Nico Kruber
>            Assignee: Nico Kruber
>            Priority: Major
>              Labels: pull-request-available
> {{SpillingAdaptiveSpanningRecordDeserializer}} has no consistency checks that it is used
correctly or that serializers behave properly, e.g. that a serializer reads only as many bytes
as are available/promised for that record. At least these checks should be added:
>  # Check that buffers have not been read from yet before adding them (this is an invariant
{{SpillingAdaptiveSpanningRecordDeserializer}} works with and, from what I can see, it is followed)
>  # Check that after deserialization, we actually consumed {{recordLength}} bytes
>  ** If not, in the spanning deserializer, we currently simply skip the remaining bytes.
>  ** But in the non-spanning deserializer, we currently continue from the wrong offset.
>  # Protect against {{setNextBuffer}} being called before draining all available records
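
Check #2 from the list above can be sketched in isolation. The counting wrapper below is a hypothetical illustration of the idea (verify consumed bytes against the promised record length), not Flink's implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch of check #2: after deserializing a record, verify that
// exactly recordLength bytes were consumed from the underlying buffer.
public class ConsumedLengthCheck {

    // toy "record": a single 4-byte int payload
    static int readRecord(DataInputStream in) throws IOException {
        return in.readInt();
    }

    static int readAndVerify(byte[] buffer, int recordLength) throws IOException {
        ByteArrayInputStream bytes = new ByteArrayInputStream(buffer);
        int before = bytes.available();
        int value = readRecord(new DataInputStream(bytes));
        int consumed = before - bytes.available();
        if (consumed != recordLength) {
            // a misbehaving serializer read too few or too many bytes
            throw new IOException("deserializer consumed " + consumed
                + " bytes, but record length was " + recordLength);
        }
        return value;
    }

    public static void main(String[] args) throws IOException {
        byte[] buffer = {0, 0, 0, 42}; // one 4-byte record
        System.out.println(readAndVerify(buffer, 4)); // prints 42
    }
}
```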

This message was sent by Atlassian JIRA
