hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-14423) s3guard will set file length to -1 on a putObjectDirect(stream, -1) call
Date Mon, 15 May 2017 11:35:04 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16010355#comment-16010355 ]

Steve Loughran commented on HADOOP-14423:
-----------------------------------------

Stack trace, which won't quite match the s3guard branch and may not be reproducible there:
{code}
Running org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.154 sec <<< FAILURE!
- in org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream
testEncryption(org.apache.hadoop.fs.s3a.ITestS3AEncryptionSSECBlockOutputStream)  Time elapsed:
0.961 sec  <<< ERROR!
java.io.IOException: regular upload failed: java.lang.IllegalArgumentException: content length
is negative
	at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:205)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:456)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:368)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:159)
	at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.writeThenReadFile(AbstractS3ATestBase.java:135)
	at org.apache.hadoop.fs.s3a.AbstractTestS3AEncryption.validateEncryptionForFilesize(AbstractTestS3AEncryption.java:79)
	at org.apache.hadoop.fs.s3a.AbstractTestS3AEncryption.testEncryption(AbstractTestS3AEncryption.java:57)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.lang.IllegalArgumentException: content length is negative
	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2252)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1354)
	at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$3(WriteOperationHelper.java:392)
	at org.apache.hadoop.fs.s3a.AwsCall.execute(AwsCall.java:43)
	at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:390)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:439)
	at org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:432)
	at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
	at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{code}
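
For reference, the failure above originates in a Guava precondition check inside {{S3AFileSystem.finishedWrite()}}. A minimal sketch of that style of guard, assuming a simplified signature (the method body and argument names here are illustrative, not the actual Hadoop source):

{code}
import com.google.common.base.Preconditions;

public class FinishedWriteGuardSketch {

  /** Hypothetical stand-in for S3AFileSystem.finishedWrite(key, length). */
  static void finishedWrite(String key, long length) {
    // Rejects the -1 "write to the end of the stream" sentinel before the
    // length would be recorded in the s3guard metadata store.
    Preconditions.checkArgument(length >= 0, "content length is negative");
    // ... record (key, length) in the metadata store ...
  }

  public static void main(String[] args) {
    finishedWrite("test/file.bin", -1);  // throws IllegalArgumentException, as in the stack above
  }
}
{code}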

> s3guard will set file length to -1 on a putObjectDirect(stream, -1) call
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-14423
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14423
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0-alpha3
>            Reporter: Steve Loughran
>
> You can pass a negative number into {{S3AFileSystem.putObjectDirect}}, which means "put
> until the end of the stream". S3guard has been using this {{len}} argument as the file length;
> it needs to use the actual number of bytes uploaded. This is also relevant with client-side
> encryption, where the amount of data put is greater than the amount of data in the file or stream.
> Noted in the committer branch after I added some more assertions. I've changed it there by
> making {{S3AFS.putObjectDirect}} pull the content length to pass to {{finishedWrite()}} from
> the {{PutObjectResult}} instead. This can be picked into the s3guard branch.
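
A rough sketch of the change described above, not the committed patch: derive the number of bytes actually uploaded from the SDK result rather than trusting the caller-supplied {{len}}. The helper shape and the assumption that the result's {{ObjectMetadata}} carries a usable content length are both hypothetical:

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;

public class PutObjectLengthSketch {

  /**
   * Upload the object and return the byte count to report to finishedWrite().
   * Illustrative only: falls back to the caller-supplied length when the
   * result carries no metadata.
   */
  static long putAndMeasure(AmazonS3 s3, PutObjectRequest request, long callerLen) {
    PutObjectResult result = s3.putObject(request);
    ObjectMetadata md = result.getMetadata();
    // Prefer the size of what was actually uploaded over callerLen, which may
    // be -1 ("write until the end of the stream").
    return md != null ? md.getContentLength() : callerLen;
  }
}
{code}

The essential point is that whatever value reaches {{finishedWrite()}} must be the real byte count, never the -1 sentinel.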



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

