jackrabbit-oak-issues mailing list archives

From "Matt Ryan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (OAK-8520) [Direct Binary Access] Avoid overwriting existing binaries via direct binary upload
Date Thu, 08 Aug 2019 19:19:00 GMT

    [ https://issues.apache.org/jira/browse/OAK-8520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903251#comment-16903251 ]

Matt Ryan commented on OAK-8520:

I created JCR-4463 suggesting a possible documentation change to the {{JackrabbitValueFactory.completeBinaryUpload()}}
method reflecting this behavior. The documentation does not claim that the behavior is
different from what is implemented; rather, it is simply not clear on the point. Making it
clear would be helpful.

> [Direct Binary Access] Avoid overwriting existing binaries via direct binary upload
> -----------------------------------------------------------------------------------
>                 Key: OAK-8520
>                 URL: https://issues.apache.org/jira/browse/OAK-8520
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: blob-cloud, blob-cloud-azure, blob-plugins
>            Reporter: Matt Ryan
>            Assignee: Matt Ryan
>            Priority: Major
>             Fix For: 1.18.0, 1.10.4
> Since direct binary upload generates a unique blob ID for each upload, it is generally
> impossible to overwrite an existing binary. However, if a client issues the {{completeBinaryUpload()}}
> call more than once with the same upload token, it is possible to overwrite an existing binary.
> One use case where this can happen is if a client call to complete the upload times out.
> Lacking a successful return, a client could assume that it needs to repeat the call to complete
> the upload. If the binary was already uploaded before, the subsequent call to complete the
> upload would have the effect of overwriting the binary with new content generated from any
> uncommitted uploaded blocks. In practice there are usually no uncommitted blocks, so this
> generates a zero-length binary.
> There may be a legitimate use case for a zero-length binary, so simply failing in such a case is
> not sufficient.
> One easy way to handle this would be to simply check for the existence of the binary
> before completing the upload. This would have the effect of making uploaded binaries unmodifiable
> by the client. In such a case, the implementation could throw an exception indicating that
> the binary already exists and cannot be written again.
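The existence check described above could be sketched as follows. This is a minimal, hypothetical illustration using an in-memory store, not the Oak blob store implementation; the names {{BlobStoreSketch}}, {{BinaryExistsException}}, {{initiateUpload()}}, and {{completeUpload()}} are invented for the example and are not Oak API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical exception signaling that the target binary already exists.
class BinaryExistsException extends RuntimeException {
    BinaryExistsException(String blobId) {
        super("Binary already exists and cannot be written again: " + blobId);
    }
}

// In-memory stand-in for a blob store, to illustrate the proposed check.
class BlobStoreSketch {
    private final Map<String, byte[]> blobs = new HashMap<>();
    // Maps an upload token to the blob ID it will commit to.
    private final Map<String, String> tokenToBlobId = new HashMap<>();

    String initiateUpload(String blobId) {
        String token = "token-" + blobId; // stand-in for a signed upload token
        tokenToBlobId.put(token, blobId);
        return token;
    }

    // Completing the upload first checks whether the blob already exists.
    // A repeated completion with the same token (e.g. a client retry after
    // a timeout) then fails instead of silently overwriting the committed
    // binary with whatever uncommitted blocks remain (often none, which
    // would otherwise produce a zero-length binary).
    byte[] completeUpload(String token, byte[] uncommittedBlocks) {
        String blobId = tokenToBlobId.get(token);
        if (blobs.containsKey(blobId)) {
            throw new BinaryExistsException(blobId);
        }
        blobs.put(blobId, uncommittedBlocks);
        return blobs.get(blobId);
    }
}
```

With this check in place, the first {{completeUpload()}} succeeds and any repeat with the same token throws, which matches the behavior the issue proposes.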

This message was sent by Atlassian JIRA
