jackrabbit-oak-issues mailing list archives

From "Axel Hanikel (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (OAK-8515) Make the Azure Persistence timeouts configurable
Date Wed, 31 Jul 2019 09:22:00 GMT

    [ https://issues.apache.org/jira/browse/OAK-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896972#comment-16896972 ]

Axel Hanikel commented on OAK-8515:
-----------------------------------

IMHO giving up after retrying a few times is bad: when the segment store is trying to read
its journal file, it should keep trying no matter how long it takes. It is not up to the segment
store to decide how long is too long; any value is somewhat arbitrary. If the operation takes a long
time, the segment store should issue a warning from time to time, but if it throws, the failure
becomes the segment store's fault rather than Azure's or the network's.
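
As an illustration of the behaviour argued for above, here is a minimal sketch of a retry loop that never gives up and instead surfaces a warning at a fixed interval; the class, method, and interval names are illustrative and are not Oak's actual API:

{noformat}
// Sketch only: retry indefinitely, but log a warning every so often instead of
// giving up. "PatientJournalReader" and "JournalRead" are hypothetical names.
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PatientJournalReader {

    private static final Logger log = LoggerFactory.getLogger(PatientJournalReader.class);

    private static final long WARN_INTERVAL_MS = TimeUnit.MINUTES.toMillis(1);
    private static final long RETRY_DELAY_MS = TimeUnit.SECONDS.toMillis(5);

    // Hypothetical callback representing a single attempt to read the journal.
    public interface JournalRead<T> {
        T attempt() throws IOException;
    }

    public static <T> T readWithEndlessRetry(JournalRead<T> read) throws InterruptedException {
        long lastWarning = System.currentTimeMillis();
        while (true) {
            try {
                return read.attempt();
            } catch (IOException e) {
                long now = System.currentTimeMillis();
                if (now - lastWarning >= WARN_INTERVAL_MS) {
                    log.warn("Still unable to read the journal, retrying", e);
                    lastWarning = now;
                }
                Thread.sleep(RETRY_DELAY_MS);
            }
        }
    }
}
{noformat}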

> Make the Azure Persistence timeouts configurable
> ------------------------------------------------
>
>                 Key: OAK-8515
>                 URL: https://issues.apache.org/jira/browse/OAK-8515
>             Project: Jackrabbit Oak
>          Issue Type: Task
>          Components: segment-azure
>            Reporter: Tomek Rękawek
>            Assignee: Tomek Rękawek
>            Priority: Major
>             Fix For: 1.18.0
>
>         Attachments: OAK-8515.patch
>
>
> OAK-8406 introduced a timeout for the server-side execution in Azure cloud. This may cause issues like this:
> {noformat}
> Exception in thread "main" java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
>         at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:113)
>         at java.util.Iterator.forEachRemaining(Iterator.java:115)
>         at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
>         at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>         at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>         at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>         at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>         at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>         at org.apache.jackrabbit.oak.segment.azure.AzureSegmentArchiveReader.<init>(AzureSegmentArchiveReader.java:61)
>         at org.apache.jackrabbit.oak.segment.azure.AzureArchiveManager.forceOpen(AzureArchiveManager.java:103)
>         at org.apache.jackrabbit.oak.segment.azure.tool.SegmentStoreMigrator.migrateArchives(SegmentStoreMigrator.java:149)
>         at org.apache.jackrabbit.oak.segment.azure.tool.SegmentStoreMigrator.migrate(SegmentStoreMigrator.java:87)
> [...]
> Caused by: com.microsoft.azure.storage.StorageException: The client could not finish the operation within specified maximum execution timeout.
>         at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:233)
>         at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:109)
>         ... 14 more
> Caused by: java.util.concurrent.TimeoutException: The client could not finish the operation within specified maximum execution timeout.
>         at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:232)
>         ... 15 more
> {noformat}
> Let's make the timeouts configurable.
> //cc: [~frm], [~ierandra]
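
For context, the timeout in the stack trace is the client-side maximum execution time of the legacy Azure Storage SDK (com.microsoft.azure.storage). A rough sketch of how such a value could be made configurable, assuming a hypothetical azure.segment.timeout system property rather than Oak's actual configuration surface:

{noformat}
// Sketch only: wiring a configurable maximum execution timeout into the
// request options used for blob operations. The system property name is
// hypothetical; the real configuration mechanism in Oak may differ.
import com.microsoft.azure.storage.RetryExponentialRetry;
import com.microsoft.azure.storage.blob.BlobRequestOptions;

public class ConfigurableTimeouts {

    public static BlobRequestOptions createRequestOptions() {
        // Default: no client-side maximum execution time (keep retrying until the
        // retry policy gives up); a positive value enables the timeout.
        int maxExecutionTimeMs = Integer.getInteger("azure.segment.timeout", -1);

        BlobRequestOptions options = new BlobRequestOptions();
        options.setRetryPolicyFactory(new RetryExponentialRetry(3000, 5));
        if (maxExecutionTimeMs > 0) {
            options.setMaximumExecutionTimeInMs(maxExecutionTimeMs);
        }
        return options;
    }
}
{noformat}

The resulting options could then be passed to individual blob operations or installed as the client's default request options.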



