flink-issues mailing list archives

From "Greg Hogan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-9061) add entropy to s3 path for better scalability
Date Thu, 19 Jul 2018 14:59:01 GMT

    [ https://issues.apache.org/jira/browse/FLINK-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549386#comment-16549386 ]

Greg Hogan commented on FLINK-9061:
-----------------------------------

Not that we shouldn't implement the general-purpose solution, but Amazon appears to have increased
the PUT rate from 100 to 3,500 requests per second and the GET rate from 300 to 5,500:

"This S3 request rate performance increase removes any previous guidance to randomize object
prefixes to achieve faster performance. That means you can now use logical or sequential naming
patterns in S3 object naming without any performance implications."

https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/

> add entropy to s3 path for better scalability
> ---------------------------------------------
>
>                 Key: FLINK-9061
>                 URL: https://issues.apache.org/jira/browse/FLINK-9061
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, State Backends, Checkpointing
>    Affects Versions: 1.5.0, 1.4.2
>            Reporter: Jamie Grier
>            Assignee: Indrajit Roychoudhury
>            Priority: Critical
>              Labels: pull-request-available
>
> I think we need to modify the way we write checkpoints to S3 for high-scale jobs (those
with many total tasks).  The issue is that we are writing all the checkpoint data under a
common key prefix.  This is the worst-case scenario for S3 performance, since S3 partitions
objects by key prefix.
>  
> In the worst case checkpoints fail with a 500 status code coming back from S3 and an
internal error type of TooBusyException.
>  
> One possible solution would be to add a hook in the Flink filesystem code that allows
me to "rewrite" paths.  For example say I have the checkpoint directory set to:
>  
> s3://bucket/flink/checkpoints
>  
> I would hook that and rewrite that path to:
>  
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original path
>  
> This would distribute the checkpoint write load around the S3 cluster evenly.
>  
> For reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
>  
> Any other people hit this issue?  Any other ideas for solutions?  This is a pretty
serious problem for people trying to checkpoint to S3.
>  
> -Jamie
>  
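The hash-prefix rewrite described above can be sketched roughly as follows. This is a minimal, hypothetical illustration of the idea, not Flink's actual filesystem hook; the class and method names (`EntropyPathRewriter`, `rewrite`) are made up for this sketch, and a short MD5 prefix stands in for "the hash of the original path":

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of the path-rewriting idea from the issue: inject a
// short hash of the original checkpoint path as an extra key segment so
// checkpoint writes spread across S3's key-prefix partitions.
public class EntropyPathRewriter {

    // Rewrites "s3://bucket/flink/checkpoints" to
    // "s3://bucket/<hash>/flink/checkpoints".
    public static String rewrite(String s3Path) {
        String scheme = "s3://";
        if (!s3Path.startsWith(scheme)) {
            return s3Path; // leave non-S3 paths untouched
        }
        String rest = s3Path.substring(scheme.length());
        int slash = rest.indexOf('/');
        if (slash < 0) {
            return s3Path; // bucket only, no key to rewrite
        }
        String bucket = rest.substring(0, slash);
        String key = rest.substring(slash + 1);

        // A short hex hash of the original path supplies the entropy.
        // Deterministic, so the same configured path always maps to the
        // same rewritten location.
        byte[] digest;
        try {
            digest = MessageDigest.getInstance("MD5")
                    .digest(s3Path.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 unavailable", e); // never happens on a JVM
        }
        StringBuilder hash = new StringBuilder();
        for (int i = 0; i < 2; i++) {
            hash.append(String.format("%02x", digest[i] & 0xff));
        }
        return scheme + bucket + "/" + hash + "/" + key;
    }
}
```

With many jobs (and thus many distinct checkpoint paths), the hash segments land in different S3 partitions, which is the load-spreading effect the issue asks for.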



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
