spark-dev mailing list archives

From Holden Karau <hol...@pigscanfly.ca>
Subject Re: Getting the ball started on a 2.4.6 release
Date Thu, 23 Apr 2020 18:17:31 GMT
Tentatively, this is the list I'm planning to start backporting from. If no one
sees any issues with these, I'll create backport JIRAs for them for tracking
this afternoon.
SPARK-26390      ColumnPruning rule should only do column pruning
SPARK-25407      Allow nested access for non-existent field for Parquet file when nested pruning is enabled
SPARK-25559      Remove the unsupported predicates in Parquet when possible
SPARK-25860      Replace Literal(null, _) with FalseLiteral whenever possible
SPARK-27514      Skip collapsing windows with empty window expressions
SPARK-25338      Ensure to call super.beforeAll() and super.afterAll() in test cases
SPARK-27138      Remove AdminUtils calls (fixes deprecation)
SPARK-27981      Remove `Illegal reflective access` warning for `java.nio.Bits.unaligned()` in JDK9+
SPARK-26095      Disable parallelization in make-distribution.sh (avoids build hanging)
SPARK-25692      Remove static initialization of worker eventLoop handling chunk fetch requests within TransportContext; this fixes ChunkFetchIntegrationSuite as well
SPARK-26306      More memory to de-flake SorterSuite
SPARK-30199      Recover `spark.(ui|blockManager).port` from checkpoint
SPARK-27676      InMemoryFileIndex should respect spark.sql.files.ignoreMissingFiles (see the config sketch below)
SPARK-31047      Improve file listing for ViewFileSystem
SPARK-25595      Ignore corrupt Avro file if flag IGNORE_CORRUPT_FILES enabled

Maybe:
SPARK-27801       Delegate to ViewFileSystem during file listing correctly

Not yet merged:
SPARK-31485       Barrier execution hang if insufficient resources
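
For the two file-listing tickets above (SPARK-27676 and SPARK-25595), here is a
rough sketch of how the relevant flags would look on a 2.4.x spark-shell; the
spark-avro coordinates and the path in the comment are just illustrative:

  spark-shell \
    --packages org.apache.spark:spark-avro_2.11:2.4.5 \
    --conf spark.sql.files.ignoreMissingFiles=true \
    --conf spark.sql.files.ignoreCorruptFiles=true
  # With both flags set, files that disappear between listing and reading, or
  # corrupt Avro files, are skipped instead of failing the scan, e.g.:
  #   spark.read.format("avro").load("/data/events/*.avro").count()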

On Thu, Apr 23, 2020 at 9:13 AM Holden Karau <holden@pigscanfly.ca> wrote:

>
>
> On Thu, Apr 23, 2020 at 9:07 AM edeesis <edeesis@gmail.com> wrote:
>
>> There's information you can obtain from the Pod metadata via a `describe`
>> beyond what's in the logs, which are typically just what the application
>> itself prints.
>
> Would `get pods -w -o yaml` do the trick here, or is there information that
> wouldn’t be captured that way?
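
Roughly what I have in mind is something like the following (the label selector
assumes the spark-role label the K8s scheduler backend puts on executor pods;
treat it as a sketch, not a recommendation):

  # Stream the full Pod objects (spec, status, conditions) as they change:
  kubectl get pods -w -o yaml -l spark-role=executor
  # Events are separate objects, so the extra detail `kubectl describe pod`
  # shows would still need its own watch, something like:
  kubectl get events -w --field-selector involvedObject.kind=Pod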
>
>>
>>
>> I've also found that Spark has some trouble obtaining the reason for a K8s
>> executor death (as evidenced by the
>> spark.kubernetes.executor.lostCheck.maxAttempts config property).
>>
>> I admittedly don't know what should qualify for a backport, but considering
>> 3.0 is a major upgrade (Scala version, etc.), is there any room for being
>> more generous with backporting to 2.4?
>
> I’d like to revisit the conversation around a Spark 2.5 as a transitional
> release. I know that some people are already effectively maintaining a 2.4+
> with selective new-functionality backports internally. Maybe I’ll kick off
> that discussion, and it can help inform what we should be putting in 2.4.
>


-- 
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.):
https://amzn.to/2MaRAG9
YouTube Live Streams: https://www.youtube.com/user/holdenkarau
