spark-user mailing list archives

From simonhampe <simon.ha...@iteratec.com>
Subject Security vulnerabilities due to Jackson Databind
Date Mon, 06 Apr 2020 08:07:55 GMT
My question concerns Spark's dependency on somewhat older versions of
Jackson Databind (2.6.7 in Spark 2.4.5) and the potential security
vulnerabilities that come with that.

In my current project, company/project guidelines require that we scan all
our dependencies - including transitive ones! - for known security
vulnerabilities. We use Whitesource for that, but OWASP would probably
produce a similar result.

Concretely, our project uses the HDP framework in version 3.0.1.0. It ships a
homegrown build of Spark 2.3.1 which actually has a slightly newer version of
Jackson Databind (2.9.6). Nevertheless, Jackson Databind (at least before
2.10) is known to contain a myriad of security vulnerabilities, mostly rated
High and mostly due to polymorphic typing (see for example
https://medium.com/@cowtowncoder/on-jackson-cves-dont-panic-here-is-what-you-need-to-know-54cd0d6e8062).
With a pending migration away from HDP, we might have to switch to vanilla
Spark with an even older Jackson and significantly more vulnerabilities.
Naturally, we cannot use Spark 3 in a production environment as long as it is
only available as a preview release.
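For what it's worth, the Jackson project has published micro-patch releases on the 2.6.7 line of jackson-databind (2.6.7.1 through 2.6.7.3) largely because Spark pins that version. A sketch of forcing such a patch in a downstream build (assuming sbt; the exact patch version and binary compatibility with the Spark release in use would need to be verified):

```scala
// build.sbt fragment (sketch): override the transitive jackson-databind
// pulled in by Spark with a micro-patched release from the same 2.6.7 line.
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.6.7.3"
```

Maven users can achieve the same with a `dependencyManagement` entry; this only papers over the scanner findings for the patched CVEs, not for everything reported against pre-2.10 Jackson.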

Now I'm aware that most of these vulnerabilities are extremely specific.
Nevertheless, we are required to regularly verify that we are not affected,
usually by checking our classpaths for the relevant gadget classes. Not only
is that a lot of time spent rather unproductively; the sheer number of
potential vulnerabilities could present an unacceptable risk in itself
according to company guidelines.
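The gadget-class check described above can be partly automated. A minimal sketch (the `GadgetProbe` name is mine, and the class names are just examples taken from published Jackson Databind advisories; a real audit would use the full gadget list for the CVEs flagged by the scanner):

```scala
object GadgetProbe {
  // Example gadget classes named in published Jackson Databind advisories.
  val gadgets: Seq[String] = Seq(
    "com.sun.rowset.JdbcRowSetImpl",                        // CVE-2017-7525 family
    "ch.qos.logback.core.db.DriverManagerConnectionSource", // CVE-2019-12384
    "org.apache.xalan.xsltc.trax.TemplatesImpl"
  )

  // A gadget class is only a concern if it is actually loadable at runtime;
  // Class.forName with initialize=false checks presence without running
  // static initializers.
  def isPresent(name: String): Boolean =
    try { Class.forName(name, false, getClass.getClassLoader); true }
    catch { case _: ClassNotFoundException => false }

  def main(args: Array[String]): Unit =
    gadgets.foreach { n =>
      println((if (isPresent(n)) "FOUND  " else "absent ") + n)
    }
}
```

Run with the application's full runtime classpath (e.g. via `spark-submit`), since gadget classes usually arrive through transitive dependencies rather than application code.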

Hence my question: Can someone provide a qualified estimate of whether, or to
what extent, Spark is affected by the Jackson Databind polymorphic typing
issues? I know this might be a difficult question, considering that Spark is
a large project with a large number of (transitive) dependencies. I'd be very
interested to know whether others have similar issues, or whether there are
reasons why these CVEs are perhaps irrelevant here.



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org

