spark-dev mailing list archives

From Ryan Blue <rb...@netflix.com.INVALID>
Subject Re: Changing how we compute release hashes
Date Fri, 16 Mar 2018 15:31:45 GMT
+1. It's possible to produce the same digest file with gpg, but the syntax of the
sha*sum utilities is easier to remember.
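For context, the sha*sum workflow being endorsed above might look like the following sketch. The file name is hypothetical, and `sha512sum` is the GNU coreutils tool (on macOS, `shasum -a 512` produces the same line format):

```shell
# Create a stand-in release artifact (hypothetical name, for illustration only).
echo "example release payload" > spark-x.y.z-bin.tgz

# Generate a checksum file in the "<hex digest>  <filename>" format that
# `sha512sum --check` / `shasum --check` understand.
sha512sum spark-x.y.z-bin.tgz > spark-x.y.z-bin.tgz.sha512

# Verify the download against the checksum file.
sha512sum --check spark-x.y.z-bin.tgz.sha512
# spark-x.y.z-bin.tgz: OK
```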

On Thu, Mar 15, 2018 at 9:01 PM, Nicholas Chammas <
nicholas.chammas@gmail.com> wrote:

> To verify that I’ve downloaded a Hadoop release correctly, I can just do
> this:
>
> $ shasum --check hadoop-2.7.5.tar.gz.sha256
> hadoop-2.7.5.tar.gz: OK
>
> However, since we generate Spark release hashes with GPG
> <https://github.com/apache/spark/blob/c2632edebd978716dbfa7874a2fc0a8f5a4a9951/dev/create-release/release-build.sh#L167-L168>,
> the resulting hash is in a format that doesn’t play well with any tools:
>
> $ shasum --check spark-2.3.0-bin-hadoop2.7.tgz.sha512
> shasum: spark-2.3.0-bin-hadoop2.7.tgz.sha512: no properly formatted SHA1 checksum lines found
>
> GPG doesn’t seem to offer a way to verify a file from a hash.
>
> I know I can always manipulate the SHA512 hash into a different format or
> just manually inspect it, but as a “quality of life” improvement can we
> change how we generate the SHA512 hash so that it plays nicely with shasum?
> If it’s too disruptive to change the format of the SHA512 hash, can we add
> a SHA256 hash to our releases in this format?
>
> I suppose if it’s not easy to update or add hashes to our existing
> releases, it may be too difficult to change anything here. But I’m not
> sure, so I thought I’d ask.
>
> Nick
>
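The manual manipulation Nick mentions is mechanical: a gpg-style digest is the
file name, a colon, then uppercase hex in space-separated groups, while the
sha*sum tools want lowercase hex, two spaces, then the file name. The sketch
below simulates a gpg-style line from a real digest (file name hypothetical)
and converts it back into a line `sha512sum --check` accepts:

```shell
# Stand-in artifact and its real digest.
echo "example payload" > demo.tgz
hash=$(sha512sum demo.tgz | cut -d' ' -f1)

# Simulate a gpg-style line: "demo.tgz: ABCD EF01 ..." (uppercase, groups of 4).
gpg_style="demo.tgz: $(echo "$hash" | tr 'a-f' 'A-F' | sed 's/..../& /g')"

# Convert: drop the "name:" prefix, strip spaces, lowercase, re-append the name.
converted="$(echo "$gpg_style" | cut -d: -f2 | tr -d ' ' | tr 'A-F' 'a-f')  demo.tgz"
echo "$converted" > demo.tgz.sha512

# The converted line now verifies with the standard tool.
sha512sum --check demo.tgz.sha512
# demo.tgz: OK
```

(Real `gpg --print-md SHA512` output also wraps long digests across lines, so a
conversion script for actual release files would need to join lines first.)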



-- 
Ryan Blue
Software Engineer
Netflix
