spark-user mailing list archives

From Paul Tremblay <>
Subject pyspark bug with PYTHONHASHSEED
Date Sat, 01 Apr 2017 19:43:29 GMT
When I try to do a groupByKey() in my Spark environment, I get the error
described here:

In order to attempt to fix the problem, I set up my ipython environment to
set PYTHONHASHSEED before launch:


When I fire up my ipython shell and do:

In [7]: hash("foo")
Out[7]: -2457967226571033580

In [8]: hash("foo")
Out[8]: -2457967226571033580

So hash() is now seeded and returns consistent values. But when I do a
groupByKey(), I get the same error:

Exception: Randomness of hash of string should be disabled via
PYTHONHASHSEED
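If it helps narrow things down, my understanding (please correct me if I'm wrong) is that CPython seeds string hashing once at interpreter startup, so setting PYTHONHASHSEED inside an already-running shell affects only that process; Spark's Python workers are separate interpreter processes and need the variable in their own environment. A quick sketch, without Spark, that shows the per-process seeding:

```python
# Plain-Python demonstration: PYTHONHASHSEED only takes effect at
# interpreter startup, so each fresh process must receive it in its
# environment. (hash_foo is just an illustrative helper, not Spark API.)
import os
import subprocess
import sys

def hash_foo(seed):
    """Run hash("foo") in a fresh interpreter with the given PYTHONHASHSEED."""
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.run(
        [sys.executable, "-c", 'print(hash("foo"))'],
        env=env, capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# Two fresh interpreters started with the same seed agree with each other,
# which is what Spark needs across its worker processes.
print(hash_foo(0) == hash_foo(0))
```

So presumably the seed has to reach the executors' Python processes too, not just the driver shell, though I'm not sure of the right way to arrange that.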
Does anyone know how to fix this problem in Python 3.4?



Paul Henry Tremblay
Robert Half Technology
