spark-issues mailing list archives

From "shane knapp (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-26997) k8s integration tests failing after client upgraded to 4.1.2
Date Thu, 28 Feb 2019 00:11:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-26997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779881#comment-16779881 ]

shane knapp edited comment on SPARK-26997 at 2/28/19 12:10 AM:
---------------------------------------------------------------

great news, everyone!  :)

i was able to get everything upgraded on my staging box to the latest-n-greatest, and all
of the integration tests pass w/the 4.1.2 client.

here are the pertinent versions of all the things:

 
{noformat}
minikube + kvm2 driver: v0.34.1
k8s:  1.13.3
client:  4.1.2
{noformat}
 

 

TODO for each jenkins worker:

1) download + install latest minikube version

2) dist out a home-rolled kvm2 driver

3) change symlinks for minikube + docker-machine-driver-kvm2 to point to latest binaries

4) run minikube delete, then rm -rf ~/.kube ~/.minikube
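the four worker steps above could be scripted roughly like the following sketch. the install prefix, download URL, and the v0.34.1 file names are assumptions (on the real workers the prefix would be something like /usr/local/bin, and the kvm2 driver in step 2 is built and distributed separately, so it's stubbed here):

```shell
# Sketch of the per-worker upgrade; PREFIX is a scratch dir so this can be
# dry-run anywhere -- on a real worker it would be /usr/local/bin.
set -eu
PREFIX="${PREFIX:-$(mktemp -d)}"

# 1) download + install the latest minikube release
#    (real step: curl -Lo "$PREFIX/minikube-v0.34.1" \
#       https://storage.googleapis.com/minikube/releases/v0.34.1/minikube-linux-amd64)
touch "$PREFIX/minikube-v0.34.1"
chmod +x "$PREFIX/minikube-v0.34.1"

# 2) dist out the home-rolled kvm2 driver (built elsewhere; stubbed here)
touch "$PREFIX/docker-machine-driver-kvm2-v0.34.1"
chmod +x "$PREFIX/docker-machine-driver-kvm2-v0.34.1"

# 3) repoint the symlinks at the new binaries
ln -sf "$PREFIX/minikube-v0.34.1" "$PREFIX/minikube"
ln -sf "$PREFIX/docker-machine-driver-kvm2-v0.34.1" \
       "$PREFIX/docker-machine-driver-kvm2"

# 4) wipe old cluster state (a real worker would also run `minikube delete`)
rm -rf "$PREFIX/.kube" "$PREFIX/.minikube"
```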

 

TODO for the jenkins job config:

change the minikube start sequence to be the following:
{code:java}
$ minikube --vm-driver=kvm2 start --memory 6000 --cpus 8
$ kubectl create clusterrolebinding serviceaccounts-cluster-admin \
 --clusterrole=cluster-admin \
 --group=system:serviceaccounts{code}
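a hypothetical sanity check after the start sequence, assuming kubectl is on the PATH and its context points at the fresh minikube cluster (these will fail if no cluster is up):

```shell
# confirm client/server versions and that the clusterrolebinding took effect
kubectl version --short
kubectl get clusterrolebinding serviceaccounts-cluster-admin -o name
kubectl auth can-i create pods --as=system:serviceaccount:default:default
```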
 

only when these are done will we be able to re-merge [https://github.com/apache/spark/pull/23814] to
master, as well as back-port it to 2.4.

 

TIMING:

i can stage this stuff on the jenkins workers, and mid-next week (once the dust literally
settles) i can coordinate with whomever wants to merge/backport and make this work.

 

phew.  at least we now have some breathing room WRT deciding which k8s version/etc to test
against.



> k8s integration tests failing after client upgraded to 4.1.2
> ------------------------------------------------------------
>
>                 Key: SPARK-26997
>                 URL: https://issues.apache.org/jira/browse/SPARK-26997
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes
>    Affects Versions: 3.0.0
>            Reporter: Marcelo Vanzin
>            Priority: Critical
>
> SPARK-26742 upgraded the client libs to version 4.1.2, and that doesn't seem to agree
well with the minikube we're using in jenkins. My PRs are failing (minikube 0.25):
> {noformat}
> 19/02/25 17:46:52.599 ScalaTest-main-running-KubernetesSuite INFO ProcessUtils: 19/02/25
17:46:52 INFO ShutdownHookManager: Deleting directory /tmp/spark-3007689c-e3ca-48f5-a673-f3bad5c4774a
> 19/02/25 17:46:52.788 OkHttp https://192.168.39.69:8443/... ERROR ExecWebSocketListener:
Exec Failure: HTTP:500. Message:container not found ("spark-kubernetes-driver")
> java.net.ProtocolException: Expected HTTP 101 response but was '500 Internal Server Error'
> 	at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
> 	at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
> 	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
> 	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> 19/02/25 17:46:52.999 OkHttp https://192.168.39.69:8443/... ERROR ExecWebSocketListener:
Exec Failure: HTTP:404. Message:404 page not found
> java.net.ProtocolException: Expected HTTP 101 response but was '404 Not Found'
> 	at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
> 	at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
> 	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
> 	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Tests pass on my local minikube (0.34). Reverting that change makes them pass on jenkins
(see https://github.com/apache/spark/pull/23893).
> Not sure if this is a client bug or a compatibility issue.
> [~shaneknapp] [~skonto]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

