This is a great question, as I've heard similar concerns about Spark on Mesos.
When I started contributing to Spark on Mesos about half a year ago, the Mesos scheduler and related code hadn't received much attention from anyone and was pretty much in maintenance mode.
As a Mesos PMC member with a strong interest in Spark, I started refactoring and triaging the various JIRAs and PRs around the Mesos scheduler, then moved on to fixing bugs in Spark, adding documentation, and fixing related issues in Mesos itself.
Just recently, for 1.4, we've merged in cluster mode and Docker support, and there are also pending PRs around framework authentication, multi-role support, dynamic allocation, finer-tuned coarse-grained-mode scheduling configuration, etc.
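To give a sense of what the new cluster mode and Docker support look like in practice, here is a rough sketch of how a job might be submitted with them, based on the configuration keys documented for Spark 1.4 on Mesos. The hostnames, ports, image name, and jar path below are placeholders, not values from this answer:

```shell
# Start the Mesos cluster dispatcher (enables cluster-mode submissions);
# mesos-master.example.com:5050 is a placeholder for your Mesos master.
./sbin/start-mesos-dispatcher.sh --master mesos://mesos-master.example.com:5050

# Submit a job in cluster mode, running executors inside a Docker image.
# The dispatcher address, image name, and jar path are all hypothetical.
./bin/spark-submit \
  --class com.example.MyApp \
  --master mesos://dispatcher.example.com:7077 \
  --deploy-mode cluster \
  --conf spark.mesos.executor.docker.image=example/spark:1.4.0 \
  http://repo.example.com/jars/my-app.jar
```

In cluster mode the driver itself is launched on the Mesos cluster by the dispatcher rather than on the submitting machine, which is what makes long-running and fire-and-forget submissions practical.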
Finally, I just want to mention that Mesosphere and Typesafe are collaborating to bring a certified distribution (https://databricks.com/spark/certification/certified-spark-distribution) of Spark to Mesos and DCOS, and we will be pouring resources not just into maintaining Spark on Mesos but into driving more features into the Mesos scheduler, and into Mesos itself, so that stateful services can leverage new APIs and features to make better scheduling decisions and optimizations.
I don't have a solidified roadmap to share yet, but we will be discussing this and hopefully can share it with the community soon.
In summary, Spark on Mesos is not dead or in maintenance mode, and you can look forward to a lot more changes from us and the community.