Well, at a high level, resource negotiation and distributed storage are orthogonal concepts. YARN, Mesos, Standalone, and Kubernetes are resource schedulers, which you configure via the master setting and a separate deploy mode (client/cluster). Under the covers of the HDFS API, you can also plug in alternative file system implementations such as HDFS itself, the local file system, or object stores (e.g., Swift/S3). At a bare minimum, you need some Hadoop jars on your classpath, which already allows you to run in local/standalone mode against the local file system implementation.
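To make the orthogonality concrete, here is a sketch of how the two choices are made independently at submission time (the script name, jar name, and paths are placeholders, not from your setup):

```shell
# Same application, different resource schedulers: only --master and
# --deploy-mode change; the file system is selected by the path scheme.

# Local mode against the local file system (only needs Hadoop jars on the classpath):
spark-submit --master local[*] SystemML.jar -f script.dml

# YARN in cluster deploy mode, reading/writing HDFS paths:
spark-submit --master yarn --deploy-mode cluster SystemML.jar \
  -f script.dml -nvargs input=hdfs:///user/me/input.csv
```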
Regarding the attached error: it looks like your Hadoop configuration uses the local FS as the default file system implementation, but you're trying to write to a path with an hdfs:// scheme. It also looks like you're running a stale version of SystemML (judging by the line numbers in your stack trace). Note that up until SystemML 0.14 (inclusive), we always used the default file system implementation, but in master, we create the correct file system according to the given file scheme (see SYSTEMML-1696). So please try to (1) use a recent build of SystemML master, or (2) reconfigure fs.defaultFS (in core-site.xml) to make hdfs the default file system implementation.
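For option (2), the default file system is controlled by the fs.defaultFS property in core-site.xml; a minimal sketch, where the NameNode host and port are placeholders for your cluster:

```xml
<!-- core-site.xml: make HDFS the default file system so that
     scheme-less paths resolve to hdfs:// instead of file:// -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```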