
Where is Apache Spark heading?

I watched (the COVID-19-era version of “attended”) the latest Spark Summit, and in one of the keynotes Reynold Xin from Databricks presented the following two images comparing Spark usage on their platform in 2013 vs. 2020:

While Databricks’ platform is, of course, not the whole Spark community, I would wager that they have enough users to represent the overall trend.

Incidentally, the same trends are also playing out outside the Spark world, with SQL re-establishing itself as the standard way to access data: on traditional RDBMSs, on so-called “NoSQL” databases (which are acquiring SQL dialects), and on NewSQL offerings. As for Python, it is definitely the go-to language for machine learning (we’ll see whether Julia manages to dethrone it).

Anyway, back to Spark: given these numbers it comes as no surprise that much of the effort coming out of Databricks is moving away from Scala and investing far more in Python:

  • Koalas – an effort similar to Dask, i.e. replicating the pandas API on top of a distributed engine (see the sketch after this list).
  • Project Zen – improving the PySpark developer experience
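
To make the first point concrete, here is a minimal sketch of what the Koalas API looks like (the columns and values are made up for illustration); the point is that existing pandas idioms carry over while execution happens on Spark:

import databricks.koalas as ks

# A Koalas DataFrame behaves like pandas but executes on Spark under the hood
# (the data here is invented purely for illustration)
kdf = ks.DataFrame({
    "ward": ["icu", "icu", "er", "er"],
    "age": [71, 54, 80, 67],
    "heart_rate": [88.0, 92.0, 77.0, 101.0],
})

# Familiar pandas-style operations, translated into Spark jobs
summary = kdf[kdf["age"] > 65].groupby("ward")["heart_rate"].mean()
print(summary)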

I have to say that, in my opinion, these two initiatives are crucial for Spark to remain relevant for Python development. I have an anecdotal example from a couple of years ago. I worked at a company where we developed predictive models that provide early alerting for medical deterioration in ICU patients (the respiratory-failure model was FDA approved during COVID-19). Since I had a lot of good experience with Spark (on the JVM), we started out with PySpark and integrated it with our pandas-based feature building (both medically oriented and data-science oriented) that looks at patient hospitalization time-series data. We got all of that to work with Spark, but the cryptic error messages and the problematic memory tuning, with two runtimes (JVM and Python) passing data back and forth, drove us to replace it with a new solution we built using Dask. That solution was both easier to work with and used fewer resources.
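For context, the usual way to run per-group pandas feature code on Spark these days is a grouped pandas UDF (applyInPandas, Spark 3.x), which is also where the JVM/Python boundary shows up, since data is serialized back and forth via Arrow. The sketch below is illustrative only; the column names and feature logic are invented and this is not the actual pipeline from that project:

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy per-patient time-series data (invented for illustration)
vitals = spark.createDataFrame(
    [(1, "2020-01-01 00:00", 88.0), (1, "2020-01-01 00:05", 91.0),
     (2, "2020-01-01 00:00", 76.0)],
    ["patient_id", "ts", "heart_rate"],
)

def build_features(pdf: pd.DataFrame) -> pd.DataFrame:
    # Plain pandas code, executed once per patient group on the workers;
    # the group's rows cross the JVM/Python boundary as a pandas DataFrame.
    pdf = pdf.sort_values("ts")
    return pd.DataFrame({
        "patient_id": [pdf["patient_id"].iloc[0]],
        "hr_mean": [pdf["heart_rate"].mean()],
        "hr_delta": [pdf["heart_rate"].iloc[-1] - pdf["heart_rate"].iloc[0]],
    })

features = vitals.groupBy("patient_id").applyInPandas(
    build_features, schema="patient_id long, hr_mean double, hr_delta double"
)
features.show()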

The other thing Spark needs in order to stay relevant is a good developer experience for deep-learning projects. DeepLearning4J can work with Spark, but I think we have already established that this is not where Spark is heading, nor where ML innovation is happening. The more interesting work on adding deep learning to Spark is “Project Hydrogen”.

“Project Hydrogen” is an ongoing effort since Spark 2.4. It still has some way to go, but it has already yielded interesting changes like Spark on GPUs and the barrier scheduler. The barrier scheduler is particularly interesting: it is an additional execution mode that is not map/reduce but is instead aligned with MPI-style gang scheduling, launching all tasks of a stage together so they can communicate with each other (there are good explanations here and here). This enables things like HorovodRunner and better integration with TensorFlow (better compared with Yahoo’s TensorFlowOnSpark), as can be seen in the example code below (posted on the spark-tensorflow-distributor repo); a lower-level sketch of the barrier API itself follows the example:

from spark_tensorflow_distributor import MirroredStrategyRunner

# Adapted from https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras
def train():
    import tensorflow as tf
    import uuid

    BUFFER_SIZE = 10000
    BATCH_SIZE = 64

    def make_datasets():
        (mnist_images, mnist_labels), _ = \
            tf.keras.datasets.mnist.load_data(path=str(uuid.uuid4()) + 'mnist.npz')

        dataset = tf.data.Dataset.from_tensor_slices((
            tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
            tf.cast(mnist_labels, tf.int64))
        )
        dataset = dataset.repeat().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
        return dataset

    def build_and_compile_cnn_model():
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation='relu'),
            tf.keras.layers.Dense(10, activation='softmax'),
        ])
        model.compile(
            loss=tf.keras.losses.sparse_categorical_crossentropy,
            optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
            metrics=['accuracy'],
        )
        return model

    train_datasets = make_datasets()
    options = tf.data.Options()
    options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
    train_datasets = train_datasets.with_options(options)
    multi_worker_model = build_and_compile_cnn_model()
    multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)

MirroredStrategyRunner(num_slots=8).run(train)
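
For reference, the barrier execution mode that underpins this is also exposed directly at the RDD level. Below is a minimal, illustrative sketch (the “training” body is just a placeholder, not code from any real project), showing that all tasks in the stage start together and can synchronize and discover each other:

from pyspark import BarrierTaskContext
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
rdd = spark.sparkContext.parallelize(range(8), numSlices=4)

def train_partition(iterator):
    ctx = BarrierTaskContext.get()
    # Addresses of all tasks in the stage, e.g. to set up an all-reduce ring
    workers = [info.address for info in ctx.getTaskInfos()]
    ctx.barrier()  # wait until every task in the stage reaches this point
    # ... placeholder for actual distributed training on this partition ...
    yield (ctx.partitionId(), len(workers))

result = rdd.barrier().mapPartitions(train_partition).collect()
print(result)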

Lastly, if Python is taking center stage, the whole JVM (Scala, in this case) is becoming more of a burden Spark has to drag along than the advantage it used to be (and still is in many use cases, but we’re talking trends here). Indeed, in a couple of other sessions at the Spark Summit, Databricks talked about the Photon execution engine (depicted below), which they are currently developing and which is already available on Azure Databricks. According to the “brochure” it brings a 20x performance improvement over Spark 2.4, achieved by reimplementing Spark’s engine in C++. I don’t know whether it will become open source, and I am not even sure if this is the beginning of the end or just the end of the beginning for Spark, but I am sure it is a big deal. One other sure thing: these changes in Spark are bad news for Scala (but that’s for another post :) )
