How Apache Spark works behind the scenes.

Apache Spark™ is a unified analytics engine for large-scale data processing. It was developed to overcome the drawbacks of the Hadoop MapReduce cluster computing paradigm.
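To make the "unified analytics engine" idea concrete, here is a minimal sketch of a classic word count in PySpark. It assumes a local Spark installation; the input path data/sample.txt is hypothetical. The point is that the whole job is expressed as one chained pipeline of transformations that Spark plans and runs in memory, instead of a sequence of separate MapReduce jobs writing intermediate results to disk.

from pyspark.sql import SparkSession

# Start a local Spark session; on a cluster this would point at a resource manager instead.
spark = SparkSession.builder.appName("WordCount").master("local[*]").getOrCreate()

# Read a text file into an RDD of lines (the file path here is just an example).
lines = spark.sparkContext.textFile("data/sample.txt")

# Word count as chained transformations: split lines into words,
# pair each word with 1, then sum the counts per word.
counts = (
    lines.flatMap(lambda line: line.split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
)

# Nothing runs until an action like take() triggers the pipeline.
for word, count in counts.take(10):
    print(word, count)

spark.stop()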

A quick overview of Hadoop’s architecture.

With the growth of data in our world and the birth of the buzzword BIG DATA, people started to ask themselves: how much can the hardware of a single computer hold in the face of the challenges of storing and processing these amounts…
