A survey of scheduling frameworks in big data systems
Online publication date: Fri, 03-Aug-2018
by Ji Liu; Esther Pacitti; Patrick Valduriez
International Journal of Cloud Computing (IJCC), Vol. 7, No. 2, 2018
Abstract: Cloud and big data technologies are now converging to enable organisations to outsource data to the cloud and extract value from it. Big data systems typically exploit computer clusters to gain scalability and achieve a good cost-performance ratio. However, scheduling a workload in a computer cluster remains a well-known open problem. Scheduling methods are typically implemented in a scheduling framework. In this paper, we survey scheduling methods and frameworks for big data systems, propose a taxonomy and analyse the features of scheduling frameworks. These frameworks were initially designed for the cloud (MapReduce) to process web data. We examine 16 popular scheduling frameworks. Our study shows that different frameworks are proposed for different big data systems, different scales of computer clusters and different objectives. We propose the main dimensions for workloads and metrics for benchmarks to evaluate these scheduling frameworks. Finally, we analyse their limitations and propose new research directions.