Only interested in the raw trace data? Skip to the end.
(EDIT 2020-02-03: added Blue Waters HPC network traces. H/T Saurabh Jha)
(EDIT 2019-07-04: added Mustang and Trinity HPC traces. H/T Apoorve Mohan, again)
(EDIT 2019-03-11: added Azure and Alibaba traces. H/T Apoorve Mohan)
(EDIT 2018-02-21: added TU Delft Bitbrains and CERIT-SC traces. Via ResearchGate)
(EDIT 2017-08-01: added traces from our IC2E 2015 paper “Using Trustworthy Simulation to Engineer Cloud Schedulers”)
(EDIT 2015-09-15: added Yahoo cluster traces. H/T Dachuan Huang)
Whenever there’s a new idea for a cloud scheduler, my first step is a quick draft of the algorithm in an IaaS cloud simulation framework – punching out every idea on a production system simply isn’t feasible. The simulator then needs to be fed with a platform configuration describing the system hardware and some type of utilization trace. The easiest type of workload trace to start with is one generated from synthetic distributions, but this has its limitations. The traces we work with at minimum contain (a) job start times, (b) some measure of job size, such as duration or the amount of data to process, and (c) a job type, such as the instance type or another form of constraint. When I speak of workload traces in this article, I am specifically referring to traces of batch jobs with fixed units of work. As an example, for one of our recent papers about SLA enforcement for IaaS spot instances this means in detail (a code sketch follows the list):
- request timestamp
- instance life-time
- instance core count
- any additional data …
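To make this concrete, here is a minimal sketch of how such a record could be represented and loaded in Python. The field names and the CSV layout are hypothetical stand-ins – every real trace comes with its own schema and needs a small adapter:

```python
import csv
from dataclasses import dataclass

@dataclass
class TraceRecord:
    """One batch job request, the minimal unit a simulator consumes."""
    request_ts: float   # (a) request timestamp, seconds since trace start
    lifetime: float     # (b) job size: instance life-time in seconds
    cores: int          # (c) job type / constraint: requested core count

def load_trace(path):
    """Read a trace from a CSV file with hypothetical columns
    request_ts,lifetime,cores and return records sorted by arrival."""
    records = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            records.append(TraceRecord(
                request_ts=float(row["request_ts"]),
                lifetime=float(row["lifetime"]),
                cores=int(row["cores"]),
            ))
    return sorted(records, key=lambda r: r.request_ts)
```

Each production trace discussed below then only needs a thin conversion into this shape.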
Generating realistic cloud workloads synthetically has spawned an entire branch of research. My focus in this article, however, is a practical description of the steps I personally take when developing and evaluating a new cloud scheduler.
I usually start with a synthetic trace whose job inter-arrival times and durations are drawn from an exponential distribution, with a uniform core size – in our example a core count of 1 – for all requests. If the new scheduler doesn’t produce satisfactory results with this, it’s back to the drawing board. The next stage uses a log-normal distribution for arrivals and durations, as it better models the long-tail properties of jobs encountered in real-world traces. A final extension to the synthetic traces is the introduction of a non-homogeneous mix of instance sizes – which has been the demise of quite a few ideas. While the synthetic approach is a useful baseline for testing, it does not re-create the kinds of challenges that production traces pose, such as change-points in user behavior, time-varying auto-correlation, and seasonality in the workload.
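As an illustration, the following sketch generates such a synthetic trace with numpy, covering all three stages – exponential arrivals, log-normal long-tail durations, and a heterogeneous instance mix. All parameter values are arbitrary placeholders rather than being fitted to any real system:

```python
import numpy as np

def synthetic_trace(n_jobs, seed=0,
                    mean_interarrival=60.0,             # seconds
                    lognorm_mu=5.0, lognorm_sigma=1.5,  # long-tail durations
                    core_sizes=(1, 2, 4, 8),            # instance mix
                    core_weights=(0.6, 0.2, 0.15, 0.05)):
    """Generate (request_ts, lifetime, cores) triples.

    Stage 1: exponential inter-arrival times.
    Stage 2: log-normal durations to capture the long tail.
    Stage 3: a weighted mix of core counts for size heterogeneity.
    All parameters are placeholders, not fitted to a real trace.
    """
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(mean_interarrival, n_jobs))
    durations = rng.lognormal(lognorm_mu, lognorm_sigma, n_jobs)
    cores = rng.choice(core_sizes, size=n_jobs, p=core_weights)
    return list(zip(arrivals, durations, cores))

# Example: a 10,000-job trace for a first smoke test of a scheduler.
trace = synthetic_trace(10_000)
```

Passing a single-element `core_sizes` reproduces the uniform first stage, so one generator serves all three steps.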
When a scheduler prototype enters serious consideration, I am a strong proponent of using traces recorded from production systems for evaluation. Unfortunately, this is where evaluation becomes difficult. Besides the technological complexity of the scheduler itself, a logistical problem comes up: the scarcity of publicly available production traces. This can be a big challenge for the aspiring cloud researcher. I’ve listed a number of notable exceptions below, but generally companies in the cloud space either do not record utilization traces over the long term or they heavily guard these traces and rarely allow the interested researcher a glimpse. Researchers who do get access often cannot name the source of the traces and cannot re-distribute the raw data their work is founded on. This in turn creates problems with the reproducibility of results and slows down the overall innovation process. The desire to protect a company’s competitive edge is understandable, and yet the availability of anonymized traces would spark innovation and drastically support academic research.
Fortunately, there are exceptions to this rule of scarcity. Here is a selection of public traces that we have found valuable in testing the real-world suitability of cloud schedulers:
Google cluster workload. Published by Google in an effort to support large-scale scheduling research, these traces from a Google data center cell have since attracted analysis efforts from a number of researchers, e.g. an analysis by Sharma et al. The trace covers a 1-month time frame and 12,000 machines, and includes anonymized job constraint tags.
Facebook Hadoop workload. A number of 1-hour segments from Facebook’s Hadoop traces, published as part of UC Berkeley AMP Lab’s SWIM project. Some segments contain arrival times and durations, whereas others provide the amounts of data processed.
OpenCloud Hadoop workload. Taken from a Hadoop cluster managed by CMU’s Parallel Data Lab, these traces provide very detailed insights into the workload of a cluster used for scientific applications over a 20-month period. Includes timestamps, slot counts, and more. K. Ren et al. investigate the traces in depth.
Eucalyptus IaaS cloud workload. Anonymized multi-month traces scraped from the log files of 6 different production systems running Eucalyptus private IaaS clouds. Published as part of a study by Wolski and Brevik. The traces contain start and stop times for instances, their size, and the node allocation as decided by the native scheduler.
EDIT: We added the traces from our IC2E 2015 paper on trustworthy cloud simulation as well.
Yahoo cluster traces. A number of data sets from Yahoo’s production systems. Most notably contains system utilization metrics from PNUTS/Sherpa and HDFS access logs for a larger Hadoop cluster. Additionally provides data sets with file access statistics and time series for testing anomaly-detection algorithms.
TU Delft Bitbrains traces. Two data sets about VM allocation in a distributed data center focused on financial applications. One trace covers VMs backed by SAN storage, the other a mixed population. Provides fine-grained CPU, memory, disk, and network utilization data over several weeks. Shen et al. analyze the trace. There are several other traces under “datasets”.
CERIT-SC grid workload. Traces from a cluster running cloud and grid applications on a shared infrastructure. Contains traces with resource footprints, instance groups, and allocated hosts. Klusácek and Parák analyze the trace.
Azure Public Dataset. Very large trace of anonymized cloud VMs in one of Azure’s availability zones. Contains CPU and memory utilization plus deployment batch sizes. Cortez et al. analyze the trace in their SOSP ’17 paper.
Alibaba Cluster Trace Program. Data center traces of VMs running batch workloads, including DAG information. Contains a 12-hour trace and a longer 8-day trace, with CPU and memory allocation. Lu et al. analyze the trace.
Mustang and Trinity HPC traces. HPC cluster traces from Los Alamos National Labs. The Mustang trace is a smaller, cloud-like trace with node counts and group IDs, whereas the Trinity trace comes from a large-scale supercomputer with a backfill scheduler. G. Amvrosiadis et al. analyze the traces and summarize the results.
Cloudera Hadoop workload. (no trace) Similar to the above, with data from the production systems of anonymous Cloudera customers and of Facebook, analyzed by researchers from UC Berkeley. Unfortunately, the raw data is not available.
Blue Waters HPC traces. (uses LDMS) Cray Gemini toroidal network traces from the NCSA’s Blue Waters cluster. Especially relevant for HPC networking studies. Jha et al. present the trace with their work on Monet.
Notably, most of these traces stem from Hadoop clusters and are limited to data-mining applications. More generic IaaS-type workloads can be found in the Eucalyptus traces and, potentially, the Google trace. I want to emphasize that these are very different types of batch workloads, which can offer interesting insights into the behavior of a cloud system under varying conditions. I hope this short reference provides a jumping-off point for both researchers and engineers to get their hands on a broader variety of production traces.
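As a closing sketch: once a trace – production or synthetic – is normalized into (request_ts, lifetime, cores) triples like the ones above, replaying it against a scheduler prototype boils down to a small discrete-event loop. The scheduler object with its `place()`/`release()` methods is a hypothetical stand-in for whatever prototype is under evaluation:

```python
import heapq
from itertools import count

def replay(trace, scheduler):
    """Drive a scheduler with (request_ts, lifetime, cores) records.

    Events are processed in timestamp order: arrivals ask the
    (hypothetical) scheduler for a placement, and matching departure
    events free capacity again.
    """
    seq = count()  # tie-breaker so the heap never compares payloads
    events = [(ts, next(seq), "arrive", (life, cores))
              for ts, life, cores in trace]
    heapq.heapify(events)
    while events:
        now, _, kind, payload = heapq.heappop(events)
        if kind == "arrive":
            life, cores = payload
            node = scheduler.place(cores, now)   # hypothetical API
            if node is not None:                 # placed: schedule departure
                heapq.heappush(
                    events, (now + life, next(seq), "depart", (node, cores)))
        else:
            node, cores = payload
            scheduler.release(node, cores, now)  # hypothetical API
```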