An open-source gymnasium for machine learning assisted computer architecture design – Google Research Blog


Computer architecture research has a long history of developing simulators and tools to evaluate and shape the design of computer systems. For example, the SimpleScalar simulator was introduced in the late 1990s and allowed researchers to explore various microarchitectural ideas. Computer architecture simulators and tools, such as gem5, DRAMSys, and many more, have played a significant role in advancing computer architecture research. Since then, these shared resources and infrastructure have benefited industry and academia, enabling researchers to systematically build on each other's work and leading to significant advances in the field.

However, computer architecture research is evolving, with industry and academia turning toward machine learning (ML) optimization to meet stringent domain-specific requirements, such as ML for computer architecture, ML for TinyML acceleration, DNN accelerator datapath optimization, memory controllers, power consumption, security, and privacy. Although prior work has demonstrated the benefits of ML in design optimization, the lack of strong, reproducible baselines hinders fair and objective comparison across different methods and poses several challenges to their deployment. To ensure steady progress, it is imperative to understand and tackle these challenges collectively.

To alleviate these challenges, in "ArchGym: An Open-Source Gymnasium for Machine Learning Assisted Architecture Design", accepted at ISCA 2023, we introduced ArchGym, which includes a variety of computer architecture simulators and ML algorithms. Enabled by ArchGym, our results indicate that with a sufficiently large number of samples, any of a diverse collection of ML algorithms is capable of finding the optimal set of architecture design parameters for each target problem; no one solution is necessarily better than another. These results further indicate that selecting the optimal hyperparameters for a given ML algorithm is essential for finding the optimal architecture design, but choosing them is non-trivial. We release the code and dataset across multiple computer architecture simulations and ML algorithms.

Challenges in ML-assisted architecture research

ML-assisted architecture research poses several challenges, including:

  1. For a specific ML-assisted computer architecture problem (e.g., finding an optimal solution for a DRAM controller), there is no systematic way to identify optimal ML algorithms or hyperparameters (e.g., learning rate, warm-up steps, etc.). There is a wide range of ML and heuristic methods, from random walk to reinforcement learning (RL), that can be employed for design space exploration (DSE). While these methods have shown noticeable performance improvement over their choice of baselines, it is not evident whether the improvements are due to the choice of optimization algorithms or the hyperparameters.
    Thus, to ensure reproducibility and facilitate widespread adoption of ML-aided architecture DSE, it is essential to outline a systematic benchmarking methodology.
  2. While computer architecture simulators have been the backbone of architectural innovation, there is an emerging need to address the trade-offs between accuracy, speed, and cost in architecture exploration. The accuracy and speed of performance estimation vary widely from one simulator to another, depending on the underlying modeling details (e.g., cycle-accurate vs. ML-based proxy models). While analytical or ML-based proxy models are nimble by virtue of discarding low-level details, they often suffer from high prediction error. Also, due to commercial licensing, there can be strict limits on the number of runs collected from a simulator. Overall, these constraints exhibit distinct performance vs. sample-efficiency trade-offs, affecting the choice of optimization algorithm for architecture exploration.
    It is challenging to delineate how to systematically compare the effectiveness of various ML algorithms under these constraints.
  3. Finally, the landscape of ML algorithms is rapidly evolving, and some ML algorithms need data to be useful. Moreover, rendering the outcome of DSE into meaningful artifacts such as datasets is critical for drawing insights about the design space.
    In this rapidly evolving ecosystem, it is consequential to determine how to amortize the overhead of search algorithms for architecture exploration. It is not apparent, nor has it been systematically studied, how to leverage exploration data while remaining agnostic to the underlying search algorithm.

ArchGym design

ArchGym addresses these challenges by providing a unified framework for fairly evaluating different ML-based search algorithms. It comprises two main components: 1) the ArchGym environment and 2) the ArchGym agent. The environment is an encapsulation of the architecture cost model (which includes latency, throughput, area, energy, etc., to determine the computational cost of running the workload given a set of architectural parameters), paired with the target workload(s). The agent is an encapsulation of the ML algorithm used for the search and consists of hyperparameters and a guiding policy. The hyperparameters are intrinsic to the algorithm being optimized and can significantly influence performance. The policy, on the other hand, determines how the agent iteratively selects parameters to optimize the target objective.

Notably, ArchGym also includes a standardized interface that connects these two components, while also saving the exploration data as the ArchGym Dataset. At its core, the interface comprises three main signals: hardware state, hardware parameters, and metrics. These signals are the bare minimum needed to establish a meaningful communication channel between the environment and the agent. Using these signals, the agent observes the state of the hardware and suggests a set of hardware parameters to iteratively optimize a (user-defined) reward. The reward is a function of hardware performance metrics, such as performance, energy consumption, etc.
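To make the three-signal loop concrete, here is a minimal, hypothetical sketch in Python. The class names, the toy DRAM cost model, and the parameter space are all illustrative assumptions for this post, not ArchGym's actual API; the point is only the shape of the environment–agent exchange (state in, parameters out, reward back).

```python
import random

class ToyDramEnv:
    """Environment: wraps an architecture cost model for a target workload.
    The latency formula below is a made-up stand-in for a real simulator."""
    PARAM_SPACE = {"queue_depth": [8, 16, 32, 64], "page_policy": [0, 1]}

    def __init__(self):
        self.state = {"queue_depth": 8, "page_policy": 0}

    def step(self, params):
        self.state = dict(params)
        # Toy cost model: deeper queues and page_policy 1 reduce latency.
        latency = 100.0 / params["queue_depth"]
        latency += 5.0 if params["page_policy"] == 0 else 0.0
        metrics = {"latency": latency}
        reward = 1.0 / metrics["latency"]  # user-defined reward function
        return self.state, metrics, reward

class RandomWalkAgent:
    """Agent: a guiding policy (here, random walk) plus its hyperparameters
    (here, just the RNG seed)."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def suggest(self, state, param_space):
        # Random-walk policy ignores the state and samples uniformly.
        return {k: self.rng.choice(v) for k, v in param_space.items()}

env, agent = ToyDramEnv(), RandomWalkAgent()
best_reward, best_params = float("-inf"), None
state = env.state
for _ in range(50):
    params = agent.suggest(state, ToyDramEnv.PARAM_SPACE)
    state, metrics, reward = env.step(params)
    if reward > best_reward:
        best_reward, best_params = reward, params
print(best_params, round(best_reward, 3))
```

Swapping the agent class for a genetic algorithm or Bayesian optimizer would leave the environment and the loop untouched, which is the benefit of standardizing on the three signals.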

ArchGym comprises two main components: the ArchGym environment and the ArchGym agent. The ArchGym environment encapsulates the cost model, and the agent is an abstraction of a policy and hyperparameters. With a standardized interface that connects these two components, ArchGym provides a unified framework for fairly evaluating different ML-based search algorithms while also saving the exploration data as the ArchGym Dataset.

ML algorithms can be equally favorable to meet user-defined target specifications

Using ArchGym, we empirically demonstrate that across different optimization objectives and DSE problems, at least one set of hyperparameters exists that results in the same hardware performance as that achieved by other ML algorithms. A poorly chosen (randomly selected) hyperparameter for an ML algorithm or its baseline can lead to the misleading conclusion that a particular family of ML algorithms is better than another. We show that with sufficient hyperparameter tuning, different search algorithms, even random walk (RW), are able to identify the best possible reward. Note, however, that finding the right set of hyperparameters may require exhaustive search, or even luck, to make an algorithm competitive.

With a sufficient number of samples, there exists at least one set of hyperparameters that results in the same performance across a range of search algorithms. Here the dashed line represents the maximum normalized reward. Cloud-1, cloud-2, stream, and random indicate four different memory traces for DRAMSys (a DRAM subsystem design space exploration framework).
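The sensitivity to hyperparameters described above can be illustrated with a toy example (this is not ArchGym code; the objective function and hill-climbing search are invented for the illustration). The same search algorithm looks strong or weak depending purely on its step-size hyperparameter:

```python
import random

def objective(x):
    # Toy 1-D reward surface with a single optimum at x = 3.
    return -(x - 3.0) ** 2

def hill_climb(step_size, iters=200, seed=0):
    """Greedy hill climber: accept a random perturbation only if it
    improves the objective. `step_size` is its one hyperparameter."""
    rng = random.Random(seed)
    x, best = 0.0, objective(0.0)
    for _ in range(iters):
        cand = x + rng.uniform(-step_size, step_size)
        if objective(cand) > best:
            x, best = cand, objective(cand)
    return best

# Sweeping the hyperparameter: a poor choice (tiny steps) makes the
# identical algorithm look far worse than it actually is.
for step in (0.001, 0.1, 1.0):
    print(f"step_size={step}: best reward = {hill_climb(step):.4f}")
```

With step size 0.001 the search barely moves from its starting point in 200 iterations, while larger steps reach near-optimal reward, mirroring how an untuned baseline can unfairly lose a comparison.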

Dataset construction and high-fidelity proxy model training

Creating a unified interface using ArchGym also enables the creation of datasets that can be used to design better data-driven, ML-based proxy architecture cost models that improve the speed of architecture simulation. To evaluate the benefit of datasets in building an ML model to approximate architecture cost, we leverage ArchGym's ability to log the data from each run of DRAMSys to create four dataset variants, each with a different number of data points. For each variant, we create two categories: (a) Diverse Dataset, which represents the data collected from different agents (ACO, GA, RW, and BO), and (b) ACO only, which represents the data collected solely from the ACO agent; both are released along with ArchGym. We train a proxy model on each dataset using random forest regression, with the objective of predicting the latency of designs for a DRAM simulator. Our results show that:

  1. As we increase the dataset size, the average normalized root mean squared error (RMSE) slightly decreases.
  2. However, as we introduce diversity into the dataset (e.g., collecting data from different agents), we observe 9× to 42× lower RMSE across the different dataset sizes.
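As a sketch of the proxy-model training step, the snippet below fits a random forest regressor on synthetic stand-in data (the real datasets come from DRAMSys logs; scikit-learn, the toy latency function, and the train/test split are assumptions made for this illustration) and reports a normalized RMSE:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Each row: four architectural parameters in [0, 1]; target: a toy
# latency function plus noise, standing in for logged simulator runs.
X = rng.uniform(0.0, 1.0, size=(2000, 4))
y = 100.0 / (1.0 + 5.0 * X[:, 0]) + 10.0 * X[:, 1] \
    + rng.normal(0.0, 0.5, size=2000)

X_train, X_test = X[:1600], X[1600:]
y_train, y_test = y[:1600], y[1600:]

# Random forest regression as the proxy cost model.
proxy = RandomForestRegressor(n_estimators=100, random_state=0)
proxy.fit(X_train, y_train)

pred = proxy.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
nrmse = rmse / (y_test.max() - y_test.min())  # normalize by target range
print(f"normalized RMSE: {nrmse:.3f}")
```

Once trained, such a proxy can answer latency queries in microseconds instead of invoking the full simulator, which is what makes the logged exploration data valuable beyond the original search.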

Diverse dataset collection across different agents using the ArchGym interface.
The impact of a diverse dataset and dataset size on the normalized RMSE.

The need for a community-driven ecosystem for ML-assisted architecture research

While ArchGym is an initial effort toward creating an open-source ecosystem that (1) connects a broad range of search algorithms to computer architecture simulators in a unified and easy-to-extend manner, (2) facilitates research in ML-assisted computer architecture, and (3) forms the scaffold for developing reproducible baselines, there are a number of open challenges that need community-wide support. Below we outline some of the open challenges in ML-assisted architecture design. Addressing these challenges requires a well-coordinated effort and a community-driven ecosystem.

Key challenges in ML-assisted architecture design.

We call this ecosystem Architecture 2.0. We outline the key challenges and a vision for building an inclusive ecosystem of interdisciplinary researchers to tackle the long-standing open problems in applying ML to computer architecture research. If you are interested in helping shape this ecosystem, please fill out the interest survey.


ArchGym is an open-source gymnasium for ML architecture DSE and provides a standardized interface that can be readily extended to suit different use cases. Furthermore, ArchGym enables fair and reproducible comparison between different ML algorithms and helps establish stronger baselines for computer architecture research problems.

We invite the computer architecture community as well as the ML community to actively participate in the development of ArchGym. We believe that the creation of a gymnasium-type environment for computer architecture research would be a significant step forward in the field, providing a platform for researchers to use ML to accelerate research and arrive at new and innovative designs.


This blog post is based on joint work with several co-authors at Google and Harvard University. We would like to acknowledge and highlight Srivatsan Krishnan (Harvard), who contributed several ideas to this project in collaboration with Shvetank Prakash (Harvard), Jason Jabbour (Harvard), Ikechukwu Uchendu (Harvard), Susobhan Ghosh (Harvard), Behzad Boroujerdian (Harvard), Daniel Richins (Harvard), Devashree Tripathy (Harvard), and Thierry Thambe (Harvard). In addition, we would like to thank James Laudon, Douglas Eck, Cliff Young, and Aleksandra Faust for their support, feedback, and motivation for this work. We would also like to thank John Guilyard for the animated figure used in this post. Amir Yazdanbakhsh is now a Research Scientist at Google DeepMind, and Vijay Janapa Reddi is an Associate Professor at Harvard.
