AI HW Acceleration #53
Conversation
Missing BibTeX references in the emerging tech section
hw_acceleration.qmd
Outdated
In response, new manufacturing techniques like wafer-scale fabrication and advanced packaging now allow much higher levels of integration. The goal is to create unified, specialized AI compute complexes tailored for deep learning and other AI algorithms. Tighter integration is key to delivering the performance and efficiency needed for the next generation of AI.
#### Wafter-scale AI
Typo: Wafer-scale AI
Thank you, fixed.
Thanks!
hw_acceleration.qmd
Outdated
- **Co-simulation:** Unified platforms like the SCALE-Sim [@samajdar2018scale] integrate hardware and software simulation into a single tool. This enables what-if analysis to quantify the system-level impacts of cross-layer optimizations early in the design cycle.
For example, an FPGA-based AI accelerator design could be simulated using Verilog hardware description language and synthesized into a Gem5 model. The accelerator could have ML workloads simulated using TVM compiled onto it within the Gem5 environment for unified modeling.
This example is a bit difficult to follow. It would be nice to have a step-by-step explanation of why an FPGA-based AI accelerator should be simulated using Verilog (what exactly about Verilog makes it well-suited for this type of accelerator?), as well as why it should be synthesized into a Gem5 model (what specifically about Gem5 makes it optimal for this task?).
Is this helpful, @AditiR-42?
For example, an FPGA-based AI accelerator design could be described in the Verilog hardware description language and synthesized into a Gem5 model. Verilog is well-suited for describing the digital logic and interconnects that make up the accelerator architecture: it lets the designer specify the datapaths, control logic, on-chip memories, and other components that will be implemented in the FPGA fabric. Once the Verilog design is complete, it can be synthesized into a model that simulates the behavior of the hardware within a simulator such as Gem5. Gem5 is useful for this task because it can model full systems, including processors, caches, buses, and custom accelerators, and it supports interfacing Verilog models of hardware to the simulation, enabling unified system modeling.
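For concreteness, here is a minimal sketch of what the Gem5 side of that flow might look like, using gem5's standard Python configuration API in the style of the "Learning gem5" tutorial. Exact port and class names vary across gem5 versions, and the accelerator object (`FpgaAccel`) is purely hypothetical, standing in for a custom SimObject that would wrap the synthesized Verilog model:

```python
# Sketch only: a bare-bones gem5 SE-mode system that a custom accelerator model
# could attach to. FpgaAccel is a hypothetical SimObject (not a real gem5 class)
# standing in for the synthesized Verilog design, e.g. wrapped via Verilator.
import m5
from m5.objects import *

system = System()
system.clk_domain = SrcClockDomain(clock="1GHz", voltage_domain=VoltageDomain())
system.mem_mode = "timing"
system.mem_ranges = [AddrRange("512MB")]

system.cpu = TimingSimpleCPU()   # host core that dispatches work to the accelerator
system.membus = SystemXBar()     # shared system interconnect

system.cpu.icache_port = system.membus.cpu_side_ports
system.cpu.dcache_port = system.membus.cpu_side_ports
system.cpu.createInterruptController()
# Interrupt ports are required for X86 builds of gem5.
system.cpu.interrupts[0].pio = system.membus.mem_side_ports
system.cpu.interrupts[0].int_requestor = system.membus.cpu_side_ports
system.cpu.interrupts[0].int_responder = system.membus.mem_side_ports

# Hypothetical accelerator hook-up (commented out; would be a custom SimObject):
# system.accel = FpgaAccel(verilog_model="accel.v")
# system.accel.mem_port = system.membus.cpu_side_ports

system.mem_ctrl = MemCtrl(dram=DDR3_1600_8x8(range=system.mem_ranges[0]))
system.mem_ctrl.port = system.membus.mem_side_ports
system.system_port = system.membus.cpu_side_ports

# Placeholder workload so the simulated system has something to run.
binary = "tests/test-progs/hello/bin/x86/linux/hello"
system.workload = SEWorkload.init_compatible(binary)
system.cpu.workload = Process(cmd=[binary])
system.cpu.createThreads()

root = Root(full_system=False, system=system)
m5.instantiate()
exit_event = m5.simulate()
print(f"Exited @ tick {m5.curTick()}: {exit_event.getCause()}")
```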
The synthesized FPGA accelerator model could then have ML workloads simulated using TVM compiled onto it within the Gem5 environment for unified modeling. TVM allows optimized compilation of ML models onto heterogeneous hardware like FPGAs. Running TVM-compiled workloads on the accelerator within the Gem5 simulation provides an integrated way to validate and refine the hardware design, software stack, and system integration before ever needing to physically realize the accelerator on a real FPGA.
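On the software side, a hedged sketch of the TVM compile-and-run step for a small stand-in workload. It targets plain "llvm" (the host CPU) just to show the flow; pointing it at the simulated FPGA accelerator would require a custom TVM target or a BYOC backend, which is only alluded to in the comments:

```python
# Sketch only: compile a tiny Relay model with TVM and run it on the host CPU.
# Retargeting to a custom FPGA/accelerator backend (e.g. via BYOC) is where the
# Gem5 co-simulation would plug in; that part is hypothetical and omitted here.
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Single conv + ReLU layer as a stand-in ML workload.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1), channels=16)
mod = tvm.IRModule.from_expr(relay.Function([data, weight], relay.nn.relu(conv)))

target = tvm.target.Target("llvm")  # host CPU; a real flow would swap in the accelerator target
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)

dev = tvm.cpu(0)
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
rt.set_input("weight", np.random.rand(16, 3, 3, 3).astype("float32"))
rt.run()
print(rt.get_output(0).numpy().shape)  # expect (1, 16, 224, 224)
```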
Hope that works, updated it in the text.
Looks good. Thanks for addressing the typo!
added more references to the sections that I was responsible for (challenges and solutions)
Thank you folks!
Really great job on this chapter!
Before submitting your Pull Request, please ensure that you have carefully reviewed and completed all items on this checklist.
- Content
- References & Citations
- Quarto Website Rendering
- Grammar & Style
- Collaboration
- Miscellaneous
- Final Steps