
Commit

Updated purpose
profvjreddi committed Dec 24, 2024
1 parent 33d70be commit 7998b68
Showing 18 changed files with 74 additions and 35 deletions.
10 changes: 8 additions & 2 deletions contents/core/ai_for_good/ai_for_good.qmd
Original file line number Diff line number Diff line change
@@ -10,9 +10,11 @@ Resources: [Slides](#sec-ai-for-good-resource), [Videos](#sec-ai-for-good-resour

![_DALL·E 3 Prompt: Illustration of planet Earth wrapped in shimmering neural networks, with diverse humans and AI robots working together on various projects like planting trees, cleaning the oceans, and developing sustainable energy solutions. The positive and hopeful atmosphere represents a united effort to create a better future._](images/png/cover_ai_good.png)

The ultimate goal of ML systems, at any scale, is to align AI progress with human values, goals, and ethics, so that the technology reflects human principles and aspirations. Initiatives under "AI for Good" promote the development of AI to tackle the [UN Sustainable Development Goals](https://www.undp.org/sustainable-development-goals) (SDGs) with embedded AI technologies, to expand access to AI education, and to pursue related aims. While it is now clear that AI will be an instrumental part of progress towards the SDGs, its adoption and impact are limited by the immense power consumption, strong connectivity requirements, and high costs of cloud-based deployments. TinyML can circumvent many of these issues by allowing ML models to run on low-cost, low-power microcontrollers.
## Purpose {.unnumbered}

> The "AI for Good" movement is critical in cultivating a future where an AI-empowered society is more just, sustainable, and prosperous for all humanity.

_How can we harness machine learning systems to address critical societal challenges, and what principles guide the development of solutions that create lasting positive impact?_

The application of AI systems to societal challenges represents the culmination of technical capability and social responsibility. Impact-driven development reveals essential patterns for translating technological potential into meaningful change, highlighting critical relationships between system design and societal outcomes. The implementation of solutions for social good showcases pathways for addressing complex challenges while maintaining technical rigor and operational effectiveness. Understanding these impact dynamics provides insights into creating transformative systems, establishing core principles for designing AI solutions that advance human welfare and promote positive societal transformation.

::: {.callout-tip}

@@ -32,6 +34,10 @@ By aligning AI progress with human values, goals, and ethics, the ultimate goal

## Overview

The ultimate goal of ML systems, at any scale, is to align AI progress with human values, goals, and ethics, so that the technology reflects human principles and aspirations. Initiatives under "AI for Good" promote the development of AI to tackle the [UN Sustainable Development Goals](https://www.undp.org/sustainable-development-goals) (SDGs) with embedded AI technologies, to expand access to AI education, and to pursue related aims. While it is now clear that AI will be an instrumental part of progress towards the SDGs, its adoption and impact are limited by the immense power consumption, strong connectivity requirements, and high costs of cloud-based deployments. TinyML can circumvent many of these issues by allowing ML models to run on low-cost, low-power microcontrollers.

> The "AI for Good" movement is critical in cultivating a future where an AI-empowered society is more just, sustainable, and prosperous for all humanity.

To give ourselves a framework around which to think about AI for social good, we will be following the UN Sustainable Development Goals (SDGs). The UN SDGs are a collection of 17 global goals, shown in @fig-sdg, adopted by the United Nations in 2015 as part of the 2030 Agenda for Sustainable Development. The SDGs address global challenges related to poverty, inequality, climate change, environmental degradation, prosperity, and peace and justice.

![United Nations Sustainable Development Goals (SDG). Source: [United Nations](https://sdgs.un.org/goals).](https://www.un.org/sustainabledevelopment/wp-content/uploads/2015/12/english_SDG_17goals_poster_all_languages_with_UN_emblem_1.png){#fig-sdg}
6 changes: 4 additions & 2 deletions contents/core/benchmarking/benchmarking.qmd
@@ -10,9 +10,11 @@ Resources: [Slides](#sec-benchmarking-ai-resource), [Videos](#sec-benchmarking-a

![_DALL·E 3 Prompt: Photo of a podium set against a tech-themed backdrop. On each tier of the podium, there are AI chips with intricate designs. The top chip has a gold medal hanging from it, the second one has a silver medal, and the third has a bronze medal. Banners with 'AI Olympics' are displayed prominently in the background._](images/png/cover_ai_benchmarking.png)

Benchmarking is critical to developing and deploying machine learning systems, especially TinyML applications. Benchmarks allow developers to measure and compare the performance of different model architectures, training procedures, and deployment strategies. This provides key insights into which approaches work best for the problem at hand and the constraints of the deployment environment.
## Purpose {.unnumbered}

This chapter will provide an overview of popular ML benchmarks, best practices for benchmarking, and how to use benchmarks to improve model development and system performance. It provides developers with the right tools and knowledge to effectively benchmark and optimize their systems, especially for TinyML systems.
_How can quantitative evaluation reshape the development of machine learning systems, and what metrics reveal true system capabilities?_

The measurement and analysis of AI system performance represent a critical element in bridging theoretical capabilities with practical outcomes. Systematic evaluation approaches reveal fundamental relationships between model behavior, resource utilization, and operational reliability. These measurements draw out the essential trade-offs across accuracy, efficiency, and scalability, providing insights that guide architectural decisions throughout the development lifecycle. These evaluation frameworks establish core principles for assessing and validating system design choices and enable the creation of robust solutions that meet increasingly complex performance requirements across diverse deployment scenarios.
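
As a concrete illustration of the measurement discipline described above, the sketch below times a stand-in inference function and reports mean latency, tail latency, and throughput. The function, workload, and run counts are illustrative placeholders, not a prescribed benchmark harness.

```python
import time
import statistics

def benchmark(fn, inputs, warmup=10, runs=100):
    """Measure per-call latency of an inference function.

    `fn` and `inputs` stand in for any model and workload; warmup runs
    amortize one-time costs (caching, JIT) before timing begins.
    """
    for _ in range(warmup):
        fn(inputs)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(inputs)
        latencies.append(time.perf_counter() - start)
    mean = statistics.mean(latencies)
    return {
        "mean_ms": mean * 1e3,
        "p99_ms": sorted(latencies)[int(0.99 * len(latencies)) - 1] * 1e3,
        "throughput_per_s": 1.0 / mean,
    }

# Example: benchmark a stand-in "model" (a simple dot product).
weights = [0.1] * 1024
stats = benchmark(lambda x: sum(w * v for w, v in zip(weights, x)),
                  [1.0] * 1024)
```

Reporting tail latency (p99) alongside the mean matters because deployment targets often constrain worst-case, not average, behavior.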

::: {.callout-tip}

6 changes: 4 additions & 2 deletions contents/core/data_engineering/data_engineering.qmd
@@ -10,11 +10,13 @@ Resources: [Slides](#sec-data-engineering-resource), [Videos](#sec-data-engineer

![_DALL·E 3 Prompt: Create a rectangular illustration visualizing the concept of data engineering. Include elements such as raw data sources, data processing pipelines, storage systems, and refined datasets. Show how raw data is transformed through cleaning, processing, and storage to become valuable information that can be analyzed and used for decision-making._](images/png/cover_data_engineering.png)

## Purpose {.unnumbered}

_How does sourcing, preparation, and the quality of data influence the design, performance, and scalability of machine learning systems?_
_Why does data infrastructure form the foundation of AI system success, and how does its design influence system capabilities?_

Data is the lifeblood of AI systems. This chapter explores the engineering aspects of data sourcing, processing, and quality assurance, highlighting their impact on the scalability, efficiency, and adaptability of machine learning workflows. Building on earlier discussions about neural network architectures, this chapter addresses the design challenges of creating reliable data pipelines that integrate seamlessly with machine learning systems. The insights gained here will equip readers with the skills to engineer data solutions that support optimization and deployment, paving the way for advanced applications in complex environments.
At the heart of every AI system lies its data infrastructure, determining both its potential and limitations. While architectural patterns and workflows define the system's structure, data quality and accessibility ultimately govern its effectiveness. The engineering of data systems requires careful consideration of acquisition, processing, and delivery mechanisms, establishing the bedrock upon which sophisticated AI capabilities can be built. This perspective reveals how data engineering decisions ripple through every aspect of system performance and reliability, shaping the possibilities for model development, training efficiency, and deployment success.
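
To make the quality-assurance stage concrete, here is a minimal sketch of a validation gate that separates clean records from rejected ones before they reach training. The schema format, field names, and records are hypothetical; production pipelines would typically use a dedicated validation library.

```python
def field_ok(rec, field, ftype, required):
    """Check one field against the schema; missing optional fields pass."""
    if field not in rec:
        return not required
    return isinstance(rec[field], ftype)

def validate_records(records, schema):
    """Split raw records into clean and rejected sets.

    `schema` maps field name -> (type, required); a stand-in for the
    validation stage of a data pipeline.
    """
    clean, rejected = [], []
    for rec in records:
        ok = all(field_ok(rec, f, t, req) for f, (t, req) in schema.items())
        (clean if ok else rejected).append(rec)
    return clean, rejected

schema = {"id": (int, True), "label": (str, True), "note": (str, False)}
raw = [
    {"id": 1, "label": "cat"},
    {"id": "2", "label": "dog"},                   # id has wrong type: rejected
    {"id": 3, "label": "bird", "note": "blurry"},
]
clean, rejected = validate_records(raw, schema)
```

Routing rejects to a separate set, rather than dropping them silently, preserves the signal needed to debug upstream acquisition problems.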

::: {.callout-tip}

7 changes: 2 additions & 5 deletions contents/core/dl_architectures/dl_architectures.qmd
@@ -12,12 +12,9 @@ Resources: [Slides](#sec-deep-learning-primer-resource), [Videos](#sec-deep-lear

## Purpose {.unnumbered}

_What are the modern deep learning architectures, what fundamental computational patterns underlie them, and how can these patterns be leveraged to build adaptable and efficient AI systems?_

This chapter focuses on real-world deep neural network (DNN) architectures, including convolutional networks (CNNs), recurrent networks (RNNs), and transformers, with a unique focus on the foundational computational patterns they share. Despite rapid advancements in AI, the core building blocks---or "LEGO pieces"---have remained relatively stable. Understanding these components is essential for constructing AI systems that support diverse model architectures while enabling efficient and scalable deployments. By bridging theoretical concepts with practical system design, this chapter provides a solid foundation for advancing machine learning systems and engineering.

## Purpose {.unnumbered}

_What recurring patterns emerge across modern deep learning architectures, and how do these patterns enable systematic approaches to AI system design?_

Deep learning architectures represent a convergence of computational patterns that form the building blocks of modern AI systems. Understanding these foundational patterns—from convolutional structures to attention mechanisms—reveals how complex models arise from simple, repeatable components. The examination of these architectural elements provides insights into the systematic construction of flexible, efficient AI systems, establishing core principles that influence every aspect of system design and deployment. These structural insights illuminate the path toward creating scalable, adaptable solutions across diverse application domains.
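
The "simple, repeatable components" idea can be sketched in a few lines: a dense layer and a 1-D convolution are both built from the same multiply-accumulate primitive, differing only in how that primitive is wired. This is an illustrative reduction, not a full implementation of either layer.

```python
def dot(w, x):
    # The multiply-accumulate primitive shared by nearly every layer type.
    return sum(wi * xi for wi, xi in zip(w, x))

def dense(weights, x):
    # Fully connected layer: one dot product per output unit.
    return [dot(w_row, x) for w_row in weights]

def conv1d(kernel, x):
    # 1-D convolution: the same dot product, slid across the input.
    k = len(kernel)
    return [dot(kernel, x[i:i + k]) for i in range(len(x) - k + 1)]

x = [1.0, 2.0, 3.0, 4.0]
print(dense([[1, 0, 0, 0], [0, 0, 0, 1]], x))  # -> [1.0, 4.0]
print(conv1d([1.0, -1.0], x))                  # -> [-1.0, -1.0, -1.0]
```

Attention follows the same pattern at a larger granularity: dot products between queries and keys, then weighted sums over values.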

::: {.callout-tip}

4 changes: 2 additions & 2 deletions contents/core/dl_primer/dl_primer.qmd
@@ -12,9 +12,9 @@ Resources: [Slides](#sec-deep-learning-primer-resource), [Videos](#sec-deep-lear

## Purpose {.unnumbered}

_How can we understand neural networks as digital representations of biological concepts, and what implications do these artificial neural networks have on system design?_
_What inspiration from nature drives the development of machine learning systems, and how do biological neural processes inform their fundamental design?_

This chapter is an essential primer on artificial neural networks. It focuses on translating the biological inspirations underlying these models into their computational counterparts. The goal is to bridge the gap between the intuitive principles of biological neural systems and their digital implementations. Newcomers gain foundational insights into the challenges and considerations at the intersection of biology and computation, while seasoned practitioners deepen their understanding of the systemic implications of these digital constructs. In doing so, this chapter supports the exploration of advanced deep learning architectures and methodologies in subsequent sections. It lays the groundwork for the terminology and concepts that will be central throughout the book.
The neural systems of nature offer profound insights into information processing and adaptation, inspiring the core principles of modern machine learning. Translating biological mechanisms into computational frameworks illuminates fundamental patterns that shape artificial neural networks. These patterns reveal essential relationships between biological principles and their digital counterparts, establishing building blocks for understanding more complex architectures. Analyzing these mappings from natural to artificial provides critical insights into system design, laying the foundation for exploring advanced neural architectures and their practical implementations.
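
A minimal sketch of this biological-to-digital mapping: dendritic integration becomes a weighted sum, and the firing response becomes a nonlinear activation. The sigmoid and the specific weights below are illustrative choices, not the only possible ones.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs (integration)
    passed through a nonlinearity (firing response)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: a smooth firing-rate stand-in

out = neuron([0.5, 0.8], [1.2, -0.4], 0.1)  # illustrative inputs and weights
```

Stacking many such units into layers, and layers into networks, is the construction that the following chapters build on.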

::: {.callout-tip}

4 changes: 2 additions & 2 deletions contents/core/efficient_ai/efficient_ai.qmd
@@ -12,9 +12,9 @@ Resources: [Slides](#sec-efficient-ai-resource), [Videos](#sec-efficient-ai-reso

## Purpose {.unnumbered}

_Why is efficiency important in evaluating and deploying AI systems, and what general strategies do machine learning system engineers adopt to enhance performance, energy efficiency and resource utilization?_
_What fundamental resource constraints challenge machine learning systems, and how do they shape architectural decisions?_

This chapter provides an overview of the importance of efficiency in AI, focusing on model architectures, compression techniques, and inference hardware. It builds on previous discussions of AI workflows and data handling, highlighting how efficient numerics and formats enhance computational performance. This foundation helps readers appreciate the strategies essential for optimizing AI systems as they progress through the book.
The interplay between computational demands and system resources drives fundamental design choices in machine learning systems. Each efficiency decision---from arithmetic precision to memory access patterns---cascades through multiple layers of the system stack, revealing essential relationships between resource utilization and model capability. These relationships guide architectural choices that balance system performance against resource constraints, establishing core principles for designing AI solutions that achieve maximum impact with minimum resources.
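
One such precision decision can be sketched directly: symmetric linear quantization maps float weights onto int8 values, trading a small, bounded reconstruction error for a 4x reduction in storage and cheaper arithmetic. The weight values below are illustrative.

```python
def quantize_int8(values):
    """Symmetric linear quantization: map floats to int8 with one scale."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.12, -0.5, 0.33, 0.07]  # illustrative float weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each weight now needs 1 byte instead of 4, at a bounded accuracy cost.
max_err = max(abs(a - w) for a, w in zip(approx, weights))
```

The error bound (at most half a quantization step per weight) is exactly the kind of resource-versus-capability trade-off the chapter examines.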

::: {.callout-tip}

8 changes: 5 additions & 3 deletions contents/core/frameworks/frameworks.qmd
@@ -10,11 +10,13 @@ Resources: [Slides](#sec-ai-frameworks-resource), [Videos](#sec-ai-frameworks-re

![_DALL·E 3 Prompt: Illustration in a rectangular format, designed for a professional textbook, where the content spans the entire width. The vibrant chart represents training and inference frameworks for ML. Icons for TensorFlow, Keras, PyTorch, ONNX, and TensorRT are spread out, filling the entire horizontal space, and aligned vertically. Each icon is accompanied by brief annotations detailing their features. The lively colors like blues, greens, and oranges highlight the icons and sections against a soft gradient background. The distinction between training and inference frameworks is accentuated through color-coded sections, with clean lines and modern typography maintaining clarity and focus._](images/png/cover_ml_frameworks.png)

## Purpose

_What role do AI frameworks play in enabling scalable and efficient machine learning systems, and how do their features and capabilities influence the design, training, and deployment of models across diverse environments?_
## Purpose {.unnumbered}

AI frameworks are the backbone of modern machine learning workflows, providing the tools and abstractions necessary to design, train, and deploy complex models across diverse environments. These frameworks bridge the gap between model development and system execution, allowing engineers to take models designed by developers and efficiently execute them on various underlying systems. By understanding the components, capabilities, and limitations of these frameworks, machine learning practitioners can make informed decisions about the tools that best enable scalability, efficiency, and adaptability. This knowledge prepares us to address the challenges of machine learning deployment, optimization, and system-level integration.
_How do AI frameworks bridge the gap between theoretical design and practical implementation, and what role do they play in enabling scalable machine learning systems?_

AI frameworks represent critical middleware that transform abstract model specifications into executable implementations. The evolution of these frameworks reveals fundamental patterns for translating high-level designs into efficient computational workflows. Their architecture illuminates essential trade-offs between abstraction, performance, and portability, providing systematic approaches to managing complexity in machine learning systems. Understanding framework capabilities and constraints offers crucial insights into the engineering decisions that shape system scalability, enabling the development of robust, deployable solutions across diverse computing environments.
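
The middleware role described above can be sketched with a toy computation graph: the "framework" records operations symbolically, and a runtime later executes them. Real frameworks add automatic differentiation, compilation, and hardware dispatch on top of this core idea; the API below is invented purely for illustration.

```python
class Graph:
    """A toy framework core: record ops symbolically, then execute them."""
    def __init__(self):
        self.ops = []

    def add(self, name, fn):
        # Recording, not executing: the model is a specification.
        self.ops.append((name, fn))
        return self

    def run(self, x):
        # The "runtime" walks the recorded graph and executes each op.
        for name, fn in self.ops:
            x = fn(x)
        return x

# A model specification built once...
model = (Graph()
         .add("scale", lambda x: [2 * v for v in x])
         .add("relu",  lambda x: [max(0.0, v) for v in x]))

# ...then executed by the framework's runtime.
out = model.run([-1.0, 0.5])
```

Separating specification from execution is what lets a framework optimize, serialize, or retarget the same model to different hardware backends.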

::: {.callout-tip}

6 changes: 5 additions & 1 deletion contents/core/hw_acceleration/hw_acceleration.qmd
@@ -10,7 +10,11 @@ Resources: [Slides](#sec-ai-acceleration-resource), [Videos](#sec-ai-acceleratio

![_DALL·E 3 Prompt: Create an intricate and colorful representation of a System on Chip (SoC) design in a rectangular format. Showcase a variety of specialized machine learning accelerators and chiplets, all integrated into the processor. Provide a detailed view inside the chip, highlighting the rapid movement of electrons. Each accelerator and chiplet should be designed to interact with neural network neurons, layers, and activations, emphasizing their processing speed. Depict the neural networks as a network of interconnected nodes, with vibrant data streams flowing between the accelerator pieces, showcasing the enhanced computation speed._](images/png/cover_ai_hardware.png)

Deploying ML on edge devices presents challenges such as limited processing speed, memory constraints, and stringent energy efficiency requirements. To overcome these challenges, specialized hardware acceleration is key. Hardware accelerators are designed to optimize compute-intensive tasks like inference by using custom silicon chips tailored for matrix multiplications, providing significant speedups compared to general-purpose CPUs. This enables real-time execution of advanced models on devices with strict constraints on size, weight, and power.
## Purpose {.unnumbered}

_How does specialized computing transform the performance frontier of machine learning systems, and what principles guide the design of acceleration strategies?_

The evolution of computational acceleration in machine learning systems represents a fundamental shift in how processing resources are architected and utilized. Each acceleration approach introduces unique patterns for matching algorithmic structure with computational capabilities, revealing essential relationships between model design and execution efficiency. The integration of specialized computing elements demonstrates trade-offs between performance, power efficiency, and system complexity. These architectural interactions provide insights into system-level optimization strategies, establishing core principles for designing AI solutions that effectively leverage advanced computation across diverse deployment environments.
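
At the center of most acceleration strategies sits one kernel: matrix multiplication. The naive triple loop below exposes the O(n^3) multiply-accumulate structure that accelerators map onto parallel hardware units; it is a reference sketch for the algorithm, not an optimized implementation.

```python
def matmul(a, b):
    """Naive matrix multiply: the dominant kernel accelerators target."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                # One multiply-accumulate; accelerators run many in parallel.
                out[i][j] += a[i][p] * b[p][j]
    return out

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # -> [[19.0, 22.0], [43.0, 50.0]]
```

Because every iteration of the inner loop is an independent multiply-accumulate over predictable memory, this loop nest is precisely the structure that dedicated matrix units exploit.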

::: {.callout-tip}

