From 5b67bf44f3376861e038e884be6d71fba7888db5 Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Sat, 9 Nov 2024 17:52:19 -0500 Subject: [PATCH 1/9] remove repetition --- contents/core/benchmarking/benchmarking.qmd | 23 +++++++-------------- 1 file changed, 7 insertions(+), 16 deletions(-) diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd index e0fe1ddb..a005ef2f 100644 --- a/contents/core/benchmarking/benchmarking.qmd +++ b/contents/core/benchmarking/benchmarking.qmd @@ -108,7 +108,7 @@ A key prerogative for any benchmark to be impactful is that it must reflect the Furthermore, benchmarks published with broad co-authorship from respected institutions carry authority and validity that convinces the community to adopt them as trusted standards. Benchmarks perceived as biased by particular corporate or institutional interests breed skepticism. Ongoing community engagement through workshops and challenges is also key after the initial release, and that is what, for instance, led to the success of ImageNet. As research progresses, collective participation enables continual refinement and expansion of benchmarks over time. -Finally, community-developed benchmarks released with open access accelerate adoption and consistent implementation. We shared open-source code, documentation, models, and infrastructure to lower barriers for groups to benchmark solutions on an equal footing using standardized implementations. This consistency is critical for fair comparisons. Without coordination, labs and companies may implement benchmarks differently, reducing result reproducibility. +Finally, releasing community-developed benchmarks with open access promotes their adoption and consistent use. By providing open-source code, documentation, models, and infrastructure, we reduce barriers to entry, enabling groups to benchmark solutions on an equal footing with standardized implementations. This consistency is essential for fair comparisons. Without coordination, labs and companies might implement benchmarks differently, which can undermine reproducibility and comparability of results. Community consensus brings benchmarks lasting relevance, while fragmentation confuses. Through collaborative development and transparent operation, benchmarks can become authoritative standards for tracking progress. Several of the benchmarks that we discuss in this chapter were developed and built by the community, for the community, and that is what ultimately led to their success. @@ -126,7 +126,7 @@ The architecture, size, and complexity of AI models vary widely. Different model ### Data Benchmarks -AI, particularly machine learning, is inherently data-driven. The quality, size, and diversity of data influence AI models' training efficacy and generalization capability. Data benchmarks focus on the datasets used in AI training and evaluation. They provide standardized datasets the community can use to train and test models, ensuring a level playing field for comparisons. Moreover, these benchmarks highlight data quality, diversity, and representation challenges, pushing the community to address biases and gaps in AI training data. By understanding data benchmarks, researchers can also gauge how models might perform in real-world scenarios, ensuring robustness and reliability. +In machine learning, data is foundational because the quality, scale, and diversity of datasets directly impact model efficacy and generalization. 
Data benchmarks focus on the datasets used in training and evaluation. They provide standardized datasets the community can use to train and test models, ensuring a level playing field for comparisons. Moreover, these benchmarks highlight data quality, diversity, and representation challenges, pushing the community to address biases and gaps in training data. By understanding data benchmarks, researchers can also gauge how models might perform in real-world scenarios, ensuring robustness and reliability. In the remainder of the sections, we will discuss each of these benchmark types. The focus will be an in-depth exploration of system benchmarks, as these are critical to understanding and advancing machine learning system performance. We will briefly cover model and data benchmarks for a comprehensive perspective, but the emphasis and majority of the content will be devoted to system benchmarks. @@ -143,7 +143,7 @@ Machine learning system benchmarking provides a structured and systematic approa #### Micro Benchmarks -Micro-benchmarks in AI are specialized, evaluating distinct components or specific operations within a broader machine learning process. These benchmarks zero in on individual tasks, offering insights into the computational demands of a particular neural network layer, the efficiency of a unique optimization technique, or the throughput of a specific activation function. For instance, practitioners might use micro-benchmarks to measure the computational time required by a convolutional layer in a deep learning model or to evaluate the speed of data preprocessing that feeds data into the model. Such granular assessments are instrumental in fine-tuning and optimizing discrete aspects of AI models, ensuring that each component operates at its peak potential. +Micro-benchmarks are specialized, evaluating distinct components or specific operations within a broader machine learning process. These benchmarks focus on individual tasks, offering insights into the computational demands of a particular neural network layer, the efficiency of a unique optimization technique, or the throughput of a specific activation function. For instance, practitioners might use micro-benchmarks to measure the computational time required by a convolutional layer in a deep learning model or to evaluate the speed of data preprocessing that feeds data into the model. Such granular assessments are instrumental in fine-tuning and optimizing discrete aspects of models, ensuring that each component operates at its peak potential. These types of microbenchmarks include zooming into very specific operations or components of the AI pipeline, such as the following: @@ -153,7 +153,7 @@ These types of microbenchmarks include zooming into very specific operations or * **Layer Benchmarks:** Evaluations of the computational efficiency of distinct neural network layers, such as LSTM or Transformer blocks, when operating on standardized input sizes. -Example: [DeepBench](https://github.com/baidu-research/DeepBench), introduced by Baidu, is a good example of something that assesses the above. DeepBench assesses the performance of basic operations in deep learning models, providing insights into how different hardware platforms handle neural network training and inference. +Example: [DeepBench](https://github.com/baidu-research/DeepBench), introduced by Baidu, is a good benchmark that evaluates fundamental deep learning operations, such as those mentioned above. 
DeepBench assesses the performance of basic operations in deep learning models, providing insights into how different hardware platforms handle neural network training and inference. :::{#exr-cuda .callout-caution collapse="true"} @@ -167,7 +167,7 @@ Ever wonder how your image filters get so fast? Special libraries like cuDNN sup #### Macro Benchmarks -Macro benchmarks provide a holistic view, assessing the end-to-end performance of entire machine learning models or comprehensive AI systems. Rather than focusing on individual operations, macro-benchmarks evaluate the collective efficacy of models under real-world scenarios or tasks. For example, a macro-benchmark might assess the complete performance of a deep learning model undertaking image classification on a dataset like [ImageNet](https://www.image-net.org/). This includes gauging accuracy, computational speed, and resource consumption. Similarly, one might measure the cumulative time and resources needed to train a natural language processing model on extensive text corpora or evaluate the performance of an entire recommendation system, from data ingestion to final user-specific outputs. +Macro benchmarks provide a holistic view, assessing the end-to-end performance of entire machine learning models or comprehensive ML systems. Rather than focusing on individual operations, macro-benchmarks evaluate the collective efficacy of models under real-world scenarios or tasks. For example, a macro-benchmark might assess the complete performance of a deep learning model undertaking image classification on a dataset like [ImageNet](https://www.image-net.org/). This includes gauging accuracy, computational speed, and resource consumption. Similarly, one might measure the cumulative time and resources needed to train a natural language processing model on extensive text corpora or evaluate the performance of an entire recommendation system, from data ingestion to final user-specific outputs. Examples: These benchmarks evaluate the AI model: @@ -179,7 +179,7 @@ Examples: These benchmarks evaluate the AI model: #### End-to-end Benchmarks -End-to-end benchmarks provide an all-inclusive evaluation that extends beyond the boundaries of the AI model itself. Instead of focusing solely on a machine learning model's computational efficiency or accuracy, these benchmarks encompass the entire pipeline of an AI system. This includes initial data preprocessing, the core model's performance, post-processing of the model's outputs, and other integral components like storage and network interactions. +End-to-end benchmarks provide an all-inclusive evaluation that extends beyond the boundaries of the ML model itself. Instead of focusing solely on a machine learning model's computational efficiency or accuracy, these benchmarks encompass the entire pipeline of an AI system. This includes initial data preprocessing, the core model's performance, post-processing of the model's outputs, and other integral components like storage and network interactions. Data preprocessing is the first stage in many AI systems, transforming raw data into a format suitable for model training or inference. These preprocessing steps' efficiency, scalability, and accuracy are vital for the overall system's performance. End-to-end benchmarks assess this phase, ensuring that data cleaning, normalization, augmentation, or any other transformation process doesn't become a bottleneck. 
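To make this concrete, the short sketch below times an isolated preprocessing stage on synthetic data. The image dimensions, batch size, and normalization steps are illustrative assumptions rather than part of any particular benchmark; the point is simply that the preprocessing stage can be measured on its own and compared against the rate at which the rest of the pipeline consumes data.

```python
import time
import numpy as np

def preprocess(batch):
    """Toy preprocessing: cast to float, scale to [0, 1], then normalize."""
    x = batch.astype(np.float32) / 255.0
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    std = x.std(axis=(0, 1, 2), keepdims=True) + 1e-7
    return (x - mean) / std

# Synthetic stand-in for a batch of decoded images: 256 RGB frames at 224x224.
batch = np.random.randint(0, 256, size=(256, 224, 224, 3), dtype=np.uint8)

start = time.perf_counter()
for _ in range(10):  # repeat to get a more stable estimate
    _ = preprocess(batch)
elapsed = time.perf_counter() - start

images_per_sec = (10 * batch.shape[0]) / elapsed
print(f"Preprocessing throughput: {images_per_sec:.1f} images/sec")
# If this number is lower than the model's training or inference throughput,
# preprocessing is the likely bottleneck in the end-to-end pipeline.
```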
@@ -254,18 +254,9 @@ Beyond raw scores or metrics, benchmarks often provide guidelines or context to Example: A benchmark might highlight that while Model A scored higher than Model B in accuracy, it offers better real-time performance, making it more suitable for time-sensitive applications. -### Training vs. Inference - -The development life cycle of a machine learning model involves two critical phases - training and inference. [Training](../training/training.qmd), as you may recall, is the process of learning patterns from data to create the model. Inference refers to the model making predictions on new unlabeled data. Both phases play indispensable yet distinct roles. Consequently, each phase warrants rigorous benchmarking to evaluate performance metrics like speed, accuracy, and computational efficiency. - -Benchmarking the training phase provides insights into how different model architectures, hyperparameter values, and optimization algorithms impact the time and resources needed to train the model. For instance, benchmarking shows how neural network depth affects training time on a given dataset. Benchmarking also reveals how hardware accelerators like GPUs and TPUs can speed up training. - -On the other hand, benchmarking inference evaluates model performance in real-world conditions after deployment. Key metrics include latency, throughput, memory footprint, and power consumption. This type of benchmarking determines if a model meets the requirements of its target application regarding response time and device constraints. However, we will discuss these broadly to ensure a general understanding. - - ### Training Benchmarks -Training represents the phase where the system processes and ingests raw data to adjust and refine its parameters. Therefore, it is an algorithmic activity and involves system-level considerations, including data pipelines, storage, computing resources, and orchestration mechanisms. The goal is to ensure that the ML system can efficiently learn from data, optimizing both the model's performance and the system's resource utilization. +The development life cycle of a machine learning model involves two critical phases - training and inference. Training represents the phase where the system processes and ingests raw data to adjust and refine its parameters. Benchmarking the training phase provides insights into how different data pipelines, storage, model architectures, computing resources, hyperparameter values, and optimization algorithms impact the time and resources needed to train the model. The goal is to ensure that the ML system can efficiently learn from data, optimizing both the model's performance and the system's resource utilization. #### Purpose From 3e72ac06b8444d40f47431c3267b976cb8b4caa3 Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Sun, 10 Nov 2024 17:26:23 -0500 Subject: [PATCH 2/9] rewritten ml training benchmarks purpose section --- contents/core/benchmarking/benchmarking.qmd | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd index a005ef2f..3c405149 100644 --- a/contents/core/benchmarking/benchmarking.qmd +++ b/contents/core/benchmarking/benchmarking.qmd @@ -256,17 +256,17 @@ Example: A benchmark might highlight that while Model A scored higher than Model ### Training Benchmarks -The development life cycle of a machine learning model involves two critical phases - training and inference. 
Training represents the phase where the system processes and ingests raw data to adjust and refine its parameters. Benchmarking the training phase provides insights into how different data pipelines, storage, model architectures, computing resources, hyperparameter values, and optimization algorithms impact the time and resources needed to train the model. The goal is to ensure that the ML system can efficiently learn from data, optimizing both the model's performance and the system's resource utilization. +The development life cycle of a machine learning model involves two critical phases - training and inference. Training represents the phase where the system processes and ingests raw data to adjust and refine its parameters. Benchmarking the training phase reveals how choices in data pipelines, storage solutions, model architectures, computing resources, hyperparameter settings, and optimization algorithms affect the efficiency and resource demands of model training. The goal is to ensure that the ML system can efficiently learn from data, optimizing both the model's performance and the system's resource utilization. #### Purpose -From an ML systems perspective, training benchmarks evaluate how well the system scales with increasing data volumes and computational demands. It's about understanding the interplay between hardware, software, and the data pipeline in the training process. +From a systems perspective, training machine learning models is resource-intensive, especially when working with large models. These models often contain billions or even trillions of trainable parameters and require enormous amounts of data, often on the scale of many terabytes. For example, GPT-3 has 175 billion parameters, was trained on 45 TB of compressed plaintext data, and required 3,640 petaflop-days of compute for pretraining. ML training benchmarks evaluate the systems and resources required to manage the computational load of training such models. -Consider a distributed ML system designed to train on vast datasets, like those used in large-scale e-commerce product recommendations. A training benchmark would assess how efficiently the system scales across multiple nodes, manage data sharding and handle failures or node drop-offs during training. +Efficient data storage and delivery during training also play a major role in the training process. For instance, in a machine learning model that predicts bounding boxes around objects in an image, thousands of images may be required. However, loading an entire image dataset into memory is typically infeasible, so practitioners rely on data loaders from ML frameworks. Successful model training depends on timely and efficient data delivery, making it essential to benchmark tools like data loaders, data pipelines, preprocessing speed, and storage retrieval times to understand their impact on training performance. -Training benchmarks evaluate CPU, GPU, memory, and network utilization during the training phase, guiding system optimizations. When training a model in a cloud-based ML system, it's crucial to understand how resources are being utilized. Are GPUs being fully leveraged? Is there unnecessary memory overhead? Benchmarks can highlight bottlenecks or inefficiencies in resource utilization, leading to cost savings and performance improvements. +Hardware selection is another key factor in training machine learning systems, as it can significantly impact training time. 
Training benchmarks evaluate CPU, GPU, memory, and network utilization during the training phase to guide system optimizations. Understanding how resources are used is essential: Are GPUs being fully leveraged? Is there unnecessary memory overhead? Benchmarks can uncover bottlenecks or inefficiencies in resource utilization, leading to cost savings and performance improvements. -Training an ML model is contingent on timely and efficient data delivery. Benchmarks in this context would also assess the efficiency of data pipelines, data preprocessing speed, and storage retrieval times. For real-time analytics systems, like those used in fraud detection, the speed at which training data is ingested, preprocessed, and fed into the model can be critical. Benchmarks would evaluate the latency of data pipelines, the efficiency of storage systems (like SSDs vs. HDDs), and the speed of data augmentation or transformation tasks. +In many cases, using a single hardware accelerator, such as a single GPU, is insufficient to meet the computational demands of large-scale model training. Machine learning models are often trained in data centers with multiple GPUs or TPUs, where distributed computing enables parallel processing across nodes. Training benchmarks assess how efficiently the system scales across multiple nodes, manages data sharding, and handles challenges like node failures or drop-offs during training. #### Metrics From e7b8529095de458918d1e3c76566fe7871d8f2cb Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Sun, 10 Nov 2024 17:31:55 -0500 Subject: [PATCH 3/9] added reference --- contents/core/benchmarking/benchmarking.qmd | 2 +- contents/core/frameworks/frameworks.qmd | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd index 3c405149..62e85813 100644 --- a/contents/core/benchmarking/benchmarking.qmd +++ b/contents/core/benchmarking/benchmarking.qmd @@ -262,7 +262,7 @@ The development life cycle of a machine learning model involves two critical pha From a systems perspective, training machine learning models is resource-intensive, especially when working with large models. These models often contain billions or even trillions of trainable parameters and require enormous amounts of data, often on the scale of many terabytes. For example, GPT-3 has 175 billion parameters, was trained on 45 TB of compressed plaintext data, and required 3,640 petaflop-days of compute for pretraining. ML training benchmarks evaluate the systems and resources required to manage the computational load of training such models. -Efficient data storage and delivery during training also play a major role in the training process. For instance, in a machine learning model that predicts bounding boxes around objects in an image, thousands of images may be required. However, loading an entire image dataset into memory is typically infeasible, so practitioners rely on data loaders from ML frameworks. Successful model training depends on timely and efficient data delivery, making it essential to benchmark tools like data loaders, data pipelines, preprocessing speed, and storage retrieval times to understand their impact on training performance. +Efficient data storage and delivery during training also play a major role in the training process. For instance, in a machine learning model that predicts bounding boxes around objects in an image, thousands of images may be required. 
However, loading an entire image dataset into memory is typically infeasible, so practitioners rely on data loaders (as discussed in @sec-frameworks-data-loaders) from ML frameworks. Successful model training depends on timely and efficient data delivery, making it essential to benchmark tools like data loaders, data pipelines, preprocessing speed, and storage retrieval times to understand their impact on training performance.

Hardware selection is another key factor in training machine learning systems, as it can significantly impact training time. Training benchmarks evaluate CPU, GPU, memory, and network utilization during the training phase to guide system optimizations. Understanding how resources are used is essential: Are GPUs being fully leveraged? Is there unnecessary memory overhead? Benchmarks can uncover bottlenecks or inefficiencies in resource utilization, leading to cost savings and performance improvements.

diff --git a/contents/core/frameworks/frameworks.qmd b/contents/core/frameworks/frameworks.qmd index d35bf4d8..908d1b3f 100644 --- a/contents/core/frameworks/frameworks.qmd +++ b/contents/core/frameworks/frameworks.qmd @@ -397,7 +397,7 @@ Recently, the distinction has blurred as frameworks adopt both modes. TensorFlow Computational graphs can only be as good as the data they learn from and work on. Therefore, feeding training data efficiently is crucial for optimizing deep neural network performance, though it is often overlooked as one of the core functionalities. Many modern AI frameworks provide specialized pipelines to ingest, process, and augment datasets for model training.

-#### Data Loaders
+#### Data Loaders {#sec-frameworks-data-loaders}

At the core of these pipelines are data loaders, which handle reading training examples from sources like files, databases, and object storage. Data loaders facilitate efficient data loading and preprocessing, crucial for deep learning models. For instance, TensorFlow's [tf.data](https://www.tensorflow.org/guide/data) dataloading pipeline is designed to manage this process. Depending on the application, deep learning models require diverse data formats such as CSV files or image folders. Some popular formats include:

From 143ddc37d91c692d948a626df008f3edbeea5047 Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Sun, 10 Nov 2024 18:42:31 -0500 Subject: [PATCH 4/9] remove repetitive task sections and group in new section --- contents/core/benchmarking/benchmarking.qmd | 53 +++++++-------------- 1 file changed, 16 insertions(+), 37 deletions(-) diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd index 62e85813..98fe4bd8 100644 --- a/contents/core/benchmarking/benchmarking.qmd +++ b/contents/core/benchmarking/benchmarking.qmd @@ -260,7 +260,7 @@ The development life cycle of a machine learning pha

#### Purpose

From a systems perspective, training machine learning models is resource-intensive, especially when working with large models. These models often contain billions or even trillions of trainable parameters and require enormous amounts of data, often on the scale of many terabytes. For example, [OpenAI's GPT-3](https://arxiv.org/abs/2005.14165) [@brown2020language] has 175 billion parameters, was trained on 45 TB of compressed plaintext data, and required 3,640 petaflop-days of compute for pretraining. ML training benchmarks evaluate the systems and resources required to manage the computational load of training such models.
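A rough calculation shows why a budget of this size forces training to be treated as a systems problem. The sketch below converts petaflop-days into total floating-point operations and estimates wall-clock time on a hypothetical cluster; the per-accelerator throughput, utilization, and cluster size are illustrative assumptions, not figures for any specific hardware.

```python
# Back-of-envelope: translate a compute budget in petaflop-days into
# wall-clock training time on an assumed accelerator cluster.
PFLOP_DAYS = 3640                          # reported GPT-3 pretraining budget
total_flops = PFLOP_DAYS * 1e15 * 86400    # 1 PFLOP/s sustained for one day

peak_tflops_per_device = 150   # assumed peak per accelerator (TFLOP/s)
utilization = 0.3              # assumed fraction of peak actually sustained
num_devices = 1024             # assumed cluster size

sustained_flops_per_sec = num_devices * peak_tflops_per_device * 1e12 * utilization
training_days = total_flops / sustained_flops_per_sec / 86400

print(f"Total compute: {total_flops:.2e} FLOPs")
print(f"Estimated wall-clock time: {training_days:.0f} days on {num_devices} devices")
```

Even under these assumptions, the job runs for months across a thousand accelerators, which is why training benchmarks must consider the full system rather than a single device.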
+From a systems perspective, training machine learning models is resource-intensive, especially when working with large models. These models often contain billions or even trillions of trainable parameters and require enormous amounts of data, often on the scale of many terabytes. For example, [OpenAI's GPT-3](https://arxiv.org/abs/2005.14165) [@brown2020language] has 175 billion parameters, was trained on 45 TB of compressed plaintext data, and required 3,640 petaflop-days of compute for pretraining. ML training benchmarks evaluate the systems and resources required to manage the computational load of training such models. Efficient data storage and delivery during training also play a major role in the training process. For instance, in a machine learning model that predicts bounding boxes around objects in an image, thousands of images may be required. However, loading an entire image dataset into memory is typically infeasible, so practitioners rely on data loaders (as disucssed in @sec-frameworks-data-loaders) from ML frameworks. Successful model training depends on timely and efficient data delivery, making it essential to benchmark tools like data loaders, data pipelines, preprocessing speed, and storage retrieval times to understand their impact on training performance. @@ -276,13 +276,13 @@ The following metrics are often considered important: 1. **Training Time:** The time it takes to train a model from scratch until it reaches a satisfactory performance level. It directly measures the computational resources required to train a model. For example, [Google's BERT](https://arxiv.org/abs/1810.04805) [@devlin2018bert] is a natural language processing model that requires several days to train on a massive corpus of text data using multiple GPUs. The long training time is a significant resource consumption and cost challenge. In some cases, benchmarks can instead measure the training throughput (training samples per unit of time). Throughput can be calculated much faster and easier than training time but may obscure the metrics we really care about (e.g. time to train). -2. **Scalability:** How well the training process can handle increases in data size or model complexity. Scalability can be assessed by measuring training time, memory usage, and other resource consumption as data size or model complexity increases. [OpenAI's GPT-3](https://arxiv.org/abs/2005.14165) [@brown2020language] model has 175 billion parameters, making it one of the largest language models in existence. Training GPT-3 required extensive engineering efforts to scale the training process to handle the massive model size. This involved using specialized hardware, distributed training, and other techniques to ensure the model could be trained efficiently. +2. **Scalability:** How well the training process can handle increases in data size or model complexity. Scalability can be assessed by measuring training time, memory usage, and other resource consumption as data size or model complexity increases. For instance, training OpenAI's GPT-3 required extensive engineering efforts to scale the training process across many GPU nodes to handle the massive model size. This involved using specialized hardware, distributed training, and other techniques to ensure the model could be trained efficiently. 3. **Resource Utilization:** The extent to which the training process utilizes available computational resources such as CPU, GPU, memory, and disk I/O. 
High resource utilization can indicate an efficient training process, while low utilization can suggest bottlenecks or inefficiencies. For instance, training a convolutional neural network (CNN) for image classification requires significant GPU resources. Utilizing multi-GPU setups and optimizing the training code for GPU acceleration can greatly improve resource utilization and training efficiency. 4. **Memory Consumption:** The amount of memory the training process uses. Memory consumption can be a limiting factor for training large models or datasets. For example, Google researchers faced significant memory consumption challenges when training BERT. The model has hundreds of millions of parameters, requiring large amounts of memory. The researchers had to develop techniques to reduce memory consumption, such as gradient checkpointing and model parallelism. -5. **Energy Consumption:** The energy consumed during training. As machine learning models become more complex, energy consumption has become an important consideration. Training large machine learning models can consume significant energy, leading to a large carbon footprint. For instance, the training of OpenAI's GPT-3 was estimated to have a carbon footprint equivalent to traveling by car for 700,000 kilometers. +5. **Energy Consumption:** The energy consumed during training. As machine learning models become more complex, energy consumption has become an important consideration. Training large machine learning models can consume significant energy, leading to a large carbon footprint. For instance, the training of OpenAI's GPT-3 was estimated to have a carbon footprint equivalent to traveling by car for 700,000 kilometers (~435,000 miles). 6. **Throughput:** The number of training samples processed per unit time. Higher throughput generally indicates a more efficient training process. The throughput is an important metric to consider when training a recommendation system for an e-commerce platform. A high throughput ensures that the model can process large volumes of user interaction data promptly, which is crucial for maintaining the relevance and accuracy of the recommendations. But it's also important to understand how to balance throughput with latency bounds. Therefore, a latency-bounded throughput constraint is often imposed on service-level agreements for data center application deployments. @@ -296,20 +296,6 @@ The following metrics are often considered important: By benchmarking for these types of metrics, we can obtain a comprehensive view of the training process's performance and efficiency from a systems perspective. This can help identify areas for improvement and ensure that resources are used effectively. -#### Tasks - -Selecting a handful of representative tasks for benchmarking machine learning systems is challenging because machine learning is applied to various domains with unique characteristics and requirements. Here are some of the challenges faced in selecting representative tasks: - -1. **Diversity of Applications:** Machine learning is used in numerous fields such as healthcare, finance, natural language processing, computer vision, and many more. Each field has specific tasks that may not be representative of other fields. For example, image classification tasks in computer vision may not be relevant to financial fraud detection. -2. **Variability in Data Types and Quality:** Different tasks require different data types, such as text, images, videos, or numerical data. 
Data quality and availability can vary greatly between tasks, making it difficult to select tasks that are representative of the general challenges faced in machine learning. -3. **Task Complexity and Difficulty:** The complexity of tasks varies greatly. Some are relatively straightforward, while others are highly complex and require sophisticated models and techniques. Selecting representative tasks that cover the complexities encountered in machine learning is challenging. -4. **Ethical and Privacy Concerns:** Some tasks may involve sensitive or private data, such as medical records or personal information. These tasks may have ethical and privacy concerns that need to be addressed, making them less suitable as representative tasks for benchmarking. -5. **Scalability and Resource Requirements:** Different tasks may have different scalability and resource requirements. Some tasks may require extensive computational resources, while others can be performed with minimal resources. Selecting tasks that represent the general resource requirements in machine learning is difficult. -6. **Evaluation Metrics:** The metrics used to evaluate the performance of machine learning models vary between tasks. Some tasks may have well-established evaluation metrics, while others lack clear or standardized metrics. This can make it challenging to compare performance across different tasks. -7. **Generalizability of Results:** The results obtained from benchmarking on a specific task may not be generalizable to other tasks. This means that a machine learning system's performance on a selected task may not be indicative of its performance on other tasks. - -It is important to carefully consider these factors when designing benchmarks to ensure they are meaningful and relevant to the diverse range of tasks encountered in machine learning. - #### Benchmarks Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for training machine learning systems. @@ -388,26 +374,6 @@ Finally, it is vital to ensure that the model's predictions are not only accurat 6. **Memory Usage:** Memory usage quantifies the volume of RAM needed by a machine learning model to carry out inference tasks. A relevant example to illustrate this would be a face recognition system based on a CNN; if such a system requires 150 MB of RAM to process and recognize faces within an image, its memory usage is 150 MB. -#### Tasks - -The challenges in picking representative tasks for benchmarking inference machine learning systems are, by and large, somewhat similar to the taxonomy we have provided for training. Nevertheless, to be pedantic, let's discuss those in the context of inference machine learning systems. - -1. **Diversity of Applications:** Inference machine learning is employed across numerous domains such as healthcare, finance, entertainment, security, and more. Each domain has unique tasks, and what's representative in one domain might not be in another. For example, an inference task for predicting stock prices in the financial domain might differ from image recognition tasks in the medical domain. - -2. **Variability in Data Types:** Different inference tasks require different types of data—text, images, videos, numerical data, etc. Ensuring that benchmarks address the wide variety of data types used in real-world applications is challenging. For example, voice recognition systems process audio data, which is vastly different from the visual data processed by facial recognition systems. 
- -3. **Task Complexity:** The complexity of inference tasks can differ immensely, from basic classification tasks to intricate tasks requiring state-of-the-art models. For example, differentiating between two categories (binary classification) is typically simpler than detecting hundreds of object types in a crowded scene. - -4. **Real-time Requirements:** Some applications demand immediate or real-time responses, while others may allow for some delay. In autonomous driving, real-time object detection and decision-making are paramount, whereas a recommendation engine for a shopping website might tolerate slight delays. - -5. **Scalability Concerns:** Given the varied scale of applications, from edge devices to cloud-based servers, tasks must represent the diverse computational environments where inference occurs. For example, an inference task running on a smartphone's limited resources differs from a powerful cloud server. - -6. **Evaluation Metrics Diversity:** The metrics used to evaluate performance can differ significantly depending on the task. Finding a common ground or universally accepted metric for diverse tasks is challenging. For example, precision and recall might be vital for a medical diagnosis task, whereas throughput (inferences per second) might be more crucial for video processing tasks. - -7. **Ethical and Privacy Concerns:** Concerns related to ethics and privacy exist, especially in sensitive areas like facial recognition or personal data processing. These concerns can impact the selection and nature of tasks used for benchmarking. For example, using real-world facial data for benchmarking can raise privacy issues, whereas synthetic data might not replicate real-world challenges. - -8. **Hardware Diversity:** With a wide range of devices from GPUs, CPUs, and TPUs to custom ASICs used for inference, ensuring that tasks are representative across varied hardware is challenging. For example, a task optimized for inference on a GPU might perform sub-optimally on an edge device. - #### Benchmarks Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for inference machine learning systems. @@ -472,6 +438,19 @@ Get ready to put your AI models to the ultimate test! MLPerf is like the Olympic ::: + +### Benchmark Task Selection + +Selecting representative tasks for benchmarking machine learning systems is complex due to the varied applications, data types, and requirements across different domains. Machine learning is applied in fields such as healthcare, finance, natural language processing, and computer vision, each with unique tasks that may not be relevant or comparable to others. Key challenges in task selection include: + +1. **Diversity of Applications and Data Types:** Tasks across domains involve different data types (e.g., text, images, video) and qualities, making it difficult to find benchmarks that universally represent ML challenges. +2. **Task Complexity and Resource Needs:** Tasks vary in complexity and resource demands, with some requiring substantial computational power and sophisticated models, while others can be addressed with simpler resources and methods. +3. **Privacy Concerns:** Tasks involving sensitive data, such as medical records or personal information, introduce ethical and privacy issues, making them unsuitable for general benchmarks. +4. 
**Evaluation Metrics:** Performance metrics vary widely across tasks, and results from one task often do not generalize to others, complicating comparisons and limiting insights from one benchmarked task to another. + +Addressing these challenges is essential to designing meaningful benchmarks that are relevant across the diverse tasks encountered in machine learning, ensuring benchmarks provide useful, generalizable insights for both training and inference. + + ### Measuring Energy Efficiency As machine learning capabilities expand, both in training and inference, concerns about increased power consumption and its ecological footprint have intensified. Addressing the sustainability of ML systems, a topic explored in more depth in the [Sustainable AI](../sustainable_ai/sustainable_ai.qmd) chapter, has thus become a key priority. This focus on sustainability has led to the development of standardized benchmarks designed to accurately measure energy efficiency. However, standardizing these methodologies poses challenges due to the need to accommodate vastly different scales—from the microwatt consumption of TinyML devices to the megawatt demands of data center training systems. Moreover, ensuring that benchmarking is fair and reproducible requires accommodating the diverse range of hardware configurations and architectures in use today. From fb3d607f41e496c03baf77b9918483be9a1a794d Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Mon, 11 Nov 2024 13:19:17 -0500 Subject: [PATCH 5/9] re-working the examples --- contents/core/benchmarking/benchmarking.qmd | 48 +++++++++------------ 1 file changed, 21 insertions(+), 27 deletions(-) diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd index 98fe4bd8..f6650036 100644 --- a/contents/core/benchmarking/benchmarking.qmd +++ b/contents/core/benchmarking/benchmarking.qmd @@ -300,9 +300,7 @@ By benchmarking for these types of metrics, we can obtain a comprehensive view o Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for training machine learning systems. -*[MLPerf Training Benchmark](https://github.com/mlcommons/training)* - -MLPerf is a suite of benchmarks designed to measure the performance of machine learning hardware, software, and services. The MLPerf Training benchmark [@mattson2020mlperf] focuses on the time it takes to train models to a target quality metric. It includes diverse workloads, such as image classification, object detection, translation, and reinforcement learning. @fig-perf-trend highlights the performance improvements in progressive versions of MLPerf Training benchmarks, which have all outpaced Moore's Law. Using standardized benchmarking trends enables us to rigorously showcase the rapid evolution of ML computing. +**[MLPerf Training Benchmark](https://github.com/mlcommons/training)**: MLPerf is a suite of benchmarks designed to measure the performance of machine learning hardware, software, and services. The MLPerf Training benchmark [@mattson2020mlperf] focuses on the time it takes to train models to a target quality metric. It includes diverse workloads, such as image classification, object detection, translation, and reinforcement learning. @fig-perf-trend highlights the performance improvements in progressive versions of MLPerf Training benchmarks, which have all outpaced Moore's Law. Using standardized benchmarking trends enables us to rigorously showcase the rapid evolution of ML computing. 
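The core measurement in such a benchmark is time-to-train: run the training loop until a predefined quality target is reached and report the elapsed wall-clock time and throughput. The sketch below shows only that skeleton, with placeholder training and evaluation functions standing in for a real workload; MLPerf's reference implementations follow the same pattern with full models, datasets, and carefully specified quality targets.

```python
import time

def train_one_epoch():
    """Placeholder for one pass over the training data."""
    time.sleep(0.1)      # stand-in for real computation
    return 50_000        # pretend 50k examples were processed

def evaluate(epoch):
    """Placeholder validation metric that improves with each epoch."""
    return min(0.70 + 0.05 * epoch, 0.99)

TARGET_QUALITY = 0.90    # benchmark-defined quality target
start = time.perf_counter()
examples_processed, epoch = 0, 0

while True:
    epoch += 1
    examples_processed += train_one_epoch()
    quality = evaluate(epoch)
    if quality >= TARGET_QUALITY:
        break

elapsed = time.perf_counter() - start
print(f"Reached quality {quality:.2f} after {epoch} epochs")
print(f"Time-to-train: {elapsed:.2f} s, "
      f"throughput: {examples_processed / elapsed:,.0f} examples/s")
```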
![MLPerf Training performance trends. Source: @mattson2020mlperf.](images/png/mlperf_perf_trend.png){#fig-perf-trend} @@ -312,9 +310,7 @@ Metrics: * Throughput (examples per second) * Resource utilization (CPU, GPU, memory, disk I/O) -*[DAWNBench](https://dawn.cs.stanford.edu/benchmark/)* - -DAWNBench [@coleman2017dawnbench] is a benchmark suite focusing on end-to-end deep learning training time and inference performance. It includes common tasks such as image classification and question answering. +**[DAWNBench](https://dawn.cs.stanford.edu/benchmark/)**: DAWNBench [@coleman2017dawnbench] is a benchmark suite focusing on end-to-end deep learning training time and inference performance. It includes common tasks such as image classification and question answering. Metrics: @@ -322,9 +318,7 @@ Metrics: * Inference latency * Cost (in terms of cloud computing and storage resources) -*[Fathom](https://github.com/rdadolf/fathom)* - -Fathom [@adolf2016fathom] is a benchmark from Harvard University that evaluates the performance of deep learning models using a diverse set of workloads. These include common tasks such as image classification, speech recognition, and language modeling. +**[Fathom](https://github.com/rdadolf/fathom)**: Fathom [@adolf2016fathom] is a benchmark from Harvard University that evaluates the performance of deep learning models using a diverse set of workloads. These include common tasks such as image classification, speech recognition, and language modeling. Metrics: @@ -334,17 +328,18 @@ Metrics: #### Example Use Case -Consider a scenario where we want to benchmark the training of an image classification model on a specific hardware platform. +Imagine you have been tasked with benchmarking the training performance of an image classification model on a specific hardware platform. Let’s break down how you might approach this: -1. **Task:** The task is to train a convolutional neural network (CNN) for image classification on the CIFAR-10 dataset. -2. **Benchmark:** We can use the MLPerf Training benchmark for this task. It includes an image classification workload that is relevant to our task. -3. **Metrics:** We will measure the following metrics: +1. **Define the Task**: First, choose a model and dataset. In this case, you’ll be training a CNN to classify images in the [CIFAR-10](https://www.cs.toronto.edu/kriz/cifar.html) dataset, a widely used benchmark in computer vision. -* Training time to reach a target accuracy of 90%. -* Throughput in terms of images processed per second. -* GPU and CPU utilization during training. +2. **Select the Benchmark**: Choosing a widely accepted benchmark helps ensure your setup is comparable with other real-world evaluations. You could choose to use the MLPerf Training benchmark because it provides a structured image classification workload, making it a relevant and standardized option for assessing training performance on CIFAR-10. Using MLPerf enables you to evaluate your system against industry-standard metrics, helping to ensure that results are meaningful and comparable to those achieved on other hardware platforms. -By measuring these metrics, we can assess the performance and efficiency of the training process on the selected hardware platform. This information can then be used to identify potential bottlenecks or areas for improvement. +3. **Identify Key Metrics**: Now, decide on the metrics that will help you evaluate the system’s training performance. 
For this example, you might track: + - **Training Time**: How long does it take to reach 90% accuracy? + - **Throughput**: How many images are processed per second? + - **Resource Utilization**: What’s the GPU and CPU usage throughout training? + +By analyzing these metrics, you’ll gain insights into the model's training performance on your chosen hardware platform. Consider whether training time meets your expectations, if there are any bottlenecks, such as underutilized GPUs or slow data loading. This process helps identify areas for potential optimization, like improving data handling or adjusting resource allocation, and can guide future benchmarking decisions. ### Inference Benchmarks @@ -413,20 +408,19 @@ Metrics: #### Example Use Case -Consider a scenario where we want to evaluate the inference performance of an object detection model on a specific edge device. - -Task: The task is to perform real-time object detection on video streams, detecting and identifying objects such as vehicles, pedestrians, and traffic signs. +Suppose you were tasked with evaluating the inference performance of an object detection model on a specific edge device. Here’s how you might approach structuring this benchmark: -Benchmark: We can use the AI Benchmark for this task as it evaluates inference performance on edge devices, which suits our scenario. +1. **Define the Task**: In this case, the task is real-time object detection on video streams, identifying objects such as vehicles, pedestrians, and traffic signs. -Metrics: We will measure the following metrics: +2. **Select the Benchmark**: To align with your goal of evaluating inference on an edge device, the AI Benchmark is a suitable choice. It provides a standardized framework specifically for assessing inference performance on edge hardware, making it relevant to this scenario. -* Inference time to process each video frame -* Latency to generate the bounding boxes for detected objects -* Energy consumption during the inference process -* Throughput in terms of video frames processed per second +3. **Identify Key Metrics**: Now, determine the metrics that will help evaluate the model’s inference performance. For this example, you might track: + - **Inference Time**: How long does it take to process each video frame? + - **Latency**: What is the delay in generating bounding boxes for detected objects? + - **Energy Consumption**: How much power is used during inference? + - **Throughput**: How many video frames are processed per second? -By measuring these metrics, we can assess the performance of the object detection model on the edge device and identify any potential bottlenecks or areas for optimization to improve real-time processing capabilities. +By measuring these metrics, you’ll gain insights into how well the object detection model performs on the edge device. This can help identify any bottlenecks, such as slow frame processing or high energy consumption, and highlight areas for potential optimization to improve real-time performance. 
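To illustrate how the latency and throughput numbers might be collected in practice, the sketch below times a placeholder detector over a set of synthetic frames. The frame size, frame count, and the dummy `detect_objects` function are assumptions made purely for illustration; energy consumption would additionally require a power meter or on-device energy counters, which this sketch does not model.

```python
import statistics
import time
import numpy as np

def detect_objects(frame):
    """Placeholder for a real object detector's inference call."""
    return (frame.astype(np.float32) / 255.0).mean()  # dummy computation

# Synthetic 640x480 RGB frames standing in for a video stream.
frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(100)]

latencies_ms = []
for frame in frames:
    t0 = time.perf_counter()
    detect_objects(frame)
    latencies_ms.append((time.perf_counter() - t0) * 1000)

mean_latency = statistics.mean(latencies_ms)
print(f"Mean latency: {mean_latency:.2f} ms")
print(f"p95 latency:  {sorted(latencies_ms)[94]:.2f} ms")
print(f"Throughput:   {1000 / mean_latency:.1f} frames/sec")
```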
:::{#exr-perf .callout-caution collapse="true"} From c1549a7dc3fee894d8a646d3f1aec373c9bd2e70 Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Mon, 11 Nov 2024 14:44:48 -0500 Subject: [PATCH 6/9] benchmark engineering --- contents/core/benchmarking/benchmarking.bib | 8 ++++++++ contents/core/benchmarking/benchmarking.qmd | 10 +++++----- 2 files changed, 13 insertions(+), 5 deletions(-) diff --git a/contents/core/benchmarking/benchmarking.bib b/contents/core/benchmarking/benchmarking.bib index 863e8d38..5923a4cb 100644 --- a/contents/core/benchmarking/benchmarking.bib +++ b/contents/core/benchmarking/benchmarking.bib @@ -50,6 +50,14 @@ @article{banbury2020benchmarking year = {2020}, } +@article{banbury2021mlperf, + title={Mlperf tiny benchmark}, + author={Banbury, Colby and Reddi, Vijay Janapa and Torelli, Peter and Holleman, Jeremy and Jeffries, Nat and Kiraly, Csaba and Montino, Pietro and Kanter, David and Ahmed, Sebastian and Pau, Danilo and others}, + journal={arXiv preprint arXiv:2106.07597}, + year={2021}, + url = {https://arxiv.org/pdf/2106.07597}, +} + @article{beyer2020we, author = {Beyer, Lucas and H\'enaff, Olivier J and Kolesnikov, Alexander and Zhai, Xiaohua and Oord, A\"aron van den}, journal = {ArXiv preprint}, diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd index f6650036..e6327c75 100644 --- a/contents/core/benchmarking/benchmarking.qmd +++ b/contents/core/benchmarking/benchmarking.qmd @@ -517,7 +517,7 @@ Hardware lottery occurs when a machine learning model unintentionally performs e In contrast to the accidental hardware lottery, benchmark engineering involves deliberately optimizing or designing a machine learning model to perform exceptionally well on specific hardware, often to win benchmarks or competitions. This intentional optimization might include tweaking the model's architecture, algorithms, or parameters to exploit the hardware's features and capabilities fully. -#### Problem +##### Problem Benchmark engineering refers to tweaking or modifying an AI system to optimize performance on specific benchmark tests, often at the expense of generalizability or real-world performance. This can include adjusting hyperparameters, training data, or other aspects of the system specifically to achieve high scores on benchmark metrics without necessarily improving the overall functionality or utility of the system. @@ -527,7 +527,7 @@ It can lead to several risks and challenges. One of the primary risks is that th The AI community must prioritize transparency and accountability to mitigate the risks associated with benchmark engineering. This can include disclosing any optimizations or adjustments made specifically for benchmark tests and providing more comprehensive evaluations of AI systems that include real-world performance metrics and benchmark scores. Researchers and developers must prioritize holistic improvements to AI systems that improve their generalizability and functionality across various applications rather than focusing solely on benchmark-specific optimizations. -#### Issues +##### Issues One of the primary problems with benchmark engineering is that it can compromise the real-world performance of AI systems. When developers focus on optimizing their systems to achieve high scores on specific benchmark tests, they may neglect other important system performance aspects crucial in real-world applications. 
For example, an AI system designed for image recognition might be engineered to perform exceptionally well on a benchmark test that includes a specific set of images but needs help to recognize images slightly different from those in the test set accurately. @@ -535,15 +535,15 @@ Another area for improvement with benchmark engineering is that it can result in It can also lead to misleading results. When AI systems are engineered to perform well on benchmark tests, the results may not accurately reflect the system's true capabilities. This can be problematic for users or investors who rely on benchmark scores to make informed decisions about which AI systems to use or invest in. For example, an AI system engineered to achieve high scores on a benchmark test for speech recognition might need to be more capable of accurately recognizing speech in real-world situations, leading users or investors to make decisions based on inaccurate information. -#### Mitigation +##### Mitigation There are several ways to mitigate benchmark engineering. Transparency in the benchmarking process is crucial to maintaining benchmark accuracy and reliability. This involves clearly disclosing the methodologies, data sets, and evaluation criteria used in benchmark tests, as well as any optimizations or adjustments made to the AI system for the purpose of the benchmark. One way to achieve transparency is through the use of open-source benchmarks. Open-source benchmarks are made publicly available, allowing researchers, developers, and other stakeholders to review, critique, and contribute to them, thereby ensuring their accuracy and reliability. This collaborative approach also facilitates sharing best practices and developing more robust and comprehensive benchmarks. -One example is the MLPerf Tiny. It's an open-source framework designed to make it easy to compare different solutions in the world of TinyML. Its modular design allows components to be swapped out for comparison or improvement. The reference implementations, shown in green and orange in @fig-ml-perf, act as the baseline for results. TinyML often needs optimization across the entire system, and users can contribute by focusing on specific parts, like quantization. The modular benchmark design allows users to showcase their contributions and competitive advantage by modifying a reference implementation. In short, MLPerf Tiny offers a flexible and modular way to assess and improve TinyML applications, making it easier to compare and improve different aspects of the technology. +The modular design of MLPerf Tiny connects to the problem of benchmark engineering by providing a structured yet flexible approach that encourages a balanced evaluation of TinyML. In benchmark engineering, systems may be overly optimized for specific benchmarks, leading to inflated performance scores that don’t necessarily translate to real-world effectiveness. MLPerf Tiny’s modular design aims to address this issue by allowing contributors to swap out and test specific components within a standardized framework, such as hardware, quantization techniques, or inference models. The reference implementations, highlighted in green and orange in @fig-ml-perf, provide a baseline for results, enabling flexible yet controlled testing by specifying which components can be modified. This structure supports transparency and flexibility, enabling a focus on genuine improvements rather than benchmark-specific optimizations. -![MLPerf Tiny modular design. 
Source: @mattson2020mlperf.](images/png/mlperf_tiny.png){#fig-ml-perf} +![Modular design of the MLPerf Tiny benchmark, showing the reference implementation with modifiable components. This modular approach enables flexible, targeted testing while maintaining a standardized baseline. Source: @banbury2021mlperf.](images/png/mlperf_tiny.png){#fig-ml-perf} Another method for achieving transparency is through peer review of benchmarks. This involves having independent experts review and validate the benchmark's methodology, data sets, and results to ensure their credibility and reliability. Peer review can provide a valuable means of verifying the accuracy of benchmark tests and help build confidence in the results. From a56d3e291498f2011332b125defae596d913fdf1 Mon Sep 17 00:00:00 2001 From: jasonjabbour Date: Mon, 11 Nov 2024 16:10:24 -0500 Subject: [PATCH 7/9] cutting down on repetitive metrics descriptions --- contents/core/benchmarking/benchmarking.qmd | 44 +++++++-------------- 1 file changed, 15 insertions(+), 29 deletions(-) diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd index e6327c75..bd2e436a 100644 --- a/contents/core/benchmarking/benchmarking.qmd +++ b/contents/core/benchmarking/benchmarking.qmd @@ -603,59 +603,45 @@ Machine learning model evaluation has evolved from a narrow focus on accuracy to #### Accuracy -Accuracy is one of the most intuitive and commonly used metrics for evaluating machine learning models. At its core, accuracy measures the proportion of correct predictions made by the model out of all predictions. For example, imagine we have developed a machine learning model to classify images as either containing a cat or not. If we test this model on a dataset of 100 images, and it correctly identifies 90 of them, we would calculate its accuracy as 90%. +Accuracy is one of the most intuitive and commonly used metrics for evaluating machine learning models. In the early stages of machine learning, accuracy was often the primary, if not the only, metric considered when evaluating model performance. However, as the field has evolved, it’s become clear that relying solely on accuracy can be misleading, especially in applications where certain types of errors carry significant consequences. -In the initial stages of machine learning, accuracy was often the primary, if not the only, metric considered when evaluating model performance. This is understandable, given its straightforward nature and ease of interpretation. However, as the field has progressed, the limitations of relying solely on accuracy have become more apparent. +Consider the example of a medical diagnosis model with an accuracy of 95%. While at first glance this may seem impressive, we must look deeper to assess the model's performance fully. Suppose the model fails to accurately diagnose severe conditions that, while rare, can have severe consequences; its high accuracy may not be as meaningful. A well-known example of this limitation is [Google’s diabetic retinopathy model](https://about.google/intl/ALL_us/stories/seeingpotential/). While it achieved high accuracy in lab settings, it encountered challenges when deployed in real-world clinics in Thailand, where variations in patient populations, image quality, and environmental factors reduced its effectiveness. 
This example illustrates that even models with high accuracy need to be tested for their ability to generalize across diverse, unpredictable conditions to ensure reliability and impact in real-world settings.

-Consider the example of a medical diagnosis model with an accuracy of 95%. While at first glance this may seem impressive, we must look deeper to assess the model's performance fully. Suppose the model fails to accurately diagnose severe conditions that, while rare, can have severe consequences; its high accuracy may not be as meaningful. A pertinent example of this is [Google's retinopathy machine learning model](https://about.google/intl/ALL_us/stories/seeingpotential/), which was designed to diagnose diabetic retinopathy and diabetic macular edema from retinal photographs.

+Similarly, if the model performs well on average but exhibits significant disparities in performance across different demographic groups, this, too, would be cause for concern. The evolution of machine learning has thus seen a shift towards a more holistic approach to model evaluation, taking into account not just accuracy, but also other crucial factors such as fairness, transparency, and real-world applicability. A prime example is the [Gender Shades project](http://gendershades.org/) at MIT Media Lab, led by Joy Buolamwini, which highlighted that commercial facial recognition systems performed better on lighter-skinned and male faces than on darker-skinned and female faces.

-The Google model demonstrated impressive accuracy levels in lab settings. Still, when deployed in real-world clinical environments in Thailand, [it faced significant challenges](https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/). In the real-world setting, the model encountered diverse patient populations, varying image quality, and a range of different medical conditions that it had not been exposed to during its training. Consequently, its performance could have been better, and it struggled to maintain the same accuracy levels observed in lab settings. This example serves as a clear reminder that while high accuracy is an important and desirable attribute for a medical diagnosis model, it must be evaluated in conjunction with other factors, such as the model's ability to generalize to different populations and handle diverse and unpredictable real-world conditions, to understand its value and potential impact on patient care truly.

-Similarly, if the model performs well on average but exhibits significant disparities in performance across different demographic groups, this, too, would be cause for concern.

-The evolution of machine learning has thus seen a shift towards a more holistic approach to model evaluation, taking into account not just accuracy, but also other crucial factors such as fairness, transparency, and real-world applicability. A prime example is the [Gender Shades project](http://gendershades.org/) at MIT Media Lab, led by Joy Buolamwini, highlighting significant racial and gender biases in commercial facial recognition systems. The project evaluated the performance of three facial recognition technologies developed by IBM, Microsoft, and Face++. It found that they all exhibited biases, performing better on lighter-skinned and male faces compared to darker-skinned and female faces.

-While accuracy remains a fundamental and valuable metric for evaluating machine learning models, a more comprehensive approach is required to fully assess a model's performance. 
This means considering additional metrics that account for fairness, transparency, and real-world applicability, as well as conducting rigorous testing across diverse datasets to uncover and mitigate any potential biases. The move towards a more holistic approach to model evaluation reflects the maturation of the field and its increasing recognition of the real-world implications and ethical considerations associated with deploying machine learning models.
+While accuracy remains essential for evaluating machine learning models, a comprehensive approach is needed to fully assess performance. This includes additional metrics for fairness, transparency, and real-world applicability, along with rigorous testing across diverse datasets to identify and address biases. This holistic evaluation approach reflects the field’s growing awareness of real-world implications in deploying models.

#### Fairness

-Fairness in machine learning models is a multifaceted and critical aspect that requires careful attention, particularly in high-stakes applications that significantly affect people's lives, such as in loan approval processes, hiring, and criminal justice. It refers to the equitable treatment of all individuals, irrespective of their demographic or social attributes such as race, gender, age, or socioeconomic status.

-Simply relying on accuracy can be insufficient and potentially misleading when evaluating models. For instance, consider a loan approval model with a 95% accuracy rate. While this figure may appear impressive at first glance, it does not reveal how the model performs across different demographic groups. If this model consistently discriminates against a particular group, its accuracy is less commendable, and its fairness is questioned.
+Fairness in machine learning involves ensuring that models perform consistently across diverse groups, especially in high-impact applications like loan approvals, hiring, and criminal justice. Relying solely on accuracy can be misleading if the model exhibits biased outcomes across demographic groups. For example, a loan approval model with high accuracy may still consistently deny loans to certain groups, raising questions about its fairness.

-Discrimination can manifest in various forms, such as direct discrimination, where a model explicitly uses sensitive attributes like race or gender in its decision-making process, or indirect discrimination, where seemingly neutral variables correlate with sensitive attributes, indirectly influencing the model's outcomes. An infamous example of the latter is the COMPAS tool used in the US criminal justice system, which exhibited racial biases in predicting recidivism rates despite not explicitly using race as a variable.
+Bias in models can arise directly, when sensitive attributes like race or gender influence decisions, or indirectly, when neutral features correlate with these attributes, affecting outcomes. Because an aggregate accuracy figure cannot reveal these patterns, a model's behavior must also be examined group by group. A well-known example is the COMPAS tool used in the US criminal justice system, which showed racial biases in predicting recidivism despite not explicitly using race as a variable. 
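To make the group-level evaluation point concrete, below is a minimal sketch in Python. The labels, predictions, and group assignments are made up for illustration, and the loan-approval framing is hypothetical; in practice these arrays would come from a held-out evaluation set. The sketch computes overall accuracy, per-group accuracy, per-group approval rates, and a simple disparate-impact-style ratio, the kind of quantity that fairness metrics such as demographic parity formalize.

```python
import numpy as np

# Made-up evaluation data for a hypothetical loan-approval classifier:
# 1 = approve, 0 = deny; "group" is a sensitive attribute used only for auditing.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0])
group = np.array(["A"] * 8 + ["B"] * 8)

print(f"Overall accuracy: {(y_true == y_pred).mean():.2f}")

# Per-group accuracy and approval rates can differ even when the aggregate looks fine.
approval_rates = {}
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    approval_rates[g] = y_pred[mask].mean()
    print(f"Group {g}: accuracy = {acc:.2f}, approval rate = {approval_rates[g]:.2f}")

# Disparate-impact-style ratio: lowest approval rate divided by highest.
# A common (informal) rule of thumb flags ratios below 0.8.
ratio = min(approval_rates.values()) / max(approval_rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
```

On this toy data both groups have the same accuracy, yet their approval rates differ by a factor of two, which is exactly the kind of disparity that an aggregate accuracy number never surfaces.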
-Addressing fairness involves careful examination of the model's performance across diverse groups, identifying potential biases, and rectifying disparities through corrective measures such as re-balancing datasets, adjusting model parameters, and implementing fairness-aware algorithms. Researchers and practitioners continuously develop metrics and methodologies tailored to specific use cases to evaluate fairness in real-world scenarios. For example, disparate impact analysis, demographic parity, and equal opportunity are some of the metrics employed to assess fairness. +Addressing fairness requires analyzing a model’s performance across groups, identifying biases, and applying corrective measures like re-balancing datasets or using fairness-aware algorithms. Researchers and practitioners continuously develop metrics and methodologies tailored to specific use cases to evaluate fairness in real-world scenarios. For example, disparate impact analysis, demographic parity, and equal opportunity are some of the metrics employed to assess fairness. Additionally, transparency and interpretability of models are fundamental to achieving fairness. Tools like [AI Fairness 360](https://ai-fairness-360.org/) and [Fairness Indicators](https://www.tensorflow.org/tfx/guide/fairness_indicators) help explain how a model makes decisions, allowing developers to detect and correct fairness issues in machine learning models. -Additionally, transparency and interpretability of models are fundamental to achieving fairness. Understanding how a model makes decisions can reveal potential biases and enable stakeholders to hold developers accountable. Open-source tools like [AI Fairness 360](https://ai-fairness-360.org/) by IBM and [Fairness Indicators](https://www.tensorflow.org/tfx/guide/fairness_indicators) by TensorFlow are being developed to facilitate fairness assessments and mitigation of biases in machine learning models. - -Ensuring fairness in machine learning models, particularly in applications that significantly impact people's lives, requires rigorous evaluation of the model's performance across diverse groups, careful identification and mitigation of biases, and implementation of transparency and interpretability measures. By comprehensively addressing fairness, we can work towards developing machine learning models that are equitable, just, and beneficial for society. +While accuracy is a valuable metric, it doesn’t always provide the full picture; assessing fairness ensures models are effective across real-world scenarios. Ensuring fairness in machine learning models, particularly in applications that significantly impact people's lives, requires rigorous evaluation of the model's performance across diverse groups, careful identification and mitigation of biases, and implementation of transparency and interpretability measures. #### Complexity ##### Parameters -In the initial stages of machine learning, model benchmarking often relied on parameter counts as a proxy for model complexity. The rationale was that more parameters typically lead to a more complex model, which should, in turn, deliver better performance. However, this approach has proven inadequate as it needs to account for the computational cost associated with processing many parameters. - -For example, GPT-3, developed by OpenAI, is a language model that boasts an astounding 175 billion parameters. 
While it achieves state-of-the-art performance on various natural language processing tasks, its size and the computational resources required to run it make it impractical for deployment in many real-world scenarios, especially those with limited computational capabilities. +In the initial stages of machine learning, model benchmarking often relied on parameter counts as a proxy for model complexity. The rationale was that more parameters typically lead to a more complex model, which should, in turn, deliver better performance. However, this approach overlooks the practical costs associated with processing large models. As parameter counts increase, so do the computational resources required, making such models impractical for deployment in real-world scenarios, particularly on devices with limited processing power. -Relying on parameter counts as a proxy for model complexity also fails to consider the model's efficiency. If optimized for efficiency, a model with fewer parameters might be just as effective, if not more so, than a model with a higher parameter count. For instance, MobileNets, developed by Google, is a family of models designed specifically for mobile and edge devices. They use depth-wise separable convolutions to reduce the number of parameters and computational costs while still achieving competitive performance. +Relying on parameter counts as a proxy for model complexity also fails to consider the model's efficiency. A well-optimized model with fewer parameters can often achieve comparable or even superior performance to a larger model. For instance, MobileNets, developed by Google, is a family of models designed specifically for mobile and edge devices. They used depth-wise separable convolutions to reduce parameter counts and computational demands while still maintaining strong performance. -In light of these limitations, the field has moved towards a more holistic approach to model benchmarking that considers parameter counts and other crucial factors such as floating-point operations per second (FLOPs), memory consumption, and latency. FLOPs, in particular, have emerged as an important metric as they provide a more accurate representation of the computational load a model imposes. This shift towards a more comprehensive approach to model benchmarking reflects a recognition of the need to balance performance with practicality, ensuring that models are effective, efficient, and deployable in real-world scenarios. +In light of these limitations, the field has moved towards a more holistic approach to model benchmarking that considers parameter counts and other crucial factors such as floating-point operations per second (FLOPs), memory consumption, and latency. This comprehensive approach balances performance with deployability, ensuring that models are not only accurate but also efficient and suitable for real-world applications. ##### FLOPS -The size of a machine learning model is an essential aspect that directly impacts its usability in practical scenarios, especially when computational resources are limited. Traditionally, the number of parameters in a model was often used as a proxy for its size, with the underlying assumption being that more parameters would translate to better performance. However, this simplistic view does not consider the computational cost of processing these parameters. 
This is where the concept of floating-point operations per second (FLOPs) comes into play, providing a more accurate representation of the computational load a model imposes.
+FLOPs, or floating-point operations per second, have become a critical metric for representing a model’s computational load. Traditionally, parameter count was used as a proxy for model complexity, based on the assumption that more parameters would yield better performance. However, this approach overlooks the computational cost of processing these parameters, which can impact a model’s usability in real-world scenarios with limited resources.

FLOPs measure the number of floating-point operations a model performs to generate a prediction. A model with many FLOPs requires substantial computational resources to process the vast number of operations, which may render it impractical for certain applications. Conversely, a model with a lower FLOP count is more lightweight and can be easily deployed in scenarios where computational resources are limited. @fig-flops, from [@bianco2018benchmark], illustrates the trade-off between ImageNet accuracy, FLOPs, and parameter count, showing that some architectures achieve higher efficiency than others.

![A graph that depicts the top-1 ImageNet accuracy vs. the FLOP count of a model along with the model's parameter count. The figure shows an overall tradeoff between model complexity and accuracy, although some model architectures are more efficient than others. Source: @bianco2018benchmark.](images/png/model_FLOPS_VS_TOP_1.png){#fig-flops}

-Let's consider an example. BERT---Bidirectional Encoder Representations from Transformers [@devlin2018bert]---is a popular natural language processing model with over 340 million parameters, making it a large model with high accuracy and impressive performance across various tasks. 
However, the sheer size of BERT, coupled with its high FLOP count, makes it a computationally intensive model that may not be suitable for real-time applications or deployment on edge devices with limited computational capabilities. In light of this, there has been a growing interest in developing smaller models that can achieve similar performance levels to their larger counterparts while being more computationally efficient. DistilBERT, for instance, is a smaller version of BERT that retains 97% of its performance while being 40% smaller in terms of parameter count. The size reduction also translates to a lower FLOP count, making DistilBERT a more practical choice for resource-constrained scenarios.

-In summary, while parameter count provides a useful indication of model size, it is not a comprehensive metric as it needs to consider the computational cost associated with processing these parameters. FLOPs, on the other hand, offer a more accurate representation of a model's computational load and are thus an essential consideration when deploying machine learning models in real-world scenarios, particularly when computational resources are limited. The evolution from relying solely on parameter count to considering FLOPs signifies a maturation in the field, reflecting a greater awareness of the practical constraints and challenges of deploying machine learning models in diverse settings.
+While parameter count indicates model size, it does not fully capture the computational cost. FLOPs provide a more accurate measure of computational load, highlighting the practical trade-offs in model deployment. This shift from parameter count to FLOPs reflects the field’s growing awareness of deployment challenges in diverse settings.

##### Efficiency

From d050a58a22dd1cecf8a56fdabcf7504422abeee2 Mon Sep 17 00:00:00 2001
From: jasonjabbour
Date: Mon, 11 Nov 2024 16:26:54 -0500
Subject: [PATCH 8/9] re-drawn trifecta diagram

---
 contents/core/benchmarking/benchmarking.qmd |   2 +-
 .../images/png/benchmarking_trifecta.png    | Bin 0 -> 68501 bytes
 2 files changed, 1 insertion(+), 1 deletion(-)
 create mode 100644 contents/core/benchmarking/images/png/benchmarking_trifecta.png

diff --git a/contents/core/benchmarking/benchmarking.qmd b/contents/core/benchmarking/benchmarking.qmd
index bd2e436a..9a889ef0 100644
--- a/contents/core/benchmarking/benchmarking.qmd
+++ b/contents/core/benchmarking/benchmarking.qmd
@@ -768,7 +768,7 @@ Benchmarking the triad of system, model, and data in an integrated fashion will

@fig-benchmarking-trifecta illustrates the many potential ways in which data benchmarking, model benchmarking, and system infrastructure benchmarking can be combined. Exploring these intricate interactions is likely to uncover new optimization opportunities and enhancement capabilities. The data, model, and system benchmark triad offers a rich space for co-design and co-optimization.

-![Benchmarking trifecta.](images/png/trifecta.png){#fig-benchmarking-trifecta}
+![Benchmarking trifecta.](images/png/benchmarking_trifecta.png){#fig-benchmarking-trifecta}

While this integrated perspective represents an emerging trend, the field has much more to discover about the synergies and trade-offs between these components. As we iteratively benchmark combinations of data, models, and systems, new insights that remain hidden when these elements are studied in isolation will emerge. 
This multifaceted benchmarking approach charting the intersections of data, algorithms, and hardware promises to be a fruitful avenue for major progress in AI, even though it is still in its early stages.

diff --git a/contents/core/benchmarking/images/png/benchmarking_trifecta.png b/contents/core/benchmarking/images/png/benchmarking_trifecta.png
new file mode 100644
index 0000000000000000000000000000000000000000..3b2b9e56791a264e5438ed8a57ed712363d27a40
GIT binary patch
literal 68501
zYL{^j3RjI;!2mo@ zf61PcE02)z2`n8|LPCjzn$)OtP<`sC>z%`ELKFPj=2&MkOKiJZep_x5zPxVD2{fL) zw-=V5;$vkkYI;9$56QeJVPTAd9&v$YsT-j_%1V?Zim$X$(aQ63V|!93l?b?&J6X0- zuS&eD=y~^e*I?qhjjM4#y=b1#gaLHrToQnS7ve>t9T-8%VYsnFz06xM@lxqJdLVw2s;1pnSbVI7XrVJx2)bLYh$W*FO{ zg5VM!`|UIX*a`BMA}m0}6MPddQidIuC8?N-ql}NjMGLe*@uq3Qzk~mx+c#Wc1*$=& zXXk^H?;8{T=wnN-F~#aFJyk-7l%6=HJpk~vI^0~}%YFdA8=SCqE3RtV0}`UTm*-=vBsUCzsPw&ES2xXh zfQmG`_5K#Chc}x?J6D3B$0FD6Vo9(Jt@#Z?zxKXQ;L~TNykkt~i+9{dvDmVAuGIxH zU6>V8RrL5RygKW{UuH{f;D= zJ#t2V7G9vKYyufL+LW7G+Yng}s1TK5>WIzL#VcpBEo<`o)I3%=auMVKrClM>ZKb+uA%kBE{H@(I_$HA!n&(zC%CzYSWK+pqC!{;SHiQ zM(L6Ax>{iHTD7QF9dS04MQ@&sN4(fI)iR%wNjBNpxP-;Ep8@rEaqtL1d?wES^1%Pg zqf1|L5E}yl%xd=!r7MjLnI*K0aUcg`btLI@ETa)1>1_zcWK`K7jOJdDt<0wzLvx}S zn@D3$i8IYe=J+@Q1?0yNkRMDy15+)(+zO^%4C8V}x+{Ys0h$v(MHqhMugeuaubh>22XQSM1M2V9PQnxuJ!_6^=j%5I$Clgfs9Y0^&w3%o zspT}E=!-w;O`sQG$`OH1do|0!n-HRO`0o;2?DPY)5cs6QO(lE>;wyRqJ_eb0o|%JW z*-@rN%IUFX`-7Bsh%X4r?fw^|>HX&`;0M-mO!KC9$IucH>zyR(r+24?c#RbLwULpJZFknSr@MR4JLw(;7Rj#{iH$u1^wMQ(MV#feWq zr5=Mv-HsQ&3S5S7Uz#QJk)u7er95l+`~OKXE9hz}fckwe#)ata1z6^Pq#q&H#nNW; z7f>gAMmNFG+2`{<$CFV);{*v){8be!)<^2j1BW;Ak<~iMDvnrkxggjUvl=^F34BDW zA569f*8?vYOUx+$m%j*r`2z$`o^zzP#i`LVTzEPoI-wO8U@-nL3Z295reV}3ZyyIQ zL(d@jxS^_%Ud7lBAp;<~A69dZih-6;v3f@hJyN1vXT2RjRO;>XN;(N~(0>yyxEKxB z8l`8xP0Ml`q*j*XnlcvKbNW3}i^T`1jsx50RMLMLU>WW}>rlcS!3#kJLpw0vDxNQW zBK1E)LYf_w9u=9HP^&sQFv9@ECNOnw&@nouzCduaPKLpm3M*bp$_z0ykQ#4;!YM(DGt4i4}V z>WODG=IYa2OwwBf1Q)30U-<_V(F7PDS?ykO4S`2_%S%$vjiMsty3C*ewrUIf#6UgJ zPyJKu)Tk(Vk&xf0uN+ZxvD29ZS)gLEOW>G5-I6Q*gC`SR_B-BeXLKdNVH0N5Bm6QG~^qqcS@IOxpWgPdp5zFdqcv@G2!PShAf9)=UAbM`A&`8rX z0KyvxmXY-ldGR9LS`Pw0Dd)=o_^O4V#GzO;@Z^;0ZVBCVtlAaiqs5Q+F!6nJ=D|t5 ziKD%+15|F12+dGEE!THlWyLEe-u%W$MEL1R#DQ(a?Xah$7QvSn!F&k}lkp zGGQ?Ug_^0%?Q7;Yj$CgII)f;7U{)byEn3EUGJE<8>L9YRbiPDr6RRthtwvhm{Ft9G zOF1US)ul1Y<22Ws4ptvzA+|;+8uO&-IQZgl(Y3wnC*rePOZt%t$r~i|A=3!MB=(={ zg^vfB&BN>iK5m&x|~5TiLMXR-}WMnJbrEbxLPgoQHxA!{hs3Ly~$q zgG`-j_0Oc%yD_vnINT}MQMjb{eAg@}GaGLAM7Z4bgbWT1K>}&L<^)XC=IX3?HKFX3 z_`7+j)E`>E)L2Fw47UnqvljtR)?-sc43#{}`Aw{$=gN`Au z#JU2Z_fK30MSQ1Ziz{eYS>q&vewczSEOre&o10>5!9n3&W;+5HrEI1Q(!4)*Xwm2k z)^;VNmL`{wZ=uxN0^jHlFb60E6c@C+W2hZjYW&JsqKDpx7?-o*0Q7xY= zy+{tU8EpG+(by;$8%KGi!XvTFK+fQc$f_U;6DqjavVp>fVEy}3^=I*K?7 z>28(FHx0Q+TjsKxw|>8k`I7;>(iM!`yAbgS*}a=-xj4E^+37s;WQ!~r>0T-(z9WuJy?y75uPOU#=y9D-A?V8C^t8^BzH8(oLs*Sw zc3P)6^c17S+kQTtfBz^8)^P;BpV;&(MmuYpa<7GW45vZp1zwzy)Tnq0R-RdOe+s?X zkuo3|bEw|VY;j6ndD895H?za>@k)7MzfxPVAHEsV)h05~?b<6bi_onbG4Sn^!WUCX zGh{TmWg|u968zN}mo-|BG`t$~9(8q*J@>1{2e1sl-LVO>C+wS9E&3^v|_r`a$2GsGDK#KEh#=P9MLkP+%eTgd42|K}$ zZGptE59n;vSmI5w2hw!5R%EAhq6RPP6y-Z%pBnlY3lSEJa~NE0v7;4iyFj0$mxdyQ zdlF5+^yX>?IiM;OT?CE~Q7_O>u!K(2ARlWIT{KV#PCYJ2e90`L)2HY2>vNRa4mdBZ~y_-FSQ7?wA$lFX%Dv?hg-M z6wd%p$fsZz?d^9n=K}&ZSUkz?uNDsAEZceCGnrbN`nQb21(s+eggKbXejpm8P@ey> zV1upQM%WIQ(Qr|?pwtBvETwl%*b3%QKr*wth;XA@GXlTop{NWo(m2M3~ zt0&)(YyIxV5~FsPjb+%s9HjXg;gZ#LA6nl<@KCE0hBqLl+>s@>ZVAe3u|uG6(s5ThBJIJtFN)4 ztX&}4DvB)`ufJ`>x1oQBo$g+lJYlq)A^U`Skwuh|XM^vpUN!wA1IdsSQeW($8@#d30~crMdZeZ;ygM>gBb0 zF6{0dG52T;e{RVz!^gCOtIczl@wsN9g;D@Y?Sm6s!9P@68E~eAJRF(-^gsz(v!vVA z#z!J-xjEXXKai$ZmemxJt9!#~*QMs|P8i+c!+nTjkX0zLGi!s3^u;h>Ve4DJ)fOF7 z$CyxP$PKMy9d|(K#%I`&pS_KLjFOczYq&vSN)1k;28W1^bHQ&At*Uh5P}^tnf}H7N8d{{hW|J%LIO*(in_U3K^S6XZ8 z9Im3eMZqfWAM2{N$7XS^ryak5l&fpJZ0aD|u%7`A@y=W*0x%^OBx@x(GkpysTYl@~ zD6SYaF#{{#1U-UVD|If$PKpzVzp^#94Xtk~xjqe_pTdAA;N za?k}gIQsC2T};;mzfe|&H}_N~=C`fbhUcHPj#F$}2s z@0^E^NOs?~(`D?}cuS7t9MYNKi$OlpiEFUTB&nyoijEsBgUa&qOYy( z&APAYG1?dLKaZ8LK-zQszMxp-J_Fq-5SD|DxPTlcroy?F^XH!;r~U!gH?20Tqsx{l 
z53vklFwma4%=WXgptKf_Y)lZPqS1RHx<)zAx*Iw(#m5a|8f@8;rHzn<@2gwpimadj zM6q>Byq-UBZDY5ee!^(jtWYy1Q$}{;atB!uMczF1+~O2+L{xZk^Nq`#<8LnP$8^ha zSZ}V}RfnM|-o6&9xHIk;{s&`Pp^`9@A4fl-Okv~D05PT7{J3&i_6t#mR;bHNU;Ns* zK$Jkba;k9C3ZLgT$e;L$jzct9cb#>2r+GH*)Q@V4rcx&-QEMGJ5RNK(Ob=Pwm2A=d zk8h!#NzmH1?^-i#VX>da`pMNaRIhl#;x5UM<T&X$%|c{C zwlcA6*lry4?MNU!Q@Kev2SYf@+PTG0ZrjgVfSaf+VCx&Zr_kgF_>4D}Cg#S)jGD*0 z6558p!*?Cg3ET?p@~iJ8jD3?Q{Zf9PfxM2IQ4#JAYfVRza*~ZP>#1SzwVFw7aPE

ef85U*-?cb#cPvXN?6Z927eNA^w+4#Pef14eLoT1%rjq#<$U%F8_Vwo^*bRJ@mGg>wTSiikT&GSeO~j1#0T7py$Jidl0GFY zC?$(8GC2d|I`@ohIIW7UY5G&oB4l&r?j_6zhS&WW*~%s*q*WWX^%)$_JAeqhA+Bg% z6OrPWvf%?w-n6@@oWns2ZUS}f(@AadRH)RMi{sb`?ucLaCTX_Ak}Mf<&tT$v`%9C# zn~7Zj)xgBW8>)G~6&S%Fnbe@`U=Cg)g_&K_el9NFGu0p5*gwt5{6n0DuB=7i08!+# z^s+{*9^ga~7<9GN=i}vC{`&H$MaqA-#>r;tcJhH zDs>@myNOV-|E2oAuAHIL&|jhbH^Kv+(yle@P7ek;&pTR_8gaz0W%^Z=zA0Uh#Xr3D zX8KF{mx9z3{Iy0FIGJb&Zunj!#geIvyj)6uKf#iILLYIIKhyZ%U|Tf{2iVyc{T1rV z2A{(yJ(8`T_L9VJDKk&(_ zMHQ4@zJMe6eKTdH^V*_C(8Xb=7# z#ZBE?kI~E%<$>F;+We`GBE8j!47{B}yBD|nAZB!21%A?Sw<{d4z$kc)99lsgy1Oj; zK;#&)XiGG>Xvny_FWuuQ-$bPnw?UHdp!wA3JtH;y6tq9U03P0H?RTDrldbaVwHAY6 z(Mef{{v?Krdu zaW8%t#D=h6Hj8tNQT(*;C8{&6hsH^;{4|xzyitKqgLnl&`jU zsL&+#Nq4euxlhkjEj_rRUn}%a@i;l>kDOA9d9^j7)H>xMSz zN2r)cSHPRU=>B@qka!I)M*o1{=B&3rOVizXYG=O;8^NeJXLix5=ihq8MXhfuC84#4 zlKCf{QQb2E(Z?JdBQ0<3Ilg1n5KCmc$*RoDP@YV&hBZYz1sN(~%%-9J?S+B90fUe^ zl9y`r7zvUK)@+Wuc}xTKIdU;YzkUTK@RbH$PiJ98QB3JO zvvu$3l*{A-)AqU+Q; z7b_oyWQhlaMKLCc(Q!;&^*ZXg7mBaFs1FF8T64^d|xE*Sc4~qE%5a^k#$@86mi8A61OEgiMVL zt?Mf&J$4-ugY+G-TBe0NhKd_uwM52wo(!i`r8RBwGY41XUI|_Fk{OSi;9j+Q@!Ghi z>G`fI2KV={Be*#4+1D*x4cBfPZ5>B4k>jHagD{1+|BI2)?^0BG2 z&YGW_KyL?D7L4T<14CWr-ej7r{pD&ZD>p;iQa(NJoR2*};Y~EF#nAMW)#{p;vGQUC zTb*^p7fTv9eKW?fW_LRH0b*ENdB^iAy3m=VUHOm9hp$I$?u$R6KLt*+%t9GbCEjrA z;^<1bk?bY&ZYTPcs^BLYUQ`RQTSpn8Ztci?4rj{L47U7nEHvv4(EuBp*WIDsdWWvN z8oX1ye-qQIvPS589|vD!S$nb3J}gzMIm9C!U`m3W;%q?#mKSfvkZbSf9dYY|=;aN^ z#L6y|tkTZs?RllzQ6-IVs;4mTI#UOoLKa_LknMk5_t&olb4@Pqb2X%#}3{`K))wYVzC9HC)3$i20b*BLWL7?SGT4KRc_AbhnI7vr;9r zh``2U0Yu9Sms!>>@W`zd)EF5m=z`S5)z&Ou6`Q^pa&Q?d@u4QT>(vdnC;sNSW-D?& z=4u=|x}0wmAA!28WxB!lA_$?q=|y%1%LO5LIcHcnAo4FVz8|*x@{3WOXS6FCMH16S zTo9~gTtXTwZtgfz*U8|!VYN7OhI=~ImYetrKV0z9I7K{H%-ZzkPAJ(fN)U(xcRx91 zW?5qAGS=|R%(zAE_ctihE9DHlBGIgA;>29QF^YO2;A3I=-hb^(+-_0)G|wVpX$>Tc zD05E0_=|FmuplF1$?rIFi-l5FQ>tPH{F8%0}G^f#- zlJ_Mg4<$dgG-Gz7`AK~w;KB-1j(+Ath~PHi>6@I25u%4dtSb!RHbc2T6uDPyUM30B zoz}*F)an-4Ajhzu3Vs5pE5%S^7;~KS4Q56A$;QUZU;AJln8dO3JRG z{?ZOh-3dSnw2tVBbiF*Q-v4(}v@QIglBhhrk9b?acY03)*Ep!LVYFRLD>qo6yMnL+ zA<`+Z>Kb`?{3;}8+NMtZSv!;)u5wVdlS&X=tx6X6w?xn^yQRi;Znx6DGO(zhCU5=M zW$PEqq4mr0A6S2yH}d3x{DCzQ_hTm^Emom{UbwCo!*K{&GLa&tH9I%!s2qwqA$~BH z)YT^(<~v=J+CrM$hqjDqeW$*$9V5F1}EiQQ3s|^qic2|fk zrN+EmsVr2$Hm7zL3(=jPgCtasT*+%6=2HCG6%G#Lm6EKKE@;r(V%9Cij(?~F#rvc8 z*}bsuZDQAf_*(^<@Fe=f0K5eh-DcF(1TxKW>FY#=_IrMbQy1d3TE6v;mj1FDZ`o#( z^^xbC<(za*1{}d@^R$e}->oPm(*@Q2P-j9yh z)14!sJ?qlOJXV_i8Nin4C z(gYf4?VhQ7UqW^(x6lW^)>R3DBvoQu{}vb}etHU8ySbkfF8bzoZmNiCicnqRY>0Tu zIfCeN*cNrE?q!Z=B_kFuHSJWjlsJQ=(r%_)K$|4iq3Y32 z0Y|wYWgWV|6T;bQ*0#2DT>w}`Xpv}uS}gE98k5ABY{(|PG5k$Z;{$oh#^|XWM$Ll% z2C<*d;=>}+EqJ&7?@rs=WK$?9YwC`x0FX?=-m^y8^8FOdHu}qN>-1_U&YOJj5L1s; z-N`VCQA>`S>M3Ev=LE+L0+~LVj(m_MV_HuH-HpDw8z)$A)_f~LzDt^{_u&SfD}GVN zN_#Xd^|q2w5tF^H_ni}W=ZK+_MkHy%97YWlVeGUo_Stp)rU6Yb#xJ_v`b7)qxLIft zjRYnwdW*lG6t!iH`pM!l2U8w*Z>ABXMPeK@pHSOmo>o##6Cd?X$ayZ+#N_iXQPiYX zC9_YmzJ;GdY_CelYp;9%IDaIEZuveJA3{oS@vlZEEi4ami-V;XliZf`(pUOzBi|I$ zpZZesjFV1i1)cY_x9L3w)+znJL?=kDW+Zk!O=NZnABba0$A5Jqj%}ujeH~>w5k_u2 zQ~#+%s7A373=vc@Ih};IMv9PI!tYIU|M(= z(Tn$^Jh402851-kXE3-y z_S@V#)fzV2JnBCfrlUP4FaCsi*_*c|U-=__r<<-nnrV9So+F}|gj2H~2~IUlyV8uI z^hpIGkP!qKra|85K7DOqc~G!Se{N*Z{tmZvT0`t$HB3KXmvjPRRyMBOh!WXtkry@F zJd}0Fr8F{UOhmne!}~)jFtn6v>hj9G78u`STW<21{S*>71-j=Gwox5@}$$o)SEI z^EtHr`eYOSl(41&f|Q*lP3B29t!Bp0pqxy>pwb?L5v4CC)2cZjr^P$h$OYf_K@+Wy zMaHns9B7T}F)}|H4?fcwHfw00ESj9@OZYCkHp6}b4~WQDn{#1t2`Y&fcCVMdnrbEf z5-8@j8DcG;32=f*^3Wp%8MsYX!=dbR3Vh{#M0r2F{7OZIV3PDGx)7SF`R+Gr+%7d7*Gn>EeO$8)`_J!Yn z9bKQPKe{d22fp_D8~>1R6<~sv5Jo@nrbIuVL`z`oe))4|zZVf_ou^)Rt}B+57^}DH 
zZ2azz%Y{GiJw^`8el5a1tMr4n9J}?a`7VY9X8B^lpI`r4efQuYc&3qj(xB+-AqdO$ zxMg|tX8C!uE;ZUoT-os*71v-hbQSsbcy(d2@3&h%-sQ=t#Qy4^PSINQKRpsx=&#^v zMs<6?K6U)P%l(_fz3%_HdgCxTc29`&Q4f#r)#1oG(kPiwjB&F0+35ZQ%BkxVb$F4% z#cu=|BAzdzB{00dRkk+Fd3w~GXHD*_S$cokf<0u8XS{ydy{Ttrnfm3PGH&&!d#52n zEPmeuCMNRx(cr9sbV{O=0d5X$Eo55{8drXH;dj$tWp6Pk_ z4SepKhbm`ry~m(C9j>60vnM9L(_>=2LQlF>G#?pSK6k;xIS8mM{=z?xeWiFbF(ee5cdAYAqX1+c42 zswf#)Tl%5d4i~Iven)3Uh?5iqcKJm}PyuV;5F?4AU98gt>PYzE?!c}Sln{~U^)fh> z%->I^kU1ZX0bw23wTgZ3^So{Wj(B=pA*LaOr-<|k*wsOCee}Gxj{%OjtQPqHf5`tw dZ)EE)+&z1n=vDqlaX8?oB&RN0FKzku{{Xyek1_xN literal 0 HcmV?d00001 From b483050903568395cf445fd750b7ef6e0a11067e Mon Sep 17 00:00:00 2001 From: Vijay Janapa Reddi Date: Mon, 11 Nov 2024 21:02:09 -0500 Subject: [PATCH 9/9] Updated bib --- contents/core/benchmarking/benchmarking.bib | 680 +++++++++++--------- 1 file changed, 360 insertions(+), 320 deletions(-) diff --git a/contents/core/benchmarking/benchmarking.bib b/contents/core/benchmarking/benchmarking.bib index 5923a4cb..c785dd69 100644 --- a/contents/core/benchmarking/benchmarking.bib +++ b/contents/core/benchmarking/benchmarking.bib @@ -1,414 +1,454 @@ %comment{This file was created with betterbib v5.0.11.} - @article{bianco2018benchmark, - author = {Bianco, Simone and Cadene, Remi and Celona, Luigi and Napoletano, Paolo}, - title = {Benchmark analysis of representative deep neural network architectures}, - journal = {IEEE access}, - volume = {6}, - pages = {64270--64277}, - year = {2018}, - publisher = {IEEE}, + doi = {10.1109/access.2018.2877890}, + pages = {64270--64277}, + source = {Crossref}, + volume = {6}, + author = {Bianco, Simone and Cadene, Remi and Celona, Luigi and Napoletano, Paolo}, + year = {2018}, + url = {https://doi.org/10.1109/access.2018.2877890}, + issn = {2169-3536}, + journal = {IEEE Access}, + publisher = {Institute of Electrical and Electronics Engineers (IEEE)}, + title = {Benchmark Analysis of Representative Deep Neural Network Architectures}, } @inproceedings{adolf2016fathom, - author = {Adolf, Robert and Rama, Saketh and Reagen, Brandon and Wei, Gu-yeon and Brooks, David}, - booktitle = {2016 IEEE International Symposium on Workload Characterization (IISWC)}, - doi = {10.1109/iiswc.2016.7581275}, - organization = {IEEE}, - pages = {1--10}, - publisher = {IEEE}, - source = {Crossref}, - title = {Fathom: {Reference} workloads for modern deep learning methods}, - url = {https://doi.org/10.1109/iiswc.2016.7581275}, - year = {2016}, - month = sep, + doi = {10.1109/iiswc.2016.7581275}, + pages = {1--10}, + source = {Crossref}, + author = {Adolf, Robert and Rama, Saketh and Reagen, Brandon and Wei, Gu-yeon and Brooks, David}, + year = {2016}, + month = sep, + url = {https://doi.org/10.1109/iiswc.2016.7581275}, + booktitle = {2016 IEEE International Symposium on Workload Characterization (IISWC)}, + publisher = {IEEE}, + title = {Fathom: reference workloads for modern deep learning methods}, + organization = {IEEE}, } @inproceedings{antol2015vqa, - author = {Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C. 
Lawrence and Parikh, Devi}, - bibsource = {dblp computer science bibliography, https://dblp.org}, - biburl = {https://dblp.org/rec/conf/iccv/AntolALMBZP15.bib}, - booktitle = {2015 IEEE International Conference on Computer Vision (ICCV)}, - doi = {10.1109/iccv.2015.279}, - pages = {2425--2433}, - publisher = {IEEE}, - timestamp = {Wed, 24 May 2017 01:00:00 +0200}, - title = {{VQA:} {Visual} Question Answering}, - url = {https://doi.org/10.1109/iccv.2015.279}, - year = {2015}, - source = {Crossref}, - month = dec, + doi = {10.1109/iccv.2015.279}, + pages = {2425--2433}, + source = {Crossref}, + author = {Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C. Lawrence and Parikh, Devi}, + year = {2015}, + month = dec, + url = {https://doi.org/10.1109/iccv.2015.279}, + booktitle = {2015 IEEE International Conference on Computer Vision (ICCV)}, + publisher = {IEEE}, + title = {VQA: Visual Question Answering}, + bibsource = {dblp computer science bibliography, https://dblp.org}, + biburl = {https://dblp.org/rec/conf/iccv/AntolALMBZP15.bib}, + timestamp = {Wed, 24 May 2017 01:00:00 +0200}, } @article{banbury2020benchmarking, - author = {Banbury, Colby R and Reddi, Vijay Janapa and Lam, Max and Fu, William and Fazel, Amin and Holleman, Jeremy and Huang, Xinyuan and Hurtado, Robert and Kanter, David and Lokhmotov, Anton and others}, - journal = {ArXiv preprint}, - title = {Benchmarking tinyml systems: {Challenges} and direction}, - url = {https://arxiv.org/abs/2003.04821}, - volume = {abs/2003.04821}, - year = {2020}, + url = {http://arxiv.org/abs/2003.04821v4}, + year = {2020}, + month = mar, + title = {Benchmarking TinyML Systems: Challenges and Direction}, + author = {Banbury, Colby R. and Reddi, Vijay Janapa and Lam, Max and Fu, William and Fazel, Amin and Holleman, Jeremy and Huang, Xinyuan and Hurtado, Robert and Kanter, David and Lokhmotov, Anton and Patterson, David and Pau, Danilo and Seo, Jae-sun and Sieracki, Jeff and Thakker, Urmish and Verhelst, Marian and Yadav, Poonam}, + primaryclass = {cs.PF}, + archiveprefix = {arXiv}, + journal = {ArXiv preprint}, + volume = {abs/2003.04821}, } @article{banbury2021mlperf, - title={Mlperf tiny benchmark}, - author={Banbury, Colby and Reddi, Vijay Janapa and Torelli, Peter and Holleman, Jeremy and Jeffries, Nat and Kiraly, Csaba and Montino, Pietro and Kanter, David and Ahmed, Sebastian and Pau, Danilo and others}, - journal={arXiv preprint arXiv:2106.07597}, - year={2021}, - url = {https://arxiv.org/pdf/2106.07597}, + url = {http://arxiv.org/abs/2106.07597v4}, + year = {2021}, + month = jun, + title = {MLPerf Tiny Benchmark}, + author = {Banbury, Colby and Reddi, Vijay Janapa and Torelli, Peter and Holleman, Jeremy and Jeffries, Nat and Kiraly, Csaba and Montino, Pietro and Kanter, David and Ahmed, Sebastian and Pau, Danilo and Thakker, Urmish and Torrini, Antonio and Warden, Peter and Cordaro, Jay and Guglielmo, Giuseppe Di and Duarte, Javier and Gibellini, Stephen and Parekh, Videet and Tran, Honson and Tran, Nhan and Wenxu, Niu and Xuesong, Xu}, + primaryclass = {cs.LG}, + archiveprefix = {arXiv}, + journal = {arXiv preprint arXiv:2106.07597}, } @article{beyer2020we, - author = {Beyer, Lucas and H\'enaff, Olivier J and Kolesnikov, Alexander and Zhai, Xiaohua and Oord, A\"aron van den}, - journal = {ArXiv preprint}, - title = {Are we done with imagenet?}, - url = {https://arxiv.org/abs/2006.07159}, - volume = {abs/2006.07159}, - year = {2020}, + url = {http://arxiv.org/abs/2006.07159v1}, + 
year = {2020}, + month = jun, + title = {Are we done with ImageNet?}, + author = {Beyer, Lucas and H\'enaff, Olivier J. and Kolesnikov, Alexander and Zhai, Xiaohua and van den Oord, A\"aron}, + primaryclass = {cs.CV}, + archiveprefix = {arXiv}, + journal = {ArXiv preprint}, + volume = {abs/2006.07159}, } @inproceedings{brown2020language, - author = {Brown, Tom B. and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel M. and Wu, Jeffrey and Winter, Clemens and Hesse, Christopher and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario}, - editor = {Larochelle, Hugo and Ranzato, Marc'Aurelio and Hadsell, Raia and Balcan, Maria-Florina and Lin, Hsuan-Tien}, - bibsource = {dblp computer science bibliography, https://dblp.org}, - biburl = {https://dblp.org/rec/conf/nips/BrownMRSKDNSSAA20.bib}, - booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual}, - timestamp = {Tue, 19 Jan 2021 00:00:00 +0100}, - title = {Language Models are Few-Shot Learners}, - url = {https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html}, - year = {2020}, + author = {Brown, Tom B. and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel M. and Wu, Jeffrey and Winter, Clemens and Hesse, Christopher and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario}, + editor = {Larochelle, Hugo and Ranzato, Marc'Aurelio and Hadsell, Raia and Balcan, Maria-Florina and Lin, Hsuan-Tien}, + bibsource = {dblp computer science bibliography, https://dblp.org}, + biburl = {https://dblp.org/rec/conf/nips/BrownMRSKDNSSAA20.bib}, + booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual}, + timestamp = {Tue, 19 Jan 2021 00:00:00 +0100}, + title = {Language Models are Few-Shot Learners}, + url = {https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html}, + year = {2020}, } @article{10.1145/3467017, -author = {Hooker, Sara}, -title = {The hardware lottery}, -year = {2021}, -issue_date = {December 2021}, -publisher = {Association for Computing Machinery}, -address = {New York, NY, USA}, -volume = {64}, -number = {12}, -issn = {0001-0782}, -url = {https://doi.org/10.1145/3467017}, -doi = {10.1145/3467017}, -abstract = {After decades of incentivizing the isolation of hardware, software, and algorithm development, the catalysts for closer collaboration are changing the paradigm.}, -journal = {Commun. 
ACM}, -month = nov, -pages = {58-65}, -numpages = {8} + number = {12}, + doi = {10.1145/3467017}, + pages = {58--65}, + source = {Crossref}, + volume = {64}, + author = {Hooker, Sara}, + year = {2021}, + month = nov, + url = {https://doi.org/10.1145/3467017}, + issn = {0001-0782,1557-7317}, + journal = {Communications of the ACM}, + publisher = {Association for Computing Machinery (ACM)}, + title = {The hardware lottery}, + issue_date = {December 2021}, + address = {New York, NY, USA}, + abstract = {After decades of incentivizing the isolation of hardware, software, and algorithm development, the catalysts for closer collaboration are changing the paradigm.}, + numpages = {8}, } @inproceedings{chu2021discovering, - author = {Chu, Grace and Arikan, Okan and Bender, Gabriel and Wang, Weijun and Brighton, Achille and Kindermans, Pieter-Jan and Liu, Hanxiao and Akin, Berkin and Gupta, Suyog and Howard, Andrew}, - bibsource = {dblp computer science bibliography, https://dblp.org}, - biburl = {https://dblp.org/rec/conf/cvpr/ChuABWBKLAG021.bib}, - booktitle = {2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)}, - doi = {10.1109/cvprw53098.2021.00337}, - pages = {3022--3031}, - publisher = {IEEE}, - timestamp = {Mon, 18 Jul 2022 01:00:00 +0200}, - title = {Discovering Multi-Hardware Mobile Models via Architecture Search}, - url = {https://doi.org/10.1109/cvprw53098.2021.00337}, - year = {2021}, - source = {Crossref}, - month = jun, + doi = {10.1109/cvprw53098.2021.00337}, + pages = {3016--3025}, + source = {Crossref}, + author = {Chu, Grace and Arikan, Okan and Bender, Gabriel and Wang, Weijun and Brighton, Achille and Kindermans, Pieter-Jan and Liu, Hanxiao and Akin, Berkin and Gupta, Suyog and Howard, Andrew}, + year = {2021}, + month = jun, + url = {https://doi.org/10.1109/cvprw53098.2021.00337}, + booktitle = {2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)}, + publisher = {IEEE}, + title = {Discovering Multi-Hardware Mobile Models via Architecture Search}, + bibsource = {dblp computer science bibliography, https://dblp.org}, + biburl = {https://dblp.org/rec/conf/cvpr/ChuABWBKLAG021.bib}, + timestamp = {Mon, 18 Jul 2022 01:00:00 +0200}, } @article{coleman2017dawnbench, - author = {Coleman, Cody and Kang, Daniel and Narayanan, Deepak and Nardi, Luigi and Zhao, Tian and Zhang, Jian and Bailis, Peter and Olukotun, Kunle and R\'e, Chris and Zaharia, Matei}, - doi = {10.1145/3352020.3352024}, - issn = {0163-5980}, - journal = {ACM SIGOPS Operating Systems Review}, - number = {1}, - pages = {14--25}, - publisher = {Association for Computing Machinery (ACM)}, - source = {Crossref}, - title = {Analysis of {DAWNBench,} a Time-to-Accuracy Machine Learning Performance Benchmark}, - url = {https://doi.org/10.1145/3352020.3352024}, - volume = {53}, - year = {2019}, - month = jul, + number = {1}, + doi = {10.1145/3352020.3352024}, + pages = {14--25}, + source = {Crossref}, + volume = {53}, + author = {Coleman, Cody and Kang, Daniel and Narayanan, Deepak and Nardi, Luigi and Zhao, Tian and Zhang, Jian and Bailis, Peter and Olukotun, Kunle and R\'e, Chris and Zaharia, Matei}, + year = {2019}, + month = jul, + url = {https://doi.org/10.1145/3352020.3352024}, + issn = {0163-5980}, + journal = {ACM SIGOPS Operating Systems Review}, + publisher = {Association for Computing Machinery (ACM)}, + title = {Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark}, } -@inproceedings{coleman2022similarity, - author = 
{Coleman, Cody and Chou, Edward and Katz-Samuels, Julian and Culatana, Sean and Bailis, Peter and Berg, Alexander C. and Nowak, Robert D. and Sumbaly, Roshan and Zaharia, Matei and Yalniz, I. Zeki}, - bibsource = {dblp computer science bibliography, https://dblp.org}, - biburl = {https://dblp.org/rec/conf/aaai/ColemanCKCBBNSZ22.bib}, - booktitle = {Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022}, - pages = {6402--6410}, - publisher = {AAAI Press}, - timestamp = {Mon, 11 Jul 2022 01:00:00 +0200}, - title = {Similarity Search for Efficient Active Learning and Search of Rare Concepts}, - url = {https://ojs.aaai.org/index.php/AAAI/article/view/20591}, - year = {2022}, +@article{coleman2022similarity, + number = {6}, + doi = {10.1609/aaai.v36i6.20591}, + pages = {6402--6410}, + source = {Crossref}, + volume = {36}, + author = {Coleman, Cody and Chou, Edward and Katz-Samuels, Julian and Culatana, Sean and Bailis, Peter and Berg, Alexander C. and Nowak, Robert and Sumbaly, Roshan and Zaharia, Matei and Yalniz, I. Zeki}, + year = {2022}, + month = jun, + url = {https://doi.org/10.1609/aaai.v36i6.20591}, + issn = {2374-3468,2159-5399}, + journal = {Proceedings of the AAAI Conference on Artificial Intelligence}, + publisher = {Association for the Advancement of Artificial Intelligence (AAAI)}, + title = {Similarity Search for Efficient Active Learning and Search of Rare Concepts}, + bibsource = {dblp computer science bibliography, https://dblp.org}, + biburl = {https://dblp.org/rec/conf/aaai/ColemanCKCBBNSZ22.bib}, + booktitle = {Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022}, + timestamp = {Mon, 11 Jul 2022 01:00:00 +0200}, } @article{david2021tensorflow, - author = {David, Robert and Duke, Jared and Jain, Advait and Janapa Reddi, Vijay and Jeffries, Nat and Li, Jian and Kreeger, Nick and Nappier, Ian and Natraj, Meghna and Wang, Tiezhen and others}, - journal = {Proceedings of Machine Learning and Systems}, - pages = {800--811}, - title = {Tensorflow lite micro: {Embedded} machine learning for tinyml systems}, - volume = {3}, - year = {2021}, + author = {David, Robert and Duke, Jared and Jain, Advait and Janapa Reddi, Vijay and Jeffries, Nat and Li, Jian and Kreeger, Nick and Nappier, Ian and Natraj, Meghna and Wang, Tiezhen and others}, + journal = {Proceedings of Machine Learning and Systems}, + pages = {800--811}, + title = {Tensorflow lite micro: Embedded machine learning for tinyml systems}, + volume = {3}, + year = {2021}, } @article{davies2018loihi, - author = {Davies, Mike and Srinivasa, Narayan and Lin, Tsung-Han and Chinya, Gautham and Cao, Yongqiang and Choday, Sri Harsha and Dimou, Georgios and Joshi, Prasad and Imam, Nabil and Jain, Shweta and Liao, Yuyun and Lin, Chit-Kwan and Lines, Andrew and Liu, Ruokun and Mathaikutty, Deepak and McCoy, Steven and Paul, Arnab and Tse, Jonathan and Venkataramanan, Guruguhanathan and Weng, Yi-Hsin and Wild, Andreas and Yang, Yoonseok and Wang, Hong}, - doi = {10.1109/mm.2018.112130359}, - issn = {0272-1732, 1937-4143}, - journal = {IEEE Micro}, - number = {1}, - pages = 
{82--99}, - publisher = {Institute of Electrical and Electronics Engineers (IEEE)}, - source = {Crossref}, - title = {Loihi: {A} Neuromorphic Manycore Processor with On-Chip Learning}, - url = {https://doi.org/10.1109/mm.2018.112130359}, - volume = {38}, - year = {2018}, - month = jan, + number = {1}, + doi = {10.1109/mm.2018.112130359}, + pages = {82--99}, + source = {Crossref}, + volume = {38}, + author = {Davies, Mike and Srinivasa, Narayan and Lin, Tsung-Han and Chinya, Gautham and Cao, Yongqiang and Choday, Sri Harsha and Dimou, Georgios and Joshi, Prasad and Imam, Nabil and Jain, Shweta and Liao, Yuyun and Lin, Chit-Kwan and Lines, Andrew and Liu, Ruokun and Mathaikutty, Deepak and McCoy, Steven and Paul, Arnab and Tse, Jonathan and Venkataramanan, Guruguhanathan and Weng, Yi-Hsin and Wild, Andreas and Yang, Yoonseok and Wang, Hong}, + year = {2018}, + month = jan, + url = {https://doi.org/10.1109/mm.2018.112130359}, + issn = {0272-1732,1937-4143}, + journal = {IEEE Micro}, + publisher = {Institute of Electrical and Electronics Engineers (IEEE)}, + title = {Loihi: A Neuromorphic Manycore Processor with On-Chip Learning}, } @inproceedings{devlin2018bert, - author = {Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, - address = {Minneapolis, Minnesota}, - booktitle = {Proceedings of the 2019 Conference of the North}, - doi = {10.18653/v1/n19-1423}, - pages = {4171--4186}, - publisher = {Association for Computational Linguistics}, - title = {{BERT:} {Pre-training} of Deep Bidirectional Transformers for Language Understanding}, - url = {https://doi.org/10.18653/v1/n19-1423}, - year = {2019}, - source = {Crossref}, + doi = {10.18653/v1/n19-1423}, + source = {Crossref}, + author = {Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, + year = {2019}, + url = {https://doi.org/10.18653/v1/n19-1423}, + booktitle = {Proceedings of the 2019 Conference of the North}, + publisher = {Association for Computational Linguistics}, + title = {None}, + address = {Minneapolis, Minnesota}, + pages = {4171--4186}, } @article{gaviria2022dollar, - author = {Mattson, Peter and Reddi, Vijay Janapa and Cheng, Christine and Coleman, Cody and Diamos, Greg and Kanter, David and Micikevicius, Paulius and Patterson, David and Schmuelling, Guenther and Tang, Hanlin and Wei, Gu-Yeon and Wu, Carole-Jean}, - doi = {10.1109/mm.2020.2974843}, - issn = {0272-1732, 1937-4143}, - journal = {IEEE Micro}, - number = {2}, - pages = {8--16}, - publisher = {Institute of Electrical and Electronics Engineers (IEEE)}, - source = {Crossref}, - title = {{MLPerf:} {An} Industry Standard Benchmark Suite for Machine Learning Performance}, - url = {https://doi.org/10.1109/mm.2020.2974843}, - volume = {40}, - year = {2020}, - month = mar, + number = {2}, + doi = {10.1109/mm.2020.2974843}, + pages = {8--16}, + source = {Crossref}, + volume = {40}, + author = {Mattson, Peter and Reddi, Vijay Janapa and Cheng, Christine and Coleman, Cody and Diamos, Greg and Kanter, David and Micikevicius, Paulius and Patterson, David and Schmuelling, Guenther and Tang, Hanlin and Wei, Gu-Yeon and Wu, Carole-Jean}, + year = {2020}, + month = mar, + url = {https://doi.org/10.1109/mm.2020.2974843}, + issn = {0272-1732,1937-4143}, + journal = {IEEE Micro}, + publisher = {Institute of Electrical and Electronics Engineers (IEEE)}, + title = {MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance}, } @inproceedings{hendrycks2021natural, - author = {Hendrycks, Dan and Zhao, Kevin and Basart, 
Steven and Steinhardt, Jacob and Song, Dawn}, - bibsource = {dblp computer science bibliography, https://dblp.org}, - biburl = {https://dblp.org/rec/conf/cvpr/HendrycksZBSS21.bib}, - booktitle = {2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, - doi = {10.1109/cvpr46437.2021.01501}, - pages = {15262--15271}, - publisher = {IEEE}, - timestamp = {Mon, 18 Jul 2022 01:00:00 +0200}, - title = {Natural Adversarial Examples}, - url = {https://doi.org/10.1109/cvpr46437.2021.01501}, - year = {2021}, - source = {Crossref}, - month = jun, + doi = {10.1109/cvpr46437.2021.01501}, + pages = {15257--15266}, + source = {Crossref}, + author = {Hendrycks, Dan and Zhao, Kevin and Basart, Steven and Steinhardt, Jacob and Song, Dawn}, + year = {2021}, + month = jun, + url = {https://doi.org/10.1109/cvpr46437.2021.01501}, + booktitle = {2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, + publisher = {IEEE}, + title = {Natural Adversarial Examples}, + bibsource = {dblp computer science bibliography, https://dblp.org}, + biburl = {https://dblp.org/rec/conf/cvpr/HendrycksZBSS21.bib}, + timestamp = {Mon, 18 Jul 2022 01:00:00 +0200}, } @inproceedings{ignatov2018ai, - author = {Ignatov, Andrey and Timofte, Radu and Kulik, Andrei and Yang, Seungsoo and Wang, Ke and Baum, Felix and Wu, Max and Xu, Lirong and Van Gool, Luc}, - booktitle = {2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)}, - doi = {10.1109/iccvw.2019.00447}, - pages = {0--0}, - publisher = {IEEE}, - source = {Crossref}, - title = {{AI} Benchmark: {All} About Deep Learning on Smartphones in 2019}, - url = {https://doi.org/10.1109/iccvw.2019.00447}, - year = {2019}, - month = oct, + doi = {10.1109/iccvw.2019.00447}, + pages = {3617--3635}, + source = {Crossref}, + author = {Ignatov, Andrey and Timofte, Radu and Kulik, Andrei and Yang, Seungsoo and Wang, Ke and Baum, Felix and Wu, Max and Xu, Lirong and Van Gool, Luc}, + year = {2019}, + month = oct, + url = {https://doi.org/10.1109/iccvw.2019.00447}, + booktitle = {2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)}, + publisher = {IEEE}, + title = {AI Benchmark: All About Deep Learning on Smartphones in 2019}, } @inproceedings{kiela2021dynabench, - author = {Kiela, Douwe and Bartolo, Max and Nie, Yixin and Kaushik, Divyansh and Geiger, Atticus and Wu, Zhengxuan and Vidgen, Bertie and Prasad, Grusha and Singh, Amanpreet and Ringshia, Pratik and Ma, Zhiyi and Thrush, Tristan and Riedel, Sebastian and Waseem, Zeerak and Stenetorp, Pontus and Jia, Robin and Bansal, Mohit and Potts, Christopher and Williams, Adina}, - address = {Online}, - booktitle = {Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, - doi = {10.18653/v1/2021.naacl-main.324}, - pages = {4110--4124}, - publisher = {Association for Computational Linguistics}, - title = {Dynabench: {Rethinking} Benchmarking in {NLP}}, - url = {https://doi.org/10.18653/v1/2021.naacl-main.324}, - year = {2021}, - source = {Crossref}, + doi = {10.18653/v1/2021.naacl-main.324}, + source = {Crossref}, + author = {Kiela, Douwe and Bartolo, Max and Nie, Yixin and Kaushik, Divyansh and Geiger, Atticus and Wu, Zhengxuan and Vidgen, Bertie and Prasad, Grusha and Singh, Amanpreet and Ringshia, Pratik and Ma, Zhiyi and Thrush, Tristan and Riedel, Sebastian and Waseem, Zeerak and Stenetorp, Pontus and Jia, Robin and Bansal, Mohit and Potts, Christopher and Williams, Adina}, + 
year = {2021}, + url = {https://doi.org/10.18653/v1/2021.naacl-main.324}, + booktitle = {Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, + publisher = {Association for Computational Linguistics}, + title = {Dynabench: Rethinking Benchmarking in NLP}, + address = {Online}, + pages = {4110--4124}, } @inproceedings{koh2021wilds, - author = {Koh, Pang Wei and Sagawa, Shiori and Marklund, Henrik and Xie, Sang Michael and Zhang, Marvin and Balsubramani, Akshay and Hu, Weihua and Yasunaga, Michihiro and Phillips, Richard Lanas and Gao, Irena and Lee, Tony and David, Etienne and Stavness, Ian and Guo, Wei and Earnshaw, Berton and Haque, Imran S. and Beery, Sara M. and Leskovec, Jure and Kundaje, Anshul and Pierson, Emma and Levine, Sergey and Finn, Chelsea and Liang, Percy}, - editor = {Meila, Marina and Zhang, Tong}, - bibsource = {dblp computer science bibliography, https://dblp.org}, - biburl = {https://dblp.org/rec/conf/icml/KohSMXZBHYPGLDS21.bib}, - booktitle = {Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event}, - pages = {5637--5664}, - publisher = {PMLR}, - series = {Proceedings of Machine Learning Research}, - timestamp = {Tue, 13 Dec 2022 00:00:00 +0100}, - title = {{WILDS:} {A} Benchmark of in-the-Wild Distribution Shifts}, - url = {http://proceedings.mlr.press/v139/koh21a.html}, - volume = {139}, - year = {2021}, + author = {Koh, Pang Wei and Sagawa, Shiori and Marklund, Henrik and Xie, Sang Michael and Zhang, Marvin and Balsubramani, Akshay and Hu, Weihua and Yasunaga, Michihiro and Phillips, Richard Lanas and Gao, Irena and Lee, Tony and David, Etienne and Stavness, Ian and 0002, Wei Guo and Earnshaw, Berton and Haque, Imran S. and Beery, Sara M. and Leskovec, Jure and Kundaje, Anshul and Pierson, Emma and Levine, Sergey and Finn, Chelsea and Liang, Percy}, + title = {WILDS: A Benchmark of in-the-Wild Distribution Shifts.}, + journal = {ICML}, + pages = {5637--5664}, + year = {2021}, + url = {http://proceedings.mlr.press/v139/koh21a.html}, + source = {DBLP}, + editor = {Meila, Marina and Zhang, Tong}, + bibsource = {dblp computer science bibliography, https://dblp.org}, + biburl = {https://dblp.org/rec/conf/icml/KohSMXZBHYPGLDS21.bib}, + booktitle = {Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event}, + publisher = {PMLR}, + series = {Proceedings of Machine Learning Research}, + timestamp = {Tue, 13 Dec 2022 00:00:00 +0100}, + volume = {139}, } -@inproceedings{lin2014microsoft, - author = {Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll\'ar, Piotr and Zitnick, C Lawrence}, - booktitle = {Computer Vision{\textendash}ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13}, - organization = {Springer}, - pages = {740--755}, - title = {Microsoft coco: {Common} objects in context}, - year = {2014}, +@incollection{lin2014microsoft, + doi = {10.1007/978-3-319-10602-1\_48}, + pages = {740--755}, + source = {Crossref}, + author = {Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll\'ar, Piotr and Zitnick, C. 
Lawrence}, + year = {2014}, + isbn = {9783319106014,9783319106021}, + url = {https://doi.org/10.1007/978-3-319-10602-1\_48}, + issn = {0302-9743,1611-3349}, + booktitle = {Computer Vision -- ECCV 2014}, + publisher = {Springer International Publishing}, + title = {Microsoft COCO: Common Objects in Context}, + organization = {Springer}, } @inproceedings{lundberg2017unified, - author = {Lundberg, Scott M. and Lee, Su-In}, - editor = {Guyon, Isabelle and von Luxburg, Ulrike and Bengio, Samy and Wallach, Hanna M. and Fergus, Rob and Vishwanathan, S. V. N. and Garnett, Roman}, - bibsource = {dblp computer science bibliography, https://dblp.org}, - biburl = {https://dblp.org/rec/conf/nips/LundbergL17.bib}, - booktitle = {Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA}, - pages = {4765--4774}, - timestamp = {Thu, 21 Jan 2021 00:00:00 +0100}, - title = {A Unified Approach to Interpreting Model Predictions}, - url = {https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html}, - year = {2017}, + author = {Lundberg, Scott M. and Lee, Su-In}, + editor = {Guyon, Isabelle and von Luxburg, Ulrike and Bengio, Samy and Wallach, Hanna M. and Fergus, Rob and Vishwanathan, S. V. N. and Garnett, Roman}, + bibsource = {dblp computer science bibliography, https://dblp.org}, + biburl = {https://dblp.org/rec/conf/nips/LundbergL17.bib}, + booktitle = {Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA}, + pages = {4765--4774}, + timestamp = {Thu, 21 Jan 2021 00:00:00 +0100}, + title = {A Unified Approach to Interpreting Model Predictions}, + url = {https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html}, + year = {2017}, } @article{maass1997networks, - author = {Maass, Wolfgang}, - doi = {10.1016/s0893-6080(97)00011-7}, - issn = {0893-6080}, - journal = {Neural Networks}, - number = {9}, - pages = {1659--1671}, - publisher = {Elsevier BV}, - source = {Crossref}, - title = {Networks of spiking neurons: {The} third generation of neural network models}, - url = {https://doi.org/10.1016/s0893-6080(97)00011-7}, - volume = {10}, - year = {1997}, - month = dec, + number = {9}, + doi = {10.1016/s0893-6080(97)00011-7}, + pages = {1659--1671}, + source = {Crossref}, + volume = {10}, + author = {Maass, Wolfgang}, + year = {1997}, + month = dec, + url = {https://doi.org/10.1016/s0893-6080(97)00011-7}, + issn = {0893-6080}, + journal = {Neural Networks}, + publisher = {Elsevier BV}, + title = {Networks of spiking neurons: The third generation of neural network models}, } @article{mattson2020mlperf, - author = {Mattson, Peter and Reddi, Vijay Janapa and Cheng, Christine and Coleman, Cody and Diamos, Greg and Kanter, David and Micikevicius, Paulius and Patterson, David and Schmuelling, Guenther and Tang, Hanlin and Wei, Gu-Yeon and Wu, Carole-Jean}, - doi = {10.1109/mm.2020.2974843}, - issn = {0272-1732, 1937-4143}, - journal = {IEEE Micro}, - number = {2}, - pages = {8--16}, - publisher = {Institute of Electrical and Electronics Engineers (IEEE)}, - source = {Crossref}, - title = {{MLPerf:} {An} Industry Standard Benchmark Suite for Machine Learning Performance}, - url = {https://doi.org/10.1109/mm.2020.2974843}, - volume = {40}, - year = {2020}, - month = mar, + number = {2}, + doi = {10.1109/mm.2020.2974843}, + pages = {8--16}, + source 
= {Crossref}, + volume = {40}, + author = {Mattson, Peter and Reddi, Vijay Janapa and Cheng, Christine and Coleman, Cody and Diamos, Greg and Kanter, David and Micikevicius, Paulius and Patterson, David and Schmuelling, Guenther and Tang, Hanlin and Wei, Gu-Yeon and Wu, Carole-Jean}, + year = {2020}, + month = mar, + url = {https://doi.org/10.1109/mm.2020.2974843}, + issn = {0272-1732,1937-4143}, + journal = {IEEE Micro}, + publisher = {Institute of Electrical and Electronics Engineers (IEEE)}, + title = {MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance}, } @article{modha2023neural, - author = {Modha, Dharmendra S. and Akopyan, Filipp and Andreopoulos, Alexander and Appuswamy, Rathinakumar and Arthur, John V. and Cassidy, Andrew S. and Datta, Pallab and DeBole, Michael V. and Esser, Steven K. and Otero, Carlos Ortega and Sawada, Jun and Taba, Brian and Amir, Arnon and Bablani, Deepika and Carlson, Peter J. and Flickner, Myron D. and Gandhasri, Rajamohan and Garreau, Guillaume J. and Ito, Megumi and Klamo, Jennifer L. and Kusnitz, Jeffrey A. and McClatchey, Nathaniel J. and McKinstry, Jeffrey L. and Nakamura, Yutaka and Nayak, Tapan K. and Risk, William P. and Schleupen, Kai and Shaw, Ben and Sivagnaname, Jay and Smith, Daniel F. and Terrizzano, Ignacio and Ueda, Takanori}, - doi = {10.1126/science.adh1174}, - issn = {0036-8075, 1095-9203}, - journal = {Science}, - number = {6668}, - pages = {329--335}, - publisher = {American Association for the Advancement of Science (AAAS)}, - source = {Crossref}, - title = {Neural inference at the frontier of energy, space, and time}, - url = {https://doi.org/10.1126/science.adh1174}, - volume = {382}, - year = {2023}, - month = oct, + number = {6668}, + doi = {10.1126/science.adh1174}, + pages = {329--335}, + source = {Crossref}, + volume = {382}, + author = {Modha, Dharmendra S. and Akopyan, Filipp and Andreopoulos, Alexander and Appuswamy, Rathinakumar and Arthur, John V. and Cassidy, Andrew S. and Datta, Pallab and DeBole, Michael V. and Esser, Steven K. and Otero, Carlos Ortega and Sawada, Jun and Taba, Brian and Amir, Arnon and Bablani, Deepika and Carlson, Peter J. and Flickner, Myron D. and Gandhasri, Rajamohan and Garreau, Guillaume J. and Ito, Megumi and Klamo, Jennifer L. and Kusnitz, Jeffrey A. and McClatchey, Nathaniel J. and McKinstry, Jeffrey L. and Nakamura, Yutaka and Nayak, Tapan K. and Risk, William P. and Schleupen, Kai and Shaw, Ben and Sivagnaname, Jay and Smith, Daniel F. and Terrizzano, Ignacio and Ueda, Takanori}, + year = {2023}, + month = oct, + url = {https://doi.org/10.1126/science.adh1174}, + issn = {0036-8075,1095-9203}, + journal = {Science}, + publisher = {American Association for the Advancement of Science (AAAS)}, + title = {Neural inference at the frontier of energy, space, and time}, } @inproceedings{reddi2020mlperf, - author = {Reddi, Vijay Janapa and Cheng, Christine and Kanter, David and Mattson, Peter and Schmuelling, Guenther and Wu, Carole-Jean and Anderson, Brian and Breughe, Maximilien and Charlebois, Mark and Chou, William and Chukka, Ramesh and Coleman, Cody and Davis, Sam and Deng, Pan and Diamos, Greg and Duke, Jared and Fick, Dave and Gardner, J. Scott and Hubara, Itay and Idgunji, Sachin and Jablin, Thomas B. and Jiao, Jeff and John, Tom St. 
and Kanwar, Pankaj and Lee, David and Liao, Jeffery and Lokhmotov, Anton and Massa, Francisco and Meng, Peng and Micikevicius, Paulius and Osborne, Colin and Pekhimenko, Gennady and Rajan, Arun Tejusve Raghunath and Sequeira, Dilip and Sirasao, Ashish and Sun, Fei and Tang, Hanlin and Thomson, Michael and Wei, Frank and Wu, Ephrem and Xu, Lingjie and Yamada, Koichi and Yu, Bing and Yuan, George and Zhong, Aaron and Zhang, Peizhao and Zhou, Yuchen}, - booktitle = {2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA)}, - doi = {10.1109/isca45697.2020.00045}, - organization = {IEEE}, - pages = {446--459}, - publisher = {IEEE}, - source = {Crossref}, - title = {{MLPerf} Inference Benchmark}, - url = {https://doi.org/10.1109/isca45697.2020.00045}, - year = {2020}, - month = may, + doi = {10.1109/isca45697.2020.00045}, + pages = {446--459}, + source = {Crossref}, + author = {Reddi, Vijay Janapa and Cheng, Christine and Kanter, David and Mattson, Peter and Schmuelling, Guenther and Wu, Carole-Jean and Anderson, Brian and Breughe, Maximilien and Charlebois, Mark and Chou, William and Chukka, Ramesh and Coleman, Cody and Davis, Sam and Deng, Pan and Diamos, Greg and Duke, Jared and Fick, Dave and Gardner, J. Scott and Hubara, Itay and Idgunji, Sachin and Jablin, Thomas B. and Jiao, Jeff and John, Tom St. and Kanwar, Pankaj and Lee, David and Liao, Jeffery and Lokhmotov, Anton and Massa, Francisco and Meng, Peng and Micikevicius, Paulius and Osborne, Colin and Pekhimenko, Gennady and Rajan, Arun Tejusve Raghunath and Sequeira, Dilip and Sirasao, Ashish and Sun, Fei and Tang, Hanlin and Thomson, Michael and Wei, Frank and Wu, Ephrem and Xu, Lingjie and Yamada, Koichi and Yu, Bing and Yuan, George and Zhong, Aaron and Zhang, Peizhao and Zhou, Yuchen}, + year = {2020}, + month = may, + url = {https://doi.org/10.1109/isca45697.2020.00045}, + booktitle = {2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA)}, + publisher = {IEEE}, + title = {MLPerf Inference Benchmark}, + organization = {IEEE}, } @inproceedings{ribeiro2016should, - author = {Ribeiro, Marco Tulio and Singh, Sameer and Guestrin, Carlos}, - booktitle = {Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining}, - pages = {1135--1144}, - title = {{\textquotedblright} Why should i trust you?{\textquotedblright} Explaining the predictions of any classifier}, - year = {2016}, + author = {Ribeiro, Marco Tulio and Singh, Sameer and Guestrin, Carlos}, + booktitle = {Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining}, + pages = {1135--1144}, + title = {'' Why should i trust you?'' Explaining the predictions of any classifier}, + year = {2016}, } @article{schuman2022opportunities, - author = {Schuman, Catherine D. and Kulkarni, Shruti R. and Parsa, Maryam and Mitchell, J. Parker and Date, Prasanna and Kay, Bill}, - doi = {10.1038/s43588-021-00184-y}, - issn = {2662-8457}, - journal = {Nature Computational Science}, - number = {1}, - pages = {10--19}, - publisher = {Springer Science and Business Media LLC}, - source = {Crossref}, - title = {Opportunities for neuromorphic computing algorithms and applications}, - url = {https://doi.org/10.1038/s43588-021-00184-y}, - volume = {2}, - year = {2022}, - month = jan, + number = {1}, + doi = {10.1038/s43588-021-00184-y}, + pages = {10--19}, + source = {Crossref}, + volume = {2}, + author = {Schuman, Catherine D. and Kulkarni, Shruti R. 
and Parsa, Maryam and Mitchell, J. Parker and Date, Prasanna and Kay, Bill}, + year = {2022}, + month = jan, + url = {https://doi.org/10.1038/s43588-021-00184-y}, + issn = {2662-8457}, + journal = {Nature Computational Science}, + publisher = {Springer Science and Business Media LLC}, + title = {Opportunities for neuromorphic computing algorithms and applications}, } @article{warden2018speech, - author = {Warden, Pete}, - journal = {ArXiv preprint}, - title = {Speech commands: {A} dataset for limited-vocabulary speech recognition}, - url = {https://arxiv.org/abs/1804.03209}, - volume = {abs/1804.03209}, - year = {2018}, + url = {http://arxiv.org/abs/1804.03209v1}, + year = {2018}, + month = apr, + title = {Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}, + author = {Warden, Pete}, + primaryclass = {cs.CL}, + archiveprefix = {arXiv}, + journal = {ArXiv preprint}, + volume = {abs/1804.03209}, } @inproceedings{xie2020adversarial, - author = {Xie, Cihang and Tan, Mingxing and Gong, Boqing and Wang, Jiang and Yuille, Alan L. and Le, Quoc V.}, - bibsource = {dblp computer science bibliography, https://dblp.org}, - biburl = {https://dblp.org/rec/conf/cvpr/XieTGWYL20.bib}, - booktitle = {2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, - doi = {10.1109/cvpr42600.2020.00090}, - pages = {816--825}, - publisher = {IEEE}, - timestamp = {Tue, 13 Oct 2020 01:00:00 +0200}, - title = {Adversarial Examples Improve Image Recognition}, - url = {https://doi.org/10.1109/cvpr42600.2020.00090}, - year = {2020}, - source = {Crossref}, - month = jun, + doi = {10.1109/cvpr42600.2020.00090}, + source = {Crossref}, + author = {Xie, Cihang and Tan, Mingxing and Gong, Boqing and Wang, Jiang and Yuille, Alan L. and Le, Quoc V.}, + year = {2020}, + month = jun, + url = {https://doi.org/10.1109/cvpr42600.2020.00090}, + booktitle = {2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, + publisher = {IEEE}, + title = {Adversarial Examples Improve Image Recognition}, + bibsource = {dblp computer science bibliography, https://dblp.org}, + biburl = {https://dblp.org/rec/conf/cvpr/XieTGWYL20.bib}, + pages = {816--825}, + timestamp = {Tue, 13 Oct 2020 01:00:00 +0200}, } @article{xu2023demystifying, - author = {Xu, Hu and Xie, Saining and Tan, Xiaoqing Ellen and Huang, Po-Yao and Howes, Russell and Sharma, Vasu and Li, Shang-Wen and Ghosh, Gargi and Zettlemoyer, Luke and Feichtenhofer, Christoph}, - journal = {ArXiv preprint}, - title = {Demystifying {CLIP} Data}, - url = {https://arxiv.org/abs/2309.16671}, - volume = {abs/2309.16671}, - year = {2023}, + url = {http://arxiv.org/abs/2309.16671v4}, + year = {2023}, + month = sep, + title = {Demystifying CLIP Data}, + author = {Xu, Hu and Xie, Saining and Tan, Xiaoqing Ellen and Huang, Po-Yao and Howes, Russell and Sharma, Vasu and Li, Shang-Wen and Ghosh, Gargi and Zettlemoyer, Luke and Feichtenhofer, Christoph}, + primaryclass = {cs.CV}, + archiveprefix = {arXiv}, + journal = {ArXiv preprint}, + volume = {abs/2309.16671}, } -@misc{yik2023neurobench, - author = {Yik, Jason and Ahmed, Soikat Hasan and Ahmed, Zergham and Anderson, Brian and Andreou, Andreas G. 
and Bartolozzi, Chiara and Basu, Arindam and den Blanken, Douwe and Bogdan, Petrut and Bohte, Sander and Bouhadjar, Younes and Buckley, Sonia and Cauwenberghs, Gert and Corradi, Federico and de Croon, Guido and Danielescu, Andreea and Daram, Anurag and Davies, Mike and Demirag, Yigit and Eshraghian, Jason and Forest, Jeremy and Furber, Steve and Furlong, Michael and Gilra, Aditya and Indiveri, Giacomo and Joshi, Siddharth and Karia, Vedant and Khacef, Lyes and Knight, James C. and Kriener, Laura and Kubendran, Rajkumar and Kudithipudi, Dhireesha and Lenz, Gregor and Manohar, Rajit and Mayr, Christian and Michmizos, Konstantinos and Muir, Dylan and Neftci, Emre and Nowotny, Thomas and Ottati, Fabrizio and Ozcelikkale, Ayca and Pacik-Nelson, Noah and Panda, Priyadarshini and Pao-Sheng, Sun and Payvand, Melika and Pehle, Christian and Petrovici, Mihai A. and Posch, Christoph and Renner, Alpha and Sandamirskaya, Yulia and Schaefer, Clemens JS and van Schaik, Andr\'e and Schemmel, Johannes and Schuman, Catherine and Seo, Jae-sun and Sheik, Sadique and Shrestha, Sumit Bam and Sifalakis, Manolis and Sironi, Amos and Stewart, Kenneth and Stewart, Terrence C. and Stratmann, Philipp and Tang, Guangzhi and Timcheck, Jonathan and Verhelst, Marian and Vineyard, Craig M. and Vogginger, Bernhard and Yousefzadeh, Amirreza and Zhou, Biyan and Zohora, Fatima Tuz and Frenkel, Charlotte and Reddi, Vijay Janapa}, - archiveprefix = {arXiv}, - eprint = {2304.04640}, - primaryclass = {cs.AI}, - title = {{NeuroBench:} {Advancing} Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking}, - year = {2023}, +@article{yik2023neurobench, + url = {http://arxiv.org/abs/2304.04640v3}, + year = {2023}, + month = apr, + title = {NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems}, + author = {Yik, Jason and den Berghe, Korneel Van and den Blanken, Douwe and Bouhadjar, Younes and Fabre, Maxime and Hueber, Paul and Kleyko, Denis and Pacik-Nelson, Noah and Sun, Pao-Sheng Vincent and Tang, Guangzhi and Wang, Shenqi and Zhou, Biyan and Ahmed, Soikat Hasan and Joseph, George Vathakkattil and Leto, Benedetto and Micheli, Aurora and Mishra, Anurag Kumar and Lenz, Gregor and Sun, Tao and Ahmed, Zergham and Akl, Mahmoud and Anderson, Brian and Andreou, Andreas G. and Bartolozzi, Chiara and Basu, Arindam and Bogdan, Petrut and Bohte, Sander and Buckley, Sonia and Cauwenberghs, Gert and Chicca, Elisabetta and Corradi, Federico and de Croon, Guido and Danielescu, Andreea and Daram, Anurag and Davies, Mike and Demirag, Yigit and Eshraghian, Jason and Fischer, Tobias and Forest, Jeremy and Fra, Vittorio and Furber, Steve and Furlong, P. Michael and Gilpin, William and Gilra, Aditya and Gonzalez, Hector A. and Indiveri, Giacomo and Joshi, Siddharth and Karia, Vedant and Khacef, Lyes and Knight, James C. and Kriener, Laura and Kubendran, Rajkumar and Kudithipudi, Dhireesha and Liu, Yao-Hong and Liu, Shih-Chii and Ma, Haoyuan and Manohar, Rajit and Margarit-Taul\'e, Josep Maria and Mayr, Christian and Michmizos, Konstantinos and Muir, Dylan and Neftci, Emre and Nowotny, Thomas and Ottati, Fabrizio and Ozcelikkale, Ayca and Panda, Priyadarshini and Park, Jongkil and Payvand, Melika and Pehle, Christian and Petrovici, Mihai A. 
and Pierro, Alessandro and Posch, Christoph and Renner, Alpha and Sandamirskaya, Yulia and Schaefer, Clemens JS and van Schaik, Andr\'e and Schemmel, Johannes and Schmidgall, Samuel and Schuman, Catherine and Seo, Jae-sun and Sheik, Sadique and Shrestha, Sumit Bam and Sifalakis, Manolis and Sironi, Amos and Stewart, Matthew and Stewart, Kenneth and Stewart, Terrence C. and Stratmann, Philipp and Timcheck, Jonathan and T\"omen, Nergis and Urgese, Gianvito and Verhelst, Marian and Vineyard, Craig M. and Vogginger, Bernhard and Yousefzadeh, Amirreza and Zohora, Fatima Tuz and Frenkel, Charlotte and Reddi, Vijay Janapa}, + primaryclass = {cs.AI}, + archiveprefix = {arXiv}, + eprint = {2304.04640}, } @article{tschand2024mlperf, - title={MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from $\{$$\backslash$mu$\}$ Watts to MWatts for Sustainable AI}, - author={Tschand, Arya and Rajan, Arun Tejusve Raghunath and Idgunji, Sachin and Ghosh, Anirban and Holleman, Jeremy and Kiraly, Csaba and Ambalkar, Pawan and Borkar, Ritika and Chukka, Ramesh and Cockrell, Trevor and others}, - journal={arXiv preprint arXiv:2410.12032}, - year={2024} + url = {http://arxiv.org/abs/2410.12032v1}, + year = {2024}, + month = oct, + title = {MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from Microwatts to Megawatts for Sustainable AI}, + author = {Tschand, Arya and Rajan, Arun Tejusve Raghunath and Idgunji, Sachin and Ghosh, Anirban and Holleman, Jeremy and Kiraly, Csaba and Ambalkar, Pawan and Borkar, Ritika and Chukka, Ramesh and Cockrell, Trevor and Curtis, Oliver and Fursin, Grigori and Hodak, Miro and Kassa, Hiwot and Lokhmotov, Anton and Miskovic, Dejan and Pan, Yuechao and Manmathan, Manu Prasad and Raymond, Liz and John, Tom St. and Suresh, Arjun and Taubitz, Rowan and Zhan, Sean and Wasson, Scott and Kanter, David and Reddi, Vijay Janapa}, + primaryclass = {cs.AR}, + archiveprefix = {arXiv}, + journal = {arXiv preprint arXiv:2410.12032}, } \ No newline at end of file