fix: fix weird styling issue
natalieagus committed Apr 26, 2024
1 parent be7ab1a commit 8b320e2
Showing 9 changed files with 264 additions and 49 deletions.
2 changes: 1 addition & 1 deletion docs/Hardware/i_betacpu.md
@@ -21,7 +21,7 @@ Singapore University of Technology and Design
{: .no_toc}
[You can find the lecture video here.](https://youtu.be/4T9MR8BSzt0) You can also **click** on each header to bring you to the section of the video covering the subtopic.

## Learning Objectives
## Detailed Learning Objectives

1. **Understand Control Logic and CPU Instruction Handling**
- Learn how the Control Logic unit decodes the OPCODE of instructions and outputs appropriate control signals to manipulate the datapath for executing various instructions.
25 changes: 19 additions & 6 deletions docs/Software/i_betacpudiagnostics.md
@@ -20,13 +20,26 @@ Singapore University of Technology and Design
# Beta CPU Diagnostics
{: .no_toc}

## Learning Objectives
## Detailed Learning Objectives

* Handle synchronous and asynchronous interrupts.
* Analyze the Beta Datapath structure and function.
* Identify anomalies within the Beta CPU Architecture.
* Explore testing and troubleshooting techniques for the Beta CPU.
* Implement alternative corrective measures in a faulty Beta Datapath.

1. **Understand Interrupt Handling in Beta CPU**
- Learn about the role and types of interrupts in the Beta CPU, including synchronous (software-driven) and asynchronous (hardware-driven) interrupts.
- Examine how interrupts are sampled and processed within the CPU's control system to ensure timely and correct response to external and internal events.

2. **Diagnose Faults in the CPU Datapath**
- Develop skills in identifying and diagnosing faults within the Beta CPU's datapath using diagnostic software tools.
- Understand how to use simple test programs to isolate and identify specific faulty components within the CPU.

3. **Implement Fixes for Faulty Datapaths**
- Explore strategies for making code adjustments and changes to bypass or correct faulty components within the Beta CPU's architecture.
- Gain practical experience in altering CPU behavior through modifications in the control logic to handle specific types of errors or malfunctions.

4. **Enhance Knowledge of Beta CPU's Operational Details**
- Deepen understanding of how the Beta CPU processes and executes instructions by studying its control signals and datapath activity during normal operation and under fault conditions.
- Learn how different parts of the CPU interact during the execution of various types of instructions, focusing on the implications of these interactions for fault diagnosis and correction.

These objectives aim to equip students with the ability to not only understand the inner workings of the Beta CPU but also to effectively address and resolve issues that may arise during its operation, especially those related to the CPU's datapath and control mechanisms.

In this chapter, we'll focus on understanding and fixing problems in the Beta CPU, specifically in its datapath. We'll learn how to determine which part of the datapath might be faulty, using simple test programs to spot these issues, and figure out what code changes can help when parts of the system aren't working correctly. Our goal is to get to know the Beta CPU datapath well enough to work around faults whenever possible. We will also learn how to handle **interrupts** in the Beta datapath.
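The fault-isolation idea described above can be sketched in a few lines of Python. This is a toy model, not bsim: the functions (`run_add`, `diagnose`) and the stuck-signal names are invented for illustration. The point is the method: run a short test instruction whose expected result differs under each fault hypothesis, then match the observed result against the hypotheses.

```python
# Toy sketch: isolate a stuck control signal in a simplified Beta datapath
# by running ADD(R1, R2, R3) and comparing the result against each hypothesis.
# All names here (run_add, ASEL_stuck, ...) are illustrative, not real bsim APIs.

def alu(op, a, b):
    """Minimal ALU covering only the op our test program needs."""
    if op == "ADD":
        return (a + b) & 0xFFFFFFFF
    raise ValueError(op)

def run_add(a, b, fault=None):
    """Execute ADD on a toy datapath with an optional stuck control signal."""
    # An ASEL mux stuck the wrong way feeds 0 (a PC-derived value) instead of Reg[Ra]
    op_a = 0 if fault == "ASEL_stuck" else a
    # A BSEL mux stuck the wrong way feeds the literal field (0 here) instead of Reg[Rb]
    op_b = 0 if fault == "BSEL_stuck" else b
    return alu("ADD", op_a, op_b)

def diagnose(observed_runner):
    """Match the observed behaviour of ADD(R1,R2,R3) against each fault hypothesis."""
    a, b = 7, 5  # distinct operands, so ASEL and BSEL faults are distinguishable
    observed = observed_runner(a, b)
    for hypothesis in (None, "ASEL_stuck", "BSEL_stuck"):
        if run_add(a, b, fault=hypothesis) == observed:
            return hypothesis
    return "unknown fault"

# A datapath whose ASEL mux is stuck produces 0 + 5 = 5 instead of 12:
faulty = lambda a, b: run_add(a, b, fault="ASEL_stuck")
print(diagnose(faulty))  # ASEL_stuck
```

Choosing `a != b` matters: with equal operands, an ASEL fault and a BSEL fault would produce the same wrong answer and the test could not tell them apart.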

33 changes: 30 additions & 3 deletions docs/Software/j_assemblersandcompilers.md
@@ -23,9 +23,36 @@ Singapore University of Technology and Design

## Learning Objectives

* Explain the concept and purpose of software abstraction
* Write more complex programs using Beta Assembly language in bsim (Beta Simulator), utilising labels and macros
* Hand compile basic C code expressions (int, arrays, conditionals, loops) into Beta assembly language

1. **Understand the Concept of Abstraction in Software Engineering**
- Learn the definition and importance of abstraction in software engineering and computer science.
- Understand how abstraction helps in managing complexity by hiding unnecessary details and allowing focus on higher-level problems.

2. **Explore Software Tools for Abstraction**
- Examine different tools and software that provide layers of abstraction in computing, including assemblers, compilers, and interpreters.
- Understand the role of operating systems and applications in abstracting resource management, security details, and other underlying complexities.

3. **Learn About Language Abstraction Levels**
- Study the progression from machine language to high-level programming languages through assembly language, and how each level abstracts the complexity of the underlying hardware.
- Discover how language constructs like subroutines, modules, and polymorphism further abstract programming tasks.

4. **Understand Assemblers and Their Role in Programming**
- Define what an assembler is and how it functions as a primitive compiler to translate assembly language into machine language.
- Learn about UASM and its role in providing a symbolic representation for the Beta assembly language.

5. **Grasp the Functionality and Usage of UASM**
- Understand the anatomy of an assembler using the UASM example, and how UASM helps in translating symbolic language into binary.
- Learn about the various components of UASM files including basic values, symbols, labels, and macroinstructions.

6. **Differentiate Between Interpreters and Compilers**
- Compare and contrast interpreters and compilers in terms of how they execute high-level languages.
- Understand the trade-offs between these tools in terms of execution speed, error detection, and ease of debugging.

7. **Translate High-Level Constructs to Machine Language**
- Practice translating high-level language constructs, such as variable declarations, arrays, conditionals, and loops, into Beta machine language.
- Explore the strategies for optimizing the translation process to reduce instruction count and memory operations.

These objectives aim to equip students with a deep understanding of how software abstraction layers work to simplify programming and enhance the usability of computing systems. Students will gain practical skills in using assemblers and understanding the transformation of high-level constructs into executable machine code.
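To make objective 4 concrete, here is a sketch of the core of an assembler: turning a symbolic macro like `ADD(Ra, Rb, Rc)` into a 32-bit machine word. The field layout and opcode values follow the standard Beta ISA encoding (opcode in bits 31:26, Rc in 25:21, Ra in 20:16, Rb in 15:11 for OP-format); treat the exact numbers as illustrative rather than authoritative, and note that a real UASM also resolves labels and expressions, which this sketch omits.

```python
# Toy assembler sketch: encode Beta OP and OPC format instructions.
# Opcode values assume the standard Beta ISA; a real UASM does much more
# (labels, expressions, macros), this only shows the bit-packing step.

OPCODES = {"ADD": 0b100000, "SUB": 0b100001, "ADDC": 0b110000}

def assemble_op(mnemonic, ra, rb, rc):
    """Encode an OP-format instruction, e.g. ADD(Ra, Rb, Rc)."""
    return (OPCODES[mnemonic] << 26) | (rc << 21) | (ra << 16) | (rb << 11)

def assemble_opc(mnemonic, ra, literal, rc):
    """Encode an OPC-format instruction with a 16-bit two's-complement literal."""
    return (OPCODES[mnemonic] << 26) | (rc << 21) | (ra << 16) | (literal & 0xFFFF)

word = assemble_op("ADD", 1, 2, 3)  # ADD(R1, R2, R3)
print(f"0x{word:08X}")              # 0x80611000
```

Working backwards from such a word (extracting each field by shifting and masking) is exactly the disassembly exercise done when reading bsim memory dumps.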

## [Overview](https://www.youtube.com/watch?v=Hhq3RhZcngQ&t=49s)
The goal of this chapter is to help us understand how to improve the programmability of the $$\beta$$ (or any ISA in general). Each $$\beta$$ machine-language instruction is encoded into a 32-bit word.
40 changes: 34 additions & 6 deletions docs/Software/k_stackandprocedures.md
@@ -22,12 +22,40 @@ Singapore University of Technology and Design

## Learning Objectives

* Understand the necessity of functions and stacks for reusable program execution.
* Explain the role and functions of the linkage pointer, base pointer, stack pointer, and activation record.
* Describe the Beta procedure linkage convention in its entirety.
* Demonstrate the ability to draw the stack frame details of a procedure call.
* Analyze and draw the stack frame details of a recursive procedure call.
* Analyze and inspect a function stack paused during execution, accounting for each content.

1. **Understand Function Calls and Reusability**
- Learn why functions are essential for code reusability and organization.
- Understand the basic structure and operation of functions, including passing parameters, returning values, and the call and return mechanism.

2. **Explore the Procedure Linkage and Stack Mechanism**
- Study the procedure linkage concept to manage function calls and returns efficiently.
- Understand the role of the stack in function calls, particularly in managing local variables, parameters, and return addresses.

3. **Learn About Stack Operations**
- Understand how to use stack operations like PUSH and POP to manage function call contexts.
- Learn the importance of the stack pointer (SP), base pointer (BP), and linkage pointer (LP) in function execution.

4. **Procedure Linkage Convention**
- Explore the detailed procedure linkage convention, which dictates how functions should manage calling and returning from functions.
- Learn the sequence of operations a function caller and callee must perform to ensure the correct execution flow and state preservation.

5. **Stack Frame Management**
- Understand how to allocate and deallocate stack frames to manage local variables and function arguments.
- Study the impact of stack frame management on function calls, including nested calls and recursion.

6. **Implement Functions with Multiple Arguments**
- Learn how to handle functions with multiple arguments using the stack.
- Understand the sequence of stacking arguments in reverse order and cleaning up the stack after function execution.

7. **Debug and Manage Function Calls**
- Use debugging tools and techniques to inspect the call stack and understand function execution states.
- Learn about common pitfalls in function implementation, such as dangling references and stack mismanagement.

8. **Advanced Topics in Procedure Linkage**
- Explore advanced concepts like nested functions and non-local variables, and how they affect procedure linkage.
- Discuss the limitations of C and C++ in handling complex function structures compared to languages like Python.

These objectives aim to equip students with a comprehensive understanding of how functions are implemented at a low level, using the stack for memory management during function calls and returns. Students will also learn about the conventions and best practices in designing and debugging functions to write robust and maintainable code.
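The frame layout that objectives 4 and 5 describe can be simulated with a Python list standing in for memory. This is a sketch of the Beta procedure linkage convention, not bsim-accurate code: the caller pushes arguments in reverse order, the callee saves LP and the old BP, then points BP at the base of its own frame, so the first argument sits at BP-12, the second at BP-16, and so on.

```python
# Sketch of the Beta procedure linkage convention, modelling the stack as a
# Python list that grows toward higher addresses (one 32-bit word per slot).
# Not a bsim model; offsets follow the BP-12 / BP-16 argument convention.

class BetaStack:
    def __init__(self):
        self.mem, self.sp, self.bp = [], 0, None
    def push(self, val, label):
        self.mem.append((label, val))
        self.sp += 4                      # each slot is one 32-bit word
    def pop(self):
        self.sp -= 4
        return self.mem.pop()[1]

def call_f(stack, args, return_addr, saved_bp):
    # Caller: push arguments in reverse order; BR(f, LP) records the return address.
    for i, a in reversed(list(enumerate(args))):
        stack.push(a, f"arg{i}")
    # Callee entry sequence: PUSH(LP); PUSH(BP); MOVE(SP, BP)
    stack.push(return_addr, "LP")
    stack.push(saved_bp, "old BP")
    stack.bp = stack.sp
    # Inside f: argument i lives at address BP - 12 - 4*i
    def arg(i):
        word_index = (stack.bp - 12 - 4 * i) // 4
        return stack.mem[word_index][1]
    return arg(0), arg(1)

s = BetaStack()
print(call_f(s, args=[10, 20], return_addr=0x5C, saved_bp=0))  # (10, 20)
```

After the callee's entry sequence, the frame reads (from low to high addresses): last argument, first argument, saved LP, saved old BP, with BP pointing just above old BP. Drawing this layout by hand for a recursive call is exactly the exercise the objectives above ask for.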

## [Overview](https://www.youtube.com/watch?v=u4TETujaNuk&t=0s)

49 changes: 43 additions & 6 deletions docs/Software/l_memoryhierarchy.md
@@ -22,12 +22,49 @@ Singapore University of Technology and Design

## Learning Objectives

* Recognise the motivation behind the memory hierarchy
* Present the workings behind simple SRAM and DRAM technologies
* Compare and contrast the pros and cons between cache, physical/main memory (RAM) and secondary memory (disk)
* Explain the concept of locality of reference
* Explain the caching idea
* Identify two different cache designs, FA and DM, and justify the benefits and drawbacks of each

1. **Overview of Memory in Computing Systems**
- Explains the role of the REGFILE and external Memory Units in the $$\beta$$ CPU architecture.
- Discusses the limitations of a 32-bit address space, which can address up to 4GB of memory.

2. **Memory Technologies: SRAM and DRAM**
- Introduces Static Random-Access Memory (SRAM) and Dynamic Random-Access Memory (DRAM) as two primary technologies for creating memory cells.
- Describes the construction and operation of SRAM and DRAM, including the processes of reading from and writing to these memory types.
- Highlights the volatility of these memory types and the need for power to retain data.

3. **SRAM vs DRAM Characteristics**
- Outlines the differences in cost, speed, and capacity between SRAM and DRAM.
- Provides a detailed explanation of how data is accessed and managed in both SRAM and DRAM cells.

4. **Application of Memory Technologies**
- Discusses the practical use of SRAM and DRAM in modern computing, particularly in the context of consumer-grade PCs and CPU caches.
- Explains the role of the cache in enhancing processing speed by storing frequently accessed data closer to the CPU.

5. **Understanding Disk Storage**
- Examines the mechanics of hard disk drives (HDDs) and their structure, including tracks, sectors, and the function of the read/write head.
- Explains the non-volatile nature of disk storage and its implications for data retrieval and storage.

6. **The Memory Hierarchy Concept**
- Introduces the idea of a memory hierarchy to achieve an optimal balance between speed, cost, and capacity.
- Describes the roles of various storage types within this hierarchy, including registers, cache, main memory (RAM), and disk storage.

7. **Locality of Reference**
- Discusses the principle of locality of reference, which predicts that certain memory locations are more likely to be accessed repeatedly over short periods.
- Explains how this principle supports the effective use of cache memory.

8. **Cache Operation and Management**
- Details the operation of cache memory, including the concepts of cache hits and misses.
- Describes the processes involved when the cache does not contain requested data, including fetching data from main memory or disk.

9. **Cache Design Types: Fully Associative and Direct Mapped**
- Compares fully associative and direct mapped cache designs.
- Discusses the advantages and challenges associated with each design, including speed, cost, flexibility, and contention issues.

10. **Summary and Further Learning**
- Concludes with a summary of the key points discussed in the chapter.
- Points to additional resources and videos for extended learning on the topics covered.

These notes are designed to give students a thorough understanding of the critical components and concepts related to memory technologies in computing systems, emphasizing the practical applications and the importance of memory hierarchy in achieving efficient computing operations.
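The contrast in item 9 can be made concrete with two toy caches. This is a minimal sketch (word-granularity lines, no valid/dirty bits): a direct-mapped (DM) cache derives a single line index from the address, so two addresses sharing that index contend for one line, while a fully-associative (FA) cache compares the TAG against every line, so any address can occupy any line.

```python
# Toy sketch contrasting DM and FA cache lookups (word-granularity, no
# helper bits). WORDS is the number of lines in each cache.

WORDS = 8

class DirectMapped:
    def __init__(self):
        self.lines = [None] * WORDS            # each line holds (tag, data)
    def access(self, addr):
        index, tag = addr % WORDS, addr // WORDS
        hit = self.lines[index] is not None and self.lines[index][0] == tag
        if not hit:
            self.lines[index] = (tag, f"M[{addr}]")  # fetch on miss, evicting the old line
        return hit

class FullyAssociative:
    def __init__(self):
        self.lines = {}                        # tag (here the full address) -> data
    def access(self, addr):
        hit = addr in self.lines
        if not hit and len(self.lines) < WORDS:
            self.lines[addr] = f"M[{addr}]"
        return hit

dm, fa = DirectMapped(), FullyAssociative()
# Addresses 0 and 8 share DM index 0, so they keep evicting each other:
pattern = [0, 8, 0, 8]
print([dm.access(a) for a in pattern])  # [False, False, False, False]
print([fa.access(a) for a in pattern])  # [False, False, True, True]
```

The access pattern shows the contention problem directly: the DM cache never hits on this loop even though it has seven idle lines, while the FA cache hits after the first pass.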

## [Overview](https://www.youtube.com/watch?v=m5_u3sQ9bXo&t=0s)

42 changes: 36 additions & 6 deletions docs/Software/m_cacheissues.md
@@ -22,12 +22,42 @@ Singapore University of Technology and Design

## Learning Objectives

* Explain various cache design issues
* Explain the differences between fully-associative cache, direct-mapped cache, and n-way set-associative cache
* Evaluate various cache policies: write and replacement
* Synthesise basic caching algorithm in the event of HIT or MISS
* Recognise the differences between byte and word addressing
* Benchmark various cache designs

1. **Introduction to Cache Memory**
- Describes cache as a small, fast, and expensive memory unit made of SRAM, located near the CPU core. It's used to reduce the time and energy required for the CPU to access data from the main memory (RAM).

2. **Cache Design Issues**
- Discusses four primary design issues: Associativity, Replacement Policy, Block Size, and Write Policy.
- Associativity determines how many cache lines an address can be mapped to.
- Replacement policy decides which cache entry to replace on a cache miss.
- Block size determines the amount of data written to cache at one time.
- Write policy dictates when data is written from cache to main memory.

3. **Comparison of FA and DM Cache**
- Provides a detailed comparison of Fully Associative (FA) and Direct Mapped (DM) caches across various metrics such as TAG field, performance, cost, contention risk, and associativity.

4. **N-Way Set Associative Cache (NWSA)**
- Introduces NWSA as a hybrid design between FA and DM caches, aiming to reduce the contention problem seen in DM caches by introducing a degree of associativity.

5. **Cache Replacement Policies**
- Explains common strategies like Least Recently Used (LRU), Least Recently Replaced (LRR), and Random replacement, discussing their implications, overheads, and use cases.

6. **Cache Block Size**
- Discusses the trade-offs involved in determining the cache block size, emphasizing the balance between fetching enough data to utilize locality of reference and the risk of fetching unused data.

7. **Write Policies in Cache**
- Describes strategies such as Write-through, Write-back, and Write-behind, focusing on their operational differences and the implications for cache and main memory coherence.

8. **Helper Bits in Cache**
- Explains the role of helper bits like Valid and Dirty bits in managing cache operations and ensuring the integrity and efficiency of cache data management.

9. **Cache Operations**
- Details the caching algorithms for read/load and write/store requests, explaining how cache interacts with CPU requests and how data is managed within the cache and between the cache and main memory.

10. **Summary and Further Learning**
- Concludes with a summary of the key points discussed in the chapter and points to additional resources for extended learning.

These notes are designed to provide students with a comprehensive understanding of cache memory and its critical role in enhancing CPU performance by reducing access times to frequently used data. They cover theoretical aspects and practical implications of cache management, crucial for optimizing modern computing architectures.
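Items 4 and 5 combine naturally in one sketch: an N-way set-associative (NWSA) cache uses a DM-style set index plus an FA-style search within each set, and LRU decides which way to evict. The sketch below leans on Python's `OrderedDict` to keep LRU order; it is illustrative only and ignores block size, helper bits, and write policy.

```python
# Toy N-way set-associative cache with LRU replacement. Each set is an
# OrderedDict mapping tag -> data, kept in LRU order (least recent first).

from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=4, ways=2):
        self.num_sets, self.ways = num_sets, ways
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, addr):
        index, tag = addr % self.num_sets, addr // self.num_sets
        s = self.sets[index]
        if tag in s:
            s.move_to_end(tag)          # refresh LRU order on a hit
            return True
        if len(s) == self.ways:
            s.popitem(last=False)       # evict the least recently used way
        s[tag] = f"M[{addr}]"
        return False

cache = SetAssociativeCache(num_sets=4, ways=2)
# Addresses 0, 4, and 8 all map to set 0; with 2 ways, two can coexist:
print([cache.access(a) for a in [0, 4, 0, 4, 8, 0]])
# [False, False, True, True, False, False]
```

Note the last two accesses: bringing in address 8 evicts the LRU way (address 0), so the final access to 0 misses again. With a DM cache the same pattern would miss on every access; with more ways, the contention disappears entirely.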

## [Overview](https://www.youtube.com/watch?v=2OARjqLK4io&t=0s)

43 changes: 36 additions & 7 deletions docs/Software/n_virtualmemory.md
@@ -20,13 +20,42 @@ Singapore University of Technology and Design
{: .no_toc}
[You can find the lecture video here.](https://youtu.be/19wS4GC6mbQ) You can also **click** on each header to bring you to the section of the video covering the subtopic.

## Learning Objectives
* Describe how virtual memory works
* Explain simple page map design
* Calculate page map arithmetic given a page map specification
* Explain the role of the translation lookaside buffer
* Describe how demand paging works
* Illustrate the workings of context switching
* Analyse the benefits and drawbacks of context switching
## Detailed Learning Objectives

1. **Overview of Memory Segmentation**:
   - Discusses the segmentation of a process's memory into executable instructions, stack, and heap, explaining how each segment is utilized during the runtime of a process. The overview also clarifies the distinction between a program and a process: a process is a program in execution.

2. **Utilizing Swap Space**:
- Explains the necessity of swap space for managing multiple processes simultaneously, especially when the physical memory capacity (like 32GB) is exceeded by the requirements of running applications (like a 77GB game).

3. **Virtual Memory Definition**:
- Introduces virtual memory as a technique that abstracts storage resources to allow multiple processes to share limited physical storage seamlessly and to give the illusion of a large memory space.

4. **Memory Paging**:
- Describes memory paging as a scheme to efficiently transfer data between the disk and physical memory. It breaks down the terminology around pages, including how pages are addressed through Physical Page Number (PPN) and Page Offset (PO).

5. **Virtual Memory Mechanics**:
- Details the functionality of virtual memory, highlighting the role of the Memory Management Unit (MMU) in translating virtual addresses (VA) to physical addresses (PA) using page tables.

6. **Pagetable Function and Structure**:
- Explains the pagetable's role in mapping virtual addresses to physical locations, detailing the structure of pagetable entries including flags like dirty and resident bits, and their implications for memory management.

7. **Demand Paging**:
- Covers the concept of demand paging, where data is not loaded from disk to memory until necessary. It includes the handling of page faults by the OS Kernel, which loads data into physical memory on an as-needed basis.

8. **Translation Lookaside Buffer (TLB)**:
- Introduces the TLB as a cache for pagetable entries to speed up the translation process from virtual addresses to physical addresses, noting its high hit rate due to the locality of reference.

9. **Handling Page Faults and Paging Strategy**:
- Discusses how the OS handles page faults by loading the required pages from disk to RAM and describes strategies for replacing pages in memory when necessary, using policies like Least Recently Used (LRU).

10. **Context Switching**:
- Explains how modern CPUs use context switching to manage multiple processes, making it appear as if multiple applications are running simultaneously on a single processor.

11. **Using Cache with Virtual Memory**:
- Discusses configurations where the cache can be placed before or after the MMU, detailing the implications of each setup and how addresses are handled in each scenario.

The notes effectively synthesize complex concepts in virtual memory management, providing clarity on how processes interact with physical and virtual memory systems to enable efficient and secure multitasking in modern computers.
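The page map arithmetic behind items 4-6 can be worked through in a short sketch. Assume a toy machine with 12-bit virtual addresses and 256-byte pages, so the page offset (PO) is 8 bits and the VPN is 4 bits; the pagetable here, with its resident bits and PPN values, is entirely made up for illustration.

```python
# Toy MMU translation: VA = VPN | PO, pagetable maps VPN -> (resident, PPN).
# Assumes 256-byte pages (PO = 8 bits); all table contents are illustrative.

PO_BITS = 8

pagetable = {
    0b0000: (True,  0b11),   # VPN 0 resident in PPN 3
    0b0001: (False, None),   # VPN 1 is on disk: touching it page-faults
    0b0010: (True,  0b01),   # VPN 2 resident in PPN 1
}

def translate(va):
    vpn, po = va >> PO_BITS, va & ((1 << PO_BITS) - 1)
    resident, ppn = pagetable.get(vpn, (False, None))
    if not resident:
        # In a real system the MMU traps to the OS Kernel, which pages the
        # data in from disk (demand paging) and retries the access.
        raise RuntimeError(f"page fault at VA 0x{va:03X} (VPN {vpn})")
    return (ppn << PO_BITS) | po

print(hex(translate(0x0AB)))   # VPN 0 -> PPN 3, PO unchanged: 0x3ab
try:
    translate(0x1FF)           # VPN 1 is not resident
except RuntimeError as e:
    print(e)
```

Note that only the VPN is translated; the page offset passes through unchanged, which is why a TLB caching recent VPN-to-PPN mappings speeds up almost every access.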

## [Overview](https://www.youtube.com/watch?v=19wS4GC6mbQ&t=0s)

