feat: add summaries
natalieagus committed May 2, 2024
1 parent 6a69309 commit 13a1582
Showing 16 changed files with 229 additions and 33 deletions.
10 changes: 9 additions & 1 deletion docs/Hardware/a_basics-of-info.md
@@ -252,7 +252,15 @@ I_{8\rightarrow 3}(X) = \log_2 \left( \frac{8}{3} \right) = 1.42 \text{ bits}
## [Summary](https://www.youtube.com/watch?v=IicB30kA3pY&list=PLklpDKpv-EBj1agIq4vB1iB6ahMT8_2A_&index=1&t=2094s)
[You may want to watch the post lecture videos here.](https://youtu.be/UPIoYYLG718)

This chapter quickly summarises how we can represent integers using different number systems, in particular the binary number system, which is especially useful for our computers since they can only store information in terms of electrical voltages (representing simply strings of `1`s and `0`s). It touches on how digital devices use various number systems to process and store information efficiently. The use of 2's complement for handling signed numbers is critical in arithmetic operations. Understanding different encoding techniques is essential for interpreting data correctly across different systems. Additionally, concepts from information theory are applied to measure and manage data in computing, highlighting the importance of efficient data handling and storage in digital systems.

Here are the key points:

1. **Number Systems**: It discusses binary, decimal, octal, and hexadecimal number systems, emphasizing their use in encoding data in computers.
2. **2's Complement**: Explains how signed integers are represented using 2's complement, enabling the representation of negative numbers in binary form.
3. **Encoding Methods**: Describes various encoding methods, including fixed and variable length encodings, and character encodings like ASCII and Unicode.
4. **Information Theory**: Details how information can be quantified based on the probability of events, using logarithmic measures.
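The 2's complement and information-theory points above can be sketched in a few lines of Python (used here purely for illustration; the function names are ours, not the notes'):

```python
import math

def to_twos_complement(value, bits):
    """Encode a signed integer as an unsigned bit pattern of the given width."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value does not fit in the given width")
    return value & ((1 << bits) - 1)   # keep only the low `bits` bits

def from_twos_complement(pattern, bits):
    """Decode an unsigned bit pattern back into a signed integer."""
    if pattern >= (1 << (bits - 1)):   # MSB set: the value is negative
        pattern -= (1 << bits)
    return pattern

def information_bits(total, remaining):
    """I(X) = log2(total / remaining): bits gained when the number of
    equally likely outcomes narrows from `total` to `remaining`."""
    return math.log2(total / remaining)

print(format(to_twos_complement(-5, 8), "08b"))  # 11111011
print(from_twos_complement(0b11111011, 8))       # -5
print(round(information_bits(8, 3), 2))          # 1.42, as in the example above
```

The last line reproduces the $$I_{8\rightarrow 3}$$ computation from earlier in the chapter.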


Given $$X$$ bits,

12 changes: 11 additions & 1 deletion docs/Hardware/b_digitalabstraction.md
@@ -243,7 +243,17 @@ If both characteristics above aren't satisfied in the VTC curve, then it is <spa
## [Summary](https://www.youtube.com/watch?v=xkVIr8jrtX0&t=1290s)
[You may want to watch the post lecture videos here.](https://youtu.be/3OoeuqWDhns)

In this chapter, we have learned that the digital abstraction serves as the backbone for creating reliable digital systems, starting from the most basic components like MOSFETs. That is, we set some **contracts** (via the four voltage specifications) so that we can establish digital values out of real-valued voltages. The chapter also emphasizes the importance of **static discipline**. The static discipline in digital circuits serves as a set of guidelines that specify the **voltage levels** representing the binary states, ensuring reliable and clear signal interpretation. These guidelines help maintain the distinction between '0' and '1' states even in the presence of noise and other electrical variances, which is crucial for the proper functioning of digital systems.

Here are the key concepts:

1. **Static Discipline**: Guidelines that define voltage levels for binary states, crucial for ensuring clear and reliable digital signal processing.
2. **MOSFETs and Logic Gates**: Introduction to MOSFETs used to build logic gates, forming the basic building blocks of digital devices.
3. **Levels of Abstraction**: Describes how complex systems like CPUs and microcontrollers are built from simpler components, facilitating easier programming and system management.
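As a rough illustration of the static discipline, the Python sketch below classifies a received voltage against the four voltage specifications. The threshold values are placeholders we made up, not the course's actual specifications:

```python
# Hypothetical voltage specifications (volts). The four thresholds are the
# "contract" described above; the exact numbers are illustrative only.
V_OL, V_IL, V_IH, V_OH = 0.3, 0.9, 1.4, 2.0

def classify_input(v):
    """Interpret a received voltage under the static discipline."""
    if v <= V_IL:
        return "0"
    if v >= V_IH:
        return "1"
    return "forbidden"   # between V_IL and V_IH: no valid digital value

# Noise margins: how much noise a wire can absorb before a valid output
# voltage is misread at the next device's input.
noise_margin_low = V_IL - V_OL
noise_margin_high = V_OH - V_IH
```

The gap between `V_IL` and `V_IH` is the forbidden zone, and the two noise margins quantify how robust the '0' and '1' interpretations are against electrical noise.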


These components are used to construct logic gates, which then form more complex units and ultimately entire computer systems. This layered approach not only simplifies the design and development of digital devices but also ensures that even small components adhere to <span class="orange-bold">necessary</span> <span class="orange-bold">standards</span> to function correctly within the larger system.


In the next chapter, we will learn about the **MOSFET** (transistor), one of the smallest components (building blocks) that make up a digital device, and how we can use them to form proper combinational logic elements we call **gates**. These **gates** can be used to form even larger **combinational circuits** such as the **adder**, **shifter**, etc., and even larger ones such as the **Arithmetic Logic Unit** (you will build them in Labs 2 and 3).

11 changes: 9 additions & 2 deletions docs/Hardware/c_cmostechnology.md
@@ -316,9 +316,16 @@ Given the $$t_{pd}$$ and $$t_{cd}$$ for the NAND gate: $$t_{pd} = 4 ns$$, $$t_{c
## Summary
[You may want to watch the post lecture videos here.](https://youtu.be/cJxBlO5NMGs)

This chapter on CMOS technology delves into the fundamentals of using MOSFETs (Metal-Oxide-Semiconductor Field-Effect Transistors) to design combinational logic circuits. Here are the key points:

1. **MOSFETs Basics**: Explains the structure and operation of NFETs and PFETs, highlighting their roles in creating logic circuits.
2. **Complementary CMOS**: Discusses the use of complementary pairs of NFETs and PFETs to form stable, efficient logic gates.
3. **Logic Gates**: Describes how basic logic gates like NAND and NOR are formed using CMOS technology.
4. **Timing Specifications**: Covers critical timing aspects like propagation and contamination delays that affect circuit performance.

We elaborate on how CMOS technology underpins the design of efficient and reliable digital circuits. Through detailed discussions on MOSFETs and their applications, we illustrate how different types of MOSFETs (NFETs and PFETs) are used in tandem to ensure that digital logic circuits are both power-efficient and functionally reliable. Key concepts like the design of logic gates and the impact of timing delays on circuit performance are also explained, emphasizing the practical importance of these designs in modern electronics.

We begin the chapter by understanding how a MOSFET can be used as the most basic building block (element) in digital circuits. There are two types of FETs, namely the NFET and the PFET, that can be "activated" (switched on) or "deactivated" (switched off) using proper voltages supplied at the gate. It takes *time* for these FETs to produce a valid voltage value, e.g. reacting to the input voltage at the gate and establishing a (low or high) voltage value at the drain. Therefore, it is important to specify the *timing specifications* of a combinational logic device so that users know how long the device takes to *react* (to a new valid input, or to an invalid input).

{: .note}
Knowing how long the combinational device takes to react (at most) tells us how *often* (e.g: at what rate) can we supply new inputs to the device, and how fast the device can process/compute a *batch* of input values.
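Along a path through a circuit, per-gate delays accumulate. The Python sketch below adds up propagation and contamination delays for a chain of gates in series, using the NAND's $$t_{pd} = 4$$ ns from the example above and an *assumed* $$t_{cd}$$ of 1 ns (the notes' actual value is not shown here):

```python
# Per-gate timing in nanoseconds. t_pd = 4 ns is from the NAND example
# above; t_cd = 1 ns is an assumed placeholder value.
NAND = {"t_pd": 4, "t_cd": 1}

def path_delays(gates):
    """Overall propagation and contamination delay of gates wired in series:
    each gate adds its own delay to the path total."""
    t_pd = sum(g["t_pd"] for g in gates)
    t_cd = sum(g["t_cd"] for g in gates)
    return t_pd, t_cd

# Three NANDs in series:
print(path_delays([NAND, NAND, NAND]))  # (12, 3)
```

The overall $$t_{pd}$$ of a combinational device is set by its *slowest* (longest-delay) input-to-output path, so in practice one would take the maximum over all paths rather than a single chain.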
6 changes: 2 additions & 4 deletions docs/Hardware/d_logicsynthesis.md
@@ -462,11 +462,9 @@ For example, the Full Adder has 3 inputs (A, B, $$C_{in}$$), and 2 outputs ($$S$
## [Summary](https://www.youtube.com/watch?v=yXBAy432vT8&t=4421s)
[You may want to watch the post lecture videos here.](https://youtu.be/oo58e54SHjs)

Synthesizing combinational logic is not a simple task. There are many ways to realise a functionality, i.e. the **logic** (that the device should implement) represented by the truth table or boolean expression. We can use universal gates (only NANDs, or only NORs) or a combination of gates (INV, AND, and OR). We can start synthesizing from the truth table, constructing the sum of products, and then minimising the boolean expression. A Karnaugh map or the properties of boolean algebra can be used to simplify boolean expressions, resulting in fewer logic gates required in the end to synthesize the same logic unit.
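As a sketch of the first step, reading an (unminimised) sum-of-products expression off a truth table can be done mechanically: one AND term per row whose output is 1. The Python helper below is illustrative; its naming convention (A, B, C, ... with `'` for complement) is ours:

```python
from itertools import product

def sum_of_products(n_inputs, truth):
    """Build the unminimised sum-of-products string for a truth table.
    `truth` maps each input tuple to 0/1; variables are named A, B, C, ..."""
    names = [chr(ord("A") + i) for i in range(n_inputs)]
    terms = []
    for row in product((0, 1), repeat=n_inputs):
        if truth[row]:
            # One literal per variable: complemented (') when its bit is 0.
            literals = [v if bit else v + "'" for v, bit in zip(names, row)]
            terms.append("".join(literals))
    return " + ".join(terms)

# XOR of two inputs as a sum of products:
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(sum_of_products(2, xor))  # A'B + AB'
```

The resulting expression is correct but not minimal; minimisation (via a Karnaugh map or boolean algebra) is a separate step.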

We then touch on special combinational logic devices that are commonly used: the ROM, Multiplexer, and Decoder. We can utilise these to synthesize logic too; however, they tend to be more expensive and take up more space (they are physically larger). For instance, **hardcoding** a truth table using ROMs and multiplexers is convenient because we do not need to simplify the boolean expression of our truth table, which can get really difficult and complicated when the truth table is large (i.e. complicated functionality).

# Appendix

10 changes: 10 additions & 0 deletions docs/Hardware/e_sequentiallogic.md
@@ -384,8 +384,18 @@ You may want to watch the post lecture videos here:
* [Part 3: D-Flip Flop or Registers](https://youtu.be/X6kxFjAHkSw)
* [Part 4: Synchronisation](https://youtu.be/eK4JCv1oADo)


We begin by highlighting the crucial role of sequential logic in modern computing, where outputs depend not just on current but also on previous inputs. We elaborate on the use of flip-flops and latches as the fundamental elements that store data, making them indispensable in creating more complex memory structures. Additionally, we explain the timing constraints and synchronization mechanisms that ensure the reliable operation of sequential circuits, which are crucial for maintaining data integrity and system stability.

A sequential logic device is a type of digital circuit where the output not only depends on the current inputs but also on the **history** of inputs, storing information about past events. This behavior is achieved through the use of storage elements like flip-flops or latches. These devices are fundamental in creating memory and more complex processing units within digital systems, enabling the implementation of functions such as counters, shift registers, and state machines.

The topics covered include:

1. **Dynamic Discipline and Timing**: Explains the timing constraints necessary for stable sequential logic operations.
2. **Flip-Flops and Latches**: Describes various types of storage elements used in sequential circuits, essential for memory functions.
3. **Synchronization and Clocking**: Discusses the importance of synchronization in sequential logic to ensure that operations are executed in the correct sequence and timing.
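The storage-element idea in point 2 can be illustrated with a behavioural Python model of an edge-triggered D flip-flop (a sketch of the *behaviour* only, not how the course builds one from latches and gates):

```python
class DFlipFlop:
    """Positive-edge-triggered D flip-flop: the output Q takes the value of D
    only at the rising edge of the clock, and holds it otherwise."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, d, clk):
        if clk == 1 and self._prev_clk == 0:   # rising edge detected
            self.q = d                          # capture D
        self._prev_clk = clk
        return self.q                           # Q is held between edges

ff = DFlipFlop()
print(ff.tick(d=1, clk=0))  # 0: no edge yet, Q holds its initial value
print(ff.tick(d=1, clk=1))  # 1: rising edge, Q captures D
print(ff.tick(d=0, clk=1))  # 1: clock held high, Q ignores D and holds
```

This "output depends on stored state, not just current input" behaviour is exactly what distinguishes sequential from combinational logic.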


A **sequential** logic device has a *general* structure as shown below:

<img src="https://dropbox.com/s/7crg33w0e7yg2hn/Q1.png?raw=1" class="center_seventy" >
16 changes: 14 additions & 2 deletions docs/Hardware/f_fsm.md
@@ -379,7 +379,20 @@ Having fewer states will result in fewer bits to represent the states in the machi
## [FSM Limitations and Summary](https://www.youtube.com/watch?v=efLcdpqlAyI&t=2375s)
[You may want to watch the post lecture videos here.](https://youtu.be/XJQaAG9xLoI)

In this chapter, we have learned that we can build an FSM to compute many types of functions, such as implementing the *digital lock*. A finite state machine is a **mathematical** model of computation used to design both computer programs and sequential logic circuits. It is an abstract machine that is in exactly one of a finite number of states at any given time. FSMs model behavior in systems and are widely used in software engineering, especially for designing embedded systems, user interfaces, and protocols. They have the following characteristics:
1. **Finite Set of States**: An FSM consists of a finite number of states. At any given time, the machine is in one of these states.
2. **Initial State**: There is always one state designated as the initial state, where the machine starts operation.
3. **Input**: FSMs receive inputs that can trigger transitions from one state to another. The inputs are based on the application for which the FSM is designed.
4. **State Transitions**: The core functionality of an FSM is defined by its state transitions. Each transition specifies the movement from one state to another based on the current state and the input. These transitions are defined in a transition table that acts as a roadmap for the FSM.
5. **Outputs (Optional)**: FSMs can be classified into two types based on output behavior:
- **Moore Machine**: The output is determined solely by the current state, not dependent on the input.
- **Mealy Machine**: The output depends on both the current state and the input. This generally allows for more reactive outputs and can reduce the number of states needed.
6. **Deterministic Rules**: In deterministic FSMs, the exact next state is uniquely determined by the current state and input. There is no ambiguity in transition.
7. **Termination State (Optional)**: Some FSMs have designated final or accepting states, which indicate a stop in the process or a successful completion of the operation.
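The characteristics above can be made concrete with a small Moore-machine sketch of the digital lock in Python. The secret sequence (0-1-1), state names, and re-lock behaviour are assumptions for illustration, not the notes' exact lock:

```python
# Transition table (deterministic: one next state per (state, input) pair).
# States track how much of the assumed secret sequence 0-1-1 has been seen.
TRANSITIONS = {
    ("S0", 0): "S1", ("S0", 1): "S0",
    ("S1", 0): "S1", ("S1", 1): "S2",
    ("S2", 0): "S1", ("S2", 1): "S_unlocked",
    ("S_unlocked", 0): "S1", ("S_unlocked", 1): "S0",  # illustrative re-lock
}
# Moore machine: the output depends only on the current state.
OUTPUT = {"S0": 0, "S1": 0, "S2": 0, "S_unlocked": 1}

def run(inputs, state="S0"):
    """Feed a sequence of input bits through the FSM from the initial state."""
    for bit in inputs:
        state = TRANSITIONS[(state, bit)]
    return OUTPUT[state]

print(run([0, 1, 1]))  # 1: the correct sequence unlocks
print(run([1, 1, 0]))  # 0: wrong sequence, still locked
```

A Mealy version would attach outputs to the transitions instead of the states, which can reduce the number of states needed.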

Finally, we learned how to **minimise** an FSM while keeping its functionality equivalent to save resources.

Some problems however, cannot be computed using FSMs, so the notion of FSMs alone is not enough to be an *ultimate* computing device that we need.

{:.note}
Remember that the goal of this course is to teach you how to build a **general-purpose computer** from the ground up. A *general-purpose computer* is supposed to be an *ultimate computing device,* able to solve *various computational problems and tasks* such as your math homework, running video games, rendering graphics, playing music or video, browsing the web, and many more.
@@ -395,7 +408,6 @@ By definition, an FSM needs a **finite** number of states. It is able to impleme
>
> We know that we can definitely write a program that performs parenthesis checking easily, so we know that our computers aren't just a *simple* FSM. In the next chapter, we will learn about another class of machine called the **Turing Machine** that can tackle this issue.

# Appendix

