AVL Tree/README.markdown (+5 -5)
@@ -8,15 +8,15 @@ This is an example of an unbalanced tree:


-All the children are in the left branch and none are in the right. This is essentially the same as a [linked list](../Linked List/) and as a result, searching takes **O(n)** time instead of the much faster **O(log n)** that you'd expect from a binary search tree.
+All the children are in the left branch and none are in the right. This is essentially the same as a [linked list](../Linked List/). As a result, searching takes **O(n)** time instead of the much faster **O(log n)** that you'd expect from a binary search tree.

A balanced version of that tree would look like this:



One way to make the binary search tree balanced is to insert the nodes in a totally random order. But that doesn't guarantee success, nor is it always practical.

-The other solution is to use a *self-balancing* binary tree. This type of data structure adjusts the tree to keep it balanced after you insert or delete nodes. The height of such a tree is guaranteed to be *log(n)* where *n* is the number nodes. On a balanced tree all insert, remove, and search operations take **O(log n)** time. That means fast. ;-)
+The other solution is to use a *self-balancing* binary tree. This type of data structure adjusts the tree to keep it balanced after you insert or delete nodes. The height of such a tree is guaranteed to be *log(n)*, where *n* is the number of nodes. On a balanced tree all insert, remove, and search operations take only **O(log n)** time. That means fast. ;-)

## Introducing the AVL tree
@@ -32,7 +32,7 @@ As mentioned, in an AVL tree a node is balanced if its left and right subtree ha


-But these are trees that are unbalanced, because the height of the left subtree is too large compared to the right subtree:
+But the following are trees that are unbalanced, because the height of the left subtree is too large compared to the right subtree:


@@ -52,7 +52,7 @@ Insertion never needs more than 2 rotations. Removal might require up to *log(n)
## The code
-Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary Search Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently is inserting and deleting the nodes.
+Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary Search Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes.

> **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary Search Tree/). It will make the rest of the AVL tree easier to understand.
@@ -66,6 +66,6 @@ The interesting bits are in the following methods:
[AVL tree on Wikipedia](https://en.wikipedia.org/wiki/AVL_tree)
-AVL tree was the first self-balancing binary tree. These days, the [red-black tree](../Red-Black Tree/) seems to be more common.
+The AVL tree was the first self-balancing binary tree. These days, the [red-black tree](../Red-Black Tree/) seems to be more popular.

*Written for Swift Algorithm Club by Mike Taghavi and Matthijs Hollemans*
Algorithm Design.markdown (+3 -3)
@@ -4,17 +4,17 @@ What to do when you're faced with a new problem and you need to find an algorith
### Is it similar to another problem?
-One thing I like about [The Algorithm Design Manual](http://www.algorist.com) by Steven Skiena is that it includes a catalog of problems and solutions you can try.
+If you can frame your problem in terms of another, more general problem, then you might be able to use an existing algorithm. Why reinvent the wheel?

-If you can frame your problem in terms of another, more general problem, then you might be able to use an existing algorithm.
+One thing I like about [The Algorithm Design Manual](http://www.algorist.com) by Steven Skiena is that it includes a catalog of problems and solutions you can try. (See also his [algorithm repository](http://www3.cs.stonybrook.edu/~algorith/).)

### It's OK to start with brute force
Naive, brute force solutions are often too slow for practical use but they're a good starting point. By writing the brute force solution, you learn to understand what the problem is really all about.
Once you have a brute force implementation you can use that to verify that any improvements you come up with are correct.
-And if you only work with small datasets, then a brute force approach may actually be good enough on its own.
+And if you only work with small datasets, then a brute force approach may actually be good enough on its own. Don't fall into the trap of premature optimization!

Array2D/README.markdown
The downside of using multi-dimensional arrays in this fashion -- actually, multiple nested arrays -- is that it's easy to lose track of what dimension represents what.

-So instead let's create our own type that acts like a 2-D array and that is more convenient to use. Here it is:
+So instead let's create our own type that acts like a 2-D array and that is more convenient to use. Here it is, short and sweet:
```swift
public struct Array2D<T> {
@@ -100,8 +100,14 @@ Thanks to the `subscript` function, you can do the following to retrieve an obje
let myCookie = cookies[column, row]
```

+Or change it:
+
+```swift
+cookies[column, row] = newCookie
+```
+
Internally, `Array2D` uses a single one-dimensional array to store the data. The index of an object in that array is given by `(row x numberOfColumns) + column`. But as a user of `Array2D` you don't have to worry about that; you only have to think in terms of "column" and "row", and let `Array2D` figure out the details for you. That's the advantage of wrapping primitive types into a wrapper class or struct.
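To make that concrete, here is a minimal sketch of how such a wrapper can translate `(column, row)` into a flat index. The type and property names below are illustrative assumptions, not necessarily the exact code in the Array2D source:

```swift
// Hypothetical sketch of a 2-D wrapper over one flat array (names are assumptions).
public struct Grid2D<T> {
    public let columns: Int
    public let rows: Int
    private var storage: [T]   // one flat array, laid out row by row

    public init(columns: Int, rows: Int, initialValue: T) {
        self.columns = columns
        self.rows = rows
        self.storage = Array(repeating: initialValue, count: rows * columns)
    }

    public subscript(column: Int, row: Int) -> T {
        get { storage[row * columns + column] }      // (row x numberOfColumns) + column
        set { storage[row * columns + column] = newValue }
    }
}

// Usage: a 9x7 grid of optional cookies, addressed as grid[column, row].
var grid = Grid2D<String?>(columns: 9, rows: 7, initialValue: nil)
grid[3, 2] = "cookie"
```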
And that's all there is to it.
-*Written by Matthijs Hollemans*
+*Written for Swift Algorithm Club by Matthijs Hollemans*
Binary Search Tree/README.markdown (+24 -22)
@@ -1,12 +1,12 @@
# Binary Search Tree (BST)
-A binary search tree is a special kind of [binary tree](../Binary Tree/) (a tree in which a node only has two children) that performs insertions and deletions such that the tree is always sorted.
+A binary search tree is a special kind of [binary tree](../Binary Tree/) (a tree in which each node has at most two children) that performs insertions and deletions such that the tree is always sorted.

If you don't know what a tree is or what it is for, then [read this first](../Tree/).
## "Always sorted" property
-This is an example of a valid binary search tree:
+Here is an example of a valid binary search tree:


@@ -33,7 +33,7 @@ There is always only one possible place where the new element can be inserted in
> **Note:** The *height* of a node is the number of steps it takes to go from that node to its lowest leaf. The height of the entire tree is the distance from the root to the lowest leaf. Many of the operations on a binary search tree are expressed in terms of the tree's height.
-By following this simple rule -- smaller values on the left, larger values on the right -- we keep the tree sorted in a way that whenever we query it, we can quickly check if a value is in the tree.
+By following this simple rule -- smaller values on the left, larger values on the right -- we keep the tree sorted in a way such that whenever we query it, we can quickly check if a value is in the tree.

## Searching the tree
@@ -49,15 +49,15 @@ If we were looking for the value `5` in the example, it would go as follows:


-Thanks to the structure of the tree, searching is really fast. It runs in **O(h)** time. If you have a tree with a million nodes, it only takes about 20 steps to find anything in this tree. (The idea is very similar to [binary search](../Binary Search) in an array.)
+Thanks to the structure of the tree, searching is really fast. It runs in **O(h)** time. If you have a well-balanced tree with a million nodes, it only takes about 20 steps to find anything in this tree. (The idea is very similar to [binary search](../Binary Search) in an array.)
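Where does "about 20 steps" come from? Each comparison discards half of the remaining nodes, so the number of steps grows with the base-2 logarithm of the node count. A quick back-of-the-envelope check in Swift (just arithmetic, not part of the tree code):

```swift
import Foundation

// Each step halves the search space, so a well-balanced tree with n nodes
// needs roughly log2(n) comparisons.
let steps = log2(1_000_000.0)   // ≈ 19.93, i.e. about 20 comparisons
print(steps)
```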
## Traversing the tree
Sometimes you don't want to look at just a single node, but at all of them.
There are three ways to traverse a binary tree:
-1. *In-order* (or *depth-first*): first look at the left child node, then at the node itself, and finally at the right child.
+1. *In-order* (or *depth-first*): first look at the left child of a node, then at the node itself, and finally at its right child.
2. *Pre-order*: first look at a node, then its left and right children.
3. *Post-order*: first look at the left and right children and process the node itself last.
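To make the first of these concrete, here is a small, self-contained sketch of in-order traversal. The `Node` class below is a hypothetical stand-in with the same shape as the README's tree nodes (a value plus optional left/right children), not the actual `BinarySearchTree` class:

```swift
// Hypothetical minimal node type, only for illustrating traversal order.
final class Node<T> {
    var value: T
    var left: Node?
    var right: Node?
    init(_ value: T) { self.value = value }
}

// In-order: recurse into the left subtree, visit the node itself,
// then recurse into the right subtree.
func traverseInOrder<T>(_ node: Node<T>?, _ visit: (T) -> Void) {
    guard let node = node else { return }
    traverseInOrder(node.left, visit)
    visit(node.value)
    traverseInOrder(node.right, visit)
}
```

Because of the "smaller on the left, larger on the right" rule, an in-order traversal of a binary search tree visits the values in ascending sorted order.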
@@ -148,9 +148,9 @@ Here's how you'd use it:
let tree = BinarySearchTree<Int>(value: 7)
```

-The `count` property determines how many nodes are in the subtree described by this node. This doesn't just count the node's immediate children but also their children and their children's children, and so on. If this is the root node, then it counts how many nodes are in the entire tree. Initially, `count = 0`.
+The `count` property determines how many nodes are in the subtree described by this node. This doesn't just count the node's immediate children but also their children and their children's children, and so on. If this particular object is the root node, then it counts how many nodes are in the entire tree. Initially, `count = 0`.

-> **Note:** Because `left`, `right`, and `parent` are optionals, we can make good use of Swift's optional chaining (`?`) and nil-coalescing operators (`??`). You could also write this sort of thing with `if let` but that takes up more space.
+> **Note:** Because `left`, `right`, and `parent` are optionals, we can make good use of Swift's optional chaining (`?`) and nil-coalescing operators (`??`). You could also write this sort of thing with `if let` but that is less concise.

### Inserting nodes
@@ -182,7 +182,7 @@ A tree node by itself is pretty useless, so here is how you would add new nodes
Like so many other tree operations, insertion is easiest to implement with recursion. We compare the new value to the values of the existing nodes and decide whether to add it to the left branch or the right branch.

-If there is no more left or right child to look at, we create a new `BinarySearchTree` object for the new node and connect it to the tree by setting its `parent` property.
+If there is no more left or right child to look at, we create a `BinarySearchTree` object for the new node and connect it to the tree by setting its `parent` property.

> **Note:** Because the whole point of a binary search tree is to have smaller nodes on the left and larger ones on the right, you should always insert elements at the root, to make sure this remains a valid binary tree!
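As a rough illustration of that recursion, here is a sketch of insertion, reusing the hypothetical `Node` class from the traversal sketch above (it tracks no `parent` pointer, so the real `BinarySearchTree.insert()` differs in those details):

```swift
// Hypothetical sketch: descend left for smaller values, right for everything
// else, and only create a new node once there is no child left to descend into.
func insert<T: Comparable>(_ value: T, into node: Node<T>) {
    if value < node.value {
        if let left = node.left {
            insert(value, into: left)
        } else {
            node.left = Node(value)
        }
    } else {
        if let right = node.right {
            insert(value, into: right)
        } else {
            node.right = Node(value)
        }
    }
}
```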
@@ -197,7 +197,7 @@ tree.insert(9)
tree.insert(1)
```

-> **Note:** For reasons that will become clear later, you should insert the numbers in a somewhat random order. If you insert them in sorted order, then the tree won't have the right shape.
+> **Note:** For reasons that will become clear later, you should insert the numbers in a somewhat random order. If you insert them in sorted order, the tree won't have the right shape.

For convenience, let's add an init method that calls `insert()` for all the elements in an array:
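A sketch of what that convenience initializer could look like, assuming the `init(value:)` and `insert()` calls seen elsewhere in this README (the actual code in the file may differ slightly):

```swift
extension BinarySearchTree {
    // Hypothetical sketch: build up a tree by inserting each array element in turn.
    public convenience init(array: [T]) {
        precondition(!array.isEmpty, "array must not be empty")
        self.init(value: array.first!)
        for value in array.dropFirst() {
            insert(value)
        }
    }
}
```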
@@ -243,11 +243,11 @@ When you do a `print(tree)`, you should get something like this:
((1) <- 2 -> (5)) <- 7 -> ((9) <- 10)

-With some imagination, you should see now that this indeed corresponds to the following tree:
+The root node is in the middle. With some imagination, you should see that this indeed corresponds to the following tree:



-By the way, you may be wondering what happens when you insert duplicate items? We always insert those kinds of items in the right branch. Try it out!
+By the way, you may be wondering what happens when you insert duplicate items? We always insert those in the right branch. Try it out!

### Searching
@@ -269,7 +269,7 @@ Here is the implementation of `search()`:
I hope the logic is clear: this starts at the current node (usually the root) and compares the values. If the search value is less than the node's value, we continue searching in the left branch; if the search value is greater, we dive into the right branch.

-Of course, if there are no more nodes to look at -- when `left` or `right` is nil -- then we return nil to indicate the search value is not in the tree.
+Of course, if there are no more nodes to look at -- when `left` or `right` is nil -- then we return `nil` to indicate the search value is not in the tree.
> **Note:** In Swift that's very conveniently done with optional chaining; when you write `left?.search(value)` it automatically returns nil if `left` is nil. There's no need to explicitly check for this with an `if` statement.
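Just to visualize that shape, here is a sketch of such a search written against the hypothetical `Node` class from the traversal sketch above (the README's actual `search()` is a method on the tree class itself, so treat this as an illustration only):

```swift
extension Node where T: Comparable {
    // Hypothetical sketch: `left?.search(...)` returns nil automatically
    // when there is no left child, so no explicit nil check is needed.
    func search(_ value: T) -> Node? {
        if value < self.value {
            return left?.search(value)
        } else if value > self.value {
            return right?.search(value)
        } else {
            return self
        }
    }
}
```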
@@ -299,7 +299,7 @@ Here's how to test searching:
tree.search(5)
tree.search(2)
tree.search(7)
-tree.search(6) // nil
+tree.search(6) // nil
```
The first three lines all return the corresponding `BinaryTreeNode` object. The last line returns `nil` because there is no node with value `6`.
@@ -426,7 +426,7 @@ We won't need it for deleting, but for completeness' sake, here is the opposite
}
```

-It returns the rightmost descendent of this node. We find it by following `right` pointers until we get to the end. In the above example, the rightmost descendent of node `2` is `5`. The maximum value in the entire tree is `11`, because that is the rightmost descendent of the root node `7`.
+It returns the rightmost descendent of the node. We find it by following `right` pointers until we get to the end. In the above example, the rightmost descendent of node `2` is `5`. The maximum value in the entire tree is `11`, because that is the rightmost descendent of the root node `7`.

Finally, we can write the code to remove a node from the tree:
@@ -460,8 +460,6 @@ It doesn't look so scary after all. ;-) Here is what it does:
The only tricky situation here is `// 1`. Rather than deleting the current node, we give it the successor's value and remove the successor node instead.

-> **Note:** What would happen if you deleted the root node? Specifically, how would you know which node becomes the new root node? It turns out you don't have to worry about that because the root never actually gets deleted. We simply give it a new value.
-
Try it out:
```swift
@@ -482,6 +480,8 @@ But after `remove()` you get:
As you can see, node `5` has taken the place of `2`. In fact, if you do `print(node2)` you'll see that it now has the value `5`. We didn't actually remove the `node2` object, we just gave it a new value.

+> **Note:** What would happen if you deleted the root node? Specifically, how would you know which node becomes the new root node? It turns out you don't have to worry about that because the root never actually gets deleted. We simply give it a new value.
+
Like most binary search tree operations, removing a node runs in **O(h)** time, where **h** is the height of the tree.
### Depth and height
@@ -516,7 +516,7 @@ You can also calculate the *depth* of a node, which is the distance to the root.
  var edges = 0
  while case let parent? = node.parent {
    node = parent
-    ++edges
+    edges += 1
  }
  return edges
}
@@ -624,6 +624,8 @@ We've implemented the binary tree node as a class but you can also use an enum.
The difference is reference semantics versus value semantics. Making a change to the class-based tree will update that same instance in memory. But the enum-based tree is immutable -- any insertions or deletions will give you an entirely new copy of the tree. Which one is best totally depends on what you want to use it for.

+Here's how you'd make a binary search tree using an enum:
+
```swift
public enum BinarySearchTree<T: Comparable> {
  case Empty
@@ -638,9 +640,9 @@ The enum has three cases:
- `Leaf` for a leaf node that has no children.
- `Node` for a node that has one or two children. This is marked `indirect` so that it can hold `BinarySearchTree` values. Without `indirect` you can't make recursive enums.

-> **Note:** The nodes in this binary tree don't have a reference to their parent node. That will make certain operations slightly more cumbersome to implement.
+> **Note:** The nodes in this binary tree don't have a reference to their parent node. It's not a major impediment but it will make certain operations slightly more cumbersome to implement.
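Putting those three cases together, the enum's shape could look roughly like this (a sketch based on the case descriptions above; the associated values and exact spelling in the source file may differ):

```swift
// Hypothetical sketch of the three cases the text describes.
public enum BinarySearchTree<T: Comparable> {
    case Empty                                                  // no tree at all
    case Leaf(T)                                                // a value with no children
    indirect case Node(BinarySearchTree, T, BinarySearchTree)   // value plus left/right subtrees
}
```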

-A usual, we'll implement most functionality recursively. We'll treat each case slightly differently. For example, this is how you could calculate the number of nodes in the tree and the height of the tree:
+As usual, we'll implement most functionality recursively. We'll treat each case of the enum slightly differently. For example, this is how you could calculate the number of nodes in the tree and the height of the tree:
```swift
public var count: Int {
@@ -696,7 +698,7 @@ tree = tree.insert(9)
tree = tree.insert(1)
```

-Notice that each time you insert something, you get back a completely new tree. That's why you need to assign the result back to the `tree` variable.
+Notice that each time you insert something, you get back a completely new tree object. That's why you need to assign the result back to the `tree` variable.

Here is the all-important search function:
@@ -748,13 +750,13 @@ When you do `print(tree)` it will look something like this:
((1 <- 2 -> 5) <- 7 -> (9 <- 10 -> .))

-The root node is in the middle. A dot means there is no child at that position.
+The root node is in the middle; a dot means there is no child at that position.

## When the tree becomes unbalanced...
A binary search tree is *balanced* when its left and right subtrees contain roughly the same number of nodes. In that case, the height of the tree is *log(n)*, where *n* is the number of nodes. That's the ideal situation.

-However, if one branch is significantly longer than the other, searching becomes very slow. We need to check way more values than we'd ideally have to. In the worst case, the height of the tree can become *n*. Such a tree acts more like a [linked list](../Linked List/) than a binary search tree, with performance degrading to **O(n)**. Not good!
+However, if one branch is significantly longer than the other, searching becomes very slow. We end up checking way more values than we'd ideally have to. In the worst case, the height of the tree can become *n*. Such a tree acts more like a [linked list](../Linked List/) than a binary search tree, with performance degrading to **O(n)**. Not good!
One way to make the binary search tree balanced is to insert the nodes in a totally random order. On average that should balance out the tree quite nicely. But it doesn't guarantee success, nor is it always practical.