Commit 3d03374

Jake Romer committed
URL encode links in Markdown files
Many Markdown links in the READMEs throughout this repo are broken because the files and directories they refer to have spaces in their names, and those spaces aren't URL-encoded in the link targets. This commit simply replaces the spaces with %20 to fix them.
1 parent ef1c33b commit 3d03374
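
For reference, the fix applied to every link is a plain text substitution: each space in a link target becomes `%20`. A minimal Swift sketch of that substitution (a hypothetical helper, not code from this commit):

```swift
import Foundation

// Hypothetical helper: percent-encode the spaces in a relative Markdown link target,
// e.g. "../Binary Search Tree/" -> "../Binary%20Search%20Tree/".
func encodeSpaces(in linkTarget: String) -> String {
    return linkTarget.replacingOccurrences(of: " ", with: "%20")
}

encodeSpaces(in: "../Binary Search Tree/")                  // "../Binary%20Search%20Tree/"
encodeSpaces(in: "../Minimum Spanning Tree (Unweighted)/")  // "../Minimum%20Spanning%20Tree%20(Unweighted)/"
```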

21 files changed, +49 -49 lines changed


AVL Tree/README.markdown

+4 -4

@@ -1,14 +1,14 @@
# AVL Tree

-An AVL tree is a self-balancing form of a [binary search tree](../Binary Search Tree/), in which the height of subtrees differ at most by only 1.
+An AVL tree is a self-balancing form of a [binary search tree](../Binary%20Search%20Tree/), in which the height of subtrees differ at most by only 1.

A binary tree is *balanced* when its left and right subtrees contain roughly the same number of nodes. That is what makes searching the tree really fast. But if a binary search tree is unbalanced, searching can become really slow.

This is an example of an unbalanced tree:

![Unbalanced tree](Images/Unbalanced.png)

-All the children are in the left branch and none are in the right. This is essentially the same as a [linked list](../Linked List/). As a result, searching takes **O(n)** time instead of the much faster **O(log n)** that you'd expect from a binary search tree.
+All the children are in the left branch and none are in the right. This is essentially the same as a [linked list](../Linked%20List/). As a result, searching takes **O(n)** time instead of the much faster **O(log n)** that you'd expect from a binary search tree.

A balanced version of that tree would look like this:

@@ -78,14 +78,14 @@ Insertion never needs more than 2 rotations. Removal might require up to __log(n

Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary Search Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes.

-> **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary Search Tree/). It will make the rest of the AVL tree easier to understand.
+> **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary%20Search%20Tree/). It will make the rest of the AVL tree easier to understand.

The interesting bits are in the `balance()` method which is called after inserting or deleting a node.

## See also

[AVL tree on Wikipedia](https://en.wikipedia.org/wiki/AVL_tree)

-AVL tree was the first self-balancing binary tree. These days, the [red-black tree](../Red-Black Tree/) seems to be more popular.
+AVL tree was the first self-balancing binary tree. These days, the [red-black tree](../Red-Black%20Tree/) seems to be more popular.

*Written for Swift Algorithm Club by Mike Taghavi and Matthijs Hollemans*

Big-O Notation.markdown

+1 -1

@@ -19,6 +19,6 @@ Big-O | Name | Description

Often you don't need math to figure out what the Big-O of an algorithm is but you can simply use your intuition. If your code uses a single loop that looks at all **n** elements of your input, the algorithm is **O(n)**. If the code has two nested loops, it is **O(n^2)**. Three nested loops gives **O(n^3)**, and so on.

-Note that Big-O notation is an estimate and is only really useful for large values of **n**. For example, the worst-case running time for the [insertion sort](Insertion Sort/) algorithm is **O(n^2)**. In theory that is worse than the running time for [merge sort](Merge Sort/), which is **O(n log n)**. But for small amounts of data, insertion sort is actually faster, especially if the array is partially sorted already!
+Note that Big-O notation is an estimate and is only really useful for large values of **n**. For example, the worst-case running time for the [insertion sort](Insertion%20Sort/) algorithm is **O(n^2)**. In theory that is worse than the running time for [merge sort](Merge Sort/), which is **O(n log n)**. But for small amounts of data, insertion sort is actually faster, especially if the array is partially sorted already!

If you find this confusing, don't let this Big-O stuff bother you too much. It's mostly useful when comparing two algorithms to figure out which one is better. But in the end you still want to test in practice which one really is the best. And if the amount of data is relatively small, then even a slow algorithm will be fast enough for practical use.

Binary Search Tree/README.markdown

+5 -5

@@ -1,6 +1,6 @@
# Binary Search Tree (BST)

-A binary search tree is a special kind of [binary tree](../Binary Tree/) (a tree in which each node has at most two children) that performs insertions and deletions such that the tree is always sorted.
+A binary search tree is a special kind of [binary tree](../Binary%20Tree/) (a tree in which each node has at most two children) that performs insertions and deletions such that the tree is always sorted.

If you don't know what a tree is or what it is for, then [read this first](../Tree/).

@@ -49,7 +49,7 @@ If we were looking for the value `5` in the example, it would go as follows:

![Searching the tree](Images/Searching.png)

-Thanks to the structure of the tree, searching is really fast. It runs in **O(h)** time. If you have a well-balanced tree with a million nodes, it only takes about 20 steps to find anything in this tree. (The idea is very similar to [binary search](../Binary Search) in an array.)
+Thanks to the structure of the tree, searching is really fast. It runs in **O(h)** time. If you have a well-balanced tree with a million nodes, it only takes about 20 steps to find anything in this tree. (The idea is very similar to [binary search](../Binary%20Search) in an array.)

## Traversing the tree

@@ -535,7 +535,7 @@ The code for `successor()` works the exact same way but mirrored:

Both these methods run in **O(h)** time.

-> **Note:** There is a cool variation called a ["threaded" binary tree](../Threaded Binary Tree) where "unused" left and right pointers are repurposed to make direct links between predecessor and successor nodes. Very clever!
+> **Note:** There is a cool variation called a ["threaded" binary tree](../Threaded%20Binary%20Tree) where "unused" left and right pointers are repurposed to make direct links between predecessor and successor nodes. Very clever!

### Is the search tree valid?

@@ -713,11 +713,11 @@ The root node is in the middle; a dot means there is no child at that position.

A binary search tree is *balanced* when its left and right subtrees contain roughly the same number of nodes. In that case, the height of the tree is *log(n)*, where *n* is the number of nodes. That's the ideal situation.

-However, if one branch is significantly longer than the other, searching becomes very slow. We end up checking way more values than we'd ideally have to. In the worst case, the height of the tree can become *n*. Such a tree acts more like a [linked list](../Linked List/) than a binary search tree, with performance degrading to **O(n)**. Not good!
+However, if one branch is significantly longer than the other, searching becomes very slow. We end up checking way more values than we'd ideally have to. In the worst case, the height of the tree can become *n*. Such a tree acts more like a [linked list](../Linked%20List/) than a binary search tree, with performance degrading to **O(n)**. Not good!

One way to make the binary search tree balanced is to insert the nodes in a totally random order. On average that should balance out the tree quite nicely. But it doesn't guarantee success, nor is it always practical.

-The other solution is to use a *self-balancing* binary tree. This type of data structure adjusts the tree to keep it balanced after you insert or delete nodes. See [AVL tree](../AVL Tree) and [red-black tree](../Red-Black Tree) for examples.
+The other solution is to use a *self-balancing* binary tree. This type of data structure adjusts the tree to keep it balanced after you insert or delete nodes. See [AVL tree](../AVL%20Tree) and [red-black tree](../Red-Black%20Tree) for examples.

## See also

Binary Search/README.markdown

+1 -1

@@ -12,7 +12,7 @@ let numbers = [11, 59, 3, 2, 53, 17, 31, 7, 19, 67, 47, 13, 37, 61, 29, 43, 5, 4
numbers.indexOf(43) // returns 15
```

-The built-in `indexOf()` function performs a [linear search](../Linear Search/). In code that looks something like this:
+The built-in `indexOf()` function performs a [linear search](../Linear%20Search/). In code that looks something like this:

```swift
func linearSearch<T: Equatable>(_ a: [T], _ key: T) -> Int? {

Binary Tree/README.markdown

+1 -1

@@ -8,7 +8,7 @@ The child nodes are usually called the *left* child and the *right* child. If a

Often nodes will have a link back to their parent but this is not strictly necessary.

-Binary trees are often used as [binary search trees](../Binary Search Tree/). In that case, the nodes must be in a specific order (smaller values on the left, larger values on the right). But this is not a requirement for all binary trees.
+Binary trees are often used as [binary search trees](../Binary%20Search%20Tree/). In that case, the nodes must be in a specific order (smaller values on the left, larger values on the right). But this is not a requirement for all binary trees.

For example, here is a binary tree that represents a sequence of arithmetical operations, `(5 * (a - 10)) + (-4 * (3 / b))`:

Bloom Filter/README.markdown

+1 -1

@@ -18,7 +18,7 @@ An advantage of the Bloom Filter over a hash table is that the former maintains

## Inserting objects into the set

-A Bloom Filter is essentially a fixed-length [bit vector](../Bit Set/), an array of bits. When we insert objects, we set some of these bits to `1`, and when we query for objects we check if certain bits are `0` or `1`. Both operations use hash functions.
+A Bloom Filter is essentially a fixed-length [bit vector](../Bit%20Set/), an array of bits. When we insert objects, we set some of these bits to `1`, and when we query for objects we check if certain bits are `0` or `1`. Both operations use hash functions.

To insert an element in the filter, the element is hashed with several different hash functions. Each hash function returns a value that we map to an index in the array. We then set the bits at these indices to `1` or true.

Bounded Priority Queue/README.markdown

+1 -1

@@ -1,6 +1,6 @@
# Bounded Priority queue

-A bounded priority queue is similar to a regular [priority queue](../Priority Queue/), except that there is a fixed upper bound on the number of elements that can be stored. When a new element is added to the queue while the queue is at capacity, the element with the highest priority value is ejected from the queue.
+A bounded priority queue is similar to a regular [priority queue](../Priority%20Queue/), except that there is a fixed upper bound on the number of elements that can be stored. When a new element is added to the queue while the queue is at capacity, the element with the highest priority value is ejected from the queue.

## Example

Boyer-Moore/README.markdown

+1 -1

@@ -24,7 +24,7 @@ animals.indexOf(pattern: "🐮")

> **Note:** The index of the cow is 6, not 3 as you might expect, because the string uses more storage per character for emoji. The actual value of the `String.Index` is not so important, just that it points at the right character in the string.

-The [brute-force approach](../Brute-Force String Search/) works OK, but it's not very efficient, especially on large chunks of text. As it turns out, you don't need to look at _every_ character from the source string -- you can often skip ahead multiple characters.
+The [brute-force approach](../Brute-Force%20String%20Search/) works OK, but it's not very efficient, especially on large chunks of text. As it turns out, you don't need to look at _every_ character from the source string -- you can often skip ahead multiple characters.

The skip-ahead algorithm is called [Boyer-Moore](https://en.wikipedia.org/wiki/Boyer–Moore_string_search_algorithm) and it has been around for a long time. It is considered the benchmark for all string search algorithms.

Breadth-First Search/README.markdown

+2 -2

@@ -148,7 +148,7 @@ This will output: `["a", "b", "c", "d", "e", "f", "g", "h"]`

Breadth-first search can be used to solve many problems. A small selection:

-* Computing the [shortest path](../Shortest Path/) between a source node and each of the other nodes (only for unweighted graphs).
-* Calculating the [minimum spanning tree](../Minimum Spanning Tree (Unweighted)/) on an unweighted graph.
+* Computing the [shortest path](../Shortest%20Path/) between a source node and each of the other nodes (only for unweighted graphs).
+* Calculating the [minimum spanning tree](../Minimum%20Spanning%20Tree%20(Unweighted)/) on an unweighted graph.

*Written by [Chris Pilcher](https://github.com/chris-pilcher) and Matthijs Hollemans*

Count Occurrences/README.markdown

+2 -2

@@ -2,9 +2,9 @@

Goal: Count how often a certain value appears in an array.

-The obvious way to do this is with a [linear search](../Linear Search/) from the beginning of the array until the end, keeping count of how often you come across the value. That is an **O(n)** algorithm.
+The obvious way to do this is with a [linear search](../Linear%20Search/) from the beginning of the array until the end, keeping count of how often you come across the value. That is an **O(n)** algorithm.

-However, if the array is sorted you can do it much faster, in **O(log n)** time, by using a modification of [binary search](../Binary Search/).
+However, if the array is sorted you can do it much faster, in **O(log n)** time, by using a modification of [binary search](../Binary%20Search/).

Let's say we have the following array:

Hash Set/README.markdown

+1 -1

@@ -196,7 +196,7 @@ difference2.allElements() // [5, 6]

If you look at the [documentation](http://swiftdoc.org/v2.1/type/Set/) for Swift's own `Set`, you'll notice it has tons more functionality. An obvious extension would be to make `HashSet` conform to `SequenceType` so that you can iterate it with a `for`...`in` loop.

-Another thing you could do is replace the `Dictionary` with an actual [hash table](../Hash Table), but one that just stores the keys and doesn't associate them with anything. So you wouldn't need the `Bool` values anymore.
+Another thing you could do is replace the `Dictionary` with an actual [hash table](../Hash%20Table), but one that just stores the keys and doesn't associate them with anything. So you wouldn't need the `Bool` values anymore.

If you often need to look up whether an element belongs to a set and perform unions, then the [union-find](../Union-Find/) data structure may be more suitable. It uses a tree structure instead of a dictionary to make the find and union operations very efficient.

How to Contribute.markdown

+1 -1

@@ -6,7 +6,7 @@ Want to help out with the Swift Algorithm Club? Great!

Take a look at the [list](README.markdown). Any algorithms or data structures that don't have a link yet are up for grabs.

-Algorithms in the [Under construction](Under Construction.markdown) area are being worked on. Suggestions and feedback is welcome!
+Algorithms in the [Under construction](Under%20Construction.markdown) area are being worked on. Suggestions and feedback is welcome!

New algorithms and data structures are always welcome (even if they aren't on the list).

Ordered Array/README.markdown

+1 -1

@@ -83,7 +83,7 @@ a // [-2, -1, 1, 3, 4, 5, 7, 9, 10]

The array's contents will always be sorted from low to high, now matter what.

-Unfortunately, the current `findInsertionPoint()` function is a bit slow. In the worst case, it needs to scan through the entire array. We can speed this up by using a [binary search](../Binary Search) to find the insertion point.
+Unfortunately, the current `findInsertionPoint()` function is a bit slow. In the worst case, it needs to scan through the entire array. We can speed this up by using a [binary search](../Binary%20Search) to find the insertion point.

Here is the new version:

Ordered Set/README.markdown

+1 -1

@@ -115,7 +115,7 @@ The next function is `indexOf()`, which takes in an object of type `T` and retur
}
```

-> **Note:** If you are not familiar with the concept of binary search, we have an [article that explains all about it](../Binary Search).
+> **Note:** If you are not familiar with the concept of binary search, we have an [article that explains all about it](../Binary%20Search).

However, there is an important issue to deal with here. Recall that two objects can be unequal yet still have the same "value" for the purposes of comparing them. Since a set can contain multiple items with the same value, it is important to check that the binary search has landed on the correct item.

Selection Sort/README.markdown

+3 -3

@@ -6,7 +6,7 @@ You are given an array of numbers and need to put them in the right order. The s

[ ...sorted numbers... | ...unsorted numbers... ]

-This is similar to [insertion sort](../Insertion Sort/), but the difference is in how new numbers are added to the sorted portion.
+This is similar to [insertion sort](../Insertion%20Sort/), but the difference is in how new numbers are added to the sorted portion.

It works as follows:

@@ -108,9 +108,9 @@ The source file [SelectionSort.swift](SelectionSort.swift) has a version of this

## Performance

-Selection sort is easy to understand but it performs quite badly, **O(n^2)**. It's worse than [insertion sort](../Insertion Sort/) but better than [bubble sort](../Bubble Sort/). The killer is finding the lowest element in the rest of the array. This takes up a lot of time, especially since the inner loop will be performed over and over.
+Selection sort is easy to understand but it performs quite badly, **O(n^2)**. It's worse than [insertion sort](../Insertion%20Sort/) but better than [bubble sort](../Bubble Sort/). The killer is finding the lowest element in the rest of the array. This takes up a lot of time, especially since the inner loop will be performed over and over.

-[Heap sort](../Heap Sort/) uses the same principle as selection sort but has a really fast method for finding the minimum value in the rest of the array. Its performance is **O(n log n)**.
+[Heap sort](../Heap%20Sort/) uses the same principle as selection sort but has a really fast method for finding the minimum value in the rest of the array. Its performance is **O(n log n)**.

## See also

Shell Sort/README.markdown

+1 -1

@@ -39,7 +39,7 @@ As you can see, each sublist contains only every 4th item from the original arra

We now call `insertionSort()` once on each sublist.

-This particular version of [insertion sort](../Insertion Sort/) sorts from the back to the front. Each item in the sublist is compared against the others. If they're in the wrong order, the value is swapped and travels all the way down until we reach the start of the sublist.
+This particular version of [insertion sort](../Insertion%20Sort/) sorts from the back to the front. Each item in the sublist is compared against the others. If they're in the wrong order, the value is swapped and travels all the way down until we reach the start of the sublist.

So for sublist 0, we swap `4` with `72`, then swap `4` with `64`. After sorting, this sublist looks like:
