common themes:

goal: a common framework powerful enough to encompass both technical
computing (TC) and general-purpose programming.

"finding the right abstraction level": prescribing just enough, but not
so much that programmers feel limited.
  - there is such a thing as "more" abstraction, but what matters is the
    "right" abstraction.
  - the right abstraction is one where you can specify everything that's
    important to you, but don't have to specify anything else.
  - the right abstraction does not constrain you from doing
    mathematically valid things.
  - the right abstraction empowers the user.
  - it's easy to have misconceptions about what the right level is. it
    would be easy for a computer scientist to conclude, for example, that
    scientific users just need BLAS, or the 200 most popular matlab
    functions.
  - computer scientists are good at adding more abstraction; the
    difference here is we try to make sure the more powerful abstractions
    can actually do what people want, given the existing popular systems.
theme: code generation

show:
  - you need some ability to prove things about types, not just JIT with
    speculative optimizations (see the sketch below).
  - our design makes it easier to get performance, for both the user and
    the language implementer.
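a minimal Julia sketch of the first point (sumsq is a made-up example
function, not from the notes): because argument types are part of the
method signature, the compiler can prove facts about return types ahead
of time instead of guarding speculative guesses at run time.

    # the compiler can *prove* sumsq returns Float64 for a
    # Vector{Float64}: zero, *, and + all have inferable types,
    # so no speculative guards or deoptimization paths are needed.
    function sumsq(v::Vector{Float64})
        s = zero(eltype(v))   # statically inferred as Float64
        for x in v
            s += x * x
        end
        return s
    end

    # Base.return_types reports what inference proved:
    Base.return_types(sumsq, (Vector{Float64},))   # => [Float64]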
mechanism:
  TC is very light on theory and could benefit from more of it. certain
  aspects of this theory can be used to redesign TC languages in a
  better way, as opposed to the ad hoc designs that have come before.
review chapter:
  survey all the things that have been tried and what we have learned;
  out of that, identify a convex set of the good stuff.
  the point is not to prove "our way is fastest" of all possible ways;
  our thesis statement is not an empirical finding. too much empiricism
  in computer systems is often not useful: there is measurement bias,
  which goes back to presumptions about what the user wants.
  counterexample: lots of compiler papers have been published about how
  to optimize matlab programs. a fine thing to do, but i want to try a
  different approach. our approach to the problem is highly
  under-explored.
3 problems with speeding up existing languages:
  - what to specialize on, what to dispatch on? e.g. should something be
    specialized on a constant integer value or just on Int? (see the
    sketch after this list)
  - during analysis, types only get bigger; there are no types in
    contravariant position to narrow them.
  - ultimately (and somewhat ironically) you end up with different
    mechanisms for specialization and selection, just like in C++. the
    compiler might end up using multiple dispatch internally, but you
    can't have it yourself. not optimal.
  at the other end of the spectrum, you might be using very powerful
  dispatch (e.g. predicate dispatch, pattern matching) that's not very
  useful to the compiler.
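a hedged Julia sketch of the first problem dissolving when
specialization and selection are one mechanism (step is a made-up
function name): wrapping a constant in Val lifts it into the type
domain, so "specialize on the constant or just on Int?" becomes an
ordinary dispatch decision, available to the user and not only the
compiler.

    # dispatch on the type Int: one method, one generic specialization
    step(n::Int) = n + 1

    # dispatch on a *specific* constant by lifting it to the type level
    step(::Val{0}) = "base case"
    step(::Val{N}) where {N} = step(Val(N - 1))

    step(5)        # selects the Int method
    step(Val(3))   # per-constant specialization, resolved by dispatch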
what's missing is descriptive information. it might be added later, e.g.
as a system of annotations augmenting a partial evaluator. but
annotations are inert; they don't do anything except improve
performance. the information is also just nice to have: you can talk
about it, refer to it.

maybe measure: average % of methods applicable per call site, based on
static information (a rough estimate is sketched after this list):
  functions:      1/1
  class-based OO: n/n, where n = # of subclasses, and n is fairly small
  overloading:    1/n
  julia:          (small number)/(large number)
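a quick way to eyeball julia's ratio in the REPL (counts vary by
version; this is an estimate, not a benchmark): methods(f, sig) filters
the method table by a static signature.

    # denominator: all methods of + (large, in the hundreds)
    length(methods(+))

    # numerator: methods statically applicable at a call +(::Int, ::Int)
    # (small)
    length(methods(+, (Int, Int)))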
solve the equation:
  dispatch power = program analysis power
the answer is semantic subtyping. this finds local optima in the design
space:
  - more powerful dispatch => ok, but the compiler can help less.
  - less powerful dispatch => the compiler could help, but you're
    leaving power on the table.