Sunday, 20 January 2013

soft question - Memorizing theorems

I tend to agree that, nine times out of ten, memorizing theorems is a Bad Thing. The best goal (IMO) is to get comfortable enough in your area and at your level that you can intuit whether or not some statement is true without necessarily knowing a proof (or an ad-hoc counterexample). Of course, you also want to know the best tools in your area, but that often comes with understanding it on a deeper level.

That said, yeah, there are often some technical theorems that are useful as black boxes (examples: Wagner's/Kuratowski's theorem in graph theory, the classification of finite simple groups, maybe structure theorems in areas closely related to the one in which you work). But if you use them often enough that it's an actual hassle to look up their statements, and they're reasonably simply stated, you'll probably memorize them through sheer force of habit.

So how do you get a feel for your corner of mathematics? Well, you just do math. When you read a paper, try to stay a step or two ahead of the author. (I don't remember who said this -- maybe it was on Terry Tao's excellent career-advice page -- but it's a can't-lose proposition: if you predict correctly, you get the satisfaction of being right, and if you predict incorrectly, you get to see something new and unexpected.) Work textbook problems that don't give you the answers. Ask your own questions and try to answer them; if you can't answer them yourself, go to the literature! (Or here, although preferably after the literature.)

Maintain a list of motivating examples and counterexamples in your area. For instance, I think a lot about graph theory; if I want to see whether a conjecture holds for all graphs, one of the first things I'll do is ask whether it's true for a random graph. Next, I might ask if it's true for the Petersen graph, for the 5-cycle, for complete graphs or trees, or for sparse random graphs (a.k.a. expanders, for all intents and purposes). If I can prove my conjecture -- or even give a heuristic argument -- in these special cases, then I can start wondering whether it holds in general.
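To make this concrete, here's a minimal sketch of that workflow in Python, assuming the networkx library. The conjecture being tested -- that alpha(G) * omega(G) >= |V(G)| -- is just a stand-in I've chosen for illustration (it's the bound that holds for perfect graphs but fails in general):

```python
import networkx as nx

def clique_number(G):
    # omega(G): size of a largest clique; exponential in general,
    # but fine for small motivating examples like these.
    return max(len(c) for c in nx.find_cliques(G))

def independence_number(G):
    # alpha(G): a largest independent set in G is a largest clique
    # in the complement of G.
    return clique_number(nx.complement(G))

def conjecture_holds(G):
    # Stand-in conjecture for illustration: alpha(G) * omega(G) >= |V(G)|.
    return independence_number(G) * clique_number(G) >= G.number_of_nodes()

# The usual suspects from the paragraph above.
test_graphs = {
    "random G(20, 1/2)": nx.gnp_random_graph(20, 0.5, seed=0),
    "Petersen graph":    nx.petersen_graph(),
    "5-cycle":           nx.cycle_graph(5),
    "complete K_6":      nx.complete_graph(6),
    "binary tree":       nx.balanced_tree(2, 4),
}

for name, G in test_graphs.items():
    print(f"{name}: {'holds' if conjecture_holds(G) else 'FAILS'}")
```

Running this flags the 5-cycle (alpha = omega = 2, but n = 5) and the Petersen graph (alpha = 4, omega = 2, n = 10) as counterexamples -- exactly the kind of cheap negative evidence that saves you from chasing a false conjecture.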

Try to understand more than one way to approach your subject; not everyone who's worked in it has thought about things the same way, and you should be flexible enough to accommodate their intuitions. Going back to graph theory, there are several different ways to view a graph. The simplest (and most standard) is as a symmetric binary relation on a set. But you can also think of a graph geometrically, as "dots connected by lines," or topologically, as the 1-skeleton of a simplicial complex -- not coincidentally, these two definitions are closely related.

Algebraically, you can think of a graph as a certain kind of groupoid, which is closely related to its definition as a symmetric matrix. (This is actually also related to the topological/geometric definition, although less obviously -- the groupoid is a discrete version of the fundamental groupoid of a space.) A separate algebraic approach is to think of graphs as "generalized Cayley graphs," which seems silly but actually pays off big-time when you work with graph products. In computer science, the best way to represent a sparse graph is often as an adjacency or incidence list; this representation is what makes many graph algorithms efficient, as in the sketch below.
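As a concrete footnote to the computer-science viewpoint, here's a minimal sketch (the example graph and variable names are my own) contrasting the symmetric-matrix definition with the adjacency-list representation:

```python
from collections import defaultdict

# A small example graph on 4 vertices, given as an edge list.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

# Viewpoint 1: a symmetric 0/1 matrix; O(n^2) space regardless of density.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Viewpoint 2: an adjacency list; O(n + m) space, so much cheaper when the
# graph is sparse, and iterating over a vertex's neighbors is immediate.
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

# Both representations describe the same graph, e.g. the degree of vertex 0:
assert sum(matrix[0]) == len(adj[0]) == 3
```

The trade-off is the standard one: the matrix answers "is uv an edge?" in constant time, while the list lets algorithms like BFS and DFS run in time proportional to n + m rather than n^2.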
