Nope, I don't mean the difference between word thinkers and quantitative thinkers. Been there, done that. Nor the difference between different levels of expertise on technical matters; again, been there, done that.

No, we're talking about the *crème de la crème*: experts who can adapt to changing situations and comprehend complexity across different fields, by being deep understanders.

Because any opportunity to mock those who purport to educate the masses by passing along material they don't understand should be seized, let us talk about Igon Values... ahem, eigenvalues and eigenvectors.

Taught in AP math classes or freshman linear algebra, the eigenvectors $\mathbf{x}_{i}$ and associated eigenvalues $\lambda_{i}$ of a square matrix $\mathbf{A}$ are defined as the solutions to $\mathbf{A} \, \mathbf{x}_{i} = \lambda_{i} \, \mathbf{x}_{i}$.
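That definition can be checked numerically in a few lines. A quick sketch in NumPy (my choice of tool here, not anything the post prescribes), using a small symmetric matrix so the eigenvalues come out real:

```python
import numpy as np

# A small symmetric matrix (symmetric, so its eigenvalues are real).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining property A x_i = lambda_i x_i for each pair.
for i in range(len(eigenvalues)):
    x_i = eigenvectors[:, i]
    lam_i = eigenvalues[i]
    assert np.allclose(A @ x_i, lam_i * x_i)
```

For this particular matrix the eigenvalues are $3$ and $1$, with eigenvectors along $(1,1)$ and $(1,-1)$.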

Undergrads learn that these capture something about the structure of the matrix, that the matrix can be diagonalized using them, and how they appear in other places (principal component analysis and network centrality, for example).
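The diagonalization undergrads learn, $\mathbf{A} = \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{-1}$, can also be verified directly, along with one of its standard payoffs, cheap matrix powers. A sketch, again in NumPy and with the same illustrative matrix as above (my choice, not the post's):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, V = np.linalg.eig(A)  # columns of V are eigenvectors

# Diagonalization: A = V diag(lambda) V^{-1}
Lambda = np.diag(eigenvalues)
assert np.allclose(A, V @ Lambda @ np.linalg.inv(V))

# One payoff: powers of A reduce to powers of the eigenvalues,
# A^k = V diag(lambda^k) V^{-1}.
k = 5
assert np.allclose(np.linalg.matrix_power(A, k),
                   V @ np.diag(eigenvalues**k) @ np.linalg.inv(V))
```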

But those who use these and other math concepts on a day-to-day basis, who get to really know them, develop a deeper sense of what the concepts mean. There's something important about how these objects relate to each other.

After a while, one realizes that there are structures and meta-structures that repeat across different problems, even across different fields. Someone once remarked that, after long experience in one engineering field (say, electrical), adapting to another (say, mechanical) revealed that while the nouns changed, the verbs were very similar.

This is what deep understanding affords: a quasi-intuitive grokking of a field, based on the regularities of knowledge across different fields.

For example: while many who have taken a linear algebra course in college may vaguely recall what an eigenvalue is, those who understand the meaning of eigenvalues and eigenvectors for matrices will have a much easier time understanding the eigenfunctions of linear operators:

The structure [something that operates] [something operated upon] = [constant] [something operated upon] is common, and what it means is that the [something operated upon] is in some sense invariant with the [something that operates], other than the proportionality constant. That suggests that there's a hidden meaning or structure to the [something that operates] that can be elicited by studying the [something operated upon].
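A concrete instance of that template, with the derivative operator as the [something that operates] and the exponential as the [something operated upon]:

$$\frac{d}{dx}\, e^{kx} = k \, e^{kx}$$

The exponential $e^{kx}$ is invariant "in direction" under differentiation, scaled only by the constant $k$, which is precisely why exponentials dominate the study of linear differential equations.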

And this structure, mathematical as it might be, has a lot of applications outside of mathematics (and not just as a mathematical tool for formalizing technical problems). It's a basic principle of understanding: what is invariant under a transformation tells us something deep about that transformation. (Again, invariant in "direction," so to speak, possibly up to a change of size or even sign.)

And this is itself a meta-principle: studying what changes and what stays invariant in a particular set of problems gives some indication of the latent structure of that set of problems. That latent structure may be a good place to start when trying to solve problems from the set.

Yep, really dumbing down this blog, pandering to the public...