
Efficient Methods

I just came to a realization the other day and wish to share a bit about a certain aspect of my personality that has implications in education and engineering in general.

I’m what many would call a perfectionist. I like to format my code precisely, following a deterministic set of rules. I find precision work enjoyable (like fine-tuning the position of a single-mode optical fiber to achieve maximum coupling), and if it were not for the shakiness of my hands under pressure, I might have more seriously considered surgery as a profession. I like to format my documents precisely, avoiding “hacks” as much as possible. In every project I did from elementary school through high school, and even into college, I tried as hard as I could to make things neat, consistent, and error-free. When learning something, I tried to understand every detail possible, and I refused to move on if I got stuck at any point. It wasn’t until graduate school that I learned the art of “assuming the answer,” not because I was too dense to understand it before, but because my mind never even considered it; it was simply out of character.

Why am I telling you this? I understand that there is more to life than following a set of arbitrary rules or striving for an arbitrary definition of “perfection.” But I want to set the context for my revelation so as to accentuate the impact it had on my perspective on education and engineering.

Firstly, my epiphany. I’ve realized that having a stubborn intellect like the one described above is not always beneficial to the acquisition of knowledge, nor is it always beneficial in the creative process. The best way is not always the diligent way. “Well, duh!” you say. But consider this for a moment: many of us are under the impression that we can best make things when we understand how they work inside. So what happens when we don’t understand something? Does that mean quality must suffer? I’m being vague on purpose, because I believe the black box is applicable far more generally than I might paint it in the following discussion.

Education: Specifically, mathematics. Were it not for my insistence on learning the reason behind every concept, to appease my intuition and satisfy my logic, I would not have such a good understanding of basic physics (mechanics and E&M), algebra, or calculus. On the other hand, there are many “leaps of faith” one must take to learn more advanced topics. I was forced to adapt my learning style in graduate school when I had trouble getting through the material in a timely manner. I did not have the luxury of sitting around all day, repeating a concept to myself and trying to visualize it until I grokked it. I had to accept unintuitive definitions and use symbolic manipulation to arrive at physical results. The mind simply cannot visualize past 3 or 4 dimensions all at once.

So what? Well, when you think about it, even basic mathematics like algebra is based upon definitions. What we feel is “common sense” has been reduced to a set of precise definitions and axioms upon which every lemma, theorem, and corollary is proven. It would benefit younger students obsessed with figuring out the reason for everything to accept that “it just works” or “the math comes out that way” are perfectly legitimate reasons. I’m not saying every result should come down to that “excuse,” but when the going gets tough and concepts get pushed beyond the envelope of imagination, it is fine to let go, assume that it works, and move on.

Engineering: Specifically, programming. Encapsulation and object-oriented programming (OOP) were among the greatest ideas in programming. They allowed unordered complexity to be tamed by strict enforcement of modularity. Modules became black boxes that knew how to take care of themselves, making maintenance much, much simpler. And you know what else is cool? Fork. Yes, the *nix fork(). Every other process descends from a single process called “init,” which is created for you when your system starts up. There is no need to write a separate initialization routine for every process. Just call fork() in the current one, look at the return value, and you know which one is the parent and which one is the child. So elegant!
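To make that concrete, here is a minimal C sketch of the fork() pattern described above. It uses only standard POSIX calls; the printed messages are just for illustration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();  /* duplicate the current process */

    if (pid < 0) {
        perror("fork");  /* fork failed; no child was created */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* fork() returns 0 in the child */
        printf("child:  pid=%ld\n", (long)getpid());
    } else {
        /* fork() returns the child's pid in the parent */
        printf("parent: pid=%ld, child=%ld\n", (long)getpid(), (long)pid);
        wait(NULL);  /* reap the child to avoid a zombie */
    }
    return 0;
}
```

The program is written once; the single fork() call splits it into parent and child, and each takes its own branch based on the return value. No per-process initialization routine is ever rewritten.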

So what? Well, I have always thought that to be a good programmer, one needs to know all aspects of programming, including how to do things from scratch. But in the example of OOP, and in the example of fork(), the programmer using them does not need to know how they work. The goal of engineering is to solve problems, and solving complex problems requires good management of complexity. Copying and pasting is a viable solution (issues of intellectual property aside; I’m just talking about the engineering portion), so long as what you’re copying is well-defined, well-made, and applicable. At work, I had to make a whole bunch of graphs. I could have made each one separately, making sure that every aspect of its layout was perfect. I could also have created a “clean slate” template and applied it to my data. Instead, out of laziness, I just copied and pasted a well-done graph and changed its data series. Upon further reflection, I realized I wasn’t even introducing inefficiency: the graphs don’t get bigger each time I copy one. In fact, copying and pasting in this case was probably the optimal method for efficiency, quality, and size. And now I can justify it by referencing fork(). The *nix guys did it. It must be a good “design.”

So the moral of the story? Not all “lazy” methods are bad. That’s it. =]