
There are three kinds of difficult concepts in programming.

1. The basic idea is simple, but the devil is in the details.

Binary search is considered a fundamental and relatively simple algorithm, but it's surprisingly tricky to write—most programmers will make mistakes when they attempt it. One programmer examined twenty textbooks and found that fifteen had buggy binary search implementations. The problem is simply that binary search has a lot of edge cases and opportunities for off-by-one errors, and casual testing won't discover all of them.
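To see where those off-by-one errors hide, here is a minimal Python sketch of binary search with comments marking the spots where most buggy versions go wrong:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:                  # <=, not <: a one-element range must still be checked
        mid = lo + (hi - lo) // 2    # written this way to avoid overflow in fixed-width languages
        if items[mid] < target:
            lo = mid + 1             # must move past mid, or the loop may never terminate
        elif items[mid] > target:
            hi = mid - 1
        else:
            return mid
    return -1                        # empty input and missing targets both land here
```

Casual testing with a target that happens to be present won't exercise the empty-list, single-element, and not-found cases, which is exactly where the cited bugs live.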

Concurrency seems like a simple idea: have the computer do several things at once. The trouble comes when you realize that two tasks might be using the same data at the same time, stomping all over each other's changes or reading half-finished edits. We have techniques to ensure that only one task uses a piece of data at a time, but it's easy to forget to use them or to miss some subtle use. Our current best approach is to make as much data as possible either read-only or accessible to only one task, but this merely reduces the problem—it doesn't solve it.
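A small Python sketch of the "only one task at a time" technique: `counter += 1` is really a read, a modify, and a write, and two threads can interleave those steps and lose updates unless a lock serializes them.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # remove this line and updates can silently be lost,
            counter += 1    # because read-modify-write is not one atomic step

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; possibly less without it
```

The failure mode the article describes is exactly what makes this hard: forgetting the `with lock:` on even one access path reintroduces the bug, and casual testing may never trigger it.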

Distributed computing is like concurrency but a thousand times worse: the components of the system can only communicate in very limited ways, communication is unreliable and includes long and variable delays, and there are a lot of opportunities for components to fail, sometimes without other components realizing there's anything wrong. Our current best approach is to design systems so that components can fail gracefully, save lots of redundant information so that things can be replayed and reconciled later, and monitor systems carefully so that humans can intervene when something goes wrong. Even then, the very best programmers, using the very most modern techniques, designing very carefully, and solving very simple problems, often make serious mistakes. It's a miracle that distributed systems work at all.
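One of the graceful-failure techniques mentioned above—retrying an unreliable call with backoff—can be sketched in a few lines. `request` here is a hypothetical zero-argument callable standing in for a network call; real systems layer timeouts, idempotency checks, and monitoring on top of this.

```python
import random
import time

def call_with_retries(request, max_attempts=5, base_delay=0.1):
    """Retry an unreliable remote call with exponential backoff and jitter.

    `request` stands in for any network operation; it raises an exception
    to signal a failed or timed-out attempt.
    """
    for attempt in range(max_attempts):
        try:
            return request()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up so a supervisor (or a human) can intervene
            # Back off exponentially, with random jitter so many clients
            # retrying at once don't all hammer the server in lockstep.
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

Even this tiny sketch hints at the difficulty: a retried request might have actually succeeded the first time, so the operation must be safe to repeat—one of the many subtleties that make distributed systems so error-prone.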

Information security faces unique challenges because the enemy here is not an accidental mistake or a random failure, but a malicious, intelligent adversary. If a system is perfectly secure as long as three unlikely events don't happen at the same time, your adversary will cause those three events to happen, even if it takes millions of attempts. If your cryptography is mathematically perfect but your hardware leaks secrets through radio noise, differences in processing speed, or changes in power draw, your adversary will use those things to steal your secrets. It's a merciless environment to work in.
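The "differences in processing speed" leak is concrete enough to show in a few lines. Comparing a secret token with ordinary `==` returns as soon as a byte differs, so response time tells an attacker how many leading bytes they guessed correctly; Python's standard library provides a comparison that examines every byte regardless:

```python
import hmac

def tokens_match(expected: bytes, provided: bytes) -> bool:
    # A naive `expected == provided` short-circuits at the first mismatched
    # byte, leaking a timing signal about the length of the correct prefix.
    # hmac.compare_digest takes (nearly) constant time for equal-length inputs.
    return hmac.compare_digest(expected, provided)
```

This is the flavor of the whole field: the straightforward code is functionally correct, and it is still a vulnerability.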

2. The concept is so different from everything you know, it's hard to fit it into your worldview.

Functional programming demands that all data move through your program by passing it to and returning it from functions, not assigning to variables that may be changed later or modifying memory that may be visible to other code. Any program can be written as a functional program—in fact, compilers often rewrite procedural code into a more functional style to allocate registers—but it can be quite difficult, especially at first.
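The contrast can be shown with one tiny task, summing the squares of a list, written both ways in Python:

```python
from functools import reduce

def total_squares_procedural(numbers):
    # Procedural style: a loop repeatedly mutates an accumulator variable.
    total = 0
    for n in numbers:
        total += n * n
    return total

def total_squares_functional(numbers):
    # Functional style: no variable is ever reassigned; the running total
    # flows through the program as function arguments and return values.
    return reduce(lambda acc, n: acc + n * n, numbers, 0)
```

Both compute the same result, but for programmers trained on the first style, restructuring larger programs so that *all* state flows this way is the hard part.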

Object-oriented programming, the technique now taught to beginning programmers, is actually very difficult for experienced procedural programmers to embrace fully. It's easy to start using objects in certain places, but really adjusting to an all-objects mindset took me about eighteen months, and eight years after I started I'm still learning how to better factor my code.

Functional reactive programming, which encourages you to treat input into your program as a series of signals that are progressively transformed by various functions into output, is new to me (I've been researching it, but haven't tried to use it yet), but so far it has a similar feel: I sort of get the concept, but I don't yet understand how you're actually supposed to use it. I'm sure I will eventually, but it'll take time.
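The "series of signals progressively transformed into output" idea can be loosely sketched with Python generators. To be clear, this is not real FRP—there are no time-varying values or subscriptions, and `key_presses` is a made-up stand-in for an input source—but it conveys the pipeline shape:

```python
def key_presses():
    # Hypothetical input signal: an ongoing stream of keystroke events.
    yield from "hello world"

def uppercase(signal):
    # One transformation stage: map each event to its uppercase form.
    for ch in signal:
        yield ch.upper()

def only_letters(signal):
    # Another stage: filter out events that aren't letters.
    for ch in signal:
        if ch.isalpha():
            yield ch

# Output is defined as a chain of transformations over the input signal.
output = "".join(only_letters(uppercase(key_presses())))
print(output)  # HELLOWORLD
```

In a real FRP system the stages would react to events as they arrive rather than being pulled through once, which is where the conceptual adjustment lies.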

3. It's a really difficult problem, and even experts can only offer imperfect solutions.

Artificial-intelligence problems like computer vision, speech recognition, and natural language processing simply don't have good solutions yet. Research teams can spend decades building systems that still fail 5%, 10%, 20%, or even more of the time. These may be good enough to be useful a lot of the time, but we've all experienced Siri not understanding a restaurant's name or Google Translate butchering an article. Worse, many of these systems are based on statistical "learning" by the system, not on directly programming the computer to perform the task. This makes them difficult to debug and improve.

Cryptography is more mathematics than programming—and obscure mathematics at that. A complete cryptographic system might draw on information theory, computational complexity theory, number theory, group theory, the geometry of elliptic curves, and other fields that you and I have probably never heard of. It also has a menu of specialized attacks you must defend against—and it's always possible that someone will invent a new attack five or ten years down the road. In fact, a single discovery of a new mathematical algorithm could destroy the security of most of our current cryptosystems. Nobody knows if such an algorithm exists, but if it did, it could destroy much of the world economy in one fell swoop. (Fortunately, we've been looking for it for 3,000 years, and haven't found it yet!)
