This is a good question to illustrate to beginners just how complicated software gets.
Imagine this:
You just got a job at your dream company, and barely passed the interview. This company is big, let’s say it’s Apple. Your first day working on code, you’re asked to implement a solution to filter out junk data that keeps showing up in the logs. Of course you want to do it efficiently, and you know your manager is keeping a close eye on you to make sure you can keep up with the team.
You read the code, and see that all you need to do to implement the solution is to change some poorly named, seemingly useless variable deep in the source code. You change the variable, all the tests pass and your implementation looks good. Nice!
A month later, a server cluster in a different country loses power and shuts down, causing a system reboot.
For some reason, when the servers reboot, the data from before the crash cannot be found. Apparently that variable you changed was actually supposed to help locate the data using the crash logs, for countries whose languages don't use standard English alphanumeric characters. No wonder there was junk data in the logs of the ASCII (standard English) build. Since the variable was changed, the data from the servers can't be recovered.
It costs Apple $3 million in downtime. Wall Street estimates the scare caused Apple's stock to drop by 3%, which works out to roughly a $30 billion drop in company value.
Of course we as readers know what caused the bug, but who knows how long before Apple finds the cause? Hopefully no servers go down before then.
That other developer should have made it clearer that the variable was extremely important. Why didn't they leave a comment that said // DO NOT CHANGE?
They were probably like you, just trying to impress the team with a fast implementation. Hey it worked, the tests passed, the implementation was solid. Just like yours.
Information between developers gets constantly lost. Misunderstandings, miscommunications, and freak accidents are inevitable. Sometimes you’re working on code written by interns who don’t even know English that well. You never know.
These problems are such a reality in software that much of the forethought in designing a language goes toward them. Bugs are easier to prevent and fix than human mistakes, and software is full of bugs, human mistakes, and plenty of odd combinations of the two.
To be honest, my example isn't very realistic at all, because any large company, especially Apple, has far more tools and checks in place to prevent these mistakes; otherwise this type of catastrophe would happen multiple times an hour at their scale. These mistakes and mix-ups really do happen that often. The executives at Apple know as well as anyone that their developers cannot be expected to understand all the intricacies of the codebase.
So, to answer your question, it's to prevent people from changing things that shouldn't be changed, even though at first glance you'd think they should probably know better anyway. It also reduces the likelihood that a freak accident changes the variable, like the server's clock becoming unsynchronized and corrupting the wrong data on the system stack. Const variables, in my experience, are generally used to set configuration values for a defined scope.
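To make that concrete, here's a minimal sketch in TypeScript (the names and values are made up for illustration, not taken from any real codebase) of a const pinning down configuration for one module:

```typescript
// Hypothetical log-filtering config; the names here are illustrative only.
const MAX_LOG_RETENTION_DAYS = 30;   // how long entries are kept
const JUNK_LOG_PATTERN = /^DEBUG:/;  // pattern used to drop junk entries

function filterLogs(lines: string[]): string[] {
  // Keep only the lines that don't match the junk pattern.
  return lines.filter((line) => !JUNK_LOG_PATTERN.test(line));
}

// JUNK_LOG_PATTERN = /^TRACE:/;  // compile error: cannot assign to a constant
```

The point is simply that the compiler refuses the reassignment, so a hurried teammate (or a freak accident in some distant code path) can't silently change the value.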
Actually, I just thought of a very similar recent example, where I helped my housemate with a problem his company hit while upgrading to Rails 5. A global variable had been set to False, and the file hadn't been changed in over a year. On the Rails upgrade, the build broke unless you changed it to True. Problem is, nobody at the company knew what the variable did. This company is a major contributor to the Ruby/Rails community and codebases, so you can imagine they were very stumped. My housemate committed the change to True, and I haven't heard about it since. *knocks on wood*
EDIT: Here’s something that really stuck with me as a beginner. Think about an iPhone…it freezes or crashes for seemingly no reason maybe once every 6 months? Maybe more, maybe less? I’d guess the average is closer to every 3 months, but we’ll give them the benefit of the doubt and say once per year to keep the math simple. Currently there are an estimated 228 million iPhones in use worldwide. If an iPhone crashes once per year on average, that means roughly 26,000 Apple customers experience a crash every hour for “no reason.” That’s over half a million bug reports per day that will probably never be solved and can be attributed to “unpreventable freak accidents.” This means Apple has to put a lot of time and effort into making sure iPhone software is pretty much unaffected by completely unexpected, unexplainable crashes, and it appreciates any help it can get from language designers to mitigate the damage.
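If you want to sanity-check that back-of-the-envelope math (using the same guessed figures as above, not real Apple data), it's just:

```typescript
// Rough crash-rate math, using the guessed figures from above.
const iphonesInUse = 228_000_000;   // estimated iPhones in use worldwide
const crashesPerPhonePerYear = 1;   // assumed average, for simplicity

const crashesPerHour = (iphonesInUse * crashesPerPhonePerYear) / (365 * 24);
const crashesPerDay = crashesPerHour * 24;

console.log(Math.round(crashesPerHour)); // ~26,000 crashes every hour
console.log(Math.round(crashesPerDay));  // ~625,000 crashes every day
```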