Myth: Technical excellence is both necessary and sufficient for real success.
Reality: Technical excellence is only really interesting to technical people, unless it fills a real need in a way that non-technical people recognize. Outside of that situation (which is honestly very rare), spending a lot of effort marketing a technically crappy solution is more likely to be successful than spending an equal amount of effort solving the same problem in a way that makes any kind of technical sense.
Many really major technologies, particularly the ones with the backing of a single big corporation, get away with being really bad because of the money flowing into marketing (Apple, Microsoft, Oracle, and Sun are notable for marketing stuff that's clearly technically inferior to the competition and yet continues to win out). Other major technologies get away with being crappy by being inexplicably popular or trendy with a whole generation of people early in their careers; those people write a lot of mediocre code, and either move on as they gain experience or cling to their first language and defend it while refusing to grow (basic, perl, php, ruby on rails, and arguably unix in the early days all fall into the category of tools that gained way more acclaim than they were worth by being adopted by people who were technically inclined but inexperienced; of these, only unix has really significantly graduated past its larval state).
Myth: Performance matters (or) performance doesn't matter
Reality: Performance only matters when it matters.
You can't generalize about it, because it's always going to be very situation-specific. Putting too much effort into performance trades programmer time for machine time in ways that can be unexpected -- super-optimized code is often fragile, unmaintainable, and a source of technical debt, despite being the result of even more deep thought and design work than usual -- but putting too little effort into optimization often means redesigning or rewriting large portions of the code later. So, determine the level of efficiency that's actually necessary, and aim for it during the initial development. While it's easy to optimize individual hotspots after development, that doesn't help you if the entire thing is architected to take k^n time based on whatever solution first occurred to the programmer, when a solution of similar implementation complexity would have taken log(n) time -- in that case, you can expect to keep approximately zero percent of the code you wrote after the first pass of optimization.
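As a toy illustration (my example, not from the original; the gap here is exponential versus linear rather than k^n versus log(n), but the point is the same): no amount of hotspot tuning saves a design whose overall shape is exponential, while a design-level change of roughly the same amount of code fixes it outright.

```python
from functools import lru_cache

def fib_naive(n):
    # The "first solution that occurred to the programmer":
    # recomputes the same subproblems, so it takes exponential time.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same logic, but each subproblem is computed once and cached,
    # so it takes linear time. A design choice, not a hotspot tweak.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Micro-optimizing the body of fib_naive (cheaper additions, fewer function calls) leaves it exponential; only restructuring the algorithm changes the complexity class.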
Myth: It's just a temporary hack; I'll fix it later (or, it will be replaced later anyway)
Reality: In personal projects, temporary hacks do get fixed (and when they don't, it's generally OK).
In enterprise, you often won't get put on solving a problem unless it's hurting profits, so a temporary hack that works will never be replaced. Even if that temporary hack causes more problems down the line, whoever manages it will have an institutional bias toward monkey-patching that problem with another hack. This is why enterprise systems are massive hairballs of technical debt and poor life decisions. Basically, if you're getting paid to fix it and you have some leeway in how much of a fix you actually do, it's in your best interest (if you care about the company's profits more than your job security) to spend a little longer fixing it in a way that minimizes technical debt, especially if you can manage to also replace several other hacks in the process.
Myth: It's good to write clever code.
Reality: Writing clever code is a form of 'expensive signalling', like buying an expensive but unreliable sports car or wearing expensive but easily damaged clothing.
Clever code can only ever be maintained by someone at least as clever as the person who wrote it, and the circumstances under which maintenance occurs (along with the nature of debugging, which is necessarily about identifying a mismatch between one's mental model of the code and how the code actually works) mean that if the code is sufficiently clever, even its author is unlikely to ever be able to fix even a fairly trivial problem with it. Clever code is only good if it's so perfect that it need never be considered again, except as a work of art to be admired by lesser programmers; even then, it's a good idea not to put it in anything important.
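A small invented example of the trade-off: both functions below compute the same thing, but only one of them can be debugged at 2 a.m. by whoever inherits it.

```python
# "Clever": dense, signals skill, and opaque in a debugger.
is_leap = lambda y: not (y % 4 or (not y % 100 and y % 400))

# Boring: the same rule, spelled out so a mismatch between
# mental model and behavior is easy to spot.
def is_leap_year(year):
    if year % 400 == 0:
        return True    # e.g. 2000
    if year % 100 == 0:
        return False   # e.g. 1900
    return year % 4 == 0
```

The one-liner works, but fixing a subtle bug in it means mentally re-deriving the whole truthiness dance; the boring version carries its reasoning on its face.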
Myth: Tutorial-style documentation is sufficient.
Reality: Nobody using your code is necessarily going to want to use it for exactly what you think it should be used for.
Unless your use case and theirs are perfectly aligned, tutorial-style documentation is probably useless to them, and whoever is stuck using the software is going to need to either determine the behavior empirically or examine the code. And if you wanted everybody to just read the code, why write documentation to begin with? If you're going to write some tutorial-style documentation, at least combine it with auto-generated API documentation and make developers hate you a little bit less.
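One low-effort way to get that API documentation, sketched here for a hypothetical Python codebase (the function and its name are invented for illustration): write real docstrings that state contracts, not just happy-path usage, and let a standard tool like pydoc or Sphinx render them.

```python
def chunk(seq, size):
    """Split seq into consecutive lists of at most size items.

    Arguments:
        seq: any sequence (list, tuple, string, ...).
        size: maximum chunk length; must be a positive integer.

    Returns:
        A list of lists; the final chunk may be shorter than size.

    Raises:
        ValueError: if size is not positive.
    """
    if size <= 0:
        raise ValueError("size must be positive")
    return [list(seq[i:i + size]) for i in range(0, len(seq), size)]
```

Running `python -m pydoc yourmodule` (or pointing Sphinx's autodoc at it) turns docstrings like this into browsable reference documentation for free, so the tutorial can stay a tutorial.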