The software engineer and computer scientist Fred Brooks, also well known for his writings on "the mythical man-month", wrote in 1986 that there is "no silver bullet" in software development (another phrase he is famous for): no single technique or tool that would, by itself, dramatically improve productivity, reliability or simplicity.
Of course this hasn't stopped people from trying. During the past 24 years a great many software development techniques and methodologies have been created which promise significant improvements in productivity and quality. This is nothing new, of course: software development models have been proposed for as long as there has been software development (one of the most famous being the so-called "waterfall model" from 1970, nowadays considered antiquated and obsolete by most people).
Ok, I'm being rather unfair here. The people behind these software development techniques are (as far as I know) not claiming that their techniques are the perfect "silver bullet" which can be used in any situation in any project and will always give a perfect result. It's probably more like "here are some ideas, see if they could be used in your current software project".
However, the real problem is when a project manager or other person in charge of a software development project (who might or might not have a good grasp of programming) gets infatuated with a specific development model he has been reading about, and tries to shove it into every project he participates in, even if the technique in question would actually be detrimental to the development process in that particular project. This is especially bad if this person has understood the technique poorly, or is taking only bits and pieces of it without really understanding the whole and its overall idea.
For example, such a project manager might have been "evangelized" into believing that "test-driven development" is the "silver bullet" which will somehow automatically produce higher-quality programs with fewer bugs and flaws. While this may indeed be the case when properly applied by a competent development team with significant experience (in software development in general, and in test-driven development in particular), it's nevertheless way too easy for an inexperienced manager or developer to get so blinded by some details of the technique that in the end the whole idea actually becomes detrimental to the whole process.
For instance, if the person or team who writes the module specification in the form of automated tests is not the same person or team who implements the module, all kinds of miscommunication can happen. It's impossible to know in advance all the requirements and details a module will end up having to satisfy, especially with larger and more complicated modules, so writing comprehensive tests for a module before the module has been implemented is next to impossible (with the possible exception of the smallest and simplest modules). In practice the pre-written tests will often be lacking, covering only a small portion of what the module actually ends up doing, and new tests would have to be created to test features of the module which the developers only became aware of during its implementation.
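As a purely hypothetical sketch of how this tends to play out (the configuration-file module, its parse_config function and the tests below are all invented for illustration), the pre-written "specification by tests" covers only the cases the spec author could think of up front:

    import unittest

    # Hypothetical spec-by-tests for a configuration-file module, written
    # before the module itself. The implementation shown here is the one the
    # developers eventually wrote; it also grew support for comments, and
    # later for sections and includes, none of which the tests exercise.
    def parse_config(text):
        result = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
        return result

    class ParseConfigTests(unittest.TestCase):
        def test_single_assignment(self):
            self.assertEqual(parse_config("name = foo"), {"name": "foo"})

        def test_empty_input(self):
            self.assertEqual(parse_config(""), {})

        # No tests for comments, sections, includes or malformed lines,
        # because those requirements only surfaced during implementation.

    if __name__ == "__main__":
        unittest.main()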
You can imagine whether such tests will ever be created, especially if the team is on a very busy schedule with tight deadlines. In the worst case this may result in a program where only a small fraction is actually being tested (even though "test-driven development" aims for very large test coverage), and where large parts may never get tested at all due to time or other constraints.
One could fairly argue that even these tests are better than no tests at all (which is way too often the case). I agree, but my point is that the original idea of "test-driven development" being some kind of automatic quality assurance does not always materialize, and relying too much on it can be a bad idea.
(Of course another big problem is that some of the pre-written tests will have become obsolete by the time the module is implemented, because it was impossible to predict how the implementation would evolve into its final form. This may result in some of the tests wrongly failing, or worse, succeeding but testing the wrong things.)
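Continuing the invented example above, such a stale test might look like this: it still passes, but the behaviour it pins down is dead code, so a green test run says nothing about the parser that actually shipped.

    import unittest

    # Hypothetical stale test: an early version of the spec said keys should
    # be lower-cased, so this helper (and its test) were written up front.
    # The final parse_config() preserves case and never calls the helper.
    def normalize_key(key):
        return key.strip().lower()

    class StaleSpecTests(unittest.TestCase):
        def test_keys_are_lowercased(self):
            # Passes happily, yet it only verifies dead code.
            self.assertEqual(normalize_key("  Name "), "name")

    if __name__ == "__main__":
        unittest.main()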
Another example is the "agile software development" craze. The ideas (as a whole) may be sound, but they are often applied in the wrong way, partially, or in situations where they don't really fit.
For instance, a project manager might get infatuated with an idea like "rapid prototyping". The idea is that rather than the project first going through a large requirements analysis and interface specification stage (which in large projects might take weeks or even months before a single line of code is written), these specifications are grown, tested and fine-tuned dynamically by creating small "quick-and-dirty" prototype programs, which allow developers and testers to try out the concepts very early on and throughout the project, so that mistakes and design flaws can be corrected early. The design and development thus becomes much more flexible than if these mistakes were caught very late in the project (when the majority of the program has already been implemented) and would thus require an extensive amount of back-tracking (changing specification documents, major code rewrites, etc.)
The idea might sound fancy (which is why so many developers and project managers get infatuated by it), but in practice it can often lead to a real mess. Writing such "quick-and-dirty" prototypes would, at least in principle, require each prototype to be written from scratch. After all, they are just prototypes for testing how the program would look and feel, not early versions of the final program.
This could easily lead to tons of repetitive and needless work, which is why most programmers won't start each prototype from scratch (and in many cases they would be quite right, as doing so would often mean a lot of wasted work). The most common thing to do is to create the new prototype by modifying the old one (especially if the changes are mostly superficial, or are new features which don't affect existing ones). You can probably see where this is going.
Probably most programmers who have developed a small-scale software project over a very long time have experienced this. The first primitive version of the program is quite lean and clean, with small, easy-to-understand modules and straightforward implementations, all nicely packaged into a well-designed little program. Then you add a small feature, then another, and so on. The first new features barely affect the overall design, so they are easy to add. But as time passes and the number of new features grows from a few to a few hundred, each one "small" enough to just be bolted onto the existing code, at some point you realize that, almost imperceptibly, your lean&clean program has grown into a behemoth. Your nice 200-line modules from the first version have grown into 2000-line modules (and that's if you are lucky and have been conservative about adding features to any single module), usually without any kind of internal hierarchy or design (in other words, "spaghetti code"), and you start having great difficulty understanding your own program because it has grown wildly out of control.
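A tiny, purely invented sketch of the phenomenon (the function and all its parameters are made up for illustration): the prototype's one-liner slowly turns into a grab-bag of flags, with no design behind any individual addition.

    # Version 1, in the first prototype: small and obvious.
    def render_report(records):
        return "\n".join(str(r) for r in records)

    # Many prototypes later: every "small" feature was bolted onto the same
    # function, and every caller passes a different combination of flags.
    def render_report_v20(records, html=False, csv=False, sort_by=None,
                          reverse=False, page_size=None, footer=None,
                          locale="en", legacy_dates=False):
        if sort_by is not None:
            records = sorted(records, key=lambda r: getattr(r, sort_by),
                             reverse=reverse)
        # ...dozens of intertwined branches for html/csv/paging/locales
        # follow, none of them designed, all of them accumulated patches.
        return "\n".join(str(r) for r in records)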
Usually when you find yourself in this situation, the only sane solution is a complete redesign and rewrite of the entire program from scratch (perhaps reusing some of the least horrible pieces of code from the old program). Of course, if your program has by now grown to tens or even hundreds of thousands of lines of code, this is not something you will be very eager to do, especially if you have a tight deadline on your hands. The end result is that you try to keep your giant behemoth of spaghetti code in shape as well as you can, and pray that no big new features will need to be added in the future.
The problem with the whole "rapid prototyping" idea is that it very easily leads to exactly this situation. Unless a complete redesign from scratch is performed from time to time (which can require quite a large amount of time and work), the end result may be a horrible mess that is even worse than what would have resulted from a more "traditional" development model. The user interface may be nice, but try suggesting a major new feature and watch the developers weep... (And mind you, the developers themselves are probably not the only ones to blame here, especially if this whole "rapid prototyping" approach was the idea of a higher-up manager.)
A very similar phenomenon happens with design patterns (which could be seen as somewhat related): a project manager, or even an individual developer, might get so infatuated with a specific design pattern that he tries to shove it into everything he does, even when it doesn't fit at all and, on the contrary, is detrimental to the overall design of the software.
For example, a project manager may become convinced that the new library the team is going to develop must use the "framework" pattern, even if it makes no sense for the library in question (or, more usually, is simply not the best solution to the design problem at hand). The developers may struggle to make the library work like a framework, and if the nature of the library is such that it just doesn't fit that pattern, the result can be awkward, complicated to use and hard to understand.
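To illustrate what the mismatch can look like (a purely hypothetical sketch; the invoice example and all the names in it are invented), here is the same trivial functionality first as a plain library call, then forced into a framework shape where the application has to subclass the library, override hooks and hand its control flow over to it:

    from dataclasses import dataclass

    @dataclass
    class Invoice:
        customer: str
        amount: float

    # As a plain library: one function the application calls when it wants to.
    def format_invoice(invoice: Invoice) -> str:
        return f"{invoice.customer}: {invoice.amount:.2f}"

    # Forced into a "framework": the control flow is inverted, and the
    # application must be written "inside out" around mandatory hooks.
    class InvoiceFramework:
        def load_invoices(self):               # hook the user must override
            raise NotImplementedError

        def on_invoice_formatted(self, text):  # another mandatory hook
            raise NotImplementedError

        def run(self):                         # the framework owns the loop
            for invoice in self.load_invoices():
                self.on_invoice_formatted(format_invoice(invoice))

    class MyApp(InvoiceFramework):
        def load_invoices(self):
            return [Invoice("ACME", 1200.0)]

        def on_invoice_formatted(self, text):
            print(text)

    if __name__ == "__main__":
        print(format_invoice(Invoice("ACME", 1200.0)))  # library style
        MyApp().run()                                   # framework style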
Naturally, once this has become clear, the only solution is to start from scratch and design the library properly (possibly reusing some of the existing code). The whole mess could have been avoided if the proper design had been considered earlier.