I'll start with a hypothetical situation. John works for a software company. He's been a .NET developer for about five years now and has been involved in many web application projects. He has developed remarkable soft skills, and he seems to know how to manage a team.
One day, after a demo to stakeholders, the team John was unofficially (yet for some reason) managing received an urgent change request: the way taxes are calculated for some products had to change. John took responsibility immediately.
At the time, there was a new member on the team, Emma. She was tasked with increasing the unit test coverage, both to familiarize herself with the code and the business behind it, and to make some monthly reports look better.
After a couple of days, Emma stumbled upon John's recent changes. They were simply unreadable. He had transformed a fairly complicated 30-line method into a 120-line monster and introduced dependencies that were untestable and impossible to mock. Emma was shocked. She contacted John and explained the issues she was facing with her tests. “I know, I know, but we have to deliver this as soon as possible. Just let it be for the moment, we’ll fix it later.”
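To make the problem concrete, here is a hedged sketch of the kind of thing Emma ran into. It's in Python rather than .NET, and every name and tax rate is invented for illustration; the point is language-agnostic: a dependency constructed inside a method cannot be replaced by a test double, while an injected one can.

```python
# All names and tax rules below are invented for illustration.

class TaxService:
    """Stand-in for an external dependency (e.g. a web service client)."""
    def rate_for(self, category: str) -> float:
        return 0.19 if category == "standard" else 0.05

# Hard to test: the dependency is constructed inside the function,
# so a unit test has no way to substitute a fake TaxService.
def total_price_untestable(net: float, category: str) -> float:
    service = TaxService()  # hidden, unmockable dependency
    return net * (1 + service.rate_for(category))

# Easy to test: the dependency is injected, so a test can pass a stub.
def total_price(net: float, category: str, service: TaxService) -> float:
    return net * (1 + service.rate_for(category))

class FakeTaxService(TaxService):
    """Test double with a fixed, predictable rate."""
    def rate_for(self, category: str) -> float:
        return 0.5

# With injection, the unit under test is fully under the test's control.
print(total_price(100.0, "standard", FakeTaxService()))  # prints 150.0
```

The second version is no longer or slower than the first; it simply moves the decision of *which* `TaxService` to use out to the caller, which is what makes it mockable.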
Eventually the code was shipped, but no one ever got back to fix that method. It turns out John wasn't very familiar with the concept of unit testing. In fact, no one on that team was, which is why Emma's work was only valued when those reports (that nobody read or cared about) were sent.
Within weeks, John became the official Project Manager. He decided how each and every team member would spend their working hours. Every time someone on John's team announced they would exceed their initial estimate, John's first measure was to drop the unit testing tasks. “We'll do them later,” he would say, but they were done the same way John fixed his code. Never.
After another few months, the whole team faced serious problems. A big release was coming, and the number of defects seemed to be increasing exponentially. The more the application was tested, the more bugs arose. All team members were doing crazy overtime, working very late at night and trying their best to get the package done by release day. In the end they managed to ship a working release at 3:00 AM, a mere few hours before the deadline. What happened over the following couple of months? More than half the team members quit their jobs.
If you asked them why they quit, they would tell you “bad management, poor organization, we had to work overtime”, and so on. However, these were just the natural consequences of something much deeper and not so easy to spot: the poor quality of the code. They all quit because that codebase had become unworkable. The chain reaction went like this:
Bad Code → Bugs → Overtime → Bad Management → More Bad Code, and so on; it’s an infinite loop.
The code is the developers' working environment, and this team's working environment was highly uncomfortable, full of dirt, and very, very dangerous. Even small changes were hard to make; every change affected multiple areas of the complex application. The code was so entangled that maintaining and testing it was a nightmare.
I was part of that team at the time. Looking back now, it’s very clear that I quit because of the poor quality of the code. But what made us write such code?
It was our mindset, which said “deliver now, fix it later” instead of “deliver only if it's high quality, so we don't have to fix it later”. It turns out the quality was lacking at the highest level: our culture. The notion of having maintainable, readable, understandable code just didn't fit into our shared awareness.
We didn't have a set of principles and values that a team is supposed to share. Values that would drive our coding process. Principles that would make our code look the same, no matter which team member wrote it. We didn't have that. Everybody wrote code by their own rules at best, and by no rules at worst.
Because we didn't see the value in having a clean codebase, we triggered that infinite loop that starts with bad code without realizing it. If only we had had some sort of metrics that clearly showed how bad our code was, the whole story would have been very different. Once you actually see something, you can't just ignore it. But there was nothing that showed us the overall picture of our code.
Nobody understood that the purpose of a unit test is both to ensure the correctness of the tested unit AND to ensure the code is clean, easy to read, and easy to maintain. This is why we break our code into units in the first place: to make sure every piece of logic is isolated and reusable, so we can build the application's behaviour nicely and easily by composing these pieces. For some reason, we couldn't understand that code should be easy to read, even though we were spending 80% of our time reading it, sometimes more. I remember a night when a colleague and I spent three hours debugging, only to end up moving a piece of code up one line. Still, we couldn't see the value in readability.
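A minimal illustration of that idea, sketched in Python with invented names: each unit of logic is isolated and verifiable on its own, and the application behaviour is just the composition of those pieces.

```python
# Invented example: small, isolated units composed into behaviour.

def net_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

def apply_tax(amount: float, rate: float) -> float:
    return amount * (1 + rate)

def invoice_total(quantity: int, unit_price: float, tax_rate: float) -> float:
    # The behaviour is built by composing the isolated pieces above.
    return apply_tax(net_price(quantity, unit_price), tax_rate)

# Each unit can be verified in isolation...
assert net_price(3, 10.0) == 30.0
assert apply_tax(30.0, 0.5) == 45.0
# ...and so can the composition.
assert invoice_total(3, 10.0, 0.5) == 45.0
```

Because each function does one thing, a failing test points directly at the broken piece, and reading any one unit takes seconds rather than minutes.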
Now, if the code wasn't easy to read, imagine the tests; they were even harder. So we ended up with these monstrosities that weren't actually testing anything, because the focus wasn't on making them test a scenario, it was on making them pass. By all means.
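Here is a hypothetical Python sketch of the difference (the `discount` function is invented): a test written only to pass, versus a test that actually pins down a scenario.

```python
# Invented example function under test.
def discount(total: float, code: str) -> float:
    """Apply a half-price discount for the 'HALF' code."""
    return total * 0.5 if code == "HALF" else total

# The kind of test we used to write: it exercises the code but asserts
# nothing, so it stays "green" no matter what discount() actually does.
def test_discount_meaningless():
    discount(200.0, "HALF")  # no assertion -> always passes

# A test that describes a scenario and fails if the behaviour changes.
def test_discount_applies_half_price():
    assert discount(200.0, "HALF") == 100.0
    assert discount(200.0, "OTHER") == 200.0

test_discount_meaningless()
test_discount_applies_half_price()
```

The first test inflates the coverage number while guaranteeing nothing; the second one documents, in executable form, what the unit is supposed to do.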
Therefore, having these kinds of unit tests is far from enough in terms of code quality. What we need is something much more powerful.
First, we need some standards, that's for sure. We need clear, written rules that everyone understands and nobody questions.
Second, we need some kind of mechanism to make sure the rules are actually being followed (because you can never trust the devs, can you? :)).
Third, because everyone says a picture is worth a thousand words, we need some sort of visual representation of what happens deep down in the code.
Meet SonarQube, a tool for visualizing, measuring, and tracking the technical debt of a codebase. SonarQube offers a rich set of coding rules for a large variety of languages, so you can choose whatever suits your project, and it raises an issue every time a rule is violated. As a bonus, it also tracks the evolution of the project in terms of technical debt. Just what I need!
How does it work? SonarQube has three main parts: the scanner (aka SonarScanner), which performs the code analysis, creates a report, and sends it to the SonarQube server; the server, which processes that report and displays the final results through a web interface; and the database, where the results are stored. There are also a lot of plugins available, including one for each supported programming language, SCM integration, custom views for issues, and much more.
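In practice, pointing the scanner at a project takes only a small configuration file. This is a minimal sketch based on the standard SonarScanner workflow; the property names come from SonarQube's documentation, while the project key, name, and server URL below are placeholders you would replace with your own.

```properties
# sonar-project.properties -- read by the scanner from the project root.
# Key, name and server URL are placeholders for this example.
sonar.projectKey=my-project
sonar.projectName=My Project
# Folder(s) containing the sources to analyze
sonar.sources=src
# Where the SonarQube server is listening
sonar.host.url=http://localhost:9000
```

With this file in place, running `sonar-scanner` from the project root performs the analysis and sends the report to the server, which then shows the results in its web interface.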
What SonarQube actually measures are the so-called 7 deadly sins of a developer:
Duplicated code - one of the worst sins of all, not only because duplicated code is bad in itself, but because by duplicating it you also duplicate all its issues and faults.
Lack of unit tests (aka The Sin of Sloth) - could also be the sin of ignorance spread around by people like John.
Bad distribution of complexity (aka The Sin of Gluttony) - because too much logic inside methods/classes/files will, sooner or later, lead to disaster.
Spaghetti design - it’s still about complexity, but at the project level; it happens when you have no clear separation between application layers, or when your projects get so entangled you just want to set them on fire and rewrite everything from scratch.
Potential bugs - includes all the issues that could make your application crash, like null pointer dereferences, or conditional blocks that return the same value on all their paths.
Coding standards breach - happens when devs are too lazy to learn and follow the team’s rules, or when they simply ignore them.
Comments - whether there are too many or too few. In my opinion, we could live without documented public methods, as long as they don’t commit the other sins.
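To make a couple of these sins concrete, here is a small hedged Python sketch (both functions are invented) of code an analyzer like SonarQube would typically flag:

```python
# Invented examples of two "sins" from the list above.

# Potential bug: the conditional returns the same value on all paths,
# so the if/else is dead logic -- the kind of pattern analyzers flag.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg > 10:
        return 4.99
    else:
        return 4.99  # same value on every path: almost certainly a bug

# Duplicated code: the same formula pasted twice; a fix to one copy
# (say, a corrected rate) silently misses the other.
def price_with_vat(net: float) -> float:
    return net + net * 0.19

def invoice_line_total(net: float) -> float:
    return net + net * 0.19  # duplicate of price_with_vat
```

A human reviewer might scroll past both of these, especially in a large diff; a rule-based analyzer catches them every single time the code is scanned.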
As you will see, with SonarQube, quality is not only about how the code is structured; it’s also about whether it’s actually used at all. Any unused piece of code is considered an issue and must be resolved, and of course commenting out code is not an option, because that will raise another issue.
However, this is what I like most about SonarQube: it doesn’t care whether you implemented some fancy pattern or did some crazy optimizations. If your methods, your files, or your methods’ parameter counts exceed the limits of common sense, it warns you that something is wrong. It doesn’t tell you that what you are doing is wrong; maybe that optimization makes your code three times faster. It’s the way you are doing it that should change. Otherwise, sooner or later, somebody will suffer. Somebody will have to read, and re-read, over and over again, and try to figure out what you were thinking when you wrote that code. Maybe someone else won’t understand how your fancy pattern really works, and will keep adding code, increasing its complexity and ruining your design.
In the end, I strongly believe this tool can make your and your teammates’ lives a lot easier. So go ahead, set up your project on a SonarQube server, define your coding standards through a quality profile, and start delivering quality software in all its aspects!