Let’s start with a riddle:
– everyone is thankful that it is being done
– everyone is thankful that there is enough of it
– everyone is thankful that they’re good at it
– everyone is thankful that they don’t need supporting tools
– everyone is thankful that the “receiving” end is content and satisfied
What is it? Testing, of course. There’s hardly a discipline in software development that is so crucial and yet so frequently ignored and skipped over. Take, for example, the analysis of business requirements. I think it’s very unlikely that a client will say, “Hey guys, we need an internet banking system, just throw something together. Thanks.” On the other hand, I hear all too often, “Hey guys, just hurry up and finish the implementation so we can go ahead, we’ll do the testing later. Thanks.” Why does this happen? Unfortunately, the value of testing is not visible at first glance. If something does what it needs to, it appears to work because it is well programmed, not because a test was written.
From a purely mechanical perspective, if we understood the entire business domain down to the tiniest detail, if the requirements were completely clear, if the development environment were ideal, and if the performance were perfect and without a single bug, there would be no need for testing; everything would work perfectly from the start, for a single user or for ten thousand simultaneously, 24/7/365. But the world of software development simply isn’t like that. The business domain is never completely understood, nor is the infrastructure flawless. We don’t even need to mention the fantastical things that applications do when under great stress. And that’s exactly why testing is necessary: to introduce into this unpredictable world a measure of security and predictability, so that we’re not bewildered when 2 (two) users access an application at the same time and the whole system grinds to a halt because all of the memory was used up at once. I admit I’m dramatizing a bit, even if this story about two users is, believe it or not, true (I saw it with my own eyes). Awareness of the need for quality, structured testing grows year by year, something we at CROZ are at least partly responsible for: through writing articles like this one, through covering testing at the QED, and, of course, through practicing testing on our own projects.
Do we really need testing?
Testing is, everyone would agree, a complex discipline that can be approached from many different angles and applied in many different ways. Sometimes it’s enough to perform some final usability testing and we’re ready for production, while at other times it’s necessary to go through all the levels, from unit tests on the source code to behavioural testing of the whole system when part of the infrastructure goes offline. If it’s an internal application for registering vacation time, then it’s probably enough to check that everything works in the test environment and we’re ready for production. After all, if something goes wrong and my vacation entry is lost, no worries, I’ll just register it again. On the other hand, if we’re talking about the famous online banking system, then we probably want to test the source code itself (various calculations, transactions and so on) and security (say, against the OWASP Top 10), but also the behaviour of the system when a key part of the infrastructure is inaccessible. Then there is, of course, regression testing: after implementing new functionality, we want to make sure the old functionality still works as it did before. Not every kind of testing is practical, or rather cost-effective, in every situation, and recognizing the right moment is a skill that is learned and honed over time. Since testing can easily account for 30–40% of the cost of an entire project, good organization and planning of testing activities not only raises the quality of the delivered software and the system as a whole but also reduces the cost of the project.
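To make the unit-test level a bit more concrete, here is a minimal sketch of what a test for one of those “various calculations” might look like, using JUnit 5. The InterestCalculator class and its monthlyInterest method are purely hypothetical examples standing in for whatever domain logic a real banking system would contain.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical domain class, used only to illustrate a unit test.
class InterestCalculator {
    // Monthly interest for a balance at a yearly rate (e.g. 0.06 = 6%).
    BigDecimal monthlyInterest(BigDecimal balance, BigDecimal yearlyRate) {
        if (balance.signum() < 0) {
            throw new IllegalArgumentException("balance must not be negative");
        }
        return balance.multiply(yearlyRate)
                      .divide(BigDecimal.valueOf(12), 2, RoundingMode.HALF_UP);
    }
}

class InterestCalculatorTest {

    private final InterestCalculator calculator = new InterestCalculator();

    @Test
    void calculatesMonthlyInterestForRegularBalance() {
        // 1200.00 at 6% per year should yield 6.00 per month.
        BigDecimal interest = calculator.monthlyInterest(
                new BigDecimal("1200.00"), new BigDecimal("0.06"));
        assertEquals(new BigDecimal("6.00"), interest);
    }

    @Test
    void rejectsNegativeBalance() {
        // Invalid input should fail loudly instead of producing a wrong number.
        assertThrows(IllegalArgumentException.class,
                () -> calculator.monthlyInterest(
                        new BigDecimal("-1.00"), new BigDecimal("0.06")));
    }
}
```

A handful of small, automated checks like these, run on every build, is exactly the kind of safety net that regression testing at the higher levels builds on.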
How mature an organization is from a testing perspective can be determined relatively quickly and simply. The software community is continually working on raising the quality of the entire production process, and so the de facto development standard is captured in a guide called CMMI (Capability Maturity Model Integration). The testing counterpart to CMMI is defined by the TMMi Foundation, a professional organization that consolidates activities related to testing, including standards, reference models and maturity models.
Status snapshot and assessing the maturity of a test environment
Based on TMMi, but also on our own experience, we’ve developed our own Testing Environment Maturity Assessment service: a one-day workshop focused on determining the quality of testing in the production process while simultaneously identifying areas for improvement and specialization.
The workshop consists of five parts, the first three involving selected employees of the organization for which we are performing the analysis. In order to make the most of our time and get results as soon as possible, it’s necessary to gather a competent team with the required knowledge of the internal processes for defining and analyzing business requirements, development, commissioning, and, of course, testing and accepting deliveries.
In the first part, thirty minutes are devoted to establishing a common reference point and a picture of an ideal testing world. However unattainable it may be, that ideal world represents the common goal, which must be clear to everyone regardless of their level of involvement in the actual process. It is crucial to define what testing means to the organization, agree on the vocabulary to be used, and understand how the entire environment is arranged so that related activities can be told apart. This is also why it’s important to raise awareness of the need for a methodological approach to testing, for a strategy and for practice, and, in the end, to create a clear basis for the rest of the workshop.
The second part is the longest and is a workshop in the traditional sense. Active participation from the organization’s professionals is crucial here. Fundamentally, the idea is to build a clear, shared picture of what the entire test environment looks like: what the “work group” thinks is good and needs to be kept, and what isn’t great and needs to be fixed. It’s important to understand that there are no correct or incorrect answers; the point is to honestly clarify for ourselves what our test environment is like. We go into a detailed analysis of the applied methodology, the actual testing process, the organization and the environment. For example, it frequently turns out that people are skilled at testing their own applications but lack formal training, which later hurts communication between teams, or that too little attention is paid to automation, which directly wastes time that could have been better spent elsewhere.
The third part probably differs the most from the usual approach, though it has been very well received wherever we have tried it. It consists of short, very focused one-on-one interviews with each of the workshop participants. It’s surprising how much new information comes out of those ten to fifteen minute conversations, and it’s especially interesting that many details surface about things people were not satisfied with but found hard or uncomfortable to bring up in the group. This is actually great, because it lets us build a complete picture of the process. In the fourth part of the workshop we analyze all of the gathered data and prepare a report, as well as the presentation we deliver the next day, in the fifth and final part of the workshop. All of the gathered information is evaluated and arranged into matrices of dependent values, so that the current state of the environment is shown in one place.
What’s next?
Assessing the maturity of a test environment provides insight into the testing process and organization, and highlights the critical details that must be fixed as well as the segments that should be kept and strengthened. The results of the status snapshot, the so-called “findings”, can literally be used as a list of tasks to complete in order to improve testing, either in the short term, for example on the next project, or in the long term, for all future activities.
If you have any questions, we are just a click away.