I released five new sample lessons from my Test With Spring course: Introduction to Spock Framework

Java Testing Weekly 28 / 2016

There are many software development blogs out there, but many of them don’t publish testing articles on a regular basis.

Also, I have noticed that some software developers don’t read blogs written by software testers.

That is a shame because I think that we can learn a lot from them.

That is why I decided to create a newsletter that shares the best testing articles which I found during the last week.

Let’s get started.

Technical Stuff

  • JMockit 101 is the first part of baeldung’s JMockit tutorial, and it provides a practical introduction to JMockit. You will learn to specify expectations and create mock objects with JMockit. The most interesting thing about JMockit is that its API is totally different from Mockito’s. I am not sure if I like it, but I think that it is a good thing that we have multiple mocking frameworks to choose from.
  • JUnit 5 M1 announces the release of JUnit 5 M1. This first milestone release concentrated on providing stable APIs for IDEs and build tools. It also included a new feature called dynamic tests. If you want to know more about dynamic tests, you should read this blog post.
  • Robot Framework Tutorial 2016 – Integration with Jenkins describes how you can integrate Robot Framework with the Jenkins CI server. This post provides step-by-step instructions and has a lot of screenshots. In other words, you should be able to get the job done as long as you follow the instructions.
  • Testing with Hamcrest is basically a cheat sheet that describes how you can use different Hamcrest matchers. This post is useful to both beginners and more advanced users because it can be used as a “reference manual”.
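Hamcrest and JMockit are external libraries, so as a dependency-free illustration of the matcher style these posts cover, here is a minimal hand-rolled sketch in plain Java. All names below (`Matcher`, `equalTo`, `containsString`, `assertThat`) are hypothetical stand-ins written for this sketch, not Hamcrest’s actual classes:

```java
import java.util.function.Predicate;

// A minimal matcher-style assertion helper, in the spirit of the
// assertThat(value, matcher) idiom. Everything here is hand-rolled.
public class MatcherSketch {

    // A matcher pairs a check with a human-readable description.
    record Matcher<T>(String description, Predicate<T> check) {}

    static <T> Matcher<T> equalTo(T expected) {
        return new Matcher<>("equal to " + expected, actual -> actual.equals(expected));
    }

    static Matcher<String> containsString(String part) {
        return new Matcher<>("a string containing \"" + part + "\"", s -> s.contains(part));
    }

    static <T> void assertThat(T actual, Matcher<T> matcher) {
        if (!matcher.check().test(actual)) {
            throw new AssertionError("Expected: " + matcher.description() + " but was: " + actual);
        }
    }

    public static void main(String[] args) {
        assertThat(2 + 2, equalTo(4));
        assertThat("Java Testing Weekly", containsString("Testing"));
        System.out.println("all checks passed");
    }
}
```

The point of the matcher style is that a failing check reports *what was expected* in readable English, which is exactly what makes a Hamcrest cheat sheet worth keeping at hand.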

My "Test With Spring" course helps you to write unit, integration, and end-to-end tests for Spring and Spring Boot Web Apps:

CHECK IT OUT >>

The Really Valuable Stuff

  • Test environments and organizational aspects is a really interesting post because it tells two stories. The first story describes the pros and cons of using mocks and stubs to isolate the system under test from its dependencies. The second one describes how organizational aspects might limit your choices or increase them. The second story made me realize how lucky I am to work for a company that isn’t afraid to spend money. There is practically zero bureaucracy, and I feel that our IT department works for me. All this feels so natural to me that I am always surprised that not all companies act this way.
  • The tester and technical debt is a great post because it provides one excellent insight: technical debt is typically born by accident. The thing is that most of us don’t decide that today is the day when we create technical debt. Instead, we make small decisions every day, and one day we realize that our codebase is not as good as it should be. When we realize this, we don’t take responsibility for our actions. We simply call it technical debt and “move on”. I think it’s ironic (and extremely satisfying) that this post provides the best description of technical debt that I have ever read. And it was written by a tester.
  • Should developers own acceptance tests? argues that acceptance tests should be owned by the team. I think that this is a good idea for two reasons: First, developers typically don’t have time to own everything, and if they owned acceptance tests, they probably would not write them. Second, testers are good at designing test cases, and they typically don’t want to automate everything. If developers owned acceptance tests, they would probably automate all of them, and this is not always a good thing.
  • We Are Not Gatekeepers is an excellent post that explains why testers are not responsible for quality assurance and why they don’t decide when something can be deployed to production. I am not sure why some people don’t get this, but I suspect that these people don’t want to take responsibility for their own actions and decisions. Do you agree?
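The mock-vs-stub distinction from the first story above can be sketched without any mocking framework. This is a hand-rolled illustration in plain Java; all class names (`MailServer`, `RegistrationService`, `MockMailServer`) are hypothetical examples, not taken from any of the linked posts:

```java
import java.util.ArrayList;
import java.util.List;

// Isolating a system under test from a dependency: a stub would return
// canned answers, while a mock additionally records the calls made to it
// so the test can verify the interaction afterwards.
public class MockVsStub {

    interface MailServer {
        void send(String recipient, String body);
    }

    // System under test: notifies a user via whatever MailServer it is given.
    static class RegistrationService {
        private final MailServer mailServer;
        RegistrationService(MailServer mailServer) { this.mailServer = mailServer; }
        void register(String email) {
            mailServer.send(email, "Welcome!");
        }
    }

    // A hand-rolled mock: records interactions for later verification.
    static class MockMailServer implements MailServer {
        final List<String> recipients = new ArrayList<>();
        public void send(String recipient, String body) { recipients.add(recipient); }
    }

    public static void main(String[] args) {
        MockMailServer mailServer = new MockMailServer();
        new RegistrationService(mailServer).register("alice@example.com");
        // Interaction-based verification: did the SUT talk to its dependency?
        if (!mailServer.recipients.equals(List.of("alice@example.com"))) {
            throw new AssertionError("expected one mail to alice@example.com");
        }
        System.out.println("interaction verified");
    }
}
```

The trade-off discussed in the post follows directly from this shape: the test runs fast and in isolation, but it now knows *how* the service talks to its dependency, which couples the test to the implementation.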

It’s Time for Feedback

Because I want to make this newsletter worth your time, I am asking you to help me make it better.

P.S. If you want to make sure that you don’t ever miss Java Testing Weekly, you should subscribe to my newsletter.

About the Author

Petri Kainulainen is passionate about software development and continuous improvement. He specializes in software development with the Spring Framework and is the author of the Spring Data book.


Comments
  • On the topic of “Should developers own acceptance tests?”

    > First, developers typically don’t have time to own everything, and if they owned acceptance tests, they probably would not write them.

    Of course they would write them. Actually, some developers start with an acceptance test and then do TDD on the component level. If QAs own the acceptance tests, they will most likely write an acceptance test for every single feature – which is wrong. Acceptance tests take a long time to run, and not every single feature should be tested like that.

    > Second, testers are good at designing test cases, and they typically don’t want to automate everything. If developers owned acceptance tests, they would probably automate all of them, and this is not always a good thing.

    I don’t see anything wrong with automating everything. Quite the contrary – can you explain what, in your opinion, are the downsides of full automation?

    • Hi Marcin,

      Thank you for an interesting comment. I will start my answer by defining what an acceptance test means to me.

      I think that an acceptance test is a test which ensures that a feature meets its acceptance criteria. Because I write mainly web applications, it seems clear to me that an acceptance test must interact with the user interface of the application, because otherwise it cannot verify that the acceptance criteria of the tested feature are met.

      > Of course they would write them. Actually, some developers start with an acceptance test and then do TDD on the component level.

      I agree that developers would write them if they are “passionate” about automated testing and have enough time to do it. Sadly, sometimes one of these preconditions (or both) is not met.

      I have noticed that quite a few developers want to write more tests (at least on some level), but they don’t know how, or “they don’t have time to do it right now”. The first problem is easy to solve by offering support and training, but the second one is a bit tricky because these people might actually be telling the truth. If they feel that they already have too much work, it is unwise to give them more work because they simply won’t have time to do it.

      In fact, I admit that I don’t write many end-to-end tests because I have noticed that the return on investment is quite low. Writing end-to-end tests is a lot more time-consuming than writing integration tests, and the “reward” is not spectacular either (slow and brittle tests, and a lot of maintenance if your UI changes constantly). I would rather concentrate my efforts on writing unit and integration tests because I have noticed that they give me faster feedback and a stable test suite that doesn’t require constant maintenance.

      Also, since I work only 7.5 hours a day, I need to spend at least part of that time writing production code, because that is what we deliver to our customer.

      > If QAs own the acceptance tests, they will most likely write an acceptance test for every single feature – which is wrong. Acceptance tests take a long time to run, and not every single feature should be tested like that.

      I am not sure if this developers vs. QA discussion is useful, mainly because we don’t have a QA department. In fact, I think that no one should have one. Instead, you should have cross-functional teams where every team member can participate in making these decisions. By the way, now that you brought it up, my personal opinion is that developers want to automate everything. In fact, there are quite a few testers who do not believe in the power of automation (for obvious reasons).

      > I don’t see anything wrong with automating everything. Quite the contrary – can you explain what, in your opinion, are the downsides of full automation?

      Full automation is not useful for these three reasons:

      One, automated tests are good at checking that the system under test returns the output X when it is invoked with the input Y. I think that these checks are indeed useful, but you cannot test every single part of the application by using only them. For example, automated tests are not very useful if you need to test complex workflows with complex preconditions that determine when you can move to the next step of the workflow. Actually, I thought that you agreed with this, because earlier you mentioned: “If QAs own the acceptance tests they will most likely write an acceptance test for every single feature – which is wrong”.

      Two, automated tests cannot detect usability issues or accessibility issues. Even though I think that the acceptance criteria of a feature should not contain usability requirements, I think that they should contain accessibility requirements (and quite often they do). Because these issues can be detected only by a human, it is clear that automated tests cannot verify if the acceptance criteria are met.

      Three, automated tests might not give a proper return on investment. Before you decide whether or not to automate a test case, you should always consider if the effort is worth it. Sometimes the test might be too hard to write, or running it takes too long. If this is the case, the feature needs to be tested by a human.
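The output-for-input checks mentioned in reason one are easy to picture. As a minimal, hand-rolled sketch in plain Java (the `applyDiscount` method is a hypothetical system under test invented for this example):

```java
// A minimal example of the kind of check automated tests are good at:
// invoke the system under test with input Y and assert it returns output X.
public class OutputCheck {

    // Hypothetical system under test: a 10% discount on a price in cents.
    static int applyDiscount(int priceInCents) {
        return priceInCents * 90 / 100;
    }

    public static void main(String[] args) {
        // Input 1000 must produce output 900; anything else fails the check.
        if (applyDiscount(1000) != 900) {
            throw new AssertionError("expected 900");
        }
        System.out.println("check passed");
    }
}
```

Checks of this shape are cheap and precise, which is exactly why they cover the first kind of testing well and the workflow, usability, and ROI concerns in the three reasons above poorly.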

      That being said, I think that the most important reason why developers should not own acceptance tests is this: developers don’t have (and should not have) the power to decide when a feature can be accepted. Every team member can (and probably should) help the customer specify the acceptance criteria of each feature, but this doesn’t mean that the team should own them.

      To summarize: I think that the customer (or product owner) owns the acceptance criteria, and the team should decide how they are going to ensure that every feature fulfills its acceptance criteria.

