We like to take a look at our top blogs each year to see what everyone is interested in, so we can create interesting content for the new year. Here is a roundup of our top 10 blogs from 2019.
By Marco Corona
In a previous blog, I went over how to set up headless tests on a CentOS machine; in this blog, I will go over how to introduce that machine into a continuous integration environment via Jenkins.
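The wiring the post describes can be sketched as a declarative Jenkinsfile. This is a minimal sketch, not the post's actual job configuration: the `centos` agent label, Xvfb display number, and Maven commands are assumptions.

```groovy
pipeline {
    // hypothetical label for the CentOS machine set up in the previous post
    agent { label 'centos' }
    stages {
        stage('Headless Tests') {
            steps {
                // Xvfb provides a virtual display so browsers can run without a monitor
                sh 'Xvfb :99 -screen 0 1920x1080x24 &'
                sh 'DISPLAY=:99 mvn clean test'
            }
        }
    }
    post {
        always {
            // publish test results regardless of build outcome
            junit '**/target/surefire-reports/*.xml'
        }
    }
}
```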
By Max Saperstone
It’s been a while since I wrote about Cucumber, but hopefully I’m not too rusty. As I mentioned in my last post, I’ve recently gotten back into Cucumber. For my current client, I am developing a framework which allows testing the same behavior on multiple applications. To me, this is reminiscent of my first exposure to Cucumber, using it to run through similar behaviors on the web, services, and a database. I previously expounded on multiple best practices of using Cucumber, including tagging, background steps and hooks, and general best practices. In this post, I plan to expand on some of the best practices of glue code.
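The kind of glue code the post discusses can be illustrated with a cucumber-jvm step-definition sketch. The `App` wrapper and step wordings below are hypothetical, not the client framework's actual code; the point is the common practice of keeping glue thin and delegating to an abstraction, which is what lets the same steps drive multiple applications.

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;

public class LoginSteps {
    // hypothetical application wrapper; the glue stays thin and the
    // wrapper decides whether "login" means web, service, or mobile
    private final App app = App.forCurrentTarget();

    @Given("I am on the login page")
    public void iAmOnTheLoginPage() {
        app.navigateTo("login");
    }

    @When("I log in as {string}")
    public void iLogInAs(String user) {
        app.login(user);
    }

    @Then("I see the dashboard")
    public void iSeeTheDashboard() {
        assert app.currentPage().equals("dashboard");
    }
}
```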
By Matthew Grasberger
One of the most important things in software testing is integrating tests with the build tool that your project uses. Developers need to be able to run your tests easily; otherwise, they’re probably not going to run them. Another reason for integrating tests is that it encourages clearly defining your project’s build process. In the case of Maven, each step is associated with a goal, which comes from the default lifecycle. Integration tests should be run after your project is built and deployed to ensure it functions as intended. The ‘integration-test’ phase of the default lifecycle is executed after your project is compiled, tested, and packaged. This is where Selenium tests can be run to ensure that the project is working as expected.
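The usual way to bind tests to the ‘integration-test’ phase is the Maven Failsafe plugin; this pom.xml fragment is a minimal sketch, assuming the default convention of naming integration-test classes `*IT.java`.

```xml
<build>
  <plugins>
    <!-- Failsafe runs classes named *IT.java during the integration-test
         phase, after the package phase has produced the artifact -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <version>2.22.2</version>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

The separate ‘verify’ goal is what fails the build on test failures, after the ‘post-integration-test’ phase has had a chance to tear the environment down.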
By Robin Foster
In a recent development, MicroK8s replaced its dockerd installation with containerd. Many pre-existing sources mention “microk8s.docker”, but this command is no longer available. I will walk you through the full initial installation and basic usage on Ubuntu 18.04.
Practically speaking, this means you now need to install Docker on your Ubuntu machine. In previous versions, MicroK8s came with its own Docker client, which was handy for quick prototyping with local Docker images. Continuing to use local Docker images with the new version of MicroK8s requires a Docker installation and an update to your workflow.
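The updated workflow looks roughly like the following; the image name is a placeholder, and the exact `microk8s.ctr` invocation may vary by MicroK8s release, so treat this as a sketch rather than the post's exact commands.

```
# Build the image with your local Docker installation
docker build -t myapp:local .

# Export it to a tarball; MicroK8s no longer ships a Docker client
docker save myapp:local > myapp.tar

# Import the tarball into MicroK8s' containerd runtime
microk8s.ctr -n k8s.io image import myapp.tar
```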
By Alan Crouch
Work at a current client has led – due to various restrictions that I won’t get into – to the need for dynamically generated XPaths in order to locate elements on the page. These elements lack any IDs or other defining attributes that would make them easy to spot, and since we didn’t want to build fragile, hierarchy-based XPath selectors, we’ve settled on using the text itself as our unique identifier.
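Generating a text-based XPath has one wrinkle worth showing: XPath 1.0 string literals have no escape sequences, so text containing both quote characters must be stitched together with `concat()`. This is a minimal sketch of the idea, not the client framework's actual helpers; the function names are hypothetical.

```python
def xpath_literal(text):
    """Return an XPath 1.0 string literal for arbitrary text."""
    if "'" not in text:
        return "'%s'" % text
    if '"' not in text:
        return '"%s"' % text
    # Text contains both quote characters: split on single quotes and
    # rejoin with concat(), inserting each apostrophe as a double-quoted piece
    parts = text.split("'")
    return "concat(" + ", \"'\", ".join("'%s'" % p for p in parts) + ")"

def by_text(text, tag="*"):
    """Build an XPath that locates an element by its exact text."""
    return "//%s[text()=%s]" % (tag, xpath_literal(text))

print(by_text("Save"))  # //*[text()='Save']
```

The resulting string can be handed to any Selenium binding, e.g. `driver.find_element(By.XPATH, by_text("Save"))`.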
By Robin Foster
Helm is a Kubernetes package and operations manager. The name “Kubernetes” is derived from the Greek word for “pilot” or “helmsman”, making Helm its steering wheel. Using its packaging format, charts, Helm allows us to package Kubernetes releases into a convenient gzipped tar (.tgz) archive. A Helm chart can contain any number of Kubernetes objects, all of which are deployed as part of the chart. A Helm chart will usually contain at least a Deployment and a Service, but it can also contain an Ingress, PersistentVolumeClaims, or any other Kubernetes object.
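Concretely, a chart is just a directory with a conventional layout; the chart and file names here are illustrative, not from the post.

```
mychart/
  Chart.yaml          # chart metadata: name, version, description
  values.yaml         # default configuration values for the templates
  templates/
    deployment.yaml   # Deployment manifest template
    service.yaml      # Service manifest template
```

Running `helm package mychart` produces the `.tgz` archive (e.g. `mychart-0.1.0.tgz`), and `helm install` deploys everything under `templates/` as a single release.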
By Jeff Pierce
‘Pipeline as code’, or defining the deployment pipeline through code rather than configuring a running CI/CD tool, provides tremendous benefits for teams automating infrastructure across their environments.
One of the most popular ways to implement a pipeline as code is through Jenkins Pipeline. Jenkins, an open source automation server, is used to automate tasks associated with building, testing, and deploying software.
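With Jenkins Pipeline, the pipeline definition is a Jenkinsfile checked into the repository alongside the application. This is a minimal declarative sketch; the stage names, Maven commands, and deploy script are assumptions for illustration.

```groovy
// Jenkinsfile at the repository root, versioned with the application code
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test') {
            steps { sh 'mvn -B verify' }
        }
        stage('Deploy') {
            when { branch 'master' }            // only deploy from master
            steps { sh './deploy.sh staging' }  // hypothetical deploy script
        }
    }
}
```

Because the Jenkinsfile lives in source control, pipeline changes are reviewed, versioned, and rolled back like any other code.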
By Max Saperstone
As part of the project I’m currently engaged on, my team is writing automated tests for an application which has a web interface as well as two mobile apps, one for Android and one for iOS. As part of the project, we’ve built a test automation pipeline which runs our tests against our application to ensure changes we’re making don’t impact other tests. Yes, we’re testing our tests. One of the challenges we ran into was ensuring we could verify our Android and iOS tests still worked in a timely fashion. We tried multiple options, including running tests on SauceLabs, but this consumed resources too quickly to be effective due to the multiple threads of the pipeline (we decided to save our SauceLabs bandwidth for actual testing). The solution we eventually found worked best was to create our own emulators inside a Docker container to test on.
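One public image that bundles an x86 Android emulator (and Appium) in a container is budtmo's docker-android project; this command is a sketch under that assumption, and the image tag and device name here are not necessarily what the team used.

```
# Runs an Android 8.1 x86 emulator with Appium listening on 4723
# and a noVNC viewer on 6080; requires a host that supports KVM
docker run -d --privileged \
  -p 4723:4723 -p 6080:6080 \
  -e DEVICE="Nexus 5" \
  -e APPIUM=true \
  --name android-emulator \
  budtmo/docker-android-x86-8.1
```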
By Jonathan Kauffman
When developing an application in programming language A, you may discover that certain parts of the program are easier to code in a different language B. At this point you have one of three choices:
- Write the application entirely in language A.
- Write the application entirely in language B.
- Write most of the application using language A and call language B from A when appropriate.
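The third option can be illustrated with the standard library alone: here Python plays language A and C plays language B, with `ctypes` bridging the two by calling `sqrt` from the C math library. This is a generic sketch of the technique, not code from the post; the library lookup assumes a typical Linux system.

```python
import ctypes
import ctypes.util

# Load the C math library; find_library resolves the platform-specific
# name, with a common Linux soname as a fallback
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature of sqrt so ctypes converts arguments correctly
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # ≈ 1.4142135623730951
```

The same pattern scales up: the bulk of the program stays in language A, and only the routines that genuinely benefit from language B cross the boundary.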
By Bob Foster
Recently, I wrote an Ansible playbook to extract data from an Informatica PowerCenter repository. The data was then compressed and uploaded into Nexus Repository Manager. I used the command-line utility pmrep to execute the commands needed to connect to the Informatica repository and to extract the data. A specific Informatica user had been given the necessary privileges to execute the pmrep commands.
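The playbook's shape can be sketched as below. This is a hedged outline only: the host group, repository, domain, and folder names, the paths, and the Nexus URL are all placeholders, not the values from the actual project.

```yaml
- hosts: informatica
  tasks:
    - name: Connect to the PowerCenter repository with pmrep
      command: >
        pmrep connect -r REPO_NAME -d DOMAIN_NAME
        -n {{ pmrep_user }} -x {{ pmrep_password }}

    - name: Export repository objects to XML
      command: pmrep objectexport -f FOLDER_NAME -u /tmp/export/objects.xml

    - name: Compress the exported data
      archive:
        path: /tmp/export
        dest: /tmp/export.tar.gz

    - name: Upload the archive to Nexus
      uri:
        url: "https://nexus.example.com/repository/raw/informatica/export.tar.gz"
        method: PUT
        src: /tmp/export.tar.gz
        user: "{{ nexus_user }}"
        password: "{{ nexus_password }}"
```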