As an IT company, we strive to instil the best and most up-to-date industry knowledge and skills in every team member so that we can provide the most professional and efficient service to our customers. For some time now we have been holding “Tech-Talks” every Friday afternoon, where one member shares with the team the latest technology or software element of our products that they have been working on or have taken an interest in, in order to broaden the background knowledge and skill set of the team as a whole. We have found that after these talks and discussions we gain a richer, more in-depth perspective on the inner workings of the company and its products, which in turn improves the outlook of the whole team.
A few weeks ago, Ed Pascoe, the DevOps Manager, took us through how DNS will use Docker containers in the near future. With just over nine years at DNS, Ed’s day-to-day responsibilities include keeping the internet pipes clean and sweeping their memory, managing the technical operations of the DevOps department, and working on the development of new systems, such as the Docker migration discussed here.
During the Tech-Talk he explained that Docker is a software system that runs applications in containers, where a container is a standard unit of software that packages up code and all of its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
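To give a flavour of what such an image looks like in practice, here is a minimal, hypothetical Dockerfile for a small Python application (the file names and base image here are illustrative, not taken from any DNS system):

```dockerfile
# Hypothetical example: package a small Python app and its dependencies
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# Command the container runs when started
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces a self-contained image that runs the same way on a laptop, a test server, or production.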
Although Ed did most of the initial setup, the rest of the DevOps team and the Dev team will also become involved as time goes by. He is working closely with David Peall, who is evaluating the use of Docker containers for continuous integration in our GitLab setup, so this is very much a team effort.
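As a rough sketch of what Docker-based continuous integration in GitLab can look like (this is an illustrative example, not the actual DNS pipeline), a `.gitlab-ci.yml` job can build and push an image on every commit using GitLab’s built-in registry variables:

```yaml
# Hypothetical .gitlab-ci.yml job: build an image and push it to the
# project's GitLab container registry on each commit.
build-image:
  image: docker:latest
  services:
    - docker:dind            # Docker-in-Docker service so the job can run `docker`
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

The `CI_REGISTRY_*` and `CI_COMMIT_SHORT_SHA` values are predefined GitLab CI variables, so each commit produces a uniquely tagged, reproducible image.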
Although still in the early stages of the implementation phase, they already have three smaller Docker-based systems in place and are gradually converting the rest. One of the biggest roadblocks at the moment is waiting for dual-stack support in Kubernetes, an open-source system for automating the deployment, scaling and management of containerised applications. Essentially, they want both IPv4 and IPv6 on the same container, which will probably take a few more months.
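For readers curious what dual-stack means in concrete terms, Kubernetes has since made it configurable per Service: you ask for both address families on the same workload. A hedged, illustrative manifest (the service and app names are made up for this example):

```yaml
# Hypothetical dual-stack Service: requests both an IPv4 and an IPv6
# cluster address for the same set of pods.
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: example
  ports:
    - port: 80
```

With `PreferDualStack`, the cluster assigns addresses from both families where the underlying network supports it, which is exactly the capability the team is waiting on.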
Ideally, the DevOps department will eventually support the underlying servers that run Docker and Kubernetes, while the Dev team will be responsible for building the containers and deploying changes continuously, without needing to involve the DevOps department in that area. ‘Docker container technology is unique. We want to migrate to Docker containers because currently we are still using mostly traditional virtual machines, with the software installed as normal Ubuntu packages,’ Ed explained.
The advantage of going down the container route is that, as a single, standardised unit of software used for development, shipment and deployment, a container is guaranteed to run the same way on any system. This simplifies things greatly from a DevOps point of view: staff only need to understand how to run one system, whereas with our current setup they must understand the subtleties of every subsystem we use.
Although there is still a lot of work to be done, Ed is excited about the migration, and he adds that he would not trade the chaos of the DevOps room for anything. The team is cool and hard-working, and there is always something new to learn in this industry. We could not have achieved what we have, or look forward to what lies ahead, without each other.