To improve its approach to software development and deployment, a major global retailer needed to automate and simplify its highly manual development processes, speeding time to market and reducing the cost of doing business. The retailer devised a change optimization program to achieve its goals, and chose DevOps as the centerpiece of that program.
As a leading DevOps practitioner and AI automation expert, Plusautomate used a DevOps strategy to automate the existing build and deployment processes across the retailer's stores in the UK and Europe. We also built a robust DevOps governance program with measurable key performance indicators.
Our solution leverages the continuous integration (CI) and continuous delivery (CD) practices of DevOps, which reduce deployment time so teams have more time for testing and quality control. Through CI, developers integrate code into a shared repository several times a day, enabling quick error detection and mitigation. With CD, teams produce software in short cycles, reducing the time and risk of delivering changes. Our AI team also used a Real-Time Operating Data Source for a key retail store automation process, helping accelerate deployment time.
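The CI gate described above can be sketched as follows: every integration into the shared repository runs the checks, so a breaking change is rejected at the commit that introduced it rather than discovered at release time. This is an illustrative model only; the repository structure and checks are assumptions, not part of the retailer's actual tooling.

```python
# Minimal sketch of a CI merge gate, assuming a toy in-memory repository.
def run_checks(codebase):
    """Return the names of every failing check against the codebase."""
    return [name for name, check in codebase["checks"].items()
            if not check(codebase["code"])]

def integrate(codebase, change):
    """Merge a change, then accept or reject it based on the check run."""
    candidate = {"code": {**codebase["code"], **change},
                 "checks": codebase["checks"]}
    failures = run_checks(candidate)
    if failures:
        return codebase, failures   # reject: shared repository stays green
    return candidate, []            # accept: change is integrated

repo = {
    "code": {"parser": "v1"},
    # Hypothetical check: the codebase must always contain a parser module.
    "checks": {"parser_present": lambda code: "parser" in code},
}
repo, failed = integrate(repo, {"formatter": "v1"})
print(failed)  # [] -> change accepted into the shared repository
```

Because every merge runs the full check set, the shared repository is always in a releasable state, which is what makes the short CD cycles possible.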
Through DevOps practices and the Real-Time Operating Data Source, the retailer has significantly upgraded its software delivery lifecycle, increasing delivery speed, improving quality, and driving down costs.
USING MATURE DEVOPS AND KUBERNETES CONTAINER ORCHESTRATION TO DEPLOY APPLICATIONS
A global IT company wanted to make it easy for its engineers to ship applications from development through a mature CI/CD pipeline into production. It wanted to containerize all of its applications for parity between environments, so that it could meet the scaling challenges that were quickly coming its way. Multi-region resiliency, role-based access controls, and compliance with international regulatory restrictions were all required.
The engineering team is distributed worldwide, so communication and collaboration are critical. Operations had to be automated from the time an engineer checked in code and opened a pull request to the time it was deployed into production. We leveraged containerization tools such as Docker and Kubernetes to set up the company's infrastructure on Amazon Web Services. This allowed the team to deploy one account per environment, with a management account that oversees operations across them all, monitors and aggregates logs, and runs a Chef server and a CI/CD server. Each environment contains a Kubernetes cluster, with various supporting services running alongside the team's software.
Open source tools like Jenkins, Elasticsearch, and Sensu are leveraged.
Environments are all created via AWS CloudFormation and Chef cookbooks, which control the standing up and maintenance of VPCs, VPNs, monitoring stacks, and Kubernetes clusters.
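The environment-as-code approach above can be sketched as a single parameterized template: every environment is generated from the same function, so dev, UAT, and production differ only in their parameters and therefore mirror one another by construction. The resource names, CIDR ranges, and cluster sizes below are illustrative assumptions, not the team's actual CloudFormation.

```python
# Sketch of a parameterized, CloudFormation-style environment template.
import json

def environment_template(env_name, cidr, cluster_size):
    """Emit the same template shape for every environment."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": f"{env_name} environment: VPC plus Kubernetes nodes",
        "Resources": {
            "Vpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": cidr},
            },
            "ClusterNodes": {
                "Type": "AWS::AutoScaling::AutoScalingGroup",
                "Properties": {
                    "MinSize": str(cluster_size),
                    "MaxSize": str(cluster_size * 2),
                },
            },
        },
    }

dev = environment_template("dev", "10.0.0.0/16", 3)
prod = environment_template("prod", "10.1.0.0/16", 9)
# Identical structure, different parameters: environments mirror each other.
assert dev["Resources"].keys() == prod["Resources"].keys()
print(json.dumps(dev["Resources"]["Vpc"], indent=2))
```

Generating every environment from one template is also what makes standing up a new region fast: only the parameters change, not the process.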
Applications are written and checked into version control. Once changes are detected, Jenkins running on Kubernetes starts the build process, which creates a Docker image and ships it to the EC2 Container Registry. Developers can run their applications within the regional sandbox, which is free for all to use and has built-in guard rails. Additionally, a user acceptance testing (UAT) environment runs its own Kubernetes install and creates a version of the application for quality assurance. Once tests pass and the team is satisfied, Helm updates the application running in production. Regressions are caught in the pipeline and stopped before they reach production.
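The promotion flow above can be sketched as an ordered sequence of gated stages, where a failure at any stage stops the image before production. The stage names mirror the text; the checks themselves are stand-ins for the real build and QA suites.

```python
# Sketch of the staged promotion pipeline with a regression gate.
STAGES = ["build", "sandbox", "uat", "production"]

def promote(image, checks):
    """Run each stage's check in order; stop at the first failure."""
    reached = []
    for stage in STAGES:
        # Stages without an explicit check (e.g. sandbox) pass by default.
        if not checks.get(stage, lambda img: True)(image):
            return reached, f"stopped at {stage}"
        reached.append(stage)
    return reached, "deployed"

checks = {
    "build": lambda img: img["compiles"],
    "uat": lambda img: img["tests_pass"],   # QA gate before the Helm upgrade
}

good = {"compiles": True, "tests_pass": True}
regression = {"compiles": True, "tests_pass": False}
print(promote(good, checks))        # (['build', 'sandbox', 'uat', 'production'], 'deployed')
print(promote(regression, checks))  # (['build', 'sandbox'], 'stopped at uat')
```

The key property is that the regression never reaches the production stage: the UAT gate rejects it, matching the "stopped before production" behavior described above.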
Monitoring the applications lets engineers instantly detect issues with their deployments and alert the team. Binary up/down checks are handled by Sensu, and metrics by Prometheus. Logs are aggregated with Fluentd and shipped to Elasticsearch and S3.
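The two alerting styles above, binary checks (Sensu-style pass/fail) and metric thresholds (Prometheus-style), can be sketched in one evaluation pass. The check names, metric names, and thresholds below are illustrative assumptions, not the team's real monitoring configuration.

```python
# Sketch of combined binary-check and metric-threshold alerting.
def evaluate(checks, metrics, thresholds):
    """Return the list of alerts to send to the team."""
    # Binary checks: any failing check raises an alert outright.
    alerts = [f"check failed: {name}"
              for name, ok in checks.items() if not ok]
    # Metric thresholds: alert only when a watched metric crosses its limit.
    alerts += [
        f"metric {name} = {value} exceeds {thresholds[name]}"
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    ]
    return alerts

alerts = evaluate(
    checks={"api-healthz": True, "worker-healthz": False},
    metrics={"p99_latency_ms": 850, "error_rate": 0.001},
    thresholds={"p99_latency_ms": 500},
)
print(alerts)
# ['check failed: worker-healthz', 'metric p99_latency_ms = 850 exceeds 500']
```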
The engineering team is able to build, test, deploy, and monitor its work without having to touch the infrastructure at all, because the tooling gives engineers access to everything they need to confirm that the applications are behaving correctly and performing consistently.
Standing up a new region takes less than an hour and delivers a new VPC, a VPN if needed, and a Kubernetes cluster with various supporting services already running inside it, ready to be used. Teams with access can start using it immediately, because it mirrors the existing environments.
The team can ship dozens of applications into different environments during development and testing, and efficiently deploy them to the cloud at scale, worldwide. The team no longer worries about managing different machine types, and instead spends its time delivering features.
The business challenges that led the profiled company to evaluate and ultimately select Plusautomate Ltd:
Primary reasons for investigating or engaging in a DevOps transformation:
The key features and functionalities of Plusautomate DevOps solutions that the company uses:
The company achieved the following results with Plusautomate DevOps solutions: