The latest News and Information on DevOps, CI/CD, Automation and related technologies.
Mattermost Playbooks help software engineering teams orchestrate their work across all of their tools and teams, uniting your tech stack through a single point of collaboration so you can plan projects and hit milestones. We want to see how our community is leveraging Playbooks in their own tech stacks, and to share those creations so the whole community benefits. To that end, we’re launching a new effort to commission original blog articles that show Playbooks in action.
Serverless has become an increasingly popular paradigm among organizations looking to modernize their applications, as it allows them to increase agility while reducing operational overhead and costs. But the highly distributed nature of serverless architectures requires developers to rethink their approach to application design and development. AWS-based serverless applications hinge on AWS Lambda functions, which are stateless and ephemeral by design.
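To illustrate what "stateless and ephemeral by design" means in practice, here is a minimal sketch of a Python Lambda handler. The function name, event shape, and counter are illustrative assumptions, not from the post: the point is that anything held in the execution environment may survive a warm invocation but can vanish at any time, so durable state belongs in an external store.

```python
# Best-effort, in-environment state: may persist across "warm" invocations
# of the same execution environment, but Lambda gives no guarantee of that.
# Correctness must never depend on it.
invocation_count = 0

def handler(event, context):
    """Minimal Lambda-style handler (event shape is an illustrative assumption)."""
    global invocation_count
    invocation_count += 1  # resets whenever a fresh environment is cold-started

    # Anything that must survive between invocations should go to an external
    # store (e.g. DynamoDB or S3), not live in this process.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Because the handler is a plain function of its inputs, it can be exercised locally, e.g. `handler({"name": "Ada"}, None)`, without any AWS infrastructure.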
In part 1 of this series, we looked at common design principles and patterns for assembling microservices in serverless environments. But when it comes to building serverless applications, designing your architecture is only part of the challenge. You also have to ensure that each of your individual functions and services is secure, reliable, and highly performant—without incurring enormous costs.
There is a lot of room for the art of the possible between the GitOps Engine, Argo CD, and the Application-as-Code platform Shipa. In a recent blog post, we outlined the power of a one-line developer experience. If you are unfamiliar with Argo CD, here is a guide to get you started with it and to leverage Shipa for your first deployment.
In my prior post, Continuous Service Virtualization, Part 1: Introduction and Best Practices, we offered an introduction to continuous service virtualization (SV) and discussed some key best practices. In this second and final post in the series, we will discuss the continuous SV lifecycle and how it helps optimize DevOps and the continuous integration/continuous delivery (CI/CD) pipeline.
Service virtualization (SV) has evolved into a popular technique and technology over the last decade. Traditionally, SV has primarily been used by testers to simulate other application components that the application under test interacts with. Typically, virtual services have been created and maintained by center of excellence (COE) teams.
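To make the idea concrete, here is a minimal sketch of what a virtual service amounts to: a stub HTTP endpoint that stands in for a real dependency while the application under test runs. The endpoint path, payload, and port are illustrative assumptions; dedicated SV tools add recording, protocol support, and lifecycle management on top of this basic pattern.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses keyed by request path (illustrative data, not from a real API).
CANNED = {"/accounts/42": {"id": 42, "status": "active"}}

class VirtualService(BaseHTTPRequestHandler):
    """Stands in for a real downstream component during testing."""

    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    def log_message(self, fmt, *args):
        # Suppress per-request logging to keep test output quiet.
        pass

def serve(port=8080):
    """Run the stub; point the application under test at http://localhost:<port>."""
    HTTPServer(("localhost", port), VirtualService).serve_forever()
```

The application under test is simply configured to call the stub's address instead of the real component, so tests can run even when that component is unavailable or still in development.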