Automating Terramate in GitLab CI/CD just got easier! 🚀 Based on your feedback, Terramate CLI and Terramate Cloud are now fully supported in GitLab CI/CD. Improvements this update provides:
✅ Syncs plan files and metadata from GitLab to Terramate Cloud, improving MR, deployment, and drift detection workflows.
✅ Plan previews available for all modified stacks in MRs.
✅ Blueprints for quick setup of CI/CD pipelines for previews, deployments, drift detection, and reconciliation in GitLab repositories.
For more info, take a look at our latest product update (1st comment). 👇
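In practice, a preview job in GitLab CI boils down to running the Terramate CLI against the changed stacks. A minimal sketch of the commands such a job might run (the sync-related flags below are assumptions based on recent CLI versions; the blueprints in the product update are the authoritative reference):

```bash
# Sketch of the commands a GitLab CI merge-request preview job could run.
# The --sync-preview / --terraform-plan-file flags are assumptions -- check
# the official Terramate blueprints for the exact flags of your CLI version.

# List only the stacks changed in this merge request
terramate list --changed

# Run terraform plan in every changed stack and (assumed flags) sync the
# resulting plan file to Terramate Cloud as an MR preview
terramate run --changed \
  --sync-preview --terraform-plan-file=out.tfplan \
  -- terraform plan -out out.tfplan
```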
More Relevant Posts
Another day-to-day activity of a Kubernetes admin is keeping the cluster up to date with newer versions. With a managed service such as GKE, EKS, or AKS, upgrades are straightforward: they take just a few clicks, or you can enable automatic upgrades. On an on-premises or self-managed Kubernetes cluster, however, there are many more things to take care of and many more steps to perform. In this video, I explain everything about Kubernetes releases and upgrades, with a detailed demo of how to perform a version upgrade in a self-managed multi-node Kubernetes cluster. Video link in the comments below.
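For a self-managed cluster bootstrapped with kubeadm, the control-plane upgrade boils down to roughly the sequence below. A sketch assuming Debian/Ubuntu packages and a hypothetical target version v1.30.2; the video covers the full procedure:

```bash
# Sketch of a kubeadm control-plane upgrade (assumes Debian/Ubuntu with apt
# and a hypothetical target version 1.30.2 -- adjust repo and version).

# 1. Upgrade kubeadm on the first control-plane node
sudo apt-get update && sudo apt-get install -y kubeadm='1.30.2-*'

# 2. Check what the upgrade will do, then apply it
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.30.2

# 3. Drain the node, upgrade kubelet/kubectl, then bring it back
kubectl drain <node-name> --ignore-daemonsets
sudo apt-get install -y kubelet='1.30.2-*' kubectl='1.30.2-*'
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon <node-name>

# Worker nodes follow the same drain / upgrade / uncordon cycle,
# using `kubeadm upgrade node` instead of `kubeadm upgrade apply`.
```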
-
A VPS or home lab is a great way to learn, especially DevOps stuff. But can you simulate a VPS on your own machine without paying for a cloud VPS service? The answer is: of course. Using VirtualBox's NAT port forwarding feature, we can simulate SSH-ing into a VPS. In the image attached, I forward the VM's SSH service to port 2222, so I can simply SSH into it from my host machine with $ ssh -p 2222 <user>@<local-ip> If you're interested in a step-by-step guide, I just published a new article on my blog about this: https://lnkd.in/gUKfRGPj
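The same forwarding rule can also be created from the command line with VBoxManage instead of the GUI. A minimal sketch, assuming a VM named "devbox" using the default NAT adapter:

```bash
# Forward host port 2222 to the guest's SSH port 22 on NAT adapter 1
# (assumes a VM named "devbox"; run while the VM is powered off, or use
# `VBoxManage controlvm devbox natpf1 ...` for a running VM).
VBoxManage modifyvm "devbox" --natpf1 "guestssh,tcp,,2222,,22"

# Then, from the host:
ssh -p 2222 <user>@127.0.0.1
```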
-
The power of the cloud is that you can pick multiple providers to get things done. I use AWS, Hetzner, and GitHub to test, build, and develop, mixing server-based and serverless. For under 60 euros per month, across those three providers, you can get really decent specs for a build pipeline, a remote workstation, block storage, S3 storage, and Kubernetes on demand.
-
In this two-part tutorial, you'll learn how to use the Atlas Operator with Atlas Cloud and ArgoCD to create a modern GitOps workflow for managing your database migrations natively in Kubernetes. More: https://lnkd.in/gPg8Vu7g
-
Implementation of Service Discovery in Microservices -> Ever wondered how microservices communicate with each other when they are deployed on different cloud instances and their IPs and ports change dynamically? This is where service discovery comes into the picture: it maintains the IP addresses and ports of all instances of every microservice, so Service A knows the right address to reach Service B. In this document, I explain how service discovery works and how to implement it in Spring Boot, with code ⬇
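To make the idea concrete, here is a minimal sketch of what asking the registry looks like, using Netflix Eureka's REST API as an example since it is a common choice with Spring Boot / Spring Cloud. The hostname, port, and service name below are assumptions:

```bash
# Ask the registry for every known instance of SERVICE-B
# (assumed Eureka server at eureka-server:8761).
curl -s http://eureka-server:8761/eureka/apps/SERVICE-B \
  -H "Accept: application/json"

# The response lists each instance's IP address and port, so Service A can
# pick one (usually via a client-side load balancer) instead of hardcoding it.
```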
-
Hello folks, did you know that GCP Cloud Run stores every built image in Artifact Registry? If you're pushing updates to production frequently, this can significantly increase costs 🤯. But fear not! There are effective ways to manage this.
How to prevent cost increases:
Manual deletion: navigate to the Cloud Run repository in Artifact Registry and manually delete older images as needed ✨
Cleanup policy: for each repository, you can add a cleanup policy to define your rules, such as deleting images older than 7 days or keeping only the last 5 images. This automates the cleanup process and helps save costs 💰
Before applying a cleanup policy, you can use the dry run option to test whether it works as expected ✅
Let me know your thoughts in the comments! 💬 Thank you.
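For reference, a cleanup policy can also be managed from the command line. A minimal sketch assuming the default cloud-run-source-deploy repository in us-central1; the policy field names should be double-checked against the current Artifact Registry docs:

```bash
# Sketch of a cleanup policy plus a dry run (repository name, region, and
# policy field names are assumptions -- verify against the GCP docs).
cat > policy.json <<'EOF'
[
  {
    "name": "delete-older-than-7-days",
    "action": {"type": "Delete"},
    "condition": {"olderThan": "604800s"}
  },
  {
    "name": "keep-last-5-images",
    "action": {"type": "Keep"},
    "mostRecentVersions": {"keepCount": 5}
  }
]
EOF

# Dry run first: reports what would be deleted without deleting anything
gcloud artifacts repositories set-cleanup-policies cloud-run-source-deploy \
  --location=us-central1 --policy=policy.json --dry-run

# When the output looks right, enforce the policy for real
gcloud artifacts repositories set-cleanup-policies cloud-run-source-deploy \
  --location=us-central1 --policy=policy.json --no-dry-run
```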
-
Migrating a CI tool to the cloud: easier said than done! Moving from an on-premise to a cloud-based version seemed like a straightforward task. But with 50+ projects, legacy configurations, and new standards to adopt, it became a true engineering challenge. Some lessons were learned the hard way, like using a personal access token in production for over a year 🤦‍♂️. Curious about the full journey? Check out my latest article: link in comments. Have you ever tackled a similar migration? What challenges did you face? Let's discuss in the comments! 👇
-
Ready to boost your Kubernetes production environment? Join our webinar on July 31st to learn Tikal’s best practices for Kubernetes production readiness, including Cloud Native and Multi-Cloud strategies. Secure your spot today! 👉 https://lnkd.in/dwww9e8H
-
He doesn't know how easy it is when ControlMonkey writes the Terraform code for you. 😜 The extent of your Terraform coverage determines your level of control over your cloud environment: as you cover more resources with Terraform, you reduce the risk of misconfigurations in production. Our Terraform Import Engine helps you achieve the highest Terraform coverage by automatically identifying unmanaged resources and generating, with a single click, Terraform code that represents your configuration.
✅ Never write Terraform code again
✅ Shift to Terraform faster and with zero effort
#devops #iac #terraform
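For comparison, plain Terraform (1.5+) can do a manual version of this with import blocks and config generation; the sketch below uses a hypothetical S3 bucket and is generic Terraform, not ControlMonkey's Import Engine:

```bash
# Native Terraform (>= 1.5) way of bringing an unmanaged resource under code:
# declare an import block, then let `terraform plan` generate the HCL.
# The bucket name and resource address are hypothetical; the AWS provider
# must already be configured in this working directory.
cat > import.tf <<'EOF'
import {
  to = aws_s3_bucket.legacy_assets
  id = "my-legacy-assets-bucket"
}
EOF

terraform init
terraform plan -generate-config-out=generated.tf

# Review generated.tf, then apply to record the resource in state
terraform apply
```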
-
terraform init && terraform plan && terraform apply
HashiCorp Terraform is not just another Cloud/DevOps tool, it is THE tool that revolutionised the entire infrastructure ecosystem. For the past 3 years, I have been working with Terraform and passed this certification in just 27 minutes :)
Topics to study well:
1. State management
2. All the Terraform commands
3. Splat expressions
4. Secrets
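A handy way to drill those topics is straight from the CLI. A small sketch with made-up resource addresses and variable names:

```bash
# State management
terraform state list
terraform state show 'aws_instance.web[0]'

# Evaluate a splat expression interactively: all IDs of the "web" instances
echo 'aws_instance.web[*].id' | terraform console

# Secrets: keep them out of version control by marking variables sensitive
# and passing values via environment variables (TF_VAR_ prefix)
export TF_VAR_db_password='...'
```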
-
https://terramate.io/rethinking-iac/introducing-terramate-cli-0-9-0-support-for-gitlab-workspaces-and-partial-evaluation/