CircleCI announced new insights capabilities and improved installation features for its self-hosted server offering.
CircleCI’s self-hosted server solution gives software engineering teams the ability to scale under load and run multiple services at once, all within a team’s own Kubernetes cluster and network, but with the full CircleCI cloud experience.
It increases confidentiality, efficiency and collaboration between teams, which is especially useful for teams working in healthcare, finance and other industries with high standards of governance and compliance.
“Users in telecommunications, manufacturing, defense and other highly regulated industries use Modzy to automate the deployment and monitoring of AI models in their organizations. Not only do we use CircleCI’s self-hosted solution as part of our own DevOps processes, but we’ve also integrated it into our MLOps pipelines to ensure easier and faster model deployment,” said Nathan Mellis, engineering manager at Modzy.
With CircleCI server 3.2, users will have access to additional installation options to further secure installation environments, including HTTP proxy and SSL termination, as well as extended functionality that provides API access to CircleCI Insights and larger resource classes.
CircleCI’s Insights API, powered by the 2.5 million tasks that CircleCI’s platform processes every day, provides detailed insight into the health and usage of users’ repository build processes. The metrics provided include time series data such as success rates, pipeline duration, and other information relevant to making better engineering decisions.
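The kinds of metrics described above can be derived from per-run data. The minimal sketch below computes a success rate and a median duration from a hypothetical sample of workflow runs; the field names and values are assumptions for illustration, loosely modeled on the shape of Insights API responses rather than taken from CircleCI’s documentation.

```python
from statistics import median

# Hypothetical sample of workflow runs; "status" and "duration" (seconds)
# are assumed field names, loosely modeled on Insights-style data.
runs = [
    {"status": "success", "duration": 312},
    {"status": "success", "duration": 287},
    {"status": "failed",  "duration": 104},
    {"status": "success", "duration": 295},
]

# Success rate: fraction of runs that finished with "success".
success_rate = sum(r["status"] == "success" for r in runs) / len(runs)

# Median duration across all runs, successful or not.
median_duration = median(r["duration"] for r in runs)

print(f"success rate: {success_rate:.0%}")      # success rate: 75%
print(f"median duration: {median_duration}s")   # median duration: 291.0s
```

In practice a team would pull this data from the API and feed aggregates like these into dashboards or engineering reviews.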
Other benefits offered by CircleCI’s self-hosted server solution include:
- Enterprise-level security. Users can meet the most stringent security, compliance and regulatory requirements with end-to-end control over their CircleCI installation.
- Powerful development tools and features. Teams operating behind their own firewalls now have the ability to access CircleCI’s full cloud experience and the latest CircleCI features, such as orbs, scheduled workflows, matrix tasks, and more.
- Maintenance and monitoring. Create a complete picture of your software delivery tools with integration into existing infrastructure monitoring solutions such as Datadog, Splunk, ELK stack, and more.
- Strong support for scale and performance. Operate at scale under heavy loads and automatically leverage multiple core services at once within private networks. Ensure deployment redundancy that turns P0s into P1s and keeps teams working.
- Flexible hosting options. Users can run their installations on Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or a native Kubernetes installation.
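The monitoring integration mentioned above typically works by forwarding metrics to an agent for tools like Datadog. As a minimal sketch, the snippet below emits a gauge in the plain StatsD datagram format (which DogStatsD also accepts) over UDP; the metric name, host, and port are assumptions, and real setups would normally use the vendor’s client library instead.

```python
import socket

def send_metric(name: str, value: float, host: str = "127.0.0.1", port: int = 8125) -> str:
    """Send a StatsD gauge datagram to a local agent (fire-and-forget UDP)."""
    payload = f"{name}:{value}|g"  # StatsD gauge wire format: <name>:<value>|g
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload.encode("ascii"), (host, port))
    finally:
        sock.close()
    return payload

# Hypothetical metric name for a CI build duration, in seconds.
send_metric("circleci.build.duration_seconds", 312)
```

Because UDP is connectionless, the call succeeds whether or not an agent is listening, which keeps CI-side instrumentation cheap and non-blocking.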