At Ubicloud, we aim to write blog posts that dive into the technical details of how we build our cloud services. This approach, however, has a downside: we rarely talk about the new product lines or features we roll out. In fact, a few of our customers recently asked for features that were already available on Ubicloud.
So, we decided to write a marketing blog post that covers five new Ubicloud compute features. As we’re working to release new cloud services, we’re also adding more depth to our existing products based on your feedback. Please take a look below and tell us what you think.
With Ubicloud GPU Runners, you can now run your tests and automation workflows on GPUs. It’s perfect for teams that work with machine learning models, including large language models (LLMs), and want to integrate testing of ML components into their GitHub Actions workflows.
Ubicloud GPU runners come with an Nvidia RTX 4000 GPU (Ada Lovelace, 20 GB VRAM), 6 vCPUs, 32 GB RAM, and 180 GB of disk space. These runners are more powerful than GitHub’s default GPU runners and cost less than half as much ($0.032/minute).
To run your ML ops workloads on our GPU runners, all you need to do is change the value of runs-on in your GitHub Actions workflow file.
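The sketch below shows what that change might look like. The ubicloud-gpu label, the workflow name, and the pytest step are illustrative assumptions rather than required values; only the runs-on line matters.

```yaml
# Hypothetical workflow file, e.g. .github/workflows/gpu-tests.yml
name: GPU tests
on: [push]

jobs:
  ml-tests:
    # Point the job at a Ubicloud GPU runner instead of a GitHub-hosted one.
    # "ubicloud-gpu" is an assumed label; use the label from your Ubicloud setup.
    runs-on: ubicloud-gpu
    steps:
      - uses: actions/checkout@v4
      # Confirm the RTX 4000 (Ada) GPU is visible to the job.
      - name: Check GPU
        run: nvidia-smi
      # Run the ML test suite; pytest is only an example test runner.
      - name: Run ML tests
        run: pytest tests/
```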
Visit Use GPU Runners to get started and see how Ubicloud GPU Runners can transform your development process.
We have news for our regular GitHub Actions runners, too. Alongside our existing runners with 2, 4, 8, and 16 vCPUs, we now offer runners with 30 vCPUs. To take advantage of this, simply change runs-on to ubicloud-standard-30 in your GitHub Actions configuration and experience the enhanced performance.
Ubicloud also supports arm64 runners, enabling you to build and test on ARM architectures. We expanded our arm64 offerings with the new ubicloud-standard-30-arm type, which features 30 full ARM cores. This complements our existing lineup and provides greater flexibility for your ARM-based workflows.
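As a sketch of how the new labels fit into a workflow, the matrix job below runs the same build on both 30-vCPU runners. The ubicloud-standard-30 and ubicloud-standard-30-arm labels come straight from this post; the workflow name and build step are illustrative.

```yaml
# Illustrative workflow that exercises both new 30-vCPU runner types
name: Build on 30-vCPU runners
on: [push]

jobs:
  build:
    strategy:
      matrix:
        # x64 runner with 30 vCPUs and arm64 runner with 30 full ARM cores
        runner: [ubicloud-standard-30, ubicloud-standard-30-arm]
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      # A parallel build that can actually use the extra cores
      - name: Build
        run: make -j"$(nproc)"
```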
Explore these new options to optimize your CI/CD processes!
We’re excited to announce the expansion of our Compute service with two new options: standard-30 with 30 vCPUs and standard-60 with 60 vCPUs. These new VM types join our existing lineup.
With standard-30 and standard-60 VMs, you can now ensure optimal performance for data processing, batch processing, and high-performance applications.
These new sizes are also available for our PostgreSQL offering, allowing you to provision PostgreSQL servers with up to 60 vCPUs to support your demanding database workloads.
Our Elastic Compute (VMs) and PostgreSQL services now offer more flexibility in choosing compute and storage capacity. Previously, storage size was tied to the number of CPUs.
Now, you can choose your storage size independently of the CPU count, allowing for customized and efficient resource allocation.
For virtual machines, you can provision up to 2 TB of storage on larger VM types:
PostgreSQL servers can be provisioned with up to 4 TB of storage:
This increased flexibility ensures that you can optimize resource use and avoid both over- and under-provisioning.
We also improved our VM provisioning times. The median provisioning time on x64 architectures has been reduced from approximately 28 seconds to 19 seconds, measured from the receipt of the request to the point when the VM becomes connectable. The figure below illustrates the provisioning times for both x64 and arm64 VMs over the past two months, with the vertical axis representing median provisioning times in seconds (lower is better).
At a high level, these are the optimizations contributing to this improvement:
Control Plane: We enhanced our control plane to handle provisioning requests more efficiently.
Allocator: Our allocator now considers expected provisioning times when selecting the most suitable host for a virtual machine.
Data Plane: By fine-tuning hypervisor related settings and components, we decreased virtual machine boot times.
Additionally, for our Ubicloud GitHub Actions runners, we maintain a pool of pre-provisioned VMs. This pool further decreases observed provisioning times for GitHub runners and complements the optimizations described above.
In conclusion, over the past month we rolled out five improvements to Ubicloud’s compute services. These new features give you more flexibility in running your workloads on Ubicloud. If you have any questions or feedback on these or future improvements, please don’t hesitate to drop us an email at [email protected].