How to Rename Security Group(s) in Customer Portal

Introduction: This guide provides a focused, step-by-step procedure for the Customer Portal User on how to rename a Security Group in the Customer Portal. Renaming security groups is a necessary administrative task, particularly when the existing name is confusing, or when the user wants to make the name of the Security Group more organized and …
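
The article itself walks through the portal GUI, but the same rename can be scripted if the NovaGPU platform sits on an OpenStack-compatible API, which the Nova-style terminology hints at but this excerpt does not confirm. The sketch below uses the openstacksdk Python library; the cloud profile "novagpu" and both group names are placeholders.

```python
# Hedged sketch: rename a security group through an OpenStack-compatible API
# with openstacksdk. The cloud profile and group names are hypothetical.
import openstack

conn = openstack.connect(cloud="novagpu")  # credentials come from clouds.yaml

sg = conn.network.find_security_group("web-servers-old")    # confusing old name
if sg is not None:
    conn.network.update_security_group(sg, name="web-servers-prod")
    print(f"Renamed security group {sg.id} to web-servers-prod")
```

Renaming changes only the label; the group's rules and the instances it is attached to are unaffected, which is why the portal treats it as a purely administrative action.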

How to Access NovaGPU for Your Domain in Customer Portal

Introduction: This guide is specifically designed to enable Customer Portal Users to efficiently access and manage their dedicated NovaGPU Site. To initiate access to these specialized GPU resources, users begin in the customer interface and follow the steps linked to their domain registration. The required sequence of steps is My Service > Domain …

Introduction to Transactions in Customer Portal

Introduction: This guide introduces the tools available to Customer Portal users for monitoring and managing their monthly transactions, giving them essential insight into the financial aspects of their hosted services and enabling transparent and efficient resource management. Accessible within the Customer Portal by navigating to NovaCloud and then selecting “Transactions”, this feature …

How to Delete GPU Instance(s) in Customer Portal

Introduction: This guide provides a step-by-step process for Customer Portal users who need to delete their NovaGPU instances. This action is typically performed when a customer’s ML/AI project is complete and the NovaGPU is no longer needed to train, fine-tune, or deploy models. The NovaGPU service is offered as GPU as a …
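
For readers who automate cleanup once a project wraps up, a minimal sketch follows. It assumes, without confirmation from this excerpt, that the portal fronts an OpenStack-compatible compute API reachable through openstacksdk; the instance name is a placeholder.

```python
# Hedged sketch: delete a finished GPU instance via openstacksdk.
import openstack

conn = openstack.connect(cloud="novagpu")

server = conn.compute.find_server("gpu-train-01")   # placeholder instance name
if server is not None:
    conn.compute.delete_server(server)
    conn.compute.wait_for_delete(server)            # block until it is fully removed
    print("Instance deleted")
```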

How to Rename GPU Instance(s) in Customer Portal

Introduction: This Knowledge Base article provides a concise, step-by-step guide detailing how to rename your GPU Instance(s) directly within the Customer Portal. Customers who utilize GPU infrastructure, such as NovaGPU – GPU as a Service, frequently manage multiple resources dedicated to varied tasks like AI training, fine-tuning, or deployment. When the initial name of a …
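
Under the same unconfirmed assumption of an OpenStack-compatible backend, a rename could also be done from a script; both instance names below are hypothetical.

```python
# Hedged sketch: give a GPU instance a more descriptive name via openstacksdk.
import openstack

conn = openstack.connect(cloud="novagpu")

server = conn.compute.find_server("instance-7f3a")                 # unhelpful default name
if server is not None:
    conn.compute.update_server(server, name="llm-finetune-a100")  # clearer, purpose-based name
```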

How to Rebuild GPU Instance(s) in Customer Portal

Introduction: Utilizing your powerful GPU Instances for crucial AI/ML workloads such as training, fine-tuning, and deployment requires operational flexibility and the ability to adapt to evolving project needs. When the current configuration or state of your GPU Instance becomes unsuitable or is no longer optimally aligned with the requirements of ongoing or future development efforts, …
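
Assuming, again without confirmation here, that NovaGPU exposes an OpenStack-style compute API, a rebuild could be sketched as below. A rebuild reinstalls the instance from a chosen image while keeping its ID and network ports; the image name is a placeholder, and keyword names can differ slightly between openstacksdk releases.

```python
# Hedged sketch: rebuild a GPU instance onto a fresh image via openstacksdk.
import openstack

conn = openstack.connect(cloud="novagpu")

server = conn.compute.find_server("gpu-train-01")        # placeholder instance
image = conn.image.find_image("ubuntu-22.04-cuda-12")    # placeholder image name
if server and image:
    # note: everything on the instance's root disk is replaced by the new image
    conn.compute.rebuild_server(server, image=image.id, name=server.name)
```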

How to Shelve GPU Instance(s) in Customer Portal

Introduction: For NovaGPU users who utilize our dedicated GPU as a Service platform for intensive operations like training, fine-tuning, and deploying AI models, efficient resource management is paramount to achieving optimal cost efficiency. High-demand computing resources, whether through public cloud hosting, private cloud hosting, or specialized GPU servers, generate continuous hourly charges. The need to …
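
If the backend is OpenStack-compatible (an assumption this excerpt does not confirm), shelving an idle instance might look like the sketch below. Whether and how hourly GPU charges change while an instance is shelved is provider policy and is not covered in this snippet.

```python
# Hedged sketch: shelve an idle GPU instance via openstacksdk.
import openstack

conn = openstack.connect(cloud="novagpu")

server = conn.compute.find_server("gpu-train-01")   # placeholder instance name
if server is not None and server.status == "ACTIVE":
    conn.compute.shelve_server(server)
    # status moves to SHELVED, then SHELVED_OFFLOADED once the compute host is released
```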

How to Unshelve GPU Instance(s) in Customer Portal

Introduction: The NovaGPU service is a dedicated GPU as a Service (GaaS) platform provided by IP ServerOne. This platform is built to handle highly intensive computing operations, primarily supporting Artificial Intelligence (AI) and Machine Learning (ML) workloads. These operations include tasks such as model training, fine-tuning, and the deployment of AI models. NovaGPU is marketed …
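
The companion operation, bringing a shelved instance back into service, could be scripted as follows under the same OpenStack-compatibility assumption and with the same placeholder name.

```python
# Hedged sketch: unshelve a previously shelved GPU instance via openstacksdk.
import openstack

conn = openstack.connect(cloud="novagpu")

server = conn.compute.find_server("gpu-train-01")
if server is not None and server.status.startswith("SHELVED"):
    conn.compute.unshelve_server(server)
    server = conn.compute.wait_for_server(server, status="ACTIVE", wait=600)
    print(f"{server.name} is back online")
```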

How to Launch GPU Instance(s) from Volume in Customer Portal

Introduction: This guide outlines the precise steps for Customer Portal users looking to deploy specialized computing resources, detailing how to launch GPU Instances from a Volume. This functionality allows you to instantly spin up a NovaGPU instance from a pre-existing volume or snapshot, which is essential for tasks requiring graphical processing power. Launching the instance …
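
A programmatic equivalent, assuming an OpenStack-compatible backend and using openstacksdk's higher-level cloud layer, might look like the following; the flavor, network, and volume names are all placeholders.

```python
# Hedged sketch: boot a new GPU instance from an existing bootable volume.
import openstack

conn = openstack.connect(cloud="novagpu")

flavor = conn.get_flavor("a100.1gpu.16c.64g")    # placeholder GPU flavor
volume = conn.get_volume("ml-workspace-vol")     # pre-existing bootable volume

server = conn.create_server(
    name="gpu-from-volume-01",
    flavor=flavor,
    network="private-net",       # placeholder network name
    boot_volume=volume.id,       # boot from the volume instead of an image
    wait=True,
)
print(server.status)             # expect ACTIVE once the boot completes
```

Booting from a volume keeps the root disk on persistent storage, which is why the article pairs this launch path with pre-existing volumes or snapshots.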

How to Stop GPU Instance(s) in Customer Portal

Introduction: The NovaGPU platform provides GPU as a Service, offering affordable and scalable on-demand GPU resources primarily utilized for compute-intensive tasks like training, fine-tuning, and deploying AI/ML workloads. Since all instances are typically charged on an hourly basis once launched, the prompt deactivation of resources that are temporarily not in use is fundamental for maintaining …
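
Under the same unconfirmed OpenStack-compatibility assumption, prompt deactivation could also be scripted; the instance name is again a placeholder.

```python
# Hedged sketch: stop a running GPU instance via openstacksdk. A stopped
# instance powers off but keeps its disks, so it can be started again later.
import openstack

conn = openstack.connect(cloud="novagpu")

server = conn.compute.find_server("gpu-train-01")
if server is not None and server.status == "ACTIVE":
    conn.compute.stop_server(server)
    conn.compute.wait_for_server(server, status="SHUTOFF", wait=300)
```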