Scientel Releases Mix-Of-Vendor GPU Support for AMD, Intel and Nvidia in a Single Large Language Model (LLM) System

Scientel - Gensonix AI DB

Powered by Gensonix AI DB, Scientel's MOV-LLM solution supports GPUs from AMD, Intel and Nvidia in a single LLM system

“We are thrilled to announce ultimate flexibility and scalability for our LLM systems by supporting AMD, Intel and Nvidia GPUs, all in a single system.”
— Norman Kutemperor, CEO

ANN ARBOR, MI, UNITED STATES, April 21, 2026 /EINPresswire.com/ -- Large Language Model (LLM) systems are becoming a common tool for most businesses and even for advanced individual users. But the cost of these systems is rising rapidly, especially as demand grows, because the current generation of GPUs is often not powerful enough to run such systems on its own. Additional GPUs are therefore typically required.

This creates a problem: most systems require GPUs of the same type on the same network to ensure proper connectivity and compatibility, which adds to the cost, size and power demands of these systems. Moreover, users who upgrade from a less powerful GPU to a more powerful one often abandon the older card because the system cannot accommodate both. When more GPUs are needed, it is usually not possible to pair two different types of GPUs in the same system due to vendor restrictions and other incompatibilities. Most users are thus forced to upgrade with a compatible GPU model, which adds further cost and complexity.

The Scientel MOV-LLM system solves this problem. It is an ultrapowerful network of LLM systems composed of multiple nodes that can utilize GPUs from different manufacturers and/or of different models. Scientel currently supports GPUs from AMD, Intel and Nvidia, and most standard GPUs work in the current configurations. Each node can be configured with a language model sized to its GPU's power, yielding a system that can mix these GPUs in any combination.

This feature is highly beneficial for users who have different GPUs available, regardless of manufacturer, all of which can be put to good use. The result is substantially more processing power and capability for their LLM system at minimal additional cost and with the best GPU utilization.

Scientel’s LLM systems are designed to operate in conjunction with its own Gensonix NewSQL AI DB. These mid-to-large-scale LLM systems are more efficient because they support Relational, Document, Text and Vector data in distinctly different stores within a single AI DB. Storing each data type in its native form allows for very high efficiency compared to competing systems and often requires less hardware. As a single data repository, the Gensonix AI DB can handle the various data storage tasks that smart AI applications often require, without converting data to other formats such as XML or JSON for storage, further enhancing the storage efficiency of its systems.

About Scientel

Scientel, a US–based IT systems technology company, is a single–source supplier of complete LLM systems. We design and produce high–end servers with our GENSONIX™ AI DB software pre–installed. For specific applications, we also customize hardware and software that optimizes performance metrics. For example, our Elastic Scaling Servers can support thousands of compute nodes in a single cluster that can meet customer speed requirements for virtually any data size.

Our specialty is NewSQL Database Management System (DBMS) design applications and systems integration, combined with IT consulting and support. This includes applications in AI, Big Data, and commercial intranets for “beyond mainframe–level” Large Data Warehouse Appliances.
Business customers can take advantage of our capabilities in advanced Business Intelligence and Data Analytics to grow their business by handling AI and Big Data more cost–effectively and with greater insights compared to their peers. Scientific, government, and other organizations can also utilize these capabilities to efficiently process AI and Big Data.

Our Large Language Model (LLM) systems utilize our GENSONIX AI DB for storing all AI data, including structured, document, text and vector data sets, in a single database. We offer LLM solutions based on AMD, Intel and Nvidia GPUs, as well as LLMs that utilize no GPUs. We also offer LLMs with a mix of GPUs and CPUs and/or with different-size language models to fit customer application requirements.

Our main office is conveniently located in Ann Arbor, the western hub of Southeast Michigan’s Innovation Corridor. Let us bring our advanced technology to your organization for successful operations. Contact us for more details or to purchase our products.

Norman Kutemperor
Scientel IT Corp
+1 248-433-4700
norm@scientel.com
Visit us on social media:
X

Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
