Artificial intelligence and confidential computing appear to be locked in an irresolvable tension: AI poses a latent threat to confidential computing, yet it can also help to implement it. The answer to this apparent contradiction is Confidential AI. The concept identifies the major uncertainties around AI and makes them manageable, with priority given to data and data sovereignty. At the same time, AI can be used in a targeted and secure manner to enhance collaboration and communication applications. VNC, a developer of enterprise applications based on open source, outlines the essential elements:
Decentralized LLMs: Large Language Models (LLMs) typically run on the vendors’ proprietary hyperscaler platforms. What happens to the data there is unknown. Therefore, it makes sense to download the LLMs to your own platforms using appropriate toolkits (such as Intel OpenVINO). This way, both the data and the applications remain in your own data center and are protected from unauthorized access.
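As an illustration, a model can be exported and queried entirely on local hardware with the optimum-intel OpenVINO integration. This is a minimal sketch in Python, assuming the optimum-intel and transformers packages; the model ID is only an example and not part of VNC's stack:

```python
# Minimal sketch: exporting and running an LLM locally with Intel OpenVINO
# via the optimum-intel integration. The model ID is only an example;
# any Hugging Face causal language model can be substituted.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example openly available model

# export=True converts the model to OpenVINO IR on first load; afterwards
# inference runs entirely on local hardware, with no hyperscaler involved.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Summarise our data-retention policy in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```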
Local AI for application optimization: Stripped-down models of these LLMs can be integrated directly into applications as part of dedicated software stacks. AI functions such as intelligent assistants can thus be added directly and transparently at any time, and on modular platforms they are potentially available to all applications across modules in the same form.
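The following is a hypothetical sketch of how such a stripped-down local model could be wrapped once and shared across application modules; the LocalAssistant class and the module names are purely illustrative, not VNC's implementation:

```python
# Hypothetical sketch: a single local assistant wrapped once and reused by
# several application modules. Class and method names are illustrative only.
class LocalAssistant:
    """Thin wrapper around a locally hosted model that any module can share."""

    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer

    def ask(self, prompt: str, max_new_tokens: int = 128) -> str:
        inputs = self.tokenizer(prompt, return_tensors="pt")
        output = self.model.generate(**inputs, max_new_tokens=max_new_tokens)
        return self.tokenizer.decode(output[0], skip_special_tokens=True)

# The same instance can back mail, groupware, or chat modules alike:
# assistant = LocalAssistant(model, tokenizer)
# summary = assistant.ask("Summarise this e-mail thread: ...")
```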
Train models using the local knowledge base: LLMs are usually trained on large amounts of data. Run locally, however, they can also be trained and optimized quickly, specifically and practically by querying the company’s own database. This also keeps them transparent and auditable at any time.
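One common way to realize this kind of grounding is retrieval-augmented prompting against the internal database. The sketch below assumes a hypothetical search_knowledge_base helper and the LocalAssistant wrapper from the previous example; it illustrates the pattern, not VNC's implementation:

```python
# Sketch of grounding answers in the company's own knowledge base via
# retrieval-augmented prompting. search_knowledge_base is a hypothetical
# helper; assistant is the LocalAssistant wrapper from the previous sketch.
def answer_from_knowledge_base(question, assistant, search_knowledge_base, top_k=3):
    # 1. Retrieve the most relevant internal documents for the question.
    passages = search_knowledge_base(question, top_k=top_k)

    # 2. Build a prompt that keeps the model anchored to local, auditable sources.
    context = "\n\n".join(passages)
    prompt = (
        "Answer using only the internal documents below.\n\n"
        f"Documents:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. Generation runs on the local model, so no data leaves the data center.
    return assistant.ask(prompt)
```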
Acceleration through indexing: In addition, access to the local knowledge base can be accelerated by indexing the databases. Queries are no longer made against the entire database, but only against the indexes. This increases speed, availability and accuracy – and therefore productivity.
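A minimal sketch of such an index, using sentence-transformers and FAISS as example open-source building blocks (both are assumptions, not a prescribed stack), might look like this:

```python
# Minimal indexing sketch: embed the documents once, then query only the
# index instead of the full database. sentence-transformers and FAISS are
# example open-source building blocks, not a prescribed stack.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "VPN access requires hardware tokens.",
    "Backups are kept for 90 days in the Berlin data center.",
    "Customer data may not leave the EU.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # example local embedding model
doc_vectors = encoder.encode(documents, convert_to_numpy=True)
faiss.normalize_L2(doc_vectors)                     # normalise for cosine similarity

index = faiss.IndexFlatIP(doc_vectors.shape[1])     # inner-product index over embeddings
index.add(doc_vectors)

query = encoder.encode(["How long are backups stored?"], convert_to_numpy=True)
faiss.normalize_L2(query)
scores, ids = index.search(query, k=2)              # hits the index, not the database
print([documents[i] for i in ids[0]])
```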
Open source: Open source is the natural enemy of proprietary systems. To prevent vendor lock-in, chronically non-transparent vendor-specific AI platforms (PaaS) should not only be avoided for confidential computing; open source should also be the first choice for AI applications at the software level (SaaS).
“Confidential AI gives companies and users back the sovereignty over their own data and applications,” emphasizes Andrea Wörrlein, Managing Director of VNC in Berlin and Member of the Board of VNC AG in Zug. “Artificial intelligence can thus be used in a secure, targeted, independent and practical way to increase productivity and job satisfaction.”
Source: VNC