Supercomputing Glossary (A-Z), Vol. II

Introduction: Supercomputing Glossary Vol II

Welcome to Volume II of the Comprehensive Supercomputing Glossary.

As the company behind the powerful MeluXina supercomputer, LuxProvide is committed to making cutting-edge technology accessible and comprehensible. Dive in and expand your knowledge of the latest technologies that are shaping the future of computing.

A

AI Dev Toolkit
A suite of tools and libraries that simplify the development, debugging, and deployment of artificial intelligence models.

Agentic Workflow
An AI system design where autonomous agents operate independently or collaboratively to accomplish complex tasks.
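A minimal sketch of the idea, with all names (the tools, the fixed two-step plan) purely illustrative: an agent works through a task by selecting tools from a registry, feeding each tool's output into the next step. Real agentic systems let a language model choose the steps dynamically.

```python
# Illustrative sketch of an agentic loop. Both tools are stand-ins for
# real capabilities (search APIs, LLM calls); the plan here is fixed,
# whereas a real agent would decide its next step at runtime.

def search(query):
    # Stand-in for a real search tool.
    return f"results for '{query}'"

def summarise(text):
    # Stand-in for a real summarisation tool.
    return text[:20]

TOOLS = {"search": search, "summarise": summarise}

def run_agent(task):
    """Execute a two-step workflow: search, then summarise the results."""
    steps = [("search", task), ("summarise", None)]
    result = None
    for tool_name, arg in steps:
        tool = TOOLS[tool_name]
        result = tool(arg if arg is not None else result)
    return result
```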

B

Benchmarking Sandbox
A controlled environment for testing and comparing models or algorithms under standardised performance criteria.
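As a rough illustration (the candidate functions and workload size are invented for the example), a benchmarking sandbox boils down to running competing implementations under the same workload and timer and comparing the results:

```python
import timeit

# Tiny benchmarking harness: time each candidate on an identical
# workload and keep the best of several repeats to reduce noise.

def sum_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    return sum(range(n))

def benchmark(funcs, n=10_000, repeats=50):
    """Return {function name: best-of-repeats wall-clock time}."""
    return {
        f.__name__: min(timeit.repeat(lambda: f(n), number=1, repeat=repeats))
        for f in funcs
    }
```

A real sandbox would additionally pin hardware, isolate the environment, and record standardised metrics alongside raw timings.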

C

Code Repository
A managed storage for source code, typically integrated with version control systems like Git to track changes and enable collaboration.

Container Repository
A registry that stores container images (e.g. Docker), enabling reproducible software deployment across computing environments.

D

Data Repository
A structured storage system for datasets, often used for training, testing, and validating machine learning models.

Domain-Specific Algorithms
Algorithms tailored to solve problems in a particular sector, such as finance, health, or logistics.

E

Embeddings Serving
A service that provides vector representations of data (e.g. text, images) for use in semantic search, clustering, or retrieval.
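To make the idea concrete, here is a toy sketch where `embed()` is a stand-in for a real embedding model (it just counts letter frequencies); a production service would return learned vectors, but the search logic over them looks the same:

```python
import math

# Toy embeddings service: embed text into vectors, then rank documents
# by cosine similarity to a query vector. The embedding itself is a
# placeholder character-frequency count, not a learned model.

def embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, documents):
    """Return documents ranked by similarity to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
```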

F

Federated Learning
A distributed AI training method where models learn across multiple devices or servers without exchanging actual data.
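A toy sketch of one federated round in the FedAvg style: each client updates the model on its own private data, and only weight vectors (never the data) reach the server, which averages them. The "local training" here is a deliberately simplified nudge toward the client's data mean, standing in for real gradient descent.

```python
# One round of federated averaging over three clients. Raw data stays
# on the clients; only updated weights are sent to the server.

def local_update(weights, client_data, lr=0.1):
    """Toy local step: nudge each weight toward the client's data mean."""
    mean = sum(client_data) / len(client_data)
    return [w + lr * (mean - w) for w in weights]

def federated_average(client_weights):
    """Server step: coordinate-wise average of the clients' weights."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_model = [0.0, 0.0]
clients = [[1.0, 2.0], [3.0], [5.0, 7.0]]   # private per-client datasets
updates = [local_update(global_model, data) for data in clients]
global_model = federated_average(updates)
```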

H

Hybrid Quantum-Classical Workflows
Workflows that combine classical HPC resources with quantum algorithms to solve complex problems.

I

Image Understanding
The AI capability to interpret and extract meaning from images using techniques such as object detection and segmentation.

Image/Video Generation
The creation of synthetic visual content from models, typically using generative AI techniques like GANs or diffusion models.

Inference Serving
Deploying trained models to provide real-time predictions or decisions in a scalable production environment.
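Stripped to its essentials, a serving layer wraps a trained model in a request/response interface. In this sketch the "model" is a stand-in linear scorer with made-up weights; a real deployment adds batching, autoscaling, and hardware placement around the same shape:

```python
import json

# Minimal inference endpoint: parse a JSON request, run the model,
# return a JSON response. The model is a placeholder linear scorer.

def model_predict(features):
    weights = [0.5, -0.2, 1.0]  # illustrative, not actually trained
    return sum(w * x for w, x in zip(weights, features))

def handle_request(body):
    """Turn a JSON request body into a JSON prediction response."""
    request = json.loads(body)
    score = model_predict(request["features"])
    return json.dumps({"prediction": score})
```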

Integration HPC-QC
The technical interface enabling collaboration between high-performance computing and quantum computing systems.

J

JWT Generator
A tool to create JSON Web Tokens for secure user authentication and authorisation in API-driven environments.
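A JWT is just two base64url-encoded JSON segments (header and payload) plus a signature over them. This sketch produces an HS256 token with only the standard library; a production system should use a vetted JWT library and proper key management.

```python
import base64
import hashlib
import hmac
import json

# Generate a signed HS256 JSON Web Token: header.payload.signature,
# each segment base64url-encoded without padding.

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def generate_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    signature = hmac.new(secret.encode(), signing_input.encode(),
                         hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)
```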

L

LLMOps
Operational practices, tools, and pipelines for managing the lifecycle of large language models, from training to deployment.

M

Model Repository
A managed environment where trained AI models are stored, versioned, and shared across teams or services.

Model Training / Fine-tuning
The process of teaching a model from data (training), then adapting it to specific use cases by continuing training on domain-relevant data (fine-tuning).
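The two phases can be sketched with a deliberately tiny model: fit a 1-D linear model on broad "general" data, then continue training the same weights on a small domain-specific set. All the data here is invented for illustration.

```python
# Toy training-then-fine-tuning: plain stochastic gradient descent on
# mean squared error for the linear model y = w*x + b.

def train(w, b, data, lr=0.01, epochs=500):
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# Pre-training on broad data following roughly y = 2x.
general = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, b = train(0.0, 0.0, general)

# Fine-tuning: continue from the learned weights on domain data (y = 2x + 1).
domain = [(1.0, 3.0), (2.0, 5.0)]
w, b = train(w, b, domain)
```

The key point fine-tuning illustrates: the second call starts from the pre-trained `w, b` rather than from zero, so far less domain data is needed.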

O

OCR (Optical Character Recognition)
Technology that converts printed or handwritten text into machine-readable data.

P

Physics-Informed Machine Learning
A machine learning approach that embeds known physical laws, such as conservation equations or differential-equation constraints, into model training to improve accuracy and generalisation, especially where data is scarce.
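The core mechanism of physics-informed learning is a composite loss: a data-fit term plus a penalty for violating a known physical law. In this toy sketch (all numbers illustrative) a model v(t) = a·t for free-fall velocity should satisfy dv/dt = g, so the physics residual is simply (a - g):

```python
# Physics-informed loss for a one-parameter model v(t) = a * t.
# The physics term penalises deviation from the law dv/dt = g.

G = 9.81

def physics_informed_loss(a, data, weight=1.0):
    data_loss = sum((a * t - v) ** 2 for t, v in data) / len(data)
    physics_loss = (a - G) ** 2          # residual of dv/dt = g
    return data_loss + weight * physics_loss

# Noisy observations pull the fit away; the physics term pulls it back.
observations = [(1.0, 9.0), (2.0, 21.0)]
best_a = min((a / 100 for a in range(800, 1200)),
             key=lambda a: physics_informed_loss(a, observations))
```

Real physics-informed networks apply the same idea with neural models and differential operators evaluated by automatic differentiation.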

Q

Quantum Error Correction
A technique used to protect quantum information from decoherence and operational errors in quantum computers.
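The redundancy idea behind error correction can be shown classically with the 3-bit repetition code, the ancestor of the quantum bit-flip code: encode one logical bit in three physical bits and decode by majority vote. (Real quantum codes must correct errors without directly measuring the encoded state; this sketch only illustrates the redundancy principle.)

```python
# Classical analogue of the 3-qubit bit-flip code: triple redundancy
# plus majority-vote decoding survives any single bit-flip error.

def encode(bit):
    return [bit, bit, bit]

def apply_bit_flip(codeword, position):
    flipped = list(codeword)
    flipped[position] ^= 1
    return flipped

def decode(codeword):
    """Majority vote recovers the logical bit despite one flipped bit."""
    return 1 if sum(codeword) >= 2 else 0
```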

Quantum IDE
An integrated development environment designed for quantum programming, supporting frameworks and languages such as Qiskit, Q#, or Cirq.

QPU (Quantum Processing Unit)
The core computational unit of a quantum computer, analogous to CPUs and GPUs in classical systems.

R

RAG (Retrieval-Augmented Generation)
A technique that enhances language model outputs by retrieving relevant documents with traditional (non-graph-based) search and supplying them to the model as context.
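A minimal sketch of the retrieve-then-generate flow, with word overlap standing in for a real retriever and `generate()` a placeholder for an actual language model call:

```python
# Retrieval-augmented generation in miniature: score documents by word
# overlap with the question, prepend the best match to the prompt, then
# hand the grounded prompt to the model.

def retrieve(question, documents):
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, documents):
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

def generate(prompt):
    # Placeholder for a real LLM call operating on the grounded prompt.
    return "<model answer grounded in the retrieved context>"
```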

S

Secure Multi-Party Computation (SMPC)
A cryptographic technique that enables multiple parties to compute on data without revealing their inputs to each other.
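A common SMPC building block is additive secret sharing: each party splits its input into random shares that sum to the secret, so any single share reveals nothing, yet the parties can jointly compute a sum. A minimal sketch:

```python
import random

# Additive secret sharing: split each input into n random shares that
# sum to the secret modulo a large prime; sums of shares recombine to
# the sum of secrets without exposing any individual input.

MOD = 2**61 - 1

def share(secret, n_parties):
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def secure_sum(inputs, n_parties=3):
    """Share every input, sum shares per party, recombine the totals."""
    all_shares = [share(x, n_parties) for x in inputs]
    partial = [sum(col) % MOD for col in zip(*all_shares)]  # per-party sums
    return sum(partial) % MOD
```

Real protocols add secure channels and handle multiplication and malicious parties; this shows only the arithmetic core.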

Synthetic Data Generation
The creation of artificial data that mimics real datasets, used to augment training data or preserve privacy.
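In its simplest form (real pipelines use far richer generative models), this means fitting a distribution to the real data's statistics and sampling new records from it:

```python
import random
import statistics

# Simplest possible synthetic data generator: fit a Gaussian to the
# real data's mean and standard deviation, then sample fresh values.

def generate_synthetic(real_data, n_samples, seed=42):
    mu = statistics.mean(real_data)
    sigma = statistics.stdev(real_data)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n_samples)]
```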

T

Tensor Parallelism
A technique for distributing large model computations across multiple GPUs or nodes by splitting tensors.
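One common variant splits a weight matrix row-wise across devices: each device computes its slice of the output independently, and the slices are concatenated. A pure-Python sketch (the "devices" are just a sequential loop here):

```python
# Row-wise tensor parallelism for a matrix-vector product y = W x:
# each shard of W produces a disjoint slice of y, so devices need no
# communication until the final concatenation.

def matvec(rows, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in rows]

def parallel_matvec(W, x, n_devices=2):
    chunk = len(W) // n_devices
    shards = [W[i * chunk:(i + 1) * chunk] for i in range(n_devices)]
    # Each shard would run on its own GPU; here we loop sequentially.
    partial_outputs = [matvec(shard, x) for shard in shards]
    return [y for part in partial_outputs for y in part]  # concatenate
```

Column-wise splits are the other standard choice; they require a summing all-reduce instead of a concatenation.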

Z

Zero-Shot Learning
A machine learning capability where a model performs tasks without having been explicitly trained on examples of those tasks.
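A toy sketch of zero-shot classification: the model has never seen labelled examples of the classes; it simply scores the input against natural-language label descriptions. Word overlap stands in for the semantic similarity a real pretrained model would provide, and all labels and texts below are invented for illustration.

```python
# Zero-shot classification by similarity to label descriptions: no
# per-class training examples, only descriptions of the classes.

def similarity(text, description):
    a = set(text.lower().split())
    b = set(description.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def zero_shot_classify(text, label_descriptions):
    """Pick the label whose description best matches the text."""
    return max(label_descriptions,
               key=lambda label: similarity(text, label_descriptions[label]))
```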