Considerations of deploying AI services in a distributed method
draft-hong-nmrg-ai-deploy-05

Document type: Expired Internet-Draft (individual), expired & archived
Authors: Yong-Geun Hong, Oh Seokbeom, Joo-Sang Youn, SooJeong Lee, Seung-Woo Hong, Ho-Sun Yoon
Last updated: 2024-04-25 (latest revision 2023-10-23)
RFC stream: (None)
Intended RFC status: (None)
Stream state: (No stream defined)
Consensus boilerplate: Unknown
RFC Editor note: (None)
IESG state: Expired
Telechat date: (None)
Responsible AD: (None)
Send notices to: (None)

This Internet-Draft is no longer active. A copy of the expired Internet-Draft remains available in the archive.

Abstract

As AI technology has matured and begun to be applied in various fields, it has shifted from running only on very high-performance servers to also running on small hardware, including microcontrollers, low-performance CPUs, and AI chipsets. In this document, we consider how to configure the network and the system, from the perspective of AI inference services, to provide AI services in a distributed method. We also describe the points to be considered in an environment where a client connects to a cloud server and an edge device and requests an AI service. Some use cases of deploying AI services in a distributed method, such as self-driving cars and digital twin networks, are described.
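The abstract's scenario, where a client can request inference from either a cloud server or an edge device, implies a target-selection decision. The following is a minimal illustrative sketch (not from the draft itself) of one plausible policy: choose the most accurate target whose end-to-end latency fits the client's budget. All names, numbers, and the selection rule are assumptions for illustration only.

```python
# Hypothetical sketch of client-side target selection between an edge
# device and a cloud server for AI inference. The latency/accuracy
# figures and the policy are illustrative assumptions, not from the draft.
from dataclasses import dataclass

@dataclass
class InferenceTarget:
    name: str
    rtt_ms: float        # measured network round-trip time to the target
    compute_ms: float    # expected model execution time on the target
    accuracy: float      # expected accuracy (edge models are often smaller)

def choose_target(targets, latency_budget_ms):
    """Pick the most accurate target whose total latency fits the budget;
    fall back to the overall fastest target if none fits."""
    feasible = [t for t in targets
                if t.rtt_ms + t.compute_ms <= latency_budget_ms]
    if feasible:
        return max(feasible, key=lambda t: t.accuracy)
    return min(targets, key=lambda t: t.rtt_ms + t.compute_ms)

edge = InferenceTarget("edge-device", rtt_ms=5, compute_ms=40, accuracy=0.88)
cloud = InferenceTarget("cloud-server", rtt_ms=60, compute_ms=10, accuracy=0.95)

# A tight latency budget keeps inference on the nearby edge device,
# while a looser budget allows the more accurate cloud model.
print(choose_target([edge, cloud], latency_budget_ms=50).name)   # edge-device
print(choose_target([edge, cloud], latency_budget_ms=100).name)  # cloud-server
```

Such a policy captures the trade-off the draft's distributed deployment raises: edge devices reduce network latency but typically host smaller, less accurate models than the cloud.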


(Note: The e-mail addresses provided for the authors of this Internet-Draft may no longer be valid.)