A Massive Data Migration Framework
draft-yangcan-ietf-data-migration-standards-02
Note: this is an older version of an Internet-Draft whose latest
revision state is "Expired".

   Document type:          Internet-Draft (individual submission)
   Authors:                Can Yang, Yu Liu, Cong Chen, Ge Chen,
                           Yukai Wei
   Last updated:           2019-05-30 (latest revision 2018-12-05)
   RFC stream:             (none)
   Consensus boilerplate:  Unknown
   IESG state:             I-D Exists
   Responsible AD:         (none)
Yang, et al.              Expires December 1, 2019
Internet-Draft          Data Migration Standards               May 2019

   o  If the migration process is interrupted, it is OPTIONAL for the
      framework to support automatic restart of the migration process,
      continuing the migration from where it left off.  Additionally,
      the framework needs to be able to inform the user of such an
      abnormal interruption in the following ways:

      *  It MUST support popping up an alert box on the screen of the
         user;

      *  It SHALL support notifying users by email;

      *  It is OPTIONAL to notify users through an instant messenger
         such as WeChat or QQ;

3.4.  Scale of Migrated Table

3.4.1.  Full Table Migration

   This framework MUST support the migration of all tables in a
   relational database to at least two of the following types of target
   storage containers:

   o  HDFS
   o  HBASE
   o  HIVE

3.4.2.  Single Table Migration

   This framework MUST allow users to specify a single table in a
   relational database and migrate it to at least two of the following
   types of target storage containers:

   o  HDFS
   o  HBASE
   o  HIVE

3.4.3.  Multi-table Migration

   This framework MUST allow users to specify multiple tables in a
   relational database and migrate them to at least two of the
   following types of target storage containers:

   o  HDFS
   o  HBASE
   o  HIVE

3.5.  Split-by

   This framework needs to meet the following requirements on split-by.

3.5.1.  Single Column

   1.  The framework MUST allow the user to specify a single column of
       the data table (usually the table's primary key), then slice the
       data in the table into multiple parallel tasks based on this
       column, and migrate the sliced data to one or more of the
       following target data containers respectively:

       *  HDFS
       *  HBASE
       *  HIVE

       The data table column can be specified in the following ways:

       +  Users can specify it freely;
       +  Users can specify it linearly;
       +  Users can select an appropriate column for the segmentation
          based on the information entropy of the selected column data;
   2.  The framework SHALL allow the user to query the boundaries of
       the specified column in the split-by, then slice the data into
       multiple parallel tasks and migrate the data to one or more of
       the following target data containers:

       *  HDFS
       *  HBASE
       *  HIVE

3.5.2.  Multiple Columns

   This framework MAY allow the user to specify multiple columns in the
   data table, slice the data linearly into multiple parallel tasks,
   and then migrate the data to one or more of the following target
   data containers:

   o  HDFS
   o  HBASE
   o  HIVE

3.5.3.  Non-linear Segmentation

   It is OPTIONAL for this framework to support non-linear intelligent
   segmentation of the data on one or more columns and then migrate the
   data to one or more of the target data containers below.

   Non-linear intelligent segmentation refers to:

   *  Adaptive segmentation based on the distribution (density) of the
      values of numerical columns;
   *  Adaptive segmentation based on the distribution of entropy of
      subsegments of a column;
   *  Adaptive segmentation based on a neural network predictor;

   The target data containers include:

   *  HDFS
   *  HBASE
   *  HIVE

3.6.  Conditional Query Migration

   This framework SHALL allow users to specify query conditions, then
   query out the corresponding data records and migrate them.

3.7.  Dynamic Detection of Data Redundancy

   It is OPTIONAL for the framework to allow users to add data
   redundancy labels and label communication mechanisms, so that it can
   detect redundant data dynamically during data migration and achieve
   non-redundant migration.  The specific requirements are as follows:

   o  The framework SHALL be able to perform deep granulation
      processing on the piece of data content to be sent; that is, the
      content segment to be sent is further divided into smaller-sized
      data sub-blocks.
   o  The framework SHALL be able to perform feature calculation and
      form a grain header for each of the decomposed particles.  The
      granular header information includes, but is not limited to, the
      grain feature amount, grain data fingerprint, unique grain ID
      number, particle generation time, source address, and destination
      address.

   o  The framework SHALL be able to inspect the granular header
      information to determine the transmission status of each
      decomposed information granule.  If the current information
      granule to be sent is already present at the receiving end, the
      content of the granule is not sent; otherwise the current granule
      is sent.

   o  After all the fragments of the data have been transferred, the
      framework SHALL be able to reassemble all the fragments and store
      the data on the receiving disk.

3.8.  Data Migration with Compression

   During the data migration process, the data is not compressed by
   default.  This framework MUST support at least one of the following
   data compression encoding formats, allowing the user to compress and
   migrate the data:

   o  GZIP
   o  BZIP2

3.9.  Updating Mode of Data Migration

3.9.1.  Appending Migration

   This framework SHALL support the migration of appending data to
   existing datasets in HDFS.

3.9.2.  Overwriting the Import

   When importing data into HIVE, the framework SHALL support
   overwriting the original dataset and saving the result.
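The granulation, fingerprinting, and send-only-if-absent behaviour required in Section 3.7 could be sketched as follows.  This is a minimal illustration, not the framework's implementation: the 4096-octet grain size, the SHA-256 fingerprint, and the in-memory set standing in for the receiver's known-fingerprint state are all assumptions made for the example.

```python
import hashlib
import time

def granulate(data: bytes, grain_size: int = 4096):
    """Split a content segment into fixed-size sub-blocks (grains),
    each carrying a small header.  The header fields mirror the list
    in Section 3.7; the exact names here are illustrative."""
    grains = []
    for gid, off in enumerate(range(0, len(data), grain_size)):
        body = data[off:off + grain_size]
        grains.append({
            "grain_id": gid,                                  # unique grain ID number
            "fingerprint": hashlib.sha256(body).hexdigest(),  # grain data fingerprint
            "size": len(body),                                # grain feature amount (length)
            "created": time.time(),                           # particle generation time
            "body": body,
        })
    return grains

def transfer(grains, receiver_fingerprints: set):
    """Send only grains whose fingerprint is absent at the receiving
    end; return the reassembled data and the number of payload bytes
    actually transmitted.  Reassembly is simulated with local copies."""
    sent = 0
    parts = []
    for g in grains:
        if g["fingerprint"] not in receiver_fingerprints:
            sent += g["size"]                      # grain actually transmitted
            receiver_fingerprints.add(g["fingerprint"])
        parts.append(g["body"])                    # receiver reassembles in grain order
    return b"".join(parts), sent
```

Migrating the same segment a second time against the same receiver state then transmits zero payload bytes, which is the non-redundant behaviour the section asks for.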
3.10.  The Encryption and Decryption of Data Migration

   This framework needs to meet the following requirements:

   o  It MAY support data encryption at the source; the received data
      should then be decrypted and stored on the target platform;

   o  It MUST support authentication when getting data from the
      migration source;

   o  It SHALL support the verification of identity and permission when
      accessing the target platform of the data migration;

   o  During the process of data migration, it SHOULD support data
      consistency;

   o  During the process of data migration, it MUST support data
      integrity;

3.11.  Incremental Migration

   The framework SHOULD support incremental migration of table records
   in a relational database, and it MUST allow the user to specify a
   field value as "last_value" in the table in order to characterize
   the row record increment.  The framework SHOULD then migrate those
   records in the table whose field value is greater than the specified
   "last_value", and afterwards update the last_value.

3.12.  Real-Time Synchronization Migration

   The framework SHALL support real-time synchronous migration of
   updated data and incremental data from a relational database to one
   or more of the following target data containers:

   o  HDFS
   o  HBASE
   o  HIVE

3.13.  The Direct Mode of Data Migration

   This framework MUST support data migration in direct mode, which can
   increase the data migration rate.  Note: this mode is supported only
   for MYSQL and POSTGRESQL.

3.14.  The Storage Format of Data Files

   This framework MUST allow saving the migrated data in at least one
   of the following data file formats:

   o  SEQUENCE
   o  TEXTFILE
   o  AVRO

3.15.  The Number of Map Tasks

   This framework MUST allow the user to specify a number of map tasks,
   so as to start a corresponding number of map tasks for migrating
   large amounts of data in parallel.
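The single-column split-by of Section 3.5.1, combined with the user-chosen number of map tasks of Section 3.15, could be sketched as below.  The even-width slicing policy and the function names are assumptions for illustration; tools such as Sqoop [sqoop] perform a similar min/max boundary query before partitioning the key space.

```python
def split_boundaries(rows, split_col, num_mappers):
    """Query the min/max boundaries of the split-by column and slice
    the key space into `num_mappers` contiguous half-open ranges
    [start, end), one per parallel map task (even-width policy)."""
    lo = min(r[split_col] for r in rows)
    hi = max(r[split_col] for r in rows)
    width = (hi - lo + 1) / num_mappers
    return [(lo + round(i * width), lo + round((i + 1) * width))
            for i in range(num_mappers)]

def run_tasks(rows, split_col, num_mappers):
    """Each 'map task' migrates the rows whose split column falls in
    its range; here a task simply collects its slice of rows."""
    slices = []
    for start, end in split_boundaries(rows, split_col, num_mappers):
        slices.append([r for r in rows if start <= r[split_col] < end])
    return slices
```

In a real migration each range would become an independent task issuing a range-restricted query against the source table, so the slices can be transferred in parallel.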
3.16.  The Selection of the Elements in a Table to Be Migrated

   o  The specification of columns

      This framework MUST support the user specifying the data of one
      or more columns in a table to be migrated.

   o  The specification of rows

      This framework SHOULD support the user specifying the range of
      rows in a table to be migrated.

   o  The combined specification of columns and rows

      This framework MAY optionally support the user specifying both
      the range of rows and the columns in a table to be migrated.

3.17.  Visualization of Migration

3.17.1.  Dataset Visualization

   After the framework has migrated the data in the relational
   database, it MUST support the visualization of the dataset in the
   cloud platform.

3.17.2.  Visualization of Data Migration Progress

   The framework SHOULD support dynamically showing the progress to
   users in graphical mode while migrating.

3.18.  Smart Analysis of Migration

   The framework MAY provide automated migration proposals to
   facilitate the user's estimation of migration workload and costs.

3.19.  Task Scheduling

   The framework SHALL support the user setting various migration
   parameters (such as the number of map tasks, the storage format of
   data files, the type of data compression, and so on) and the task
   execution time, and then scheduling the off-line/online migration
   tasks accordingly.

3.20.  The Alarm of Task Error

   When a task fails, the framework MUST at least support notifying
   stakeholders through a predefined channel.

3.21.  Data Export from Cloud to RDBMS

3.21.1.  Data Export Diagram

   Figure 2 shows the framework's working diagram of exporting data.
      +---------+    (1)    +----------------+
      |         |---------->|   WebServer    |
      | Browser |           |  +-----------+ |
      |         |           |  |   DMOW    | |
      +---------+           |  +-----------+ |
                            +----------------+
                                    |
                                    | (2)
                                    v
      +-------------+    (3)    +-----------------------+
      |             |<----------|    Cloud Platform     |
      | Data Source |           | +-----------------+   |
      |             |           | | Migration Engine|   |
      +-------------+           | +-----------------+   |
                                +-----------------------+

                     Figure 2: Reference Diagram

   The workflow of exporting data through the framework is as follows:

   Step (1) in the figure means that users submit a data migration
   request to DMOW through the browser (the request includes the cloud
   platform information, the information of the target relational
   database, and related migration parameter settings);

   Step (2) in the figure means that DMOW submits the user's data
   migration request to the cloud platform's migration engine;

   Step (3) in the figure means that the migration engine performs data
   migration tasks based on the migration requests it receives, so as
   to migrate data from the cloud platform to the relational database;

3.21.2.  Full Export

   The framework MUST at least support exporting data from HDFS to one
   of the following relational databases:

   o  SQLSERVER
   o  MYSQL
   o  ORACLE

   The framework SHALL support exporting data from HBASE to one of the
   following relational databases:

   o  SQLSERVER
   o  MYSQL
   o  ORACLE

   The framework SHALL support exporting data from HIVE to one of the
   following relational databases:

   o  SQLSERVER
   o  MYSQL
   o  ORACLE

3.21.3.  Partial Export

   The framework SHALL allow the user to specify a range of keys on the
   cloud platform and export the elements in the specified range to a
   relational database, including exporting into a subset of columns.

3.22.  The Merger of Data

   The framework SHALL support merging data from different directories
   in HDFS and storing the result in a specified directory.
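Under the assumption that cloud-side data can be modelled as a key-to-row mapping, the partial export of Section 3.21.3 reduces to a key-range scan with an optional column projection.  A minimal sketch (a real implementation would scan HBASE/HDFS and write rows into the target RDBMS):

```python
def partial_export(table, key_range, columns=None):
    """Export rows whose key falls in the half-open range
    [key_range[0], key_range[1]), optionally projecting onto a
    subset of columns.  `table` maps key -> row dict; the mapping
    stands in for the cloud-side store in this sketch."""
    lo, hi = key_range
    out = []
    for key in sorted(table):
        if lo <= key < hi:
            row = table[key]
            if columns is not None:
                row = {c: row[c] for c in columns}   # subset of columns
            out.append((key, row))
    return out
```

The returned `(key, row)` pairs would then be batched into INSERT statements against the target relational database.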
3.23.  Column Separator

   The framework MUST allow the user to specify the separator placed
   between fields during the migration process.

3.24.  Record Line Separator

   The framework MUST allow the user to specify the separator placed
   between record lines after the migration is complete.

3.25.  The Mode of Payment

   1.  One-way payment mode

       *  In the framework, by default, users SHALL pay for downloading
          data from the cloud platform; uploading data from a
          relational database to a cloud platform is free;

       *  Alternatively, users SHALL pay for uploading data from a
          relational database to a cloud platform; downloading data
          from the cloud is free;

   2.  Two-way payment mode

       In the framework, users SHALL pay a fee for data migration in
       both directions between the relational database and the cloud
       platform;

3.26.  Web Shell for Migration

   The framework provides the following character-interface shells,
   operated through web access.

3.26.1.  Linux Web Shell

   The framework SHALL support a Linux shell through web access, which
   allows users to perform basic Linux commands for the configuration
   management of the migrated data on the web.

3.26.2.  HBase Shell

   The framework SHALL support the hbase shell through web access,
   which allows users to perform basic operations such as adding,
   deleting, and updating the data migrated to hbase through the web
   shell.

3.26.3.  Hive Shell

   The framework SHALL support the hive shell through web access, which
   allows users to perform basic operations such as adding, deleting,
   and updating the data migrated to hive through the web shell.

3.26.4.  Hadoop Shell

   The framework SHALL support the Hadoop shell through web access so
   that users can perform basic Hadoop commands through the web shell.

3.26.5.  Spark Shell

   The framework SHALL support the spark shell through web access and
   provide an interactive way to analyze and process the data in the
   cloud platform.
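The user-specified column separator (Section 3.23) and record line separator (Section 3.24), together with the optional GZIP compression of Section 3.8, can be sketched as a small record serializer.  The function name and the comma/newline defaults are illustrative assumptions, not values mandated by the framework:

```python
import gzip

def serialize(records, field_sep=",", line_sep="\n", compress=False):
    """Join each record's fields with the user-specified column
    separator (Section 3.23) and the records with the record line
    separator (Section 3.24); optionally GZIP-compress the result
    (Section 3.8) before it is written to the target container."""
    text = line_sep.join(field_sep.join(str(f) for f in rec)
                         for rec in records)
    data = text.encode("utf-8")
    return gzip.compress(data) if compress else data
```

For example, `serialize([(1, "alice"), (2, "bob")], field_sep="\t")` yields tab-separated, newline-delimited output suitable for a TEXTFILE target.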
3.26.6.  Spark Shell Programming Language

   In the spark web shell, the framework SHALL support at least one of
   the following programming languages:

   o  Scala
   o  Java
   o  Python

4.  Security Considerations

   The framework SHOULD support securing the data migration process.
   During the data migration process, it should support encrypting the
   data before transmission and then decrypting it for storage in the
   target after the transfer is complete.  At the same time, it must
   support authentication when getting data from the migration source,
   and it shall support the verification of identity and permission
   when accessing the target platform.

5.  IANA Considerations

   This memo includes no request to IANA.

6.  References

6.1.  Normative References

   [RFC2026]  Bradner, S., "The Internet Standards Process -- Revision
              3", BCP 9, RFC 2026, DOI 10.17487/RFC2026, October 1996,
              <https://www.rfc-editor.org/info/rfc2026>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2578]  McCloghrie, K., Ed., Perkins, D., Ed., and J.
              Schoenwaelder, Ed., "Structure of Management Information
              Version 2 (SMIv2)", STD 58, RFC 2578,
              DOI 10.17487/RFC2578, April 1999,
              <https://www.rfc-editor.org/info/rfc2578>.

6.2.  Informative References

   [RFC2629]  Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
              DOI 10.17487/RFC2629, June 1999,
              <https://www.rfc-editor.org/info/rfc2629>.

   [RFC4710]  Siddiqui, A., Romascanu, D., and E. Golovinsky, "Real-
              time Application Quality-of-Service Monitoring (RAQMON)
              Framework", RFC 4710, DOI 10.17487/RFC4710, October 2006,
              <https://www.rfc-editor.org/info/rfc4710>.

   [RFC5694]  Camarillo, G., Ed. and IAB, "Peer-to-Peer (P2P)
              Architecture: Definition, Taxonomies, Examples, and
              Applicability", RFC 5694, DOI 10.17487/RFC5694,
              November 2009, <https://www.rfc-editor.org/info/rfc5694>.

6.3.  URL References

   [hadoop]   The Apache Software Foundation, "http://hadoop.apache.org/".

   [hbase]    The Apache Software Foundation, "http://hbase.apache.org/".

   [hive]     The Apache Software Foundation, "http://hive.apache.org/".

   [idguidelines]
              IETF Internet Drafts editor,
              "http://www.ietf.org/ietf/1id-guidelines.txt".

   [idnits]   IETF Internet Drafts editor,
              "http://www.ietf.org/ID-Checklist.html".

   [ietf]     IETF Tools Team, "http://tools.ietf.org".

   [ops]      The IETF OPS Area, "http://www.ops.ietf.org".

   [spark]    The Apache Software Foundation, "http://spark.apache.org/".

   [sqoop]    The Apache Software Foundation, "http://sqoop.apache.org/".

   [xml2rfc]  XML2RFC tools and documentation, "http://xml.resource.org".

Authors' Addresses

   Can Yang (editor)
   South China University of Technology
   382 Zhonghuan Road East, Guangzhou Higher Education Mega Centre
   Guangzhou, Panyu District
   P.R. China

   Phone: +86 18602029601
   Email: cscyang@scut.edu.cn

   Yu Liu (editor)
   South China University of Technology
   382 Zhonghuan Road East, Guangzhou Higher Education Mega Centre
   Guangzhou, Panyu District
   P.R. China

   Email: 201621032214@scut.edu.cn

   Cong Chen
   Inspur
   163 Pingyun Road
   Guangzhou, Tianhe District
   P.R. China

   Email: chen_cong@inspur.com

   Ge Chen
   GSTA
   No. 109 Zhongshan Road West, Guangdong Telecom Technology Building
   Guangzhou, Tianhe District
   P.R. China

   Email: cheng@gsta.com

   Yukai Wei
   Huawei
   Putian Huawei Base
   Shenzhen, Longgang District
   P.R. China

   Email: weiyukai@huawei.com