Hong Kong Baptist University, Department of Computer Science
Henry Hong-Ning Dai is now with the Department of Computer Science, Hong Kong Baptist University as an associate professor. He obtained a Ph.D. in Computer Science and Engineering from the Department of Computer Science and Engineering at the Chinese University of Hong Kong and a D.Eng. in Computer Technology Application from the Department of Computer Science and Engineering at Shanghai Jiao Tong University. Before joining Hong Kong Baptist University, he was with the School of Computer Science and Engineering at Macau University of Science and Technology as an assistant professor/associate professor from 2010 to 2021, and with the Department of Computing and Decision Sciences, Lingnan University, Hong Kong as an associate professor from 2021 to 2022. He has published more than 200 papers in refereed journals and conferences, including Proceedings of the IEEE, IEEE Journal on Selected Areas in Communications (JSAC), IEEE Transactions on Mobile Computing (TMC), IEEE Transactions on Parallel and Distributed Systems (TPDS), IEEE Transactions on Computers (TC), IEEE Transactions on Knowledge and Data Engineering (TKDE), IEEE Transactions on Wireless Communications, the IEEE/ACM International Conference on Automated Software Engineering (ASE), the IEEE Conference on Computer Communications (INFOCOM), the AAAI Conference on Artificial Intelligence (AAAI), the ACM International Conference on Information and Knowledge Management (CIKM), the ACM Symposium on Cloud Computing (SoCC), IEEE Transactions on Neural Networks and Learning Systems (TNNLS), ACM Computing Surveys (CSUR), IEEE Communications Surveys & Tutorials, etc. His publications have received more than 13,000 citations. He has also led over 10 research projects as Principal Investigator (PI) or Co-PI, totaling HK$12M. He has won more than 15 awards. He also holds 1 U.S. patent and 1 Australian innovation patent. He is a senior member of ACM, IEEE, and EAI.
He has served as an associate editor/editor for IEEE Communications Surveys & Tutorials, IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Industrial Informatics, IEEE Transactions on Industrial Cyber-Physical Systems, Ad Hoc Networks (Elsevier), and Connection Science (Taylor & Francis). He has served as a PC member of top-tier conferences, including ASE'23, KDD'23, CIKM'19, and ICPADS'19. He has also served as general chair/PC chair of many international conferences.
Hong Kong Baptist University, Department of Computer Science
Lingnan University, Department of Computing and Decision Sciences
Macau University of Science and Technology, Faculty of Information Technology
Macau University of Science and Technology, Faculty of Information Technology
Chinese University of Hong Kong, Department of Information Engineering
Ph.D. in Computer Science and Engineering
Chinese University of Hong Kong
D.Eng. in Computer Science and Engineering
Shanghai Jiao Tong University
M.Eng. in Computer Science and Engineering
South China University of Technology
B.Eng. in Computer Science and Engineering
South China University of Technology
Henry Hong-Ning Dai has served as Principal Investigator (PI) or Co-PI of over 10 research projects totaling HK$12M.
| Role | Grant Source | Project Title | Reference No. | Duration | Amount (HKD) |
|------|--------------|---------------|---------------|----------|--------------|
| Co-PI | FDS, UGC, Hong Kong | A Study of Distributed Service-aware Wireless Cellular Networks (DS-WCNs): From User Demand Modeling to Performance Optimization | UGC/FDS16/E02/22 | 2022 - 2025 | 1,047,750 |
| Co-PI | NSFC, China | Studies of Key Technologies on Collaborative Cloud-Device-Edge in Sensor Cloud Systems | 62172046 | 2021 - 2024 | 748,684 (RMB 590,000) |
| Sub-project leader | Macao Key R&D Projects of FDCT, Macau | STEP Perpetual Learning based Collective Intelligence: Theories and Methodologies | 0025/2019/AKP | 2020 - 2023 | 6,000,000 |
| PI | FDCT, Macau | Key Technologies to Enable Ultra Dense Wireless Networks | 0026/2018/A1 | 2018 - 2021 | 1,000,000 |
| PI | FDCT, Macau | Large Scale Wireless Ad Hoc Networks: Performance Analysis and Performance Improvement | 096/2013/A3 | 2014 - 2017 | 1,356,100 |
| Co-PI | NSFC, China | Studies on Network Resilience in Self-Organized Intelligent Manufacturing Internet of Things | 61672170 | 2017 - 2020 | 799,442 (RMB 630,000) |
| PI | FDCT, Macau | Studies on Multi-channel Networks using Directional Antennas | 036/2011/A | 2012 - 2014 | 495,900 |
| Co-PI | FDCT, Macau | Idle Sense Scheme in Non-saturated Wireless LANs | 081/2012/A3 | 2013 - 2016 | 1,102,200 |
| Total awarded research grants as PI/Co-PI (2012 - present) | | | | | 12,550,076 |
Copyright and all rights therein are retained by the authors or by the respective copyright holders (including Springer-Verlag, Elsevier, ACM, IEEE Press, Wiley, etc.). You may download the papers only if you follow the copyright holders' restrictions (e.g., private or academic use). The online versions may differ slightly from the final published versions; they are posted here solely for the convenience of academic sharing.
You may refer to Henry Dai's personal website, DBLP, or Google Scholar for a more comprehensive publication list.
Terrestrial-satellite networks (TSNs) can provide worldwide users with ubiquitous and seamless network services. Meanwhile, malicious eavesdropping poses tremendous challenges to secure transmissions in TSNs due to their wide-scale wireless coverage. In this paper, we propose an aerial bridge scheme to establish secure tunnels for legitimate transmissions in TSNs. With the assistance of unmanned aerial vehicles (UAVs), massive transmission links in TSNs can be secured without impacting legitimate communications. Owing to the stereo position of UAVs and the directivity of directional antennas, the constructed secure tunnel can significantly reduce confidential information leakage, thereby preventing wiretapping. Moreover, we establish a theoretical model to evaluate the effectiveness of the aerial bridge scheme compared with the ground relay, non-protection, and UAV jammer schemes. Furthermore, we conduct extensive simulations to verify the accuracy of the theoretical analysis and present useful insights into practical deployment by revealing the relationship between the performance and other parameters, such as the antenna beamwidth, flight height, and density of UAVs.
For complicated input-output systems with nonlinearity and stochasticity, Deep State Space Models (SSMs) are effective for identifying systems in the latent state space, which is of great significance for representation, forecasting, and planning in online scenarios. However, most SSMs are designed for discrete-time sequences and are inapplicable when the observations are irregular in time. To solve this problem, we propose a novel continuous-time SSM named the Ordinary Differential Equation Recurrent State Space Model (ODE-RSSM). ODE-RSSM incorporates an ordinary differential equation (ODE) network (ODE-Net) to model the continuous-time evolution of latent states between adjacent time points. Inspired by the equivalent linear transformation on integration limits, we propose an efficient reparameterization method for solving batched ODEs with non-uniform time spans in parallel, enabling efficient training of the ODE-RSSM with irregularly sampled sequences. We also conduct extensive experiments to evaluate the proposed ODE-RSSM and the baselines on three input-output datasets, one of which is a rollout of a private industrial dataset with strong long-term delay and uncertainty. The results demonstrate that the ODE-RSSM achieves better performance than the baselines in open-loop prediction, even when the time spans of predicted points are uneven and the distribution of sequence lengths varies.
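The reparameterization over integration limits can be sketched as follows: substituting t = t0 + s(t1 - t0) maps every span onto s in [0, 1], so a whole batch of ODEs with different spans shares one step grid and can be integrated in a single vectorized loop. The toy dynamics `f` and the fixed-step Euler solver below are illustrative assumptions, not the paper's learned ODE-Net or its actual solver.

```python
import numpy as np

def f(z, t):
    # Hypothetical latent dynamics; stands in for the learned ODE-Net.
    return -0.5 * z

def batched_ode_euler(z0, t0, t1, steps=100):
    """Integrate dz/dt = f(z, t) for a batch with non-uniform spans [t0_i, t1_i].

    After the change of variable t = t0 + s * (t1 - t0), every item integrates
    over s in [0, 1] with dz/ds = (t1 - t0) * f(z, t), so one step grid serves
    the whole batch.
    """
    z = z0.copy()
    span = t1 - t0            # shape (batch,)
    ds = 1.0 / steps
    for k in range(steps):
        s = k * ds
        t = t0 + s * span
        z = z + ds * span[:, None] * f(z, t[:, None])
    return z

# Batch of 3 sequences with different time spans, solved together.
z0 = np.ones((3, 2))
t0 = np.array([0.0, 0.0, 1.0])
t1 = np.array([1.0, 2.0, 4.0])
z1 = batched_ode_euler(z0, t0, t1)
```

For the linear toy dynamics, the result can be checked against the closed form z0 * exp(-0.5 * (t1 - t0)).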
As computer programs running on top of blockchains, smart contracts have proliferated a myriad of decentralized applications while also introducing security vulnerabilities, which may cause huge financial losses. Thus, it is crucial and urgent to detect vulnerabilities in smart contracts. However, existing fuzzers for smart contracts remain inefficient at detecting sophisticated vulnerabilities that require specific vulnerable transaction sequences to trigger. To address this challenge, we propose RLF, a novel vulnerability-guided fuzzer based on reinforcement learning, for generating vulnerable transaction sequences to detect such sophisticated vulnerabilities in smart contracts. In particular, we first model the process of fuzzing smart contracts as a Markov decision process to construct our reinforcement learning framework. We then design an appropriate reward that considers both vulnerability and code coverage, so that it can effectively guide our fuzzer to generate specific transaction sequences that reveal vulnerabilities, especially vulnerabilities involving multiple functions. We conduct extensive experiments to evaluate RLF's performance. The experimental results demonstrate that RLF outperforms state-of-the-art vulnerability-detection tools (e.g., detecting 8%-69% more vulnerabilities within 30 minutes).
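A reward of the flavor described, combining vulnerability discovery with coverage gain, might look like the sketch below. The weights and exact form are illustrative assumptions, not RLF's actual formula.

```python
def fuzzing_reward(new_vulns, new_branches, total_branches,
                   w_vuln=10.0, w_cov=1.0):
    """Hypothetical reward for one fuzzed transaction sequence.

    Weighs newly revealed vulnerabilities much more heavily than branch
    coverage gain, so the agent is steered toward vulnerable sequences
    while coverage still provides a dense learning signal.
    """
    coverage_gain = new_branches / max(total_branches, 1)
    return w_vuln * new_vulns + w_cov * coverage_gain

# A sequence revealing one vulnerability (plus 5 of 200 new branches)
# dominates a sequence that only adds coverage.
r_vuln = fuzzing_reward(new_vulns=1, new_branches=5, total_branches=200)
r_cov = fuzzing_reward(new_vulns=0, new_branches=20, total_branches=200)
```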
Mobile crowdsensing (MCS) can promote data acquisition and sharing among mobile devices. Traditional MCS platforms are based on a triangular structure consisting of three roles: data requester, worker (i.e., sensory-data provider), and MCS platform. However, this centralized architecture suffers from poor reliability and difficulties in guaranteeing data quality and privacy, and even provides unfair incentives for users. In this paper, we propose a blockchain-based MCS platform, namely BlockSense, to replace the traditional triangular architecture of MCS models with a decentralized paradigm. To achieve the goal of trustworthiness of BlockSense, we present a novel consensus protocol, namely Proof-of-Data (PoD), which leverages miners to conduct useful data-quality validation work instead of “useless” hash calculation. Meanwhile, in order to preserve the privacy of the sensory data, we design a homomorphic data perturbation scheme, through which miners can verify data quality without knowing the contents of the data. We have implemented a prototype of BlockSense and conducted case studies on campus, collecting over 7,000 data samples from workers’ mobile phones. Both simulations and real-world experiments show that BlockSense can not only improve system security, preserve data privacy, and guarantee incentive fairness, but also achieve verification at least 5.6x faster than Ethereum smart contracts.
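To illustrate how a quality check can run on perturbed data: a variance-style consistency score is invariant under a common additive shift, so a verifier can score the masked readings without ever seeing the raw values. This is only an illustrative stand-in for BlockSense's homomorphic perturbation scheme, not the actual construction.

```python
import random

def perturb(readings, secret_shift):
    """Shift every reading by a worker-chosen secret constant.

    The shift hides the raw values, while any statistic based on
    deviations from the mean (e.g., variance) is left unchanged.
    """
    return [x + secret_shift for x in readings]

def quality_score(readings):
    # Lower variance -> more consistent sensor data -> higher quality.
    m = sum(readings) / len(readings)
    return sum((x - m) ** 2 for x in readings) / len(readings)

raw = [21.0, 21.4, 20.8, 21.2]   # e.g., temperature readings
masked = perturb(raw, secret_shift=random.uniform(100, 1000))
# The miner scores the masked data yet obtains the same answer as on raw data.
assert abs(quality_score(masked) - quality_score(raw)) < 1e-6
```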
Payment channel networks (PCNs) are considered a prominent solution for scaling blockchain, where users can establish payment channels and complete transactions in an off-chain manner. However, it is non-trivial to schedule transactions in PCNs, and most existing routing algorithms suffer from the following challenges: 1) one-shot optimization; 2) privacy-invasive channel probing; and 3) vulnerability to DoS attacks. To address these challenges, we propose PTRD, a privacy-aware transaction scheduling algorithm with defence against DoS attacks based on deep reinforcement learning (DRL). Specifically, incorporating both privacy preservation and long-term throughput into the optimization criteria, we formulate the transaction-scheduling problem as a Constrained Markov Decision Process. We then design PTRD, which extends off-the-shelf DRL algorithms to constrained optimization with an additional cost critic network and an adaptive Lagrangian multiplier. Moreover, considering the distributed nature of PCNs, in which each user schedules transactions independently, we develop a distributed training framework to collect the knowledge learned by each agent so as to enhance learning effectiveness. With the customized network design and the distributed training framework, PTRD achieves a good balance between maximizing throughput and minimizing privacy risks. Evaluations show that PTRD outperforms the state-of-the-art PCN routing algorithms by 2.7%–62.5% in terms of long-term throughput while satisfying privacy constraints.
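An adaptive Lagrangian multiplier of the kind mentioned above is typically updated by projected dual ascent: the multiplier grows while the measured cost exceeds its budget and decays (down to zero) otherwise. A minimal sketch, where the learning rate, cost, and budget values are illustrative assumptions:

```python
def update_multiplier(lam, avg_privacy_cost, budget, lr=0.1):
    """Projected dual ascent step for a constrained MDP.

    When the measured privacy cost exceeds the budget, lambda grows, so a
    policy objective of the form reward - lambda * cost penalizes risky
    scheduling more; the max(0, .) projection keeps the multiplier valid.
    """
    return max(0.0, lam + lr * (avg_privacy_cost - budget))

lam = 0.5
# Constraint violated: cost 0.8 > budget 0.6 -> multiplier increases to 0.52.
lam = update_multiplier(lam, avg_privacy_cost=0.8, budget=0.6)
# Constraint satisfied: cost 0.3 < budget 0.6 -> multiplier decays to 0.49.
lam = update_multiplier(lam, avg_privacy_cost=0.3, budget=0.6)
```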
Sharding has been considered a prominent approach to enhancing the limited performance of blockchain. However, most sharding systems leverage a non-cooperative design, which lowers fault tolerance due to the decreased mining power, as consensus execution is limited to each separate shard. To this end, we present Benzene, a novel sharding system that enhances performance through cooperation-based sharding while defending per-shard security. First, we establish a double-chain architecture for function decoupling. This architecture separates transaction-recording functions from consensus-execution functions, thereby enabling cross-shard cooperation during consensus execution while preserving the concurrency nature of sharding. Second, we design a cross-shard block verification mechanism leveraging the Trusted Execution Environment (TEE), via which miners can verify blocks from other shards during the cooperation process with minimized overhead. Finally, we design a voting-based consensus protocol for cross-shard cooperation. Transactions in each shard are confirmed by all shards that simultaneously cast votes, consequently achieving enhanced fault tolerance and lowering the confirmation latency. We implement Benzene and conduct both prototype experiments and large-scale simulations to evaluate its performance. Results show that Benzene achieves superior performance compared with existing sharding/non-sharding blockchain protocols. In particular, Benzene achieves linearly improved throughput with an increasing number of shards (e.g., 32,370 transactions per second with 50 shards) and maintains a lower confirmation latency than Bitcoin (with more than 50 shards). Meanwhile, Benzene maintains a fixed fault tolerance of 1/3 even as the number of shards increases.
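The voting-based confirmation can be sketched as a BFT-style threshold check: a block is confirmed when more than two-thirds of all shards vote for it, which is where a fixed 1/3 fault-tolerance bound comes from. A minimal sketch, not Benzene's exact protocol.

```python
def confirmed(votes, num_shards):
    """Confirm a block iff strictly more than 2/3 of all shards voted yes.

    With at most f < num_shards / 3 faulty shards, the honest majority can
    always reach this threshold, so fault tolerance stays at 1/3 regardless
    of how many shards the system runs.
    """
    return sum(votes) * 3 > 2 * num_shards

# 50 shards: 34 yes-votes exceed the 2/3 threshold, 33 do not.
assert confirmed([1] * 34 + [0] * 16, 50)
assert not confirmed([1] * 33 + [0] * 17, 50)
```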
Blockchain technology has gained popularity owing to the success of cryptocurrencies such as Bitcoin and Ethereum. Nonetheless, the scalability challenge largely limits its applications in many real-world scenarios. Off-chain payment channel networks (PCNs) have recently emerged as a promising solution by conducting payments through off-chain channels. However, the throughput of current PCNs does not yet meet the growing demands of large-scale systems because: 1) most PCN systems only focus on maximizing the instantaneous throughput while failing to consider network dynamics from a long-term perspective; 2) transactions are reactively routed in PCNs, in which intermediate nodes only passively forward every incoming transaction. These limitations of existing PCNs inevitably lead to channel imbalance and the failure of routing subsequent transactions. To address these challenges, we propose a novel proactive look-ahead algorithm (PLAC) that controls transaction flows from a long-term perspective and proactively prevents channel imbalance. In particular, we first conduct a measurement study on two real-world PCNs to explore their characteristics in terms of transaction distribution and topology. On that basis, we propose PLAC based on deep reinforcement learning (DRL), which directly learns the system dynamics from historical interactions of PCNs and aims at maximizing the long-term throughput. Furthermore, we develop a novel graph convolutional network-based model for PLAC, which extracts the inter-dependency between PCN nodes to consequently boost the performance. Extensive evaluations on real-world datasets show that PLAC improves state-of-the-art PCN routing schemes in terms of long-term throughput by 6.6% to 34.9%.
Multi-view clustering (MVC) aims at exploiting the consistent features within different views to divide samples into different clusters. Existing subspace-based MVC algorithms usually assume linear subspace structures and two-stage similarity-matrix construction strategies, thereby suffering from imprecise low-dimensional subspace representations and inadequate exploration of consistency. This paper presents a novel hierarchical representation method for MVC via the integration of intra-sample, intra-view, and inter-view representation learning models. In particular, we first adopt the deep autoencoder to adaptively map the original high-dimensional data into the latent low-dimensional representation of each sample. Second, we use the self-expression of the latent representation to explore the global similarity between samples of each view and obtain the subspace representation coefficients. Third, we construct a third-order tensor by arranging multiple subspace representation matrices and impose the tensor low-rank constraint to sufficiently explore the consistency among views. Incorporated into a unified framework, these three models boost each other to achieve a satisfactory clustering result. Moreover, an alternating direction method of multipliers algorithm is developed to solve the challenging optimization problem. Extensive experiments on both simulated and real-world multi-view datasets show the superiority of the proposed method over eight state-of-the-art baselines.
Tensor analysis has received widespread attention in high-dimensional data learning. Unfortunately, tensor data are often accompanied by arbitrary signal corruptions, including missing entries and sparse noise. How to recover the characteristics of the corrupted tensor data and make them compatible with the downstream clustering task remains a challenging problem. In this article, we study a generalized transformed tensor low-rank representation (TTLRR) model for simultaneously recovering and clustering corrupted tensor data. The core idea is to find the latent low-rank tensor structure from the corrupted measurements using the transformed tensor singular value decomposition (SVD). Theoretically, we prove that TTLRR can recover the clean tensor data with a high-probability guarantee under mild conditions. Furthermore, by using a transform adaptively learned from the data itself, the proposed TTLRR model can approximately represent and exploit the intrinsic subspace and precisely seek out the cluster structure of the tensor data. An effective algorithm is designed to solve the proposed model under the alternating direction method of multipliers (ADMM) algorithm framework. The effectiveness and superiority of the proposed method against the compared methods are showcased on different tasks, including video/face data recovery and face/object/scene data clustering.
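Inside such ADMM frameworks, the low-rank recovery subproblem typically reduces to singular value thresholding, the proximal operator of the nuclear norm. The matrix-case sketch below is illustrative only; TTLRR operates on tensors under a learned transform, and the data here are synthetic.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value by tau.

    This is the proximal operator of the nuclear norm and the workhorse
    step of ADMM-style low-rank recovery: small (noise) singular values
    are zeroed out, leaving a low-rank estimate.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
# Rank-2 ground truth corrupted by small dense noise.
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
X = L + 0.01 * rng.standard_normal((30, 30))
L_hat = svt(X, tau=0.5)  # the noise singular values fall below tau
```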
The graph auto-encoder is a framework for unsupervised learning on graph-structured data that represents graphs in a low-dimensional space, and it has proved very powerful for graph analytics. In the real world, complex relationships among various entities can be represented by heterogeneous graphs, which contain more abundant semantic information than homogeneous graphs. In general, graph auto-encoders based on homogeneous graphs are not applicable to heterogeneous graphs. In addition, little work has been done to evaluate the effect of different semantics on node embedding in heterogeneous graphs for unsupervised graph representation learning. In this work, we propose a novel Heterogeneous Graph Attention Auto-Encoder (HGATE) for unsupervised representation learning on heterogeneous graph-structured data. Based on the consideration of semantic information, the architecture of HGATE reconstructs not only the edges of the heterogeneous graph but also node attributes, through stacked encoder/decoder layers. Hierarchical attention is used to learn the relevance between a node and its meta-path-based neighbors, as well as the relevance among different meta-paths. HGATE is applicable to transductive learning as well as inductive learning. Node classification and link prediction experiments on real-world heterogeneous graph datasets demonstrate the effectiveness of HGATE for both transductive and inductive tasks.
This article studies the PBFT-based sharded permissioned blockchain, which executes in either a local datacenter or a rented cloud platform. In such a permissioned blockchain, the transaction (TX) assignment strategy could be malicious, such that the network shards may receive imbalanced transactions or even suffer bursty-TX injection attacks. An imbalanced transaction assignment brings serious threats to the stability of the sharded blockchain, whereas a stable sharded blockchain ensures that each shard processes the arrived transactions in a timely manner. Since system stability is closely related to blockchain throughput, how to maintain a stable sharded blockchain becomes a challenge. To depict the transaction processing in each network shard, we adopt the Lyapunov optimization framework. Exploiting the drift-plus-penalty (DPP) technique, we then propose an adaptive resource-allocation algorithm, which yields a near-optimal solution for each network shard while keeping the shard queues stable. We also rigorously analyze the theoretical bounds of both the system objective and the queue length of shards. The numerical results show that the proposed algorithm achieves a better balance between resource consumption and queue stability than other baselines. We particularly evaluate two representative cases of bursty-TX injection attacks, i.e., continued attacks against all network shards and drastic attacks against a single network shard. The evaluation results show that the DPP-based algorithm can well alleviate the imbalanced TX assignment, and simultaneously maintain high throughput while consuming fewer resources than other baselines.
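The per-slot decision rule behind drift-plus-penalty can be sketched as follows: each time slot, pick the action minimizing V * cost - Q * service, trading resource cost against queue backlog. The candidate rates and the quadratic cost function below are illustrative assumptions, not the paper's per-shard formulation.

```python
def dpp_choose_rate(queue_len, rates, cost, V):
    """Drift-plus-penalty rule for one slot.

    Minimizes V * cost(r) - Q * r over candidate service rates r: a larger
    V favors saving resources, while a longer queue Q forces faster service,
    which is exactly the stability/consumption trade-off DPP balances.
    """
    return min(rates, key=lambda r: V * cost(r) - queue_len * r)

cost = lambda r: r ** 2          # convex resource cost of serving at rate r
rates = [0, 1, 2, 3, 4]
# Short queue: serving fast is not worth the resource cost.
low = dpp_choose_rate(queue_len=2, rates=rates, cost=cost, V=1.0)
# Long queue: backlog dominates, so a high rate is chosen.
high = dpp_choose_rate(queue_len=8, rates=rates, cost=cost, V=1.0)
```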
The emergence of the infectious disease COVID-19 has challenged and changed the world in an unprecedented manner. The integration of wireless networks with edge computing (namely, wireless edge networks) brings opportunities to address this crisis. In this paper, we aim to investigate the prediction of the infectious probability and propose precautionary measures against COVID-19 with the assistance of wireless edge networks. Leveraging the availability of the recorded detention time and the density of individuals within a wireless edge network, we propose a stochastic geometry-based method to analyze the infectious probability of individuals. The proposed method preserves the privacy of individuals in the system since it does not require knowing the location or trajectory of each individual. Moreover, we consider three types of mobility models as well as the static model of individuals. Numerical results show that the analytical results match well with simulation results, thereby validating the accuracy of the proposed model; they also offer many insightful implications. Thereafter, we offer a number of countermeasures against the spread of COVID-19 based on wireless edge networks. This study lays the foundation for predicting the infectious risk in realistic environments and points out directions for mitigating the spread of infectious diseases with the aid of wireless edge networks.
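For a flavor of the stochastic-geometry machinery: if infected individuals are modeled as a homogeneous Poisson point process, the void probability gives the chance that at least one infected individual falls within a given contact radius. The density and radius values below are illustrative assumptions, and this is far simpler than the paper's full model.

```python
import math

def infection_exposure_prob(density_infected, contact_radius):
    """P(at least one infected contact) under a Poisson point process.

    The void probability of a PPP with intensity lambda over a disk of
    radius r is exp(-lambda * pi * r^2), so the exposure probability is
    its complement. Note the formula never uses individual locations.
    """
    area = math.pi * contact_radius ** 2
    return 1.0 - math.exp(-density_infected * area)

# Example: 0.001 infected individuals per square meter, 2 m contact radius.
p = infection_exposure_prob(0.001, 2.0)
```

The exposure probability grows monotonically with both the infected density and the contact radius, matching the intuition that denser crowds and closer contact raise risk.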
Network slicing is widely regarded as a promising technique to accommodate diverse services for the Industrial Internet of Things (IIoT). Smart transportation, smart energy, and smart factory/manufacturing are the three key services forming the backbone of IIoT. Network slicing management is of paramount importance in the face of IIoT services with diversified requirements, and a comprehensive survey on intelligent network slicing management is important to provide guidance for future research in this field. In this paper, we provide a thorough investigation and analysis of network slicing management in its general use cases as well as in specific IIoT services, including smart transportation, smart energy, and smart factory, and highlight the advantages and drawbacks of many existing works/surveys and of this survey in terms of a set of important criteria. In addition, we present an architecture for intelligent network slicing management for IIoT focusing on the above three IIoT services. For each service, we provide a detailed analysis of the application requirements and network slicing architecture, as well as the associated enabling technologies. Further, we present a deep understanding of network slicing orchestration and management for each service, in terms of orchestration architecture, AI-assisted management and operation, edge computing-empowered network slicing, reliability, and security. For the presented architecture for intelligent network slicing management and its application in each IIoT service, we identify the corresponding key challenges and open issues that can guide future research. To facilitate the understanding of the implementation, we provide a case study of intelligent network slicing management for integrated smart transportation, smart energy, and smart factory.
Some lessons learnt include: 1) For smart transportation, it is necessary to explicitly identify service function chains (SFCs) for specific applications along with the orchestration of underlying VNFs/PNFs for supporting such SFCs; 2) For smart energy, it is crucial to guarantee both ultra-low latency and extremely high reliability; 3) For smart factory, resource management across heterogeneous network domains is of paramount importance. We hope that this survey is useful for both researchers and engineers on the innovation and deployment of intelligent network slicing management for IIoT.
The traditional production paradigm of large batch production does not offer flexibility toward satisfying the requirements of individual customers. A new generation of smart factories is expected to support new multivariety and small-batch customized production modes. For this, artificial intelligence (AI) is enabling higher value-added manufacturing by accelerating the integration of manufacturing and information communication technologies, including computing, communication, and control. The characteristics of a customized smart factory are: self-perception, operations optimization, dynamic reconfiguration, and intelligent decision-making. The AI technologies will allow manufacturing systems to perceive the environment, adapt to the external needs, and extract the process knowledge, including business models, such as intelligent production, networked collaboration, and extended service models. This article focuses on the implementation of AI in customized manufacturing (CM). The architecture of an AI-driven customized smart factory is presented. Details of intelligent manufacturing devices, intelligent information interaction, and construction of a flexible manufacturing line are showcased. The state-of-the-art AI technologies of potential use in CM, that is, machine learning, multiagent systems, Internet of Things, big data, and cloud-edge computing, are surveyed. The AI-enabled technologies in a customized smart factory are validated with a case study of customized packaging. The experimental results have demonstrated that the AI-assisted CM offers the possibility of higher production flexibility and efficiency. Challenges and solutions related to AI in CM are also discussed.
The wide proliferation of various wireless communication systems and wireless devices has led to the arrival of the big data era in large-scale wireless networks. Big data of large-scale wireless networks has the key features of wide variety, high volume, real-time velocity, and huge value, leading to unique research challenges that differ from those of existing computing systems. In this article, we present a survey of state-of-the-art big data analytics (BDA) approaches for large-scale wireless networks. In particular, we categorize the life cycle of BDA into four consecutive stages: Data Acquisition, Data Preprocessing, Data Storage, and Data Analytics. We then present a detailed survey of the technical solutions to the challenges in BDA for large-scale wireless networks according to each stage in the life cycle of BDA. Moreover, we discuss the open research issues and outline the future directions in this promising area.
We have experienced the proliferation of diverse blockchain platforms, including cryptocurrencies as well as private blockchains. In this chapter, we present an overview of blockchain intelligence. We first briefly review blockchain and smart contract technologies. We then introduce blockchain intelligence, which is essentially an amalgamation of blockchain and artificial intelligence. In particular, we discuss the opportunities of blockchain intelligence to address the limitations of blockchain and smart contracts.
Henry Hong-Ning Dai has been teaching the following courses since he joined the Department of Computer Science at Hong Kong Baptist University. Before joining HKBU, he also taught many CS-related courses at Lingnan University (LNU), Hong Kong (from 2021 to 2022) and Macau University of Science and Technology (MUST), Macau (from 2010 to 2021).
To introduce the fundamental issues of big data management; To learn the latest techniques of data management and processing; To conduct application case studies to show how data management techniques support large-scale data processing.
To introduce the organization of digital computers, the different components and their basic principles and operations.
Dr. Henry Dai is recruiting self-motivated Ph.D. students, research assistants (RAs), and post-docs with a strong background in computer science, electronic engineering, or applied mathematics to work in fields including (but not limited to) blockchain, the Internet of Things, and big data analytics. If you are interested in joining Henry's group, please feel free to send him an email with your CV, transcripts (undergraduate and postgraduate), and publications (if any). Please read Henry's research areas and recent publications before sending your email.
Room 643, David C. Lam Building
Hong Kong Baptist University
55 Renfrew Road, Kowloon Tong, Hong Kong