Machine Intelligence at the Edge With Learning Centric Power Allocation
@article{Wang2019MachineIA,
  title   = {Machine Intelligence at the Edge With Learning Centric Power Allocation},
  author  = {Shuai Wang and Yik-Chung Wu and Minghua Xia and Rui Wang and H. Vincent Poor},
  journal = {IEEE Transactions on Wireless Communications},
  year    = {2019},
  volume  = {19},
  pages   = {7293--7308}
}
While machine-type communication (MTC) devices generate considerable amounts of data, they often cannot process the data due to limited energy and computational power. To empower MTC with intelligence, edge machine learning has been proposed. However, power allocation in this paradigm requires maximizing the learning performance instead of the communication throughput, for which the celebrated water-filling and max-min fairness algorithms become inefficient. To this end, this paper proposes…
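For context, the throughput-oriented water-filling baseline that the abstract contrasts with can be sketched as follows; the function name, channel gains, and bisection loop are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def water_filling(gains, p_total, iters=100):
    """Throughput-optimal power allocation: p_k = max(0, mu - 1/g_k),
    with the water level mu found by bisection so that sum(p_k) = p_total."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + inv.max()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

gains = np.array([2.0, 1.0, 0.25])
p = water_filling(gains, p_total=3.0)  # weakest channel gets no power
```

Note how the weakest channel is starved of power: this is exactly the behavior that becomes problematic when the objective is learning performance rather than throughput.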
32 Citations
Learning Centric Power Allocation for Edge Intelligence
- Computer Science · ICC 2020 - 2020 IEEE International Conference on Communications (ICC)
- 2020
This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification error model, and an asymptotically optimal solution is derived.
Learning Centric Wireless Resource Allocation for Edge Computing: Algorithm and Experiment
- Computer Science · IEEE Transactions on Vehicular Technology
- 2021
This paper proposes the learning centric wireless resource allocation (LCWRA) scheme that maximizes the worst learning performance of multiple tasks and shows that the optimal transmission time has an inverse power relationship with respect to the generalization error.
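The inverse power relationship mentioned above corresponds to generalization-error models of the form error(v) ≈ a·v^(−b) in the training sample size v, which are linear in log-log space; a minimal least-squares fit of such a model (all coefficients and sample sizes illustrative, not from the paper) might look like:

```python
import numpy as np

def fit_power_law(v, err):
    """Fit error(v) = a * v**(-b) by linear least squares on
    log(err) = log(a) - b * log(v)."""
    A = np.vstack([np.ones_like(v), -np.log(v)]).T
    coef, *_ = np.linalg.lstsq(A, np.log(err), rcond=None)
    log_a, b = coef
    return np.exp(log_a), b

v = np.array([100.0, 200.0, 400.0, 800.0])   # sample sizes
a_true, b_true = 2.0, 0.5                    # synthetic ground truth
a, b = fit_power_law(v, a_true * v ** (-b_true))
```

Under such a model, allocating more transmission time (hence more samples) to a task yields diminishing error reductions, which is what drives the inverse-power structure of the optimal transmission time.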
System Delay Minimization for NOMA-Based Cognitive Mobile Edge Computing
- Computer Science · IEEE Access
- 2020
This paper first employs non-orthogonal multiple access (NOMA) in a cognitive radio (CR) based mobile edge computing (MEC) network to reduce the system delay, and proposes a low-complexity algorithm to obtain the sub-optimal solution.
Edge Federated Learning via Unit-Modulus Over-The-Air Computation
- Computer Science · IEEE Transactions on Communications
- 2022
Simulation results show that the proposed UMAirComp framework with the PAM algorithm achieves a smaller mean square error in model-parameter estimation, lower training loss, and lower test error than benchmark schemes, and reduces the computational complexity by orders of magnitude compared with existing optimization algorithms.
Reconfigurable Intelligent Surface Assisted Mobile Edge Computing With Heterogeneous Learning Tasks
- Computer Science · IEEE Transactions on Cognitive Communications and Networking
- 2021
An infrastructure to perform ML tasks at an MEC server with the assistance of a reconfigurable intelligent surface (RIS) is presented and the maximum learning error of all participating users is minimized.
Reconfigurable Intelligent Surface Assisted Edge Machine Learning
- Computer Science · ICC 2021 - IEEE International Conference on Communications
- 2021
An infrastructure to perform machine learning tasks at an MEC server with the assistance of a reconfigurable intelligent surface (RIS) is presented, and the maximum learning error of all users is minimized by jointly optimizing the beamforming vectors of the base station and the phase-shift matrix of the RIS.
Wireless for Machine Learning
- Computer Science · ArXiv
- 2020
An exhaustive review of state-of-the-art wireless methods that are specifically designed to support Machine Learning services, including over-the-air computation and radio resource allocation optimized for Machine Learning.
Edge Learning With Unmanned Ground Vehicle: Joint Path, Energy, and Sample Size Planning
- Computer Science · IEEE Internet of Things Journal
- 2021
This paper proposes a graph-based path-planning model, a network energy-consumption model, and a sample-size planning model that characterizes the F-measure as a function of the minority-class sample size; the resulting scheme outperforms the fixed EL and full-path EL schemes.
When Deep Reinforcement Learning Meets Federated Learning: Intelligent Multitimescale Resource Management for Multiaccess Edge Computing in 5G Ultradense Network
- Computer Science · IEEE Internet of Things Journal
- 2021
An intelligent UDEC (I-UDEC) framework is proposed, which integrates blockchain and artificial intelligence (AI) into 5G UDEC networks, and a novel two-timescale deep reinforcement learning (2Ts-DRL) approach is designed, consisting of a fast-timescale and a slow-timescale learning process, to minimize the total offloading delay and network resource usage.
Sum Rate Maximization of Secure NOMA Transmission with Imperfect CSI
- Computer Science · ICC 2020 - 2020 IEEE International Conference on Communications (ICC)
- 2020
By leveraging the first-order and log-concavity properties of the Marcum Q-function, the maximum sum rate of the secure NOMA transmission scheme is efficiently obtained, and results validate the strength of this newly established scheme when compared with the conventional orthogonal multiple access scheme.
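As background, the Marcum Q-function appearing in that analysis can be evaluated numerically through its standard identity with the noncentral chi-square survival function; a hedged sketch (the helper name is ours, not from the paper):

```python
from scipy.stats import ncx2

def marcum_q(m, a, b):
    """Generalized Marcum Q-function Q_m(a, b), via the standard identity
    Q_m(a, b) = P(X > b^2) for X ~ noncentral chi-square with 2m degrees
    of freedom and noncentrality parameter a^2."""
    return ncx2.sf(b ** 2, 2 * m, a ** 2)

q0 = marcum_q(1, 1.0, 0.0)   # Q_m(a, 0) = 1 by definition
q1 = marcum_q(1, 1.0, 1.0)   # decreases monotonically in b
```

The monotone decrease in b, together with the log-concavity property exploited in the paper, is what makes the resulting outage-style constraints tractable.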
References
Showing 1-10 of 65 references
Adaptive Federated Learning in Resource Constrained Edge Computing Systems
- Computer Science · IEEE Journal on Selected Areas in Communications
- 2019
This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view, and proposes a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Achieving the Maximum Sum Rate Using D.C. Programming in Cellular Networks
- Computer Science · IEEE Transactions on Signal Processing
- 2012
The results show that the proposed algorithm outperforms known conventional suboptimal schemes and asymptotically converges to a globally optimal power allocation.
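As a toy illustration of D.C. (difference-of-convex) programming, the convex-concave procedure linearizes the concave part of the objective at each iterate and solves the resulting convex surrogate; a minimal one-dimensional sketch (the objective is illustrative and unrelated to the paper's sum-rate problem):

```python
import numpy as np

def ccp(dh, x0, iters=20):
    """Convex-concave procedure for min_x g(x) - h(x) with g(x) = x^2:
    replace the concave part -h(x) by its linearization at x_k and
    minimize the convex surrogate x^2 - dh(x_k) * x in closed form."""
    x = float(x0)
    for _ in range(iters):
        x = dh(x) / 2.0   # argmin_x of the surrogate
    return x

# Example: f(x) = x^2 - 2|x|, with minima at x = +/-1 and value -1.
x_star = ccp(lambda x: 2.0 * np.sign(x), x0=0.3)
```

Each iterate is guaranteed not to increase the true objective, which is the same monotone-descent property that underlies the convergence guarantee cited above.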
Wireless Network Intelligence at the Edge
- Computer Science · Proceedings of the IEEE
- 2019
This first-of-its-kind article explores the key building blocks of edge ML, different neural network architectural splits and their inherent tradeoffs, as well as theoretical and technical enablers stemming from a wide range of mathematical disciplines.
A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks
- Computer Science · IEEE Transactions on Wireless Communications
- 2021
Simulation results show that the proposed joint federated learning and communication framework can improve the identification accuracy by up to 1.4%, 3.5%, and 4.1%, respectively, compared to an optimal user selection algorithm with random resource allocation and a wireless optimization algorithm that minimizes the sum packet error rates of all users while being agnostic to the FL parameters.
An SMDP-Based Resource Allocation in Vehicular Cloud Computing Systems
- Computer Science · IEEE Transactions on Industrial Electronics
- 2015
This paper proposes an optimal computation-resource allocation scheme that maximizes the total long-term expected reward of the VCC system, using an iterative algorithm to derive the optimal policy that specifies which action to take in each state.
Solution of the multiuser downlink beamforming problem with individual SINR constraints
- Computer Science · IEEE Transactions on Vehicular Technology
- 2004
The optimality and global convergence of the algorithm are proven and stopping criteria are given; the global optimum of the downlink beamforming problem is obtained equivalently by solving a dual uplink problem, which has an easier-to-handle analytical structure.
Multicast Wirelessly Powered Network With Large Number of Antennas via First-Order Method
- Computer Science · IEEE Transactions on Wireless Communications
- 2018
An algorithm is developed which reduces the computation time by orders of magnitude, while still guaranteeing the same performance compared with the difference of convex programming, and is guaranteed to obtain a Karush–Kuhn–Tucker solution.
Majorization-Minimization Algorithms in Signal Processing, Communications, and Machine Learning
- Computer Science · IEEE Transactions on Signal Processing
- 2017
An overview of the majorization-minimization (MM) algorithmic framework, which can provide guidance in deriving problem-driven algorithms with low computational cost and is illustrated with a wide range of applications in signal processing, communications, and machine learning.
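A minimal instance of the MM framework surveyed there: majorizing each absolute-value term by a quadratic at the current iterate turns the sample-median problem into a sequence of weighted least-squares steps (data, starting point, and iteration count are illustrative):

```python
import numpy as np

def mm_median(data, x0, iters=50, eps=1e-9):
    """Majorization-minimization for min_x sum_i |x - a_i|.
    At iterate x_k, each |x - a_i| is majorized by the quadratic
    (x - a_i)^2 / (2|x_k - a_i|) + |x_k - a_i| / 2; minimizing the
    surrogate gives a reweighted mean -- one MM step per iteration."""
    x = float(x0)
    for _ in range(iters):
        w = 1.0 / (np.abs(x - data) + eps)   # weights from the majorizer
        x = np.sum(w * data) / np.sum(w)     # closed-form surrogate minimum
    return x

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
x_star = mm_median(data, x0=data.mean())     # converges to the median, 3.0
```

The surrogate touches the objective at the current iterate and lies above it everywhere else, so each step is guaranteed not to increase the objective, which is the defining property of MM.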
Communication-Efficient Learning of Deep Networks from Decentralized Data
- Computer Science · AISTATS
- 2017
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
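The iterative model averaging at the core of federated learning reduces, per round, to a data-size-weighted mean of the clients' local models; a minimal one-round sketch (toy 1-D "models", not the paper's full training loop):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One round of federated averaging: the server replaces the global
    model with the data-size-weighted mean of the clients' local models."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Two clients; client 0 holds three times as much data as client 1.
w_global = fedavg([np.array([1.0, 0.0]), np.array([5.0, 4.0])], [3, 1])
# → [2.0, 1.0]
```

Weighting by local dataset size keeps the aggregate an unbiased estimate of the update that centralized training on the pooled data would produce, under the usual i.i.d. assumption.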
Massive Connectivity With Massive MIMO—Part I: Device Activity Detection and Channel Estimation
- Computer Science · IEEE Transactions on Signal Processing
- 2018
It is shown that in the asymptotic massive multiple-input multiple-output regime, both the missed device detection and the false alarm probabilities for activity detection can always be made to go to zero by utilizing compressed sensing techniques that exploit sparsity in the user activity pattern.