Robust and Adaptive Sequential Submodular Optimization
Emerging applications in control, estimation, and machine learning, from target tracking to decentralized model fitting, impose resource constraints that limit which of the available sensors, actuators, or data can be used simultaneously across time. Many researchers have therefore posed these problems within discrete optimization frameworks, where the optimization is performed over finite sets. By exploiting notions of discrete convexity, such as submodularity, they have obtained scalable algorithms with provable suboptimality bounds. In this paper, we consider such problems in adversarial environments, where at every step some of the chosen elements are removed due to failures or attacks. Specifically, we consider for the first time a sequential version of the problem, in which we observe the failures and adapt, while the attacker in turn adapts to our response. We call this novel problem Robust Sequential Submodular Maximization (RSM). In general, the problem is computationally hard, and no scalable algorithm was previously known for its solution. We propose Robust and Adaptive Maximization (RAM), the first scalable algorithm for RSM. RAM runs in an online fashion, adapting at every step to the history of failures, and it guarantees near-optimal performance even against any number of failures among the chosen elements. In particular, RAM admits provable per-instance a priori bounds, as well as tight, and in some cases optimal, a posteriori bounds. Finally, we demonstrate RAM's near-optimality in simulations across several application scenarios, along with its robustness to failure types ranging from worst-case to random.
https://arxiv.org/abs/1909.11783
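
To make the setting concrete, below is a minimal sketch of the sequential robust submodular maximization loop described in the abstract. It is not the paper's RAM algorithm: it pairs the standard greedy rule for monotone submodular maximization with a simple greedy adversary, and it adapts only by avoiding elements observed to fail. All names and parameters (coverage, worst_case_attack, k, alpha, n_steps, the synthetic ground sets) are illustrative assumptions.

```python
# Sketch of the RSM setting (illustrative only; NOT the paper's RAM algorithm).
# At each step we greedily pick k elements to maximize a monotone submodular
# coverage objective, an adversary removes up to alpha of them, and the next
# step adapts to the observed failure history.

import random

def coverage(chosen, ground_sets):
    """Submodular objective: number of items covered by the chosen sets."""
    covered = set()
    for e in chosen:
        covered |= ground_sets[e]
    return len(covered)

def greedy(candidates, k, objective):
    """Standard greedy for monotone submodular maximization."""
    selected = []
    for _ in range(k):
        best = max(candidates - set(selected),
                   key=lambda e: objective(selected + [e]) - objective(selected))
        selected.append(best)
    return selected

def worst_case_attack(selected, alpha, objective):
    """Greedy adversary: repeatedly remove the element whose loss hurts most."""
    removed = []
    for _ in range(min(alpha, len(selected))):
        remaining = [e for e in selected if e not in removed]
        victim = max(remaining,
                     key=lambda e: objective(remaining)
                                   - objective([x for x in remaining if x != e]))
        removed.append(victim)
    return set(removed)

random.seed(0)
universe = range(50)
ground_sets = {i: set(random.sample(universe, 8)) for i in range(20)}
obj = lambda S: coverage(S, ground_sets)

k, alpha, n_steps = 5, 2, 3
failed = set()  # failure history observed so far
for t in range(n_steps):
    candidates = set(ground_sets) - failed     # adapt: skip known failures
    selected = greedy(candidates, k, obj)
    attacked = worst_case_attack(selected, alpha, obj)
    failed |= attacked                         # observe failures, then adapt
    survivors = [e for e in selected if e not in attacked]
    print(f"step {t}: value after attack = {obj(survivors)}")
```

This naive policy carries no robustness guarantee; the abstract's point is that RAM achieves provable near-optimal performance in this observe-and-adapt loop, against any number of failures among the chosen elements.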