Optimal Continuous-State POMDP Planning with Semantic Observations
This work develops novel strategies for optimal planning with semantic observations using continuous-state Partially Observable Markov Decision Processes (CPOMDPs). We propose two major innovations to Gaussian mixture (GM) CPOMDP policy approximation methods. While these state-of-the-art methods have many theoretically desirable properties, they are hampered by an inability to efficiently represent and reason over hybrid continuous-discrete probabilistic models. The first major innovation is the derivation of closed-form variational Bayes (VB) GM approximations of point-based value iteration (PBVI) Bellman policy backups, using softmax models of continuous-discrete semantic observation probabilities. The second major innovation is a new clustering-based technique for mixture condensation that scales well to very large GM policy and belief functions. Simulation results for a target search and interception task with binary semantic observations show that the GM policies resulting from these innovations are more effective than those produced by other state-of-the-art GM approximations, while requiring significantly less modeling overhead and lower runtime cost.
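To make the abstract's notion of a "softmax model of semantic observation probabilities" concrete, the sketch below shows a minimal example of such a likelihood: the probability of each discrete semantic observation (e.g. "detected" vs. "not detected") given a continuous state is a softmax over linear features of that state. The function name, weight matrix, and state values here are hypothetical illustrations, not the paper's actual parameterization.

```python
import numpy as np

def softmax_obs_likelihood(x, weights, biases):
    """Return P(o = j | x) for each semantic observation class j.

    The likelihood is a softmax over linear functions of the
    continuous state x, giving a hybrid continuous-discrete model.
    """
    logits = weights @ x + biases
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Hypothetical binary semantic observation over a 2-D state:
# class 0 = "detected", class 1 = "not detected".
W = np.array([[1.0, 0.0],
              [-1.0, 0.0]])
b = np.array([0.0, 0.0])

probs = softmax_obs_likelihood(np.array([2.0, 0.5]), W, b)
# probs is a valid distribution; "detected" dominates for this state
```

Because softmax likelihoods have no closed-form product with Gaussian mixtures, methods like the paper's VB approximation are needed to keep Bellman backups in GM form.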