Shashank Pathak
Safety, Artificial Intelligence, Robotics

I am primarily interested in the safety of autonomous systems, especially robots. Safety is a very general term: it can capture the notion of never reaching a bad state on one hand, and of certified behaviour of the system under all conditions on the other. I firmly believe that no matter how much autonomy our systems gain, they must become demonstrably safe in order to be acceptable for public use. Ensuring such safety for systems that adapt -- via learning or planning -- is a challenging task that enthuses me to no end!

Though I have in the past looked into control-theoretic as well as algorithmic approaches to safety, I am more inclined towards formal approaches, such as formal verification in general and probabilistic model checking in particular. PRISM and STORM are two tools for model checking properties expressed in Probabilistic Computation Tree Logic (PCTL). An intuitive example of a PCTL query is:

Is the probability that a boolean proposition $$\Phi_1$$ remains true until, within $$k$$ steps, another boolean proposition $$\Phi_2$$ holds, less than a given threshold $$\sigma$$?

Such a property can be expressed as:

$$\mathbb{P}_{\;\bowtie\; \sigma}(\Phi_1\;\; U^{< k}\;\; \Phi_2)$$

where $$\bowtie$$ is a placeholder for a comparison operator ($$<, \leq, >, \geq$$); in the question above it would be $$<$$.
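As a concrete illustration, the probability of a bounded-until property can be computed on a small discrete-time Markov chain by a backward recursion over the horizon; this is, in essence, what model checkers like PRISM and STORM automate at scale. The 3-state transition matrix and the state labellings below are invented purely for this sketch:

```python
import numpy as np

# Hypothetical 3-state Markov chain: P[s, s'] is the transition probability
P = np.array([
    [0.5, 0.4, 0.1],
    [0.0, 0.7, 0.3],
    [0.0, 0.0, 1.0],
])
phi1 = np.array([True, True, False])   # states where Phi_1 holds
phi2 = np.array([False, False, True])  # states where Phi_2 holds (goal)

def bounded_until(P, phi1, phi2, k):
    """Probability of (Phi_1 U^{<k} Phi_2) from each state, by backward recursion."""
    p = phi2.astype(float)             # base case: Phi_2 already satisfied
    for _ in range(k - 1):
        # Phi_2 states succeed with prob. 1; Phi_1 states take one step;
        # all other states fail (probability 0).
        p = np.where(phi2, 1.0, np.where(phi1, P @ p, 0.0))
    return p

probs = bounded_until(P, phi1, phi2, k=5)
# The property P_{< sigma}(Phi_1 U^{<5} Phi_2) holds in state s iff probs[s] < sigma
```

The recursion mirrors the semantics directly: a $$\Phi_2$$-state satisfies the path formula immediately, a $$\Phi_1$$-state defers to its successors with one step fewer remaining, and any other state fails.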

One of my current research interests is to apply formal approaches to real robotic applications, where real implies that the problem has not been abstracted to such a high level that a discrete grid-world suffices to represent everything relevant about the system. More precisely, I am interested in domains that are challenging on one hand -- such as partial observability, which often renders standard methods unscalable -- and yet have useful structure that allows for efficient solutions -- such as SLAM approaches to path-planning under high uncertainty, which exploit motion and observation models.

A very high-level description of Belief Space Planning would be that it solves for a locally optimal solution to a given POMDP, i.e., Partially Observable Markov Decision Process. More precisely, if $$\{x,u,z\}$$ denote the unobservable state, the control action and the observation, respectively, then at any planning step $$k$$, belief space planning involves non-myopic planning of the control action $$u_{k+1}$$ using the posterior distribution of the robot poses $$X_k$$ given the observations up to $$z_k$$, where $$X_k \doteq \{ x_0, x_1 , \dots x_k\}$$ is the vector of all poses up to step $$k$$. Here, the motion and observation models are of the forms:

$$x_{i+1} = f(x_i,u_i) + w_i, \; \; w_i \sim {\cal N}(0, \Sigma_w)\\ z_i = h(x_i, A_i) + v_i, \; \; v_i \sim {\cal N}(0, \Sigma_v)$$

where $$\Sigma_w$$ and $$\Sigma_v$$ are the process noise covariance and the observation noise covariance, respectively, while $$A_i$$ denotes the scene captured by the observation $$z_i$$. These models can be written probabilistically as $$\mathbb{P}(x_{i+1}| x_i, u_i)$$ and $$\mathbb{P}(z_i | x_i, A_i)$$. Given a prior distribution $$\mathbb{P}(x_0)$$ and these models, the joint probability distribution of $$X_k$$ at any time $$k$$ is:

$$\mathbb{P}(X_k | {\cal H}) = \mathbb{P}(x_0) \prod_{i=1}^{k} \mathbb{P}(x_i|x_{i-1},u_{i-1}) \mathbb{P}(z_i | x_i, A_i)$$

where $${\cal H}$$ stands for the history of all observations $$\{ z_1, z_2 , \dots z_k\}$$ and control actions $$\{ u_0, u_1 , \dots u_{k-1} \}$$.
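A minimal simulation of these two models makes the generative structure concrete. The sketch below assumes, purely for illustration, a 2-D single-integrator robot with $$f(x,u) = x + u$$ and direct pose observation $$h(x) = x$$; the scene term $$A_i$$ is dropped, and the noise covariances are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: f(x, u) = x + u, h(x) = x, isotropic noise
Sigma_w = 0.01 * np.eye(2)   # process noise covariance
Sigma_v = 0.05 * np.eye(2)   # observation noise covariance

def motion_model(x, u):
    """Sample x_{i+1} ~ P(x_{i+1} | x_i, u_i) = N(f(x_i, u_i), Sigma_w)."""
    return x + u + rng.multivariate_normal(np.zeros(2), Sigma_w)

def observation_model(x):
    """Sample z_i ~ P(z_i | x_i) = N(h(x_i), Sigma_v); the scene A_i is omitted."""
    return x + rng.multivariate_normal(np.zeros(2), Sigma_v)

x = np.zeros(2)                        # prior mean of x_0
controls = [np.array([1.0, 0.0])] * 5  # drive 5 steps along the first axis
trajectory, observations = [x], []
for u in controls:
    x = motion_model(x, u)
    trajectory.append(x)
    observations.append(observation_model(x))
# trajectory now holds X_5 = {x_0, ..., x_5}; observations holds {z_1, ..., z_5}
```

Rolling the loop forward samples exactly the factorisation of $$\mathbb{P}(X_k | {\cal H})$$ above: one prior term, then one motion term and one observation term per step.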

Under the Gaussian noise models mentioned before, the joint PDF of $$X_k$$ turns out to be a Gaussian itself, of the form $${\cal N}(X_k; \hat X_k, \hat\Sigma)$$, whose mean $$\hat X_k$$ and covariance $$\hat \Sigma$$ can be estimated efficiently via maximum a posteriori (MAP) estimation.

A question then is:

Can the same assumptions be utilised to construct a formal model, possibly a sub-class of POMDPs, for which concrete formal guarantees can be made about the underlying system?

last update: Mon Mar 06 2017