The goal of this paper is to provide a starting point for investigations into a largely underdeveloped area of research: the computational cost analysis of complex stochastic strategies for solving parametric inverse problems. This area has two main components: solving the global optimization problem and solving the forward problems needed to evaluate the misfit function being minimized.

For the first component, we pay particular attention to genetic algorithms with heuristics and to multi-deme algorithms that can be modeled as ergodic Markov chains. We recall a simple method for evaluating the first hitting time for a single-deme algorithm and extend it to HGS, a multi-deme hierarchic strategy, focusing on the case in which at least the demes in the leaves are well tuned. We also express the problems of finding local and global optima in terms of classical complexity theory: we formulate the natural result that finding a local optimum of a function is an NP-complete task, argue that finding a global optimum is a much harder, DP-complete, task, and argue that finding all global optima is possibly an even harder (#P-hard) task.

Regarding the second component of solving parametric inverse problems, i.e., the forward-problem solvers, we discuss the computational cost of hp-adaptive Finite Element solvers and their rates of convergence with respect to the growing number of degrees of freedom.

The presented results provide a useful taxonomy of problems and of methods for studying the computational cost and complexity of various strategies for solving inverse parametric problems. We stress, however, that our goal was not to deliver detailed evaluations of particular algorithms applied to particular inverse problems, but rather to identify possible ways of obtaining such results.
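
For orientation, the first-hitting-time quantity mentioned above can be computed directly from the transition matrix of a finite ergodic Markov chain. The sketch below is only an illustration of that standard computation, not the method developed in the paper; the transition matrix and the target set are toy placeholders. In the genetic-algorithm setting, the states would encode populations and the target set would collect the populations regarded as successful (e.g., containing an individual in the basin of the global optimum).

```python
# Minimal sketch (illustrative, not the paper's method): expected first hitting
# time of a target set for a finite Markov chain with transition matrix P.
import numpy as np

def expected_hitting_times(P: np.ndarray, target: set) -> np.ndarray:
    """Solve (I - Q) h = 1 on the non-target states, where Q is P restricted to
    transitions among non-target states; h[i] = 0 for states i in the target."""
    n = P.shape[0]
    others = [i for i in range(n) if i not in target]
    Q = P[np.ix_(others, others)]
    h_others = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    h = np.zeros(n)
    h[others] = h_others
    return h

# Toy 3-state ergodic chain; state 2 plays the role of the "success" set.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
print(expected_hitting_times(P, target={2}))  # expected number of steps to reach state 2
```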
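Concerning the convergence of hp-adaptive Finite Element solvers referred to above, the classical estimates (e.g., for two-dimensional elliptic problems with piecewise-analytic data) take an exponential form; the constants and the exponent below are problem-dependent and are quoted here only for orientation, not claimed as results of this paper:
\[
\| u - u_{hp} \| \le C \exp\!\left(-b\, N^{\theta}\right),
\]
where $N$ is the number of degrees of freedom, $C, b > 0$, and $\theta$ depends on the spatial dimension (e.g., $\theta = 1/3$ in two dimensions).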