How the Markov Decision Process Analyzer Helps Decision Makers

Know what action to take in a situation.

A Markov Decision Process can be used to evaluate a policy for repeatable situations. If your problem involves states that recur with known probabilities, a Markov Decision Process can determine the right action to take in each state.
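To illustrate the idea, here is a minimal value-iteration sketch for a tiny two-state MDP. The states, actions, rewards, and probabilities are made up for illustration; this is not SpiceLogic's internal algorithm, just the standard technique for finding the best action per state.

```python
# Illustrative value iteration for a made-up 2-state MDP.
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {
    "Healthy": {"Exercise": [("Healthy", 0.9), ("Sick", 0.1)],
                "Rest":     [("Healthy", 0.7), ("Sick", 0.3)]},
    "Sick":    {"Exercise": [("Healthy", 0.4), ("Sick", 0.6)],
                "Rest":     [("Healthy", 0.6), ("Sick", 0.4)]},
}
R = {
    "Healthy": {"Exercise": 5.0, "Rest": 4.0},
    "Sick":    {"Exercise": -1.0, "Rest": 0.0},
}
gamma = 0.9                       # discount factor
V = {s: 0.0 for s in P}           # value estimate for each state

for _ in range(500):              # iterate until practically converged
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in P[s])
         for s in P}

# The policy picks, in each state, the action with the best expected value.
policy = {s: max(P[s], key=lambda a: R[s][a] + gamma *
                 sum(p * V[t] for t, p in P[s][a]))
          for s in P}
```

The resulting `policy` dictionary maps each state to its recommended action, which is exactly the kind of answer the analyzer's evaluated policy gives you.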

Model a Markov Decision Process with the modern, intuitive wizard

The Markov Decision Process software from SpiceLogic offers a rich modeling experience. It starts with a wizard that captures all the information needed to create the model.

Once the wizard completes data collection, it opens the sophisticated Markov Decision Model graphical user interface, where you can fine-tune and optimize the model. The same UI displays the evaluated policy.

Use rich objective attributes to define your reward in a state.

SpiceLogic Markov Decision Process offers a rich wizard for identifying the objectives that determine your reward in a state. A state's reward can be modeled with multiple criteria attributes.

Prioritize and quantify your objectives using the AHP algorithm

Using Analytic Hierarchy Process priority algorithms, your multiple objectives can be prioritized and quantified through pairwise comparisons. The transitivity rule can even be enforced, and the consistency ratio is calculated and displayed as you make the comparisons.
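As a sketch of the underlying math, the snippet below derives priority weights and the consistency ratio from a pairwise-comparison matrix using the geometric-mean approximation. The three objectives and all judgment values are hypothetical, not taken from the product.

```python
import math

# Hypothetical pairwise comparisons for three objectives (Cost, Comfort,
# Health). A[i][j] says how much more important objective i is than j.
A = [[1,   3,   1/5],
     [1/3, 1,   1/7],
     [5,   7,   1  ]]
n = len(A)

# Approximate the principal eigenvector with the geometric-mean method.
geo = [math.prod(row) ** (1 / n) for row in A]
w = [g / sum(geo) for g in geo]            # priority weights; they sum to 1

# lambda_max estimates the principal eigenvalue; the consistency index (CI)
# and consistency ratio (CR) measure how far the judgments deviate from
# perfect transitivity. RI is Saaty's random index for a 3x3 matrix.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lambda_max - n) / (n - 1)
RI = 0.58
CR = CI / RI                               # CR < 0.1 is conventionally acceptable
```

A consistency ratio below 0.1 conventionally means the pairwise judgments are acceptably consistent; a higher value signals contradictory comparisons that should be revisited.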

Model your Reward with Utility Function

The state reward can also be modeled with a sophisticated utility function.

Use our Gameplay wizard to elicit the Utility Function

If you are not sure about your utility function, play the built-in Certainty Equivalent-based game and let the application infer it for you. The inference logic is grounded in behavioral economics.
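The core idea behind certainty-equivalent elicitation can be sketched as follows. This minimal example assumes an exponential utility u(x) = 1 - exp(-x/r) and fits the risk tolerance r to a single hypothetical answer; the actual game asks a series of such questions, and the functional form here is an assumption for illustration.

```python
import math

# A 50/50 gamble between two outcomes, and a hypothetical user answer:
# "$40 for sure feels the same as the gamble" (ce < 50 means risk-averse).
low, high = 0.0, 100.0
ce = 40.0

def gap(r):
    # Difference between the utility of the certain amount and the
    # expected utility of the gamble; zero when r explains the answer.
    u = lambda x: 1 - math.exp(-x / r)
    return u(ce) - 0.5 * (u(low) + u(high))

# Because ce < 50, gap changes sign on this bracket, so bisection
# converges to the implied risk tolerance.
a, b = 1e-3, 1e6
for _ in range(200):
    m = (a + b) / 2
    if gap(a) * gap(m) <= 0:
        b = m
    else:
        a = m
risk_tolerance = (a + b) / 2
```

Once r is known, the fitted utility function can score any uncertain reward, which is what makes the elicited answers usable inside the decision model.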

Use Probability Distribution to model Uncertain reward in a state.

A reward in a state can be uncertain. The uncertainty can be modeled with a probability distribution.
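For instance, an uncertain reward might be modeled as a distribution and summarized by its expected value. The triangular distribution and its parameters below are made up for illustration.

```python
import random

# Hypothetical uncertain state reward: a yearly cost between 2 and 8,
# most likely 3, modeled as a triangular distribution and summarized
# by a Monte Carlo estimate of its expected value.
random.seed(0)
samples = [random.triangular(2.0, 8.0, 3.0) for _ in range(100_000)]
expected_reward = sum(samples) / len(samples)
# Analytically, the mean of triangular(2, 8, 3) is (2 + 8 + 3) / 3, about 4.33.
```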

Use the Markov Chain to forecast the future.

When the Markov Decision Process generates a policy, a Markov chain based on the recommended actions is generated as well. This Markov chain can be used to predict the state of the system after a given number of iterations.
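Mechanically, such a forecast multiplies the current state distribution by the transition matrix once per iteration. The states and probabilities below are made up for illustration.

```python
# Forecasting with a Markov chain: T[from_state][to_state] is the
# one-step transition probability (numbers are hypothetical).
T = {"Healthy": {"Healthy": 0.9, "Sick": 0.1},
     "Sick":    {"Healthy": 0.6, "Sick": 0.4}}

dist = {"Healthy": 1.0, "Sick": 0.0}       # start surely in Healthy
for _ in range(10):                        # ten iterations into the future
    dist = {s: sum(dist[t] * T[t][s] for t in T) for s in T}
# dist is now the probability of being in each state after 10 steps;
# for this chain it is already close to the steady state (Healthy near 6/7).
```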

Use custom equation in a Markov Chain to predict a custom state

You can even write custom expressions using the state names to predict a custom state of your Markov chain.
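The gist of such a custom state can be sketched as evaluating a formula over the named state probabilities. The state names, weights, and expression syntax below are hypothetical, not the product's actual syntax.

```python
# A predicted state distribution (made-up states and probabilities).
dist = {"Healthy": 0.7, "Sick": 0.2, "Hospitalized": 0.1}

# A custom expression written in terms of the state names, evaluated
# against the distribution (builtins disabled for a safer eval).
expr = "Sick + 2 * Hospitalized"
needs_care = eval(expr, {"__builtins__": {}}, dist)
```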

A plethora of charts on Markov Chain and Markov Decision Process

A wide variety of charts is available, giving you different perspectives on the Markov decision process and its generated Markov chain.

A Part of our Rational Will software

This Markov Decision Process software is also available in our composite (bundled) product "Rational Will", which offers a streamlined user experience across many decision modeling tools. Therefore, if you get Rational Will, you will not need to acquire this software separately.

Supported Operating Systems

This software is made for Windows machines. You can install it on any Windows operating system that has Microsoft .NET Framework 4.5 or higher. If you are a Mac user, you can still run it on macOS using virtualization software such as Parallels Desktop.