Markov Decision Process

Version 1.0

Model and analyze Markov Chains and Markov Decision Processes.

[Video demonstration: Markov Decision Process]

A policy maker for your repeatable situations.

Know what action to take in which situation.

A Markov Decision Process can be used to evaluate a policy for repeatable situations. If your system moves among a set of states with known transition probabilities, a Markov Decision Process can determine the best action to take in each state.

[Screenshot: Markov Decision Graph]
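To make the idea concrete, here is a minimal value-iteration sketch in Python (an illustration of the general technique, not SpiceLogic's implementation); the states, actions, probabilities, and rewards are all hypothetical:

```python
# Minimal value-iteration sketch for a 2-state MDP (hypothetical example).
# transitions[action][state] -> list of (next_state, probability).
transitions = {
    "exercise": {
        "Healthy": [("Healthy", 0.9), ("Sick", 0.1)],
        "Sick":    [("Healthy", 0.6), ("Sick", 0.4)],
    },
    "rest": {
        "Healthy": [("Healthy", 0.7), ("Sick", 0.3)],
        "Sick":    [("Healthy", 0.3), ("Sick", 0.7)],
    },
}
reward = {"Healthy": 10.0, "Sick": -5.0}   # reward received in each state
gamma = 0.9                                # discount factor

states = list(reward)
V = {s: 0.0 for s in states}               # state values, improved iteratively

for _ in range(200):                       # value iteration
    V = {
        s: reward[s] + gamma * max(
            sum(p * V[s2] for s2, p in transitions[a][s])
            for a in transitions
        )
        for s in states
    }

# The policy picks, in every state, the action with the best expected value.
policy = {
    s: max(transitions, key=lambda a: sum(p * V[s2] for s2, p in transitions[a][s]))
    for s in states
}
print(policy)   # e.g. {'Healthy': 'exercise', 'Sick': 'exercise'}
```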

Model an MDP with a modern, intuitive wizard

Markov Decision Process from SpiceLogic offers a rich modeling application. It starts with a wizard that captures all the information needed to create the model.

[Screenshot: Markov Decision Process transition probabilities]

Once the wizard completes the data collection, it opens the full Markov Decision Model graphical user interface, where you can fine-tune and optimize the model. The same UI displays the evaluated policy, as shown here.

[Screenshot: Markov Decision Process]

Use rich objective attributes to define the reward of a state.

SpiceLogic Markov Decision Process offers a rich wizard for identifying the objectives that define the reward of a state. A reward can be modeled with multiple criteria.

[Screenshot: Objective identifier]

Then use multiple-criteria attributes to model the reward of a state.

[Screenshot: State reward]
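As a rough illustration (hypothetical criteria and weights, not the app's internal representation), a multi-criteria reward can be reduced to a single number by a weighted sum:

```python
# Hypothetical multi-criteria reward for a state: each criterion gets a
# score and a weight; the state reward is the weighted sum (a common
# multi-criteria aggregation, shown here only as an illustration).
criteria_weights = {"Cost": 0.5, "Comfort": 0.3, "Time": 0.2}

def state_reward(scores, weights=criteria_weights):
    """Combine per-criterion scores into one scalar reward."""
    return sum(weights[c] * scores[c] for c in weights)

print(state_reward({"Cost": -100.0, "Comfort": 8.0, "Time": -2.0}))  # -48.0
```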

Prioritize and quantify your objectives using AHP algorithms

Using Analytic Hierarchy Process (AHP) priority algorithms, your objectives can be prioritized and quantified through pairwise comparisons. The transitivity rule can even be enforced, and the consistency ratio is calculated and displayed as you perform the comparisons.

[Screenshot: Objective trade-off]
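For intuition, here is a small Python sketch of the standard AHP calculation: priority weights from a pairwise comparison matrix, plus the consistency ratio. The matrix values are hypothetical, and this illustrates the general method rather than SpiceLogic's exact code:

```python
import math

# Pairwise comparison matrix for three hypothetical objectives, using
# Saaty's 1-9 scale. A[i][j] = how much objective i is preferred over j.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
n = len(A)

# Priorities via the geometric-mean method: normalize each row's
# geometric mean so the weights sum to 1.
gm = [math.prod(row) ** (1.0 / n) for row in A]
total = sum(gm)
w = [g / total for g in gm]

# Consistency ratio: estimate lambda_max from A.w, then CI / RI,
# where RI is Saaty's random index for matrices of size n.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
CR = CI / RI

print([round(x, 3) for x in w])  # priority weights, e.g. [0.648, 0.23, 0.122]
print(round(CR, 3))              # CR < 0.1 is conventionally acceptable
```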

Model your Reward with a Utility Function

A state reward can also be modeled with a sophisticated utility function.

[Screenshot: Utility function for reward]
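As one concrete example of such a utility function, the sketch below uses the exponential (constant-absolute-risk-aversion) form with a hypothetical risk tolerance; the actual shape you configure in the app may differ:

```python
import math

# Exponential (CARA) utility, a common risk-averse shape:
# U(x) = 1 - exp(-x / R), where R is the risk tolerance (hypothetical).
def utility(x, risk_tolerance=50.0):
    return 1.0 - math.exp(-x / risk_tolerance)

# A risk-averse decision maker values a certain 50 above a 50/50 gamble
# between 0 and 100, even though both have the same expected reward.
certain = utility(50.0)
gamble = 0.5 * utility(0.0) + 0.5 * utility(100.0)
print(round(certain, 3), round(gamble, 3))  # 0.632 > 0.432
```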

Use our Gameplay wizard to elicit the Utility Function

If you are not sure about your utility function, play the built-in certainty-equivalent game and let the application infer it. The inference logic is grounded in behavioral economics.

[Screenshot: Certainty equivalent gameplay]
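To show the idea behind certainty-equivalent elicitation, here is a hypothetical sketch: the certain amount you accept in place of a 50/50 gamble determines the risk-tolerance parameter of an assumed exponential utility. This illustrates the principle only, not the application's actual inference logic:

```python
import math

# You are asked: "what certain amount is as good as a 50/50 gamble
# between 0 and 100?" Your answer (the certainty equivalent, CE) pins
# down the risk tolerance R of an exponential utility U(x) = 1 - exp(-x/R).
def infer_risk_tolerance(ce, low=1e-3, high=1e6, iters=100):
    def ce_of(R):
        # Expected utility of the gamble, then invert U to get its CE.
        eu = 0.5 * (1 - math.exp(-0.0 / R)) + 0.5 * (1 - math.exp(-100.0 / R))
        return -R * math.log(1 - eu)
    for _ in range(iters):            # bisection: ce_of is increasing in R
        mid = (low + high) / 2
        low, high = (mid, high) if ce_of(mid) < ce else (low, mid)
    return (low + high) / 2

# Answering 40 (below the expected value of 50) implies risk aversion.
print(round(infer_risk_tolerance(ce=40.0), 1))
```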

Use a probability distribution to model an uncertain reward in a state.

A reward in a state can be uncertain; that uncertainty can be modeled with a probability distribution.

[Screenshot: Probability distribution]
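A minimal sketch of the idea, assuming a hypothetical normally distributed reward: sampling from the distribution recovers the expected reward and its spread:

```python
import random
import statistics

# Hypothetical uncertain state reward ~ Normal(mean=10, stdev=3).
# Under expected-value analysis, the state's effective reward is the mean.
random.seed(0)
samples = [random.gauss(10.0, 3.0) for _ in range(100_000)]
print(round(statistics.mean(samples), 2))   # ~10.0, the expected reward
print(round(statistics.stdev(samples), 2))  # ~3.0, the uncertainty
```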

Use the Markov chain to forecast the future.

When the Markov Decision Process generates a policy, a Markov chain induced by the recommended actions is generated as well. That Markov chain can be used to predict the state distribution after a given number of iterations.

[Screenshot: Markov chain]
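The forecast itself is just repeated multiplication of the state distribution by the transition matrix. A minimal sketch with a hypothetical two-state chain:

```python
# P[i][j] = probability of moving from state i to state j in one step.
states = ["Healthy", "Sick"]
P = [
    [0.9, 0.1],   # from Healthy
    [0.6, 0.4],   # from Sick
]

def step(dist, P):
    """One iteration: multiply the state distribution by the transition matrix."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]            # start in "Healthy" with certainty
for _ in range(10):          # forecast 10 iterations ahead
    dist = step(dist, P)
print({s: round(p, 3) for s, p in zip(states, dist)})
# Converges toward the stationary distribution,
# here about {'Healthy': 0.857, 'Sick': 0.143}.
```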

Use custom equations in a Markov chain to predict a custom state

You can even write custom expressions over the state names to predict a custom quantity from your Markov chain.

[Screenshot: Custom equation for states]
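A sketch of the idea (hypothetical states and expression): an equation written in terms of state names can be evaluated against the forecast probabilities:

```python
# Forecast distribution from the Markov chain (hypothetical values).
dist = {"Healthy": 0.857, "Sick": 0.143}

# Hypothetical custom equation, e.g. an expected cost in terms of states.
expression = "100 * Sick + 10 * Healthy"

# Evaluate the expression with state names bound to their probabilities.
# eval() with stripped builtins is used here only for illustration.
value = eval(expression, {"__builtins__": {}}, dist)
print(round(value, 2))  # 22.87
```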

A plethora of charts on Markov Chain and Markov Decision Process

Many charts are available to give you different perspectives on the Markov Chain and the Markov Decision Process.

[Screenshot: Markov charts]

Supported Environments

Operating System

Windows 7, Windows 8, Windows 8.1, Windows 10

Microsoft .NET Framework

4.5 or later