Library "MarkovAlgorithm"
Markov algorithm is a string rewriting system that uses grammar-like rules to operate on strings of
symbols. Markov algorithms have been shown to be Turing-complete, which means that they are suitable as a
general model of computation and can represent any mathematical expression from its simple notation.
~ wikipedia
.
reference:
en.wikipedia.org
rosettacode.org
parse(rules, separator)
Parameters:
rules (string)
separator (string)
Returns: - `array _rules`: List of rules.
---
Usage:
- `parse("|0 -> 0|| 1 -> 0| 0 -> ")`
apply(expression, rules)
Applies rules to an expression.
Parameters:
expression (string) : `string`: Text expression to be formatted by the rules.
rules (rule ) : Array of parsed rules to apply to the expression, as returned by `parse()`.
Returns: - `string _result`: Formatted expression.
---
Usage:
- `apply("101", parse("|0 -> 0|| 1 -> 0| 0 -> "))`
apply(expression, rules)
Parameters:
expression (string)
rules (string)
Returns: - `string _result`: Formatted expression.
---
Usage:
- `apply("101", parse("|0 -> 0|| 1 -> 0| 0 -> "))`
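The sketch below combines `parse()` and `apply()` in a minimal script, following the usage notes above; the import version number and alias are assumptions and may need adjusting to the published library.
```
//@version=5
indicator("MarkovAlgorithm example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/MarkovAlgorithm/1 as ma

if barstate.islastconfirmedhistory
    // parse the rule string from the usage example into an array of `rule`
    rules = ma.parse("|0 -> 0|| 1 -> 0| 0 -> ")
    // rewrite the expression "101" with the parsed rules and show the result
    result = ma.apply("101", rules)
    label.new(bar_index, 0, result)
```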
rule
String pair that represents `pattern -> replacement`; each rule may be ordinary or terminating.
Fields:
pattern (series string) : Pattern to replace.
replacement (series string) : Replacement patterns.
termination (series bool) : Termination rule.
Library "MarkovChain"
Generic Markov Chain type functions.
---
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the
probability of each event depends only on the state attained in the previous event.
---
reference:
Understanding Markov Chains, Examples and Applications. Second Edition. Book by Nicolas Privault.
en.wikipedia.org
www.geeksforgeeks.org
towardsdatascience.com
github.com
stats.stackexchange.com
timeseriesreasoning.com
www.ris-ai.com
github.com
gist.github.com
github.com
gist.github.com
writings.stephenwolfram.com
kevingal.com
towardsdatascience.com
spedygiorgio.github.io
github.com
www.projectrhea.org
method to_string(this)
Translate a Markov Chain object to a string format.
Namespace types: MC
Parameters:
this (MC) : `MC` . Markov Chain object.
Returns: string
method to_table(this, position, text_color, text_size)
Namespace types: MC
Parameters:
this (MC)
position (string)
text_color (color)
text_size (string)
method create_transition_matrix(this)
Namespace types: MC
Parameters:
this (MC)
method generate_transition_matrix(this)
Namespace types: MC
Parameters:
this (MC)
new_chain(states, name)
Parameters:
states (state )
name (string)
from_data(data, name)
Parameters:
data (string )
name (string)
method probability_at_step(this, target_step)
Namespace types: MC
Parameters:
this (MC)
target_step (int)
method state_at_step(this, start_state, target_state, target_step)
Namespace types: MC
Parameters:
this (MC)
start_state (int)
target_state (int)
target_step (int)
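A minimal sketch tying `from_data()` to `to_table()` as documented above; the import version, the sample data, and the assumption that `from_data()` returns a `MC` object are illustrative.
```
//@version=5
indicator("MarkovChain example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/MarkovChain/1 as mc

if barstate.islastconfirmedhistory
    // build a chain from a sequence of observed state names
    data = array.from("up", "down", "up", "up", "down", "up")
    chain = mc.from_data(data, "direction")
    // draw the chain summary as an on-chart table
    chain.to_table(position.top_right, color.white, size.small)
```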
method forward(this, obs)
Namespace types: HMC
Parameters:
this (HMC)
obs (int )
method backward(this, obs)
Namespace types: HMC
Parameters:
this (HMC)
obs (int )
method viterbi(this, observations)
Namespace types: HMC
Parameters:
this (HMC)
observations (int )
method baumwelch(this, observations)
Namespace types: HMC
Parameters:
this (HMC)
observations (int )
Node
Target node.
Fields:
index (series int) : Key index of the node.
probability (series float) : Probability rate of activation.
state
State reference.
Fields:
name (series string) : Name of the state.
index (series int) : Key index of the state.
target_nodes (Node ) : List of index references and probabilities to target states.
MC
Markov Chain reference object.
Fields:
name (series string) : Name of the chain.
states (state ) : List of state nodes with their name, index, targets and transition probabilities.
size (series int) : Number of unique states.
transitions (matrix) : Transition matrix.
HMC
Hidden Markov Chain reference object.
Fields:
name (series string) : Name of the hidden chain.
states_hidden (state ) : List of hidden state nodes with their name, index, targets and transition probabilities.
states_obs (state ) : List of observable state nodes with their name, index, targets and transition probabilities.
transitions (matrix) : Transition matrix.
emissions (matrix) : Emission matrix.
initial_distribution (float ) : Initial probability distribution over the hidden states.
Library "FunctionProbabilityViterbi"
The Viterbi Algorithm calculates the most likely sequence of hidden states *(called Viterbi path)*
that results in a sequence of observed events.
viterbi(observations, transitions, emissions, initial_distribution)
Calculate most probable path in a Markov model.
Parameters:
observations (int ) : array . Observation states data.
transitions (matrix) : matrix . Transition probability table, (HxH, H:Hidden states).
emissions (matrix) : matrix . Emission probability table, (OxH, O:Observed states).
initial_distribution (float ) : array . Initial probability distribution for the hidden states.
Returns: array. Most probable path.
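A hedged sketch of a `viterbi()` call with two hidden states and two observable symbols; the import version and probability values are placeholders, and the emission layout follows the (OxH) note above.
```
//@version=5
indicator("Viterbi example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/FunctionProbabilityViterbi/1 as vit

if barstate.islastconfirmedhistory
    // transition probabilities between 2 hidden states (H x H)
    a = matrix.new<float>(2, 2, 0.5)
    // emission probabilities (O x H per this documentation); each column sums to 1
    b = matrix.new<float>(2, 2)
    matrix.set(b, 0, 0, 0.9)
    matrix.set(b, 0, 1, 0.2)
    matrix.set(b, 1, 0, 0.1)
    matrix.set(b, 1, 1, 0.8)
    // initial hidden-state distribution and observed symbol indices
    pi = array.from(0.5, 0.5)
    obs = array.from(0, 1, 1, 0)
    // most probable hidden-state sequence
    path = vit.viterbi(obs, a, b, pi)
    label.new(bar_index, 0, str.tostring(path))
```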
Library "FunctionBaumWelch"
Baum-Welch Algorithm, also known as the Forward-Backward Algorithm, uses the well-known EM algorithm
to find the maximum likelihood estimate of the parameters of a hidden Markov model given a set of observed
feature vectors.
---
### Function List:
> `forward (array pi, matrix a, matrix b, array obs)`
> `forward (array pi, matrix a, matrix b, array obs, bool scaling)`
> `backward (matrix a, matrix b, array obs)`
> `backward (matrix a, matrix b, array obs, array c)`
> `baumwelch (array observations, int nstates)`
> `baumwelch (array observations, array pi, matrix a, matrix b)`
---
### Reference:
> en.wikipedia.org
> github.com
> en.wikipedia.org
> www.rdocumentation.org
> www.rdocumentation.org
forward(pi, a, b, obs)
Computes forward probabilities for state `X` up to the observation at time `k`, defined as the
probability of observing the sequence of observations `e_1 ... e_k` and that the state at time `k` is `X`.
Parameters:
pi (float ) : Initial probabilities.
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of moving from one hidden
state to another. The matrix is of size (M x M), where M is the number of states.
b (matrix) : Emissions: matrix of observation probabilities `b` (beta), the probability of each observation
given a hidden state. The matrix is of size (M x O), where M is the number of states and O is the
number of distinct possible observations.
obs (int ) : List with actual state observation data.
Returns: - `matrix _alpha`: Forward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first
dimension refers to the state and the second dimension to time.
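A small sketch of the unscaled forward pass; the import version and the uniform probabilities are placeholders.
```
//@version=5
indicator("Forward example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/FunctionBaumWelch/1 as bw

if barstate.islastconfirmedhistory
    pi = array.from(0.5, 0.5)          // initial probabilities, M = 2 hidden states
    a = matrix.new<float>(2, 2, 0.5)   // transition matrix (M x M)
    b = matrix.new<float>(2, 2, 0.5)   // emission matrix (M x O), O = 2 observable symbols
    obs = array.from(0, 1, 0)          // observed state indices
    alpha = bw.forward(pi, a, b, obs)  // log-scale forward probabilities (state x time)
    label.new(bar_index, 0, str.tostring(matrix.get(alpha, 0, 0)))
```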
forward(pi, a, b, obs, scaling)
Computes forward probabilities for state `X` up to the observation at time `k`, defined as the
probability of observing the sequence of observations `e_1 ... e_k` and that the state at time `k` is `X`.
Parameters:
pi (float ) : Initial probabilities.
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of moving from one hidden
state to another. The matrix is of size (M x M), where M is the number of states.
b (matrix) : Emissions: matrix of observation probabilities `b` (beta), the probability of each observation
given a hidden state. The matrix is of size (M x O), where M is the number of states and O is the
number of distinct possible observations.
obs (int ) : List with actual state observation data.
scaling (bool) : Normalize `alpha` scale.
Returns: - #### Tuple with:
> - `matrix _alpha`: Forward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first
dimension refers to the state and the second dimension to time.
> - `array _c`: Array with normalization scale.
backward(a, b, obs)
Computes backward probabilities for state `X` and observation at time `k`, defined as the probability of observing the sequence of observations `e_k+1, ... , e_n` under the condition that the state at time `k` is `X`.
Parameters:
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of moving from one hidden
state to another. The matrix is of size (M x M), where M is the number of states.
b (matrix) : Emissions: matrix of observation probabilities `b` (beta), the probability of each observation
given a hidden state. The matrix is of size (M x O), where M is the number of states and O is the number of distinct possible observations.
obs (int ) : Array with actual state observation data.
Returns: - `matrix _beta`: Backward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first dimension refers to the state and the second dimension to time.
backward(a, b, obs, c)
Computes backward probabilities for state `X` and observation at time `k`, defined as the probability of observing the sequence of observations `e_k+1, ... , e_n` under the condition that the state at time `k` is `X`.
Parameters:
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of moving from one hidden
state to another. The matrix is of size (M x M), where M is the number of states.
b (matrix) : Emissions: matrix of observation probabilities `b` (beta), the probability of each observation
given a hidden state. The matrix is of size (M x O), where M is the number of states and O is the number of distinct possible observations.
obs (int ) : Array with actual state observation data.
c (float ) : Array with normalization scaling coefficients.
Returns: - `matrix _beta`: Backward probabilities. The probabilities are given on a logarithmic scale (natural logarithm). The first dimension refers to the state and the second dimension to time.
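A sketch of the scaled forward pass feeding its normalization coefficients into `backward()`; the import version and values are placeholders.
```
//@version=5
indicator("Forward-backward example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/FunctionBaumWelch/1 as bw

if barstate.islastconfirmedhistory
    pi = array.from(0.5, 0.5)
    a = matrix.new<float>(2, 2, 0.5)              // transition matrix (M x M)
    b = matrix.new<float>(2, 2, 0.5)              // emission matrix (M x O)
    obs = array.from(0, 1, 1, 0)
    [alpha, c] = bw.forward(pi, a, b, obs, true)  // scaled forward pass
    beta = bw.backward(a, b, obs, c)              // backward pass reusing the scaling coefficients
    label.new(bar_index, 0, str.tostring(matrix.get(beta, 0, 0)))
```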
baumwelch(observations, nstates)
**(Random Initialization)** Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the
unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm
to compute the statistics for the expectation step.
Parameters:
observations (int ) : List of observed states.
nstates (int) : Number of hidden states.
Returns: - #### Tuple with:
> - `array _pi`: Initial probability distribution.
> - `matrix _a`: Transition probability matrix.
> - `matrix _b`: Emission probability matrix.
---
requires: `import RicardoSantos/WIPTensor/2 as Tensor`
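A sketch of the randomly initialized fit; the import version and observation sequence are illustrative (the library itself pulls in WIPTensor as noted above).
```
//@version=5
indicator("Baum-Welch (random init) example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/FunctionBaumWelch/1 as bw

if barstate.islastconfirmedhistory
    obs = array.from(0, 1, 1, 0, 0, 1)  // observed state indices
    [pi, a, b] = bw.baumwelch(obs, 2)   // fit a 2-hidden-state HMM from random initial parameters
    label.new(bar_index, 0, str.tostring(pi))
```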
baumwelch(observations, pi, a, b)
Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the
unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm
to compute the statistics for the expectation step.
Parameters:
observations (int ) : List of observed states.
pi (float ) : Initial probability distribution.
a (matrix) : Transitions: hidden transition matrix `a` (alpha), the probability of moving from one hidden
state to another. The matrix is of size (M x M), where M is the number of states.
b (matrix) : Emissions: matrix of observation probabilities `b` (beta), the probability of each observation
given a hidden state. The matrix is of size (M x O), where M is the number of states and O is the number of distinct possible observations.
Returns: - #### Tuple with:
> - `array _pi`: Initial probability distribution.
> - `matrix _a`: Transition probability matrix.
> - `matrix _b`: Emission probability matrix.
---
requires: `import RicardoSantos/WIPTensor/2 as Tensor`
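The same call with explicit starting parameters instead of random initialization; values and import version are placeholders.
```
//@version=5
indicator("Baum-Welch (explicit init) example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/FunctionBaumWelch/1 as bw

if barstate.islastconfirmedhistory
    obs = array.from(0, 1, 1, 0, 0, 1)
    pi = array.from(0.5, 0.5)                    // initial distribution guess
    a = matrix.new<float>(2, 2, 0.5)             // initial transition guess (M x M)
    b = matrix.new<float>(2, 2, 0.5)             // initial emission guess (M x O)
    [pi2, a2, b2] = bw.baumwelch(obs, pi, a, b)  // re-estimated parameters
    label.new(bar_index, 0, str.tostring(pi2))
```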
Library "FunctionSMCMC"
Methods to implement Markov Chain Monte Carlo Simulation (MCMC)
markov_chain(weights, actions, target_path, position, last_value) A basic implementation of the Markov chain algorithm.
Parameters:
weights : float array, weights of the Markov Chain.
actions : float array, actions of the Markov Chain.
target_path : float array, target path array.
position : int, index of the path.
last_value : float, base value to increment.
Returns: void, updates the target path array in place.
mcmc(weights, actions, start_value, n_iterations) Uses a Monte Carlo algorithm to simulate a Markov chain at each step.
Parameters:
weights : float array, weights of the Markov Chain.
actions : float array, actions of the Markov Chain.
start_value : float, base value to start simulation.
n_iterations : integer, number of iterations to run.
Returns: float array with path.
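A hedged sketch of `mcmc()`; the import version and the shapes of `weights`/`actions` are assumptions based on the parameter notes above.
```
//@version=5
indicator("SMCMC example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/FunctionSMCMC/1 as smc

if barstate.islastconfirmedhistory
    weights = array.from(0.25, 0.50, 0.25)        // chain weights (illustrative)
    actions = array.from(-1.0, 0.0, 1.0)          // value increments tied to each weight
    path = smc.mcmc(weights, actions, close, 50)  // simulate 50 steps starting at the current close
    label.new(bar_index, close, str.tostring(array.get(path, array.size(path) - 1)))
```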
Library "FunctionDecisionTree"
Method to generate decision tree based on weights.
decision_tree(weights, depth) Method to generate decision tree based on weights.
Parameters:
weights : float array, weights for decision consideration.
depth : int, depth of the tree.
Returns: int array
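A minimal sketch of `decision_tree()`; the import version and weights are placeholders.
```
//@version=5
indicator("Decision tree example")
// import path/version assumed; adjust to the published library version
import RicardoSantos/FunctionDecisionTree/1 as dt

if barstate.islastconfirmedhistory
    weights = array.from(0.2, 0.8)            // weights for decision consideration
    decisions = dt.decision_tree(weights, 3)  // generate a tree of depth 3
    label.new(bar_index, 0, str.tostring(decisions))
```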