Consider this scenario: a guy (say John) has three dice, each a different shape. Die 1 has the numbers 1, 2, 3, 4, 5, 6 on it, Die 2 has the numbers 1, 2, 3, 4, and Die 3 has the numbers 1, 2, 3, 4, 5, 6, 7, 8, as seen in the following figure [1]:

John throws one die at a time, and the next die he chooses depends on his previous selection. For example, he is more **likely** to pick Die 1 if he picked Die 2 last time, and **unlikely** to pick Die 1 if he picked Die 3 last time. We cannot see which die he selected, but we can see the number shown on the die. Now, after observing a sequence of throws, what do you think the next number will be?

This may sound like a very difficult question, but researchers in linguistics deal with this kind of problem all the time. You can **hear** each word as it is spoken, and based on the **hidden** rules connecting the words (i.e. syntax and meaning) you want to predict what the next word could be. Mathematical models have been built to represent this type of question. In this example, each state is determined by its previous state(s), and we call such a model a Markov model, or Markov chain. The simplest case, where *a state is determined only by the single state before it*, is a Markov chain of order 1. Furthermore, which die (state) was selected is not known; only the consequence of the state (the number shown) can be observed. Such a model is called a Hidden Markov Model (HMM).
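As a concrete illustration, the dice scenario can be sketched as a small simulation. The transition probabilities below are made up for illustration (the text only says that Die 1 is likely after Die 2 and unlikely after Die 3); the face counts follow the three dice described above.

```python
import random

# Hypothetical transition probabilities between the three dice
# (rows: current die, columns: next die). Only the tendencies stated
# in the text are encoded; the exact numbers are invented.
TRANSITION = [
    [0.3, 0.4, 0.3],   # after Die 1
    [0.6, 0.2, 0.2],   # after Die 2 -> Die 1 is likely
    [0.1, 0.4, 0.5],   # after Die 3 -> Die 1 is unlikely
]
FACES = [6, 4, 8]      # Die 1: 1-6, Die 2: 1-4, Die 3: 1-8

def throw_sequence(n, seed=42):
    """Sample n throws; return the hidden dice and the visible numbers."""
    rng = random.Random(seed)
    die = 0                                  # start with Die 1 (arbitrary)
    dice, numbers = [], []
    for _ in range(n):
        dice.append(die)
        numbers.append(rng.randint(1, FACES[die]))                # observed
        die = rng.choices([0, 1, 2], weights=TRANSITION[die])[0]  # hidden
    return dice, numbers

dice, numbers = throw_sequence(10)
print(numbers)   # only this sequence is visible to the observer
```

The observer sees only `numbers`; recovering anything about `dice` from them is exactly the HMM problem described below.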

There are three problems in HMM that need to be addressed:

- Evaluation: Given the state-transition probabilities and the observation probabilities of each hidden state (i.e. given an HMM), calculate the probability of an observed sequence.
- Decoding: Given the HMM and the observed sequence, find the most likely sequence of hidden states behind it.
- Learning: Given the observed sequence, estimate the HMM parameters.

Of the three, the learning problem is the most difficult.
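The evaluation problem is solved by the forward algorithm. A minimal pure-Python sketch (the two-state model parameters below are made-up toy values, not anything from the LabVIEW toolkit):

```python
def forward(obs, pi, A, B):
    """P(observation sequence | HMM) via the forward algorithm.

    pi[i]   -- probability of starting in state i
    A[i][j] -- probability of moving from state i to state j
    B[i][k] -- probability of emitting symbol k in state i
    """
    n = len(pi)
    # alpha[i] = P(o_1 .. o_t, state_t = i)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Toy 2-state, 2-symbol model (illustrative numbers only)
pi = [0.6, 0.4]
A  = [[0.7, 0.3],
      [0.4, 0.6]]
B  = [[0.9, 0.1],
      [0.2, 0.8]]
print(forward([0, 1], pi, A, B))   # ~0.209
```

Summing over all four possible state paths by hand gives the same 0.209, which is a quick way to sanity-check the recursion on small models.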

The Hidden Markov Model is a powerful tool for analysing time-series signals. There is a good tutorial explaining the concept and the implementation of HMM, and there are implementations in languages such as C, C++, C#, Python, MATLAB and Java. Unfortunately I failed to find one implemented in **LabVIEW**. This may be reinventing the wheel, but instead of calling DLLs from LabVIEW, I built one purely in LabVIEW, with no additional add-ons needed.

Multiple references were used to implement this LabVIEW HMM toolkit: [2], [3], [4], [5]. The test demos of the forward algorithm, the backward algorithm and the Viterbi algorithm follow the code referenced in [3].
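For reference, the decoding step those demos cover (the Viterbi algorithm) fits in a few lines of Python; the model parameters are again made-up toy values:

```python
def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for the observations (Viterbi)."""
    n = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]  # best path prob ending in i
    back = []                                          # back-pointers per step
    for o in obs[1:]:
        ptr = [max(range(n), key=lambda i: delta[i] * A[i][j]) for j in range(n)]
        delta = [delta[ptr[j]] * A[ptr[j]][j] * B[j][o] for j in range(n)]
        back.append(ptr)
    # backtrack from the best final state
    state = max(range(n), key=lambda i: delta[i])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.9, 0.1], [0.2, 0.8]]
print(viterbi([0, 1, 0], pi, A, B))   # [0, 1, 0]
```

Note the only difference from the forward recursion is `max` in place of `sum`, plus the back-pointers needed to recover the winning path.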

The following demo analyses the hidden states of a chapter of text. You can find a detailed description in [2]. The following figure shows the observed sequence fed to the HMM. There are about 50,000 characters (including spaces) in this text. All punctuation was removed, and only the space character and the letters were kept as observation symbols. Thus there are 27 symbols, Symbol 0 to Symbol 26, where Symbol 0 = space, Symbol 1 = a/A, Symbol 2 = b/B and so on.

With no prior knowledge of this text, or even of English, we initialise an HMM that has two hidden states. The probabilities of moving from one state to the other are not yet known. The 27 symbols (the 26 letters plus space) are the observed phenomena of the hidden states. In the figure, the probability of each letter in State 1 is plotted as dots, and the probability of each letter in State 2 as a line.

Running the forward-backward (Baum-Welch) algorithm on the HMM, we obtain two distinct states: the letters A, E, I, O, U are more likely to appear in State 1, while the remaining letters are more likely to appear in State 2. So, with no specified rules or prior knowledge, we have managed to divide the letters into vowels and consonants.
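The learning step behind this vowel/consonant split is Baum-Welch re-estimation, which runs the forward and backward passes and then updates the model by expectation-maximisation. A compact NumPy sketch of the idea, with per-step scaling to avoid numerical underflow on long sequences (random initialisation, as in the experiment above; this is an illustrative reimplementation, not the toolkit's code):

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=20, seed=0):
    """Estimate HMM parameters (A, B, pi) from a symbol sequence by EM.

    Returns the estimated model plus the per-iteration log-likelihoods,
    which EM guarantees to be non-decreasing.
    """
    obs = np.asarray(obs)
    T = len(obs)
    rng = np.random.default_rng(seed)
    # Random row-stochastic initial guesses (no prior knowledge).
    A = rng.random((n_states, n_states));  A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    pi = rng.random(n_states);             pi /= pi.sum()
    log_likelihoods = []
    for _ in range(n_iter):
        # Forward pass with per-step scaling (avoids underflow).
        alpha = np.zeros((T, n_states)); c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        log_likelihoods.append(np.log(c).sum())
        # Backward pass reusing the same scale factors.
        beta = np.zeros((T, n_states)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        # E-step: per-time state posteriors and transition counts.
        gamma = alpha * beta
        gamma /= gamma.sum(1, keepdims=True)
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            xi += (alpha[t][:, None] * A
                   * (B[:, obs[t + 1]] * beta[t + 1])[None, :]) / c[t + 1]
        # M-step: re-estimate the parameters from the posteriors.
        pi = gamma[0]
        A = xi / gamma[:-1].sum(0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(0)
        B /= gamma.sum(0)[:, None]
    return A, B, pi, log_likelihoods
```

Applied to the 27-symbol text sequence with `n_states=2`, the rows of `B` play the role of the two plotted emission curves: after training, one row concentrates on the vowels and the other on the consonants.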

## References:

[1] http://www.niubua.com/?p=1733

[2] https://www.cs.sjsu.edu/~

### Watch Bo’s presentation from the August London LabVIEW User Group where he talks through the Hidden Markov Model in LabVIEW.

## Author Bio

#### Dr Bo Fu

Certified LabVIEW Architect

Bo was first exposed to LabVIEW when he started his Ph.D. in 2007, and he has been developing biological devices for neuroscience labs ever since. For his Ph.D. he developed a high-speed compressive-sampling camera and a real-time spatial light modulator (SLM). As a postdoc, he developed a precise flow-control device for neuron feeding, a Daphnia heartbeat monitoring device and a multiple-electrode array (MEA) for monitoring neural activity.

Bo is now working on solving signal processing problems and applying Machine Learning to reduce manual operation. Bo regularly contributes to the Austin Consultants blog, maintains his own personal blog at bofu.me and is a Certified LabVIEW Architect.

Hi, my name is Kyungil Lim from Korea (Republic of).

These days I am studying the Markov model, and today I found your open-source HMM!

But I cannot run it, because I cannot find the subVI “String to character Array_ogtk”.

Please suggest a way to run it.

Thank you

Hi Kyungil,

Thanks for pointing it out. “String to character Array_ogtk” is a VI from the OpenG toolkit, so you need to install OpenG via VIPM to open it.

Actually, all this VI does is convert a string into an array of individual characters. So for an input ‘hello’ you get the output array [h, e, l, l, o]. You can duplicate this behaviour with the method shown here: https://lavag.org/uploads/monthly_09_2011/post-26690-0-66519300-1316189881.png
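In a text-based language the same operation is a one-liner; a Python equivalent, for comparison:

```python
def string_to_character_array(s):
    """Split a string into a list of single-character strings,
    mirroring what the OpenG "String to character Array" VI does."""
    return list(s)

print(string_to_character_array("hello"))   # ['h', 'e', 'l', 'l', 'o']
```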

Cheers,

Bo

Hi Dr Fu, my name is Jacko from Hong Kong.

Nice to meet you. I am a LabVIEW user.

Recently I have been studying the HMM.

I have downloaded your LabVIEW HMM example and would like to study it.

However, I am using LabVIEW 7, which is a very old version.

If I want to open your project (HMM.lvproj), which version of LabVIEW should I use?

Thank you very much.

Hi Jacko,

The code was written in LabVIEW 2014. So you need LabVIEW 2014 or later to open it.

Cheers,

Bo

Hello Dr,

I have LabVIEW 2013; is there a way to use the HMM toolkit on it?

Thanks

Hello Dr Bo Fu,

I need to develop a “log-likelihood ratio” for a serial turbo decoder in LabVIEW. Do you happen to have a VI that could help me do this?