
Basic Ideas Behind Probabilistic Automata


The concept of learning is used in many contexts and has various interpretations. By combining several dictionary definitions of the verb "to learn" we get: "to gain knowledge, understanding or skill by study, experience, practice, or being taught". In the context of designing machines that learn, learning is often interpreted as inferring rules from examples. The inferred rule is then used to perform a related task, such as prediction or identification. There are many cases in which a fully determined algorithm or machine is the best solution to a problem. Such is the case when using a robot to perform a well-defined task in an assembly line, or when designing a compiler for a specific programming language.

Probabilistic automata (PAs) constitute a mathematical framework for the specification and analysis of probabilistic systems. They are based on state transition systems and make a clear distinction between probabilistic and non-deterministic choice. We will go into the differences and similarities between both types of choices later in this paper. PAs subsume non-probabilistic automata, Markov chains and Markov decision processes. The PA framework does not provide any syntax, but several syntaxes have been defined on top of it to facilitate the modeling of a system as a PA. Properties of probabilistic systems that can be established formally using PAs include correctness and performance issues, such as: Is the probability that an error occurs small enough? Is the average time between two subsequent failures large enough? What is the minimal probability that the system responds within 3 seconds? The aim of this paper is to explain the basic ideas behind PAs and their behavior in an intuitive way.


PROBABILISTIC AUTOMATA

Basically, a probabilistic automaton is just an ordinary automaton (also known as a state machine, a labeled transition system, or a state transition system) with one difference: the target of a transition is no longer a single state, but a probabilistic choice over several next states. The most powerful (and perhaps most popular) type of probabilistic automata, used in numerous practical applications, are Hidden Markov Models (HMMs), to which we return later.
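To make this concrete, here is a minimal Python sketch of a PA under an assumed dictionary encoding, (state, action) → distribution over next states; the state and action names and the probabilities are ours, purely for illustration.

```python
import random

# A minimal sketch of a probabilistic automaton, assuming a plain
# dictionary encoding: (state, action) -> {next_state: probability}.
# All names and numbers are illustrative, not from a particular library.
transitions = {
    ("s0", "a"): {"s1": 0.5, "s2": 0.5},
    ("s1", "b"): {"s0": 1.0},
}

def step(state, action):
    """Fire the transition labeled `action` from `state`; the next state
    is drawn from the transition's target distribution."""
    dist = transitions[(state, action)]
    targets, weights = zip(*dist.items())
    return random.choices(targets, weights=weights)[0]

print(step("s0", "a"))  # 's1' or 's2', each with probability 1/2
```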

COIN FLIP EXAMPLE

A transition in a probabilistic automaton may reach one state with probability 1/2 and another one with probability 1/2; an unfair coin could instead be modeled with probabilities such as 3/4 and 1/4. In this way, we can represent a coin flip, and in the same manner a die roll.

Transition representing a fair coin flip

EXPLANATION

This diagram illustrates a fair coin flip. A coin shows either heads or tails, so there are only two possible outcomes. The diagram shows that the "heads" state is reached with probability 1/2 and the "tails" state with probability 1/2.

PROCESS SENDING BITS MODEL EXAMPLE

Modeling Multi-labeled Transitions

EXPLANATION

Each transition in a probabilistic automaton is labeled with a single action. The figure above shows the model of a process that sends a bit, 0 or 1, each with a certain probability: in a single step, either a 0 or a 1 is sent. However, the model shown is not a probabilistic automaton, because two actions appear on a single transition.
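One way to repair this (a sketch, not necessarily the construction used in the source figure) is to make the probabilistic choice first via an internal action and then perform the send action from the chosen state; the tau action and the 1/2 probabilities below are our own assumptions.

```python
# A sketch of one legal PA encoding of the bit-sending process: the
# probabilistic choice is made first by an internal (tau) transition,
# and each send action then labels its own transition, so every
# transition carries exactly one action. Names and the 1/2
# probabilities are illustrative assumptions.
transitions = {
    ("start", "tau"):    {"chose0": 0.5, "chose1": 0.5},  # internal probabilistic choice
    ("chose0", "send0"): {"done": 1.0},
    ("chose1", "send1"): {"done": 1.0},
}
```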

BEHAVIOUR OF PROBABILISTIC AUTOMATA

In order to understand what the behavior of a PA is like and how the nondeterministic choices in a PA are resolved, consider the channel in Figure 3 and recall that send(i) models the sending of bit i by a sender process, which corresponds to its receipt by the channel. What can happen during the execution of this channel? Being in its start state, the channel may either receive a 0, receive a 1, or receive no bit at all. One of the fundamental aspects of the theory of PAs is that each of these possibilities may occur with a certain probability. Say that the probability that a 0 arrives is q0, that a 1 arrives is q1, and that no bit arrives at all is 1 − q0 − q1. Then the channel takes the transition send(0) with probability q0. Similarly, it takes the transition send(1) with probability q1, and remains in the start state (forever) with probability 1 − q0 − q1. In the latter case we say that the execution of the channel is interrupted.

Each choice of the values q0 and q1 in [0, 1] yields a potential (and different) behavior of the channel. In this example, the probabilities naturally arise from a probabilistic environment (a sender process) that determines probabilistically whether to send a bit and which one. In general, we describe the resolution of the nondeterministic choices by an adversary.

Upon taking the transition that has been chosen probabilistically, the system determines its next state according to the target distribution of the chosen transition. Before resolving the nondeterministic choices, we do not know the probability that a bit is lost; we can only say that it is at most 0.1. After taking the transition, the procedure starts over in the new state: the channel makes a probabilistic choice between the outgoing transitions in the new state and an interruption. That is, in the states i, the channel has the choice between rec(i) and an interruption; in the start state there is a choice between send(0), send(1) and an interruption. Obviously, these choices are not there if we are in the start state as the result of an interruption.

Moreover, when resolving the nondeterministic choice in the start state, we do not have to take the same probabilities q0 and q1 as before: for instance, the environment may now send the bits with different probabilities. The probabilities may also differ depending on the bit that the channel previously received. Therefore the resolution of the non-determinism can be history-dependent: it may depend not only on the current system state, but also on the path leading to that state.
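The following Python sketch illustrates one such history-dependent adversary for the channel; the concrete probabilities are invented for illustration.

```python
import random

# A sketch of a history-dependent adversary: given the path so far, it
# returns the probabilities q0 and q1 with which the send(0) and
# send(1) transitions are taken; the remainder is the probability of
# an interruption. All numbers are invented for illustration.
def adversary(path):
    if path and path[-1] == "send0":          # after a 0, the environment may behave differently
        return {"send0": 0.2, "send1": 0.7}   # q0, q1; interruption probability 0.1
    return {"send0": 0.45, "send1": 0.45}     # default; interruption probability 0.1

def resolve(path):
    """Resolve the nondeterministic choice in the start state."""
    q = adversary(path)
    r = random.random()
    if r < q["send0"]:
        return "send0"
    if r < q["send0"] + q["send1"]:
        return "send1"
    return None  # no bit arrives: the execution is interrupted

print(resolve([]), resolve(["send0"]))
```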

Lossy Channel

PROBABILISTIC VERSUS NON-DETERMINISTIC CHOICES

One can specify non-deterministic choices in a probabilistic automaton in exactly the same way as in an NA, viz. by having internal transitions or by having several transitions leaving from the same state. Also, the distinction between external and internal non-determinism carries over immediately to probabilistic automata. Hence, the probabilistic choices are specified within the transitions of a probabilistic automaton, and the nondeterministic choices between the transitions (leaving from the same state).

In particular, non-determinism cannot be replaced by probability in all cases. As mentioned, non-determinism is used if we deliberately decide not to specify how a certain choice is made; in particular, we do not want to specify a probability mechanism that governs the choice. Rather, we use a probabilistic choice if the event to be modeled really has all the characteristics of a probabilistic choice: for instance, the outcome of a coin flip, or random choices in programming languages. Thus, probability and non-determinism are two orthogonal and essential ingredients of the probabilistic automata model.

An important difference between probabilistic and nondeterministic choice is that the former are governed by a probability mechanism, whereas the latter are completely free. Therefore, probabilistic choices fully obey the laws from probability theory, and in particular the law of large numbers.

THE LAW

This law states that “If the same random choice is made very often, the average number of times that a certain event occurs is approximately (or, more precisely, it converges to) its expected value”.

EXPLANATION

If we flip a fair coin one hundred times, it is very likely that about half of the outcomes are heads and the other half tails. If, on the other hand, we make a nondeterministic choice between two events, then we cannot quantify the likelihood of the outcomes. In particular, we cannot say that each of the sequences is equally likely, because this would refer to a probabilistic choice!
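A quick simulation makes the law concrete: for a probabilistic fair coin, the observed fraction of heads converges to the expected value 1/2 as the number of flips grows (a Python sketch).

```python
import random

# Illustration of the law of large numbers for a fair coin: the
# observed fraction of heads approaches 1/2 as the sample grows.
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: fraction of heads = {heads / n:.4f}")
```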

The following example illustrates the combination of nondeterministic choice and probabilistic choice.

Transition of a Fair & Unfair Coin Flip

Probabilistic Models without Nondeterministic Choice

Probabilistic models without non-determinism are sometimes called purely probabilistic models. Below we discuss discrete-time, continuous-time, and semi-Markov chains, which are widely used in performance analysis, economics and the social sciences.

Discrete time Markov Chains

A discrete time Markov Chain (DTMC) is basically an unlabeled PA in which each state has exactly one outgoing probabilistic transition.
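As an illustration, here is a small DTMC simulation in Python; the toy weather chain and its probabilities are invented for this sketch.

```python
import random

# A sketch of a discrete-time Markov chain: every state has exactly one
# outgoing probabilistic transition (no actions, no nondeterminism).
dtmc = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def simulate(state, steps):
    """Follow the chain for `steps` steps, recording the visited states."""
    path = [state]
    for _ in range(steps):
        targets, weights = zip(*dtmc[state].items())
        state = random.choices(targets, weights=weights)[0]
        path.append(state)
    return path

print(simulate("sunny", 5))
```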

Semi-Markov chains

Semi-Markov chains (SMCs) generalize continuous-time Markov chains (CTMCs) by allowing the sojourn time to be determined by arbitrary probability distributions. An advantage of purely probabilistic models over models with non-determinism is that the probability of a certain event is a single real number, not an interval. A long research tradition in these models has put forward many algebraic, analytical and numerical techniques to compute these probabilities. A disadvantage is that the absence of non-determinism does not allow an asynchronous parallel composition operator.

Probabilistic Models with External Nondeterministic Choice

In models that combine probabilistic choice with external non-determinism, all outgoing edges of a state have different labels. We discuss Markov decision processes, probabilistic I/O automata and semi-Markov decision processes. The advantage of these models is that an asynchronous parallel composition operator can be defined, allowing a large system to be split up into several smaller components. Furthermore, when we put the system in a purely probabilistic environment (such that each system transition synchronizes with an environment transition), the whole system becomes purely probabilistic and the analysis techniques for such systems can be used.

Markov Decision Processes

A Markov decision process (MDP) is a PA without internal actions in which each state contains at most one outgoing transition labeled with each action a.
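A sketch of this structure in Python, with the nondeterministic choice of action resolved by a fixed policy; all state, action and probability values are invented for illustration.

```python
import random

# A sketch of a Markov decision process: in each state the *choice of
# action* is nondeterministic (here resolved by a fixed policy), while
# the effect of the chosen action is probabilistic. Each state has at
# most one transition per action.
mdp = {
    "s0": {"go":   {"s1": 0.9, "s0": 0.1},
           "wait": {"s0": 1.0}},
    "s1": {"go":   {"s0": 1.0}},
}

policy = {"s0": "go", "s1": "go"}  # one way to resolve the nondeterminism

def run(state, steps):
    for _ in range(steps):
        action = policy[state]                     # nondeterministic choice, resolved
        targets, weights = zip(*mdp[state][action].items())
        state = random.choices(targets, weights=weights)[0]  # probabilistic effect
    return state

print(run("s0", 10))
```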

Probabilistic I/O automata

Probabilistic I/O automata (PIOAs) combine external non-determinism with exponential probability distributions. The memoryless property of these distributions allows a smooth definition of a parallel composition operator, as far as independent transitions are concerned. For synchronizing transitions, the situation is more difficult. Various solutions have been proposed; we find the solution adopted in PIOAs one of the cleanest. This model partitions the visible actions into input and output actions. Output and internal actions are governed by rates. This means that they can only be taken when the sojourn time has expired. Furthermore, the choice between the various output and internal actions is purely probabilistic. Input actions, on the other hand, are always enabled and can be taken before the sojourn time has expired.

Semi-Markov decision processes

Puterman discusses semi-Markov decision processes (SMDPs), which are basically semi-Markov chains with external nondeterministic choice.

Probabilistic Models with Full Non-determinism

Probabilistic automata

We have already seen that probabilistic automata and variants thereof combine nondeterministic and discrete probabilistic choice. Several process algebras have been defined that allow one to describe such models algebraically.

Interactive Markov chains

Interactive Markov Chains (IMCs) combine exponential distributions with full non-determinism. The definition of a parallel composition operator poses the same problems as when one combines exponential distributions with external non-determinism. IMCs propose an elegant solution by distinguishing between interactive transitions and Markovian transitions. The Markovian transitions specify the rate with which the transition is taken, similarly to CTMCs.

SPADES

Full non-determinism and arbitrary probability distributions are combined in the process algebra SPADES and its underlying semantic model stochastic automata (SAs). The sojourn time in a state of an SA is specified via clocks, which can have arbitrary probability distributions. Stochastic automata in their turn have semantics in terms of stochastic transition systems. These are transition systems in which the target of a transition can be an arbitrary probability space.

The behavior of a PA relies on randomized, partial adversaries. These resolve the nondeterministic choices in the model by replacing them with probabilistic ones. When ranging over all possible adversaries, one obtains the set of associated probability spaces of a probabilistic automaton.

Learning Probabilistic Automata

The most powerful (and perhaps most popular) type of probabilistic automata used in numerous practical applications are Hidden Markov Models (HMMs). As noted previously, HMMs are used to model the probabilistic generation of various natural sequences such as human speech and handwritten text. A commonly used procedure for learning an HMM from a given sample is a maximum likelihood estimation procedure based on the Baum-Welch method (which is a special case of the Expectation Maximization algorithm). However, this algorithm is guaranteed to converge only to a local maximum, and thus we are not assured that the hypothesis it outputs can serve as a good approximation for the target distribution. One might hope that the problem can be overcome by improving the algorithm used or by finding a new approach. Unfortunately, there is strong evidence that the problem cannot be solved efficiently.

The HMM training problem is the problem of approximating an arbitrary, unknown source distribution by distributions generated by HMMs. It has been proved that HMMs are not trainable in time polynomial in the alphabet size. Gillman proves that learning is hard: any learning algorithm must make exponentially many oracle calls. The method is information-theoretic and does not depend on separation assumptions for any complexity classes.

Natural simpler alternatives, which are often used as well, are order-L Markov chains (also known as n-gram models), in which the probability that a symbol is generated depends on the last L symbols generated. These models were first considered by Shannon [Sha51] for modeling statistical dependencies in the English language, and were later studied in the same context by several researchers (cf. [Jel83, BPM+92]). The size of an order-L Markov chain is exponential in L and hence, if one wants to capture more than very short-term memory dependencies in generated sequences such as natural language, these models are clearly not practical. Related studies consider families of distributions for which the learning algorithms depend exponentially on the order, or memory length, of the distributions. If we require that for each state in an HMM there is only one outgoing transition labeled by each symbol, then we get a restricted family of HMMs known as unifilar hidden Markov models. As these automata will be our centre of attention, we shall simply refer to them as probabilistic finite automata (PFAs). The problem of learning PFAs from an infinite stream of data was studied by Rudich and by DeSantis, Markowsky, and Wegman. Carrasco and Oncina give an alternative algorithm for learning in the limit when the algorithm has access to a source of independently generated sample strings. The problem of exactly learning a PFA using a probability oracle and an equivalence oracle (which returns as counterexamples strings that have different probabilities of being generated by the target PFA and by the queried hypothesis) has also been studied.
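To see why order-L Markov chains blow up with L, consider this Python sketch of n-gram training: the conditional probability table ranges over up to |∑|^L contexts, hence the exponential size. The function names are ours.

```python
from collections import Counter, defaultdict

# A sketch of training an order-L Markov chain (an n-gram model with
# n = L + 1): the probability of the next symbol depends only on the
# last L symbols. The table can hold up to |alphabet|**L contexts,
# which is why the model size is exponential in L.
def train(text, L):
    counts = defaultdict(Counter)
    for i in range(L, len(text)):
        context, symbol = text[i - L:i], text[i]
        counts[context][symbol] += 1
    # Normalize counts into conditional probabilities P(symbol | context).
    return {ctx: {s: c / sum(ctr.values()) for s, c in ctr.items()}
            for ctx, ctr in counts.items()}

model = train("abracadabra", L=2)
print(model["ab"])  # {'r': 1.0}: in this sample, 'ab' is always followed by 'r'
```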

PROBABILISTIC TIMING

Timing can be incorporated in the PA model in a similar way as in the NA model (cf. the "old-fashioned recipe for time"). A probabilistic timed automaton (PTA) is a PA with time-passage actions. These are actions that indicate the passage of d time units. While time elapses, no other actions take place and, in the PTA approach, time advances deterministically. So, in particular, no (internally) non-deterministic or probabilistic choices can be specified within time-passage actions. The state of the PTA is well-defined at each point in time and, conversely, two subsequent time-passage actions can be combined into a single one.

DEFINITION

A PTA is a PA enriched with a partition of its set of actions into a set of discrete actions and the set R>0 of positive real numbers, the time-passage actions.

Part of PTA

As PTAs are a special kind of PAs, the notions defined for PAs can also be used for PTAs. By letting time pass deterministically, PTAs treat probabilistic choice, nondeterministic choice and time passage as orthogonal concepts, which leads to a technically clean model.

Discrete probabilistic choices over time can be encoded in PTAs via internal actions. Nondeterministic choices over time can be encoded similarly: just replace the probabilistic choice in the example by a nondeterministic one. Thus, although we started from a deterministic view of time, non-determinism and probabilistic choices over time sneak in via a back door. The advantage of the PTA approach is that we separate concerns by specifying one thing at a time: time passage or probabilistic/nondeterministic choice.

EXAMPLE

We can use a PTA to model a system that decides with an internal probabilistic choice whether to wait a short period (duration one time unit) or a long period (two time units) before performing an a-action. This PTA is partially shown in the figure above, where the second element of each state records the amount of time that has elapsed.
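As a sketch, the example can be simulated as follows; the 1/2-1/2 split between the short and the long wait is our own assumption, since the figure does not fix the probabilities.

```python
import random

# A sketch of the PTA example: an internal probabilistic choice selects
# a short (1 time unit) or long (2 time units) wait; time then passes
# deterministically before the a-action. The event encoding and the
# 1/2-1/2 probabilities are invented for illustration.
def run():
    wait = random.choices([1, 2], weights=[0.5, 0.5])[0]  # internal probabilistic choice
    trace = [("tau", 0)]                   # second component: elapsed time
    trace.append(("time_passage", wait))   # time advances deterministically
    trace.append(("a", wait))              # the a-action after the chosen delay
    return trace

print(run())
```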

Continuous distributions over time can of course not be encoded in a PTA, since such distributions cannot be modeled in the PA model anyhow.

THE RIGHT REASONS FOR CONSIDERING A RISK ACCEPTABLE

Experience has shown that there are four ways of justifying a decision to carry out a dangerous activity and that they have very different chances of gaining unanimous acceptance, depending on whether they call upon pure logic, scientific calculations, or statistics.

The feared event is physically impossible

The most radical way to reassure the people concerned is to prove that the activity in question uses inherently safe techniques and that the feared accident is physically impossible. In such cases, the message is almost always very easy to get across, because it calls upon basic logic. For example, it is very easy to get people to understand that a buried tank cannot explode if it comes in contact with a flame, because it is impossible to keep a flame going in the ground in the absence of air. Similarly, everyone can understand that a tank's wall cannot be pierced by a missile if it is protected by a one-meter layer of earth.

Dangerous effects of the feared event will not reach any crowded or populated areas.

When the feared accident is not logically impossible, it is still easy to ground a decision to allow the dangerous activity if one can prove that, even in the worst-case scenario, the scope of the dangerous effects is limited enough not to reach crowded or populated areas. This type of argument may be used, for example, to gain acceptance of the risk of a flash fire following a flammable gas leak at a loading station that is correctly equipped with means to limit the leak's flow rate and duration. In this case, the approach is deterministic. Calculations based on the laws of physics are used to prove the safety around the installations. Even if the calculations may be marked by great uncertainty, the experts will always manage to agree on a distance that everyone is sure will not be exceeded.


The feared event is slow enough to guarantee that the population can be kept out of harm's way.

This third type of argument may be used for feared events that are not logically impossible and the harmful effects of which are of large enough magnitude to reach crowded or populated areas but unfold slowly enough to guarantee that they will not have any catastrophic consequences. This applies, for example, to the smoke generated by fires that spread slowly enough to give nearby residents enough time to close their windows and doors or evacuate the premises if necessary. In this case, the proof of safety is usually based on empiricism, if the phenomenon is well known, and, if need be, on physical calculations such as combustion and evaporation rates.

The probability of the feared event's occurring is small enough to believe that it will never occur.

The very small probability of occurrence may be used to justify a favorable decision only as a last resort, if none of the preceding arguments can be invoked. The main reason for putting this argument in last place is that it is the least convincing and the most difficult to develop. It is the least convincing because:

– aversion to risk varies greatly from one individual to the next;

– the probabilistic projections that are applied to events that are rare or never seen are only constructs of the mind;

– the available statistical data often have only tenuous connections with the case studied, and the margins of error are considerable;

– even a highly improbable event can nevertheless occur tomorrow.

It is also the most difficult to develop because it requires very fine analysis, strict logic, and the use of often uncertain figures. Still, despite this approach's known weaknesses, it is used intensively, for rejecting it would lead to refusing scads of reputedly dangerous activities that are no more dangerous than other better-known and generally accepted activities, such as urban gas distribution or maritime passenger transport. To be understood by the largest number of people, we talk about the probability of "dangerous effects" occurring in places used by people rather than the "expected value of death", as is usual elsewhere. This concept has many advantages, among them:

– a more accurate perception of the true fears of the people, who do not want a disaster at all and are not willing to accept concessions according to the number of fatalities: if the risk of a disaster is deemed acceptable, it is always because of its probability, not because of the disaster's magnitude;

– it takes account of the non-lethal consequences (physical and mental trauma and injury);

– it simplifies the study by removing what is most open to contestation, for there are effectively too many unpredictable givens to allow the establishment of a true correlation between the physical effects of an accident and its human toll, and too few observations of major accidents to allow serious projections.

So, for example, it is much easier and more certain to content oneself with estimating the probability that a building will be subjected to dangerous pressure overloads than to estimate the number of people who would die if the building collapsed. When it comes to buildings' stability, it is also worthwhile to confine oneself to the risk of dangerous effects, in the interests of quality communication. The situation of a building's collapse is already unacceptable to the residents. Speculating about the number of deaths to judge the acceptability of a risk would imply that the Authority considered it acceptable for a resident to be buried under the rubble of his home, as long as he survived. Such an attitude would create doubt as to the protective role that the citizen is entitled to expect from public services. To sum up, the procedure used in safety studies consists in asking four questions in the order of the most reassuring answers, as follows:

Is the feared event physically impossible?

If it is not, then:

Is the magnitude of the dangerous effects limited enough not to reach crowded or populated areas?

If it is not, then:

Is the pace of the feared event slow enough to make it possible to get the threatened population out of harm’s way?

If it is not, then:

Is the feared event’s probability low enough to believe that it will not occur?

A single ‘yes’ answer will be sufficient to justify the decision to grant the permit and the study’s authors know that they can end their proof as soon as they have a good reason. This practice saves time for the study’s authors and evaluators alike.

Decision Making Process

COMPUTATIONAL COMPLEXITY OF TRAINING PA

The hardness of training the class of 2-state null PA constraints is shown via a strong non-approximability result for the single string problem for the same class. We emphasize again that the training problem for the class of s-state null constraints is the natural problem of finding a near-optimal probabilistic automaton with a given number of states. It should be noted that showing that the class of null constraints is hard to train is much more significant, and also more difficult, than constructing an artificial class of constraints that is hard to train.

THEOREM

For any α > 0, the single string problem for the class of 2-state null PA constraints is not approximable within a factor depending on α in time polynomial in the alphabet size a and |w|, where w is the input word, unless P = NP.

To explain the intent of the transformation, we need to introduce some terminology concerning stochastic matrices. A deterministic stochastic matrix is a stochastic matrix in which, for each state i and for each letter z, there is at most one transition with positive probability out of i labeled with z. Thus any deterministic stochastic matrix M induces, for each letter z, a transition function from states to states. If the letter has transitions out of each of the two states, then the associated function must be one of the four possible total Boolean functions over one variable; these are:

0-reset

1-reset

Identity(id)

Flip(flip)

Four Boolean Functions

With a letter that has a transition out of only one of the two states, we associate one of the four possible partial one-variable Boolean functions: 0→0, 0→1, 1→0, 1→1. A letter with a total (respectively partial) transition function is referred to as a total (respectively partial) letter. If, in a stochastic matrix M, a letter z is associated with flip, for example, we write z =_M flip. If a pair of letters x and y are id and flip respectively, then we write (x, y) =_M (id, flip).
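The letter/function correspondence can be made concrete as follows; a Python sketch with invented letter names, over the two-state space {0, 1}.

```python
# A sketch of the letter/function correspondence for a 2-state
# deterministic stochastic matrix: each total letter acts on the state
# space {0, 1} as one of the four total one-variable Boolean functions.
# The letter names are invented for illustration.
functions = {
    "w": lambda s: 0,      # 0-reset
    "x": lambda s: 1,      # 1-reset
    "y": lambda s: s,      # identity (id)
    "z": lambda s: 1 - s,  # flip
}

def run_word(word, state=0):
    """Apply the transition functions of the letters of `word` in order."""
    for letter in word:
        state = functions[letter](state)
    return state

print(run_word("yzx"))  # id, then flip, then 1-reset applied to state 0 -> 1
```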

Probabilistic automata for computing with words

A probabilistic automaton for computing with words (or PACW for short) is a probabilistic automaton M_w = (Q, Σ′, δ, q0, F), where the components Q, q0, F are the finite set of states, the initial state and the set of final states, and:

(b′) Σ′ is a finite subset of D(∑), where ∑ is a finite set of symbols, called the underlying input alphabet;

(e′) δ is a transition probability function from Q × Σ′ to D(Q).

The new features of the model in this definition are that the input alphabet consists of some (not necessarily all) probability distributions over a finite set of symbols (i.e., the underlying input alphabet), and that the transition probability function can be specified arbitrarily. In particular, when Σ′ = D(∑), we say that the PACW is a probabilistic automaton for computing with all words (or PACAW for short). The choice of Σ′ and the specification of the transition probability function δ are provided by experts, from experiment or intuition. The definition of the language accepted by a probabilistic automaton is applicable to PACWs, and we thus get a direct way of computing with strings of words.

Probabilistic grammars for computing with words

Before introducing probabilistic grammars for computing with words, let us recall some definitions.

A grammar is a tuple G = (V, ∑, P, S), where V and ∑ are respectively finite sets of variables and terminals with V ∩ ∑ = Ø, P is the set of productions of the form α → β, and S ∈ V is the starting variable.

The following are some frequently used notations on grammar G:

1) ∑* is the set of all finite-length strings over ∑ (including the empty string ε), and ∑+ = ∑* \ {ε}.

2) η →_G γ means that there exist ω1, ω2 ∈ (V ∪ ∑)* and α → β ∈ P such that η = ω1 α ω2 and γ = ω1 β ω2.

3) η →*_G γ denotes that there is a sequence of strings ξ1, …, ξn such that ξ1 = η, ξn = γ, and ξi →_G ξi+1 for all 1 ≤ i ≤ n − 1.

4) The language generated by the grammar G is defined as L(G) = {s ∈ ∑* : S →*_G s}.

The name G below the arrows will be omitted if the grammar being used is obvious. The form of the productions determines the type of a grammar. It is well known that regular grammars are equivalent to deterministic finite automata, context-free grammars are equivalent to pushdown automata, and context-sensitive grammars are equivalent to linear bounded automata.

Probabilistic automata vs. probabilistic grammars

Given a probabilistic grammar G = (V, ∑, P, S), the following process generates a probabilistic automaton M_G = (Q, ∑, δ, q0, F) satisfying L(G)(s) = L(M_G)(s) for all s ∈ ∑*:

1) Let Q = V, q0 = S, and F = {A ∈ V : Pr(A → ε) = 1}.

2) Define δ(A, a)(B) = Pr(B | A, a) for all A, B ∈ Q and a ∈ ∑.

In turn, given a probabilistic automaton M = (Q, ∑, δ, q0, F), we can also construct an equivalent probabilistic grammar G_M = (V, ∑, P, S):

1) Let V = Q and S = q0.

2) Let P = {A → aB : A, B ∈ V, a ∈ ∑} ∪ {A → ε : A ∈ V} and define the probabilities of the productions as follows:

Pr(A → aB) = δ(A, a)(B);

Pr(A → ε) = 1 if A ∈ F, and 0 otherwise.

For convenience, we say that M_G is the probabilistic automaton induced from the probabilistic grammar G, and that G_M is the probabilistic grammar induced from the probabilistic automaton M. Clearly, the construction above is applicable to PACWs and PGCWs, which gives the equivalence between them.
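The first construction can be sketched directly in Python; the toy grammar, its probabilities, and the encoding are invented for illustration.

```python
# A sketch of the grammar-to-automaton construction described above:
# states are the variables, the start state is S, the final states are
# the variables A with Pr(A -> epsilon) = 1, and delta(A, a)(B) is read
# off the production probabilities. The toy grammar is invented.
grammar = {
    # (variable, terminal) -> {next_variable: probability},
    # i.e. Pr(A -> aB) laid out as delta(A, a)(B)
    ("S", "a"): {"A": 0.6, "S": 0.4},
}
eps_prob = {"S": 0.0, "A": 1.0}  # Pr(A -> epsilon) for each variable

def induced_pa(grammar, eps_prob, start="S"):
    """Build (Q, delta, q0, F) from a probabilistic grammar."""
    states = {v for (v, _) in grammar} | set(eps_prob)
    final = {A for A, p in eps_prob.items() if p == 1.0}
    delta = dict(grammar)  # delta(A, a)(B) = Pr(B | A, a)
    return states, delta, start, final

Q, delta, q0, F = induced_pa(grammar, eps_prob)
print(sorted(Q), q0, sorted(F))  # ['A', 'S'] S ['A']
```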

