Learning DFA from corrections
Transcript of Learning DFA from corrections
Learning DFA from corrections
Leonor Becerra-Bonache, Cristina Bibire, Adrian Horia Dediu
Research Group on Mathematical Linguistics, Rovira i Virgili University
Pl. Imperial Tarraco 1, 43005, Tarragona, Spain
E-mail: {leonor.becerra,cristina.bibire,adrianhoria.dediu}@estudiants.urv.es
Outline
Learning from queries
Learning from corrections
Comparative results
Concluding remarks
Further research
Bibliography
Learning from queries
In the last four decades, three important formal models have been developed within Computational Learning Theory: Gold's model of identification in the limit [4], the query learning model of Angluin [1,2] and the PAC learning model of Valiant [7].
Our paper is focused on learning DFA within the framework of query learning.
Learning from queries was introduced by Dana Angluin in 1987 [1]. She gave an algorithm for learning DFA from membership and equivalence queries, and she was the first to prove learnability of DFA via queries.
Later, Rivest and Schapire in 1993 [6], Hellerstein et al. in 1995 [5] and Balcázar et al. in 1996 [3] developed more efficient versions of the same algorithm, trying to increase the level of parallelism, to reduce the number of equivalence queries, etc.
Learning from queries
In query learning, there is a teacher that knows the language and has to answer correctly specific kinds of queries asked by the learner. In Angluin's algorithm, the learner asks two kinds of queries:
• membership query
- consists of a string s; the answer is YES or NO, depending on whether s is a member of the unknown language or not.
• equivalence query
- is a conjecture, consisting of a description of a regular set U. The answer is YES if U is equal to the unknown language; otherwise it is a string s in the symmetric difference of U and the unknown language.
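As an illustration, here is a minimal sketch (ours, not from the paper) of a teacher answering these two kinds of queries, assuming the unknown language is given by a complete DFA; the bounded enumeration in the equivalence query is a stand-in for a real equivalence test:

```python
from itertools import product

# Hypothetical teacher for a target regular language represented by a
# complete DFA: delta maps (state, symbol) to a state, finals is the
# set of accepting states.
class Teacher:
    def __init__(self, alphabet, delta, start, finals):
        self.alphabet, self.delta = alphabet, delta
        self.start, self.finals = start, finals

    def _run(self, word):
        q = self.start
        for a in word:
            q = self.delta[(q, a)]
        return q

    def membership_query(self, word):
        # YES/NO: is `word` a member of the unknown language?
        return self._run(word) in self.finals

    def equivalence_query(self, conjecture_accepts, max_len=8):
        # Return None (meaning YES) if no counterexample of length
        # <= max_len exists, otherwise a word in the symmetric
        # difference of the conjecture and the unknown language.
        for n in range(max_len + 1):
            for letters in product(self.alphabet, repeat=n):
                w = "".join(letters)
                if self.membership_query(w) != conjecture_accepts(w):
                    return w
        return None
```

For example, for the language of words over {a} with an even number of a's, a conjecture that accepts everything is refuted by the counterexample "a".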
Learning from corrections
In Angluin's algorithm, when the learner asks about a word in the language, the teacher's answer is very simple: YES or NO.
Our idea was to introduce a new type of query:
• correction query
- consists of a string s; the teacher has to return the smallest string s' such that s.s' belongs to the target language.
Formally, for a string s ∈ Σ*:
C(s) = the minimum string of the set s⁻¹L, if s⁻¹L ≠ ∅
C(s) = φ, otherwise
where s⁻¹L = {w ∈ Σ* : s.w ∈ L} is the left quotient of L by s, and A = (Q, Σ, δ, q0, F) is an automaton accepting L.
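Under the same assumption that the target language is given by a complete DFA, the correction C(s) can be computed by a breadth-first search from the state reached on s; the following is a sketch of ours, with Python's None standing in for φ:

```python
from collections import deque

# Correction query for a target language given by a complete DFA
# (delta maps (state, symbol) to a state). Returns the smallest
# string s' such that word.s' is in L, or None (playing the role
# of phi) when the left quotient is empty.
def correction_query(word, alphabet, delta, start, finals):
    q = start
    for a in word:
        q = delta[(q, a)]
    # BFS from the state reached on `word`; expanding symbols in
    # sorted order yields the minimum suffix in lex-length order.
    seen = {q}
    queue = deque([(q, "")])
    while queue:
        state, suffix = queue.popleft()
        if state in finals:
            return suffix
        for a in sorted(alphabet):
            nxt = delta[(state, a)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, suffix + a))
    return None  # only a dead state is reachable: no extension is in L
```

For instance, for L = {a, aa} over {a}, C(λ) = a, C(a) = λ and C(aaa) = φ.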
Learning from corrections
Observation table
An observation table (S, E, C) consists of: a non-empty prefix-closed set S of strings, a non-empty suffix-closed set E of strings, and the restriction of the mapping C to the set (S ∪ S.Σ).E.
The table has one row for each string in S ∪ S.Σ (the rows for S and for S.Σ - S are kept separate), one column for each e ∈ E, and the entry in row s and column e is C(s.e).
For any s ∈ S ∪ S.Σ, row(s) denotes the finite function from E to Σ* ∪ {φ} defined by row(s)(e) = C(s.e).
Learning from corrections
Closed, consistent observation tables
An observation table is called closed if ∀t ∈ S.Σ, ∃s ∈ S such that row(t) = row(s).
An observation table is called consistent if ∀s1, s2 ∈ S such that row(s1) = row(s2), row(s1.a) = row(s2.a) ∀a ∈ Σ.
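Both conditions are easy to check mechanically. In the sketch below (our representation, not the paper's), the table is stored as lists S and E of strings and a dict T mapping each pair (u, e), with u ∈ S ∪ S.Σ and e ∈ E, to the correction C(u.e), with None standing in for φ:

```python
# row(s) as a tuple of corrections, one entry per column e in E.
def row(T, E, s):
    return tuple(T[(s, e)] for e in E)

# Closed: every row of S.Sigma already appears as a row of S.
def is_closed(S, E, T, alphabet):
    rows_S = {row(T, E, s) for s in S}
    return all(row(T, E, s + a) in rows_S for s in S for a in alphabet)

# Consistent: whenever two S-rows are equal, they stay equal after
# appending any single symbol.
def is_consistent(S, E, T, alphabet):
    for s1 in S:
        for s2 in S:
            if s1 != s2 and row(T, E, s1) == row(T, E, s2):
                if any(row(T, E, s1 + a) != row(T, E, s2 + a)
                       for a in alphabet):
                    return False
    return True
```

For instance, for S = {λ, a, aa, aaa}, E = {λ} with corrections a, λ, λ, φ, φ (for λ, a, aa, aaa, aaaa), the table is closed but not consistent.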
Learning from corrections
Example:
S = {λ, a, aa, aaa}, E = {λ}

Observation table (upper rows S, lower rows S.Σ - S):

         λ
  λ      a
  a      λ
  aa     λ
  aaa    φ
  ----------
  aaaa   φ

Is it closed? Yes: every row of S.Σ - S already appears as a row of S.
Is it consistent? No: row(a) = row(aa), but row(a.a) ≠ row(aa.a).
Learning from corrections
Remark 1: C(α) = βγ implies C(αβ) = γ.
Remark 2: C(α) = φ implies C(αβ) = φ.
Given a closed, consistent observation table (S, E, C), we define the automaton A(S, E, C) = (Q, Σ, δ, q0, F) as follows:
Q = {row(s) : s ∈ S}
q0 = row(λ)
F = {row(s) : s ∈ S and C(s) = λ}
δ(row(s), a) = row(s.a)
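The construction can be read off the table directly. A sketch, under the same hypothetical representation as before (lists S and E of strings, with "" playing the role of λ, and a dict T mapping (u, e) to C(u.e), with None for φ):

```python
# Build A(S,E,C) from a closed, consistent observation table.
def build_automaton(S, E, T, alphabet):
    def row(s):
        return tuple(T[(s, e)] for e in E)
    states = {row(s) for s in S}       # Q = {row(s) : s in S}
    start = row("")                    # q0 = row(lambda)
    # F = {row(s) : C(s) = lambda}: the correction in the column of
    # the empty suffix is the empty string.
    finals = {row(s) for s in S if T[(s, "")] == ""}
    # delta(row(s), a) = row(s.a); well defined by consistency.
    delta = {(row(s), a): row(s + a) for s in S for a in alphabet}
    return states, start, finals, delta
```

On a closed, consistent table for L = {a} over {a} (S = {λ, a, aa}, E = {λ}, corrections a, λ, φ, φ), this yields a three-state automaton with one accepting state.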
Learning from corrections
Lemma 3. Assume that (S, E, C) is a closed, consistent observation table. Then the automaton A = A(S, E, C) is consistent with C.
Remark 3: δ(q0, s) = row(s) for every s ∈ S ∪ S.Σ.
Sketch of the proof:
1. ∀s ∈ S ∪ S.Σ, ∀e ∈ E, ∃s' ∈ S s.t. δ(q0, s.e) = δ(q0, s') and C(s.e) = C(s').
2. C(s.e) = φ ⟺ C(s') = φ ⟺ δ(q0, s') ∈ deadSet(A) ⟺ δ(q0, s.e) ∈ deadSet(A).
If C(s.e) = t ≠ φ, we show that δ(q0, s.e.t) ∈ F:
I. t = λ: C(s.e) = C(s') = λ, so δ(q0, s') ∈ F and hence δ(q0, s.e) ∈ F.
II. t = a1a2...an, n ≥ 1: there exist s0, s1, ..., sn ∈ S s.t.
  δ(q0, s.e) = δ(q0, s0) and C(s.e) = C(s0),
  δ(q0, s0.a1) = δ(q0, s1) and C(s0.a1) = C(s1),
  ...
  δ(q0, s(n-1).an) = δ(q0, sn) and C(s(n-1).an) = C(sn).
It follows that δ(q0, s.e.t) = δ(q0, sn) and C(s.e.t) = C(sn) = λ (by Remark 1), hence δ(q0, s.e.t) ∈ F.
To conclude, we show that if t is the smallest string s.t. δ(q0, s.e.t) ∈ F, then C(s.e) = t.
Learning from corrections
Lemma 4. Assume that (S, E, C) is a closed, consistent observation table. Suppose the automaton A(S, E, C) has n states. If A' = (Q', Σ, δ', q0', F') is any automaton consistent with C that has n or fewer states, then A' is isomorphic with A(S, E, C).
Sketch of the proof: We define the function ψ : Q → Q' by ψ(row(s)) = δ'(q0', s), and show that:
1. ψ is well defined
2. ψ is bijective
3. ψ(q0) = q0'
4. ψ(F) = F'
5. ψ(δ(row(s), a)) = δ'(ψ(row(s)), a)
The proof of Theorem 1 follows, since Lemma 3 shows that A(S,E,C) is consistent with C, and Lemma 4 shows that any other automaton consistent with C is either isomorphic to A(S,E,C) or contains at least one more state. Thus, A(S,E,C) is the unique smallest automaton consistent with C.
Learning from corrections
o Correctness
If the teacher answers correctly, then, if LCA ever terminates, it is clear that it outputs the target automaton.
o Termination
Lemma 5. Let (S,E,C) be an observation table. Let n denote the number of different values of row(s) for s in S. Any automaton consistent with C must have at least n states.
o Time analysis
The total running time of LCA can be bounded by a polynomial in n, the number of states of the target automaton, and m, the maximum length of any counterexample returned by the teacher.
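Putting the pieces together, here is a compact end-to-end sketch of an LCA-style learner (our own simplified reconstruction, not the paper's pseudocode): the target is assumed to be given as a complete DFA, corrections are answered by breadth-first search, and the equivalence query is replaced by a bounded enumeration.

```python
from collections import deque
from itertools import product

def shortest_completion(word, alphabet, delta, start, finals):
    # Correction query: smallest s' with word.s' in L, None for phi.
    q = start
    for a in word:
        q = delta[(q, a)]
    seen, queue = {q}, deque([(q, "")])
    while queue:
        state, suffix = queue.popleft()
        if state in finals:
            return suffix
        for a in sorted(alphabet):
            nxt = delta[(state, a)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, suffix + a))
    return None

def learn(alphabet, delta, start, finals, max_len=6):
    cq = lambda w: shortest_completion(w, alphabet, delta, start, finals)
    S, E, T = [""], [""], {}

    def fill():  # complete the table with correction queries
        for u in set(S) | {s + a for s in S for a in alphabet}:
            for e in E:
                if (u, e) not in T:
                    T[(u, e)] = cq(u + e)

    def row(u):
        return tuple(T[(u, e)] for e in E)

    while True:
        fill()
        # Consistency: equal S-rows must stay equal after each symbol.
        bad = next((a + e for s1 in S for s2 in S
                    for a in alphabet for e in E
                    if row(s1) == row(s2)
                    and T[(s1 + a, e)] != T[(s2 + a, e)]), None)
        if bad is not None:
            E.append(bad)
            continue
        # Closedness: every S.Sigma row must appear among the S-rows.
        s_rows = {row(s) for s in S}
        t = next((s + a for s in S for a in alphabet
                  if row(s + a) not in s_rows), None)
        if t is not None:
            S.append(t)
            continue
        # Conjecture A(S,E,C), simulated straight from the table.
        rep = {}
        for s in S:
            rep.setdefault(row(s), s)

        def accepts(w):
            r = row("")
            for a in w:
                r = row(rep[r] + a)
            return T[(rep[r], "")] == ""

        # Equivalence query by bounded search (C(w) = lambda iff w in L).
        cx = next((w for n in range(max_len + 1)
                   for w in ("".join(p) for p in product(alphabet, repeat=n))
                   if (cq(w) == "") != accepts(w)), None)
        if cx is None:
            return len(s_rows), accepts
        for i in range(1, len(cx) + 1):  # add all prefixes of cx to S
            if cx[:i] not in S:
                S.append(cx[:i])
```

On the target L = {a, aa} over {a}, this sketch converges to the minimum four-state automaton (including the dead state).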
Comparative results

 Id | Alphabet  | Linear transition table                           | Final states | L*: EQ | L*: MQ | LCA: EQ | LCA: CQ
 L1 | {0,1}     | (1,2,1,2,3,4,3,3,1,3)                             | {1}          | 3      | 44     | 2       | 8
 L2 | {0,1}     | (1,2,0,3,3,0,2,1)                                 | {0}          | 2      | 19     | 1       | 6
 L3 | {0,1}     | (1,2,3,4,4,4,1,4,4,4)                             | {2,3}        | 2      | 23     | 2       | 10
 L4 | {0,1,a,b} | (1,2,2,2,2,3,2,2,2,2,2,2,0,0,4,2,2,2,2,5,0,0,3,3) | {3,5}        | 4      | 108    | 2       | 48
 L5 | {0,1}     | (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,0)           | {1,2,4,8}    | 3      | 24     | 3       | 8
 L6 | {0,1}     | (1,2,3,2,2,2,4,2,5,2,1,2)                         | {5}          | 3      | 65     | 1       | 7

Comparative results for different languages using L* and LCA
Concluding remarks
o We have improved Angluin's query learning algorithm by replacing MQs with CQs. This approach allowed us to use a smaller number of queries, and in this way the learning time is reduced.
o One of the reasons for this reduction is that the answer to a CQ embeds much more information.
o Another advantage of our approach is that we can differentiate better between states.
o Among the improvements previously discussed, we would like to mention the adequacy of CQs in a real learning process. They reflect more accurately the process of children's language acquisition. We are aware that this kind of formalism assumes an ideal teacher who knows everything and always gives the correct answers, which is an ideal situation. The learning of a natural language is an infinite process.

Further research
o To prove that the number of CQs is always smaller than the number of MQs
o To prove that the number of EQs is always less than or equal
o To prove the following conjectures:
  consistent_Ang ⟹ consistent_LCA
  closed_Ang ⟹ closed_LCA
o To show that we have improved on the running time
o CQs are more expensive than MQs. How much does this affect the total running time?
Bibliography
[1] D. Angluin, Learning regular sets from queries and counterexamples. Information and Computation 75, 1987, 87-106.
[2] D. Angluin, Queries and concept learning. Machine Learning 2, 1988, 319-342.
[3] J. L. Balcázar, J. Díaz, R. Gavaldà, O. Watanabe, Algorithms for learning finite automata from queries: A unified view. In Advances in Algorithms, Languages and Complexity. Kluwer Academic Publishers, 1997, 73-91.
[4] E. M. Gold, Identification in the limit. Information and Control 10, 1967, 447-474.
[5] L. Hellerstein, K. Pillaipakkamnatt, V. Raghavan, D. Wilkins, How many queries are needed to learn? Proc. 27th Annual ACM Symposium on the Theory of Computing. ACM Press, 1995, 190-199.
[6] R. L. Rivest, R. E. Schapire, Inference of finite automata using homing sequences. Information and Computation 103(2), 1993, 299-347.
[7] L. G. Valiant, A theory of the learnable. Communications of the ACM 27, 1984, 1134-1142.