
With the advancement of technology, industry has been moving towards automation and intelligence, and artificial intelligence and machine learning have played a vital role in this shift. Natural language processing (NLP) is a field of computer science and linguistics that focuses on methods for processing natural languages. So, which is more reliable and efficient for natural language processing: a finite state machine (FSM) or a pushdown automaton (PDA)?

2 Answers


  1. It seems like the real question you want to be asking is: “What are some efficient techniques in natural language processing?” But I will address the question you asked first.

    First of all, neither FSAs (finite state automata) nor PDAs (pushdown automata) are sufficient to model natural language. FSAs can handle regular languages; they cannot, however, even answer the question of whether a word is a palindrome. PDAs are a little more powerful and can answer such questions. Turing machines give universal computation and are useful for writing programs of arbitrary complexity.
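    To make that gap concrete, here is a minimal Python sketch (my own illustration, not part of the original answer): a finite-state recognizer for the regular language (ab)* needs only a fixed set of states, while recognizing even-length palindromes needs the PDA-style stack memory an FSA lacks. The function names are purely illustrative.

    ```python
    # Finite-state recognizer for the regular language (ab)*:
    # a fixed, finite set of states is the only "memory" an FSA has.
    def accepts_ab_star(s):
        state = 0                          # 0 = expecting 'a', 1 = expecting 'b'
        for ch in s:
            if state == 0 and ch == 'a':
                state = 1
            elif state == 1 and ch == 'b':
                state = 0
            else:
                return False               # no valid transition
        return state == 0                  # accept only after complete 'ab' pairs

    # Stack-based, PDA-style recognizer for even-length palindromes.
    # The stack supplies the unbounded memory an FSA lacks. (A true PDA
    # guesses the midpoint nondeterministically; here we simply compute it.)
    def accepts_even_palindrome(s):
        if len(s) % 2 != 0:
            return False
        mid = len(s) // 2
        stack = list(s[:mid])              # push the first half
        for ch in s[mid:]:                 # pop and compare with the second half
            if not stack or stack.pop() != ch:
                return False
        return not stack

    print(accepts_ab_star("abab"))             # True
    print(accepts_even_palindrome("abba"))     # True
    print(accepts_even_palindrome("abab"))     # False
    ```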

    Now to start bridging this gap: natural languages are not regular languages, so they cannot be handled by FSAs. Some context-free grammars, such as LR(k) grammars, can be handled by PDAs; however, natural human language is not context-free. As an example, consider the following three statements: “Jill drove to the grocery store to meet her friend Sally before she picked up her kids. Sally bought three boxes of cereal. Then she drove to the school.” While the grammar is poor, the passage is “natural” in that these are utterances people actually make, and they are generally parseable by other people. The antecedent of the pronoun “she” in the third sentence is Jill, since she is the one with children to pick up, but the reference is ambiguous and we have to infer that association.

    The amount of contextual ambiguity in natural human language makes it impossible to parse deterministically. Instead, we turn to the fields of statistics and decision theory to infer the maximally likely interpretation of the communication.
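    As a toy sketch of that statistical turn (the candidate probabilities below are invented purely for illustration, not taken from any real model), the idea is to score each possible antecedent of “she” and choose the maximally likely one rather than committing to a single deterministic parse:

    ```python
    # Toy illustration with invented probabilities: score each candidate
    # antecedent for "she" in "Then she drove to the school." and pick the
    # maximally likely one. A real system would estimate these from data.
    antecedent_scores = {
        "Jill":  0.7,   # she is the one with kids to pick up
        "Sally": 0.3,   # most recently mentioned entity
    }
    best = max(antecedent_scores, key=antecedent_scores.get)
    print(best)         # -> Jill
    ```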

    The locality combined with non-determinism in speech and writing is one of the things that makes machine learning techniques, such as deep recurrent neural networks, so immensely effective compared to their classical rule-based counterparts.
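    For a flavor of what such a model looks like, here is a minimal sketch assuming PyTorch is available; the vocabulary size, dimensions, and two-class task are arbitrary placeholders rather than a prescribed architecture:

    ```python
    import torch
    import torch.nn as nn

    # Minimal recurrent sentence classifier: an embedding layer, an LSTM that
    # carries context forward across the token sequence, and a linear layer
    # that maps the final hidden state to class scores.
    class TinyRNNClassifier(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, num_classes)

        def forward(self, token_ids):               # token_ids: (batch, seq_len)
            embedded = self.embed(token_ids)         # (batch, seq_len, embed_dim)
            _, (hidden, _) = self.rnn(embedded)      # hidden: (1, batch, hidden_dim)
            return self.out(hidden[-1])              # (batch, num_classes)

    model = TinyRNNClassifier()
    fake_batch = torch.randint(0, 1000, (4, 12))     # 4 "sentences" of 12 token ids
    print(model(fake_batch).shape)                   # torch.Size([4, 2])
    ```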

    The term “neural network” is a bit of a misnomer, since the human brain is far more complex than these rudimentary models from a neurological perspective, but the general idea of learning through approximate inference is ostensibly close to reality. We might better call these methods “differentiable computing,” but that is a digression for another time.

    In summary, the answer to the question you actually asked is that PDAs will produce better models than FSAs, but both are essentially worthless compared to even rudimentary statistical methods.

    If you are curious about NLP, I would actually recommend a course in machine learning and a follow-up in deep learning.

    Andrew Ng has a good series of courses targeted toward beginners. After that, I would follow up with Siraj's course on deep learning in TensorFlow.

  2. Even though there are many techniques for NLP, the state of the art is to use deep learning. Deep learning techniques have produced many significant improvements in NLP, largely because of the enormous amount of processing power now available at low cost. If you want to read about cutting-edge techniques in NLP or any other research domain, go to Google Scholar (https://scholar.google.com/).
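    As an illustration of how accessible this has become (a sketch assuming the Hugging Face transformers package is installed; the pipeline call downloads a default pretrained model on first use):

    ```python
    from transformers import pipeline

    # A pretrained deep-learning model for sentiment analysis in two lines;
    # the first call fetches default model weights from the model hub.
    classifier = pipeline("sentiment-analysis")
    print(classifier("Deep learning has dramatically improved NLP."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
    ```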
