History of Natural Language Processing


For the final project for History of Computing, I decided to explore the history of Natural Language Processing. I chose this topic because I am fascinated by its interdisciplinary nature, and because it offers a brand new way to evaluate implicit biases in our culture. Natural Language Processing is a broad field, and its history intertwines with the histories of machine translation, linguistics, and artificial intelligence. In this article, I will walk through the timeline of major achievements in these related fields that led to what we now know as natural language processing.

The history of natural language processing begins in the early seventeenth century. Although nothing physical was built, philosophers such as Leibniz and Descartes proposed theoretical codes that would relate words from different languages with each other.

In the mid-1930s, Georges Artsrouni and Peter Troyanskii contributed to the first "translating machines." Figure 1 is a photo of Georges Artsrouni's machine, which was named the "mechanical brain." Although Artsrouni was the first to apply for a patent for a translating machine, Troyanskii's proposal was far more detailed. This was before the era of electronic computers, and the machines relied on paper tapes.

Artsrouni's Machine

Figure 1: Artsrouni’s Machine [1]

In 1950, Alan Turing published "Computing Machinery and Intelligence," in which he proposed the Turing test. This advance in artificial intelligence relies "on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge", which remains a large part of the goal of the modern field of natural language processing.

In 1954, the Georgetown-IBM Experiment was held in New York. It involved the fully automatic translation of more than sixty Russian sentences into English, and it was the first public demonstration of machine translation. The Russian sentences were punched onto punch cards and fed into the machine. Fun fact: the operator who punched the words onto the cards was a woman. Figure 2 is a photo of an example punch card that was used.

Punch Card for Machine Translation

Figure 2: Punch card for machine translation [2]

In 1957, the American linguist Noam Chomsky published "Syntactic Structures", a revolutionary book that introduced a rule-based system of syntactic structures. Chomsky's generative grammar laid the foundation for an alternative linguistic formalization, and almost all work in natural language processing since 1957 has been influenced by his work.
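To give a flavor of what a rule-based generative grammar looks like, here is a minimal sketch in Python. The grammar, vocabulary, and `generate` function are my own invented illustration, not anything from Chomsky's book: each rule rewrites a non-terminal symbol (like `S` for sentence) into a sequence of further symbols, and expansion continues until only words remain.

```python
import random

# A toy context-free grammar in the spirit of generative grammar.
# All rules and words here are invented for illustration.
GRAMMAR = {
    "S":   [["NP", "VP"]],          # a sentence is a noun phrase + verb phrase
    "NP":  [["Det", "N"]],          # a noun phrase is a determiner + noun
    "VP":  [["V", "NP"]],           # a verb phrase is a verb + noun phrase
    "Det": [["the"], ["a"]],
    "N":   [["dog"], ["linguist"]],
    "V":   [["sees"], ["follows"]],
}

def generate(symbol="S"):
    """Expand a symbol into a list of words by picking rules at random."""
    if symbol not in GRAMMAR:            # terminal symbol: it is already a word
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for sym in production:
        words.extend(generate(sym))      # recursively expand each symbol
    return words

print(" ".join(generate()))              # e.g. "the dog follows a linguist"
```

Even this tiny grammar produces sentences it was never explicitly given, which is the core idea: a finite set of rules generating an open-ended set of grammatical sentences.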

In the 1960s and 1970s, many natural language processing systems were developed. Famous achievements include conceptual dependency theory and the augmented transition network (ATN) for representing natural language input.

In the 1980s, machine learning was introduced into the field of natural language processing. This was a revolutionary milestone, since almost all natural language processing work before then was based on complex sets of hand-written rules. Today, natural language processing research still relies on machine learning and deep learning tools such as neural nets, and research institutions around the world continue to produce new work in the field.
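The shift described above can be sketched in a few lines: instead of writing rules by hand, a statistical model counts patterns in data and uses those counts to make predictions. The toy corpus and bigram model below are my own illustration of the general idea, not any specific historical system.

```python
from collections import defaultdict

# A tiny invented "corpus" standing in for real training data.
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]

# Learn bigram counts from data: how often word b follows word a.
# No hand-written rules; the behavior comes entirely from the corpus.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def next_word(word):
    """Predict the continuation seen most often in the corpus."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(next_word("the"))   # "cat" follows "the" most often in this corpus
```

Feeding the model more text changes its predictions with no code changes at all, which is exactly what made the statistical turn so powerful compared with maintaining ever-larger rule sets.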


  1. Hutchins, W. J. (2004). Two precursors of machine translation: Artsrouni and Trojanskij. http://www.hutchinsweb.me.uk/IJT-2004.pdf
  2. Hutchins, W. J. (2004). The Georgetown-IBM Experiment Demonstrated in January 1954. In: Frederking, R. E., Taylor, K. B. (eds) Machine Translation: From Real Users to Research. AMTA 2004. Lecture Notes in Computer Science, vol 3265. Springer, Berlin, Heidelberg.