Divine myth of Sanskrit as the perfect language for computers

What is the reason behind saying that Sanskrit is the most suitable language for programming?

Sanskrit is one language in a tree-like family of languages with Proto-Indo-European at its root. Like many of its sister languages, it features an extensive system of verb conjugation encoding person, tense, mood, and so on. It also has a rich system of noun declension indicating how nouns relate to each other within a sentence. Together, these two features mean that Sanskrit relies less on word order than English or Chinese, for example. Sanskrit is not unique in this: the same characteristics are shared by Russian, Latin, and Greek.
I have several questions about what its particular advantage in computing would be. The question is vague enough that it does not even specify what "for computers" means.
Programming, for example, is built on languages of explicit terms and instructions. For instance, int x = 2; is a perfectly explicit instruction—what would a human language contribute to it? Applying Sanskrit's grammatical features, such as verb conjugation and case, to computing would be a novel idea, but the benefit seems tenuous to me (if you have evidence otherwise, please let me know). For Sanskrit to be useful in computing, would a complete overhaul and redesign of existing systems be needed?
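To make the contrast concrete, here is a minimal sketch using Python's standard ast module (the choice of Python rather than the C statement above is mine, purely for illustration): a programming-language statement has exactly one parse, because the grammar is formal and unambiguous by design.

```python
import ast

# A statement in a programming language parses to exactly one tree;
# there is no second "reading" to disambiguate.
tree = ast.parse("x = 2")
stmt = tree.body[0]

print(type(stmt).__name__)   # Assign
print(stmt.targets[0].id)    # x
print(stmt.value.value)      # 2
```

Whatever natural language the programmer thinks in, the machine only ever sees this one structure.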


Artificial Intelligence and Natural Language Processing (NLP)


Sanskrit has a long written and oral tradition. Panini's codification of Sanskrit has given scholars unprecedented knowledge of its inner workings; his works illustrate meticulously how an unlimited number of things can be expressed in the language. However, the claim that one natural language, however well documented its grammar may be, is fitter than another for use in AI sounds highly suspect. A meticulously defined grammar does not free Sanskrit from the ambiguities that can arise in expression; ask anyone who reads Sanskrit prose how challenging it can be to tease out the meaning of those beautiful, rich verses. Furthermore, a fundamental property of every natural language is its ability to express virtually any thought.
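The kind of ambiguity at issue can be shown with a toy illustration (this is not a real parser, and the bracketings are hand-written for the example): one natural-language string, two structurally distinct readings. Formal grammars rule this out by construction; natural languages, Sanskrit included, do not.

```python
# A classic prepositional-phrase attachment ambiguity.
sentence = "I saw the man with the telescope"

readings = [
    # Reading 1: the phrase attaches to the verb --
    # the telescope is the instrument of seeing.
    "(I (saw (the man) (with the telescope)))",
    # Reading 2: the phrase attaches to the noun --
    # the man is the one holding the telescope.
    "(I (saw (the man (with the telescope))))",
]

for r in readings:
    print(r)
```

The same surface string yields two parses, and only context can decide between them—precisely the problem a "perfectly defined" grammar does not solve.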

The idea of logic in languages is part of a larger debate in linguistics about the role language plays in shaping human thought. The Sapir-Whorf hypothesis (alluded to in the novel 1984) proposed that language directly constrains what thoughts humans can conceive; most linguists have since softened this stance, holding that language merely influences thought to some degree. Out of the desire for more logical languages, people throughout history have attempted to create ideal languages, such as Esperanto. It is a topic for another discussion, but it might be worth looking into some of these languages. One, called Ithkuil, stands out to me because its creator's goal was a medium through which human thought would be completely specific and unambiguous.
One argument raised in favour of Sanskrit is the appeal to tradition: like Latin, Sanskrit is unlikely to change because speakers defer to its codified written form. But this same argument undermines itself, since English could serve just as well: its written form is largely a fossilized form of the language, nothing stops natural language processing from employing a static, non-evolving variety of English, and many people already understand it. I am neither for nor against any particular language being used in AI, but given the structural nature of natural languages, there would in theory be little difference; the hurdles machines would have to clear with Sanskrit would be the same as, or similar to, those of any other language.

Sanskrit is not a language completely devoid of inconsistencies, and such irregularities betray its history of evolution from Proto-Indo-European. If "finely structured" were a genuinely objective, quantifiable property, one could just as well argue that Proto-Indo-European, from which Sanskrit evolved, had an even more perfect structure. Furthermore, Sanskrit is a synthetic language (one whose morphemes can carry more than one unit of meaning). If this were the definition of "finely structured", what stops one from considering agglutinative languages such as Turkish as perfect candidates for NLP? Turkish strings units of meaning together in ways that could arguably be processed by computers with even more versatility. It really is fascinating.


Turkish's regular, widely applicable system shows that Sanskrit is not the only incredibly versatile language out there. If, by mechanically applying the sutras of Panini or Jiva Goswami to noun and verbal roots, one can form perfectly correct Sanskrit words and sentences without even knowing what they mean, then the same could in principle be done in Turkish.
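A toy sketch of what "mechanical application" looks like for Turkish (greatly simplified, and written by me for illustration: real Turkish morphology also applies vowel harmony and consonant alternations, which this ignores):

```python
def agglutinate(root, suffixes):
    """Mechanically concatenate morphemes in order;
    each suffix carries one unit of meaning."""
    return root + "".join(suffixes)

# ev (house) + -ler (plural) + -im (my) + -de (locative)
word = agglutinate("ev", ["ler", "im", "de"])
print(word)  # evlerimde -- "in my houses"
```

One could apply such a rule without knowing any Turkish and still produce a correct word—exactly the property claimed to make Sanskrit special.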

A lot of the answers here have claimed that Sanskrit's grammar is detailed, unambiguous, "finely structured", and definite in its rules, making it ideal for programming. However, grammar is not something that emerges only from inflection. Chinese, viewed in a strictly grammatical sense, can express the same amount of detail; the biggest difference is that Chinese encodes relationships explicitly through word order. Whether through inflection or word order, both systems are equally valid.
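The equivalence can be sketched with a toy mini-grammar (my own illustration, standing in for neither real Chinese nor real Sanskrit): the same "who did what to whom" can be recovered either from fixed word order or from explicit case marking.

```python
def parse_by_order(words):
    """Fixed SVO order: position alone identifies each role."""
    subject, verb, obj = words
    return {"subject": subject, "verb": verb, "object": obj}

def parse_by_case(words):
    """Case suffixes identify roles, so word order is free."""
    roles = {}
    for w in words:
        stem, _, case = w.partition("-")
        if case == "NOM":
            roles["subject"] = stem
        elif case == "ACC":
            roles["object"] = stem
        else:
            roles["verb"] = stem
    return roles

# Same proposition, two encodings -- note the scrambled order below.
print(parse_by_order(["dog", "bites", "man"]))
print(parse_by_case(["man-ACC", "dog-NOM", "bites"]))
```

Both parses recover the identical set of roles, which is the point: neither strategy is intrinsically more "computable" than the other.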


And finally, one more claim about Sanskrit is unproductive: that its clear correspondence between pronunciation and spelling makes the language ideal. According to the Sanskrit Manual, A Quick-Reference Guide to the Phonology and Grammar of Sanskrit, there are ambiguities in the sounds of Sanskrit as well.

By
Eamon Bohan
Computer Programmer and Linguistic Expert
