Semantic Problems Of Intellectual Information Technologies

Abstract

The article is devoted to the study of the features of the use of intelligent technologies in the digital economy related to the interaction of the logic and semantics of language structures. The problem of natural language logic remains relevant, because this logic differs significantly from traditional mathematical logic. With the advent of artificial intelligence systems, the importance of this problem only increases. The article analyses the logical problems that impede the application of classical logic methods to natural languages. The problems arising in the study of the logic of natural language within the framework of R-linguistics are analysed. These issues are discussed in three aspects: the logical one, the linguistic one, and the correlation with reality. A very general approach to the semantics of language is considered and semantic axioms of language are formulated. The problems of language and its logic associated with this most general view of semantics are shown. It is shown that the use of mathematical logic, regardless of its type, for studying the logic of a natural language faces serious problems. This is a consequence of the inconsistency of existing approaches with the model of the world. But it is precisely coordination with the model of the world that allows us to build a new logical approach: alignment with the model means a semantic approach to logic. This very general view of semantics also allows us to formulate important results on the properties of languages whose sentences have no meaning.

Keywords: Intelligent technologies, digital economy, logic, expert systems, semantics

Introduction

The article is devoted to the study of the logic of natural language based on the approach developed within relations linguistics (hereinafter R-linguistics). The problem of natural language logic remains relevant, since this logic differs significantly from traditional mathematical logic. Moreover, with the appearance of artificial intelligence systems, the importance of this problem only increases. The article analyses the logical problems that prevent the application of classical logic methods to natural languages. This is possible because R-linguistics forms the semantics of a language in the form of world model structures in which language sentences are interpreted.

Problem Statement

The logic of natural language can be viewed from several angles. The first is the perspective of mathematical logic itself. The second is the view from the side of the language. Finally, the problem can be viewed from the side of reality.

From the first point of view, in the logic of the predicate calculus (first or second order), some initial data specified in the form of predicates are subjected to various manipulations: logical operations are applied and variables, functions or predicates are bound by quantifiers. For example, in propositional algebra we transform propositions with the help of the AND, OR and NOT operations. The transition to the propositional calculus involves specifying a system of axioms and inference rules. The system of axioms in this case describes the properties of the Boolean lattice, and the single derivation rule, modus ponens (the syllogism rule), allows us to determine which results of these manipulations satisfy the axioms of the Boolean lattice. All of this concerns a particular lattice, but what does it have to do with language and semantics?
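Before answering, it is worth making this machinery concrete. The following sketch is illustrative only and not part of the article's formalism: propositions are manipulated as truth values with AND, OR and NOT, a Boolean-lattice identity is checked by truth table, and modus ponens is the single derivation rule.

```python
# A minimal sketch (illustrative, not the article's formalism): propositions as truth
# values manipulated by AND, OR, NOT, with modus ponens as the single derivation rule.

def implies(p: bool, q: bool) -> bool:
    # material implication, expressible through NOT and OR
    return (not p) or q

def modus_ponens(p: bool, p_implies_q: bool):
    # from p and (p -> q) conclude q; otherwise nothing follows
    return True if (p and p_implies_q) else None

# Truth-table check of a Boolean-lattice identity (absorption): p AND (p OR q) = p.
assert all((p and (p or q)) == p for p in (False, True) for q in (False, True))

print(modus_ponens(True, implies(True, True)))    # True: q is derivable
print(modus_ponens(False, implies(False, True)))  # None: the premises license nothing
```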

From a purely technical point of view, the logical approach to language faces significant problems (Reformatsky, 1996; Valuitseva, 2006). We list only a few of them: changing universes, the changing arity of predicates (Polyakov, 2019a, 2019b), etc. Say, the binary predicate “the girl beat the boy” easily turns in the language into a ternary one (“the girl beat the boy with a stick”) or even a 5-ary one (“on the street the girl beat the boy with a stick on the head”). How should these transformations be treated in terms of traditional logic?
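As a toy illustration (the relation names and tuples below are invented for the example), the same verb can be recorded as relations of different arity, which classical logic must treat as distinct predicates over distinct Cartesian products:

```python
# Illustrative only: one linguistic "beat" event recorded at three different arities.
beat_2 = {("girl", "boy")}                             # the girl beat the boy
beat_3 = {("girl", "boy", "stick")}                    # ... with a stick
beat_5 = {("girl", "boy", "stick", "street", "head")}  # on the street ... on the head

# Classical logic sees three unrelated predicates over three different universes.
for relation in (beat_2, beat_3, beat_5):
    arity = len(next(iter(relation)))
    print(f"{arity}-ary:", relation)
```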

The world model and, in particular, the economic model should have one main property: the ability to forecast. From the point of view of this property, it would be extremely important for the logical tool to strengthen this advantage, and not just increase the informational expressiveness of the language. If logic allowed new predictions to be obtained from various initial predictions through manipulations with them, this would significantly increase the value of intelligent technology. For example, the prediction of global warming is obtained from many simpler predictions of the various parameters that affect the climate of the planet. It is clear that the prediction of global warming and of its parameters is extremely important for our survival, and representatives of the animal world do not have this advantage. Unfortunately, logic alone does not increase our predictive capabilities, since the predictive function of the language sits inside sentences (predicates) (Kubitsky, 1934), while logic operates on sentences from the outside. Nevertheless, we successfully generalize simple predictions into more general ones. Can this process really be described by the existing means of logic?

From the second point of view, the language uses categories and variables connected by verbs. The language reflects the work of a linguistic model (Polyakov, 2019a, 2019b), which uses “pieces” of relations (predicates): two categories connected by a verb, for example, describe only some part of a relationship. These “pieces” always look like complete relations on some “small” universe, since they are Cartesian products of two or three categories. For unary verbs, categories or variables correspond to the trajectories of change of tuples of parameter values, or to the values of the parameters or attributes themselves. For example, the phrase “the girl is spinning” means that a certain tuple of parameters, recognized as a “girl”, exhibits periodic fluctuations (a cyclic trajectory) of a certain kind. All this is very different from what we have in traditional logic.

From the third point of view, we indicate three aspects. The first aspect is what can, very conditionally, be called the “soul”. The linguistic model gives predictions, but it does not say what decisions should be made in a particular case.

The soul contains within itself the grounds on which a consciousness possessing a model makes choices. When we know a person well, then, predicting his possible choices, we use knowledge about his soul, that is, about his system of values, his emotional make-up, etc. In linguistics, we find attempts to study this factor in the formation of language in the theory of speech acts, pragmatics, and psycholinguistics. True, in linguistics this factor has so far been significantly simplified: it is one thing to understand how decisions are made, and another to see how the result of this choice is reflected in the language in the form of requests, orders, etc. So, for example, the theory of speech acts studies only the echoes of something more substantial.

So, two people who have exactly the same world models will exhibit different behaviours and generate different texts and different decisions about the same situation. Does the soul factor affect the logical component of these texts? If we believe the well-known article by Beklemisheva (2018) about female logic, then yes: the principles of modelling the world for women and men are the same, and therefore differences in the logic of behaviour are most likely associated with the factor of the soul.

The second aspect can conditionally be called the state problem. The phrase “I want to plant fruit trees on the site” has a different meaning for a resident of the south and of the north-west of Russia. When someone in the north-west pronounces this phrase, he definitely does not mean sweet cherries, apricots, etc., whereas these plants are part of the concept of “fruit trees” for a resident of the Krasnodar Territory. Consider the phrase “the schoolchildren went on an excursion to the Hermitage.” It is clear that not all the schoolchildren of the world went on the tour. In addition, at the same time there may be several different school excursions in the Hermitage that need to be distinguished somehow. From the point of view of classical logic, under one name we have many different, previously unknown predicates. Which of these predicates corresponds to the relation associated with, for example, the phrase “the schoolchildren came on an excursion to the Hermitage”? What is the reliability of any logical constructions under these conditions?

From the point of view of the third aspect, we must ask the traditional linguistic question about the nature of truth in language. An amazing collection of outstanding minds has addressed this question! This problem has given rise to ideas no less remarkable than Fermat's Last Theorem in arithmetic. Nevertheless, we must admit: the problem remains.

So, there is no structured set of statements (sentences of the language), and we are trying to introduce some structure from the outside. As such a structure we choose, for example, the Boolean algebra of two elements, “true” and “false”. Now we only need to map the statements into this set of two elements, or, in other words, to mark the statements with these two symbols. Instead of statements, we could take shells, ants, or anything else. Why do we do this with statements? In what sense does this structure correspond to the nature of statements? This is one side of the question and, as was noted, there is no answer to it in the classical approach to the logic of natural language. By the classical approach we understand the idea of superimposing one or another external order on linguistic constructions depending on the tastes of the researcher: someone is a supporter of the Boolean lattice, while others prefer residuated lattices and MV-algebras (fuzzy logic).

Language reflects (encodes) the model of the human world, while the model of the world in turn reflects the outside world. Animals have a model of the world and are able to act adequately, but they do not speak a language. Does this mean that their behaviour is outside logic? The question of truth is a question of the adequacy of the model, not of the properties of language sentences.

In Polyakov (2019a, 2019b), an analogy with the export/import of spreadsheets was used to explain the place of the language (a small sketch of this round trip is given below). Roughly speaking, “properly” organized export of a spreadsheet makes it possible to convert it into a sequence of signals in a communication line (into a language sentence) so that on the receiving side the original spreadsheet can be restored exactly. The question of how correctly the spreadsheet reflects some aspect of the real world has nothing to do with export/import. Language does not directly correspond to the world: it corresponds to the model. What does this mean? It means that linguistic structures reflect the structure of the model. But the model itself is a structure: interconnected nested linguistic spaces (Blum, 2018; Polyakov, 2019a, 2019b). It follows that we do not need to impose any logical structures on sentences: we just need to fit into the structure of the model.
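The export/import analogy can be made concrete with a small sketch, assuming a JSON encoding purely for illustration: a “spreadsheet” is exported into a linear message and restored exactly on the receiving side, while the question of whether the table correctly reflects the world remains outside the round trip.

```python
import json

# Sketch of the export/import analogy: a structured "table" is exported into a linear
# sequence of symbols (a "sentence" in the communication line) and restored exactly.
# The encoding says nothing about whether the table reflects the real world correctly.
table = {"region": "north-west", "fruit_trees": ["apple", "plum"]}

message = json.dumps(table, sort_keys=True)   # export: structure -> linear text
restored = json.loads(message)                # import: linear text -> structure

assert restored == table                      # the original structure is recovered exactly
print(message)
```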

But what does it mean to “fit into the structure of the model”?

This means that a sentence of the language must be interpreted (mapped) into the model. Nouns must be related to model categories. Variables must receive a scope or a value. Adjectives should correspond to signs (and not only to them). Verbs must relate to transitions from one category and variable to another, or to the trajectories of data tuples. Adverbs (in particular) should adjust the operation of the generators of the trajectories corresponding to verbs, etc. This process depends, for each person, on his model, desires, emotional state, etc., and only depending on the result of the interpretation will he say whether it is possible to believe what he was told. The statement “all the devils are green” has different truth values for different people. An atheist logician will say that this statement is true, since there are no devils and anything follows from a falsehood. A believer will argue only about the colour, and someone will simply say that this statement does not make sense. Perhaps it will be objected that this is not a scientific, that is, not a verified fact. But the fact of observing UFOs has been verified hundreds of thousands of times, and yet it has not become scientific. In addition, we study the language, that is, what we can talk about (that for which there is a model). In this sense, the green devils are no worse than colourful quarks.
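Returning to the mechanics of fitting a sentence into the model, here is a purely illustrative sketch (all names and structures below are invented; the article leaves the real model structure open): nouns are looked up among categories, verbs among trajectory generators, and a phrase is interpretable only if all its parts find a place in the model.

```python
# Invented toy "model of the world": nouns map to categories, verbs to trajectory
# types, adjectives to attribute values.  A phrase is interpretable only if every
# part of it can be fitted into the model.
model = {
    "categories":   {"girl": {"height", "age"}},         # nouns -> categories
    "trajectories": {"is spinning": "cyclic"},            # verbs -> trajectory types
    "signs":        {"little": ("height", "low")},        # adjectives -> attribute values
}

def interpret_phrase(noun: str, verb: str):
    if noun in model["categories"] and verb in model["trajectories"]:
        return (noun, model["trajectories"][verb])         # e.g. ('girl', 'cyclic')
    return None                                            # cannot be fitted into the model

print(interpret_phrase("girl", "is spinning"))    # ('girl', 'cyclic')
print(interpret_phrase("devil", "is spinning"))   # None: no such category in this model
```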

The problems described here compel us, at least for a start, to pay attention to the correlation between language, its semantics and its logic.

Research Questions

So, the subject of discussion in this article is the clarification of the basic relations between logic, language and semantics in the most general form. Are there rules of logical thinking outside of semantics, independent of the meanings of the modelled area?

Modern mathematical logic holds that such a system of rules exists. The situation is similar in grammar theory: in Chomsky's theory of grammars (Chomsky, 1959, 1976, 1988, 2008) it is believed that grammar rules should not depend on the meaning of the text itself, but should have only grammatical grounds.

Purpose of the Study

The aim of the research is to develop methods for calculating the semantics of language constructions based on a linguistic model.

Research Methods

The research tools are the semantic models of language described in a new direction of intelligent technologies, R-linguistics. To develop the necessary mathematical representations in the field of logic and semantics, the concept of the interpretation operator formulated there is used.

Findings

All the problems discussed above in fact have semantics at their basis. Without understanding this aspect, it is impossible to talk about logical constructions for natural languages. Therefore, we will proceed as follows: first we will consider a very general semantic view of the language, and then, through the prism of this view, we will evaluate the semantics of traditional logical constructions, especially since in modern linguistics it is understood that the semantics of a language is logical semantics (Carnap, 1963, 1975; Stadler, 2015).

Let there be some natural language at our disposal and let P be the set of sentences in this language. By a sentence we mean a sequence of words of the language that can be interpreted. This means the existence of some interpretation operator Ψ, which processes sentences into semantics, or meaning. In fact, we call sentences those finite sequences of words of the language that fall into the domain of the operator Ψ.

At this point we do not know how this interpretation works or what semantics is made of, so C for now denotes not a set but the semantic structure (to be found by the linguists of the future) obtained by interpreting a sentence s. In particular, ⌀ here denotes not an empty set but an empty semantic structure, which corresponds to the absence of meaning in a sentence. Of course, each person at a particular moment in time has his own interpretation operator. It depends on the model of the world, on mood, desires, etc. For example, in a state of severe fright a person's interpretation of a text can differ significantly from its interpretation in a benevolent mood. But all of these factors are fixed at a particular moment of interpretation, and only the accumulated meaning changes, so that at the time of interpreting a sentence each person has a specific operator Ψ.

On the set of sentences of the language P, an assignment operation (*) is defined, which appends one sentence to another so that the result is a text. By the text s = s1 * ... * sn we mean a finite sequence of sentences from P. We will assume that the interpretation of the text s = s1 * ... * sn proceeds as follows. First, the first sentence is interpreted: C1 = Ψ(⌀, s1); here it is assumed that there is no preliminary meaning before the start of the interpretation of the text. On the basis of the interpretation of the first sentence, the second is interpreted, C2 = Ψ(C1, s2); on the basis of the interpretation of the first two, the third sentence is interpreted, and so on up to Cn. Two points need to be made here.
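Before turning to these two points, a deliberately naive sketch of the sequential accumulation may help (the real structure of C is left open by the article; a set of recorded "facts" is used here only as a stand-in):

```python
# Naive stand-in for the interpretation operator: the accumulated meaning C starts
# empty and each sentence is interpreted against the meaning accumulated so far,
# C1 = Psi(empty, s1), C2 = Psi(C1, s2), and so on.
EMPTY = frozenset()                      # stands in for the empty semantic structure

def psi(c, sentence):
    """Toy Psi: record the sentence's 'fact' on top of the accumulated meaning."""
    return c | {sentence}

def interpret_text(sentences, c0=EMPTY):
    c = c0
    for s in sentences:
        c = psi(c, s)
    return c

text = ["the girl is spinning", "the girl holds a stick"]
print(interpret_text(text))
```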

The current interpretation of the text, C, should not be confused with the model of the world M in a person's head. The world model certainly determines the interpretation: indeed, the interpretation is carried out in the model. The model of the world determines the interpretation operator Ψ and, through it, the result of the interpretation of the text C. The results of the interpretation of the text can subsequently change the model M, but for the period of interpretation they accumulate without changing the model. That is why a text often has a zero initial interpretation. This resembles the difference between RAM and read-only memory in computers.

Although we defined the text as a sequence of sentences, it would be more correct to understand the text as a paragraph. Unfortunately, the uncertainty associated with this semantic concept does not allow us to give a strict definition that does not boil down to a tautology of the type “a paragraph is a segment of written speech between two red lines”, which actually means: a paragraph is what is highlighted as a paragraph. In a language, as a rule, sentences are interpreted more than one at a time. Interpretation usually takes place in paragraphs, which in oral speech are marked by longer pauses. The breakdown of spoken language into paragraphs is clearly visible when a person speaks through an interpreter, pausing as if inviting the interpreter to proceed with the translation of the paragraph. If a sentence is a unit of interpretation, then a paragraph is a unit of completed thought. The completion of a paragraph usually means that the speaker has given the listener enough information to complete the interpretation, ask the questions necessary for the interpretation and draw a logical conclusion. It seems extremely important to understand on what basis the speaker determines the end of a paragraph. In particular, for expert systems this is the signal to begin inference, or rather, complete inference.

No matter what the semantic structure C looks like, two axioms hold for natural languages.

The first axiom states that there is an empty sentence e in the language, the attribution of which to any text on the right or left does not change the interpretation of the text. This means that e itself has empty semantics (Ψ(C, e) = C for any C) and does not change the semantics of any text: Ψ(C, e * s) = Ψ(C, s * e) = Ψ(C, s). For example, if there is a piece of blank paper after (or before) the text, this does not change the semantics of the text. This semantic rule is displayed in the language as s * e = e * s = s. Obviously, only one semantically empty sentence can exist in a language, since if there were more than one (say, e and e′), then e = e * e′ = e′. It should be noted that, by definition, an empty sentence is interpretable and has empty meaning (corresponds to the empty semantic structure).

The second axiom (the idempotency axiom) states that repeating the same text s (sentence) does not change the semantics of the text: Ψ(C, s) = C′ = Ψ(C′, s) for any semantics C. In the language, this property of interpretation is reflected by the equality s = s * s. For example, we skip a reprinted passage because it carries no additional information. At first glance, the axiom of idempotency contradicts the proverb “repetition is the mother of learning.” However, this proverb means that the results of interpretation can subsequently change the model and thereby correct the interpretation operator, so that later the results of interpreting the same text can change. By virtue of the remark made above, however, at the stage of interpreting the text we assume that the operator Ψ is unchanged. This, of course, also means that the attention of the person perceiving the text remains unchanged in the process of interpretation.
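In the same naive stand-in both axioms are easy to state and check: the empty sentence contributes nothing, and a repeated sentence adds nothing beyond its first interpretation (a sketch under the toy assumptions above, not a claim about real semantics).

```python
# Toy check of the two semantic axioms in the set-of-facts stand-in.
def psi(c, s):
    return c | ({s} if s else frozenset())   # the empty sentence e is modelled as ""

E = ""                                        # empty sentence
C = frozenset({"some earlier fact"})

# Axiom 1: Psi(C, e) = C  (an empty sentence changes nothing)
assert psi(C, E) == C

# Axiom 2 (idempotency): if Psi(C, s) = C', then Psi(C', s) = C'
s = "summer residents plant apple trees"
C1 = psi(C, s)
assert psi(C1, s) == C1
print("both axioms hold in the toy model")
```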

Definition 1. We say that the sentence s has a right negation s⁻¹ if interpreting the text s * s⁻¹ with zero initial meaning yields the empty meaning (“I will go to the store. I will not go to the store.”). In other words, if C = Ψ(⌀, s), then Ψ(C, s⁻¹) = ⌀. In the language, we obtain the negation of a sentence when we put “not” in front of the verb, thereby negating the predicate of the sentence. The semantic property of right negation is expressed in the language by the rule s * s⁻¹ = e, where s is a sentence and the initial meaning of the text is zero.

We say that the sentence s has an unconditional right negation if for any meaning C the following holds: if Ψ(C, s) = C′, then Ψ(C′, s⁻¹) = C. In other words, Ψ(C, s * s⁻¹) = C. This semantic property of the sentence means that the operation * is associative for the text s * s⁻¹, that is, for any text t the equality (t * s) * s⁻¹ = t * (s * s⁻¹) holds. Of course, if the operation * is associative in general, then it generates languages with unconditional right negations.

Example 1. In natural languages, the property of unconditionality of right negation is generally not satisfied. Thus, in the famous film “Watch Out for the Car”, the investigator Maxim Podberezovikov, speaking as a witness about the actions of Detochkin, says: “He is certainly to blame, but he ... is not to blame.” This phrase does not leave the listener with a sense of empty meaning, since, in addition to the film itself, which forms a certain semantics, Podberezovikov utters before this phrase sentences that create a semantics in which the phrase retains non-empty meaning. Namely, he gives the affirmative part of the phrase the meaning of the illegality of Detochkin's actions, and the negative part the meaning of the justice of those actions. As a result, the meaning of justice cannot neutralize the meaning of illegality, and vice versa.

The law of double negation in a language is ensured by unconditional right negation. In fact, (s⁻¹)⁻¹ = e * (s⁻¹)⁻¹ = (s * s⁻¹) * (s⁻¹)⁻¹ = s * (s⁻¹ * (s⁻¹)⁻¹) = s * e = s. If we assume that logic was abstracted from language, then most likely the assignment operation was the prototype of the conjunction operation, and the unconditional negation operation was the prototype of logical negation. As is known, the logical operations of conjunction and negation are sufficient to express all the operations of the algebra of logic. However, the following holds.

Theorem 1. Any sentence that has an unconditional right negation does not make sense.

Proof.

Let the sentence s have an unconditional right negation s⁻¹; then s = s * e = s * (s * s⁻¹) = (s * s) * s⁻¹ = s * s⁻¹ = e. Here the third equality uses the associativity guaranteed by the unconditionality of the negation (with t = s), and the fourth uses the idempotency axiom (s * s = s).

Corollary. If a text consists of sentences having unconditional right negations, then the semantics of its first sentence is Ψ(C, s) = Ψ(C, e) = C; the same is obtained for the second and subsequent sentences of the text. In particular, for C = ⌀ we obtain an empty interpretation of the text. If almost all sentences of a language have unconditional negations, then the following statement will not be a mistake: any text in any language with unconditional right negations does not make sense.
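In the toy stand-in the theorem is visible directly: with the assignment operation modelled as union (idempotent and associative), interpreting a sentence can only add meaning, so s * s⁻¹ can collapse to the empty structure only if s itself is empty (a sketch under the toy assumptions, not the article's formal argument).

```python
# Sketch of Theorem 1 in the toy model: with * as union, the result always contains
# the meaning of s, so s * s_neg = e (empty) is possible only when s is already empty.
def star(a, b):
    return a | b                      # union never discards accumulated meaning

s = frozenset({"I will go to the store"})
candidates = (frozenset(), frozenset({"I will not go to the store"}))

for neg in candidates:
    assert star(s, neg) >= s          # any candidate "negation" still contains s's meaning
print("only the empty sentence can be cancelled to the empty meaning")
```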

Since sentences of natural languages do make sense, there are no unconditional right negations in the language, and the rule of double negation does not hold for them. For example, we will see later that “not” can be transferred from the subject or the object to the verb, so that from the sentence “summer residents do not plant anything” it follows that “summer residents do not plant apple trees”. But the first sentence does not mean “summer residents plant apple trees” (as cancelling the two negations would suggest), because according to it summer residents cannot plant anything at all; thus the rule of double negation does not work in natural languages. Naturally, all this fully applies to associative languages.

We can also introduce the splitting operation “∨”, which in the language corresponds to the conjunction “or” connecting sentences of the language. Thus, the notation s * (w1 ∨ ... ∨ wn) * t is, by definition, equivalent to the notation of n texts: s * w1 * t ∨ ... ∨ s * wn * t. Such an abbreviated notation means that after interpreting s, the further interpretation should be split into n independent directions, so that as a result we obtain interpretations of n texts. The splitting operation left in the record of the n texts means that the semantics of these texts are related by some alternative. For now we will not consider the meaning of the alternative and its measure (“either ... or”, “otherwise”). If there are several splitting operations in the text, then the leftmost splitting is performed first, then the leftmost again in the resulting texts, and so on. For example, the complex sentence “the administration must fulfil the demands of the workers or there will be unrest” corresponds to two sentences connected by the conjunction “or”, and this combination splits the interpretation of the text into the interpretation of two alternative texts: one uses the sentence “the administration must fulfil the demands of the workers”, and the other “there will be unrest.”
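A sketch of the splitting operation (the encoding below is illustrative; alternatives are written as lists inside the sequence of sentences): a text containing “or” is expanded, leftmost split first, into several plain texts that are then interpreted independently.

```python
from itertools import product

# Toy splitting operation: every alternative (written as a list) is expanded, so that
# a text with "or" becomes several plain texts, each interpreted independently.
def split_text(text):
    slots = [s if isinstance(s, list) else [s] for s in text]
    return [list(choice) for choice in product(*slots)]

text = ["negotiations began",
        ["the administration fulfils the workers' demands", "there will be unrest"]]

for variant in split_text(text):
    print(variant)   # two alternative texts, one per branch of "or"
```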

Let us consider an example from propositional logic, where predicates P(x) of one variable on one universe are used as simple statements (statements that contain no “AND” and “OR” operations). Each such predicate defines a certain property: exactly those objects of the universe for which the predicate takes the value “true” have this property. Each predicate corresponds to a simple sentence of the language, “the object x has the property P”, and this sentence has an interpretation in the form of a set AP consisting of all objects of the universe that have the property P. The operation of attributing simple sentences corresponds to the conjunction operation, and the splitting operation corresponds to the disjunction operation. As is known (Novikov, 1973), every Boolean formula can be represented in disjunctive normal form, that is, as a disjunction of conjunctions of simple statements or their negations. The disjunctive normal form represents the “OR” operation essentially as a splitting operation, although there is a significant difference related to interpretation.

The interpretation of a conjunction of simple sentences corresponds to the intersection of the sets defined by the predicates and their negations, but the disjunction operation is not only a splitting designation: it can also be performed (!) in the interpretation as a union of sets. Therefore, when confronted with a disjunction, it is possible not to split the formula but to interpret this operation as a union of the corresponding sets, thereby preserving a single formula.
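A sketch of this example (the universe and properties below are invented): simple statements are interpreted as subsets of one universe, conjunction as intersection, and disjunction either as a split into two interpretations or performed directly as a union.

```python
# Invented universe and properties: "x has property P" is interpreted as the subset
# A_P of the universe; conjunction is intersection, disjunction may either split the
# interpretation into alternatives or be performed directly as a union.
universe      = {"apple", "plum", "cherry", "apricot"}
A_frost_hardy = {"apple", "plum", "cherry"}
A_stone_fruit = {"plum", "cherry", "apricot"}

conjunction = A_frost_hardy & A_stone_fruit     # AND: intersection of the two sets
split       = [A_frost_hardy, A_stone_fruit]    # OR as splitting: two interpretations
union       = A_frost_hardy | A_stone_fruit     # OR performed directly as a union

print(sorted(conjunction))   # ['cherry', 'plum']
print(sorted(union))         # everything having at least one of the properties
```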

The interpretation of the unit in our example is the universe, and the unit itself is a sentence defining an everywhere-true predicate. For our example, the idempotency property is satisfied, since AP ∩ AP = AP for any predicate (sentence) P and any conjunctive context in which AP ∩ AP is involved. True, no sentence (except the unit) has a negation, either conditional or unconditional; otherwise, by virtue of Theorem 1, such a language would have no interpretation under an associative conjunction. What is generally considered a predicate negation in logic is not a negation of the sentence, by virtue of the law of contradiction (AP ∩ ¬AP = ⌀); so such an interpretation (without logical negation) is a semantic interpretation in our sense. The universe acts here as the empty semantic structure. So, although set theory is a model for the described example, this model is semantic only under the restrictions on logical negation. Within the framework of such semantics, we can say that the predicate calculus for our example is a calculus of different forms of writing a single sentence.

Conclusion

We found that even the most general assumptions about the structure of the semantic model radically affect the expressive properties of the language and its logical foundations. This is especially true of negations in the language. Assumptions about the unconditional nature of negations destroy any semantic model, including the economic one. The same applies to the assumption of the associativity of language constructions, which is made everywhere in linguistics and in the theory of expert systems that form the basis of intelligent systems in the economy. In addition, we showed how logic circumvents assumptions that are destructive for semantics by, in effect, distorting the semantic function. The reward for this is the sense of independence of “correct logical thinking” from semantic models of thinking.

References

Copyright information

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

About this article

Publication Date: 21 October 2020
eBook ISBN: 978-1-80296-089-1
Publisher: European Publisher
Volume: 90
Edition Number: 1st Edition
Pages: 1-1677
Subjects: Economics, social trends, sustainability, modern society, behavioural sciences, education

Cite this article as:
Blyum, V., Moskaleva, O., & Polyakov, O. (2020). Semantic Problems Of Intellectual Information Technologies. In I. V. Kovalev, A. A. Voroshilova, G. Herwig, U. Umbetov, A. S. Budagov, & Y. Y. Bocharova (Eds.), Economic and Social Trends for Sustainability of Modern Society (ICEST 2020), vol 90. European Proceedings of Social and Behavioural Sciences (pp. 1197-1205). European Publisher. https://doi.org/10.15405/epsbs.2020.10.03.137