Abstract
The study of mountain areas has always received great attention from science. However, the lack of a unified model for the development of mountain areas leads to a variety of recommendations that may not always be consistent. To achieve sustainable development, it is necessary to conduct a comprehensive assessment of the natural resource potential and the level of economic development of the analyzed territory. The object description is an m-dimensional vector, where m is the number of features used to characterize the object, with the j-th coordinate of this vector equal to the value of the j-th feature. In the description of an object, the absence of information about the value of a particular feature is permissible. The combination of a certain number of objects and their attributes is a sample on which n algorithms (proposed development models) have been worked out. The quality of operation of each algorithm is assessed (the model is evaluated by a Boolean function). None of the algorithms considered performed perfectly on the entire set of specified objects. A logical method is proposed for constructing a new algorithm (correction model) that is optimal on the entire set of recognized objects. The result of the study is an optimal model which includes the positive properties of the previously considered models and corrects their shortcomings. The proposed approach may serve as the basis for obtaining expert assessments and recommendations in order to build an optimal strategy for the development of mountain areas.
Keywords: algorithm, training set, knowledge base, subject domain, variable-valued logic, decision rule
Introduction
In the early period of pattern recognition research, a large number of methods and algorithms intended to solve practical problems were applied without any theoretical justification. Such methods were checked experimentally. Meeting the challenges of medical and technical diagnostics, computer prediction of mineral deposits, and the creation of expert systems produced a large number of incorrect (heuristic) algorithms. This resulted in the need to develop a theory of correcting operations, the synthesis of correct algorithms of minimum complexity, and the study of their stability.
We find that the logical approach can serve as the basis for building a theory of synthesizing correct recognition algorithms from existing algorithm families. These methods, despite the lack of adequate mathematical models of the studied dependences between an image and its properties, and despite the incompleteness and inconsistency of data, allow creating algorithms that reproduce an expert's reasoning (Zhuravljov, 1978; Zhuravljov & Rudakov, 1987).
As a rule, mathematical logic is applied to formal statements. In solving the problem stated in this paper, we use the apparatus of mathematical logic proceeding from the real qualities of the objects. Since each object is described by a number of characteristics, each broken down into a number of states, it is convenient to encode each characteristic by a variable-valued predicate.
The ultimate goal of using variable-valued predicates is to determine the class or object to which the researched data belong.
In this paper, we study a logical approach to the theoretical justification of constructing correct algorithms which expand the solution area obtained on the basis of the existing algorithms.
Problem Statement
We take socio-economic aspects, potential and resources of the region, etc., as development features of the territory, and formally we refer to them as objects (Tumenova et al., 2018). All objects possess their own characteristic features. For example, the resource potential includes land, water, biological diversity, energy, labor and others. Since the characteristics are varied and measured on different scales, it is convenient to encode them with variable-valued predicates. In the framework of our method, the task of searching for the best development strategy for mountainous regions can be formulated in the language of mathematical logic (Obeid, 1996; Fagin, Halpern & Megiddo, 1998).
The description of an object is an m-dimensional vector $X=\{{x}_{1},{x}_{2},\dots ,{x}_{m}\}$, where $m$ is the number of characteristics that describe the object and the $j$-th coordinate of this vector is equal to the value of the $j$-th feature, $j=1,\dots,m$. In the object description, the absence of information on this or that characteristic value is admissible. A set of some number of objects and their properties is a sample on which $n$ algorithms (proposed development models) have been worked out. The performance quality of each algorithm (model) is evaluated using the Boolean function ${a}_{j}\left({X}_{i},{y}_{i}\right)$. None of the algorithms under consideration recognized the whole set of the predetermined objects (Renegar, 1986). We propose a logical method for creating a new algorithm valid on the entire set of recognizable objects. For this purpose, we use the existing algorithms and the decision rules built for the study domain.
Research Questions
In this paper we study a logical approach to the theoretical justification of constructing correct algorithms which expand the solution area obtained on the basis of the existing algorithms.
Purpose of the Study
The purpose of this work is to build an optimal strategy for the development of mountain areas based on previously known models, by extracting the best solutions from them.
Research Methods
As a working method, we propose a logical analysis of a given subject area, in which the objects are the various spheres determining the level of development of mountain territories, and the features are their characteristics, presented in terms of variable-valued predicate logic. The characteristics of the development of the territory can be the economy, the social sphere, the resource potential of development, etc. These areas of development will be referred to as objects in the formal formulation of the problem.
Findings
On the subject domain consisting of objects and their characteristics, a number of recognition algorithms ${A}_{1},{A}_{2},\dots ,{A}_{n}$ are considered.
Suppose $X=\{{x}_{1},{x}_{2},\dots ,{x}_{m}\}$ is the set of features considered within the variable-valued logic, with ${x}_{i}\in \left\{0,1,\dots ,{k}_{r}-1\right\}$, where ${k}_{r}\in\left[2,\dots ,N\right]$, $N\in Z$; ${X}_{i}=\left\{{x}_{1}\left({y}_{i}\right),{x}_{2}\left({y}_{i}\right),\dots ,{x}_{m}\left({y}_{i}\right)\right\}$, $i=1,\dots ,l$, is the vector of features characterizing object ${y}_{i}\in Y$; $Y=\{{y}_{1},{y}_{2},\dots ,{y}_{l}\}$ is the set of objects; $A=\{{A}_{1},{A}_{2},\dots ,{A}_{n}\}$ is the set of algorithms; ${a}_{j}\left({X}_{i},{y}_{i}\right)\in \left\{0,1\right\}$, $i=1,2,\dots ,l$, $j=1,2,\dots ,n$, is the performance quality of algorithm ${A}_{j}$ on the given set ${X}_{i}$, formulated as follows:
${a}_{j}\left({y}_{i}\right)=\left\{\begin{array}{c}\mathrm{1},\mathrm{}\mathrm{}\mathrm{}{A}_{j}\left({X}_{i}\right)={y}_{i}\\ \mathrm{0},\mathrm{}\mathrm{}\mathrm{}{A}_{j}\left({X}_{i}\right)\ne {y}_{i}\end{array}\right.$, $i=\mathrm{1,2},\dots ,l,\mathrm{}\mathrm{}j=\mathrm{1,2},\dots ,n$,
i.e. the operation of the algorithm on the given characteristics set is evaluated by a Boolean value:
1 – algorithm ${A}_{j}$ recognizes object ${y}_{i}$ by the given characteristics ${X}_{i}$,
0 – algorithm ${A}_{j}$ does not recognize object ${y}_{i}$ by the given characteristics ${X}_{i}$.
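To make the evaluation concrete, here is a minimal Python sketch (our own illustration, not part of the paper): the toy "algorithms" are hypothetical functions mapping a feature vector to a predicted object, and the quality function ${a}_{j}\left({y}_{i}\right)$ is computed as the Boolean matrix described above.

```python
def quality_matrix(algorithms, samples):
    """Boolean performance matrix: a[j][i] = 1 iff A_j(X_i) == y_i."""
    return [[1 if A(X_i) == y_i else 0 for (X_i, y_i) in samples]
            for A in algorithms]

# Hypothetical toy domain: two partial recognizers over three objects.
A1 = lambda X: 'a' if X[0] == 0 else None   # guesses object a from x_1
A2 = lambda X: 'b' if X[1] == 2 else None   # guesses object b from x_2
samples = [((0, 1, 1), 'a'), ((1, 2, 2), 'b'), ((0, 1, 2), 'c')]

print(quality_matrix([A1, A2], samples))  # [[1, 0, 0], [0, 1, 0]]
```

Neither toy algorithm recognizes the third object, which is exactly the situation the correction construction below is designed to repair.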
The set of recorded data can be represented in a twodimensional matrix of the following form (Table 01):
${A'}_{i}=\left\{{a}_{i}\left({y}_{1}\right),{a}_{i}\left({y}_{2}\right),\dots ,{a}_{i}({y}_{l})\right\}$, $i=1,2,\dots ,n$, is the vector given by the column of assessment values for the performance quality of algorithm ${A}_{i}$.
Some of the given objects in the training sample remain unrecognized by any of the study algorithms. We can write it as follows:
$\exists\, {y}_{i}\in Y:\ {\vee}_{j=1}^{n}{a}_{j}({y}_{i})=0.$
It is necessary to construct, on the basis of the given algorithms, an algorithm ${A}_{n+1}$ which detects all objects defined in the domain: ${A}_{n+1}\left({X}_{i}\right)={y}_{i}$ for all $i$, i.e. ${A}_{n+1}\left(X\right)=Y$.
We say that the algorithm is correct on the set of objects $Y$ defined by the set of characteristics $X$ when $\forall {y}_{i}\in Y$: ${a}_{j}\left({X}_{i},{y}_{i}\right)=1,i=\mathrm{1,2},\dots ,l;j=1,\dots ,n$. In other words, the algorithm is correct for that set of objects which it identifies correctly.
For the analysis of the subject domain we use the algebra of variable-valued logic (Timofeev & Ljutikova, 2005; Vorontsov, 2000), which provides indicative coding of heterogeneous information, since each separate characteristic ${x}_{i}\in \left\{0,1,\dots ,{k}_{r}-1\right\}$ can be encoded by a predicate with any value suitable for this characteristic.
The apparatus of variable-valued logic is convenient for simple and indicative coding and decoding of the properties of the researched objects. It simplifies the fuzzification and defuzzification procedures that would be necessary in the case of fuzzy logic, and it significantly simplifies the creation of logical constructions that express the correspondence between the researched objects and their properties. Within the proposed approach, these logical constructions are presented in the form of production rules.
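As a hedged illustration of such indicative coding, the sketch below (characteristic names and scale sizes are our own example, not from the paper) maps heterogeneous raw characteristics, each measured on its own scale $k_r$, into the value sets $\{0,\dots,k_r-1\}$ used by the variable-valued predicates.

```python
# Hypothetical characteristic scales k_r (example values of our own):
scales = {'water': 3, 'energy': 5, 'labor': 2}

def encode(raw, scales):
    """Map each raw characteristic value into its range {0, ..., k_r - 1},
    clipping values that fall outside the scale."""
    return {name: min(max(value, 0), scales[name] - 1)
            for name, value in raw.items()}

print(encode({'water': 2, 'energy': 7, 'labor': 1}, scales))
# {'water': 2, 'energy': 4, 'labor': 1}
```

Each encoded value can then be read as the variable-valued literal $x_i^j$ for that characteristic.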
Variable-valued logic operations
Statements of variable-valued logic are statements whose truth is determined by the values $\left\{0,1,\dots,{k}_{r}-1\right\}$, ${k}_{r}\in\left[2,\dots ,N\right]$, $N\in Z$. A statement formula is defined with three operations:
– ¬ negation, or generalized inversion (unary operation),
– & conjunction (binary),
– ∨ disjunction (binary).
We also use the constants $0,1,\dots,{k}_{r}-1$, ${k}_{r}\in\left[2,\dots ,N\right]$, $N\in Z$.
Suppose ${x}_{i}$ is an independent multivalued variable, ${x}_{i}\in\left[0,\dots,{k}_{i}-1\right]$, representing one of the object characteristics. We introduce a few more functions and properties of variable-valued logic.
Variable value:
${x}_{i}^{j}=\left\{\begin{array}{c}j,\mathrm{}\mathrm{}\mathrm{}{x}_{i}=j\\ \mathrm{0},\mathrm{}\mathrm{}\mathrm{}{x}_{i}\ne j\end{array}\right.$,
Generalized inversion:
$\overline{{x}^{j}}={x}^{0}\vee {x}^{1}\vee \dots \vee {x}^{j-1}\vee {x}^{j+1}\vee \dots \vee {x}^{k-1}$.
The inversion thus defined allows all possible interpretations of negation in various multivalued logic systems.
Supposing variables $X\in \left[0,\dots ,{k}_{i}1\right],Y\in \left[0,\dots ,{k}_{j}1\right]$ are of various values, then the generalized disjunction:
$X\vee Y=\text{max}\left[\frac{X}{{k}_{i}-1};\frac{Y}{{k}_{j}-1}\right]\cdot l$, where $l=\left\{\begin{array}{c}{k}_{i}-1\;\text{when}\;\frac{X}{{k}_{i}-1}>\frac{Y}{{k}_{j}-1}\\ {k}_{j}-1\;\text{otherwise}\end{array}\right.$
Generalized conjunction:
$X\&Y=\text{min}\left[\frac{X}{{k}_{i}-1};\frac{Y}{{k}_{j}-1}\right]\cdot l$, where $l=\left\{\begin{array}{c}{k}_{i}-1\;\text{when}\;\frac{X}{{k}_{i}-1}<\frac{Y}{{k}_{j}-1}\\ {k}_{j}-1\;\text{otherwise}\end{array}\right.$
We define implication for the variable-valued logic by the following expression: $X\to Y=\overline{X}\vee Y.$
The elementary function of the variablevalued logic has the following properties:
${x}^{j}\&{x}^{k}=\left\{\begin{array}{c}{x}^{j},\mathrm{}\mathrm{}j=k\\ \mathrm{0},\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}j\ne k\end{array}\right.$
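The operations above can be sketched directly in code. The following Python fragment is our own illustrative reading of the definitions (the function names are ours); note that the generalized disjunction and conjunction reduce to returning the argument with the larger (respectively smaller) scaled value, since the maximum or minimum ratio is multiplied back by its own $k-1$.

```python
def var_value(x, j):
    """Variable value x^j: equals j when x == j, otherwise 0."""
    return j if x == j else 0

def gen_inversion(j, k):
    """Generalized inversion of x^j over {0, ..., k-1}: the set of all
    other admissible values (x^0 v ... v x^{j-1} v x^{j+1} v ...)."""
    return {v for v in range(k) if v != j}

def gen_disjunction(X, Y, ki, kj):
    """X v Y = max[X/(ki-1); Y/(kj-1)] * l: the argument whose
    scaled value is larger (Y on ties, matching the 'otherwise' case)."""
    return X if X / (ki - 1) > Y / (kj - 1) else Y

def gen_conjunction(X, Y, ki, kj):
    """X & Y = min[X/(ki-1); Y/(kj-1)] * l: the argument whose
    scaled value is smaller (Y on ties)."""
    return X if X / (ki - 1) < Y / (kj - 1) else Y

print(gen_disjunction(2, 0, 3, 2), gen_conjunction(2, 0, 3, 2))  # 2 0
```

For Boolean values ($k_i = k_j = 2$) the generalized operations coincide with the classical ∨ and &.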
Decision rules and response quality function
Definition. We call the statement
${\&}_{j=1}^{m}{x}_{j}({y}_{i})\to {y}_{i}$,
$i=1,\dots,l$, ${x}_{j}\left({y}_{i}\right)\in \left\{0,1,\dots,{k}_{r}-1\right\}$, ${k}_{r}\in\left[2,\dots ,N\right]$, $N\in Z$, a decision rule.
In this case, the decision rule is a production rule whose logical interpretation says that a definite object follows from a set of definite characteristics.
Suppose there are algorithms $\left\{{A}_{1},{A}_{2},\dots ,{A}_{n}\right\}$ that partially recognize the specified domain. For each given feature set ${X}_{i}$ we build the operation quality functions of each algorithm and obtain a set of vectors ${A'}_{j}=\left\{{a}_{j}\left({y}_{1}\right),{a}_{j}\left({y}_{2}\right),\dots ,{a}_{j}({y}_{l})\right\}$, $j=1,2,\dots ,n$, presented in the matrix as column ${A'}_{j}$. In every row of the matrix we get the algorithms' results on the corresponding object ${y}_{i}$; to the same object there corresponds the production rule
${\&}_{s=\mathrm{1}}^{m}{x}_{s}\left({y}_{i}\right)\to {y}_{i}$, $\mathrm{}\mathrm{}{x}_{s}\left({y}_{i}\right)\in \{\mathrm{0,1},\dots ,{k}_{r}\mathrm{1}\}$,
$i=\mathrm{1},\dots ,l,\mathrm{}\mathrm{}s=\mathrm{1},\dots ,m$.
The resulting column can be regarded as a function defined on $\left\{X,Y\right\}$.
Algorithm design for solutions area expanding
During data processing, it is proper to choose an algorithm with ${a}_{j}\left({X}_{i},{y}_{i}\right)=1$. In the case when at least one algorithm has found the solution, we have ${A}_{j}\left({X}_{i}\right)={y}_{i}$ and therefore ${\vee}_{j=1}^{n}{a}_{j}({y}_{i})=1$. If none of the study algorithms recognizes the object ${y}_{i}$, then ${\vee}_{j=1}^{n}{a}_{j}({y}_{i})=0$.
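The condition ${\vee}_{j=1}^{n}{a}_{j}({y}_{i})=0$ can be checked mechanically over the quality matrix. A small sketch with helper names of our own:

```python
def unrecognized(quality):
    """Indices i of objects y_i for which the OR over j of a_j(y_i) is 0,
    i.e. no algorithm in the family recognizes the object."""
    n_objects = len(quality[0])
    return [i for i in range(n_objects)
            if not any(row[i] for row in quality)]

# Rows are algorithms A_1..A_3; columns are objects y_1..y_4.
quality = [[1, 0, 0, 0],
           [0, 1, 1, 0],
           [1, 0, 0, 0]]
print(unrecognized(quality))  # [3] -> the last object needs a new algorithm
```

The indices returned are exactly the objects for which the corrected algorithm ${A}_{n+1}$ must be constructed.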
Suppose the entire training sample is represented by decision rules:
${\&}_{s=1}^{m}{x}_{s}\left({y}_{i}\right)\to {y}_{i}$, $i=1,\dots,l$, ${x}_{s}\left({y}_{i}\right)\in \left\{0,1,\dots,{k}_{r}-1\right\}$, ${k}_{r}\in\left[2,\dots ,N\right]$, $N\in Z$.
For each algorithm we select the decision rules for the objects it recognizes: if ${a}_{j}({y}_{i})=1$,
then ${\&}_{s=1}^{m}{x}_{s}\left({y}_{i}\right)\to {y}_{i}$, $i=1,\dots,l$, ${x}_{s}\left({y}_{i}\right)\in \left\{0,1,\dots,{k}_{r}-1\right\}$, ${k}_{r}\in\left[2,\dots ,N\right]$, $N\in Z$.
We construct a function which is the conjunction of the decision rules for the given algorithm, guided by the following logical reasoning: algorithm ${A}_{j}$ recognizes the object ${y}_{i}$, and algorithm ${A}_{j}$ recognizes the object ${y}_{p}$, and so on for all the remaining recognized objects:
${F}_{j}\left({X}_{i}\right)={\&}_{{a}_{j}\left({y}_{i}\right)=1}\left({\&}_{s=1}^{m}{x}_{s}({y}_{i})\to {y}_{i}\right)={\&}_{{a}_{j}\left({y}_{i}\right)=1}\left({\vee}_{s=1}^{m}\overline{{x}_{s}\left({y}_{i}\right)}\vee {y}_{i}\right)$.
Further, it is possible to apply a reduction algorithm adapted for multiple-valued logics:
– if the DNF has a single-literal disjunct ${x}_{i}^{j}$, then we remove all disjuncts of the form ${x}_{i}^{j}\&\dots$ (absorption law).
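The absorption step can be illustrated on a DNF represented as a list of disjuncts, each disjunct a set of literals $(x_i, j)$ standing for $x_i^j$; this encoding is our own sketch, not the paper's notation.

```python
def absorb(dnf):
    """Absorption law: drop every disjunct that strictly contains another
    disjunct. In particular a single-literal disjunct x_i^j absorbs
    every longer disjunct of the form x_i^j & ... ."""
    terms = [frozenset(t) for t in dnf]
    return [t for t in terms if not any(other < t for other in terms)]

dnf = [{('x1', 0)},                 # single-literal disjunct x1^0
       {('x1', 0), ('x2', 1)},      # absorbed by x1^0
       {('x2', 2), ('x3', 1)}]      # kept: no smaller disjunct inside it
print(absorb(dnf))
```

The proper-subset test `other < t` on frozensets is exactly the "contains a shorter disjunct" condition, so equal duplicates are not absorbed by each other.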
As a result, for a given algorithm ${A}_{j}$ we obtain ${F}_{j}$, corresponding to the decision rules recognized by the predetermined algorithm. This function has a number of features (Shibzukhov, 2014) and essentially builds the knowledge base for this algorithm, breaking the solution area into all possible classes.
A necessary and sufficient condition for the characteristic set $\{{X}_{i}\}$ to generate the class ${K}_{r}$ is the equality ${F}_{j}\left({X}_{i}\right)={K}_{r}$.
Proof:
Suppose ${F}_{j}\left({X}_{i}\right)={K}_{r}$. Since $f\left(X\right)={f}_{1}\left(X\right)\vee {f}_{2}\left(X\right)$, the value $f\left({X}_{i}\right)$ definitely characterizes the given knowledge base. It is thus possible to claim that the specific characteristic set $\{{X}_{i}\}$ describes the class ${K}_{r}$ consistently with the data provided in the knowledge base.
Conversely, assume that the feature set $\{{X}_{i}\}$ describes an object of the class ${K}_{r}$ and this is consistent with the basic data; then $f\left({X}_{i}\right)={f}_{1}\left({X}_{i}\right)\vee {f}_{2}\left({X}_{i}\right)={K}_{r}$. Since ${f}_{2}\left({X}_{i}\right)$ does not hold disjuncts containing classes, it is possible to state that ${f}_{1}\left({X}_{i}\right)={K}_{r}$.
Having designed the appropriate function ${F}_{j}$, $j=1,2,\dots ,n$, for each algorithm, we obtain a set of functions ${F}_{1},\dots ,{F}_{n}$. Adhering to the same arguments, we construct the generalizing function which is the conjunction of ${F}_{1},\dots ,{F}_{n}$: $F={\&}_{i=1}^{n}{F}_{i}$. Carrying out the computations and conversions, we obtain:
$F(X,Y)={f}_{\mathrm{1}}\left(X\right)\vee {f}_{\mathrm{2}}(X,Y)$,
where ${f}_{1}\left(X\right)$ is a function that holds only the ${x}_{s}$ variables; its disjuncts are elements which do not matter for identifying the given objects but do matter for designing a new algorithm on the basis of objects unrecognized earlier; ${f}_{2}(X,Y)$ is a function with conjunctions of features and objects, intended to determine the individual features of the predetermined objects.
For designing a new correct algorithm on the datasets unrecognized by the previous algorithms, it is enough to use the function ${f}_{1}\left(X\right)$. The new algorithm is a conjunction of ${f}_{1}\left(X\right)$ and the decision rule of the object unrecognized by the other algorithms. The result is a unique combination of the object and its features that does not belong to any of the previously recognized objects:
${A}_{n+\mathrm{1}}={f}_{\mathrm{1}}\left(X\right)\&{(\&}_{s=\mathrm{1}}^{m}{x}_{s}^{j}\to {y}_{j}){\vee}_{j=\mathrm{1}}^{n}{A}_{j}$
Example 1
Let $X=\{{x}_{1},{x}_{2},{x}_{3}\}$ be a feature set; the value of each characteristic is encoded within the three-valued logic system: ${x}_{s}\in \{0,1,2\}$, $s=1,2,3$.
The relations between the input data (object features), the objects, and the recognition algorithms' results are provided in the following matrix (Table 02).
On the basis of the given ratios we can write:
${A}_{\mathrm{1}}:\mathrm{}\mathrm{}{F}_{\mathrm{1}}=\left({x}_{\mathrm{1}}^{\mathrm{0}}\&{x}_{\mathrm{2}}^{\mathrm{1}}\&{x}_{\mathrm{3}}^{\mathrm{1}}\to a\right)\&\left({x}_{\mathrm{1}}^{\mathrm{0}}\&{x}_{\mathrm{2}}^{\mathrm{1}}\&{x}_{\mathrm{3}}^{\mathrm{2}}\to c\right)$
(algorithm ${A}_{1}$ recognizes objects $a$ and $c$)
${A}_{\mathrm{2}}:\mathrm{}\mathrm{}{F}_{\mathrm{2}}=\left({x}_{\mathrm{1}}^{\mathrm{1}}\&{x}_{\mathrm{2}}^{\mathrm{2}}\&{x}_{\mathrm{3}}^{\mathrm{2}}\to b\right)\&\left({x}_{\mathrm{1}}^{\mathrm{0}}\&{x}_{\mathrm{2}}^{\mathrm{1}}\&{x}_{\mathrm{3}}^{\mathrm{2}}\to c\right)$
${A}_{\mathrm{3}}:\mathrm{}\mathrm{}{F}_{\mathrm{3}}=\left({x}_{\mathrm{1}}^{\mathrm{0}}\&{x}_{\mathrm{2}}^{\mathrm{1}}\&{x}_{\mathrm{3}}^{\mathrm{1}}\to a\right)$
${f}_{\mathrm{1}}\left(X\right)={x}_{\mathrm{1}}^{\mathrm{2}}\vee {x}_{\mathrm{2}}^{\mathrm{0}}\vee {x}_{\mathrm{3}}^{\mathrm{0}}\vee {x}_{\mathrm{1}}^{\mathrm{1}}{x}_{\mathrm{2}}^{\mathrm{1}}\vee {x}_{\mathrm{1}}^{\mathrm{1}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee {x}_{\mathrm{2}}^{\mathrm{2}}{x}_{\mathrm{3}}^{\mathrm{1}}$
${f}_{\mathrm{2}}(X,Y)={bx}_{\mathrm{1}}^{\mathrm{1}}\vee b{x}_{\mathrm{2}}^{\mathrm{2}}\vee {ax}_{\mathrm{3}}^{\mathrm{1}}\vee c{x}_{\mathrm{1}}^{\mathrm{0}}{x}_{\mathrm{3}}^{\mathrm{2}}\vee c{x}_{\mathrm{2}}^{\mathrm{1}}{x}_{\mathrm{3}}^{\mathrm{2}}\vee bc{x}_{\mathrm{3}}^{\mathrm{2}}\vee a{x}_{\mathrm{1}}^{\mathrm{0}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee a{x}_{\mathrm{2}}^{\mathrm{1}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee ab$
${A}_{\mathrm{4}}={f}_{\mathrm{1}}\left(X\right)\&({x}_{\mathrm{1}}^{\mathrm{1}}\&{x}_{\mathrm{2}}^{\mathrm{0}}\&{x}_{\mathrm{3}}^{\mathrm{0}}\to d)=$
$={x}_{\mathrm{1}}^{\mathrm{0}}{x}_{\mathrm{2}}^{\mathrm{0}}\vee {x}_{\mathrm{2}}^{\mathrm{0}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee {x}_{\mathrm{2}}^{\mathrm{0}}{x}_{\mathrm{3}}^{\mathrm{2}}\vee {x}_{\mathrm{1}}^{\mathrm{0}}{x}_{\mathrm{3}}^{\mathrm{0}}\vee {x}_{\mathrm{2}}^{\mathrm{1}}{x}_{\mathrm{3}}^{\mathrm{0}}\vee {x}_{\mathrm{2}}^{\mathrm{2}}{x}_{\mathrm{3}}^{\mathrm{0}}\vee {x}_{\mathrm{1}}^{\mathrm{1}}{x}_{\mathrm{2}}^{\mathrm{1}}\vee {x}_{\mathrm{1}}^{\mathrm{1}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee {x}_{\mathrm{2}}^{\mathrm{2}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee d{x}_{\mathrm{2}}^{\mathrm{0}}\vee d{x}_{\mathrm{3}}^{\mathrm{0}}$
Algorithm ${A}_{4}$ identifies the individual features of the object $d$, namely ${x}_{2}=0$ and ${x}_{3}=0$. Algorithm ${A}_{4}$, in disjunction with the earlier given algorithms, covers the entire solution area in the predetermined data domain.
Logical approach to correct algorithm design on the given data domain
When we add to the previous matrix the correctness requirement for algorithm ${A}_{n+1}\left(X\right)$, we obtain the following matrix (Table 03):
That is, for ${A}_{n+1}\left(X\right)$ all values are ${a}_{n+1}\left({y}_{i}\right)=1$, $i=1,2,\dots ,l.$
Since we can consider ${a}_{j}\left({y}_{i}\right)$ as a Boolean variable, ${A}_{n+1}^{\text{'}}\left({A'}_{1},{A'}_{2},\dots,{A'}_{n}\right)$ is the Boolean function with the value 1 on all specified sets of the domain $\left({A'}_{1},{A'}_{2},\dots,{A'}_{n}\right)$. And we can write:
${A}_{n+1}^{\text{'}}\left({A'}_{1},{A'}_{2},\dots,{A'}_{n}\right)={\bigvee}_{i=1}^{l}{\&}_{j=1}^{n}{A'}_{j}^{\sigma}\left({y}_{i}\right)$, $i=1,2,\dots ,l$, $j=1,2,\dots ,n$,
${A'}_{j}^{\sigma}\left({y}_{i}\right)=\left\{\begin{array}{c}{A'}_{j},\;\;{a}_{j}\left({y}_{i}\right)=1\\ \overline{{A'}_{j}},\;\;{a}_{j}\left({y}_{i}\right)=0\end{array}\right.$
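In code, ${A}_{n+1}^{\text{'}}$ is the perfect DNF over the algorithm outcome vectors that occur as columns of the quality matrix. The following sketch (our own hedged illustration, with helper names that are not from the paper) evaluates it by membership in that set of patterns:

```python
def correction_fn(quality):
    """Build A'_{n+1} from the Boolean matrix a_j(y_i): the function is 1
    exactly on the outcome vectors (a_1(y_i), ..., a_n(y_i)) that occur
    for some object y_i, and 0 elsewhere."""
    n = len(quality)          # number of algorithms (rows)
    l = len(quality[0])       # number of objects (columns)
    patterns = {tuple(quality[j][i] for j in range(n)) for i in range(l)}
    return lambda outcomes: int(tuple(outcomes) in patterns)

quality = [[1, 0, 1],   # a_1(y_i) for i = 1..3
           [0, 1, 1]]   # a_2(y_i) for i = 1..3
A_corr = correction_fn(quality)
print(A_corr((1, 0)), A_corr((0, 0)))  # 1 0
```

By construction the function returns 1 on every column of the matrix, which is the correctness requirement ${a}_{n+1}\left({y}_{i}\right)=1$ stated above.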
We assume that ${A'}_{j}$ is the set of decision rules recognized by the algorithm, and $\overline{{A'}_{j}}$ is the set of decision rules unrecognized by this algorithm.
${A'}_{j}={\&}_{i=1}^{l}\left({\&}_{s=1}^{m}{x}_{s}({y}_{i})\to {y}_{i}\right)$ when ${a}_{j}\left({y}_{i}\right)=1$,
$\overline{{A'}_{j}}=\overline{{\&}_{i=1}^{l}\left({\&}_{s=1}^{m}{x}_{s}({y}_{i})\to {y}_{i}\right)}$ when ${a}_{j}\left({y}_{i}\right)=0$.
Expanding the implications, we obtain the following expressions:
${A'}_{j}={\&}_{i=1}^{l}\left({\bigvee}_{s=1}^{m}\overline{{x}_{s}({y}_{i})}\bigvee {y}_{i}\right)$ when ${a}_{j}\left({y}_{i}\right)=1$,
$\overline{{A'}_{j}}={\&}_{i=1}^{l}\left({\&}_{s=1}^{m}{x}_{s}\left({y}_{i}\right)\&\overline{{y}_{i}}\right)$ when ${a}_{j}\left({y}_{i}\right)=0$.
The whole study data domain can be presented as decision rules: ${\&}_{s=1}^{m}{x}_{s}\left({y}_{i}\right)\to {y}_{i}$, $i=1,\dots,l$, ${x}_{s}\left({y}_{i}\right)\in \left\{0,1,\dots,{k}_{r}-1\right\}$, ${k}_{r}\in\left[2,\dots ,N\right]$, $N\in Z$.
Theorem: Suppose a set of decision rules of the form
${\&}_{s=1}^{m}{x}_{s}\left({y}_{i}\right)\to {y}_{i}$, $i=1,\dots,l$, ${x}_{s}\left({y}_{i}\right)\in \left\{0,1,\dots,{k}_{r}-1\right\}$, ${k}_{r}\in\left[2,\dots ,N\right]$, $N\in Z$,
represents a certain subject domain under study. Then
${A}_{n+1}^{\text{'}}\left({A'}_{1},{A'}_{2},\dots,{A'}_{n}\right)={\bigvee}_{i=1}^{l}{\&}_{j=1}^{n}{A'}_{j}^{\sigma}\left({y}_{i}\right)=1$, $i=1,2,\dots ,l$, $j=1,2,\dots ,n$.
Proof:
Each algorithm enters the proposed disjunction with one or more conjunctions as ${A'}_{j}$ and also with one or more conjunctions as $\overline{{A'}_{j}}$; otherwise it would be either a universal algorithm, for which all ${a}_{j}\left({y}_{i}\right)=1$, $i=1,2,\dots ,l$, or a non-operating algorithm, with ${a}_{j}\left({y}_{i}\right)=0$, $i=1,2,\dots ,l$. Since ${A'}_{j}$ is the set of decision rules recognized by algorithm ${A}_{j}$, and $\overline{{A'}_{j}}$ is the set of decision rules unrecognized by this algorithm, the disjunction of these rules provides a full description of the study domain for each algorithm. Having created the DNF
${A}_{n+1}^{\text{'}}\left({A'}_{1},{A'}_{2},\dots,{A'}_{n}\right)={\bigvee}_{i=1}^{l}{\&}_{j=1}^{n}{A'}_{j}^{\sigma}\left({y}_{i}\right)$,
we can reduce it to a dead-end (irredundant) DNF by known methods. Further, when ${A'}_{j}$ is replaced by the decision rules, it is possible to apply the reduction algorithm adapted for multiple-valued logics:
– if some variable enters the DNF with one value in all disjuncts, we delete all disjuncts containing this variable (the variable is not informative);
– if the DNF has a single-literal disjunct ${x}_{i}^{j}$, then we apply the rule of absorption of disjuncts.
As a result, each disjunct yields a minimized knowledge base relevant to the set of rules described by this disjunct. Such disjuncts have a number of properties (Shibzukhov, 2014): they break the solution domain into all classes possible within it. By combining these domains, we minimize the knowledge base for the entire predetermined area.
Example 2.
Let $X=\{{x}_{\mathrm{1}},{x}_{\mathrm{2}},{x}_{\mathrm{3}}\}$, ${x}_{i}\in \{\mathrm{0,1},\mathrm{2}\}$.
We build the disjunction over the matrix rows:
$F={A}_{n+1}^{\text{'}}\left({A'}_{1},{A'}_{2},\dots,{A'}_{n}\right)={\bigvee}_{i=1}^{l}{\&}_{j=1}^{n}{A'}_{j}^{\sigma}\left({y}_{i}\right)$
$F={A}_{1}\&\overline{{A}_{2}}\&{A}_{3}\&\overline{{A}_{4}}\vee \overline{{A}_{1}}\&\overline{{A}_{2}}\&{A}_{3}\&{A}_{4}\vee \overline{{A}_{1}}\&\overline{{A}_{3}}\&{A}_{2}\&{A}_{4}\vee \overline{{A}_{1}}\&\overline{{A}_{2}}\&\overline{{A}_{3}}\&\overline{{A}_{4}}$
and then we write algorithms through decision rules, transform them and obtain the following expression:
${A}_{\mathrm{5}}=\left({x}_{\mathrm{1}}^{\mathrm{0}}\&{x}_{\mathrm{2}}^{\mathrm{0}}\&{x}_{\mathrm{3}}^{\mathrm{1}}\to a\right)\&\left({x}_{\mathrm{1}}^{\mathrm{0}}\&{x}_{\mathrm{2}}^{\mathrm{2}}\&{x}_{\mathrm{3}}^{\mathrm{1}}\to b\right)\&\left({x}_{\mathrm{1}}^{\mathrm{1}}\&{x}_{\mathrm{2}}^{\mathrm{2}}\&{x}_{\mathrm{3}}^{\mathrm{0}}\to d\right)=$
$={x}_{\mathrm{1}}^{\mathrm{2}}\vee {x}_{\mathrm{3}}^{\mathrm{2}}\vee {x}_{\mathrm{2}}^{\mathrm{1}}\vee {x}_{\mathrm{1}}^{\mathrm{1}}{x}_{\mathrm{2}}^{\mathrm{0}}\vee {x}_{\mathrm{1}}^{\mathrm{1}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee {x}_{\mathrm{1}}^{\mathrm{0}}{x}_{\mathrm{3}}^{\mathrm{0}}\vee {x}_{\mathrm{2}}^{\mathrm{0}}{x}_{\mathrm{3}}^{\mathrm{0}}\vee $
${\vee x}_{\mathrm{3}}^{\mathrm{0}}d\vee b{x}_{\mathrm{1}}^{\mathrm{0}}{x}_{\mathrm{2}}^{\mathrm{2}}\vee b{x}_{\mathrm{2}}^{\mathrm{2}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee bd{x}_{\mathrm{2}}^{\mathrm{2}}\vee a{x}_{\mathrm{1}}^{\mathrm{0}}{x}_{\mathrm{2}}^{\mathrm{0}}\vee a{x}_{\mathrm{2}}^{\mathrm{0}}\vee a{x}_{\mathrm{2}}^{\mathrm{0}}{x}_{\mathrm{3}}^{\mathrm{1}}\vee {x}_{\mathrm{1}}^{\mathrm{1}}d$
Algorithm ${A}_{5}$ identifies the individual features of each object.
Conclusion
The results of the logical analysis of the predetermined domain and of the decision rules that describe the objects make clear that the complexity of the obtained algorithm depends on the quality of the algorithms already given and on the regularities hidden in the subject domain. The proposed logic synthesis method allows one to build a correct algorithm on the entire data area, to model the knowledge base, to minimize it, and to select a unique set of features for each object. The result obtained could also form the basis for expert assessments and recommendations aimed at building an effective development strategy for mountain regions.
Acknowledgments
The study was supported by the RFFI (project No. 2001000441 A).
References
Fagin, R., Halpern, J. Y., & Megiddo, N. (1998). A logic for reasoning about probabilities. Proc. 3rd IEEE Symp. on Logic in Comp. Sci., 271–291.
Obeid, N. (1996). Three-valued logic and monotonic reasoning. Computers and Artificial Intelligence, 15(6), 509–530.
Renegar, J. (1986). A polynomial-time algorithm, based on Newton's method, for linear programming. Math. Sci. Res. Institute, Berkeley. Preprint, 0711886, 103.
Shibzukhov, Z. M. (2014). Correct Aggregation Operations with Algorithms. Pattern Recognition and Image Analysis, 24(3), 377–382.
Timofeev, A. V., & Ljutikova, L. A. (2005). Development and application of multi-valued logics and network flows in intelligent systems. SPIIRAS Proceedings, 2, 114–126.
Tumenova, S. A., Kandrokova, M. M., Makhosheva, S. A., Batov, G. H., & Galachieva, S. V. (2018). Organizational Knowledge and its Role in Ensuring Competitiveness of Modern Socio-Economic Systems. Revista Espacios, 39(26), 12.
Vorontsov, K. V. (2000). Optimization methods of linear and monotone correction in an algebraic approach to the problem of recognition. Journal of Computational Mathematics and Mathematical Physics, 40.
Zhuravljov, Ju. I., & Rudakov, K. V. (1978). On the algebraic correction of information processing (transformation) procedures. Problems of Applied Mathematics and Informatics, 187–198.
Copyright information
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
About this article
Publication Date
31 March 2022
Article Doi
eBook ISBN
9781802961249
Publisher
European Publisher
Volume
125
Print ISBN (optional)

Edition Number
1st Edition
Pages
11329
Subjects
Freedom, philosophy, civilization, media, communication, information age, globalization
Cite this article as:
Mottaeva, A. B., Merzho, M. S., Toguzaev, T. H., Abubakarov, M. V., & Urusova, A. B. (2022). Development of an Effective Strategy for Regional Socio-Economic Development. In I. Savchenko (Ed.), Freedom and Responsibility in Pivotal Times, vol 125. European Proceedings of Social and Behavioural Sciences (pp. 1007–1016). European Publisher. https://doi.org/10.15405/epsbs.2022.03.120