Abstract
This article studies the evolution of the concept of rationality in choice and decision theory within neoclassical economics during the twentieth century. Its main idea is that the reduction of the economic agent to a consistency requirement was carried out by Samuelson's revealed preferences theory and Savage's expected utility theory. The formal characterization, generalization, and refinement of the rationality concept were carried out by such economists as Samuelson, Debreu, and Savage, whose works put an end to the recognition of Nash equilibrium as a characteristic of rational behaviour. The object of research in this article is the historical approach to the understanding of rationality in the twentieth century, with the help of which a conscious paradigm shift took place in the traditional, maximization-based understanding. On the new view, the rational choice is one of the many available alternatives, singled out by a higher-order preference relation formed from a deep study of the properties of rationality. Because tools for analyzing systems of relations emerged simultaneously in Bayesian decision theory, the mathematical method in economics became associated with topology, set theory and axiomatization. Formally, a revolution took place in economics, but the main subject of study, the price, which now began to be set only on an individual basis, dropped out of neoclassical analysis.
Keywords: Agent, behaviour, consistency, preferences, rationality, utility
Introduction
Modern realities have shown that the economic and social sciences actively use general game theory and Nash equilibrium, in which even minimal social interaction can serve to model rationality. Despite its neglect during much of the second half of the last century, game theory became popular only in the 1980s because, in our opinion, representatives of the neoclassical school did not accept the new interpretation of the concept of rational behaviour embodied in the Nash equilibrium.
While the rational agent is traditionally a subject who constantly maximizes the satisfaction of needs and pursues one's own goals, the term «agent» came to require a consistency-based definition, and human properties were partitioned under the concept of an agent in order to conceptualize individuals and the mechanisms by which they form groups. Economists such as Samuelson, Debreu and Savage carried out the formal characterization, generalization, and refinement of the rationality concept. Their work put an end to the recognition of Nash equilibrium as a characteristic of rational behaviour (Aumann, 1987, p. 18).
By the end of the twentieth century, neoclassical theory had significantly changed its goals and methods of modelling, and the analysis of the rationality concept became associated with this evolution.
To analyse the research object, it is necessary to address two questions: the first concerns the subject of a science, and the second its object. The scope of knowledge and its reflection flow from this dichotomy, where the first covers theories, facts, methods and open problems, and the second consists of the principles by which knowledge is managed and selected. Together they address a number of problems, such as the relevance of open problems, the expediency and argumentation of experiment, individual, institutional and internal contradictions in the research of the object, the research methodology, and effective technologies for solving problems, which, in the final reflection of knowledge, form cognitive and normative scientific views (Corry, 1996, pp. 34).
Neoclassical economics applies this dichotomy only as an attempt. Tracing the evolution of the understanding of the rationality concept therefore helps to capture the profound changes in the methodology of reflection and in the content of the theory, which occurred only in the second half of the twentieth century.
Profound changes in economics traditionally concerned the systems of forces and of relations. In a market economy, the main subject of the forces system was the analysis of economic processes driven by market and non-market levers, as well as the processes by which an equilibrium system is created. The main subject of the relations system was ensuring that the conditions of development conformed to the existence properties of economic equilibrium.
Historically, the proponents of marginalism were the first to recognize the forces system that reflected the traditional understanding of the market economy. At the beginning of the twentieth century, however, such epistemological trends as logical positivism and mathematical formalism began to influence economic research. This influence became a catalyst for generalization and intellectual integrity; it lasted until joint scientific views emerged, which until the late 1970s sought to change how economics reflected the forces system. With these changes, generalization and integrity began to prevail over realism: economics moved closer to mathematics than to natural science.
The dichotomy applies directly to rationality. In economic life, of course, different but not mutually exclusive approaches to rational behaviour coexist, one of which equates rationality with consistency. The other approach is maximization, where rationality motivates the pursuit of one's own interests (Mas-Colell et al., 1995, p. 5). The first approach dominates the second, since rationality is the condition of the «consistent choice from different sets of alternatives» (Arrow, 1996, p. xiii).
The definition of rationality can be found in the advanced microeconomics course by Mas-Colell, Whinston and Green, who hold that «a preference relation $\succsim$ is rational if it has the properties of completeness and transitivity» (Mas-Colell et al., 1995, p. 6).
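The two properties in this definition can be checked mechanically for any finite set of alternatives. The following Python sketch is only illustrative: the alternatives, utility numbers and relations below are invented for the example, not taken from the article.

```python
# Illustrative check of the rationality conditions (completeness and
# transitivity) for a weak preference relation on a finite set.

def is_complete(alternatives, weakly_prefers):
    """Completeness: for every pair x, y, either x >= y or y >= x."""
    return all(weakly_prefers(x, y) or weakly_prefers(y, x)
               for x in alternatives for y in alternatives)

def is_transitive(alternatives, weakly_prefers):
    """Transitivity: x >= y and y >= z must imply x >= z."""
    return all(weakly_prefers(x, z)
               for x in alternatives for y in alternatives for z in alternatives
               if weakly_prefers(x, y) and weakly_prefers(y, z))

def is_rational(alternatives, weakly_prefers):
    return is_complete(alternatives, weakly_prefers) and \
           is_transitive(alternatives, weakly_prefers)

# A utility-based relation is always rational ...
utility = {"a": 3, "b": 2, "c": 1}
assert is_rational(utility, lambda x, y: utility[x] >= utility[y])

# ... while a cyclic relation (a > b, b > c, c > a) is complete but
# fails transitivity, hence is not rational.
cycle = {("a", "b"), ("b", "c"), ("c", "a")}
assert not is_rational({"a", "b", "c"},
                       lambda x, y: x == y or (x, y) in cycle)
```

The second assertion makes concrete the point made below about irrationality becoming a logical contradiction: a cycle cannot be represented by any utility function.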
Many scientists still believe that the basic principle of the market economy is self-interest, according to which «rational» agents are simply subjects with settled preferences who make deliberate choices. However, even if equality between the two approaches could be achieved, consistency and transitivity still prevail over maximization, despite their distant relationship to the general definition of rationality. In addition, non-economic constraints are also applied to the agent's behaviour in order to represent it in the standard choice theory.
The approaches that define rationality compete with each other in a stable way: in economics, maximization conditions are a traditional phenomenon, and consistency emerged later. With its growing popularity, however, this characterizes not just the competition of tradition with innovation, but also neoclassical theory's abandonment of its main mission, the interpretation of individual behaviour. Under the consistency view, any decision-making agent formally represents an elementary consistency constraint: it can be a person, a group of individuals, an institutional unit, or even a computing system. In other words, the modern economic science of agent rationality first predicts and then models the behaviour of individuals using a formal execution algorithm. Therefore, there is no need to use psychology as the basis of rational choice theory within neoclassical economics (Davis, 2002, p. 145).
For a better understanding of the reflections of the forces and relations systems, rationality must be studied in the interaction of the two economic approaches. The behaviour of agents, their motives and needs as a system previously played a significant role: the subject of neoclassicism was traditionally the self-interested behaviour of a «real» economic person acting in accordance with one's tastes, motives and needs. In the 1930s, however, many neoclassicists concluded that the school should study only the behaviour of a logical agent, so that its main task became the study of the consequences of the consistency restriction. Where individual behaviour and the agent's choice had earlier answered the questions «how and why», in the new reflection of the relations system these were replaced by problems of comparing properties and qualities using different approaches and axioms. The transformation of reflection in neoclassical science is not a universal and fast solution of problems, so the two approaches still interact.
This article also explains the reason for the rejection of the psychological approach, which lies in the conditions of use of Samuelson's revealed preferences, Debreu's axiomatization, and the decision-making theories of von Neumann and Morgenstern and of Savage.
Problem Statement
The reasons for the rejection of the psychological approach in the neoclassical direction of choice theory, the search for more flexible values, and the analysis of the utility problem have been explored by Aumann, Arrow, Davis, Ellsberg, Green, Mas-Colell, Morgenstern, Pareto, Samuelson, Sen, von Neumann and many other scientists.
Research Questions
The reason for the rejection of the psychological approach was an attempt to create neoclassicism's own direction in choice theory, in which agents' decisions would not depend on internal variables (needs) or psychological processes (introspection). Fischer and Pareto initiated the transformations in neoclassicism; Savage and Debreu completed them. All of these were associated with the epistemological changes at the beginning of the twentieth century, when mathematical formalism and logical positivism appeared. On the one hand, the empirical content of economics, with its operationality, took root; on the other, the pure logic of choice, on an axiomatic basis, sought to replace the value theory. As a result, the new characterization of rationality allowed the logic and attractiveness of choice as a phenomenon to be revealed, though incompletely.
What were the reasons for rejecting such variables as utility, will, pleasure, and the like?
The answer lies in the empirical orientation of positivist, and later logical-positivist, epistemology, named the «empiricist motive», which corresponds to the views of Pareto, Hicks and Allen, and to Samuelson's choice theory. Moreover, the methodology of economics itself has evolved through the phases of apriorism, deduction, and verification that emerged through operationality and falsification. However, the «empiricist motive» does not amount to a complete rejection.
Indeed, how did the main features of the early neoclassical approach treat rationality? The first generation of marginalists followed the instrumental rationality of David Hume, for whom passion or desire was the ultimate motive of behaviour, and reason was the «servant» of the passions, a tool for achieving goals that are not set by reason itself. This rationality economically explained the external manifestation of behaviour while leaving the internal selection of goals aside. Subjectively, the agent linked one's consciousness to the objective picture of the world and acted according to desires whose utility could be measured on a linear scale. In general, neoclassicists of the late nineteenth century tried to integrate rationality into psychology and saw it as an analytical strategy equivalent to utility maximization.
This type of theoretical reasoning depends on the specifics of the empirical approach, in which psychological pleasure is actually measured on a linear scale. The early marginalists noticed that utility was difficult to measure, and human psychology even harder to test empirically, but within neoclassicism a way out was found. Instrumental rationality can in fact be assessed without a psychological approach: the analysis aims at distinguishing effective means from ineffective ones and does not depend on the nature of the ends themselves. This article recognizes the gap in the utility theory that developed historically and required an empirical approach to the study of behaviour. The neoclassicists thus paved the way for the development of a formal concept of rationality as consistent action, without the use of psychology.
How can the conflict between the value theory and the choice theory be resolved when the argument proceeds on empirical motives? The defense of economics against the influence of psychology established a solid empirical foundation in neoclassicism. If epistemologically psychology had a dubious status, then the desire to create a theory based on observed behaviour looked very attractive. As a result, behaviourism and operationalism became, for many scientists, the vehicle for abandoning psychology.
In «The history of economic analysis», Schumpeter noted that indifference curves were more flexible values than utility, despite the impossibility of observing them directly. In addition, he argued that they were more convenient to use in equilibrium theory. In practice, however, these curves become invalid exactly when the utility function is invalid, and so they cannot rescue the equations of equilibrium theory. Previously, the value theory was considered useful for independent purchase and sale operations by entities facing a given set of prices and revenues. Brown confirmed this, but he could not definitively prove the effectiveness of this axiom, which Samuelson was able to establish using the elements of consistency (Schumpeter, 1954, pp. 1066–1067).
This transition reveals the essence of Samuelson's revealed preferences theory, neglected by neoclassicists for almost half a century until the choice theory received scientific proof. By the mid-1930s, many believed that the works of Fischer and Pareto, and then of Slutsky, Allen, and Hicks, had completed this process. However, Samuelson recognized in his 1938 article in «Economica» that the revealed preferences theory did not depend on subjective utility, and that the indifference curves and the Pareto–Hicks marginal rates of substitution did not completely reflect it. That suggested that the modern theory covered the utility concept not completely but indirectly. Moreover, Samuelson (1938) saw the marginal rate of substitution as an artificial quantity that explained price behaviour (pp. 61–62).
Thus, Samuelson (1938) proposed to study revealed preferences without utility analysis (pp. 61–62). Samuelson suggested investigating not preferences but demand behaviour, i.e. the agents' choices. He extracted information from observations of market behaviour and considered it the only element independent of the psychological analysis of demand. He showed that consistent behaviour alone yields the results of neoclassical demand theory, via what he called the «weak axiom of revealed preferences». In the «Foundations of economic analysis», Samuelson derived the following implication:
$\sum_{i=1}^{n} p_i \left(x_i' - x_i\right) \le 0 \Rightarrow \sum_{i=1}^{n} p_i' \left(x_i' - x_i\right) < 0$ (Samuelson, 1947, p. 55),
where the bundle $x'$ was affordable at prices $p$ with income $px$ but was not chosen, $x$ being chosen instead; if the agent later chooses $x'$ at prices $p'$ with income $p'x'$, consistency requires that $x$ be unaffordable under the new «price – income» combination. Otherwise the choice would reveal $x$ preferred to $x'$ under the old «price – income» scheme and $x'$ preferred to $x$ under the new one, with both bundles $x$ and $x'$ available.
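The implication above can be turned into a direct test on observed data. The following Python sketch applies Samuelson's weak axiom to pairs of (price vector, chosen bundle) observations; the numeric data are invented for illustration and are not from the article.

```python
# Illustrative check of the weak axiom of revealed preferences (WARP):
# if sum(p * (x' - x)) <= 0, then sum(p' * (x' - x)) < 0 must hold.

def dot(p, x):
    return sum(pi * xi for pi, xi in zip(p, x))

def satisfies_warp(observations):
    """observations: list of (price vector, chosen bundle) pairs.
    For any two observations (p, x) and (p2, x2) with x != x2:
    if x2 was affordable when x was chosen (p.(x2 - x) <= 0),
    then x must be unaffordable when x2 is chosen (p2.(x2 - x) < 0)."""
    for p, x in observations:
        for p2, x2 in observations:
            if x == x2:
                continue
            if dot(p, x2) - dot(p, x) <= 0 and not (dot(p2, x2) - dot(p2, x) < 0):
                return False
    return True

# Consistent data: neither chosen bundle was affordable at the other's prices,
# so the axiom is never triggered and WARP holds.
consistent = [((1, 2), (4, 1)), ((2, 1), (1, 4))]
assert satisfies_warp(consistent)

# Inconsistent data: at identical prices each bundle is exactly affordable
# when the other is chosen, so each is revealed preferred to the other.
inconsistent = [((1, 1), (2, 0)), ((1, 1), (0, 2))]
assert not satisfies_warp(inconsistent)
```

This makes the operational character of the axiom explicit: only prices, incomes and chosen quantities enter the test, never preferences or utility.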
Since price, income, and the quantity of a product shape market behaviour, the «weak axiom of revealed preferences» restricts consumption theory to empirically testable results and dispenses with the structure of such cognitive elements as ordinal utility, preferences, or indifference. Samuelson (1998) showed the limitations of a demand theory that relied on introspection and self-reflexivity (p. 1380).
The modern theory of Samuelson's revealed preferences relates choice to the property of rationality, not to the agent's preference: demand becomes rational when the more preferred choices are made given the price, revenue and quantity of the product (Richter, 1987, p. 167). Therefore, the rational choice is one of the many available alternatives, singled out by a higher-order preference relation formed through a deep study of its properties. Thus, rationalization can explain market demand.
Why did the rationality property pass to agent preferences in microeconomics, where preference relations have the properties of completeness and transitivity? Changes in the rationality property gave rise to rational preference, i.e. behaviour that theoretically neutralizes such problems as the «money pump» or the «Dutch book». At the same time, irrationality became a logical contradiction and a subject for research. Whereas the old marginalism saw in rationality a person maximizing his own interests, modern microeconomics sees rationality as a specific and technically useful property of the agent's preferences, one that still cannot completely explain economic behaviour.
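The «money pump» mentioned above can be simulated in a few lines. The sketch below is purely illustrative (the goods, fee and cycle are invented): an agent with the cyclic strict preference A > B > C > A accepts every offered «upgrade» for a small fee, so a trader can cycle the agent back to the starting good while draining money.

```python
# Minimal money-pump simulation against an agent with cyclic preferences.
# preference_cycle lists goods in decreasing preference, with the last
# element strictly preferred to the first, closing the cycle.

def run_money_pump(preference_cycle, fee, rounds):
    holding = preference_cycle[0]
    cash = 0.0
    for _ in range(rounds):
        # Offer the good the agent strictly prefers to the current holding;
        # with a cyclic preference the agent always accepts and pays the fee.
        index = preference_cycle.index(holding)
        holding = preference_cycle[(index - 1) % len(preference_cycle)]
        cash -= fee
    return holding, cash

# After three trades the agent holds the original good A again,
# but is three fees poorer.
assert run_money_pump(["A", "B", "C"], 1.0, 3) == ("A", -3.0)
```

A transitive preference would stop the pump: once the most preferred good is reached, no further trade is acceptable.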
It should be noted that consistent behaviour in Samuelson's «weak axiom of revealed preferences» played an important role in consistency-based rationality. The reason was the demand analysis that revealed the equivalence of revealed preference theory and preference theory. If the agent has a rational preference, i.e. one that is complete and transitive, then his choice satisfies the «weak axiom of revealed preferences» and can be rationalized. The «weak axiom of revealed preferences» itself contributes to the understanding of rationality, and when the axiom is strengthened the agent's choice becomes consistently rational: this is Houthakker's «strong axiom of revealed preferences», under which preferences are rational, complete and transitive. Thus, even where the «weak axiom of revealed preferences» does not correspond to standard rational preferences and the preference relations do not meet the transitivity conditions, thanks to the theory of revealed preferences the agent will have a consistent, rational choice, whose property is literally the property of the agent's consistent preferences.
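Houthakker's strengthening can also be made operational. The sketch below assumes the standard reading of the strong axiom (no cycles in the transitive closure of the direct revealed-preference relation); the data are invented for the example, and fine points such as strict versus weak revealed preference are deliberately simplified.

```python
# Illustrative check of a simplified strong axiom of revealed preferences
# (SARP): no bundle may be revealed preferred to itself through a chain.

def dot(p, x):
    return sum(pi * xi for pi, xi in zip(p, x))

def satisfies_sarp(observations):
    n = len(observations)
    # R[i][j]: bundle i is directly revealed preferred to bundle j,
    # i.e. j was affordable when i was chosen and the bundles differ.
    R = []
    for p, x in observations:
        R.append([dot(p, x2) <= dot(p, x) and x != x2
                  for _, x2 in observations])
    # Warshall's algorithm for the transitive closure of R.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # SARP fails if the closure contains a cycle back to some bundle.
    return not any(R[i][i] for i in range(n))

# The WARP-consistent data from before also pass this transitive test ...
assert satisfies_sarp([((1, 2), (4, 1)), ((2, 1), (1, 4))])
# ... while mutually revealed-preferred bundles form a two-step cycle.
assert not satisfies_sarp([((1, 1), (2, 0)), ((1, 1), (0, 2))])
```

With only two observations WARP and this test coincide; the transitive closure matters once chains of three or more observations can form a cycle that no pairwise comparison detects.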
The psychology of behaviour changed the content of rationality. Instrumentally, rationality in human activity is the result of psychological impulses directed at pursuing one's own interests. The revealed preferences theory connects rationality only with observed behaviour; it does not study the motives behind the agent's behaviour. Therefore, rationality does not explain behaviour but only describes it. Yet if logical positivism, through the revealed preferences theory and rationality, refuses to describe the causes of behaviour, the core of neoclassical economics remains human behaviour with its inherent individuality.
In the rejection of psychology, Samuelson saw the replacement of the economic man by the rational agent. The revealed preferences theory holds that decision-making not only characterizes the agent's mental state but also represents rationality as consistency, where the two characteristics of economic behaviour may belong to different data sets. In other words, economists should only study and describe behavioural motivations from the outside. The individual's preferences and decisions therefore recede behind economic analysis, since a set of observable market choices has taken their place. Davis noted that Samuelson treated the choice theory mechanically, seeing the individual as a subject of relations, a view that prevailed in the late neoclassical formal system of choice (Davis, 2002, p. 146).
Under a formal system, however, the mechanism of the revealed preferences theory looks unreal. Experimentally, Samuelson's approach is not applicable, since the consumer is not required to make a choice in order to identify preferences, which means that the entire analytical system is fictitious. In 1932, Mayer, a representative of the Austrian school and critic of Pareto's theory, argued that the subject of such an experiment was utopian and illusory (Mayer, 1932, pp. 1167). Moreover, the «weak axiom of revealed preference» combined empirical data with the laws of logic: the consistent agent acted on the basis of posited axioms that presupposed a cognitive experiment. The theory of revealed preference, grounded in consumption theory, thus rested on a strict logic unattractive to economists who sought a cognitive rather than a psychological path.
The changes in neoclassical economics related to rationality showed the principle of transition from the forces system to the relations system. One problem in choice theory here was diagnosing whether the agent's actual choice formally satisfied the weak or strong axioms of revealed preferences and guaranteed the other formal relations of transitivity and completeness of the agent's preferences. Given a positive diagnosis, a set of formal relations characterizes the agent's behaviour, i.e. demand obeys constraints such as the Slutsky equation; the early marginalists, by contrast, could not explain in this way the problem and reason of the agent's choice.
Outwardly, however, it was difficult to notice the changes, since in reality they lay beyond neoclassicism. Sugden observed that the transition from instrumental to consistency-based rationality did not affect the structure of mathematical analysis, so Samuelson and his contemporaries continued to use its tools like the early marginalists, but without the psychological approach (Sugden, 2001, p. 128).
Formally, the analysis of a problem begins with simple concepts, which in the process of theorizing become independent, consistent and complete, are cast into axioms, and then into theorems. The analysis is thus carried out by preliminary or heuristic, but simple, concepts, which may be neutral with respect to the likely applications of the analysis. Debreu saw this type of axiomatics in the separation and complete independence of the mathematical form from the economic content. Moreover, if an economic model does not use simple concepts, then the bare mathematical structure is used (Debreu, 1986, p. 1265).
Consequently, the axiomatic theory is released from empiricism, and its content acquires meaning only after the fact: during this filling, one exits the theory and passes either to another area of scientific research or to an empirical justification.
The main advantage of this approach is the possibility of obtaining a new theory through a new economic interpretation of the simple concepts and theorems. For example, according to Debreu, the development of mathematical economics was prepared by its quantitative character. He believed that the study of social phenomena should be carried out using the deductive method of the mathematical approach, where economics as a science had an advantage, since only it could give a quantitative assessment of goods and prices in appropriate units of measurement. Consequently, in an economy with a certain number of goods, the quantities of goods produced or consumed form the economic agent's behaviour. If this relationship distinguishes production from consumption, then the agent's behaviour is represented by a point in the commodity space, a finite-dimensional real vector space (Debreu, 1986, p. 1261).
In other words, economics studies agents' market behaviour using the mathematical method, since the decision-maker's situation is fully captured by the conceptual apparatus of mathematics. The relations system, moreover, can bring the agent's behaviour to the «limit point of the measured space», which is an advantage of economics over the other social sciences.
In the «Theory of value», Debreu (1959), who proposed an ordinal approach to explain the utility of Pareto and Hicks, created a basic framework for the future analysis of the theories of choice and demand. In characterizing an economic agent, he attached great importance to the restrictions in the criteria for selecting a complete consumption plan. Under these restrictions there must be a selected consumption plan to which no other is preferred. He held that the consumer could be «an ordinary individual, a household, or even a large group with a common goal», which shows that the theory applies not only to homo economicus but to a wider range of agents (Debreu, 1959, p. 50). However, it is not necessary to use only selection criteria and restrictions as the conditions characterizing the consumer, since they narrow the analysis of individual decision-makers.
The author of the «Theory of value» then assigns an index $i = 1, \dots, m$ to each agent to fix the identity characterized by a binary relation that completely reflects the agent's needs and tastes (Debreu, 1959, p. 51). Debreu thus equips the agent with a binary relation satisfying the properties of completeness and transitivity, i.e. a complete preordering.
According to the selection criteria, the agent selects the best set from those available on the basis of this preference preordering. By rationality, therefore, many economists understand the behaviour of an agent that meets the selection criteria and whose preference relation is a complete preordering. Under the axioms of completeness and transitivity, a rational agent is one whose preferences are complete and transitive. These axioms, however, were not originally meant to cover human behaviour completely; they were simply the properties needed to derive certain results by mathematical operations.
In neoclassical economics, the rejection of psychology was pursued not only by Samuelson and Debreu
Within the framework of decision theory, von Neumann, Morgenstern and Savage, with their theories of expected and subjective expected utility, integrated the axiomatic method into the behaviourist/operational framework. Applying these theories, they axiomatized choice in the spirit of Debreu's ordinal utility theory and Samuelson's revealed preference theory, embracing consistency-based rationality. Thus, the modern theory of decision-making abandoned a theory of rational behaviour whose alternative was the agent's own choice, replacing it with a predefined set of requirements imposed on that choice.
After the publication of the «Theory of games and economic behaviour» in 1944, von Neumann and Morgenstern's expected utility theory was accused of returning to introspection and to the cardinalist understanding of utility as the basis for the analysis of economic behaviour. The reason, in our opinion, was the agent's ability to evaluate the differences between pairs of values and to extract from them the properties of a cardinal utility index. The expected utility theory therefore looked like a repetition of the old marginalism, with the cardinality and expected utility of Jevons's «Theory of political economy» and Marshall's «Principles».
However, the charges were unfounded, since von Neumann and Morgenstern sought to keep the utility function linear by using the cardinal property, whose difficulty lay in preserving a utility function for preference analysis based on the ordering of utility differences. The function is determined only up to a linear transformation, which gives it uniqueness and operability. The motives underlying the utility function thus have a different character: the cardinality of expected utility, for example, is not the result of the psychology of behaviour or of an individual's capacity to decide, but the product of a useful set of axioms.
The unfounded accusation missed the operational essence of the expected utility theory, i.e. the specifics of selection by the decision-maker applicable to game theory, which revealed criteria for extracting utility from observed behaviour. Von Neumann and Morgenstern held that, under a further constraint beyond the relation system, one can quantify utility, whose function is unique up to a linear transformation. The operationality of the expected utility theory arises quantitatively under this additional constraint; it is naturally extracted from reproducible observations (von Neumann & Morgenstern, 1944, p. 23). Utility as the only natural element of relations belongs to the concept of «more than», which is the basic representation of preference. However, this is not enough to pin down the quantitative transformation of the utility function, i.e. a further restriction is necessary.
Many marginalists who believed in the measurability of the intensity of pleasure preferred, as a means of restriction, the differences in the utility of pairs of products and the analysis of such differences. This restriction relies on the introspective measurability of utility, since it requires the decision-maker to compare preferences between products. To obtain the cardinal property of a utility function, the restriction concerns the ordering of comparisons of utility differences, whereby a utility function is reached that preserves the ordering of such differences under linear transformation.
The problem with this approach is that differences in utility become natural relations only through reproducible observations.
The agent must therefore abandon the introspection of alternatives, which is a non-reproducible feature of difference analysis. The axiomatic approach in game theory aims to recast the cardinal property through the analysis of differences; this property measures the intensity and differences of utility, but not psychology or its limits of development.
Von Neumann and Morgenstern state that utility can draw on probability theory. They believe that when it is not possible to compare the utilities of alternative events directly, the agent can compare combinations of utilities with their probabilities. This is a natural phenomenon in which utility brings the transformation system to the analysis of differences: a combination of two utilities with the two given alternative probabilities $\alpha$ and $1-\alpha$, where $0 < \alpha < 1$.
There are thus two utilities $u$ and $v$ in a natural standard preference relation, where $u > v$, and a natural process whose mathematical expectation has the form $\alpha u + (1-\alpha)v$. If there is a correspondence $u \to \rho = V(u)$ that preserves both the relation and the process, i.e. $u > v$ implies $V(u) > V(v)$, and the image of a combination equals the mathematical expectation $\alpha V(u) + (1-\alpha)V(v)$, then the correspondence, as a utility function, is determined up to a linear transformation, which suffices to establish the numerical nature of utility. For such a correspondence to exist, certain properties and axioms must govern the relation and the process.
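The linearity just discussed, a lottery evaluated as alpha*u + (1 - alpha)*v, with rankings invariant under positive linear transformations of utility, can be sketched numerically. The outcomes and utility numbers below are invented for illustration, not taken from the article.

```python
# Illustrative expected-utility computation in the von Neumann-Morgenstern
# spirit: lotteries are evaluated by mathematical expectation, and only the
# ranking (not the numbers themselves) is meaningful.

def expected_utility(lottery, utility):
    """lottery: list of (outcome, probability) pairs summing to 1."""
    return sum(prob * utility[outcome] for outcome, prob in lottery)

u = {"A": 10.0, "B": 6.0, "C": 0.0}
lottery1 = [("A", 0.5), ("C", 0.5)]   # 0.5*u(A) + 0.5*u(C)
lottery2 = [("B", 1.0)]               # the sure outcome B

assert expected_utility(lottery1, u) == 5.0
assert expected_utility(lottery2, u) > expected_utility(lottery1, u)

# A positive linear (affine) transformation w = a*u + b with a > 0
# preserves the ranking of all lotteries, illustrating the uniqueness of
# the utility function up to such transformations.
a, b = 3.0, 7.0
w = {outcome: a * value + b for outcome, value in u.items()}
assert expected_utility(lottery2, w) > expected_utility(lottery1, w)
```

The final assertion is the operational content of «unique up to a linear transformation»: the scale and origin of utility carry no behavioural information, only the ordering of expectations does.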
Von Neumann and Morgenstern emphasized that their theory rests on events combined with probabilities, and that the extraction of utility is carried out by the associated process. They thus determined the quantitative utility of mathematical expectations, whose feature was the natural reproducibility of the experiment; the authors viewed probabilities objectively, as long-run repetitions. They noted that if the subjective concept of probability were preferred, preferences and probabilities would lend themselves to simultaneous axiomatization, successfully implemented by Savage in 1954.
In 1954, Ellsberg identified in the operational features of von Neumann and Morgenstern's game theory a specific type of «experiment» as the natural operation of mathematical expectation. Following P. Bridgman's operational approach, different concepts are measured by two different sets of operations (Ellsberg, 1954, p. 270). It was from Ellsberg's analysis that a turning point came in the interpretation of the expected utility theory, owing to the clear distinction between the approach of von Neumann and Morgenstern and the marginalists' understanding of cardinality; their properties were operationally different from each other.
Let us take the alternatives $A$, $B$ and $C$ in order of weak preference, where $A \ge B \ge C$. Suppose the agent is asked to choose between the certain outcome $B$ and a lottery $L$ that yields $A$ and $C$ with probabilities $(p, 1-p)$. Is it then possible to formulate general forecasting criteria and norms of agent behaviour, and what information is needed for this?
The marginalists and von Neumann and Morgenstern answered the first half of the question affirmatively, while they diverged on the second half. Von Neumann and Morgenstern believed that the basic criterion should be the agent's observed behaviour in other choice situations, specifically under conditions of uncertainty. Jevons and Marshall, however, believed that under conditions of uncertainty it was not necessary to observe the agent's behaviour, since the agent's criteria of certainty revealed the strength of preference only at the last moment of time, and only then was this information used to explain, predict and normalize behaviour under conditions of uncertainty. The difference between the two approaches was that the marginalists analyzed and quantified the satisfaction of the agent's subjective preferences in order to avoid risk, whereas von Neumann and Morgenstern associated risk with ordinal preferences over random events.
Thus, the marginalists did not associate expected utility theory under conditions of uncertainty with selective behaviour, since the scheme of its specific normalization under expected utility conditions did not preserve the properties of the previous alternatives. Such normalization had no operational (behavioural) value, despite the reflexive basis of the abstract construction. On the contrary, the system required the decision-maker to assess utility completely and consistently in terms of certainty, and his psychological attitude to it in terms of uncertainty.
The expected utility theory of von Neumann and Morgenstern had an operational feature: in a normal experimental situation the decision-maker needed no introspection about his behaviour, because the test required him only to choose either the lottery $L$ or the steadily worsening certain outcome $B$. This choice revealed the person's propensity to take risks. The axioms applied this preference relation to consistent behaviour in complex as well as ordinary choice situations. Therefore, under conditions of uncertainty, simple and complex decisions formed the system tested by von Neumann and Morgenstern, and expected utility proved a suitable instrument for understanding consistent behaviour.
While Ellsberg did not take into account the criteria of operationally stable consistency, von Neumann and Morgenstern's expected utility theory shared the strengths and weaknesses of Samuelson's revealed preference theory. For example, from the point of view of behavioural psychology, rationality lost its meaning: in revealed preference theory and expected utility theory the concept of rationality covered only observed behaviour, without motives. Hence, for von Neumann and Morgenstern's theory of decision-making there were prerequisites for the further development of the system of relations and the system of forces in economics. Moreover, the operational approach was fictitious, since in reality no experiment was conducted that could reveal the agent's attitude to risky situations.
In «The Foundations of Statistics», Savage (1954) showed that the dominance of von Neumann and Morgenstern's expected utility theory over the strong expected utility theory was premature and erroneous. The new theory had two features that gave rise to introspection, characterizing the power of the set of expected utility functions and the nature of subjective probabilities. To be sure, the understanding of the strong expected utility theory and of Bayesian decision theory characterized rationality only mechanically, with poorly verifiable variables and unclear psychology.
Savage sought to extend the logic of the general characterization of rational behaviour to uncertainty, but without applying a psychological approach (Savage, 1954, p. 6). Unlike von Neumann and Morgenstern, he used the axiomatic approach more intensively, since from his own set of axioms he was able to quantify both the agent's preferences and the expected utility choice, as well as subjective probability and Bayes' rule.
The theory of expected utility maximization contains consistency: agents are consistent in their decisions if they maximize expected utility under some subjective probability distribution. Savage's theory forms the picture of a consistently rational agent who checks the degree to which his preferences correspond to the axioms and otherwise revises them according to Bayes' rule. Thus, Savage could establish the interdependence of rationality and the consistency criteria through an expanded understanding of the principles of agent behaviour.
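The revision step, updating subjective beliefs by Bayes' rule, can be sketched for two exhaustive events; the prior and the likelihoods below are hypothetical numbers, not taken from Savage:

```python
# Bayes' rule: posterior is proportional to prior times likelihood,
# here for two exhaustive, mutually exclusive events S1 and S2.
prior_S1, prior_S2 = 0.5, 0.5      # hypothetical subjective prior beliefs
lik_S1, lik_S2 = 0.9, 0.3          # likelihood of observed evidence under each event

unnorm = (prior_S1 * lik_S1, prior_S2 * lik_S2)
total = sum(unnorm)
post_S1, post_S2 = unnorm[0] / total, unnorm[1] / total

# A consistent agent then maximizes expected utility under the *updated*
# distribution, not the prior one.
assert abs(post_S1 - 0.75) < 1e-9
assert abs(post_S1 + post_S2 - 1.0) < 1e-9
```

Consistency in Savage's sense is exactly this closure: whatever evidence arrives, the new beliefs remain a single probability distribution against which expected utility is maximized.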
To obtain a general characterization of rationality, Savage had to find an operational method that could extract the agents' cognitive abilities and embed them in testable choice criteria, corresponding to the operational and behavioural traditions without resorting to the methods of psychology and introspection. To solve this problem, he developed his own theory of subjective probability.
First, unlike von Neumann and Morgenstern, Savage saw in the expected utility rule the elementary essence of the probability value, which reflected the weakness of this position in expected utility theory. A simple solution was not enough, since repeated use of the concept of probability without repeatability of the experiment became a basis for scientific speculation that excluded the characterization of stable behaviour. Savage, however, was able to solve this problem in 1954.
Second, he based his analysis on the concept of choice, which revealed the preference for an alternative and the agent's expectations about the probabilities of two occurring events. He considered introspection an inconvenient tool, since the two events should settle questions of preference and indifference, not questions of introspection.
Savage's method showed that choice could indeed reflect the use of subjective probabilities. It consisted of a survey that asked not for opinions about the probabilities of two occurring events, but about the agent's behaviour and likely expectations in the future under specific conditions of choice. Although Savage did not use introspection as a source of information, in decision theory he preferred an imaginary experiment to an empirical approach, and with its help he was able to obtain subjective probabilities.
This general methodology applied to both the strong expected utility theory and Samuelson's revealed preferences theory. Therefore, «revealed» subjective probabilities were obtained both from sets of choice data and from a cognitive experiment.
Let us assume that the agent's choice of a solution is tied to one or more events, called a gamble. He makes his choice on the basis of the probability of occurrence of the various events. To characterize a rational choice, a set of real numbers fixes the agent's preferences, which embody a subjective assessment of the occurrence of the various events.
According to the expected utility rule, certain consistency restrictions on the decision-maker's preferences guarantee that the choice maximizes a utility function equal to the weighted sum of utilities, where each outcome is multiplied by the weight of the agent's belief in the corresponding event. In gambles, the agent's actual choices among alternatives reveal the weights in the expected utility formula. They therefore satisfy a set of axioms under which the agent's preferences and beliefs admit quantitative measurement.
Applying the consistency criteria reveals the essence of two opposite events $S_1$ and $S_2$: in the first of two simple gambles the decision-maker is paid one dollar if the event $S_1$ occurs and nothing if the event $S_2$ occurs, whereas in the second gamble he is paid one dollar if $S_2$ occurs and nothing if $S_1$ occurs. If the agent prefers the first gamble to the second, then the event $S_1$ is subjectively more probable for him than the event $S_2$. Thus, the decision-maker chooses between the two gambles on the basis of his beliefs about the probabilities of the two events. However, according to Savage, if behaviourism as the essence of opposites achieves only a qualitative or ordered subjective probability, then it is very difficult to quantify the decision-maker's beliefs.
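The elicitation just described can be sketched as a comparison of the two mirror-image gambles; under the expected utility rule, the observed choice reveals which event the agent considers more probable (the numeric belief below is hypothetical):

```python
# Two complementary events S1, S2 with a hypothetical subjective probability.
p_S1 = 0.7
p_S2 = 1.0 - p_S1

# Gamble 1 pays $1 if S1 occurs and nothing if S2 occurs;
# gamble 2 is the mirror image.
eu_gamble1 = p_S1 * 1.0 + p_S2 * 0.0
eu_gamble2 = p_S2 * 1.0 + p_S1 * 0.0

# Choosing gamble 1 over gamble 2 reveals the belief P(S1) > P(S2),
# without ever asking the agent to introspect on a probability number.
chooses_gamble1 = eu_gamble1 > eu_gamble2
assert chooses_gamble1 == (p_S1 > p_S2)
```

As the passage notes, a single such comparison only orders the two events qualitatively; pinning down a numerical belief requires many more comparisons against events of known structure.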
Expected utility theory faces the problem of maximizing its function: the decision-maker's preference over alternatives must be defined on a probability space. While von Neumann and Morgenstern solved this problem using objective probabilities, Savage associated the analysis of arbitrary probabilities or events with accurate data extracted from expected utility. Thus, the principle of inevitability and the imaginary event become the norm in the strong expected utility theory; they turn the preference domain into a probability space. This approach was first used by Savage and then, in the early 1960s, by Anscombe and Aumann.
Moreover, the strong expected utility theory covers the preferences of agents whose subjective beliefs about the probabilities of various events are detected by monitoring selective behaviour. A set of axioms on the agent's preferences achieves this result, so the theory effectively combines the axiomatic and behavioural methods. This, it turns out, is the final achievement of the rejection of psychology.
When motivating certain axioms (the axiom of uniformity and the principle of inevitability), the construction itself requires consistent behaviour, as stated by Samuelson's weak revealed preferences theory or by the random samples of von Neumann and Morgenstern. It follows that the rejection of psychology as a requirement of consistency is a hallmark of any characterization of rationality.
The problem with this requirement was its logical, not empirical, justification. Indeed, in revealed preference theory, expected utility theory, and strong expected utility theory, consistency required a non-empirical, logical justification. A consistent agent acts logically, in a cognitive way, according to the relevant axioms. Thus, the approaches of Samuelson, von Neumann, Morgenstern, and Savage to the experiment were incorrect: in reality, the agent does not have to make a choice only to demonstrate a preference or the probability of its occurrence. The whole system was therefore only a cognitive construct based on the indifference curves of Pareto's (1927) distorted experiments (p. 118).
Purpose of the Study
The main purpose of the article is to find a connection between the relations system and rationality, where consistent rationality completely reflects the market relations system.
Research Methods
The basic research methods are modelling, the method of scientific abstractions, induction and deduction, historical and logical methods (or approaches), and analysis and synthesis.
Findings
If the empirical motive is not convincing, then the real motive changes the idea of economics. It uses the theories of value and choice, which form a new image with a strictly formal representation of choice. R. Sugden noted that economists tried not to touch empirical problems in decision theory, so they did not combine it with an explanatory principle but adhered to a priori research. In this respect, the new consistent rationality is more transparent than traditional maximization.
If there are no changes, it means that economic problems persist over time. However, some economists realized that outdated mathematical tools had lost their relevance in the face of radical changes. The new rationality required the development of new research tools and methods, which changed the understanding of economics as a system of relations, and at the end of the XIX century mathematics became the basis of modelling economic processes.
Debreu's economic agent is better called a «useful computer», because, first, consistent behaviour must dominate in the equilibrium system, and second, a computer functions according to rules and instructions, with a specified internal organization and predetermined consistent calculation tasks (Patinkin, 1965, p. 7). The agent of Debreu's «Theory of Value», in our opinion, does not correspond to the concept of economic development, since he deals only with the mathematization of the distribution process, even though his actions reach a real point in space in the corresponding direction. Consequently, the behaviour of the neoclassic economic agent, when his tastes are saturated, is in one way or another conditioned by logical constraints.
Experimental research sought to prove the existence of logical laws and the flexibility of analysis, highly valued in the system of relations.
Unfortunately, it is difficult to test modern decision theory experimentally. Allais and Ellsberg explicitly studied the validity of the expected utility theory and the strong expected utility theory; they showed that these two theories satisfied the requirements of positivism, whose limitations could be tested only experimentally. However, their experiments in 1953 and 1961 did not produce positive results. For example, they could not confirm consistency in simple or complex choice. Therefore, if the theories were operational, the results would have required revision. In reality, however, the expected utility theory and the strong expected utility theory characterized only rationality, while the results of Allais and Ellsberg remained paradoxes.
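The kind of consistency check Allais proposed can be sketched numerically. The four lotteries below follow the payoffs of Allais' 1953 example (in millions); under expected utility, the two pairwise comparisons must always be ranked the same way, for any utility function, so the commonly observed pattern of choosing 1A together with 2B cannot be rationalized:

```python
# Allais' (1953) common-consequence lotteries, payoffs in millions.
def eu(lottery, u):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

L1A = [(1.00, 1)]                           # 1 million for certain
L1B = [(0.10, 5), (0.89, 1), (0.01, 0)]
L2A = [(0.11, 1), (0.89, 0)]
L2B = [(0.10, 5), (0.90, 0)]

# Under expected utility, EU(1A) - EU(1B) and EU(2A) - EU(2B) both reduce
# to 0.11*u(1) - 0.10*u(5) - 0.01*u(0), so the two comparisons must agree,
# whatever increasing utility function is used. Check a few candidates:
for u in (lambda x: x, lambda x: x ** 0.5, lambda x: x ** 2):
    assert (eu(L1A, u) > eu(L1B, u)) == (eu(L2A, u) > eu(L2B, u))

# The typical experimental pattern (choosing 1A and 2B) therefore cannot be
# reconciled with any such utility function: it remains a paradox.
```

This is why the negative experimental results could not be absorbed as mere parameter revisions: they contradict the axiomatic structure itself, not a particular utility function.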
Ignoring negative experimental results transforms economics not into an empirical science but, on the contrary, into a logical system using a mathematical apparatus. Despite behaviourism and operationalism, the neoclassicists generally preferred the formal logic of economic behaviour to empirical problems.
Conclusion
The rational agent is a model of a decision-maker who forms rational expectations in order to choose a general equilibrium in the long term and to implement a Nash equilibrium strategy in the interaction process. A neoclassic agent is a subject that responds regularly and consistently, and therefore rationally, to stimuli with certain beliefs, expectations, ideas and choices. Thus, rationality consistently and finally manifests itself in the agent's external behaviour and levels of knowledge.
Consistent rationality completely reflects the market relations system
In fact, the transformation of the rejection of psychology proves that the acute problems of the neoclassic character are connected with its logical system. Beyond this analysis and the existing realities of demand theory, the prerequisite of its development remains the desire for a qualitative analysis of choice.
While neoclassic economists were able to justify the rejection of psychology, the proponents of economic individualism failed to extract even some cognitive calculations from the purpose of analysing economic behaviour. Such a goal would correspond to positivism with its research methods: in choice theory, it is important to study behavioural conditions while excluding such cognitive elements as preferences, expectations and beliefs, which can be evaluated only by introspection. However, the subjectivity of a theory does not completely undermine the objectivity of knowledge, since the structure and results of its functioning depersonalize the process of observation and reproduction, i.e. the theory meets the general norms of conscious activity.
Neoclassic economics, like any theory of social sciences, begins with an individual approach and necessarily contains the conditions of rationality
The individual approach is based on the complete dependence of social phenomena on the psychology of human behaviour, because an individual type of behaviour is caused by deliberate actions involving a cognitive process. Hence, the cognitive explanation of intentional economic behaviour within the structure of neoclassic economics is a logical phenomenon. While economists in the inter-war and post-war periods pursued the positivist goal of not using the cognitive approach in their analysis, in the neoclassic direction they began to abandon the principle of individual behaviour, which led to an assessment of rational behaviour in which such properties of behaviour as conscious activity, passions and individual goals almost ceased to be analysed. The reason for this, in our opinion, was the transformation of the neoclassic analysis of the behaviour of market individuals into a logic of formal relations, characterized by rational agent behaviour and restrictions. Hence, the economy began to undergo major transformations in the systems of forces and relations. However, the new cognitive conditions of behaviour were not of great importance, because the main problem in explaining economic behaviour was its formal description, which explains the rating of individual traits that reduced the neoclassic agent to an elementary formal unit.
M. Weber recognized that an economic agent did not repeat the behaviour of other real subjects; it was a fictitious unit, i.e. a logical construction with a certain set of properties and the capacity for evaluating its effectiveness. He believed that economics should be modelled by logic and mathematics, whose concepts would be cognitively consistent. He therefore stood among the early apologists of characterizing the economy as a system of relations.
Weber's deductive method, in interaction with the Austrian school and Debreu's formal characterization of the human being in economics, has shown that the positive approach was incorporated into Samuelson's revealed preferences theory and Savage's strong expected utility theory. In fact, Samuelson's contribution to the rejection of psychology had a logical-formal rather than empirical basis. Many economists have sought to analyse a perfectly rational agent rather than a real one. Samuelson and Savage were no exception, because their analysis used an idealized construct, an imaginary consumer whose choice model was merely the result of creative imagination.
Thus, the mathematical method in economics, resulting from the simultaneous emergence of tools for analysing the system of relations in Bayesian decision theory, is associated with mathematical topology, set theory and axiomatization. Formally, there was a revolution in economics, but the main subject of study, namely the price, which now began to be set only on an individual basis, dropped out of neoclassic analysis.
Neoclassic theory in the system of economic relations directly relates to human behaviour. The question arises: if a person becomes superfluous in the model, is it still possible to call the theory neoclassic, and does it follow the traditions in achieving its goal? If economic analysis expands in the absence of human activity, then there is a decline; conversely, the success of modern microeconomics shows the potential and flexibility of the scientific direction, which applies both to people and to the mechanisms underlying the principle of methodological individualism. Therefore, modern economic models for solving problems must strictly follow the rules of analysis. After all, in economics the model is an important component, and the rules of the game change with the types of players. The strategy of behaviour changes completely as a result of the transition from the preference-utility sets of von Neumann and Morgenstern to amounts of money or levels of intellectual development. The formal model corresponds to all these results, but in reality they do not exist.
References
Aumann, R. J. (1987). What is Game Theory Trying to Accomplish? Arrow K.J., Honkapohja S. (eds.), Frontiers of economics, Oxford, Basil Blackwell.
Arrow, K. J. (1996). Preface. Arrow K.J., Colombatto E., Perlman M. (eds.). The Rational Foundations of Economic Behavior. London, Macmillan.
Corry, L. (1996). Modern Algebra and the Rise of Mathematical Structures. Basel, Birkhäuser.
Davis, J. B. (2002). The emperor’s clothes. Journal of the History of Economic Thought, 24(2), 141-154.
Debreu, G. (1959). Theory of Value. New York, John Wiley & Sons.
Debreu, G. (1986). Theoretic models: mathematical form and economic content. Econometrica, 54(6), 1259-1270.
Ellsberg, D. (1954). Classic and current notions of measurable utility. in Page A. (ed.), Utility Theory: A Book of Readings. John Wiley & Sons, New York.
MasColell, A., Whinston, M. D., & Green, J. R. (1995). Microeconomic Theory. Oxford, Oxford University.
Mayer, H. (1932). The cognitive value of functional theories of price. Critical and positive investigations concerning the price problem, in Kirzner I. (ed.), Classics in Austrian Economics. A Sampling in the History of a Tradition. London, Pickering&Chatto.
Pareto, V. (1927). Manual of Political Economy. New York, A.M. Kelley, 1971.
Patinkin, D. (1965). Money, Interest and Prices: an integration of monetary and value theory. London, Harper & Row.
Richter, M. K. (1987). Revealed preference theory. Eatwell J., Milgate M. & Newman P. (eds.). The New Palgrave: A Dictionary of Economics. London, Macmillan.
Samuelson, P. A. (1938). A note on the pure theory of consumer’s behavior. Economica, 5, 61-71.
Samuelson, P. A. (1947). Foundations of Economic Analysis. Harvard University.
Samuelson, P. A. (1998). How Foundations came to be. Journal of Economic Literature, 36(3), 1375-1386. https://www.jstor.org/stable/2564803
Savage, L. J. (1954). The Foundations of Statistics. New York, John Wiley & Sons.
Schumpeter, J. A. (1954). History of Economic Analysis. London, Routledge, 1997, pp. 10667.
Sugden, R. (2001). The evolutionary turn in game theory. Journal of Economic Methodology, 8(1), 113-130.
von Neumann, J., & Morgenstern, O. (1944). Theory of Games and Economic Behavior. Princeton, Princeton University Press, 1953. https://www.jstor.org/stable/j.ctt1r2gkx
Copyright information
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
About this article
Publication Date
01 July 2021
eBook ISBN
978-1-80296-112-6
Publisher
European Publisher
Volume
113
Edition Number
1st Edition
Pages
1944
Subjects
Land economy, land planning, rural development, resource management, real estates, agricultural policies
Cite this article as:
Abgaldaev, V. Y., Dambueva, M. M., Mikulchinova, Е. А., Sakharovskaya, E. T., & Tsybikdorzhieva, Z. D. (2021). Influence Of Modelling Consistently Rational Agents On Neoclassic Economics. In D. S. Nardin, O. V. Stepanova, & V. V. Kuznetsova (Eds.), Land Economy and Rural Studies Essentials, vol 113. European Proceedings of Social and Behavioural Sciences (pp. 806-821). European Publisher. https://doi.org/10.15405/epsbs.2021.07.96