Monitoring The Assessment Of Student Educational Performance: Information Aspects

Abstract

The paper discusses information aspects of identifying student educational performance by means of monitoring, which helps to assess this performance within singled-out content clusters focused on achieving the set learning objectives. This yields more comprehensive and objective information on student learning progress. Within the framework of this approach, a universal procedure is proposed for converting objectives, which are usually not diagnosable, into requirements for educational outcomes. Since these requirements are diagnosable and verifiable, monitoring surveys can be launched. Presenting the monitoring results visually helps to identify regularities in the obtained data array. Once assessment results are obtained, managing inputs can be organized as an important element of educational management. There is quite a variety of monitoring tools to choose from, such as computer-assisted telecommunications enabling supervisory bodies to obtain information on the specific learning progress of students as well as other information on the instructional process in a given institution of higher learning. Diagnostic data can be processed in Microsoft Office Excel spreadsheets, including statistical treatment of the obtained data. For the quality of monitoring, the most important factor is the congruence of the educational performance assessment procedure with real educational conditions or, in information-language terms, the information quality factor. When all the mentioned factors are taken into account, the quality of student learning progress can be assessed more accurately and, accordingly, adequate effort can be put into managing the instructional process as part of educational management.

Keywords: Information, monitoring, structuring, visualization, management

Introduction

One critical challenge of the modern educational process is the assessment of its outcomes. Modern approaches require a distinct, operational statement of learning objectives. The objectives, in turn, are specified through the requirements for learning outcomes and through the content of the tools that measure conformity to these requirements. This gives a more comprehensive and accurate assessment of student educational performance.

The essential modern tool for such an assessment is monitoring; with it one can not only evaluate the outcome per se but also track its dynamics, which is crucial for managing the educational process as a whole.

Currently, monitoring is widely used in a variety of areas, such as economics and sociology. Of late it has become a fundamental tool of education as a component of educational management.

Problem Statement

On the basis of the study by Trubina (2003), within the scope of this article we define monitoring as specially arranged, continuous surveillance of changes in the core parameters of educational outcomes, for the purpose of assessing those outcomes and making follow-up executive decisions.

The core parameter of assessment is the conformity of educational outcomes to the set objectives. The approach adopted earlier in instructional practice, whereby learning objectives were determined through the analysis of related state documents on the development of education (as a reflection of social demand for the level and nature of education), cannot meet the requirements for the construction of modern technology. The description of the objectives must conform to the requirements of diagnosability (Bespalko, 2018) and verifiability (Avanesov, 2016). These requirements can be met only if the objectives have been determined with sufficient accuracy, so that conformity to each of their components correlates with certain manifestations measurable on some assessment scale. Such an approach to describing and setting objectives aligns with the trend towards technologization, standardization and formalization, particularly conspicuous of late in the requirements set for the description of components of educational standards (Analytics Landscape: A Comparison of Institutional and Learning Analytics in Higher Education, 2016; Shihnabieva, 2016).

Research Questions

Measurement of the learning process is needed as a tool for unbiased assessment of its results and for its orderly structuring and management. Significantly, such a tool is needed not only to get a "static snapshot" of some learning outcomes but also to see the dynamics of the learning process.

Purpose of the Study

The study aims to develop a standardized procedure for converting the formulated learning objectives of a given academic course into verifiable requirements for student educational performance, as a prerequisite for launching monitoring surveys.

Research Methods

The main methodological approach to solving this problem is to "convert" the notions of learning objective, student educational performance, etc. into the language of the theory of models (Safsouf, Mansouri, & Poirier, 2018). Next, the modeling methodology can be used to elaborate the structure of a requirement and to develop a standardized procedure for translating a stated learning objective into a requirement.

Findings

The effectiveness of monitoring depends primarily on how adequately the tracked parameters are represented in the obtained information.

Usually, assessments of educational performance provide a generalized picture without differentiating performance by the degree to which the learning objectives of a given academic course were met.

The cause of this situation is as follows.

The formulated objectives of learning a given academic course are achieved as progress is made in learning the corresponding instructional material. Each objective has a matching cluster of instructional material essential for achieving that particular objective. Thus, each component of the content of an academic course is focused on meeting a certain objective.

On the other hand, the system of content clusters, with each cluster focused on meeting a set objective, is "immersed" in the logic of a given academic course. In this case, the clustering disappears, giving rise to a holistic academic course with its own learning objectives and internal logic.

As a result, achievement of the set objectives is assessed on the basis of knowledge and skills related to a given academic course. Knowledge here is interpreted as notions, theories and concepts representing this area of reality, while skills, as a rule, are understood as problem-solving skills applied to instructional problems. The procedure for assessment of learning progress, under the circumstances, boils down to assessment of knowledge and skills.

To verify the achievement of a particular objective, each content cluster should be assessed for conformity to a criterion consistent with the set objective. The result of the assessment is a piece of information representing student performance for a given content cluster, not within the scope of general subject-related logic but in terms of meeting the intended objective. Such an assessment yields an information-filled "vector"; the information contained therein is complete and valid and can serve as the basis for managing the cognitive activities intended to meet the set objectives of learning a given academic course.
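To make the "vector" idea concrete, the following minimal Python sketch contrasts a single averaged score with a cluster-wise assessment; the cluster names, item scores and the 0.7 criterion threshold are purely illustrative assumptions, not part of the proposed procedure.

```python
# A minimal sketch of a cluster-wise performance "vector" (hypothetical data).
# Each content cluster is scored against the criterion of its own objective,
# instead of collapsing all items into a single averaged mark.

from statistics import mean

# Hypothetical item scores (0..1), grouped by the content cluster they belong to.
scores_by_cluster = {
    "objective_1_facts_vs_opinions": [1.0, 0.5, 1.0],
    "objective_2_chronology":        [0.0, 0.5, 0.5],
    "objective_3_source_analysis":   [1.0, 1.0, 0.5],
}

# Conventional assessment: one averaged score; the cluster structure is lost.
averaged = mean(s for scores in scores_by_cluster.values() for s in scores)

# Cluster-wise assessment: a vector with one component per objective.
vector = {cluster: mean(scores) for cluster, scores in scores_by_cluster.items()}

print(f"averaged score: {averaged:.2f}")
for cluster, value in vector.items():
    status = "met" if value >= 0.7 else "not met"  # illustrative threshold
    print(f"{cluster}: {value:.2f} ({status})")
```

The averaged score here hides the fact that the second objective is far from being met, whereas the vector exposes it directly.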

For all the obvious significance of the above approach, the reality is that mastery of content is assessed predominantly by an averaged score, overlooking its cluster nature and its focus on the formation of specific competencies (Trubina, Beshenkov, & Braines, 2018; Uskov, Bakken, & Lakhmi, 2017).

This gap can be bridged with monitoring aimed at obtaining the above information vector. By definition, this information is complete and integral, since it represents the formed/unformed status of all the objectives set for an academic course.

The major challenge is that the objectives of learning an academic course are formulated, as a rule, in general terms, so their attainment cannot be accurately diagnosed.

Using the modeling methodology one can design, refine and specify the set objectives.

The general idea of modeling methodology boils down to the following (Beshenkov & Rakitina, 2014; Beshenkov, Mindzaeva, Beshenkova, & Shutikova, 2016).

Solving a problem requires exploring some object and obtaining information about it. Singling out those of the object's properties that are substantial for the set objective (which, in turn, is determined by the given problem) leads to the concept of a model. With reference to a given objective, a model can be adequate or inadequate for the object of modeling. In other words, adequacy is always understood as adequacy to both the object under modeling and the objectives of modeling.

A model is used to obtain information about the object under study. This information can be complete or incomplete, valid or invalid depending on the adequacy of the model for the modeled object and the objectives of modeling.

From the standpoint of this methodology the requirement structure is as follows:

a system of knowledge to be learned by students – the object of modeling;

a learning activity – an objective of modeling;

the outcome of learning the system of knowledge from the perspective of a given learning objective – a model;

the characteristic of a learning level – the degree of the model's adequacy to the object and the goal of modeling.
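Read as a data structure, the four components above can be sketched as follows; this is a minimal Python illustration, with class and field names of our own choosing and sample values echoing the informatics example given further below.

```python
# A minimal sketch of the requirement structure described above.
# Field names are illustrative; the methodology fixes only the four roles.

from dataclasses import dataclass

@dataclass
class Requirement:
    knowledge_system: str   # the object of modeling: knowledge to be learned
    learning_activity: str  # the objective of modeling
    model: str              # the learning outcome viewed from the objective
    adequacy: str           # the characteristic of the learning level

requirement = Requirement(
    knowledge_system="elements of social informatics",
    learning_activity="analysis and practical use of social information processes",
    model="outcome of learning this cluster from the perspective of the objective",
    adequacy="semi-formal knowledge-wise, informal activity-wise",
)
print(requirement)
```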

The knowledge system falls into individual clusters in accordance with the learning objectives (Logvinov, 2013). Each cluster corresponds to a learning objective. Some clusters may overlap.

This design identifies at least three levels of learning, which match three degrees of adequacy:

conformity to knowledge and a given activity;

conformity to the knowledge but non-conformity to the activity;

non-conformity to the knowledge but conformity to the activity.

The second level represents formal knowledge; the third represents unintelligent activity.
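A minimal sketch of this classification, assuming that conformity to knowledge and conformity to activity can each be checked as a boolean; the function name and level labels are our own.

```python
# A sketch of the learning levels defined by the two conformity checks.

def learning_level(knows: bool, performs: bool) -> str:
    """Map conformity to knowledge and to activity onto a learning level."""
    if knows and performs:
        return "conformity to both knowledge and activity"
    if knows:
        return "formal knowledge: conformity to knowledge, not to activity"
    if performs:
        return "unintelligent activity: conformity to activity, not to knowledge"
    return "non-conformity to both"  # implicit: the text names *at least* three levels

print(learning_level(knows=True, performs=False))  # formal knowledge
```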

The following important point should be emphasized.

Acquisition of formal knowledge and automated skills is absolutely unacceptable with regard to the whole content of education. However, with regard to individual learning objectives – individual activities – and equally with regard to individual knowledge clusters, such a situation is not only possible but necessary. For example, knowledge of historical dates or of the values of physical constants is "formal": it only needs to be reproduced. On the other hand, some activities in certain situations are to be carried out formally, according to an algorithm, automatically.

We propose to treat a model as the outcome of a student's learning of a knowledge system, viewed from the perspective of a given objective. By now, a fairly complete classification of models has been developed (e.g., by S. A. Beshenkov, Ye. A. Rakitina, and M. I. Shutikova) that can be used for further refinement of the requirements (Beshenkov & Rakitina, 2014). The stages of implementation of this approach are as follows:

the object of modeling in terms of philosophy is threefold: appearance, structure and behavior (dynamics);

the objectives – the activities of a subject (student) – can be conditionally sorted into three main classes: cognition, social intercourse, practice;

models can differ in their reflection of the essential characteristics of an object, such as material-and-energy characteristics or information characteristics.

Thus, a requirement can be viewed as a "point" in a space with the coordinates entity, object and subject or, in our interpretation, area of knowledge, characteristic of knowledge and activity.

On the other hand, models can differ in their degree of formalization (formal, semi-formal, informal).

In conventional terms, the structure of a requirement looks as follows: "This knowledge cluster, manifesting the material-and-energy (or information) component, is explored from the standpoint of its structure (appearance, dynamics) with the purpose of performing this activity focused on cognition (social intercourse, practice) at the informal (formal, semi-formal) level knowledge-wise, and at the formal (informal, semi-formal) level activity-wise."
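Since this conventional wording is essentially a template over the coordinates singled out earlier, it can be filled in programmatically. The sketch below is a hypothetical illustration, with parameter names and vocabularies of our own choosing, instantiated with the history example that follows.

```python
# A sketch of the requirement template over the coordinates described above.
# The vocabularies in the comments mirror the classification in the text.

def build_requirement(cluster: str, component: str, aspect: str,
                      activity: str, knowledge_level: str,
                      activity_level: str) -> str:
    """Render one requirement from its coordinate values."""
    return (
        f"The cluster '{cluster}', manifesting the {component} component, "
        f"is explored from the standpoint of its {aspect} for the purpose "
        f"of an activity focused on {activity}, at the {knowledge_level} "
        f"level knowledge-wise and the {activity_level} level activity-wise."
    )

print(build_requirement(
    cluster="historical events of the Napoleonic era",
    component="information",     # vs. material-and-energy
    aspect="dynamics",           # vs. appearance, structure
    activity="cognition",        # vs. social intercourse, practice
    knowledge_level="informal",  # vs. formal, semi-formal
    activity_level="informal",
))
```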

Each parameter included in this requirement can, in turn, be further specified; e.g., the degree of formalization can be understood in terms of dynamics or statics, etc. (Serdyukova, Serdyukov, & Slepov, 2018). This shows that the set of parameters in the requirement can be substantially extended beyond those listed above.

Thus, we have designed a standardized procedure (an algorithm) for specifying the content of learning objectives or, to put it another way, a standardized procedure for converting a system of learning objectives determined through activities into requirements. This point is of practical importance, since the availability of such a procedure considerably reduces the time costs involved in preparing and arranging the monitoring.

The application of this procedure is illustrated by the following requirement, which can be formulated within the scope of the Federal component of the general education standard for history: "The historical events related to the Napoleonic era (the information component) are explored chronologically (dynamics) in order to distinguish facts and opinions therein. This objective is focused on the cognition of historical reality and is accomplished, in terms of both knowledge and activities, at the informal level."

For example, should a student's answer depict the personal relationships between the protagonists of that era (the structural aspect), one may speak of non-conformity to the stated requirement in terms of this parameter (the object description aspect).

Within the framework of the Federal General Education Standard for Informatics, an example requirement could be formulated as follows: "The elements of social informatics (the information component) are studied from the perspective of the structure of social information processes and systems (the structural component) for the purpose of their analysis and practical use. Moreover, this purpose is achieved knowledge-wise at the semi-formal level and activity-wise at the informal level."

The given examples demonstrate the generality of the approach to constructing requirements, the approach being independent of the selected educational area. It can be viewed as a generalization of the methodology for constructing requirements based on the analysis of the functions and morphology of instructional material.

Conclusion

The proposed approach can be used to extract meaningful management information from the evidence provided by monitoring. However, a certain methodology should be followed here, too.

As practice shows, an effective way of solving this problem is the use of various means and methods of visualization. Many researchers, such as Reznik (2012), call for treating visualization not merely as a secondary tool but as a tool for generating new visual forms that expose the internal meaning of data and lead to informative results.

Ultimately, it is a matter of activating holistic thinking, since rational and imaginative thinking complement each other (Beshenkov, Shutikova, & Mindzaeva, 2017; Learning Analytics For Tracking Student Progress, 2016).

Visualization helps to:

single out, generalize and systematize large quantities of information and data (Schmarzo, 2014);

eliminate superfluous, secondary information – a step essential for finding regularities in a very large array of loosely structured information;

bring the available information into a form as convenient as possible for human perception.

Relying on the above-formulated procedure for translating objectives into diagnosable requirements and on the methodology of visualization, one can obtain meaningful information for managing the educational process.
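As a minimal illustration of this final step, the sketch below plots a hypothetical cluster-wise vector as a bar chart with the attainment criterion marked as a reference line; the use of matplotlib, the data values and the threshold are our own assumptions about tooling, not something prescribed by the paper.

```python
# A minimal visualization sketch: the per-cluster vector as a bar chart,
# with the attainment criterion drawn as a reference line (hypothetical data).

import matplotlib.pyplot as plt

vector = {"objective 1": 0.83, "objective 2": 0.33, "objective 3": 0.83}
threshold = 0.7  # illustrative criterion of objective attainment

plt.bar(list(vector.keys()), list(vector.values()))
plt.axhline(threshold, linestyle="--", label=f"criterion = {threshold}")
plt.ylabel("degree of objective attainment")
plt.title("Monitoring result as a cluster-wise vector")
plt.legend()
plt.show()
```

Even this simple chart makes the regularity immediately visible: the second objective falls below the criterion while the others meet it, which is exactly the kind of managerially meaningful information the monitoring is meant to surface.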

Acknowledgments

This is a government-commissioned paper (N073-00086-19ПР).

References

  1. Analytics Landscape: A Comparison of Institutional and Learning Analytics in Higher Education. (2016). EDUCAUSE. Retrieved from: https://library.educause.edu/~/media/files/library/2016/4/eig1504.pdf
  2. Avanesov, V. S. (2016). Problema soedineniya testirovaniya s obucheniem [The problem of merging testing with learning]. Narodnoe obrazovanie, 7–8, 132–140. [in Rus.].
  3. Beshenkov, S. A., & Rakitina, Ye. A. (2014). Modelirovaniye i formalizatsiya [Modeling and formalization]. Moscow: BINOM. Laboratoriya bazovykh znaniy. [in Rus.].
  4. Beshenkov, S. A., Shutikova, M. I., & Mindzaeva, E. V. (2017). Information and Cognitive Technologies. Modern Educational Trend. In International Conference "Education Environment for the Information Age" (EEIA-2017). Moscow.
  5. Beshenkov, S. A., Mindzaeva, E. V., Beshenkova, E. V., & Shutikova, M. I. (2016). Information Education in Russia. Smart Innovation, Systems and Technologies Series, 59, 563–571.
  6. Bespalko, V. P. (2018). Kiberpedagogika. Pedagogicheskiye osnovy upravliaemogo kompiuterom obucheniya [Cyberpedagogics. Pedagogical fundamentals of computer-controlled learning]. E-Learning. Moscow: T8RUGRAM. [in Rus.].
  7. Learning Analytics For Tracking Student Progress. (2016). Hanover Research. Retrieved from: https://www.imperial.edu/research-planning/7932-learning-analytics-for-tracking-student-progress/file
  8. Logvinov, I. I. (2013). O soderzhaniyi shkolnogo obucheniya [On the content of schooling]. Pedagogika, 3, 14–22. [in Rus.].
  9. Reznik, N. A. (2012). Nauchnost, dostupnost i naglyadnost uchebnogo kontenta v sovremennom informatsionnom prostranstve [The scientific nature, accessibility and visuality of educational content in the contemporary information space]. Saarbrucken, Germany: Lambert Academic Publishing. [in Rus.].
  10. Safsouf, Y., Mansouri, K., & Poirier, F. (2018). A New Model of Learner Experience in Online Learning Environments. Information Systems and Technologies to Support Learning Series, 111, 29–38.
  11. Schmarzo, B. (2014). What Universities Can Learn from Big Data – Higher Education Analytics. InFocus Blog. Dell EMC Services. Retrieved from: https://infocus.dellemc.com/william_schmarzo/what-universities-can-learn-from-big-data-higher-education-analytics/
  12. Serdyukova, N. A., Serdyukov, V. I., & Slepov, V. A. (2018). Smart Education Analytics: Quality Control of System Links. Smart Innovation, Systems and Technologies Series, 99, 104–113.
  13. Shihnabieva, T. S. (2016). Intelligent system of training and control of knowledge. Smart Innovation, Systems and Technologies Series, 59, 595–603.
  14. Trubina, I. I. (2003). Pedagogichesky monitoring kak instrument razvitiya informatsionnoi osnovy upravleniya obrazovatelnym uchrezhdeniyem [Pedagogical monitoring as a tool for developing the information basis of educational institution management]. Moscow: Obrazovaniye i Informatika. [in Rus.].
  15. Trubina, I. I., Beshenkov, S. A., & Braines, A. A. (2018). Informatics discipline in the context of digital civilization. In International Conference "Education Environment for the Information Age" (pp. 768–774).
  16. Uskov, V. L., Bakken, J. P., & Lakhmi, C. J. (2017). Building smart learning analytics system for Smart University. Smart Education and e-Learning 2017, 191–204.
