
Revista mexicana de física

Print version ISSN 0035-001X

Rev. Mex. Fis. vol. 68 no. 6, México, Nov./Dec. 2022; Epub 31-Jul-2023

https://doi.org/10.31349/revmexfis.68.061702 

Research

Thermodynamics and Statistical Physics

Prediction of equations of state of molecular liquids by an artificial neural network

A. Torres-Carbajal a,b,*

U. Que-Salinas a

P. E. Ramírez-González c

a Instituto de Física “Manuel Sandoval Vallarta”, Universidad Autónoma de San Luis Potosí, Álvaro Obregón 64, 78000, San Luis Potosí, México.

b Instituto de Física, Universidad Nacional Autónoma de México, Apartado Postal 20-364, 01000, Ciudad de México, México.

c Investigadores CONACYT-Instituto de Física “Manuel Sandoval Vallarta”, Universidad Autónoma de San Luis Potosí, Álvaro Obregón 64, 78000, San Luis Potosí, México.


Abstract

In this work an artificial neural network (ANN) is used to determine the pressure and internal energy equations of state of noble gases and some molecular liquids by predicting the thermodynamic state variables, density and temperature, encoded in the radial distribution function. The ANN is trained to predict these state variables using only structural data. The predicted values are then used to compute equations of state of real liquids such as argon, neon, krypton and xenon, as well as molecular liquids like nitrogen, carbon dioxide, methane and ethylene, in the supercritical regime of each fluid. To assess the ANN predictions, the relative percentage error with respect to the exact values was determined; its magnitude is less than 1%. Accordingly, the comparison between equations of state computed with the predicted variables and experimental results shows very good agreement for most of the liquids studied here. Since our ANN implementation only requires the microscopic structure as input, data coming from experiments, theoretical frameworks or simulations can be used to predict state variables and thereby complement the thermodynamic characterisation of liquids through the determination of equations of state. Moreover, further improvements or extensions of the microscopic structure database can be addressed without changing the neural network architecture presented here.

Keywords: Artificial neural network; equation of state; molecular liquids

1 Introduction

Nowadays, machine learning (ML) methods are mainly used to enhance solutions to specific problems in industry [1] and science [2]. In the latter case, applications range from molecular design [3,4] to cosmology [5,6]. In condensed matter physics, in particular, examples of ML developments are vast [7]. However, implementations of such methods to describe the liquid state are still at an early stage, where problems like the determination of the molecular interaction potential from the microscopic structure [8], pattern identification in fluids [9], and the determination of transport properties [10] have been addressed.

On the other hand, and in general terms, the thermodynamic description of liquids can be carried out through experimental measurements of thermophysical properties such as pressure, density and internal energy, among others. However, since the time and cost needed to perform such experiments are high, only a subset of thermodynamic properties is amenable to measurement. Results from such experiments can be consulted in different data banks for some specific molecular fluids [11,12]. Additionally, theoretical and computational approaches are commonly used to complement and, in some cases, predict the thermodynamic behaviour of liquids. In this regard, a correlation between the microscopic properties of a liquid and its macroscopic thermodynamic behaviour is always desirable.

The bridge between microscopic and macroscopic thermal properties is provided by equilibrium statistical mechanics, which gives a quite robust thermodynamic description of the liquid state [13]. This theoretical framework allows us to determine, for example, the fluid pressure, P, in terms of the microscopic structure, g(r), the interaction between molecules, u(r), the density, ρ, and the temperature, T, of the liquid as [13]

P = \rho k T - \frac{2}{3}\pi\rho^{2}\int_{0}^{\infty} r^{3}\,\frac{du(r)}{dr}\,g(r)\,dr, \qquad (1)

where k is the Boltzmann constant. From Eq. (1) it is clear that if either of the macroscopic parameters, density or temperature, is unknown, the pressure cannot be determined.
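As an illustration, the following is a minimal numerical sketch of Eq. (1), assuming the Lennard-Jones potential in reduced units and a tabulated g(r); the radial grid, cutoff and the ideal-gas-like g(r) = 1 used in the example are illustrative choices, not the authors' settings.

```python
import numpy as np

def lj_potential_derivative(r, sigma=1.0, eps=1.0):
    """du/dr for the Lennard-Jones potential u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 4.0 * eps * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)

def pressure_eos(rho, T, r, g_r, k=1.0):
    """Eq. (1): P = rho*k*T - (2/3)*pi*rho^2 * int r^3 u'(r) g(r) dr (reduced LJ units)."""
    integrand = r**3 * lj_potential_derivative(r) * g_r
    return rho * k * T - (2.0 / 3.0) * np.pi * rho**2 * np.trapz(integrand, r)

# Illustrative call with g(r) = 1 beyond the repulsive core (not a realistic liquid structure).
r = np.linspace(0.8, 10.0, 2000)
g_r = np.ones_like(r)
print(pressure_eos(rho=0.5, T=2.0, r=r, g_r=g_r))
```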

Although from a theoretical or computational point of view a situation where density and temperature are both unknown is rather unusual, there are experimental systems of interest where the structure is easily determined but one or both thermodynamic variables are missing. For instance, in order to theoretically describe some non-vibrated granular systems it is fundamental to establish an effective temperature [14,15]. On the other hand, in temperature-dependent colloidal systems, like PNIPAM colloids [16], the thermal behaviour is known but the density must be inferred.

It is well known that, at a microscopic level, for a given u(r), the microscopic structure g(r), usually called the radial distribution function (rdf), can be determined by experimental means and, for some liquids, even with theoretical [17] or computational [18] schemes. In any case, it is well documented that g(r) has a subtle relation with the density and temperature of the liquid. Indeed, although the explicit dependence on these parameters is unknown, it is widely observed that if the density, the temperature or both are modified, the rdf changes in consequence, i.e., there exists an implicit dependence g(r; ρ, T) [19]; in fact, it is possible to detect the onset of the fluid-solid transition by observing the behaviour of the second maximum of the rdf [20].

Considering that different thermophysical properties have already been investigated using ML methods, for instance interfacial [21] and critical properties [22], the liquid-vapour phase diagrams of refrigerants [23], and even properties like the dynamic viscosity of ionic liquids [24], such methods have proven to have great potential to complement and find solutions to a wide spectrum of problems. This versatility is often related to one of the major advantages of ML methods, namely the establishment of non-apparent correlations among data in a large set, which ultimately allows one to describe or predict parameters of interest within the data set [25]. Among the most popular of such methods one finds the so-called artificial neural network (ANN) algorithms, which allow us to model non-linear statistical data and to deal with incomplete or noisy data sets [26,27].

The main goal of this contribution is the determination of the internal energy and pressure equations of state of noble gases like argon (Ar), neon (Ne), krypton (Kr) and xenon (Xe), and of some molecular liquids such as nitrogen (N2), carbon dioxide (CO2), methane (CH4) and ethylene (C2H4), using only the microscopic structure provided by the g(r) of a model fluid. To this end, we implement an ANN that simultaneously predicts the density and temperature to which a given g(r) belongs; the respective equations of state are then computed and compared against experimental results. This work is organised as follows: in Sec. 2, details about the model fluid and the ANN implementation are discussed. In Sec. 3 we show and discuss the results regarding the prediction of thermodynamic state variables, the determination of equations of state and their comparison with experimental results. Finally, in Sec. 4 we give the conclusions of this work.

2 Artificial neural networks to predict thermodynamic state variables

2.1 Artificial neural network architecture and implementation

Depending on the particular problem one wants to solve with an ANN, its architecture must be chosen carefully, since the data processing and the final predictions strongly depend on it [28]. Once the ANN architecture is selected, the training procedure starts by feeding the ANN with a set of data for which the values and functions of interest are well known. Using these data, the hidden layers compute the weights of a non-linear function [29]. With this function the variables of interest are predicted, and a comparison with the real values is performed in order to compute an error. If the error is greater than a previously established tolerance, the activation function is computed again with new weights; otherwise, the predictions are updated and the ANN training finishes. Additionally, using an independent data set of well-known variables, the ANN predicts them in order to validate its training. A general flowchart of this algorithm is shown in Fig. 1a).

Figure 1 a) Flowchart of the ANN training. Pictorial representations of the b) training and c) prediction stages of the ANN. 

In this context, the feed-forward ANN is the most commonly employed architecture, and it consists of a number of neurons within interconnected layers distributed as [30]: one input layer, one or more hidden layers, and one output layer. In practice, the implementation of an ANN consists of two main stages, namely i) training (Fig. 1b) and ii) prediction (Fig. 1c), and in each one an independent data set must be used. Such data are established as discussed further below.

Thus, using the Python language we implement from scratch a feed-forward ANN whose architecture consists of an input layer with 101 neurons, four hidden layers with 404, 202, 101 and 10 neurons, respectively, and an output layer with two neurons that performs the density and temperature predictions. The Adam optimisation algorithm [31] with a learning rate γ = 0.001 [32], along with the rectified linear unit (ReLU) [33] as activation function, was used in order to minimise the error between the real function and the predicted one during the learning stage.
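The network described above was written from scratch; purely as a hedged sketch, a functionally similar architecture (layer sizes 101 → 404 → 202 → 101 → 10 → 2, ReLU activations, Adam with lr = 0.001) could be expressed in PyTorch as follows. This is an assumed reconstruction, not the authors' code, and the dummy batch is only there to show one training step.

```python
import torch
import torch.nn as nn

class StateVariableANN(nn.Module):
    """Feed-forward net with the layer sizes quoted in the text; outputs (rho*, T*)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(101, 404), nn.ReLU(),
            nn.Linear(404, 202), nn.ReLU(),
            nn.Linear(202, 101), nn.ReLU(),
            nn.Linear(101, 10), nn.ReLU(),
            nn.Linear(10, 2),          # two output neurons: density and temperature
        )

    def forward(self, g_r):
        return self.net(g_r)

model = StateVariableANN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

# One illustrative training step on dummy data: a batch of g(r) sampled on
# 101 radial points and the corresponding (rho*, T*) targets.
g_batch = torch.rand(32, 101)
targets = torch.rand(32, 2)
optimizer.zero_grad()
loss = loss_fn(model(g_batch), targets)
loss.backward()
optimizer.step()
```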

The assessment of the training stage is mainly determined through the mean absolute error (MAE) and the mean squared error (MSE), whose values were 7.7 × 10-3 and 1.4 × 10-4, respectively. In contrast, the accuracy of the prediction stage is evaluated individually instead of through a statistical analysis. This procedure allows us to clearly identify the prediction performance of the ANN in specific thermodynamic regions, since we can determine the magnitude of deviations and identify systematic errors. The ANN prediction accuracy is then quantified through the magnitude of the percentage relative error, defined as 100 × (X − XANN)/X, where X stands for the real value of ρ* or T*, respectively, and XANN is the corresponding prediction performed by the ANN. This error definition allows us to identify whether an individual prediction is underestimated or overestimated through the sign of the result, positive in the former case and negative in the latter.
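For concreteness, a minimal sketch of the error measures just described (the sign convention follows the definition in the text; the numerical example is illustrative only):

```python
import numpy as np

def percent_relative_error(x_true, x_ann):
    """100*(X - X_ANN)/X: positive means underestimated, negative means overestimated."""
    x_true, x_ann = np.asarray(x_true, float), np.asarray(x_ann, float)
    return 100.0 * (x_true - x_ann) / x_true

def mae(x_true, x_ann):
    return np.mean(np.abs(np.asarray(x_true) - np.asarray(x_ann)))

def mse(x_true, x_ann):
    return np.mean((np.asarray(x_true) - np.asarray(x_ann)) ** 2)

# e.g., predicting rho* = 0.515 when the true value is 0.500 gives -3.0 (overestimate)
print(percent_relative_error(0.500, 0.515))
```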

2.2 Molecular model of simple liquids and training data set

The liquids of interest in this work are commonly referred to as noble gases and have been widely studied by experimental, theoretical and computer simulation means. In the last two instances, the interaction between molecules has been modelled by the Lennard-Jones (LJ) fluid with great success. Therefore, we focus on the LJ fluid to build a database to train the ANN. The database is composed of several g(r) determined at different thermodynamic states spanning a wide region of the phase diagram [34].

In particular, we determine the g(r) of the LJ fluid with the analytical equation provided by Morsali et al. [35]. This approach was developed to be accurate in the density and temperature ranges 0.35 ≤ ρ* ≤ 1.1 and 0.35 ≤ T* ≤ 4.5, respectively, where ρ* = ρσ3 and T* = kT/ϵ are the dimensionless number density and temperature, with σ and ϵ being the molecular diameter and the interaction energy between molecules of a specific liquid. Inside this region we create 10201 different radial distribution functions that compose the database used to train and validate the ANN; the details are discussed below. From this database, 80% of the data is used for the ANN learning and the remaining data is used to validate that learning stage in order to complete the training of the ANN. Additionally, 56 different g(r) along four main isotherms were computed and fed to the ANN to predict the density and temperature; these predicted values are then employed to compute the energy and pressure equations of state for the different liquids of interest in this work.
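The data pipeline could be sketched as below. Two points are assumptions: that the 10201 states come from a uniform 101 × 101 grid in (ρ*, T*) and that the 101 input neurons correspond to 101 radial points; in addition, the Morsali parameterisation [35] is only stubbed out, since it is not reproduced here.

```python
import numpy as np

def morsali_gr(r, rho_star, T_star):
    """Placeholder for the analytical g(r; rho*, T*) of Morsali et al. [35].
    The actual parameterisation is not reproduced here; this stub returns an
    ideal-gas-like g(r) = 1 only so that the pipeline runs end to end."""
    return np.ones_like(r)

r_grid = np.linspace(0.0, 5.0, 101)      # 101 radial points -> 101 input neurons (assumed)
rhos = np.linspace(0.35, 1.10, 101)      # assumed 101 x 101 grid of (rho*, T*) states,
temps = np.linspace(0.35, 4.50, 101)     # consistent with the 10201 g(r) quoted in the text

X = np.array([morsali_gr(r_grid, rho, T) for rho in rhos for T in temps])
y = np.array([(rho, T) for rho in rhos for T in temps])

# 80/20 split between learning and validation, as described in the text.
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
n_train = int(0.8 * len(X))
X_train, y_train = X[idx[:n_train]], y[idx[:n_train]]
X_val, y_val = X[idx[n_train:]], y[idx[n_train:]]
```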

3 Equations of state of molecular liquids

3.1 Prediction of thermodynamic state variables

In this work the accuracy of the ANN is assessed by performing predictions along thermodynamic states on the isotherms T* = 1.5, 2.5, 3.5 and 4.5. For all fluids studied here these temperatures belong to supercritical states; this choice responds to the lack of experimental results at lower temperatures and densities. Thus, once the ANN is trained we feed it a g(r) that was not used during the learning or validation stages, and the ANN predicts the density and temperature at which that g(r) was determined. Results on the prediction accuracy of both density and temperature are shown in Fig. 2: panel a) shows the density predictions, while the temperature predictions are presented in panel b). In the graphs, the error magnitude is associated with a specific colour tone indicated by the bar at the right in each case.

Figure 2 Accuracy of the simultaneous ANN predictions of density and temperature for different thermodynamic states. The error magnitude is shown through the colour map in the bar at the right of each graph. Panels a) and b) show the density and temperature predictions, respectively. 

From Fig. 2 it is clear that most of the predictions show very good agreement; in fact, deviations are smaller than 1% in magnitude for both thermodynamic variables. However, there are three states where the density prediction is overestimated, belonging to the T* = 1.5, 3.5 and 4.5 isotherms. Nevertheless, although the maximum deviation is around −5%, the prediction can still be considered good. In fact, considering that only structural information was required, the prediction accuracy is very high. In this sense, one can expect the same degree of accuracy if experimental, theoretical or simulation microscopic structures of fluids similar to those studied here are used. Furthermore, the ANN architecture could be re-trained to predict the same thermodynamic variables for more complex liquids whose microscopic structure is known over a wide region. Such a task only depends on the quantity and quality of the available microscopic structure data and, ultimately, on a reasonable assumption for the particular interaction between molecules of such a complex liquid.

3.2 Equations of state from ANN predictions

For a thermodynamic description of liquids within theoretical or simulation frameworks, different equations of state are usually computed [13]; in particular, those that involve the interaction potential and the microscopic structure are the simplest ones, e.g., the pressure equation of state, given by Eq. (1), or the internal energy equation of state, given by

\frac{E}{N} = \frac{3}{2}kT + 2\pi\rho\int_{0}^{\infty} r^{2}\,u(r)\,g(r)\,dr, \qquad (2)

where E/N is the internal energy per particle. As one can clearly observe from Eqs. (1) and (2), if the microscopic structure g(r) for a given interaction between molecules u(r) is already known, the pressure and internal energy can be determined. Nevertheless, if either the temperature or the density is unknown, these equations are useless.
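For completeness, a minimal numerical sketch of Eq. (2), under the same assumptions as the Eq. (1) sketch above (Lennard-Jones potential, reduced units, a tabulated g(r) on a radial grid):

```python
import numpy as np

def lj_potential(r, sigma=1.0, eps=1.0):
    """u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def energy_eos(rho, T, r, g_r, k=1.0):
    """Eq. (2): E/N = (3/2)*k*T + 2*pi*rho * int r^2 u(r) g(r) dr (reduced LJ units)."""
    integrand = r**2 * lj_potential(r) * g_r
    return 1.5 * k * T + 2.0 * np.pi * rho * np.trapz(integrand, r)
```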

As we have already shown, these thermodynamic state variables can be determined with a high degree of accuracy using our ANN implementation, which, assuming u(r), only needs the information provided by g(r). Thus, encouraged by the good results discussed in Fig. 2, we determined the pressure and internal energy equations of state for different liquids and compared them with the corresponding experimental results. In order to express the reduced density and temperature in real units we use the molecular diameter and interaction energy of the liquid of interest. For the liquids studied here, these parameters can be found elsewhere [36-38].
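The conversion from reduced to real units can be sketched as follows, assuming the commonly quoted textbook LJ parameters for argon (σ ≈ 3.405 Å, ϵ/kB ≈ 119.8 K); the values actually used by the authors come from Refs. [36-38] and may differ slightly.

```python
# Convert ANN predictions in reduced units back to real units.
K_B = 1.380649e-23          # Boltzmann constant, J/K
SIGMA_AR = 3.405e-10        # assumed LJ diameter of Ar, m
EPS_AR = 119.8 * K_B        # assumed LJ well depth of Ar, J

def to_real_units(rho_star, T_star, sigma=SIGMA_AR, eps=EPS_AR):
    rho = rho_star / sigma**3    # number density in m^-3
    T = T_star * eps / K_B       # temperature in K
    return rho, T

print(to_real_units(0.5, 4.5))   # -> (number density in m^-3, temperature in K)
```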

In Figs. 3 and 4, results for the pressure and internal energy of Ar are displayed as a function of density for isotherms at T(K) = 175, 292, 409 and 526. As one can notice, both equations of state are in general well predicted by the ANN; nevertheless, as the density and temperature increase the ANN tends to overestimate the pressure, see for example the isotherm T = 526 K in Fig. 3. This behaviour can be traced to the slight overestimation of the density predictions at high temperatures, see Fig. 2a).

Figure 3 Pressure equation of state as a function of the density for Ar at different temperatures. Lines are available experimental results provided by the NIST data bank [12] and symbols are results computed with Eq. (1) using the ANN predictions for temperature and density. 

Figure 4 Ar internal energy as a function of the density for different isotherms. Lines are experimental results [12] and symbols are results of Eq. (2) computed with the ANN predictions of density and temperature. 

On the other hand, the internal energy prediction within the ANN is remarkably good at any of the studied temperatures as can be seen in Fig. 4.

Although the microscopic structure used to train our ANN and to generate its predictions is well known to be good for the computation of thermophysical properties of Ar [39-41], it should be stressed that a poor estimation of density and/or temperature by the ANN would not yield the good agreement found with the experimental results. In this instance, even the deviations found in the density prediction appear negligible in the computation of the equations of state.

Since the thermodynamic state variables predicted by the ANN show such good agreement, we also compute the pressure and internal energy equations of state for other noble gases, namely Ne, Kr and Xe. For these fluids we find that, in general, the pressure determination is more accurate than the internal energy computation. For the latter property we observe deviations at high temperatures regardless of the fluid; this behaviour might be an effect of the molecular parameters used to characterise the respective fluid [36-38], but an improvement of such parameters is beyond the scope of this contribution. A comparison between the pressure results determined with the state variables predicted by the ANN and the experimental ones, as a function of density and temperature, is shown in Fig. 5. Again, experimental results were obtained from the NIST data bank [12].

Figure 5 Pressure equation of state as a function of the density for different liquids (Ne, Ar, Kr, and Xe). Lines are experimental results [12] and symbols are the computation of Eq. (1) using the corresponding ANN predictions. 

As one can see from Fig. 5, the predictions show good overall agreement. However, for Kr, as the density is increased we observe an underestimation of the pressure. Such deviations may be due to the molecular parameters (σ and ϵ) used to represent Kr rather than to the ANN predictions. Additionally, and unfortunately, we do not have experimental results at higher densities for Kr. In this regard, a better estimation of those molecular parameters could improve the ANN predictions; however, such a task is beyond the scope of this contribution. In any case, it is worth stressing that even for densities where experimental pressure results are not reported, the values predicted by the ANN are physically meaningful. This means that the ANN could be used to predict such experimental values.

The prediction capabilities of the ANN are also tested against experimental results for some molecular liquids, namely N2, CO2, CH4 and C2H4. In Fig. 6a) the pressure behaviour as a function of the density of CH4 is shown for three isotherms. As one can see, the agreement is remarkably good, although at the highest densities and temperature certain deviations can be seen. Additionally, in Fig. 6b) results for different molecular liquids are also shown; in this scenario CO2 and CH4 are very well predicted, while N2 and C2H4 are slightly overestimated. Nevertheless, we stress that the qualitative behaviour is well reproduced.

Figure 6 Pressure equation of state as a function of the density for different molecular liquids. Panel a) shows results for methane CH4 at three different isotherms. Panel b) shows results for N2, CO2, CH4 and C2H4 at different isotherms. 

4 Concluding remarks

Supported by the previously discussed results, we can assert that a straightforward application of ANN algorithms can be safely used to predict thermodynamic state variables in situations where only the g(r) is known. Here, we have explored the special case of liquids described by the LJ interaction potential and, using an ANN, supercritical thermodynamic state variables such as density and temperature were predicted. These values were then used to determine equations of state in the aforementioned regime, which were also compared with available experimental results, showing excellent quantitative agreement.

Since the ANN architecture was built and optimised to deal with data related to the microscopic structure of liquids, it can be used with other related experimental databases, for instance ones that include thermodynamic states in the low density and temperature regime. Another exciting possibility is training the ANN to predict thermodynamic state variables of different kinds of fluids, like the ones commonly used to model colloidal systems, e.g., hard-sphere, square-well or attractive Yukawa fluids, among others. There, the determination of the structure is almost inexpensive, but at the same time a correct determination of the state variables is fundamental for a complete description, considering that data banks for such kinds of systems are scarce.

Nevertheless, it is worth stressing the paramount relevance of the database used to train any ANN; here, it directly depends on the quality of the microscopic structure data. This determines whether the predictions made by the ANN are merely qualitative or fully quantitative. However, a database like the one used in this work can be improved by experimental, simulation or theoretical means while the ANN architecture remains the same, and results with the same degree of accuracy can be expected. Work in this direction is in progress.

Acknowledgments

A. Torres-Carbajal and P. E. Ramírez-González acknowledge the financial support provided by CONACyT México through grants: Estancias Posdoctorales Nacionales grant no. 422753/2021, Cátedras CONACyT No. 1631 and CB-2015-01 No. 257636. The authors thankfully acknowledge the computer resources, technical expertise and support provided by the Laboratorio Nacional de Supercómputo del Sureste de México, a CONACYT member of the network of national laboratories. The authors also acknowledge LANIMFE for the infrastructure and computational resources provided during this project.

References

1. Z. Ge, Z. Song, S. X. Ding, and B. Huang, Data mining and analytics in the process industry: The role of machine learning, IEEE Access 5 (2017) 20590.

2. G. Carleo et al., Machine learning and the physical sciences, Rev. Mod. Phys. 91 (2019) 045002.

3. B. Sanchez-Lengeling and A. Aspuru-Guzik, Inverse molecular design using machine learning: Generative models for matter engineering, Science 361 (2018) 360.

4. K. T. Butler, D. W. Davies, H. Cartwright, O. Isayev, and A. Walsh, Machine learning for molecular and materials science, Nature 559 (2018) 547.

5. E. E. O. Ishida, Machine learning and the future of supernova cosmology, Nat. Astron. 3 (2019) 680.

6. A. Peel et al., Distinguishing standard and modified gravity cosmologies with machine learning, Phys. Rev. D 100 (2019) 023508.

7. E. A. Bedolla-Montiel, L. C. Padierna, and R. Castañeda-Priego, Machine learning for condensed matter physics, J. Phys.: Condens. Matter 33 (2020) 053001.

8. G. Toth, N. Kiraly, and A. Vrabecz, Pair potentials from diffraction data on liquids: A neural network solution, J. Chem. Phys. 123 (2005) 174109.

9. M. S. G. Nandagopal, E. Abraham, and N. Selvaraju, Advanced neural network prediction and system identification of liquid-liquid flow patterns in circular microchannels with varying angle of confluence, Chem. Eng. J. 309 (2017) 850.

10. J. P. Allers, J. A. Harvey, F. H. Garzon, and T. M. Alam, Machine learning prediction of self-diffusion in Lennard-Jones fluids, J. Chem. Phys. 153 (2020) 034102.

11. Royal Society of Chemistry, ChemSpider, http://www.chemspider.com/, accessed: 2021-08-10.

12. U.S. Department of Commerce, National Institute of Standards and Technology (NIST), https://www.nist.gov/, accessed: 2021-08-10.

13. J. P. Hansen and I. R. McDonald, Theory of Simple Liquids, 3rd ed. (Academic Press, London, 2006).

14. F. Donado, J. García-Serrano, G. Torres-Vargas, and C. Tapia-Ignacio, Temperature and particle concentration dependent effective potential in a bi-dimensional nonvibrating granular model for a glass-forming liquid, Physica A 524 (2019) 56.

15. M. J. Sánchez-Miranda, J. L. Carrillo-Estrada, and F. Donado, Crystallization processes in a nonvibrating magnetic granular system with short range repulsive interaction, Scientific Reports 9 (2019) 3531.

16. R. Rivas-Barbosa et al., Different routes into the glass state for soft thermo-sensitive colloids, Soft Matter 14 (2018) 5008.

17. Y. Zhao, Z. Wu, and W. Liu, Theoretical and analytical radial distribution function for dense fluids, Physica A 389 (2010) 5007.

18. M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids, 1st ed. (Oxford University Press, Oxford, 1987).

19. B. A. Klumov, On the behavior of indicators of melting: Lennard-Jones system in the vicinity of the phase transition, JETP Lett. 98 (2013) 259.

20. P. L. Fehder, Anomalies in the radial distribution functions for simple liquids, J. Chem. Phys. 52 (1970) 791.

21. Y. Vasseghian, A. Bahadori, A. Khataee, E. N. Dragoi, and M. Moradi, Modeling the interfacial tension of water-based binary and ternary systems at high pressures using a neuro-evolutive technique, ACS Omega 5 (2020) 781.

22. L. H. Hall and C. T. Story, Boiling point and critical temperature of a heterogeneous data set, J. Chem. Inf. Comput. Sci. 36 (1996) 1004.

23. A. Azari, S. Atashrouz, and H. Mirshekar, Boiling point and critical temperature of a heterogeneous data set: QSAR with atom type electrotopological state indices using artificial neural network, ISRN Chemical Engineering 36 (2013) 1.

24. K. Golzar, S. Amjad-Iranagh, and H. Modarress, Prediction of thermophysical properties for binary mixtures of common ionic liquids with water or alcohol at several temperatures and atmospheric pressure by means of artificial neural network, Ind. Eng. Chem. Res. 53 (2014) 7247.

25. K. Suzuki, Artificial Neural Networks: Architectures and Applications (InTech, Croatia, 2013).

26. D. Livingstone, D. Manallack, and I. Tetko, Data modelling with neural networks: Advantages and limitations, J. Comput. Aided Mol. Des. 11 (1997) 135.

27. J. Bourquin, H. Schmidli, P. van Hoogevest, and H. Leuenberger, Advantages of artificial neural networks (ANNs) as alternative modelling technique for data sets showing non-linear relationships using data from a galenical study on a solid dosage form, Eur. J. Pharm. Sci. 7 (1998) 5.

28. J. Sola and J. Sevilla, Importance of input data normalization for the application of neural networks to complex industrial problems, IEEE Trans. Nucl. Sci. 44 (1997) 1464.

29. J. A. Snyman and D. N. Wilke, Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms (Springer, Switzerland, 2018).

30. C. C. Aggarwal, Neural Networks and Deep Learning (Springer, Switzerland, 2018).

31. D. P. Kingma and J. Ba, Adam: A method for stochastic optimization (2015), arXiv:1412.6980.

32. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, Cambridge, 2016).

33. K. Eckle and J. Schmidt-Hieber, A comparison of deep networks with ReLU activation function and linear spline-type methods, Neural Networks 110 (2019) 232.

34. U. Que-Salinas, P. E. Ramírez-González, and A. Torres-Carbajal, Determination of thermodynamic state variables of liquids from their microscopic structures using an artificial neural network, Soft Matter 17 (2021) 1975.

35. A. Morsali, E. K. Goharshadi, G. A. Mansoori, and M. Abbaspour, An accurate expression for radial distribution function of the Lennard-Jones fluid, Chem. Phys. 310 (2005) 11.

36. G. Rutkai, M. Thol, R. Span, and J. Vrabec, How well does the Lennard-Jones potential represent the thermodynamic properties of noble gases?, Mol. Phys. 115 (2017) 1104.

37. L. S. Tee, S. Gotoh, and W. E. Stewart, Molecular parameters for normal fluids. Lennard-Jones 12-6 potential, Ind. Eng. Chem. Fundam. 5 (1966) 356.

38. S.-K. Oh, Modified Lennard-Jones potentials with a reduced temperature-correction parameter for calculating thermodynamic and transport properties: Noble gases and their mixtures (He, Ne, Ar, Kr, and Xe), Journal of Thermodynamics 2013 (2013) 828620.

39. A. Rahman, Correlations in the motion of atoms in liquid argon, Phys. Rev. 136 (1964) A405.

40. I. R. McDonald and K. Singer, Calculation of thermodynamic properties of liquid argon from Lennard-Jones parameters by a Monte Carlo method, Discuss. Faraday Soc. 43 (1967) 40.

41. J. A. Barker, R. A. Fisher, and R. O. Watts, Liquid argon: Monte Carlo and molecular dynamics calculations, Mol. Phys. 21 (1971) 657.

Received: January 19, 2022; Accepted: May 17, 2022

This is an open-access article distributed under the terms of the Creative Commons Attribution License.