
Annals of Nuclear Energy 141 (2020) 107334

Contents lists available at ScienceDirect

Annals of Nuclear Energy


journal homepage: www.elsevier.com/locate/anucene

Wall temperature prediction at critical heat flux using a machine learning model

Hae Min Park a, Jong Hyuk Lee b, Kyung Doo Kim b,*

a Maritime Reactor Development Division, Korea Atomic Energy Research Institute, 111 Daedeok-daero 989 Beon-gil, Yuseong-gu, Daejeon 34057, Republic of Korea
b Reactor System Safety Research Division, Korea Atomic Energy Research Institute, 111 Daedeok-daero 989 Beon-gil, Yuseong-gu, Daejeon 34057, Republic of Korea

Article history: Received 4 October 2019; Received in revised form 1 January 2020; Accepted 11 January 2020

Keywords: Wall temperature; Critical heat flux; Machine learning; SPACE code

Abstract

To determine the heat transfer regimes of pre- and post-CHF, the SPACE code calculates the wall temperature from a nucleate boiling heat transfer model at the given CHF. This needs iterations and consumes a large amount of computing time. To reduce the calculation time, this paper introduces the application of a machine learning method. A large database of the wall temperature at CHF was built by using a subprogram constructed as is from the SPACE code. Based on that database, neural network models were trained, and two neural network models having different configurations were suggested. The developed neural network models were implemented in the SPACE code and test calculations were performed. The neural network applied SPACE code properly predicted the wall temperature at CHF. In the test calculations, the calculation time was also investigated. All suggested neural network models greatly enhanced the calculation speed, with a maximum time reduction of 86%.

© 2020 Elsevier Ltd. All rights reserved.

1. Introduction

On the boiling curve, the critical heat flux (CHF) is considered a turning point between the nucleate boiling (high heat transfer coefficient) and transition boiling (low heat transfer coefficient) regions (Incropera et al., 2007). To determine the heat transfer regime of pre- and post-CHF, thermal hydraulic system codes, such as the SPACE (Ha et al., 2011; KHNP et al., 2013) and TRACE (USNRC, 2008) codes, calculate the wall temperature corresponding to the CHF (Tw,CHF) and compare it with the wall temperature. In the SPACE code, developed by various nuclear industries and institutes (KHNP, KAERI, KEPCO E&C and KEPCO NF) in the Republic of Korea, Tw,CHF is always checked when the wall temperature is over the saturation temperature. In the SPACE code, first, the CHF is obtained for the thermal-hydraulic (TH) conditions given by the initial conditions and the hydraulic solver based on two-fluid and three-field equations. At the given CHF, the SPACE code calculates the wall temperature based on heat transfer correlations for nucleate boiling. Those processes for obtaining Tw,CHF need some iterative procedures. Because of those complex processes and the demand for Tw,CHF at every time step and for every boiling regime, the calculation of Tw,CHF via iterations based on the CHF correlation and heat transfer correlations requires a large amount of CPU time to complete the entire calculation.

To reduce computing time, the machine learning methodology was used in this study to develop a simple model for the Tw,CHF calculation. Machine learning, based on neural networks, has recently been utilized in many fields of engineering, including physics modeling (Moon et al., 1996; Grag et al., 2017; Khayet et al., 2011; Tracey, 2015; Chang and Dinh, 2017), image and signal processing (Ciresan et al., 2012; Ciresan et al., 2010; Behnke, 2003) and so on. Additionally, machine learning is a useful modeling tool that can solve non-linear regression problems with multiple variables (Moon et al., 1996; Grag et al., 2017; Khayet et al., 2011; Tracey, 2015; Chang and Dinh, 2017).

In the field of TH modeling, Moon et al. (1996) developed a neural network model using a large CHF database in the 1990s. The accuracy of the trained CHF model was good, and the parametric trends for inlet subcooling, mass flux, diameter, heated length, pressure and exit quality matched well with general understanding. In recent research, there have been some applications of machine learning to computational fluid dynamics (CFD) codes. Tracey (2015) developed a neural network model based on a simulation database for turbulent flow. Chang and Dinh (2017) built a deep neural network which learned the heat transfer equation. Their machine learning models worked properly in the CFD code with high fidelity.

* Corresponding author.
E-mail addresses: haeminpark@kaeri.re.kr (H.M. Park), leejonghyuk@kaeri.re.kr (J.H. Lee), kdkim@kaeri.re.kr (K.D. Kim).

https://doi.org/10.1016/j.anucene.2020.107334
0306-4549/© 2020 Elsevier Ltd. All rights reserved.
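The iterative search for Tw,CHF described above can be pictured with a short sketch: at a given CHF, find the wall temperature at which a nucleate-boiling heat-flux closure q(Tw) equals the CHF, using Newton-Raphson with a bisection fallback, as the paper attributes to SPACE. The closure q below is a toy monotonic placeholder, not the actual Chen/SPACE implementation.

```python
# Sketch of the iterative Tw,CHF search: solve q(Tw) = CHF for Tw.
# The Newton-Raphson/bisection combination mirrors the scheme the
# paper describes; q() is a toy closure, not Chen's correlation.

def solve_tw_at_chf(q_of_tw, chf, t_sat, t_max=2000.0, tol=1e-3, max_iter=50):
    lo, hi = t_sat, t_max            # bracket for the bisection fallback
    tw = t_sat + 10.0                # initial guess: 10 K wall superheat
    for _ in range(max_iter):
        f = q_of_tw(tw) - chf
        if abs(f) < tol * chf:       # converged to within 0.1% of the CHF
            return tw
        # numerical derivative for the Newton step
        dq = (q_of_tw(tw + 0.01) - q_of_tw(tw - 0.01)) / 0.02
        tw_new = tw - f / dq if dq != 0.0 else 0.5 * (lo + hi)
        if not lo < tw_new < hi:     # Newton left the bracket: bisect
            tw_new = 0.5 * (lo + hi)
        if q_of_tw(tw_new) > chf:    # shrink the bracket around the root
            hi = tw_new
        else:
            lo = tw_new
        tw = tw_new
    return tw

# Toy monotonic closure: heat flux grows with wall superheat cubed.
q = lambda tw: 2.0e3 * (tw - 373.15) ** 3
tw_chf = solve_tw_at_chf(q, chf=1.0e6, t_sat=373.15)
```

The real SPACE iteration additionally re-evaluates the water properties and the S factor at each step, which is exactly the per-step cost the neural network surrogate described later removes.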

Nomenclature

Cpl      liquid heat capacity (J/kg·K)
Dh       hydraulic diameter (m)
F        Reynolds number factor
G        mass flux (kg/m²s)
hfg      latent heat (J/kg)
hliq     liquid heat transfer coefficient (W/m²K)
hmac     macroconvective heat transfer coefficient (W/m²K)
hmic     microconvective heat transfer coefficient (W/m²K)
kl       liquid thermal conductivity (W/m·K)
P        pressure (Pa)
Pw,sat   saturation pressure for wall temperature (Pa)
q″       heat flux (W/m²)
S        suppression factor
Tf       fluid temperature (K)
Ts       saturation temperature (K)
Tw       wall temperature (K)
Tw,CHF   wall temperature at CHF (K)
x        thermodynamic quality

Greek symbols
α        void fraction
μl       viscosity (Pa·s)
ρl       liquid density (kg/m³)
ρg       vapor density (kg/m³)
σ        surface tension (N/m)

Abbreviations
CFD      computational fluid dynamics
CHF      critical heat flux
GPU      graphic processing unit
TH       thermal-hydraulic

 
As an application of machine learning to a system code, the machine learning method was adopted in this study to predict the Tw,CHF calculated in the SPACE code. To develop the machine learning models, a Tw,CHF database was constructed using the SPACE code. Based on the constructed database, machine learning models with different frameworks were trained. The developed machine learning model may reduce the computing time because the computing process of the machine learning model is relatively simple. Test calculations were performed to observe how fast the machine learning models properly predict Tw,CHF in comparison with the existing model.

2. Basic knowledge

2.1. Theoretical review of wall temperature prediction at the CHF

This section introduces how Tw,CHF is obtained in the SPACE code. Basically, Chen's correlation (Chen, 1966) for nucleate boiling is utilized by assuming that the CHF phenomenon is in the high heat flux region of nucleate boiling. Chen suggested that the total heat transfer rate is the sum of the heat transfer rates resulting from the macroconvective behavior in the whole flow channel and the microconvective behavior near the heated wall, as shown in the following equation:

    q″ = hmac (Tw − Tf) + hmic (Tw − Ts)    (1)

where q″ is the heat flux, hmac and hmic are the heat transfer coefficients for the macroconvective and microconvective behaviors, and Tw, Tf and Ts are the wall, liquid and saturation temperatures, respectively. In saturated boiling, Tf is identical to Ts. hmac is the product of the heat transfer coefficient for the single-phase liquid (hliq) and the F factor, as follows:

    hmac = hliq F    (2)

hliq can be obtained from the Dittus-Boelter equation for forced convection, and from the McAdams correlation (McAdams, 1954) and the Warner & Arpaci correlation (Warner and Arpaci, 1968) for natural convection. For the F factor, Chen provided a value set with respect to the inverse of the Martinelli parameter (Chen, 1966). hmic was given by the Forster and Zuber formulation (Forster and Zuber, 1955) and the S factor as follows:

    hmic = 0.00122 [kl^0.79 Cpl^0.45 ρl^0.49 / (σ^0.5 μl^0.29 hfg^0.24 ρg^0.24)] (ΔT)^0.24 (ΔP)^0.75 S    (3)

    where ΔT = Tw − Ts, ΔP = Pw,sat − P

and kl, Cpl, ρl, σ, μl, hfg, ρg, Pw,sat and P are the liquid thermal conductivity, liquid heat capacity, liquid density, surface tension, liquid viscosity, latent heat of vaporization, vapor density, saturation pressure at Tw, and pressure, respectively. The S factor was also given in Chen's correlation in terms of the two-phase Reynolds number (Chen, 1966).

Thus, the heat flux in equation (1) is a function of the wall temperature for the given TH conditions of pressure, temperature, thermodynamic quality, mass flux and heat flux. Owing to the saturation pressure term (Pw,sat) for Tw in equation (3), the wall temperature (Tw) cannot be solved directly for the given TH conditions and CHF, and some iterative steps are required. Via iterative approximation using the Newton-Raphson method and the bisection method, the SPACE code acquires a numerical solution of the wall temperature at the CHF. In the SPACE code, the CHF is given by the 2006 CHF lookup table developed by Groeneveld et al. (2007) and the pool boiling CHF model developed by Kim et al. (2016). Usually, a solution of Tw,CHF can be obtained within 5 iterations, but in some cases more than 10 iterations are required.

2.2. Machine learning

A key tool of machine learning is the neural network, which is a non-linear processing system. McCulloch and Pitts (1943) suggested the concept of the neural network for the first time in 1943. At that time, the neural network did not receive attention because it was very difficult to train and computing resources were significantly insufficient. Nowadays, computing capability is highly developed, and recently many researchers have been applying neural networks in many fields.

The concept of the neural network was literally inspired by biological neural systems. The neural network consists of neurons and the connections between them. The neuron is a single computing cell, and the hidden layer shown in Fig. 1 is composed of several neurons. In each neuron, the weighted sum of the outputs of the previous layer, with a bias added, is transformed by an activation function. The activation function can be one of the following equations:

    Sigmoid: φ(x) = 1 / (1 + e^−x)

    Hyperbolic tangent: φ(x) = (e^x − e^−x) / (e^x + e^−x)

    Rectified Linear Unit (ReLU): φ(x) = max(0, x)

Fig. 1. Neural network configuration.

The transformed output via the selected activation function is passed to the neurons in the next hidden layer or the output layer. Therefore, the whole framework of the neural network contains an input layer, hidden layers, and an output layer. Such network frames "learn" tasks or functions by considering many examples, so-called big data. The learning process finds the best values of the weight factors and the biases, and this is achieved by using the back propagation method. The learning efficiency strongly depends on which activation function is selected in the neural network. Generally, the ReLU function is more efficient than the Sigmoid and hyperbolic tangent functions for the learning process (Nair and Hinton, 2010).

The accuracy of the neural network depends on the number of hidden layers and the number of neurons in each hidden layer. Depending on the complexity of the target problem, the configuration of the neural network should be optimized.

3. Results and discussion

To develop the neural network based machine learning models, a big database was built for Tw,CHF calculated from the SPACE code. Using the database, the neural network model was trained and implemented in the SPACE code. In comparison with the existing calculation scheme, the advantages of the machine learning model were investigated.

3.1. Construction of the neural network based machine learning model

In this study, the neural network based machine learning model was developed via the following three steps.

3.1.1. Step 1. Find the independent variables

To construct the neural network, the inputs and the output should be properly selected. The accuracy of the neural network model is highly dependent on which inputs and output are chosen. By taking into account the theoretical review of the calculation scheme in the SPACE code, several important TH terms and a geometric parameter have been found. As independent parameters, there were four input variables: mass flux (G), pressure (P), thermodynamic quality (x) and hydraulic diameter (Dh). Using those four variables, all of the important terms, such as the saturated water properties, the CHF, the two-phase Reynolds number, the Martinelli parameter, and the macroconvective and microconvective heat transfer coefficients, can be obtained. Even in subcooled boiling, based on P and x, the fluid temperature (Tf) can be obtained from the corresponding enthalpy, and consequently the other important terms mentioned above can be calculated from the selected four variables. In the SPACE code, the CHF is quite important for obtaining Tw,CHF, but the CHF is also a function of G, P, x, and Dh. Therefore, the CHF was not considered as an input of the neural network. The output of the neural network was selected to be the wall superheat at the CHF, ΔTw,CHF, which is the difference between the wall temperature and the saturation temperature:

    ΔTw,CHF = f(G, P, x, Dh)    (4)

3.1.2. Step 2. Produce the Tw,CHF data

In the training process of the neural network, the Tw,CHF database is essential. To develop a better neural network model, the big database should be well made.

To generate the database, ΔTw,CHF should be calculated in the same way as in a SPACE simulation. To obtain an enormous amount of data over a wide range of the selected input variables (G, P, x and Dh), the values of Tw,CHF have to be calculated as a function of the selected input variables. However, it is difficult for the SPACE code itself to control the input variables for the Tw,CHF calculation because G, P and x are not input values but calculated values. A subprogram, using inputs of G, P, x and Dh, should be built to calculate the wall temperature at CHF. In this study, to produce the Tw,CHF data rapidly and efficiently, the Tchf program was developed. In the Tchf program, all functions and procedures related to the CHF and Tw,CHF were used as is in the SPACE code. Fig. 2 shows the procedure for calculating Tw,CHF. For the given G, P, x and Dh, first, the CHF is obtained by using the 2006 AECL lookup table (Groeneveld et al., 2007) for flow boiling and the pool boiling CHF model developed by Kim et al. (2016). In this step, only the K1 correction factor (Groeneveld et al., 2005) for the hydraulic diameter was considered among the eight correction factors. Based on Chen's correlation with the given CHF and heat transfer coefficients, the Tchf program finally obtains an iterative solution of Tw,CHF.

Fig. 2. Tchf calculation scheme in the Tchf program (left) and machine learning model (right).

To confirm that the developed Tchf program works properly, verifications of the Tchf program for the calculation of the CHF and Tw,CHF were performed via comparison with the 2006 AECL lookup table and with the SPACE simulation of Bennett's experiments (Bennett et al., 1968). As demonstrated in Tables 1 and 2, good agreement with the comparison sets was found.

Table 1. Test calculation of the Tchf program and comparison with the CHF lookup table (Dh = 0.008 m).

Pressure (kPa) | Mass flux (kg/m²s) | Quality (-) | CHF, 2006 AECL lookup table (kW/m²) | CHF, Tchf program (kW/m²) | Tchf, Tchf program (K)
100    | 1000 | 0.1 | 2349 | 2349               | 461.6
100    | 5000 | 0.5 | 1030 | 1030               | 382.8
100    | 100  | 0.8 | 459  | 464 (pool boiling) | 382.8
1000   | 1000 | 0.1 | 4351 | 4351               | 526.3
1000   | 5000 | 0.5 | 1109 | 1109               | 513.2
1000   | 100  | 0.8 | 708  | 129 (pool boiling) | 463.0
10,000 | 1000 | 0.1 | 3793 | 3793               | 617.8
10,000 | 5000 | 0.5 | 575  | 575                | 607.7

Using the Tchf program, Tw,CHF was calculated for each data point of the 2006 AECL CHF lookup table. Taking into account the sampling data of the CHF lookup table, the total number of Tw,CHF data was determined to be 215,061, and all data were used for constructing the Tchf database. The range of the Tchf database covered that of the 2006 AECL CHF lookup table (Table 3), except for some points having negative enthalpy. The constructed database was divided into two groups according to the boiling condition: flow boiling (≥ 200 kg/m²s) and pool boiling (< 200 kg/m²s). The reason for the database separation is that the SPACE code uses Kim's correlation, not the CHF lookup table, in a pool boiling condition. In the 2006 AECL CHF lookup table, the mass flux is an important variable for determining a flow boiling CHF, but the pool boiling CHF has no dependency on the mass flux in Kim's correlation. Consequently, the divided Tchf databases have 194,579 and 40,964 data for the flow and pool boiling conditions, respectively. Both databases include a dataset (20,482 data) at 200 kg/m²s to maintain the continuity of the Tw,CHF predicted by the neural network models for the flow and pool boiling conditions. Those databases were utilized as a training set to develop the neural network models.

Independently of the training database, a validation set was produced. Within the range of the Tchf database (Table 3), random points were selected, and a total of 20,000 data were utilized as a validation set.

3.1.3. Step 3. Train the neural network model

To develop the neural network based machine learning model, the Tensorflow code (Abadi et al., 2016) developed by Google was used. The Tensorflow code facilitates adjusting the number of neurons and hidden layers and selecting the activation function. As the activation function, the ReLU function was employed to configure the deep neural networks. The Tensorflow
code is capable of using graphic processing units (GPUs), which are specialized for deep learning. A Geforce GTX 1080 Ti GPU was employed in a personal computer.

Table 2. Test calculation of the Tchf program and comparison with SPACE (Bennett's experiments, Dh = 0.0126 m).

Pressure (MPa) | Mass flux (kg/m²s) | Quality (-) | CHF, Tchf program (kW/m²) | CHF, SPACE (kW/m²) | Tchf, Tchf program (K) | Tchf, SPACE (K)
6.91 | 1072 | 0.033 | 4862 | 4854 | 598.1 | 598.0
6.95 | 1407 | 0.256 | 2668 | 2665 | 594.7 | 594.7
6.93 | 1008 | 0.669 | 942  | 946  | 572.8 | 573.0

Table 3. Range of the Tchf database.

Mass flux | 10 – 8000 kg/m²s
Pressure  | 100 – 21,000 kPa
Quality   | −0.5 – 0.99
Diameter  | 1 – 90 mm

As mentioned above, the neural network should be trained on the database for the target problem to find the best weight factors and biases. In this study, the Tchf database obtained from the Tchf program was utilized. Based on that database, the numbers of neurons and hidden layers were optimized in the fully connected configuration shown in Fig. 2. Via several tests with various numbers of neurons and hidden layers, configurations of the neural network with three hidden layers and 10 – 15 neurons in each hidden layer were determined to have good accuracy. The neural network model having three hidden layers and 15 neurons in each hidden layer had the best accuracy, and one more neural network model, with three hidden layers and 10 neurons in each hidden layer, was suggested for comparison of accuracy and calculation speed.

As listed in Table 4, all neural network models have root-mean-square (RMS) absolute errors of less than ±2.5 K in comparison with the training set. For the validation set, all neural network models predicted well, with RMS errors similar to those for the training set. Comparisons between the predictions from the neural network models and the Tchf database (training and validation sets) are plotted in Figs. 3 and 4.

Table 4. Accuracies of the developed neural network models.

Neural network model | RMS absolute error (training set) | RMS absolute error (validation set)
10 × 10 × 10 neurons | ±2.35 K (flow boiling: ±2.47 K, pool boiling: ±1.69 K) | ±2.09 K
15 × 15 × 15 neurons | ±1.49 K (flow boiling: ±1.57 K, pool boiling: ±1.07 K) | ±1.45 K

Fig. 3. Comparison of neural network prediction (10 × 10 × 10) with the SPACE calculation.

Fig. 4. Comparison of neural network prediction (15 × 15 × 15) with the SPACE calculation.

3.2. Implementation of the machine learning model in the SPACE code

All neural network models developed in this study were implemented in the SPACE code. As in the original SPACE code, Tw,CHF is obtained in the neural network applied SPACE code if the wall temperature is higher than the saturation temperature. The developed neural network models provide Tw,CHF without using the CHF lookup table and the pool boiling CHF correlation. Exceptionally, the neural network applied SPACE code calculates the CHF only when the CHF itself is needed. In the transition boiling region, the CHF is used to obtain a heat transfer rate by interpolation between the CHF and the heat flux at the minimum film boiling point. Also, in the film boiling condition, the CHF is needed to obtain a correction factor for the heat transfer rate in the flow developing region. Therefore, the neural network applied SPACE code first obtains Tw,CHF and, if the boiling regime is transition boiling or the flow developing condition in the film boiling regime, the CHF is provided by the existing method of the CHF lookup table and the pool boiling CHF model. When the neural network applied SPACE code calculates Tw,CHF, the corresponding neural network model is used according to the range of mass flux: the flow or pool boiling condition.

For validation cases, Bennett's experiments (Bennett et al., 1968) and the FLECHT-SEASET boiloff tests (Wong and Hochreiter, 1981), covering the nucleate boiling, transition boiling and film boiling regions, were selected. Bennett's experiments, using a single vertical tube made of Nimonic 80A, provided the axial wall temperature
distributions, and seven test series were used for the validation (Table 5). The FLECHT-SEASET boiloff tests (unblocked bundle tests) measured the wall temperature with respect to time, and among the three different experiments of the FLECHT-SEASET boiloff tests, only No. 35557 was used for the test calculations. The original SPACE code was already validated against those two tests, and in this study it was examined whether the neural network applied SPACE code could follow the calculations of the original. Using the pre-made SPACE input sets for the selected two tests, the developed neural network models were tested.

Table 5. Experimental conditions of Bennett's experiments and the FLECHT-SEASET boiloff tests.

Case | Pressure (MPa) | Mass flux (kg/m²s) | Subcooling (K)
Bennett's experiments
Run No. 5358 | 6.9 | 393  | 34.4
Run No. 5336 | 6.9 | 665  | 26.3
Run No. 5271 | 6.9 | 1004 | 23.0
Run No. 5246 | 6.9 | 1356 | 24.4
Run No. 5294 | 6.9 | 1953 | 18.8
Run No. 5312 | 6.9 | 2536 | 19.3
Run No. 5379 | 6.9 | 3798 | 11.0
FLECHT-SEASET boiloff tests
No. 35557 | 0.41 | Pool boiling (no injection) | Saturated

Comparisons between the original SPACE code and the neural network applied SPACE code are shown in Figs. 5–12. For Bennett's experiments, there were small differences in the wall temperature as well as in Tw,CHF in some cases (Nos. 5358 and 5336), but the neural network applied SPACE code predicted the wall temperature, Tw,CHF and the CHF locations well in all cases (Figs. 5–11). For all cases, the neural network model with 15 neurons reproduced the calculation results of the original SPACE code more accurately than the neural network model with 10 neurons. For the FLECHT-SEASET boiloff tests, although Tw,CHF fluctuated slightly in the neural network applied cases, the wall temperature and the time of the onset of the CHF in the original SPACE code were the same as those of the neural network applied cases (Fig. 12).

Fig. 5. Comparison of wall temperature and Tchf – Bennett's experiments Run No. 5358.

Fig. 6. Comparison of wall temperature and Tchf – Bennett's experiments Run No. 5336.

Fig. 7. Comparison of wall temperature and Tchf – Bennett's experiments Run No. 5271.

The deviations of the wall temperature and Tw,CHF shown in the validation results were clearly dependent on the accuracy of the developed neural network models. For the pool boiling condition (FLECHT-SEASET boiloff tests), the developed neural network models have good accuracies, and there was no difference in the wall temperature and the onset of the CHF obtained from the neural network applied and original SPACE codes. For the flow boiling condition (Bennett's experiments), the Tw,CHF and wall temperature obtained from the neural network model with 15 neurons, which has the better accuracy, closely overlapped the calculation of the original SPACE code.

Another objective of this study is to enhance the calculation speed. Table 6 compares the calculation speeds. In Table 6, the calculation time is the average of three repeated calculations, and the total calculation time and the Tw,CHF calculation time for each test case are compared. As shown in the comparison results, all neural network models had an advantage in reducing the calculation time. The computing time varied with the size of the neural network, and the neural network having fewer neurons improved the calculation speed more. Therefore, the neural network model with 10 neurons in each hidden layer shows the better enhancement of the calculation speed (66–86% time reduction). The other suggested neural network model, with more neurons (15 in each hidden layer), also brought a reduction in the calculation time
(36–69%), with a good consistency of the wall temperature and the timing of the onset of CHF in the original SPACE calculation.

Those advantages resulted from the omission of intermediate steps, such as the repeated calculations of the water properties, the F factor, the S factor, the Reynolds number, the heat transfer coefficient, the CHF and so on, as well as from the simple configuration of the neural network. As shown in Fig. 2, using only the input values (G, P, x and Dh), the developed neural network directly calculates the wall temperature at the CHF without calculating the nucleate boiling heat transfer and the CHF.

Fig. 8. Comparison of wall temperature and Tchf – Bennett's experiments Run No. 5246.

Fig. 9. Comparison of wall temperature and Tchf – Bennett's experiments Run No. 5294.

Fig. 10. Comparison of wall temperature and Tchf – Bennett's experiments Run No. 5312.

Fig. 11. Comparison of wall temperature and Tchf – Bennett's experiments Run No. 5379.

4. Conclusion

The goals of this study were to develop machine learning models to predict the wall temperature at the CHF without calculating the CHF and solving Tw,CHF iteratively, and secondly to reduce the calculation time spent in the SPACE code. To train the neural network, a key methodology of machine learning, a big database was made by using the Tchf program built as is from the SPACE code. Based on the database, deep neural networks having three hidden layers were trained for two neuron numbers: 10 and 15 neurons. Both neural network models had good accuracies of less than ±2.5 K in comparison with the database. The neural network with more neurons had the better accuracy, and the best accuracy was ±1.5 K for the neural network model having 15 neurons in each hidden layer.

The developed neural network models were implemented in the SPACE code. To investigate whether the neural network models work properly, test calculations were conducted for Bennett's experiments and the FLECHT-SEASET boiloff tests and compared with the original SPACE code. The neural network applied SPACE code properly predicted Tw,CHF with small differences from the original code. The wall temperature history calculated from the neural network applied SPACE code was also closely matched.

Simultaneously with the test calculations, the calculation speed was also investigated. Typically, the neural network model with fewer neurons was faster than that with more neurons. Compared with the original SPACE code, the neural network model with 10 neurons in each hidden layer reduced the calculation time substantially (by a maximum of 86%). The neural network model with more neurons also had an advantage in enhancing the calculation speed (36–69% time reduction), with a better consistency with the wall temperature and the timing of the onset of CHF calculated from the original SPACE code.
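As a concrete picture of what the implemented surrogate evaluates at runtime, the forward pass of the smaller configuration summarized above (4 inputs; three fully connected ReLU hidden layers of 10 neurons; one linear output for ΔTw,CHF) can be written in a few lines of NumPy. This is only a shape-level sketch: the weights are random stand-ins, since the trained parameters are not published, and input normalization is omitted.

```python
import numpy as np

# Forward pass matching the paper's smaller configuration:
# 4 inputs (G, P, x, Dh) -> 10 -> 10 -> 10 -> 1 output (dTw,CHF).
# Random weights stand in for the trained values (not published).
rng = np.random.default_rng(0)
sizes = [4, 10, 10, 10, 1]
weights = [0.1 * rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def predict(x):
    a = np.asarray(x, dtype=float)
    for w, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(a @ w + b, 0.0)           # ReLU hidden layers
    return (a @ weights[-1] + biases[-1]).item()  # linear output layer

dtw = predict([1000.0, 7.0e6, 0.1, 0.008])  # G, P, x, Dh in SI units
```

Evaluating this is a handful of small matrix products and max operations, which is why it is so much cheaper than the property-and-iteration loop it replaces.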

Table 6. Comparison of calculation time: total calculation time, with the calculation time for Tw,CHF in parentheses (unit: sec).

Case | Original SPACE | NN* applied SPACE (10 × 10 × 10) | NN applied SPACE (15 × 15 × 15)
Bennett's experiments
Run No. 5358 | 20.51 (1.02) | 19.84 (0.35) | 20.06 (0.56)
Run No. 5336 | 33.27 (1.53) | 32.43 (0.48) | 32.94 (0.98)
Run No. 5271 | 25.26 (1.89) | 24.02 (0.27) | 23.95 (0.72)
Run No. 5246 | 23.25 (1.82) | 21.23 (0.27) | 22.29 (0.57)
Run No. 5294 | 33.61 (2.61) | 31.62 (0.38) | 32.50 (1.03)
Run No. 5312 | 35.67 (2.72) | 34.09 (0.48) | 35.10 (0.99)
Run No. 5379 | 54.05 (4.19) | 50.77 (0.88) | 51.69 (1.49)
FLECHT-SEASET boiloff tests
No. 35557 | 20.02 (4.62) | 16.49 (0.90) | 18.07 (2.24)

* NN: neural network model
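The 66–86% range quoted for the 10 × 10 × 10 model can be reproduced directly from the Tw,CHF columns of Table 6:

```python
# Percentage reduction in Tw,CHF calculation time for the 10x10x10
# network, computed from the values in Table 6 (original vs. NN applied).
cases = {                       # case: (original s, NN 10x10x10 s)
    "5358": (1.02, 0.35), "5336": (1.53, 0.48), "5271": (1.89, 0.27),
    "5246": (1.82, 0.27), "5294": (2.61, 0.38), "5312": (2.72, 0.48),
    "5379": (4.19, 0.88), "35557": (4.62, 0.90),
}
reduction = {k: round(100 * (1 - nn / orig)) for k, (orig, nn) in cases.items()}
# reduction spans 66% (Run 5358) to 86% (Run 5271), matching the text
```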

In this study, neural network models were found to be effective in speeding up SPACE calculations, and the feasibility and usefulness of the machine learning method in TH modeling and its application to nuclear system codes were successfully demonstrated. In the future, other applications to TH modeling, such as heat transfer coefficient modeling and the prediction of wall temperature (or peak cladding temperature), will be possible, and the same approach as in this study can be applied.

Fig. 12. Comparison of wall temperature and Tchf – FLECHT-SEASET boiloff tests No. 35557 (axial location: a: 7/20, b: 10/20 and c: 14/20).

CRediT authorship contribution statement

Hae Min Park: Methodology, Software, Investigation, Writing - original draft. Jong Hyuk Lee: Software, Validation, Formal analysis. Kyung Doo Kim: Conceptualization, Supervision, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported and funded by Korea Hydro & Nuclear Power Co., LTD.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.anucene.2020.107334.

References

Incropera, F.P., Dewitt, D.P., Bergman, T.L., Lavine, A.S., 2007. Fundamentals of Heat and Mass Transfer. John Wiley & Sons, New Jersey.
Ha, S.J., Park, C.E., Kim, K.D., Ban, C.H., 2011. Development of the SPACE Code for Nuclear Power Plants. Nucl. Eng. Technol. 43 (1), 45.
KHNP, KAERI, KEPCO E&C, 2013. "SPACE 2.14 Manual Volume 1 Theory Manual," S06NX08-K-1-TR-36, Rev. 1, October 2013.
USNRC, 2008. "TRACE V5.0 Theory Manual, Field Equations, Solution Method and Physical Models," ML071000097, USA (Jun. 2008).
Moon, S.K., Baek, W.P., Chang, S.H., 1996. Parametric Trends Analysis of the Critical Heat Flux Based on Artificial Neural Networks. Nucl. Eng. Des. 163, 29.
Grag, S., Shariff, A.M., Shaikh, M.S., Lal, B., Suleman, H., Faiqa, N., 2017. Experimental Data, Thermodynamics and Neural Network Modeling of CO2 Solubility in Aqueous Sodium Salt of L-phenylalanine. J. CO2 Util. 19, 146.
Khayet, M., Cojocaru, C., Essalhi, M., 2011. Artificial Neural Network Modeling and Response Surface Methodology of Desalination by Reverse Osmosis. J. Membr. Sci. 368, 202.
Tracey, B.D., 2015. Machine Learning for Model Uncertainties in Turbulence Models and Monte Carlo Integral Approximation. Ph.D. Thesis, Stanford University, Department of Aeronautics and Astronautics (Jun. 2015).
Chang, C.W., Dinh, N., June 2017. Development of Uncertainty-Guided Deep Learning with Application to Thermal Fluid Closures. Multiphysics Model Validation Workshop, North Carolina, USA.
Ciresan, D.C., Meier, U., Schmidhuber, J., 2012. Multi-column Deep Neural Networks for Image Classification. Technical Report No. IDSIA-04-12, Switzerland (Feb. 2012).
Ciresan, D.C., Meier, U., Gambardella, L.M., Schmidhuber, J., 2010. Deep, Big, Simple Neural Nets for Handwritten Digit Recognition. Neural Comput. 22, 3207.
Behnke, S., 2003. Hierarchical Neural Networks for Image Interpretation. Springer, New York.
Chen, J.C., 1966. Correlation for Boiling Heat Transfer to Saturated Fluids in Convective Flow. Ind. Eng. Chem. Process Des. Dev. 5 (3), 322.
McAdams, W.H., 1954. Heat Transmission. McGraw-Hill, New York.
Warner, C.Y., Arpaci, V.S., 1968. An Experimental Investigation of Turbulent Natural Convection in Air along a Vertical Heated Flat Plate. Int. J. Heat Mass Transfer 11, 397.
Forster, H.K., Zuber, N., 1955. Dynamics of Vapor Bubbles and Boiling Heat Transfer. AIChE J. 1 (4), 531.
Groeneveld, D.C., Shan, J.Q., Vasic, A.Z., Leung, L.K.H., Durmayaz, A., Yang, J., Cheng, S.C., Tanase, A., 2007. The 2006 CHF Look-up Table. Nucl. Eng. Des. 237, 1909.
Kim, B.J., Lee, J.H., Kim, K.D., 2016. Improvements of Critical Heat Flux Models for Pool Boiling on Horizontal Surfaces Using Interfacial Instabilities of Viscous Potential Flows. Int. J. Heat Mass Transfer 93, 200.
McCulloch, W.S., Pitts, W., 1943. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 5, 115.
Nair, V., Hinton, G.E., 2010. Rectified Linear Units Improve Restricted Boltzmann Machines. In: The 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, June 2010.
Groeneveld, D.C., Leung, L.K.H., Guo, Y., Vasic, A., Nakla, M.E., Peng, S.W., Yang, J., Cheng, S.C., 2005. Lookup Tables for Predicting CHF and Film-Boiling Heat Transfer: Past, Present, and Future. Nucl. Technol. 152, 87.
Bennett, A.W., Hewitt, G.F., Kearsey, H.A., Keeys, R.K.F., 1968. Heat Transfer to Steam-Water Mixtures Flowing in Uniformly Heated Tubes in Which the Critical Heat Flux Has Been Exceeded. AERE-R5373, United Kingdom.
Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D.G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., Zheng, X., November 2016. TensorFlow: A System for Large-Scale Machine Learning. 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, USA.
Wong, S., Hochreiter, L.E., 1981. Analysis of the FLECHT SEASET Unblocked Bundle Steam-Cooling and Boiloff Tests. EPRI NP-1460, NUREG/CR-1533, WCAP-9729 (May 1981).
