Series editor: J. Paulo Davim, Department of Mechanical Engineering, University of Aveiro, Aveiro, Portugal
More information about this series at http://www.springer.com/series/15734
Jiuping Xu • Mitsuo Gen, Editors
Editors
Jiuping Xu, Business and Administration, Sichuan University, Chengdu, China
Mitsuo Gen, Fuzzy Logic Systems Institute; Tokyo University of Science, Tokyo, Japan
Asaf Hajiyev, Azerbaijan National Academy of Sciences, Baku, Azerbaijan
Fang Lee Cooke, Monash University, Clayton, VIC, Australia
Executive Committee
General Chairs
Program Committee
Secretary-General
Under-Secretary-General
Secretaries
Volume 1
Computing Methodology
A Comparison of Pretest, Stein-Type and Penalty Estimators in Logistic Regression Model . . . 19
Orawan Reangsephet, Supranee Lisawadi, and Syed Ejaz Ahmed
Multi-objective Job Shop Rescheduling with Estimation of Distribution Algorithm . . . 35
Xinchang Hao, Lu Sun, and Mitsuo Gen
Multi-Queue Priority Based Algorithm for CPU Process Scheduling . . . 47
Usman Rafi, Muhammad Azam Zia, Abdul Razzaq, Sajid Ali, and Muhammad Asim Saleem
An Order-Based GA for Robot-Based Assembly Line Balancing Problem . . . 63
Lin Lin, Chenglin Yao, and Xinchang Hao
A Hybrid Model of AdaBoost and Back-Propagation Neural Network for Credit Scoring . . . 78
Feng Shen, Xingchao Zhao, Dao Lan, and Limei Ou
Effects of Urban and Rural Residents’ Behavior Differences in Sports and Leisure Activity: Application of the Theory of Planned Behavior and Structural Equation Modeling . . . 91
Linling Zhang
Data Analysis
How to Predict Financing Efficiency in Public-Private Partnerships–In an Aspect of Uncertainties . . . 247
Yixin Qiu, Umair Akram, Sihan Lin, and Muhammad Nazam
The Moderating Effects of Capacity Utilization on the Relationship Between Capacity Changes and Asymmetric Labor Costs Behavior . . . 260
Abdulellah Azeez Karrar, DongPing Han, and Sobakinova Donata
The Empirical Analysis of the Impact of Technical Innovation on Manufacturing Upgrading-Based on Subdivision Industry of China . . . 274
Dan Jiang and Yuan Yuan
A Crash Counts by Severity Based Hotspot Identification Method and Its Application on a Regional Map Based Analytical Platform . . . 286
Xinxin Xu, Ziqiang Zeng, Yinhai Wang, and John Ash
Comparison Between K-Means and Fuzzy C-Means Clustering in Network Traffic Activities . . . 300
Purnawansyah, Haviluddin, Achmad Fanany Onnilita Gafar, and Imam Tahyudin
RDEU Evolutionary Game Model and Simulation of the Network Group Events with Emotional Factors . . . 311
Guoqiang Xiong, Xian Wang, Ying Yang, and Yuxi Liu
Effects of Internet Word-of-Mouth of a Tourism Destination on Consumer Purchase Intention: Based on Temporal Distance and Social Distance . . . 321
Mo Chen and Jingdong Chen
Analysis and Prediction of Population Aging Trend Based on Population Development Model . . . 331
Jiancheng Hu
Evaluation of Progressive Team Intervention on Promoting Physical Exercise Behavior . . . 341
Xinyan Guo
SEM-Based Value Generation Mechanism from Open Government Data in Environment/Weather Sector . . . 351
Xiaoling Song, Charles Shen, Lin Zhong, and Feniosky Peña-Mora
Impact of Management Information Systems Techniques on Quality Enhancement Cell’s Report for Higher Education Commission of Pakistan . . . 367
Faraz Ullah Khan and Asif Kamran
Volume 2
Risk Control
Machine Learning and Neural Network for Maintenance Management . . . 1377
Alfredo Arcos Jiménez, Carlos Quiterio Gómez Muñoz, and Fausto Pedro García Márquez
Volatility Spillover Between Foreign Exchange Market and Stock Market in Bangladesh . . . 1389
Shibli Rubayat and Mohammad Tareq
Cost/Efficiency Assessment of Alternative Maintenance Management Policies . . . 1395
Diego Ruiz-Hernández and Jesús María Pinar-Pérez
Heijunka Operation Management of Agri-Products Manufacturing by Yield Improvement and Cropping Policy . . . 1407
Ritsuko Aoki and Hiroshi Katayama
Optimizing Reserve Combination with Uncertain Parameters (Case Study: Football) . . . 1417
Masoud Amel Monirian, Hamideh Razavi, and Shabnam Mahmoudzadeh Vaziri
A Bayesian-Based Co-Cooperative Particle Swarm Optimization for Flexible Manufacturing System Under Stochastic Environment . . . 1428
Lu Sun, Lin Lin, and Haojie Li
Exchange Rate Movements, Political Environment and Chinese Outward FDI in Countries Along “One Belt One Road” . . . 1439
Wenjing Zu and Haiyue Liu
Jiuping Xu
1 Introduction
The Eleventh International Conference on Management Science and Engineering
Management (ICMSEM) in Kanazawa, Japan, has given Management Science
and Engineering Management (MSEM) academics the opportunity to present
their innovative findings in this increasingly popular research field. The papers
in this volume demonstrate the substantial growth in interdisciplinary MSEM
methodologies and practical applications. The ICMSEM aims to be the primary
forum for academic researchers and practitioners to become involved in discussions on state-of-the-art MSEM research and development.
MSEM has shown tremendous growth and expansion since its beginnings. In
particular, Management Science (MS) has had a long history associated with universal and impersonal management research disciplines. Scientific management,
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference on Management Science and Engineering Management, Lecture Notes on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_1
of course, owes its beginnings to the innovative ideas of Taylor and Fayol and
their associates, who at the turn of the century attempted to solve industry problems through strict time monitoring, technical production methods, incentive
wage payments, and rational factory organization based on efficiently structured
work assignments [1,6]. In the period following the Second World War, modern
analytical management methods brought the business practice into a new era.
Terms such as Decision Theory, Operations Research, System Engineering, and
Industrial Dynamics, which were practically unknown in the early fifties, are now
as well-known as Accounting or Finance [12]. As time passed, with societal, economic, organizational, and cultural factors playing increasingly important roles in management practice, MS drew on mathematics, information science, systems science, cybernetics, statistics, and other theories and methods from the natural sciences to develop innovative management and control systems. The integration of these research areas has brought significant improvements to the ICMSEM proceedings Volume I, which this year focuses mainly on computing methodology, data analysis, enterprise operations management, and decision support systems.
2 Literature Review
Literature reviews give insight into the focus of, and new research directions in, a particular research field. The widespread research attention given to MS has mainly focused on four areas: computing methodology, data analysis, enterprise operations management, and decision support systems. Therefore, in this section, we review the pertinent research that has taken place in each of these four areas.
technological innovation in the future will draw on the advantages offered by big data [9,15]. Because of the promise of more automated information exchanges in networked enterprise scenarios, enterprise information systems (EIS) have become increasingly important for interoperability to increase productivity and efficiency [2]. With improvements in enterprise operations management and the application of advanced management tools, operating efficiency is being continually enhanced, thereby benefiting enterprise operations.
into two proceedings volumes of 75 papers each. The significance of the keywords lies not only in their frequency but also in the keyword connections that demonstrate how these papers revolve around MS concepts and metrics. Information visualization methods have also proven valuable in discovering patterns, trends, clusters, and outliers, even in complex social networks. After receiving the submissions, we sorted the keywords using the software Keywords Match to perform keyword matching and identify similarities. Finally, the processed keywords were put into the NodeXL software; the results are shown in Fig. 1.
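The keyword co-occurrence screening described above can be reproduced with general-purpose tools as well. The sketch below uses plain Python with hypothetical keyword lists; the `papers` data and the degree threshold are illustrative stand-ins, not the editors' actual submissions or the NodeXL workflow.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-paper keyword lists (illustrative stand-ins for the submissions)
papers = [
    ["data analysis", "big data", "clustering"],
    ["computing methodology", "genetic algorithm", "scheduling"],
    ["data analysis", "decision support system", "big data"],
]

# Count keyword co-occurrences: one edge (a, b) per keyword pair within a paper
edges = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        edges[(a, b)] += 1

# Degree of each keyword in the resulting co-occurrence graph
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Keep only heavily connected keywords (cf. screening for degree > 6 in the text)
core = sorted(k for k, d in degree.items() if d >= 3)
print(core)
```

With real submissions, the `edges` counter would be exported to a graph tool such as NodeXL for the visualization step.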
From this figure, it is not easy to identify the most important and most heavily researched areas. Therefore, after calculating and screening for keyword degrees greater than six, the graphical results shown in Fig. 2 were obtained. It can be seen that the central issues of the eleventh ICMSEM proceedings Volume I have mainly been computing methodology, data analysis, enterprise operations management, and decision support systems. In the following, we focus on some related papers published in the proceedings to highlight this year's mainstream research.
There was a great deal of interest in the computing methodology section. Usman Rafi et al. discuss various scheduling terms and scheduling algorithms and propose a new scheduling approach based on a mixture of MQMS, a priority scheduling mechanism, and round-robin scheduling. Lin et al. propose a Bayesian network based particle swarm optimization (BNPSO) for solving a stochastic scheduling problem. Tahyudin and Nambo determine the rules of numerical association rule mining and calculate the multi-objective function using a combination of PSO and the Cauchy distribution. Shen et al. propose an AdaBoost algorithmic model based on a back-propagation neural network for highly accurate and efficient credit scoring. Both classical and intelligent algorithms are used to develop the model solution.
The data analysis section also presents some innovative work. Yuko et al. provide a method to convert Braille books into machine-readable electronic data. In another interesting study, Yamamoto researches the number of visitors in each period and their characteristics based on mobile phone user location data collected by a mobile phone company. Zeng et al. develop a crash-counts-by-severity hotspot identification method by extending the traditional empirical Bayes method to a generalized nonlinear model-based mixed multinomial logit approach. In addition, Hu determines future population trends in Sichuan, China through the establishment of a population development equation. All these papers solve problems by analyzing data and developing useful tools.
For enterprise operations management, Sawada and Yoshida present a novel approach to increase the attractiveness of Facebook pages; Guo et al. use a Hotelling model to analyze the influence of pricing and profit on platform enterprises; Lu uses IPA methods to help identify the strengths and weaknesses of tourist satisfaction with traditional culture; and Khan et al. provide cognitive support for developing optimized employee motivation levels by raising consciousness about applied approaches and unrealistic ideas. A propensity score matching method is proposed by Tang to assess how cross listing affects corporate performance. In short, researchers have studied many aspects of enterprise operations management to improve efficiency and performance.
In the decision support system section, some practical applications to solve decision problems are presented. Pakkak proposes an integrated data envelopment analysis (DEA) and analytic hierarchy process (AHP) approach to obtain attribute weights; Sarwar et al. use a fuzzy analytic hierarchy process (AHP) and an extent analysis method to choose an appropriate supplier in a fuzzy environment; Benmessaoud et al. focus on the real-time monitoring of a wind farm based on big data collected by the Supervisory Control and Data Acquisition (SCADA) system, and show how maintenance decision-making can be assured by the SCADA system; and Zhang et al. provide a corresponding pricing strategy for product crowdfunding, offering inspiration and reference for enterprises and entrepreneurs when making pricing strategy decisions. Integrating modern technology into traditional methods has become a tendency in the study of decision support systems.
classes are shown in timezone view in Fig. 4. As can be seen, management science has been in dynamic development at the points where keyword repetition is relatively high; however, there are no stable phases. Some indirectly related keywords were scattered in the small nodes around the center (not shown), reflecting the immaturity of some past management science research and its combinations with other research areas.
and methods, models, systems, and frameworks), data analysis (statistical tools, data mining, data processes, and big data), enterprise operations management (technology, information, innovation, governance, business, and economics), and decision support systems (systems, decision making, and strategies) all reflect the most pressing areas at the moment. Environmental management (ecosystems, sustainability, water resources) has also appeared as one of the main foci in the ICMSEM proceedings. When reviewing the previous years' proceedings, the
5 Conclusion
In this paper, we briefly introduced the four main areas covered in proceedings Volume I and summarized previous research in these areas. Using the keyword analysis function in NodeXL, the most prominent topics in these four areas were identified. We also itemized the main research foci in the ICMSEM proceedings Volume I to help readers better understand the content of this year's papers. Finally, we analyzed the MS and ICMSEM development trends using the literature analysis tool CiteSpace, which found that the research foci in the ICMSEM proceedings Volume I are consistent with, but slightly different from, mainstream MS research. Further work is needed to identify all areas from MSEM journals so as to identify the ICMSEM expectations for the future.
Acknowledgements. The author gratefully acknowledges Jingqi Dai and Lin Zhong’s
efforts on the paper collection and classification, Zongmin Li and Lurong Fan’s efforts
on data collation and analysis, and Ning Ma and Yan Wang’s efforts on the chart
drawing.
References
1. Ackoff RL (1962) Scientific method: optimizing applied research decisions
2. Agostinho C, Ducq Y et al (2015) Towards a sustainable interoperability in networked enterprise information systems: trends of knowledge and model-driven technology. Comput Ind 79:64–76
3. Aiello G, Giovino I et al (2017) A decision support system based on multisensor data fusion for sustainable greenhouse management. J Cleaner Prod. doi:10.1016/j.jclepro.2017.02.197
4. Anicic O, Petković D, Cvetkovic S (2016) Evaluation of wind turbine noise by soft computing methodologies: a comparative study. Renew Sustain Energy Rev 56(1–2):1122–1128
5. Arnott D, Pervan G (2005) A critical analysis of decision support systems research. J Inf Technol 20(2):67–87
6. Beged-Dov AG (1967) An overview of management science and information systems. Manage Sci 13(12):817–831
7. Bi Z, Xu LD, Wang C (2014) Internet of things for enterprise systems of modern manufacturing. IEEE Trans Ind Inf 10(2):1537–1546
8. Camarinha-Matos LM, Afsarmanesh H et al (2009) Collaborative networked organizations-concepts and practice in manufacturing enterprises. Comput Ind Eng 57(1):46–60
9. Chen CLP, Zhang CY (2014) Data-intensive applications, challenges, techniques and technologies: a survey on big data. Inf Sci 275(11):314–347
10. De Bruijn B, Martin J (2002) Getting to the (c)ore of knowledge: mining biomedical literature. Int J Med Inf 67(1):7–18
11. Gotmare A, Bhattacharjee SS et al (2016) Swarm and evolutionary computing algorithms for system identification and filter design: a comprehensive review. Swarm Evol Comput 32:68–84
12. Hall AD (1962) A methodology for systems engineering. Van Nostrand, Princeton
1 Introduction
For the past few decades, simultaneous variable selection and estimation of submodel parameters has become popular. Many predictors may be available to explain a response of interest in the initial model. Some of these predictors may be inactive and not influential; these should be excluded from the final model, which then represents a sparsity pattern in the predictor space, to achieve parsimony, flexibility, and reliability. Several researchers, following this information in statistical modeling, have used either the full model or a candidate submodel.
The logistic regression model, also called the logit model, is the most widely used model for analyzing independent binary response data in medical, engineering, and other studies. It assumes that the logit of the response variable can be modelled by a linear combination x_i^⊤β of unknown parameters, where x_i = (x_{i1}, x_{i2}, ..., x_{ip})^⊤ is the p × 1 vector of the p predictors for the ith subject and β = (β_1, β_2, ..., β_p)^⊤ is the p × 1 vector of regression parameters. Detailed information on logistic regression can be found in the books by Hilbe [8] and Hosmer and Lemeshow [10].
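As a concrete illustration of this model, the full-model maximum likelihood estimate can be computed by Newton-Raphson iteration; the sketch below uses simulated data (not the paper's) and is a standard textbook approach, not code from the article.

```python
import numpy as np

def logit_mle(X, y, iters=25):
    """Unrestricted MLE for the model logit(p_i) = x_i' beta, via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p)                          # score vector
        hess = X.T @ (X * (p * (1.0 - p))[:, None])   # observed information matrix
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))              # standardized predictors x_i
beta_true = np.array([1.0, -0.5, 0.0, 0.0])    # the last two predictors are inactive
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta_true))))

beta_ue = logit_mle(X, y)                      # full-model (unrestricted) estimate
```

The inactive coordinates of `beta_ue` are estimated near zero but not exactly zero, which is why the submodel, pretest, and shrinkage strategies discussed next are of interest.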
In this article, we consider the problem of estimating the logistic regression
model when the response variable may be related to many predictors, some
of which may be inactive. Prior information about inactive predictors may be
incorporated in the full model to produce the candidate submodel.
The pretest (preliminary test) estimation strategy, inspired by Bancroft, and the shrinkage estimation strategy, inspired by Stein, efficiently combine the full model and submodel estimators in an optimal way to achieve an improved estimator. Numerous authors have discussed the pretest, shrinkage, and penalty estimation strategies in many fields, including Ahmed and Amezziane [2], Ahmed and Yüzbaşı [4], Al-Momani et al. [5], Gao, Ahmed, and Feng [6], Hossain, Ahmed, and Doksum [12], and Yüzbaşı and Ahmed [16,17]. For the logistic regression model, shrinkage estimators and three penalty estimators, namely LASSO, adaptive LASSO, and SCAD, were considered by Hossain and Ahmed [11], while Lisawadi, Shah, and Ahmed [13] considered pretest estimation.
As is well known, ridge regression (Hoerl and Kennard [9]) has been widely used when there are many possible predictors, to improve the precision of an estimate. Ahmed et al. [3] found that ridge regression is highly efficient and stable when there are many predictors with small effects. Hence, we suggest ridge regression for the logistic regression model. In this article, we propose pretest and shrinkage estimators for the logistic regression model when it is a priori suspected that the parameters may be restricted to a subspace, and we compare the resulting estimators to the classical maximum likelihood estimator as well as to the penalty estimators, i.e., the LASSO and ridge regression estimators. A Monte Carlo simulation study is carried out, using the simulated relative efficiency, to appraise the performance of the proposed estimators.
To further illustrate the proposed estimators in the logistic regression model, we apply them to the South African heart disease data set and provide a bootstrap approach to compute the simulated relative efficiency (SRE) and simulated relative prediction error (SPE) of the estimators. Details of this data set are described in Sect. 4. Hossain, Ahmed, and Doksum [12] also considered these data in the generalized linear model via the pretest estimator, the positive-part Stein-type shrinkage estimator, and three penalty estimators, namely LASSO, adaptive LASSO, and SCAD. The performance of these estimators is evaluated in terms of the simulated relative efficiency (SRE).
Under the prior information about inactive predictors, the full parameter vector β can be partitioned as β = (β_1^⊤, β_2^⊤)^⊤, where β_1 and β_2 represent the p_1 × 1 active parameter subvector and the p_2 × 1 inactive parameter subvector, respectively, such that p = p_1 + p_2. Therefore, our interest lies in estimating the active parameter subvector β_1 when information on β_2 is readily available. In other words, this information about the inactive parameters may be used to estimate β_1 when their values are near some specified value β_{20}. Without loss of generality, it is plausible that β_2 may be set to a zero vector, β_2 = 0. Keep in mind that the candidate submodel estimator is more efficient than the full model estimator when the candidate submodel is correct. On the other hand, the submodel estimator may not be reliable, and may become considerably inefficient, when the candidate submodel incorrectly represents the data at hand.
The remainder of this article is organized as follows: the model and the efficient estimation strategies are proposed in Sect. 2, the results of a Monte Carlo simulation study are reported in Sect. 3, real data applications are described in Sect. 4, and finally, discussion and conclusions are presented in Sect. 5.
z_i = x_i^⊤ β,  (1)
where λ defines the degree of confidence in the given prior information and is a fixed constant. The linear shrinkage (LS) estimator shrinks β̂^UE toward β̂^RE. If λ = 0, the LS estimator simplifies to the unrestricted estimator, while it simplifies to the restricted estimator when λ = 1. The linear shrinkage estimator performs better than the unrestricted and restricted MLEs in some parts of the parameter space.
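Since the linear shrinkage estimator is just a convex combination of the two fits, it is straightforward to sketch; the coefficient vectors below are hypothetical stand-ins for already-computed unrestricted and restricted fits (the restricted fit fixes the inactive coordinates at zero).

```python
import numpy as np

def linear_shrinkage(beta_ue, beta_re, lam):
    """beta_LS = lam * beta_RE + (1 - lam) * beta_UE, with 0 <= lam <= 1."""
    return lam * np.asarray(beta_re, float) + (1.0 - lam) * np.asarray(beta_ue, float)

beta_ue = np.array([0.9, -0.4, 0.1, -0.05])   # hypothetical unrestricted fit
beta_re = np.array([1.0, -0.5, 0.0, 0.0])     # restricted fit under beta_2 = 0
beta_ls = linear_shrinkage(beta_ue, beta_re, 0.25)
```

Setting `lam = 0` recovers `beta_ue` and `lam = 1` recovers `beta_re`, matching the limiting cases described in the text.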
where I(·) is an indicator function, and L_{n,α} is the α-level critical value of the exact distribution of a suitable test statistic L_n under H_0: β_2 = β_{20}. For testing H_0: β_2 = β_{20}, the likelihood ratio statistic L_n is suggested:

L_n = −2 log( L(β̂^RE) / L(β̂^UE) ) = 2 [ l(β̂^UE) − l(β̂^RE) ],  (8)
where l(β̂^UE) and l(β̂^RE) are the values of the log-likelihood at the unrestricted and restricted estimates, respectively. Under H_0, the distribution of L_n converges to a chi-square distribution with p_2 degrees of freedom as n → ∞.
Clearly, the pretest estimator takes the value of the unrestricted estimator when the test statistic lies in the rejection region; otherwise, it takes the value of the restricted estimator. This estimator is limited by the potentially large size of the pretest.
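Using the asymptotic chi-square critical value in place of the exact L_{n,α}, the pretest rule can be sketched as follows; the fitted coefficient vectors and log-likelihood values are hypothetical, for illustration only.

```python
import numpy as np
from scipy.stats import chi2

def pretest_estimator(beta_ue, beta_re, llf_ue, llf_re, p2, alpha=0.05):
    """Pretest: keep the restricted fit unless H0: beta_2 = beta_20 is rejected."""
    Ln = 2.0 * (llf_ue - llf_re)            # likelihood ratio statistic of Eq. (8)
    crit = chi2.ppf(1.0 - alpha, df=p2)     # asymptotic chi-square critical value
    return (beta_ue if Ln >= crit else beta_re), Ln

# Hypothetical fits: the restricted model is only slightly worse in log-likelihood
beta_pt, Ln = pretest_estimator(
    beta_ue=np.array([0.9, -0.4, 0.1]),
    beta_re=np.array([1.0, -0.5, 0.0]),
    llf_ue=-120.3, llf_re=-121.0, p2=2, alpha=0.05)
```

Here L_n = 1.4 is far below the 5% chi-square critical value with 2 degrees of freedom, so the restricted estimator is retained.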
Ahmed [1] found that the shrinkage pretest estimator significantly improves upon the pretest estimator in terms of the size α, and that it dominates the unrestricted estimator over a large portion of the parameter space. For λ = 1, the pretest estimator is used to estimate the parameter, while the UE is used when λ = 0. Generally, estimators based on the pretest strategy are biased and inefficient when the null hypothesis does not hold.
alternatively,

β̂^S = β̂^UE − (p_2 − 2) L_n^{−1} ( β̂^UE − β̂^RE ),  p_2 ≥ 3.  (12)
For some insight into this estimator, we refer to Hossain, Ahmed, and Doksum [12] and Yüzbaşı and Ahmed [17], among others. The Stein-type shrinkage estimator provides uniform improvement over the unrestricted estimator. However, it tends to over-shrink the unrestricted estimator towards the restricted estimator when the test statistic L_n is very small in comparison with p_2 − 2. To avoid this over-shrinking behavior, a truncated version, called the positive-part Stein-type shrinkage estimator, is suggested.
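The truncation can be sketched directly from Eq. (12): capping the shrinkage factor (p_2 − 2)/L_n at one prevents the estimator from moving past β̂^RE. The coefficient vectors and the value of L_n below are hypothetical.

```python
import numpy as np

def stein_positive_part(beta_ue, beta_re, Ln, p2):
    """Positive-part Stein-type shrinkage: the Eq. (12) factor, truncated at one."""
    shrink = min((p2 - 2) / Ln, 1.0)   # untruncated factor over-shrinks when Ln < p2 - 2
    return beta_ue - shrink * (beta_ue - beta_re)

beta_ue = np.array([0.9, -0.4, 0.1, -0.05, 0.2])
beta_re = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
beta_spp = stein_positive_part(beta_ue, beta_re, Ln=1.0, p2=5)  # factor 3 -> truncated to 1
```

With `Ln = 1.0` and `p2 = 5`, the untruncated factor would be 3, so the positive-part estimator collapses to the restricted fit instead of overshooting it.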
where λ is the tuning parameter that controls the amount of shrinkage. The LASSO shrinks some coefficients to exactly zero; therefore, the LASSO procedure performs variable selection and parameter estimation simultaneously.
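This simultaneous selection-and-estimation behavior can be seen with a simple proximal-gradient (ISTA) sketch for L1-penalized logistic regression; this is a generic illustration with simulated data and an arbitrary tuning parameter, not the software used in the paper.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_logit(X, y, lam, step=0.1, iters=2000):
    """L1-penalized logistic regression by proximal gradient descent (ISTA)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (prob - y) / n                  # gradient of the average log-loss
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 6))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.8 * X[:, 1]))))
beta_lasso = lasso_logit(X, y, lam=0.1)   # inactive coefficients are shrunk toward exact zero
```

The active coefficients survive (shrunk toward zero by the penalty), while the coordinates with no true effect are typically set exactly to zero by the soft-thresholding step.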
where pi = P (yi = 1|xi ) and the predictor values xi have been drawn from a
standardized multivariate normal distribution.
where β^{(0)} = (β_1^⊤, 0^⊤)^⊤ and β is the true parameter in the simulated model.
Samples were generated using Δ∗ between 0 and 4.
The number of replications in the simulation was initially varied and it was
determined that N = 1, 000 iterations were adequate to obtain a stable result
for each combination of parameters.
Based on the simulated data, we estimated the MSE of all the proposed estimators. The performance of the estimators was evaluated using the notion of the simulated relative efficiency (SRE), the ratio of the simulated MSE of β̂^UE to that of the estimator under study. For any estimator β̂^*, the SRE of β̂^* with respect to β̂^UE is defined as
SRE(β̂^UE, β̂^*) = SimulatedMSE(β̂^UE) / SimulatedMSE(β̂^*)
               = Σ_{i=1}^{p} (β̂_i^UE − β_i)² / Σ_{i=1}^{p} (β̂_i^* − β_i)².  (19)
Keep in mind that an SRE larger than one indicates the degree of superiority of the estimator β̂^* over β̂^UE.
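Eq. (19), averaged over simulation replications, can be sketched as a small helper; the replicated estimates below are synthetic placeholders, not the paper's simulation output.

```python
import numpy as np

def sre(est_ue, est_star, beta_true):
    """Simulated relative efficiency of Eq. (19): MSE(UE) / MSE(candidate).
    Each estimate array has shape (N_replications, p)."""
    mse_ue = np.mean(np.sum((est_ue - beta_true) ** 2, axis=1))
    mse_star = np.mean(np.sum((est_star - beta_true) ** 2, axis=1))
    return mse_ue / mse_star

beta_true = np.array([1.0, 0.0])
est_ue = beta_true + 0.5 * np.ones((4, 2))     # hypothetical replicated UE estimates
est_star = beta_true + 0.25 * np.ones((4, 2))  # hypothetical candidate estimates
ratio = sre(est_ue, est_star, beta_true)       # > 1: the candidate beats the UE
```

Halving every coordinate error quadruples the efficiency ratio, consistent with the squared-error form of Eq. (19).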
Table 1. The SREs of the estimators with respect to the UE for λ = 0.25 at Δ∗ = 0.
Table 2. The SREs of the estimators with respect to the UE for λ = 0.50 at Δ∗ = 0
Table 3. The SREs of the estimators with respect to the UE for λ = 0.75 at Δ∗ = 0
submodel is used. We choose β4 = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.8, 1.0, 1.5, and 2.0
In this section, we apply the proposed estimators to the South African heart disease data set. Rousseauw et al. [14] described a retrospective sample of males in a heart-disease high-risk region of the Western Cape, South Africa. The study comprises 462 samples, and the set of variables is described in Table 5.
We note that the condition index (CI) is 392.718, which implies the existence of multicollinearity in this data set. After applying variable selection procedures based on the AIC criterion, the BIC criterion, and the LASSO, the results given in Table 6 were obtained.
Table 6 shows that the candidate submodels based on the AIC and BIC criteria contain 5 active predictors, while the LASSO selection procedure retains 7 active predictors. Hence, we consider the candidate submodel with the 5 active predictors tobacco, famhist, ldl, typea, and age. The restricted subspace is β_2 = (β_adiposity, β_obesity, β_alcohol, β_sbp) = (0, 0, 0, 0), with p = 9, p_1 = 5, and p_2 = 4.
To examine the performance of the proposed estimators for the candidate
submodel, we draw m = 250 bootstrap rows with replacement N = 1, 000 times
from the data. The performance of the proposed estimators with respect to the
Table 4. The SREs of RE, LS, PT, SP, S, and S+ with respect to the UE for λ = 0.75
p2  Δ∗  RE  LS  PT (α = 0.01, α = 0.05, α = 0.10)  SP (α = 0.01, α = 0.05, α = 0.10)  S  S+
5 0.0 2.581 2.311 2.428 2.057 1.806 2.195 1.909 1.705 1.597 1.726
0.1 1.328 1.611 1.141 1.046 1.016 1.299 1.139 1.082 1.279 1.309
0.2 0.884 1.227 0.836 0.882 0.907 0.976 0.957 0.960 1.187 1.195
0.3 0.666 0.988 0.773 0.873 0.920 0.882 0.928 0.954 1.150 1.152
0.4 0.54 0.833 0.795 0.910 0.958 0.879 0.947 0.975 1.126 1.127
0.5 0.452 0.716 0.863 0.948 0.968 0.917 0.967 0.980 1.111 1.111
0.8 0.309 0.515 0.958 0.988 0.996 0.973 0.992 0.998 1.088 1.088
1.0 0.255 0.434 0.984 1.000 1.000 0.989 1.000 1.000 1.081 1.081
1.5 0.184 0.322 1.000 1.000 1.000 1.000 1.000 1.000 1.074 1.074
2.0 0.152 0.269 1.000 1.000 1.000 1.000 1.000 1.000 1.070 1.070
7 0.0 3.065 2.66 2.873 2.477 2.079 2.523 2.232 1.924 1.959 2.147
0.1 1.701 1.962 1.406 1.215 1.147 1.550 1.286 1.197 1.499 1.540
0.2 1.183 1.548 1.000 0.973 0.975 1.140 1.042 1.021 1.349 1.359
0.3 0.909 1.29 0.876 0.927 0.955 0.985 0.978 0.984 1.282 1.284
0.4 0.748 1.116 0.857 0.939 0.963 0.938 0.971 0.981 1.240 1.241
0.5 0.643 0.993 0.891 0.962 0.979 0.944 0.979 0.989 1.217 1.217
0.8 0.449 0.735 0.969 0.991 0.996 0.981 0.994 0.997 1.169 1.169
1.0 0.376 0.632 0.989 0.997 1.000 0.993 0.998 1.000 1.155 1.155
1.5 0.278 0.483 1.000 1.000 1.000 1.000 1.000 1.000 1.139 1.139
2.0 0.222 0.394 1.000 1.000 1.000 1.000 1.000 1.000 1.129 1.129
10 0.0 4.693 3.702 3.987 3.087 2.396 3.277 2.680 2.175 2.612 2.942
0.1 2.43 2.661 1.982 1.531 1.366 2.124 1.604 1.414 1.944 2.048
0.2 1.669 2.124 1.301 1.135 1.074 1.499 1.230 1.135 1.686 1.730
0.3 1.263 1.755 1.029 0.991 0.983 1.197 1.079 1.037 1.569 1.584
0.4 1.024 1.510 0.911 0.931 0.949 1.048 0.997 0.99 1.485 1.49
0.5 0.868 1.335 0.857 0.926 0.955 0.974 0.974 0.983 1.433 1.434
0.8 0.610 1.001 0.890 0.959 0.980 0.942 0.977 0.989 1.326 1.327
1.0 0.508 0.858 0.928 0.980 0.992 0.959 0.988 0.995 1.300 1.300
1.5 0.368 0.648 0.996 1.000 1.000 0.998 1.000 1.000 1.262 1.262
2.0 0.295 0.529 1.000 1.000 1.000 1.000 1.000 1.000 1.236 1.236
15 0.0 6.502 4.665 4.986 3.510 2.651 3.879 2.976 2.372 3.58 3.961
0.1 3.570 3.566 2.568 1.735 1.462 2.570 1.742 1.469 2.569 2.662
0.2 2.466 2.926 1.575 1.258 1.164 1.703 1.310 1.197 2.163 2.196
0.3 1.906 2.506 1.219 1.083 1.046 1.330 1.132 1.075 1.949 1.956
0.4 1.549 2.188 1.062 1.018 1.011 1.152 1.053 1.029 1.818 1.819
0.5 1.323 1.951 0.993 0.993 0.992 1.057 1.017 1.005 1.712 1.713
0.8 0.927 1.503 0.971 0.992 0.997 0.991 0.997 0.999 1.560 1.560
1.0 0.793 1.331 0.984 0.995 0.998 0.993 0.998 0.999 1.500 1.500
1.5 0.574 1.015 0.999 0.999 1.000 0.999 0.999 1.000 1.423 1.423
2.0 0.458 0.827 1.000 1.000 1.000 1.000 1.000 1.000 1.380 1.380
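The bootstrap evaluation described in this section (drawing m rows with replacement N times and accumulating squared error around reference coefficients) can be sketched generically; the fitting callables and data below are toy stand-ins, not the paper's estimators or the heart disease data.

```python
import numpy as np

def bootstrap_sre(X, y, fit_full, fit_sub, beta_ref, m=250, N=1000, seed=0):
    """Resample m rows with replacement N times, refit both estimators, and
    return the ratio of their accumulated squared errors around beta_ref."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    sse_full = sse_sub = 0.0
    for _ in range(N):
        idx = rng.integers(0, n, size=m)   # one bootstrap sample of m rows
        sse_full += np.sum((fit_full(X[idx], y[idx]) - beta_ref) ** 2)
        sse_sub += np.sum((fit_sub(X[idx], y[idx]) - beta_ref) ** 2)
    return sse_full / sse_sub              # > 1: the submodel-based estimator wins

# Toy stand-ins for the fitting routines (not the paper's estimators)
rng = np.random.default_rng(2)
X = rng.standard_normal((462, 2))
y = rng.binomial(1, 0.5, size=462)
full = lambda Xb, yb: np.array([yb.mean(), Xb[:, 0].mean()])
sub = lambda Xb, yb: np.array([yb.mean(), 0.0])   # restricts the second coefficient to 0
sre_val = bootstrap_sre(X, y, full, sub, beta_ref=np.array([0.5, 0.0]), m=250, N=200)
```

Because the restriction happens to be correct for the toy data, the submodel-based fit wins and the ratio exceeds one, mirroring the Δ∗ = 0 rows of Table 4.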
Fig. 1. SREs of RE, LS, PT, SP, S, and S+ with respect to the UE when the candidate subspace misspecifies β4 as zero, as a function of Δ∗ = (β4)². Here, p1 = 3, α = 0.01.
Fig. 2. SREs of RE, LS, PT, SP, S, and S+ with respect to the UE when the candidate subspace misspecifies β4 as zero, as a function of Δ∗ = (β4)². Here, p1 = 3, α = 0.05.
Fig. 3. SREs of RE, LS, PT, SP, S, and S+ with respect to the UE when the candidate subspace misspecifies β4 as zero, as a function of Δ∗ = (β4)². Here, p1 = 3, α = 0.10.
Variable Description
Response variable
chd Coronary heart disease
Predictor variables
tobacco Cumulative tobacco (kg)
famhist Family history of heart disease, a factor with levels absent and present
ldl Low-density lipoprotein cholesterol
typea Type-A behavior
age Age at onset
adiposity Adiposity
obesity Obesity
alcohol Current alcohol consumption
sbp Systolic blood pressure
Table 6. Full and candidate submodels for the South African heart disease data
Note that SPE is less than one; this means the unrestricted estimator is doing
better. This study assumed the empirical distribution F̂ based on 462 actual
observations to be the true distribution and the resulting logistic regression
coefficient β̂’s to be the true parameter values. We assumed α = 0.01 and λ =
0.50. The results of the point estimates, standard errors, SREs, and SPEs of the
estimators are shown in Table 7.
Table 7 reveals that the restricted estimator is the best, and all the estimators
outperform the unrestricted estimator. The performance of the linear shrink-
Table 7. Estimates (first row) and standard errors (second row) of the coefficients for
active predictors. The SRE and SPE columns give the relative efficiency and relative
prediction error of the estimators with respect to UE, respectively.
Acknowledgements. We would like to thank all the referees, the Editor, and the
Associate Editor for their valuable suggestions on the revision of the article.
1 Introduction
In current manufacturing systems, production processes and management are
subject to many unexpected events, and new requirements emerge constantly. For
modern, flexible decision making in manufacturing systems, it is necessary to adapt
to unexpected changes through rescheduling. As a result, production managers in
volatile environments have to generate high quality schedules [3,8]. Rescheduling is
defined as a periodic or event-driven operation that rearranges activities for the next
time period based on both the state of the system and the existing schedule. Many
studies have addressed the rescheduling topic; notably, Vieira et al. presented a
comprehensive survey of most applications of rescheduling in manufacturing
systems [8]. This methodology is summarized in Table 1.
In this paper, the job shop rescheduling problem (JSRP) in a dynamic environment
is studied; more specifically, the study takes into account the effects of random job
arrivals and machine breakdowns. Since the JSRP is ubiquitous in manufacturing
systems, it has attracted great attention to design optimization methods
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 3
for finding effective solutions. It is well known that the job shop scheduling problem
is NP-hard, and various methods have been introduced to obtain optimal solutions.
From a practical point of view, many dispatching rules and composite dispatching
rules have been developed for the JSRP [8]. Sabuncuoglu and Kizilisik proposed
several reactive scheduling policies and tested their performance under various
experimental conditions, processing time variations, and random machine
breakdowns (Table 1). Sha and Liu presented extended data mining tools for
extracting knowledge of job scheduling with respect to due date assignment in a
dynamic job shop environment [7]. Vinod and Sridharan presented the salient
aspects of a simulation-based experimental study on scheduling rules in a dynamic
job shop where the setup times are sequence dependent [9]. However, a key
challenge is to balance available capacities among jobs at different processing
stages, especially in shops with re-entrant flow, such as semiconductor wafer
fabrication plants [5]. Furthermore, in job shop rescheduling, the arrival of a job for
a specific product is often unexpected, and critical job information is not available
in advance, which prevents schedulers from anticipating future workloads
effectively. However, all these studies transform the multi-objective problem into a
single-objective problem using a weighting method; they do not discuss the
implications of non-dominated solutions for multi-objective dynamic job shop
scheduling problems.
In this paper, to solve the moJSRP model within the framework of the proposed
MoEDA, the probability model of the operation sequence is estimated first. The
processing time of each operation is sampled with Monte Carlo methods, an
allocation method is used to decide the operation sequence, and the expected
makespan and total tardiness of each sample are then evaluated. Subsequently, an
updating mechanism for the probability models is proposed, based on the best
solutions obtained.
2 Mathematical Formulation
(1) Assumptions
The job shop scheduling problem (JSP) concerns the determination of the operation
sequences on the machines so that the makespan is minimized. It rests on several
assumptions, including the following:
A8. The first operation of the new job has to be started after a processing
operation has finished on some machine.
A9. For the new job, the processing time and machine assignment of each
operation are known in advance.
A10. There are no precedence constraints between the operations of the original
jobs and those of the new job.
A11. For the new job, neither release times nor due dates are specified.
The JSP has been confirmed to be an NP-hard combinatorial problem. There are N
jobs and M machines to be scheduled; furthermore, each job is composed of a set of
operations, and the operation order on the machines is pre-specified. Each operation
is characterized by its required machine and a fixed processing time [3].
(2) Notation
Indices:
i, i′, k: job indices, where i is an index of an original job, i′ is an index of any
job, and k is the index of the newly arrived job;
j, j′, l: operation indices, where j is an operation index of an original job, j′ is an
operation index of any job, and l is an operation index of the newly arrived job.
Parameters:
Decision Variables:
The first objective, Eq. (1), of the moJSRP is to minimize the makespan of the new
job. The second objective, Eq. (2), is to minimize the cost caused by disruption of
the original schedule, where the terms Δtij and Δt̄ denote the deviation of the start
times from the original schedule and its mean, respectively.
Inequality (5) ensures that the (l − 1)-th operation of job k is processed before the
l-th operation of the same job. Equation (6) expresses the operation precedence
constraints. Equation (7) ensures that the original operations that are fixed for
execution at the rescheduling point cannot be changed. Inequalities (8) and (9)
represent the nonnegativity restrictions.
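As a concrete reading of the two objectives, the sketch below evaluates them for a given schedule. The function names and the specific disruption measure are illustrative assumptions, not the paper's exact Eqs. (1)–(2): the deviations computed here correspond to the Δtij, and their mean to Δt̄.

```python
def makespan_of_new_job(finish_times_new_job):
    """Objective (1): completion time of the new job's last operation."""
    return max(finish_times_new_job)

def disruption_cost(original_starts, new_starts):
    """Objective (2), one plausible reading: the mean absolute start-time
    deviation between the original and the revised schedule."""
    deltas = [abs(n - o) for o, n in zip(original_starts, new_starts)]
    return sum(deltas) / len(deltas)
```

A schedule that leaves every original operation's start time unchanged incurs zero disruption cost under this reading.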
Multiple objective optimization problems have been receiving growing interest
from researchers with various backgrounds since the early 1960s, and a number of
scholars have made significant contributions to the field. In this paper, we present a
hybrid multiobjective estimation of distribution algorithm (h-MoEDA). In
h-MoEDA, the MoEDA takes care of exploration: it tries to identify the most
promising regions of the search space and models the distribution of the stochastic
variables. For NP-hard combinatorial optimization problems, the global search
dynamics of EDAs ought to be complemented with local search refinement [6].
Variable neighborhood search (VNS) is therefore used to improve solution quality;
the simple principle driving this improvement is the systematic change of
neighborhood within a possibly randomized local search.
In Fig. 1, h-MoEDA starts by randomly generating the solutions kept in the
population. A set of high-fitness solutions, referred to as promising solutions, is
selected from the population using a selection method, and the promising solutions
are used to learn the probability model. Thereafter, the probabilities defined by the
probability model are estimated, and new candidate solutions are sampled according
to the given sampling method. A problem-specific local search algorithm is then
used to improve each candidate solution until it reaches a local optimum. Finally,
the new solutions are evaluated, and those with high fitness are incorporated into a
solution pool, which keeps the individuals contributing to the makeup of promising
solutions. The iteration continues until the predefined termination criteria are met.
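The loop just described can be sketched as a generic skeleton. This is an illustrative single-objective outline only: `learn_model`, `sample`, and `local_search` are placeholder hooks, not the authors' components, and truncation selection stands in for whatever selection method is used.

```python
def h_moeda(init_pop, fitness, learn_model, sample, local_search,
            pool_size=50, generations=100, truncation=0.5):
    """Skeleton of the hybrid EDA loop described above (illustrative)."""
    population = list(init_pop)
    pool = []  # archive of the best solutions found so far
    for _ in range(generations):
        # 1. select the high-fitness fraction as promising solutions
        population.sort(key=fitness, reverse=True)
        promising = population[: max(1, int(truncation * len(population)))]
        # 2. estimate the probability model from the promising solutions
        model = learn_model(promising)
        # 3. sample new candidates and refine each one with local search
        candidates = [local_search(sample(model)) for _ in range(len(population))]
        # 4. evaluate and merge the candidates into the solution pool
        pool = sorted(pool + candidates, key=fitness, reverse=True)[:pool_size]
        population = candidates
    return pool
```

With a marginal-frequency model and per-position sampling, this skeleton reduces to a textbook univariate EDA; the paper's version differs in its operation-sequence model and PDDR-FF-based fitness.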
The following subsections present the vital components of the proposed h-MoEDA
in detail. First, the representation of a chromosome for an individual is described.
Then, the transition probability model is presented. Finally, the fitness assignment
mechanisms are discussed.
Fig. 3. Gantt chart of the schedule encoded by the operation sequence shown in Fig. 2
where α denotes the learning rate from the current promising solutions; in
particular, for α = 1, the probability distribution is completely reconstructed from
the current promising solutions.
To maintain the diversity of sampling, the distribution probability of X is updated
toward the estimated distribution. The distribution can be perturbed by mutation
with probability pm, where the mutation is performed using the following
definition:
P_{t+1}(X = x) = \frac{P_t(X = x) + \lambda_m}{\sum_{x' \in X \setminus \{x\}} \max\left(P_t(x') - \frac{\lambda_m}{|X| - 1},\ \varepsilon\right) + \left(P_t(X = x) + \lambda_m\right)}    (13)
where λm is the mutation shift that controls the amount of mutation, and ε is a small
probability value used to avoid negative probability values.
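One reading of Eq. (13) — mutation shifts mass λm toward the selected value x, reduces each other probability by λm/(|X| − 1) floored at ε, and renormalizes — can be sketched as follows. This is an assumed interpretation of the equation, not the authors' code.

```python
def mutate_distribution(p, x, lam_m, eps=1e-6):
    """Mutate a discrete distribution p toward index x (illustrative).

    Mass lam_m is added to p[x]; every other probability is reduced by
    lam_m / (|X| - 1) but floored at eps; the result is renormalized.
    """
    k = len(p)
    others = [max(p[i] - lam_m / (k - 1), eps) for i in range(k) if i != x]
    z = sum(others) + (p[x] + lam_m)  # normalizing constant
    q, j = [], 0
    for i in range(k):
        if i == x:
            q.append((p[x] + lam_m) / z)
        else:
            q.append(others[j] / z)
            j += 1
    return q
```

When no floor triggers, the shifted mass exactly balances the reductions, so the denominator is 1 and the update only moves probability toward x.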
Multiple objective optimization problems have attracted researchers from various
backgrounds since the early 1960s; among the pioneers, Pareto is perhaps the most
recognized in this field. Many multi-objective genetic algorithms differ mainly in
the fitness assignment strategy, which is known to be an important issue in solving
multiple objective optimization problems [1]. Zhang et al. proposed a hybrid
sampling strategy-based evolutionary algorithm (HSS-EA) in which a Pareto
dominating and dominated relationship-based fitness function (PDDR-FF) is used
to evaluate the individuals. The PDDR-FF of an individual Si is calculated by the
following fitness assignment function (14):
where q(·) is the number of individuals that dominate the individual Si, and p(·) is
the number of individuals dominated by Si. The PDDR-FF thus sets clearly
different values for nondominated and dominated individuals, as shown in Fig. 5.
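The form commonly quoted for PDDR-FF in the HSS-EA literature is eval(Si) = q(Si) + 1/(p(Si) + 1), with smaller values better; nondominated individuals then score below 1 and dominated ones at least 1, producing the obvious gap mentioned above. Assuming that form and minimized objectives, a sketch:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pddr_ff(objs):
    """PDDR-FF values for a list of objective vectors (smaller is better).

    q(S): count of individuals dominating S; p(S): count S dominates.
    Assumed form: eval(S) = q(S) + 1 / (p(S) + 1).
    """
    vals = []
    for i, s in enumerate(objs):
        q = sum(1 for j, t in enumerate(objs) if j != i and dominates(t, s))
        p = sum(1 for j, t in enumerate(objs) if j != i and dominates(s, t))
        vals.append(q + 1.0 / (p + 1))
    return vals
```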
To examine the practical viability and efficiency of the proposed MoEDA, we
designed a numerical study to compare MoEDA with efficient algorithms from
previous studies. The proposed MoEDA was compared with the adaptive weight
genetic algorithm (awGA) [1], the Non-dominated Sorting Genetic Algorithm II
(NSGA-II) [2], and the Strength Pareto Evolutionary Algorithm 2 (SPEA2) [10] on
a set of simulation data from standard benchmark problems. All of the above
algorithms were implemented in Java under the Eclipse environment, and the
simulation experiments were conducted on an Intel Core i5 (2.3 GHz clock) with
4 GB of memory. Data were collated from 30 test runs for each algorithm. In order
to compare the performance of these algorithms fairly and under the same
environment, the strategies of the related algorithms and their respective parameters
are presented in Table 2.
4.1 Convergence
Table 4 shows a comparison of the coverage achieved by MoEDA, awGA,
NSGA-II, and SPEA2. Figure 6 presents the distribution range of the coverage
measured on LA35. From Table 3 and Fig. 6, it is easy to see that h-MoEDA is
better than NSGA-II, SPEA2, and awGA on the C measure. This better
convergence is mainly attributable to the hybrid sampling strategy: VEGA's
preference for the edge regions of the Pareto front and PDDR-FF's tendency to
converge toward the centre area of the Pareto front.
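The C measure used here is presumably Zitzler's set coverage C(A, B): the fraction of solutions in B that are weakly dominated by at least one solution in A. Assuming that definition and minimized objectives, it can be computed as:

```python
def coverage(A, B):
    """Set coverage C(A, B): fraction of B weakly dominated by some a in A.

    A and B are lists of objective vectors; minimization is assumed.
    Note C(A, B) and C(B, A) are not symmetric and must both be reported.
    """
    def weakly_dominates(a, b):
        return all(x <= y for x, y in zip(a, b))
    covered = sum(1 for b in B if any(weakly_dominates(a, b) for a in A))
    return covered / len(B)
```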
The algorithms preserve better performance in both efficacy and efficiency. In
particular, h-MoEDA also keeps an even diversity without special distribution
mechanisms such as those of NSGA-II and SPEA2. h-MoEDA is better than
NSGA-II, SPEA2
Table 4. Comparison of the spacing measure for h-MoEDA, NSGA-II, SPEA2,
and awGA
5 Conclusion
This paper presented an effective h-MoEDA that solves the MoJSRP. It minimizes
the expected average makespan and expected total tardiness within a
References
1. Cauty R (2000) Genetic algorithms and engineering optimization. Wiley, New York
2. Deb K, Pratap A, Agarwal S et al (2002) A fast and elitist multiobjective genetic
algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
3. Gen M, Cheng R, Lin L (2008) Network models and optimization: multiobjective
genetic algorithm approach. Springer, London
4. Giffler B, Thompson GL (1960) Algorithms for solving production-scheduling prob-
lems. Oper Res 8(4):487–503
5. Kempf K (1994) Intelligently scheduling semiconductor wafer fabrication. In: Intel-
ligent Scheduling, pp 517–544
6. Michiels W, Aarts E, Korst J (2007) Theoretical aspects of local search. Springer,
Berlin
7. Sha DY, Liu CH (2005) Using data mining for due date assignment in a dynamic
job shop environment. Int J Adv Manufact Technol 25(11):1164–1174
8. Vieira GE, Herrmann JW, Lin E (2003) Rescheduling manufacturing systems: a
framework of strategies, policies, and methods. J Sched 6(1):39–62
9. Vinod V, Sridharan R (2008) Scheduling a dynamic job shop production system
with sequence-dependent setups: an experimental study. Rob Comput-Integr Man-
ufact 24(3):435–449
10. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto
evolutionary algorithm. In: Evolutionary methods for design, optimization and
control. CIMNE, Barcelona
Multi-Queue Priority Based Algorithm for CPU
Process Scheduling
1 Introduction
A computer system comprises software and hardware. Software includes programs
for system and user needs. Hardware is a set of resources including the CPU,
the GPU, RAM, and many more. The most important piece of software is the
operating system, which acts as an interfacing layer that lets users use the hardware
effectively. User needs are fulfilled by application software, which requires a base
software to run, i.e., the operating system. Software may be in a running state or not
running; the former is called a process, while the latter is called a program. A
process requires certain resources for execution. The CPU and RAM are the major
resources that every process requires, and the CPU is the most costly and important
of them. It must be utilized properly and should never be left underutilized; the
operating system must ensure proper utilization of the CPU.
A computer system may be a single-processor or a multi-processor system. A
single-processor system allows only one process to acquire the CPU and execute on
it at a time. A multi-processor system, on the other hand, allows more than one
process to be executed, one on each processor at a time.
A process changes its activity from time to time as it executes. Process activity is
described in terms of a two-state model or a five-state model. The two-state model
defines two process states, running and not running, and gives only an abstract idea;
it is shown in Fig. 1. Stallings [21] described a more elaborate version of the
two-state model, called the five-state model, which takes into account the reason a
process is not in the running state. The five-state model thus describes five process
activities: new, ready, running, waiting, and terminated. Figure 2 shows the
five-state model of process execution states.
Section 2 of the paper describes CPU scheduling, the need for it, the various queue
management systems, the queues involved in scheduling, and the scheduling
criteria. Section 3
2 CPU Scheduling
The ready queue is a pool of processes ready to be executed. The ready queue may
be partitioned, as is done in multilevel queue scheduling and multilevel feedback
queue scheduling. There are two ways to manage the ready queue:
(1) SQMS
SQMS (Single Queue Management System) involves no partitioning of the ready
queue: all incoming processes are stored in a single, unpartitioned ready queue. It is
a straightforward queue management scheme, because processes are picked for
execution from a single queue, so the operating system requires only a process
scheduling algorithm. Only a single queue has to be managed and scheduled, and
no overhead is involved in scheduling ready queue partitions or multiple ready
queues. In addition, the operating system incurs no overhead in deciding where to
add processes in an unpartitioned ready queue.
(2) MQMS
The MQMS (Multi-Queue Management System) scheme maintains multiple ready
queues of jobs, or partitions of the ready queue. Processes are added to the
appropriate ready queue portion depending upon certain criteria. The MQMS
scheme is more complex in terms of implementation, management, and scheduling.
The selection of a process for execution requires two important decisions:
(i) selecting the appropriate queue and (ii) selecting the appropriate process from
the selected queue. Hence MQMS involves both a queue scheduling algorithm and
a process scheduling algorithm, and it incurs overhead in queue selection.
A process is first added to the job queue. After this, it is moved to the ready queue.
The process then moves among the ready queue, the running state, and the waiting
queue as required. All these state changes depend upon criteria on the basis of
which the operating system decides which process moves to the ready queue, which
starts execution next, which needs to move to the waiting queue, and which is
preempted and moved back to the ready queue. These decisions are taken by
intelligent algorithms executed by the CPU scheduler.
3 Scheduling Algorithms
Tanenbaum [22] described First Come First Served (FCFS), Shortest Job First
(SJF), Shortest Remaining Time First (SRTF), Round Robin (RR), and Priority
Scheduling (PS) as the commonly used scheduling algorithms. These algorithms
were implemented in the form of SQMS; however, priority scheduling may have an
MQMS version. Mishra and Khan [11] noted that SJF permits only the process with
the lowest burst time to acquire the CPU for execution, which lowers the average
waiting time. RR scheduling defines a time quantum for assigning the CPU fairly to
each process present in the ready queue for a specific amount of time. We discuss
the Shortest Job First (SJF), Shortest Remaining Time First (SRTF), Round Robin
(RR), and Priority Scheduling algorithms below.
Shortest Job First is based on the burst times of processes. SJF is based on a simple
idea derived from FCFS: executing smaller processes before larger ones lowers the
average waiting time (AWT) of the process set. If a subset of processes has the
same burst time, they are executed in FCFS fashion. The processes are arranged in
a queue on the basis of their burst times, so that the process with the smallest burst
time is placed at the head/front of the queue while the process with the largest burst
time is placed at the rear. In this way, the CPU scheduler always picks the process
with the smallest burst time from the ready queue. It is a non-preemptive
scheduling algorithm. The main advantage of SJF is that it decreases the average
waiting time of the process set. However, the algorithm does not take process
priorities into account, and it may result in starvation of larger processes.
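A minimal sketch of non-preemptive SJF for processes that all arrive at time 0 (an illustration of the idea above, not the paper's code):

```python
def sjf_schedule(burst_times):
    """Non-preemptive SJF with all processes arriving at time 0.

    Returns (execution order as process indices, average waiting time).
    Ties fall back to arrival (FCFS) order because Python's sort is stable.
    """
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waiting, elapsed = {}, 0
    for i in order:
        waiting[i] = elapsed       # time this process spent in the ready queue
        elapsed += burst_times[i]  # CPU runs it to completion
    awt = sum(waiting.values()) / len(burst_times)
    return order, awt
```

For bursts [6, 8, 7, 3], the shortest job (index 3) runs first and the AWT drops to 7, versus 10.25 under FCFS for the same set.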
4 Related Work
Rao et al. [15] elaborated that multilevel feedback queue scheduling involved the
use of multiple queues. A process could be migrated (promoted or demoted) from
one queue to another, each queue had its own scheduling algorithm, and criteria
were defined for the promotion and demotion of processes. PMLFQ resulted in
starvation because high priority process queues executed before low priority
process queues.
Goel and Garg [5] explained that fairness was an important scheduling criterion.
Fairness ensured that each process got a fair share of the CPU for execution.
Further, higher priority processes exhibited smaller waiting times, and starvation
could take place for lower priority processes.
Shrivastav et al. [16] proposed Fair Priority Round Robin with Dynamic Time
Quantum (FPRRDQ). FPRRDQ calculated the time quantum on the basis of the
priority and burst time of each individual process. Experimental results, when
compared with the Priority Based Round Robin (PBRR) and Shortest Execution
First Dynamic Round Robin (SEFDRR) scheduling algorithms, showed that
FPRRDQ improved performance considerably.
Patel and Patel [13] proposed a Shortest Job Round Robin (SJRR) scheduling
algorithm. SJRR arranged processes in SJF fashion, and the shortest process's burst
time was designated as the time quantum. A case study showed a reduction in the
average waiting time of the process set.
Abdulrahim et al. [1] proposed the New Improved Round Robin (NIRR) algorithm
for process scheduling. The algorithm involved two queues, ARRIVE and
REQUEST. The time quantum was calculated as the ceiling of the average burst
time of the process set. The first process in the REQUEST queue was allocated the
CPU for one time quantum. If the executing process had a remaining burst time less
than or equal to half the time quantum, the CPU was reallocated to it for the
remaining burst time; otherwise, the process was removed from REQUEST and
added to ARRIVE. The NIRR algorithm improved scheduling performance and
also lowered the number of context switches compared with the RR algorithm.
Mishra and Rashid [12] introduced the Improved Round Robin with Varying Time
Quantum (IRRVQ) scheduling algorithm. IRRVQ arranged the processes in the
ready queue in SJF fashion and set the time quantum equal to the burst time of the
first process. After the complete execution of the first process, the next processes
were selected and assigned one time quantum each. Finished processes were
removed from the ready queue; otherwise, processes were preempted and added to
the tail of the ready queue.
Sirohi et al. [20] described an improvised round robin scheduling algorithm for
CPU scheduling. The algorithm calculated the time quantum as the average of the
burst times of the processes. The processes were allocated to the ready queue in
SJF order, and the CPU was allocated to the first process for one time quantum.
The reallocation of the CPU was based on the remaining burst time of the process
under execution.
Akhtar et al. [3] proposed a preemptive hybrid scheduling algorithm based on SJF.
Initially, it followed the same steps as SJF. The algorithm reduced the number of
context switches considerably but showed only a minor change in average waiting
time, and it could exhibit starvation.
Joshi and Tyagi [7] explained the Smart Optimized Round Robin (SORR)
scheduling algorithm for enhancing performance. In the first step, the SORR
algorithm arranged processes in increasing order of burst time. In the second step,
the mean burst time was calculated. The third step involved the calculation of the
Smart Time Quantum (STQ), after which the CPU was allocated to the first process
in the ready queue for a time quantum equal to the calculated STQ. If the allocated
process had a remaining burst time less than or equal to one time quantum, the
CPU was reallocated to it for that burst time. Otherwise, the process was
preempted, the next process was selected from the ready queue, and the CPU was
allocated to the newly selected process for one STQ.
Lulla et al. [10] devised a new algorithm for CPU scheduling. The algorithm
introduced a mathematical model for calculating the initial time quantum and
5 Proposed Algorithm
In this paper, we propose a new scheduling algorithm as an extension of priority
scheduling. Here, we propose it only for single-processor systems (neither
Step 1. Create multiple queues (one for high priority processes and another for
low priority processes) based on the priority scheme. These queues are
designated the High Priority and Low Priority Queues.
Step 2. Create processes in the job pool.
Step 3. Define the priority criteria.
Step 4. Allocate processes to the queues based on the priority criteria and burst
time. Processes are arranged in decreasing order of priority from front to
rear. Additionally, the following are considered:
Step 4.1.1 If a process's priority is less than or equal to the priority
criteria, add it to the Low Priority Queue; otherwise, add it
to the High Priority Queue.
Step 4.1.2 If multiple processes have the same priority, arrange them
so that the process with the shortest burst time comes first
in the queue.
Step 5. Select the first process from the High Priority Queue, execute it fully
(for its entire burst time, by assigning a time slice equal to the burst
time), and remove it from the queue head.
Step 6. Select the first process from the Low Priority Queue and execute it for a
time equal to half its burst time (assign a time slice equal to half the
burst time). The important point is that, in this scheme, each low priority
process completes in two halves.
Step 7. Repeat Steps 5–6 until both queues are empty.
6 Pseudo Code
Proposed Algorithm (P[1, · · · , n], PriorityCriteria) (Fig. 4)
begin
Create queue portions QH[1..nh], QL[1..nl]
For each P[i] in P[1..n]
If (P[i].Priority <= PriorityCriteria) Then
Add P[i] to QL in such a way that processes are arranged in
decreasing order of priority and, within equal priority, in
increasing order of burst time
Else
Add P[i] to QH with the same ordering
End If
End For
While QH is not empty or QL is not empty
If QH is not empty Then
Select the process at the head of QH, allocate the CPU,
execute it completely, and remove it from QH
End If
If QL is not empty Then
Select the process P[i] at the head of QL, allocate the CPU,
and execute it for half of its original burst time
If (P[i].RBT = 0) Then
Remove it from QL
Else
Place it back at the head of QL
End If
End If
End While
// completion of the process set
End Algorithm
*RBT = Remaining Burst Time
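The scheme can be simulated in a few lines. This sketch is an illustration, not the authors' implementation; it assumes unique process names and that a larger priority number means higher priority (the paper does not fix these conventions).

```python
def proposed_schedule(procs, priority_criteria):
    """Simulate the proposed multi-queue priority algorithm (illustrative).

    `procs` is a list of (name, priority, burst) tuples with unique names.
    High priority processes run to completion; low priority processes run
    in two halves, interleaved with the high priority queue.
    Returns a Gantt-style timeline of (name, start, end) CPU slices.
    """
    # Step 4: partition on the criteria; order by priority desc, burst asc.
    key = lambda p: (-p[1], p[2])
    qh = sorted((p for p in procs if p[1] > priority_criteria), key=key)
    ql = sorted((p for p in procs if p[1] <= priority_criteria), key=key)
    remaining = {name: burst for name, _, burst in procs}
    timeline, t = [], 0
    while qh or ql:
        if qh:  # Step 5: head of the high priority queue runs fully
            name, _, burst = qh.pop(0)
            timeline.append((name, t, t + burst))
            t += burst
        if ql:  # Step 6: head of the low priority queue runs half its burst
            name, _, burst = ql[0]
            run = min(burst / 2, remaining[name])
            timeline.append((name, t, t + run))
            t += run
            remaining[name] -= run
            if remaining[name] == 0:
                ql.pop(0)  # second half done; process completes
    return timeline
```

For example, with one high priority process A (burst 4) and one low priority process B (burst 2) under criteria 2, A runs 0–4, then B runs its two halves 4–5 and 5–6.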
7 Case Studies
7.1 Case Study 1
The Gantt chart of the above process data, when executed by priority scheduling, is
shown in Fig. 5 (Table 1).
(1) Priority Criteria: A process with priority > 2 is placed in the High Priority
Queue, while a process with priority ≤ 2 is placed in the Low Priority Queue.
(2) Ready Queue Partitions: As in Fig. 9.
The proposed algorithm on the given data results in Fig. 10.
Fig. 9. Scheduling queues. Fig. 10. Gantt chart for the proposed algorithm.
When all the processes are placed in a single queue, the proposed algorithm gives
an average waiting time and average turnaround time exactly equal to those of the
priority scheduling algorithm.
Fig. 12. Scheduling queues. Fig. 13. Gantt chart for the proposed algorithm.
Fig. 15. Scheduling queues. Fig. 16. Gantt chart for the proposed algorithm.
all the processes having high priority. In this case, only one priority queue was
formed rather than two.
References
1. Abdulrahim A, Abdullahi SE, Sahalu JB (2014) A new improved round robin (nirr)
cpu scheduling algorithm. Int J Comput Appl 90(4):27–33
2. Adekunle O (2014) A comparative study of scheduling algorithms for multipro-
gramming in real-time systems. Int J Innov Sci Res 12:180–185
3. Akhtar M, Hamid B et al (2015) An optimized shortest job first scheduling algo-
rithm for cpu scheduling. J Appl Environ Biol Sci 5:42–46
4. Almakdi S (2015) Simulation and Performance Evaluation of CPU Scheduling
Algorithms. LAP LAMBERT Academic Publishing, Saarbrücken
5. Goel N, Garg RB (2013) A comparative study of cpu scheduling algorithms. Int J
Graph Image Process 2(4):245–251
6. Goel N, Garg RB (2016) Performance analysis of cpu scheduling algorithms with
novel omdrrs algorithm. Int J Adv Comput Sci Appl 7(1):216–221
7. Joshi R, Tyagi SB (2015) Smart optimized round robin (sorr) cpu scheduling algo-
rithm. Int J Adv Res Comput Sci Softw Eng 5:568–574
8. Kathuria S, Singh PP et al (2016) A revamped mean round robin (rmrr) cpu
scheduling algorithm. Int J Innov Res Comput Commun Eng 4:6684–6691
9. Khan R, Kakhani G (2015) Analysis of priority scheduling algorithm on the basis
of fcfs and sjf for similar priority jobs. Int J Comput Sci Mob Comput 4:324–331
10. Lulla D, Tayade J, Mankar V (2015) Priority based round robin cpu scheduling
using dynamic time quantum. Int J Emerg Trends Technol 2:358–363
11. Mishra MK (2012) An improved round robin cpu scheduling algorithm. J Glob Res
Comput Sci 3(6):64–69
12. Mishra MK, Rashid F (2014) An improved round robin cpu scheduling algorithm
with varying time quantum. Int J Comput Sci Eng Appl 4(4):1–8
13. Patel R, Patel M (2013) Sjrr cpu scheduling algorithm. Int J Eng Comput Sci
2:3396–3399
14. Rajput G (2012) A priority based round robin cpu scheduling algorithm for real
time systems. Int J Innov Eng Technol 1:1–10
15. Rao MVP, Shet KC, Roopa K (2009) A simplified study of scheduler for real time
and embedded system domain. Comput Sci Telecommun 12(5):1–6
16. Shrivastav MK, Pandey S et al (2012) Fair priority round robin with dynamic time
quantum: Fprrdq. Int J Mod Eng Res 2:876–881
17. Shukla D, Ojha S, Jain S (2010) Data model approach and markov chain based
analysis of multi-level queue scheduling. J Appl Comput Sci Math 8(4):50–56
18. Silberschatz A, Gagne G, Galvin PB (1983) Operating System Concepts, 8th edn.
Addison-Wesley Pub. Co, Boston Binder Ready Version
19. Singh N, Singh Y (2016) A practical approach on mlq-fuzzy logic in cpu scheduling.
Int J Res Educ Sci Methods 4:50–60
20. Sirohi A, Pratap A, Aggarwal M (2014) Improvised round robin (cpu) scheduling
algorithm. Int J Comput Appl 99(18):40–43
21. Stallings W (2011) Operating Systems–Internals and Design Principles, 7th edn.
DBLP
22. Tanenbaum AS (2001) Modern Operating Systems, 2nd edn. Prentice-Hall,
Upper Saddle River
23. Ulfahsiregar M (2012) A new approach to cpu scheduling algorithm: Genetic round
robin. Int J Comput Appl 47(19):18–25
An Order-Based GA for Robot-Based Assembly
Line Balancing Problem
Abstract. In the real world, there are many settings in which a product is
assembled by robots, and a robot needs different assembly times to perform
a given task depending on its capabilities and specialization. In the robotic
assembly line balancing (rALB) problem, a set of tasks has to be assigned
to stations, and each station needs to select one robot to process the
assigned tasks. In this paper, we propose a hybrid genetic algorithm (hGA)
based on an order encoding method for solving the rALB problem. In the
hGA, we use a new representation method, together with advanced genetic
operators adapted to the specific chromosome structure and the
characteristics of the rALB problem. In order to strengthen the search
ability, a local search procedure is integrated into the framework of the
genetic algorithm. Practical test instances demonstrate the effectiveness
and efficiency of the proposed algorithm.
1 Introduction
An assembly line system is a manufacturing process in which interchangeable parts
are added to a product in a sequential manner to produce a finished product. It is
important that an assembly line be designed and balanced so that it works as
efficiently as possible. Most of the work related to assembly lines concentrates on
assembly line balancing (ALB). The ALB model deals with the allocation of tasks
among stations so that the precedence relations are not violated and a given
objective function is optimized.
Since the ALB model was first formulated by Helgeson et al. [13], many versions of ALB have arisen by varying the objective function [26]: Type-F is an objective-independent problem which asks whether or not a feasible line balance exists. Type-1 and Type-2 have a dual relationship; the first tries to minimize the number of stations for a given cycle time, and the second tries
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 5
to minimize the cycle time for a given number of stations. Type-E is the most general problem version, which tries to maximize the line efficiency by simultaneously minimizing the cycle time and the number of stations. Finally, Type-3, Type-4 and Type-5 correspond to maximization of workload smoothness, maximization of work relatedness, and the multiple objectives of Type-3 and Type-4 together, respectively [15]. Furthermore, most versions of the above models are NP-hard [12].
Recently, genetic algorithms (GAs) and other evolutionary algorithms (EAs) have been successfully applied to a wide variety of ALB problems. Falkenauer and Delchambre [6] were the first to solve ALB with GAs. Subsequent applications of GAs to the ALB model were studied by many researchers, e.g., Anderson and Ferris [2], Rubinovitz and Levitin [22], Gen et al. [10], Bautista et al. [3], Sabuncuoglu et al. [24], Goncalves and Almeida [11], Brown and Sumichrast [4], and Nearchou [21]. However, most researchers focused on the simplest version of the problem, with a single objective, and ignored recent trends, i.e., mixed-model production, U-shaped lines, robotic lines, etc.
Janardhanan et al. [14] implemented particle swarm optimization (PSO) to optimize robotic assembly line balancing (RALB) problems with the objective of maximizing line efficiency; by maximizing line efficiency, industries tend to utilize their resources efficiently. Daoud et al. [5] studied a robotic assembly line balancing problem whose goal is to maximize the efficiency of the line, and proposed resolution methods that determine the suitable component and point positions in order to define the pick-and-place strategy for each robot. In Nilakantan's research [20], bio-inspired search algorithms, viz. a particle swarm optimization (PSO) algorithm and a hybrid cuckoo search and particle swarm optimization (CS-PSO), were proposed to balance the robotic assembly line with the objective of minimizing the cycle time. Scholl et al. [27] extended existing algorithms to solve the robotic assembly line problem. Yoosefelahi et al. [29] considered a different type II robotic assembly line balancing problem (RALB-II). The first of its two main differences from the existing literature is the objective function, which is multi-objective: the aim is to minimize the cycle time, robot setup costs and robot costs. The second difference is the procedure proposed to solve the problem; in addition, a new mixed-integer linear programming model is developed. Kock et al. [16] introduced a new robot concept that aims at closing the gap between manual assembly and fully automatic assembly. It is intended for the handling and assembly of small parts in a highly agile production scenario that employs both human workers and robots in the same line, with a frequent need for reconfiguration. The objective is to minimize the cost of the stations; this formulation is very similar to the rALB problem.
Gao and Gen [7] proposed a hybrid GA for solving rALB-2, in which different robots may be assigned to the assembly line tasks, and each robot needs a different assembly time to perform a given task due to its capabilities and specialization. They focused on genetic operators adapted to the specific chromosome structure and the characteristics of rALB problems.
The assembly of each product unit requires the execution of n tasks (indivisible elements of work). Precedence constraints partially specify the order in which the tasks have to be performed. They can be represented by an acyclic precedence graph which contains nodes for all tasks. As mentioned above, the type II robotic assembly line balancing problem usually occurs when changes in the production process of a product take place. In this case, the robotic assembly line has to be reconfigured using the present resources (such as robots) so as to improve its efficiency for the new production process. The problem concerns how to assign the tasks to stations and how to allocate the available robots to each station in order to minimize the cycle time under the constraint of the precedence relationships.
As an example, let 10 assembly tasks (i = 1, 2, ..., 10) be assigned to 4 stations, with robots (R1, R2, R3, R4) equipped on the 4 stations. A balancing chart of the solution is drawn to analyze it. Figure 1 shows that the idle times of stations 1, 2 and 3 are very large, which means the line is not balanced for production. In the real world, an assembly line does not produce just one unit of the product; it produces several units. We therefore give the Gantt chart for 3 units in Fig. 2 to analyze the solution.
The waiting time visible in Fig. 2 is idle time of the line. Part waiting occurs when the maximum processing time of a station preceding the current station is larger than the processing time of the current station: the current station must wait for parts coming from the anterior station. Processing waiting occurs when the processing time of the current station is smaller than the processing time of the next station: the parts produced by the current station must wait to be processed by the next station. Both kinds of waiting time are idle time of the assembly line, which we want to reduce by balancing the line.

(Figs. 1 and 2: station 1 (robot 2) processes tasks 2 and 7; station 2 (robot 1) tasks 3 and 5; station 3 (robot 3) tasks 1, 4 and 6; station 4 (robot 4) tasks 8, 9 and 10; the time axis is marked at 50 and 85.)

The notation used in this section can be summarized as follows:
Indices:
i, j: assembly tasks; k: stations; l: robots.

Parameters:
n: total number of assembly tasks;
m: total number of stations (robots);
t_{il}: processing time of the i-th task by robot l;
pre(i): the set of predecessors of task i in the precedence diagram.

Decision variables:
x_{ik} = 1 if task i is assigned to station k, and 0 otherwise;
y_{kl} = 1 if robot l is assigned to station k, and 0 otherwise.
Problem Formulation:

\min \; CT = \min \max_{1 \le k \le m} \sum_{i=1}^{n} \sum_{l=1}^{m} t_{il} \, x_{ik} \, y_{kl}   (1)

s.t. \quad \sum_{k=1}^{m} k \, x_{jk} - \sum_{k=1}^{m} k \, x_{ik} \ge 0, \quad \forall j; \; i \in pre(j)   (2)

\sum_{k=1}^{m} x_{ik} = 1, \quad \forall i   (3)

\sum_{l=1}^{m} y_{kl} = 1, \quad \forall k   (4)

\sum_{k=1}^{m} y_{kl} = 1, \quad \forall l   (5)

x_{ik} \in \{0, 1\}, \quad \forall i, k   (6)

y_{kl} \in \{0, 1\}, \quad \forall k, l.   (7)
The objective (1) is to minimize the cycle time (CT). Inequality (2) represents the precedence constraints: it ensures that, for each pair of assembly tasks with a precedence relation, the predecessor cannot be assigned to a station after the station of its successor. Equation (3) ensures that each task is assigned to exactly one station. Equation (4) ensures that each station is equipped with exactly one robot. Equation (5) ensures that each robot is assigned to exactly one station.
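As a concrete illustration of the model, a candidate solution can be checked against constraints (2)-(5) and evaluated under objective (1). The sketch below is ours, not part of the original formulation; the data structures (a station list per task, a robot list per station, a predecessor dictionary) are illustrative assumptions.

```python
# Hedged sketch: verify a candidate rALB solution and evaluate objective (1).
# station[i] is the station of task i, robot[k] the robot at station k,
# t[i][l] the time of task i on robot l, pre[i] the predecessor set of task i.

def cycle_time(station, robot, t):
    """Objective (1): the largest total workload over all stations."""
    loads = [0.0] * len(robot)
    for i, k in enumerate(station):
        loads[k] += t[i][robot[k]]
    return max(loads)

def is_feasible(station, robot, pre, m):
    # (2): a predecessor may not sit at a later station than its successor
    for j, preds in pre.items():
        if any(station[i] > station[j] for i in preds):
            return False
    # (3) holds by construction: station[] assigns each task exactly once.
    # (4)/(5): robot[] must be a permutation of the m robots.
    return sorted(robot) == list(range(m)) and all(0 <= k < m for k in station)
```

A quick use: with 3 tasks, 2 stations, and task 2 preceded by task 0, assigning tasks 0 and 1 to station 0 and task 2 to station 1 satisfies the constraints.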
3 Order-Based GA Design
Genetic algorithms (GAs) are powerful and broadly applicable stochastic search
and optimization techniques based on principles from evolutionary theory [9].
GA’s have been applied to solve various assembly line balancing problems
[8,23,28]. We divide this algorithm into 3 phases. It is as follows:
Phase 1.1 Order encoding for task sequence: A GA’s structure and parameter
settings affect its performance. However, the primary determinants
of a GA’s success or failure are the coding by which its genotypes
represent candidate solutions and the interaction of the coding with
the GA’s recombination and mutation operators.
Phase 1: task sequence vector (v1)
  Locus:              1 2 3 4 5 6 7 8 9 10
  Task sequence (v1): 2 1 3 4 9 6 5 7 8 10

Phase 2: robot assignment vector (v2)
  Locus (station):       1 2 3 4
  Robot assignment (v2): 1 2 3 4
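For illustration, the order encoding of Phase 1 can be initialized so that every sampled task-sequence chromosome already respects the precedence constraints. The following sketch (function and variable names are our assumptions, not the paper's) draws a random topological order of the precedence graph:

```python
import random

def random_task_sequence(n, pre):
    """Sample a random topological order of tasks 0..n-1 given predecessor
    sets pre[i]; every prefix of the result respects all precedence relations."""
    remaining = set(range(n))
    done, seq = set(), []
    while remaining:
        # tasks whose predecessors are all already placed
        ready = [i for i in remaining if pre.get(i, set()) <= done]
        task = random.choice(ready)  # random choice keeps population diversity
        seq.append(task)
        done.add(task)
        remaining.remove(task)
    return seq
```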
Step 2. Find a feasible cycle time as the upper bound of the cycle time (C_UB);
Step 3. Find the optimal cycle time via the bisection method;
Step 4. Partition the task sequence into m parts with the optimal cycle time, based on the robot assignment vector.

Here, a cycle time is said to be feasible if all the tasks can be allocated to the stations by allowing as many tasks as possible at each station under the constraint of the cycle time. The procedure to calculate the upper bound of the cycle time is illustrated in Procedure 2, and the bisection method to find the optimal cycle time is shown in Procedure 3. After calculating the optimal cycle time, it is easy to generate the breakpoints on the task sequence that divide it into m parts, each of which corresponds to a station, based on the robot assignment vector. Therefore, the tasks assigned for each station are
Phase 3.2 Drawing a Gantt chart: First, we draw a balancing chart to analyze the solution (Fig. 8). Comparing it with the feasible solution of Fig. 2, we can see that the solution balances the line.

(Best chromosome: v1: 2 1 3 4 9 6 5 7 8 10; robots R1 R2 R3 R4.)

In the real world, the assembly line does not produce just one unit of the product; it produces several units. So we give the Gantt chart with 3 units to analyze the solution, as in Fig. 7. Comparing it with the feasible solution of Fig. 6, we can see that the solution reduces the waiting time of the line, which means the better solution balances the assembly line.
(Balancing chart and 3-unit Gantt chart of the improved solution: station 1 (robot 1) processes tasks 1, 2 and 3; station 2 (robot 2) tasks 4, 6 and 9; station 3 (robot 3) tasks 5 and 7; station 4 (robot 4) tasks 8 and 10; the cycle time is 52. The chart legend distinguishes processing waiting from part waiting.)
• For the task sequence vector, the useful feature of the permutation representation of a rALB solution is the order information among the tasks.
• For the robot assignment vector, the acting feature is the number at each allele, which indicates the robot number assigned to a specific station.
f(v) = w_1 \left( f_1(v) - z_1^{\min} \right) + w_2 \left( f_2(v) - z_2^{\min} \right) + w_3 \left( f_3(v) - z_3^{\min} \right),   (9)

where

f_1(v) = \frac{1}{CT(v)},   (10)

f_2(v) = n_c(v),   (11)

f_3(v) = \frac{1}{m-1} \sum_{k=1}^{m} \left[ \left( CT(v) - t_k^{W}(v) \right) - \frac{1}{m} \sum_{j=1}^{m} \left( CT(v) - t_j^{W}(v) \right) \right]^2,   (12)

with t_k^{W}(v) the total assembly time of station k.
Let s(i) be the station to which task i is assigned. First we introduce the logical function W_i(k), which returns false if task i cannot be transferred from station s(i) to station k, and true otherwise:

W_i(k) = false, if s(i) > k and there exists j with i ∈ suc(j) and s(j) > k, or if s(i) < k and there exists j ∈ suc(i) with s(j) < k;
W_i(k) = true, otherwise.
then, the two tasks are said to be exchangeable. Let R(i) be the robot which is
allocated for station s(i), and T l be the total assembly time of station l. For a
critical task i and task j which is not in s(i), if
then the exchange between tasks i and j is called worthwhile. The task sequence
search is as follows:
During the robot assignment search, the local search moves to a two-pace neighborhood once it reaches a local optimum of the one-pace neighborhood; this is called the two-pace robot assignment search. When a local optimum of the two-pace robot assignment neighborhood is reached, the neighbors of that two-pace local optimum are improved by the task sequence search to help the robot assignment search escape from the local optimum. Since the two-pace neighborhood is very large and the task sequence search is computationally complex, improving every neighbor solution by the task sequence search would take a great deal of computation time. Therefore, only those neighbor solutions that are themselves local optima of the two-pace robot assignment neighborhood are improved by the task sequence search, so as to save computation time.
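Under one possible reading of the neighborhood, in which a move exchanges the robots of two stations, the inner robot-assignment hill climb can be sketched as below. The neighborhood interpretation and all names are our assumptions, not the paper's exact procedure; `evaluate` stands in for the fitness decoding described above (smaller is better).

```python
from itertools import combinations

def one_pace_search(robot, evaluate):
    """Hill-climb over swaps of the robots at two stations until no swap
    improves the evaluation; the list is modified in place and returned."""
    best = evaluate(robot)
    improved = True
    while improved:
        improved = False
        for a, b in combinations(range(len(robot)), 2):
            robot[a], robot[b] = robot[b], robot[a]
            score = evaluate(robot)
            if score < best:
                best, improved = score, True  # keep the improving swap
            else:
                robot[a], robot[b] = robot[b], robot[a]  # undo
    return robot, best
```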
5 Numerical Experiments
In the literature, no benchmark data sets are available for rALB. We collect 8
representative precedence graphs from [1], which are widely used in the sALB-1
literature [25]. These precedence graphs contain with 25-297 tasks. From each
precedence graph, 4 different rALB-2 problems are generated by using different
WEST ratios: 3, 5, 7, 10, 15. WEST ratio, as defined by Dar-EI [19], measures
the average number of activities per station. For each problem, the number of
station is equal to the number of robots, and each task can be processed on
any robot. The task time data are generated at random, while two statistical
dependence are maintained: (1) statistical dependence of task times on the task
type, (2) statistical dependence of task times on the robot on which the task is
processed.
To validate the effectiveness of our hybrid GA, we compare our approach with the two algorithms of Levitin et al. [18], who recently developed an efficient approach for rALB problems. Their research is similar to ours in purpose: it aims to achieve a balanced distribution of work between different stations (balance the line) while assigning to each station the robot best fit for the activities assigned to it. Levitin et al. proposed two algorithms, named the recursive assignment method and the consecutive assignment method. If we regard each robot as one type and delete that type from the total set of robot types once the robot is allocated to a station, Levitin et al.'s two algorithms can be adapted to solve the 32 rALB-2 problems here. Hence, these two algorithms are also used to solve the 32 problems. The adopted parameters of the hGA are as follows: maximal generations maxGen = 1000; population size popSize = 100;
Cycle time (CT) on the 32 test problems:

No. of  No. of    WEST   Levitin et al.'s  Levitin et al.'s  Proposed
tasks   stations  ratio  recursive         consecutive       approach
25      3         8.33   518               503               503
25      4         6.25   351               330               327
25      6         4.17   343               234               213
25      9         2.78   138               125               123
35      4         8.75   551               450               449
35      5         7.00   385               352               344
35      7         5.00   250               222               222
35      12        2.92   178               120               113
53      5         10.60  903               565               554
53      7         7.57   390               342               320
53      10        5.30   35                251               230
53      14        3.79   243               166               162
70      7         10.00  546               490               449
70      10        7.00   313               287               272
70      14        5.00   231               213               204
70      19        3.68   198               167               154
89      8         11.13  638               505               494
89      12        7.42   455               371               370
89      16        5.56   292               246               236
89      21        4.24   277               209               205
111     9         12.33  695               586               557
111     13        8.54   401               339               319
111     17        6.53   322               257               257
111     22        5.05   265               209               192
148     10        14.80  708               638               600
148     14        10.57  537               441               427
148     21        7.05   404               325               300
148     29        5.10   249               210               202
297     19        15.63  1129              674               646
297     29        10.24  571               444               430
297     38        7.82   442               348               344
297     50        5.94   363               275               256
6 Conclusion
The robotic assembly line has become an important manufacturing system in the modern era. The objective of this work is to develop an efficient solution for the robotic assembly line balancing problem. This solution aims to achieve a balanced distribution of work between different stations and to assign to each station the robot best fit for the activities assigned to it. The result of such a solution is an increased production rate of the line.
A new representation method adapting the GA to the rALB-II problem is proposed. Advanced genetic operators adapted to the specific chromosome structure and the characteristics of the rALB problem are used. To strengthen the search ability, two kinds of local search are integrated under the framework of the genetic algorithm. The coordination among the kinds of local search is carefully considered, and the neighborhood structure of the local search can be adjusted dynamically. The balance between genetic search and local search is also investigated. The performance of the proposed method is validated through simulation experiments, which show that our algorithm is computationally efficient and effective at finding the best solution; the solutions obtained by our algorithm outperform the results of previous works.
References
1. ALB: Assembly line balancing: Data sets & research topics (2016). http://www.
assembly-line-balancing.de/
2. Anderson EJ, Ferris MC (1994) Genetic algorithms for combinatorial optimization:
the assemble line balancing problem. Informs J Comput 6(2):161–173
3. Bautista J, Suarez R et al (2000) Local search heuristics for the assembly line
balancing problem with incompatibilities between tasks. In: IEEE International
Conference on Robotics and Automation, 2000, Proceedings, ICRA, pp 2404–2409
4. Brown EC, Sumichrast RT (2005) Evaluating performance advantages of grouping
genetic algorithms. Eng Appl Artif Intell 18(1):1–12
5. Daoud S, Chehade H et al (2014) Solving a robotic assembly line balancing problem
using efficient hybrid methods. J Heuristics 20(3):235–259
6. Falkenauer E, Delchambre A (1997) A genetic algorithm for bin packing and line
balancing. In: IEEE International Conference on Robotics and Automation, 1992,
Proceedings, vol 2, pp 1186–1192
7. Gao J, Sun L et al (2009) An efficient approach for type II robotic assembly line
balancing problems. Comput Ind Eng 56(3):1065–1080
8. Gen M, Cheng R (1997) Genetic Algorithms and Engineering Design. Wiley, New
York
9. Gen M, Cheng R (2000) Genetic Algorithms and Engineering Optimization. Wiley,
New York
10. Gen M, Tsujimura Y, Li Y (1996) Fuzzy assembly line balancing using genetic
algorithms. Comput Ind Eng 31(31):631–634
11. Gonçalves JF, Almeida JRD (2002) A hybrid genetic algorithm for assembly line
balancing. J Heuristics 8(6):629–642
12. Gutjahr AL, Nemhauser GL (1964) An algorithm for the line balancing problem.
Manage Sci 11(2):308–315
13. Helgeson WB, Salveson ME, Smith WW (1954) How to Balance an Assembly Line.
Technical Report Carr Press, New Caraan
14. Janardhanan MN, Nielsen P, Ponnambalam SG (2016) Application of particle
swarm optimization to maximize efficiency of straight and U-shaped robotic assem-
bly lines. Springer
15. Kim YK, Kim YJ, Kim Y (1996) Genetic algorithms for assembly line balancing
with various objectives. Comput Ind Eng 30(3):397–409
16. Kock S, Vittor T et al (2011) Robot concept for scalable, flexible assembly automa-
tion: a technology study on a harmless dual-armed robot. In: IEEE International
Symposium on Assembly and Manufacturing, pp 1–5
17. Krasnogor N, Smith J (2000) A memetic algorithm with self-adaptive local search:
TSP as a case study. In: Proceedings of 2000 Genetic and Evolutionary Computa-
tion Conference, pp 987–994
18. Levitin G, Rubinovitz J, Shnits B (2006) A genetic algorithm for robotic assembly
line balancing. Eur J Oper Res 168(3):811–825
19. Dar-El EM (Mansoor) (1973) MALB: a heuristic technique for balancing large single-model assembly lines. IIE Trans 5(4):343–356
20. Mukund Nilakantan J, Ponnambalam SG et al (2015) Bio-inspired search algo-
rithms to solve robotic assembly line balancing problems. Neural Comput Appl
26(6):1379–1393
21. Nearchou AC (2007) Balancing large assembly lines by a new heuristic based on
differential evolution method. Int J Adv Manufact Technol 34(9):1016–1029
22. Rubinovitz J, Levitin G (1995) Genetic algorithm for assembly line balancing. Int
J Prod Econ 41(41):343–354
23. Rubinovitz J, Levitin G (1995) Genetic algorithm for line balancing. Int J Prod
Econ 41:343–354
24. Sabuncuoglu I, Erel E, Tanyer M (2000) Assembly line balancing using genetic
algorithms. J Intell Manufact 11(3):295–310
25. Scholl A (1993) Data of assembly line balancing problems. Schriften zur Quanti-
tativen Betriebswirtschaftslehre
26. Scholl A (1999) Balancing and Sequencing of Assembly Lines. Physica-Verlag, Hei-
delberg
27. Scholl A, Fliedner M, Boysen N (2010) Absalom: balancing assembly lines with
assignment restrictions. Eur J Oper Res 200(3):688–701
28. Tsujimura Y, Gen M, Kubota E (1995) Solving fuzzy assembly-line balancing
problem with genetic algorithms. In: International Conference on Computers and
Industrial Engineering, pp 543–547
29. Yoosefelahi A, Aminnayeri M et al (2012) Type II robotic assembly line balancing
problem: an evolution strategies algorithm for a multi-objective model. J Manufact
Syst 31(2):139–151
A Hybrid Model of AdaBoost and
Back-Propagation Neural Network
for Credit Scoring
1 Introduction
Credit scoring has become an increasingly important issue of financial risk management in financial institutions since the 2008 financial crisis. Calculated by following a set of decision models and other underlying techniques, a credit score greatly helps lenders judge whether an application for credit should be approved or rejected [27]. When applicants fail to repay their debt, lenders suffer a direct economic loss. Moreover, the sub-prime mortgage crisis that occurred in the USA caused some financial institutions to lose billions of dollars due to customers' defaults. However, if a credit-granting institution rejects all loan applicants, even those with good credit scores, it forfeits the potential revenues it could earn from these applicants in the future. Therefore, efficient decision support with high accuracy has become a clear need for financial institutions.
2 Methodology Formulation
2.1 Back-Propagation Neural Network Theory
Step 1. Select the training samples randomly from the chosen databases with the method of 10-fold cross-validation; each group of training samples is assigned the same weights:
o_i = f(net_i),   (6)

d_i = y_i - o_i, \quad i = 1, 2, \ldots, n.   (7)
Step 3.5 If the total error above is acceptable, the process stops. Otherwise, revise the weights w_{ij} and w_{jt}. Among the many weight-update schemes, we use the most common, gradient descent (with a momentum term):

w_{ij}^{k+1} = w_{ij}^{k} + \Delta w_{ij}^{k} + \alpha \Delta w_{ij}^{k-1},   (9)

w_{jt}^{k+1} = w_{jt}^{k} + \Delta w_{jt}^{k} + \alpha \Delta w_{jt}^{k-1}.   (10)
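Taken together, the forward pass of Eq. (6), the error of Eq. (7), and the momentum updates of Eqs. (9)-(10) amount to one backpropagation step. A minimal one-hidden-layer sketch follows; the learning rate, momentum value, sigmoid choice, and all names are our assumptions, not the paper's exact settings.

```python
import numpy as np

def bp_step(x, y, W1, W2, prev1, prev2, lr=0.5, mom=0.9):
    """One gradient-descent-with-momentum update of the two weight matrices."""
    h = 1.0 / (1.0 + np.exp(-W1 @ x))         # hidden activations (Eq. (6) style)
    o = 1.0 / (1.0 + np.exp(-W2 @ h))         # outputs o_i = f(net_i)
    d = y - o                                  # errors d_i = y_i - o_i (Eq. (7))
    g_o = d * o * (1 - o)                      # output-layer delta
    g_h = (W2.T @ g_o) * h * (1 - h)           # hidden-layer delta
    d2 = lr * np.outer(g_o, h) + mom * prev2   # Δw plus momentum (Eqs. (9)-(10))
    d1 = lr * np.outer(g_h, x) + mom * prev1
    return W1 + d1, W2 + d2, d1, d2
```

Repeated calls shrink the output error on a fixed sample, with the previous deltas carried along as the momentum terms.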
where I(h_t(x_i)) is the indicator function:

I(h_t(x_i)) = \begin{cases} 1, & \text{if } h_t(x_i) \ne y_i, \\ 0, & \text{if } h_t(x_i) = y_i. \end{cases}   (12)

In this way, the weights of training samples wrongly classified by the current weak learner are increased, while the weights of correctly classified samples are decreased.
Step 7. As the training iterations go on, a series of base classifiers is obtained; these classifiers are then combined:

f(x) = \sum_{t=1}^{T} a_t \, h_t(x).   (16)

A series of diverse base classifiers is collected by looping from Step 2 to Step 5. Finally, an AdaBoost model based on the weak learners trained above is constructed:

H(x) = \mathrm{sign}\left( \sum_{t=1}^{T} a_t \, h_t(x) \right)   (17)
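The boosting loop of Steps 6-7 (Eqs. (12), (16) and (17)) can be sketched as below. As a stand-in for the BP-network base learner we use a simple one-dimensional decision stump, so this is an illustration of the reweighting and weighted-vote scheme, not the paper's exact hybrid; all names are assumptions.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted 1-D decision stump: returns the best (threshold, polarity)."""
    best, best_err = (0.0, 1), 1.0
    for thr in np.unique(X):
        for pol in (1, -1):
            pred = np.where(pol * (X - thr) >= 0, 1, -1)
            err = np.sum(w[pred != y])  # weighted misclassification error
            if err < best_err:
                best, best_err = (thr, pol), err
    return best

def adaboost(X, y, T=10):
    n = len(y)
    w = np.full(n, 1.0 / n)  # uniform initial sample weights (Step 1)
    learners = []
    for _ in range(T):
        thr, pol = fit_stump(X, y, w)
        pred = np.where(pol * (X - thr) >= 0, 1, -1)
        err = max(np.sum(w[pred != y]), 1e-10)
        if err >= 0.5:
            break
        a = 0.5 * np.log((1 - err) / err)  # learner weight a_t
        # Eq. (12)-style reweighting: up-weight mistakes, down-weight hits
        w *= np.exp(-a * y * pred)
        w /= w.sum()
        learners.append((a, thr, pol))
    return learners

def predict(learners, X):
    # Eqs. (16)-(17): sign of the weighted vote of the base classifiers
    f = sum(a * np.where(pol * (X - thr) >= 0, 1, -1) for a, thr, pol in learners)
    return np.sign(f)
```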
4 Empirical Analysis
Table 2. Credit evaluation results comparison of different models for German credit
dataset
Table 3. Credit evaluation results comparison of different models for Australian credit
dataset
There are two possible reasons. On the one hand, the credit market in Germany is more complex than that in Australia. On the other hand, there is more non-linearity in the German data set than in the Australian data set. Reviewing the above performance, one interesting observation is that all three evaluation criteria improve greatly on both the German and the Australian data sets when the BP neural network is replaced by the AdaBoost model based on BP neural networks: in terms of Total Accuracy, there is a 9.5% improvement on the German data set and a 6.7% improvement on the Australian data set.
5 Conclusion
In this paper, a hybrid model of AdaBoost and BP neural network is proposed for credit risk evaluation. According to the empirical results, our proposed hybrid model is the best of the eight models compared on two publicly available credit data sets, which indicates that the hybrid model of AdaBoost and BP neural network has good practicability for credit scoring. In the future, we will use other methods as base classifiers, for example support vector machines and decision trees, and investigate other ensemble algorithms, such as bagging, for credit risk assessment.
References
1. Altman EI (1968) Financial ratios, discriminant analysis and the prediction of
corporate bankruptcy. J Finan 23(4):589–609
2. Baesens B, van Gestel T et al (2003) Benchmarking state-of-art classification algorithms for credit scoring. J Oper Res Soc 54(6):627–635
3. Baesens B, Setiono R et al (2003) Using neural network rule extraction and decision
tables for credit-risk evaluation. Manag Sci 49(3):312–329
4. Beynon MJ, Peel MJ (2001) Variable precision rough set theory and data discreti-
sation: an application to corporate failure prediction. Omega 29(6):561–576
5. Carter C, Catlett J (1987) Assessing credit card applications using machine learn-
ing. IEEE Expert 2(3):71–79
6. Chen M, Ma L, Gao Y (2010) Vehicle detection segmentation based on adaboost
and grabcut. In: IEEE International Conference on Progress in Informatics and
Computing, pp 896–900
7. Chen MC, Huang SH (2003) Credit scoring and rejected instances reassigning
through evolutionary computation techniques. Expert Syst Appl 24(4):433–441
8. Desai VS, Crook JN, Overstreet J (1996) A comparison of neural networks and
linear scoring models in the credit union environment. Oper Res 95(2):24–37
9. Essa EM, Tolba AS, Elmougy S (2008) A comparison of combined classifier archi-
tectures for arabic speech recognition. In: International Conference on Computer
Engineering & Systems, pp 149–153
10. Freund Y, Schapire RE (1995) A decision-theoretic generalization of on-line learn-
ing and an application to boosting. In: European Conference on Computational
Learning Theory, pp 119–139
11. Gestel TV, Baesens B et al (2003) A support vector machine approach to credit
scoring. Banken Financiewezen 2:73–82
12. Henley WE, Hand DJ (1996) A k-nearest-neighbour classifier for assessing con-
sumer credit risk. J Roy Stat Soc 45(1):77–95
13. Henley WE, Hand DJ (1997) Construction of a k-nearest-neighbour credit-scoring
system. IMA J Manag Math 8(4):305–321
14. Khashman A (2009) A neural network model for credit risk evaluation. Int J Neural
Syst 19(4):285–294
15. Li H, Sun J (2008) Ranking-order case-based reasoning for financial distress pre-
diction. Knowl Based Syst 21(8):868–878
16. Li H, Sun J (2009) Gaussian case-based reasoning for business failure prediction
with empirical data in China. Inf Sci 179(1):89–108
17. Li H, Sun J (2009) Predicting business failure using multiple case-based reasoning
combined with support vector machine. Expert Syst Appl 36(6):10085–10096
18. Li H, Sun J (2010) Business failure prediction using hybrid2 case-based reasoning
(h2cbr). Comput Oper Res 37(1):137–151
19. Li H, Sun J, Sun BL (2009) Financial distress prediction based on or-cbr in the
principle of k-nearest neighbors. Expert Syst Appl 36(1):643–659
20. Malhotra R, Malhotra DK (2002) Differentiating between good credits and bad
credits using neuro-fuzzy systems. Eur J Oper Res 136(1):190–211
21. Malhotra R, Malhotra DK (2003) Evaluating consumer loans using neural net-
works. Soc Sci Electron Publishing 31(2):83–96
22. Ong CS, Huang JJ, Tzeng GH (2005) Building credit scoring models using genetic
programming. Expert Syst Appl 29(1):41–47
23. Piramuthu S, Piramuthu S (1999) Financial credit-risk evaluation with neural and
neurofuzzy systems. Eur J Oper Res 112(2):310–321
24. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-
propagating errors. Nature 323(6088):533–536
25. Schebesch KB, Stecking R (2005) Support vector machines for classifying and
describing credit applicants: detecting typical and critical regions. J Oper Res Soc
56(9):1082–1088
26. Steenackers A, Goovaerts MJ (1989) A credit scoring model for personal loans.
Insur Math Econ 8(1):31–34
27. Thomas LC, Edelman DB, Crook JN (2002) Credit scoring and its applications.
SIAM, Philadelphia
28. Varetto F (1998) Genetic algorithms applications in the analysis of insolvency risk. J Bank Financ 22(10):1421–1439
29. Wang Y, Wang S, Lai KK (2006) A new fuzzy support vector machine to evaluate
credit risk. IEEE Trans Fuzzy Syst 13(6):820–831
30. West D (2000) Neural network credit scoring models. Comput Oper Res
27(11):1131–1152
31. Yobas MB, Crook JN, Ross P (2000) Credit scoring using neural and evolutionary
techniques. IMA J Manag Math 11(2):111–125
32. Yu L, Wang S, Lai KK (2008) Credit risk assessment with a multistage neural
network ensemble learning approach. Expert Syst Appl 34(2):1434–1444
33. Yu L, Wang SY et al (2008) Designing a hybrid intelligent mining system for credit
risk evaluation. Syst Sci Complex 21(5):527–539
34. Yu L, Wang S, Cao J (2009) A modified least squares support vector machine
classifier with application to credit risk analysis. Int J Inf Technol Decis Making
08(4):697–710
35. Zhou L, Lai KK, Yu L (2009) Credit scoring using support vector machines with
direct search for parameters selection. Soft Comput 13(2):149–155
Effects of Urban and Rural Residents’ Behavior
Differences in Sports and Leisure Activity:
Application of the Theory of Planned Behavior
and Structural Equation Modeling
Linling Zhang1,2(B)
1 School of Business, Sichuan University, Chengdu 610065, People's Republic of China
1010519067@qq.com
2 Leisure Sports Department, Chengdu Sport Institute, Chengdu 610041, People's Republic of China
Abstract. Because urban and rural residents' sports and leisure behaviors differ, the purpose of this study was to find the factors driving these differences. We therefore selected urban and rural residents in Sichuan province as the research objects. The study was based on the Theory of Planned Behavior (TPB) and used Structural Equation Modeling (SEM) as the research method. First we constructed the conceptual model and put forward a series of hypotheses. Then we used a questionnaire survey to obtain first-hand information and used AMOS 17.0 to fit the model. Finally we verified our original hypotheses against the model. The research showed that: (1) the influence of external objective factors (social atmosphere, venues, facilities, etc.) on urban and rural residents' sports and leisure behavior was much larger than that of intrinsic objective factors (leisure time, income, physical condition, etc.); (2) the richness of sports and leisure activities, venues, facilities and the social atmosphere affected the behavior of urban residents much more than that of rural residents; (3) an attitude that affirms and recognizes sports and leisure had much more influence on urban residents; (4) rural residents cared more about the views of family, friends and even the government.
1 Introduction
Under the Chinese specific dual social structure between urban and rural
areas, social and economic activities, material conditions, living environment,
life style and cultural consciousness between urban and rural areas show huge differentiation [5,24]. In terms of sports and leisure activity, social system differences between urban and rural areas result in gaps in capital investment, facilities construction, social organization development and concepts, and the leisure sports behavior of urban and rural residents differs accordingly. What is more, with the acceleration of the urbanization process, urban and rural demographic differences will further expand the difference in leisure sports behavior [11]. Therefore, the purpose of this study is to find the main factors that cause the huge difference and gap between urban and rural residents, and to consider how to close the gap.
Sports and leisure behavior is a dynamic process in which people spontaneously
participate in sports activities to meet their leisure needs and derive
satisfaction from them. The purposes for which people take part in leisure
sports activities are mainly physical (exercise, fitness and beauty, self-defense,
etc.) and emotional (entertainment, stress relief, thrill-seeking, social
communication, etc.) [19]. Western research on sports and leisure behavior has
mainly focused on the influences on behavior, drawing on a variety of theories;
the Trans-theoretical Model (TTM), Self-efficacy, Self-determination Theory
(SDT), Leisure Constraints Theory, and the Theory of Planned Behavior (TPB)
are representative [3,4,6-9,15,16,18]. In 2006, Xu [21] published "The Study
on the Characteristics of the Residents' Leisure Sports Behaviors" in the
Journal of Guangzhou Sport University, opening the prelude to domestic sports
and leisure behavior research. Qiu [10,12-14], who has published many articles
in authoritative Chinese core journals, is outstanding in the study of sports
and leisure behavior. It is also worth mentioning Yu [23], Zhao [25], Ye [22],
and others, who introduced the Theory of Planned Behavior (TPB) into sports
and leisure behavior research. Overall, domestic scholars have focused mainly
on cities and towns and paid little attention to rural areas, leaving a lack
of empirical studies there.
Existing research offers no clear answer to the question of what underlying
causes produce the differences in sports and leisure behavior between urban
and rural residents. To answer this question better, we adopt the Theory of
Planned Behavior (TPB) and Structural Equation Modeling (SEM) as our
theoretical and methodological guides and take Sichuan province as the sample,
seeking to reveal the main factors influencing the sports and leisure behavior
of urban and rural residents and to analyze the relationships among these
factors. We aim to develop a theoretical analysis framework that explains the
differences between urban and rural residents' sports and leisure behavior,
which should help enrich and expand leisure science research. Based on the
results, we put forward corresponding countermeasures and suggestions to
improve the policy environment effectively and better promote urban-rural
integration.
Effects of Urban and Rural Residents’ Behavior Differences 93
The Theory of Planned Behavior (TPB) is one of the most famous theories about
the relationship between attitude and behavior in social psychology. Building
on the Theory of Reasoned Action (TRA), Ajzen [1] published "The Theory of
Planned Behavior" in 1991, marking the formal establishment of the theory. In
the TPB, Behavior is affected by Behavior Intention and by Perceived Behavioral
Control. Behavior Intention is in turn affected by the Attitude toward the
Behavior, the Subjective Norm, and Perceived Behavioral Control, while
Behavioral Beliefs, Normative Beliefs, and Control Beliefs respectively shape
the Attitude, the Subjective Norm, and Perceived Behavioral Control.
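As an illustration, the TPB causal structure just described can be encoded as a small directed graph and queried for reachability. This is our own minimal Python sketch (the construct names are ours, not identifiers from the study):

```python
# Directed edges of the TPB as described above: beliefs shape the three
# antecedents, the antecedents shape intention, and intention (plus
# perceived behavioral control) shapes behavior.
TPB_EDGES = {
    "behavioral_beliefs": ["attitude"],
    "normative_beliefs": ["subjective_norm"],
    "control_beliefs": ["perceived_behavioral_control"],
    "attitude": ["behavior_intention"],
    "subjective_norm": ["behavior_intention"],
    "perceived_behavioral_control": ["behavior_intention", "behavior"],
    "behavior_intention": ["behavior"],
}

def influences(node, target, edges=TPB_EDGES):
    """True if `node` reaches `target` along the model's directed paths."""
    frontier, seen = [node], set()
    while frontier:
        n = frontier.pop()
        if n == target:
            return True
        if n not in seen:
            seen.add(n)
            frontier.extend(edges.get(n, []))
    return False
```

For example, normative beliefs influence behavior only indirectly (through the subjective norm and intention), while no path runs backwards from intention to attitude.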
The factors affecting sports and leisure behavior include economic factors
(income, consumption concepts), cultural factors (leisure concepts, leisure
atmosphere, leisure time, etc.), and environmental factors (sports venues,
policy guarantees, product promotion, service level, etc.). In this study we
analyze these from the perspective of the TPB and identify the main factors
affecting urban and rural residents' behavior, as follows.
(1) Attitude
If people hold positive attitudes toward sports and leisure activity, they will
actively participate in it; attitude evidently affects behavior.
(2) Subjective Norm
It refers to whether the views and practices of people around will affect residents’
participation in sports and leisure activities.
(3) Perceived Behavioral Control
It refers to whether the past experience and current conditions of residents have
influence on their sports and leisure behaviors.
(4) Behavior Intention
It refers to if the residents are more willing to participate in sports and leisure
activities subjectively, the probability that they actually participate in sports
and leisure activities is greater. And the significant positive correlation between
behavior and intention is also confirmed by many relative researches.
(5) Other Objective Factors
According to interview results from Sichuan residents, the main reasons that
people do not attend sports and leisure activities include lack of enough time,
lack of appropriate sports ground, lack of specific skills for their favorite sports,
lack of information and lack of partner, etc.
94 L. Zhang
Based on the TPB and the previous investigation results, we build our
conceptual model (Fig. 1). From the key points in the model, we make the
following theoretical assumptions:
Hypothesis 1 (H1): The attitude toward sports and leisure has a significant
positive influence on behavior intention.
Hypothesis 2 (H2): The attitude toward sports and leisure plays an intermediary
role between the subjective norm and behavior intention.
Hypothesis 3 (H3): The attitude toward sports and leisure plays an intermediary
role between perceived behavioral control and behavior intention.
Hypothesis 4 (H4): The subjective norm has a significant positive influence on
behavior intention.
Hypothesis 5 (H5): Perceived behavioral control has a significant positive
influence on behavior intention.
Hypothesis 6 (H6): The peripheral objective factors have a significant positive
influence on the TPB system.
Hypothesis 6a (H6a): The peripheral objective factors have a significant
positive influence on the subjective norm.
Hypothesis 6b (H6b): The peripheral objective factors have a significant
positive influence on the attitude.
Hypothesis 6c (H6c): The peripheral objective factors have a significant
positive influence on perceived behavioral control.
Fig. 1. The conceptual model of factors influencing sports and leisure behavior based
on the TPB
3 Data Collection
3.1 Questionnaire Design
The influencing factors of sports and leisure behavior mainly include attitude,
the subjective norm, perceived behavioral control, the peripheral objective
factors, and intention. The relevant information was collected through a
questionnaire survey. The questionnaire uses a Likert scale: each question is
measured from strongly disagree (1 point) to strongly agree (5 points). All
questions come from our team's discussions and experts' suggestions. The
questions designed according to the conceptual model and the hypotheses are
shown in Table 1.
can account for 63.876% of the total variation. After rotation, factor 1 loads
strongly on variables a1, a2, a3, a4, a5, a18, and a19 (loading > 0.5);
factor 2 on a6, a7, a9, and a20; factor 3 on a12, a15, a16, and a17; factor 4
on a10 and a14; and factor 5 on a11. Variables a8 and a13 load below 0.5 on
every common factor, so we have reason to delete them. In the next step,
testing each common factor separately, we find that variable a11 should also
be removed.
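The item-screening rule above, dropping any item whose loading on every common factor is below 0.5, can be sketched as follows. The loading vectors here are hypothetical illustrative values, not the study's actual factor loadings:

```python
# Hypothetical loadings of each item on the five common factors.
# a8 loads weakly everywhere; a18 loads strongly on factor 1.
loadings = {
    "a8":  [0.41, 0.33, 0.28, 0.22, 0.19],  # assumed values, below 0.5 everywhere
    "a18": [0.72, 0.12, 0.08, 0.15, 0.10],  # assumed values, strong on factor 1
}

def items_to_drop(loadings, threshold=0.5):
    """Items whose maximum loading across all common factors is below threshold."""
    return [item for item, ls in loadings.items() if max(ls) < threshold]

print(items_to_drop(loadings))  # -> ['a8']
```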
(2) Reliability Analysis
We test the internal consistency of each factor's variables with Cronbach's
alpha. Four factors (attitude, subjective norm, peripheral objective factors,
and intention) pass the test. For the perceived behavioral control factor,
deleting variables a11 and a12 raises the alpha coefficient significantly, to
0.698; to pass the internal consistency test, variables a11 and a12 are
therefore deleted.
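Cronbach's alpha follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical Likert responses (the survey data itself is not reproduced in the paper):

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: k lists of scores, one per questionnaire item, same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical Likert responses (1-5) from four respondents on three items.
scores = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
```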
[Figure: path diagram of the fitted structural equation model, with the latent
variables (peripheral objective factors, attitude, subjective norm, perceived
behavior control, intention), their indicator variables a1-a20, measurement
error terms e1-e20, and structural disturbance terms z1-z5.]
Fit index χ2 (df) CMIN/DF P TLI IFI CFI RMSEA AIC EVCI
Results 130.356 (73) 1.786 0 0.909 0.929 0.927 0.079 194.356 1.555
Conclusion Suitable Pass Pass Pass Pass Pass Suitable Pass Pass
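Fit indices like those above are typically screened against conventional cutoffs. A small sketch using the urban-residents values from the table; the cutoffs are common rules of thumb (e.g. CFI > 0.90, RMSEA < 0.08), not thresholds stated in the paper:

```python
# Common rule-of-thumb cutoffs for SEM fit indices (assumed, not from the paper).
CUTOFFS = {
    "CMIN/DF": lambda v: v < 3.0,   # relative chi-square
    "TLI":     lambda v: v > 0.90,
    "IFI":     lambda v: v > 0.90,
    "CFI":     lambda v: v > 0.90,
    "RMSEA":   lambda v: v < 0.08,
}

# Values reported above for the urban-residents model.
urban_fit = {"CMIN/DF": 1.786, "TLI": 0.909, "IFI": 0.929,
             "CFI": 0.927, "RMSEA": 0.079}

def screen(fit):
    """Map each index to 'Pass' or 'Fail' against its cutoff."""
    return {name: ("Pass" if ok(fit[name]) else "Fail")
            for name, ok in CUTOFFS.items()}
```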
Table 4. Hypothesis testing results of the effects of urban residents’ behavior in sports
and leisure
Fit index χ2 (df) CMIN/DF P TLI IFI CFI RMSEA AIC EVCI
Results 133.622(70) 1.909 0 0.947 0.935 0.96 0.063 203.622 0.889
Conclusion Suitable Pass Pass Pass Pass Pass Suitable Pass Pass
Table 7. Hypothesis testing results of the effects of rural residents’ behavior in sports
and leisure
5 Results
In view of the above empirical analysis, urban residents and rural residents
differ in the factors influencing their sports and leisure behavior.
From the perspective of direct effects (Table 8), the influence of the
peripheral objective factors on sports and leisure attitude was 0.182 units
larger for urban residents than for rural residents, and their influence on
the subjective norm was 0.051 units larger for urban residents. The influence
of sports and leisure attitude on intention was 0.179 units higher for urban
residents than for rural residents. For urban residents the subjective norm
affected only intention, not attitude, whereas for rural residents it affected
both intention and attitude; the subjective norm also had more influence on
the intention of rural residents.
From the perspective of indirect effects (Table 8), the indirect influence of
the peripheral objective factors on intention was 0.675 units for urban
residents, higher than for rural residents. The peripheral objective factors
also indirectly affected the attitude of rural residents. The subjective norm
could indirectly affect the intention of rural residents through attitude, an
effect absent for urban residents.
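In a recursive path model, an indirect effect is the sum, over all compound paths, of the products of the path coefficients along each path. A sketch using the urban-residents direct effects reported in Table 8 (variable names are ours):

```python
# Urban-residents direct effects from Table 8:
# peripheral objective factors -> attitude / subjective norm -> intention.
PATHS = {
    ("peripheral", "attitude"): 0.764,
    ("peripheral", "subjective_norm"): 0.433,
    ("attitude", "intention"): 0.807,
    ("subjective_norm", "intention"): 0.135,
}

def indirect_effect(source, target, mediators):
    """Sum of coefficient products over each source -> mediator -> target path."""
    return sum(PATHS[(source, m)] * PATHS[(m, target)] for m in mediators)

effect = indirect_effect("peripheral", "intention", ["attitude", "subjective_norm"])
print(round(effect, 3))  # -> 0.675, matching the value reported for urban residents
```

The total effect of one variable on another is then simply the direct effect plus this indirect effect.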
Table 8. Direct and indirect effects (each cell: direct effect/indirect
effect; "-" means no effect)

Urban residents
                 Peripheral obj. factors  Attitude  Subjective norm  Intention
Attitude         .764/                    -         -                -
Subjective norm  .433/                    -         -                -
Intention        /.675                    .807/     .135/            -
a5               /.540                    .706/     -                -
a20              /.493                    /.588     /.099            .730/
a19              /.578                    /.690     /.116            .856/
a18              /.569                    /.680     /.114            .843/
a17              .586/                    -         -                -
a16              .514/                    -         -                -
a15              .686/                    -         -                -
a1               /.543                    .710/     -                -
a2               /.568                    .743/     -                -
a3               /.436                    .570/     -                -
a4               /.635                    .831/     -                -
a6               /.373                    -         .860/            -
a7               /.376                    -         .867/            -
a9               /.245                    -         .564/            -

Rural residents
                 Peripheral obj. factors  Subjective norm  Attitude  Intention
Attitude         .582/.098                .257/            -         -
Subjective norm  .382/                    -                -         -
Intention        /.537                    .289/.162        .628/     -
a5               /.415                    /.157            .610/     -
a20              /.406                    /.341            /.474     .755/
a19              /.480                    /.402            /.560     .892/
a18              /.467                    /.392            /.545     .868/
a17              .575/                    -                -         -
a16              .469/                    -                -         -
a15              .698/                    -                -         -
a1               /.550                    /.208            .809/     -
a2               /.538                    /.204            .791/     -
a3               /.411                    /.156            .604/     -
a4               /.544                    /.206            .799/     -
a6               /.335                    .876/            -         -
a7               /.348                    .911/            -         -
a9               /.187                    .490/            -         -
From the perspective of total effects (Table 8), the influence of the
peripheral objective factors on the attitude, subjective norm, and intention
of urban residents was greater than for rural residents. The influence of
attitude on intention was larger for urban residents, while the influence of
the subjective norm on intention was larger for rural residents. In addition,
the subjective norm affected the attitude of rural residents but not that of
urban residents.
The sports and leisure behavior of urban residents was affected by intention,
attitude, the subjective norm, and the peripheral objective factors, with the
subjective norm, attitude, and intention acting as intermediary variables. The
subjective norm and attitude were two independent variables; both were
affected by the peripheral objective factors and, in turn, indirectly affected
sports and leisure behavior through behavioral intention. The sports and
leisure behavior of rural residents was likewise affected by intention,
attitude, the subjective norm, and the peripheral objective factors, with the
subjective norm, attitude, and intention as intermediary variables, but the
relationships among them were more complicated than for urban residents. The
peripheral objective factors affected the subjective norm and attitude while
also influencing sports and leisure behavior through behavioral intention. In
addition, the subjective norm affected the attitude of rural residents but
not that of urban residents.
After calculating the path coefficients, we reached conclusions about the
total effects. The influence of the peripheral objective factors on the
attitude, subjective norm, and intention of urban residents was higher than
for rural residents; the influence of attitude on intention was higher for
urban residents; and the influence of the subjective norm on intention was
higher for rural residents. That is to say, external objective factors (social
atmosphere, sports grounds, facilities, etc.) influenced the sports and
leisure behavior of both urban and rural residents far more than intrinsic
objective factors (personal leisure time, personal income, physical condition,
etc.). The diversity and richness of sports and leisure activities, sports
grounds, facilities, and the social atmosphere affected urban residents more
than rural residents. An affirming, positive attitude toward sports and
leisure activities influenced urban residents more, while rural residents
cared more about the views of their family, friends, and even the government
concerning their participation in sports and leisure activities.
Acknowledgement. We thank Guo Xinyan and Liu Ying for technical assistance;
Zhang Xu for guidance. This work was supported by Sichuan Social Science Fund
(Grant # SC16XK025).
References
1. Ajzen I (1991) The theory of planned behavior. Organ Behav Hum Decis Process
50:179–211
2. Ajzen I (2016) Ownership structure, diversification, and corporate performance
based on structural equation modeling. J Real Estate Portfolio Manage 22:63–73
3. Beville JM, Meyer M et al (2014) Gender differences in college leisure time physical
activity: application of the theory of planned behavior and integrated behavioral
model. J Am Coll Health 62(3):173–184
4. Boudreau F, Godin G (2014) Participation in regular leisure-time physical activ-
ity among individuals with type 2 diabetes not meeting canadian guidelines: the
influence of intention, perceived behavioral control, and moral norm. Int J Behav
Med 21:918–926 (in Chinese)
5. Caldwell JT, Ford CL et al (2016) Intersection of living in a rural versus urban
area and race/ethnicity in explaining access to health care in the United States.
Am J Public Health 106(8):1463–1469
6. Cerin E, Vandelanotte C et al (2008) Recreational facilities and leisure-time phys-
ical activity: an analysis of moderators and self-efficacy as a mediator. Health
Psychol 27(2):126–135
7. Heesch KC, Masse LC (2004) Do women have enough time for leisure-time physical
activity. Res Q Exerc Sport 75(1):A101–A101
8. Lloyd K, Little DE (2010) Self-determination theory as a framework for under-
standing women’s psychological well-being outcomes from leisure-time physical
activity. Leisure Sci 32(4):369–385
9. Nurmi J, Hagger MS et al (2016) Relations between autonomous motivation and
leisure-time physical activity participation: the mediating role of self-regulation
techniques. J Sport Exerc Psychol 38(2):128–137
10. Qiu Y (2008) Research on restricted factors of leisure sport behavior
developing stage: an assumption theory framework. China Sport Sci 28:71–75 (in Chinese)
11. Qiu Y (2009) Research on the Motivations and Constraints of the Stages of Leisure
Physical Activity. Zhejiang University Press (in Chinese)
12. Qiu Y (2011) Research on the negotiation strategies of the leisure sports activities.
China Sport Sci 31:8–16 (in Chinese)
13. Qiu YJ, Jiao XU (2014) Research on the characteristics of constraints to women’s
leisure sports activities and the relationship with behaviors. China Sport Sci 32:25–
33 (in Chinese)
14. Qiu YJ, Liang MY (2012) Qualitative research on constraints of Chinese women
leisure sports behavior-based on the perspective of the theory of social gender.
China Sport Sci 34:75–82 (in Chinese)
15. Sabiston CM, Crocker PRE (2008) Exploring self-perceptions and social influences
as correlates of adolescent leisure-time physical activity. J Sport Exerc Psychol
30(1):3
16. Shields M, Ellingson L et al (2008) Determinants of leisure time physical activity
participation among Latina women. Leisure Sci 30(5):429–447
17. Simpson VL, Hyner GC, Anderson JG (2013) Lifestyle behavior change and repeat
health risk appraisal participation: a structural equation modeling approach. Am
J Health Promot 28(2):128–135
18. Stanis SAW, Schneider IE, Russell KC (2009) Leisure time physical activity of park
visitors: retesting constraint models in adoption and maintenance stages. Leisure
Sci 31(3):287–304
Effects of Urban and Rural Residents’ Behavior Differences 105
19. VanSickle J, Schaumleffel NA (2016) Developing recreation, leisure, and sport pro-
fessional competencies through practitioner/academic service engagement partner-
ships. Schole A J Leisure Stud Recreation Educ 31(2):37–55
20. Vanwesenbeeck I, Walrave M, Ponnet K (2016) Young adolescents and advertising
on social network games: a structural equation model of perceived parental media
mediation, advertising literacy, and behavioral intention. J Advertising 45(2):1–15
21. Xu J (2006) The study on the characteristics of the residents’ leisure sports behav-
iors. J Guangzhou Sport Univ 26:97–100 (in Chinese)
22. Ye W (2014) Theory and empirical research on the intention of leisure sports
behavior. Sport Sci Technol 35:102–104
23. Xu Y (2013) A study on the citizens’ behavior intention of sports leisure based on
the theory of planned behavior. Yunnan Geogr Environ Res 25:71–76 (in Chinese)
24. Zhao B (2011) Consumer ethical beliefs associated with its birthplace: empirical
research of China’s urban and rural dual society background. Manage World 1:92–
100 (in Chinese)
25. Zhao Q, Xu N et al (2014) The study of the relation between leisure sports con-
sumption culture and consumption behaviors. Sichuan Sports Sci 33:101–106 (in
Chinese)
Fast Multiobjective Hybrid Evolutionary
Algorithm Based on Mixed Sampling Strategy
1 Introduction
Multiobjective evolutionary algorithms (MOEAs) have been identified as very
suitable for solving multiobjective optimization problems (MOPs) [4]. An MOEA
is a heuristic method that searches for the Pareto optimal solutions of an
MOP through global and local search across generations, and can thus provide
the decision maker (DM) with a number of practical solutions from which to
choose according to actual needs. When optimizing any single objective, the
collaboration of the other objectives must be considered at the same time,
keeping the obtained solutions as close as possible to the true Pareto front
while distributing them evenly along it.
Moreover, individual evolutionary algorithms each have their own
characteristics, and each is effective only in a limited range of
applications; no single algorithm can solve all problems. How to combine
these evolutionary algorithms so that each plays to its strengths has become
one of the challenging research topics in evolutionary computation.
To compensate for the shortcomings of any single MOEA, researchers have
increasingly combined different algorithms into hybrid algorithms. In 2012,
Zhang and Fujimura proposed an improved vector evaluated genetic algorithm
with archive (IVEGA-A) [6]. IVEGA-A combines the strong convergence ability
of the VEGA [7] method toward the boundary regions of the Pareto front [11]
with an elite population updating mechanism based on a new fitness function,
which ensures the overall performance of the algorithm in the central region
of the Pareto front. Zhang and Li introduced a differential evolution strategy
into MOEA/D [5].
In this paper, we propose a multiobjective hybrid evolutionary algorithm
(MOHEA) to solve multiobjective optimization problems while improving
convergence and distribution performance and reducing computational time. The
rest of the paper is organized as follows: Sect. 2 gives an overview of the
sampling strategy, Sect. 3 summarizes MOHEA, and Sect. 4 presents experimental
results illustrating the efficiency of the algorithm.
The PDDR-FF fitness function takes the form

eval(Si) = q(Si) + 1 / (p(Si) + 1),

where q(Si) is the number of individuals that dominate Si, p(Si) is the number
of individuals dominated by Si, and popSize is the size of the population. If
the individual Si is a non-dominated solution, q(Si) = 0, so its fitness
depends only on p(Si): the more individuals Si dominates, the smaller the
value of 1/(p(Si) + 1) and the smaller the PDDR-FF value. If q(Si) = 0 and
p(Si) = 0, the PDDR-FF value of Si is 1, which means that no individual
dominates Si and Si dominates no individual. The PDDR-FF value of a
non-dominated solution is therefore never more than 1. If Si is a dominated
individual, q(Si) ≥ 1; the fewer the individuals that dominate Si, the smaller
q(Si), and the more individuals Si dominates, the larger p(Si) and the smaller
1/(p(Si) + 1), so the PDDR-FF value is smaller.
108 W. Zhang et al.
A smaller PDDR-FF value thus means that Si dominates more individuals and is
dominated by fewer; the smaller the PDDR-FF value, the better. Moreover,
PDDR-FF clearly separates dominated from non-dominated solutions: a
non-dominated individual's fitness value is never more than 1, and even when
all individuals are non-dominated, those dominating different numbers of
individuals receive different fitness values. In particular, the fitness value
(close to 0) of non-dominated individuals near the central area of the Pareto
front, which have a large domination area, is smaller than the value (close
to 1) of individuals in the edge regions.
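Assuming the standard PDDR-FF form eval(S) = q(S) + 1/(p(S) + 1) for a minimization problem, the fitness can be sketched as follows (a minimal illustration, not the paper's implementation):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pddr_ff(i, population):
    """PDDR-FF fitness of individual i: q(Si) + 1/(p(Si) + 1); smaller is better."""
    q = sum(dominates(other, population[i])
            for j, other in enumerate(population) if j != i)
    p = sum(dominates(population[i], other)
            for j, other in enumerate(population) if j != i)
    return q + 1.0 / (p + 1)

# (3.0, 3.0) is dominated by (2.0, 2.0); the other three points are non-dominated.
pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
```

Note how the non-dominated point (2.0, 2.0), which dominates one individual, gets the value 0.5, while the other non-dominated points get exactly 1 and the dominated point gets a value greater than 1, consistent with the properties described above.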
Step 5. Generation of the mating pool: mix the sub-populations and A(t) to
form the mating pool. In the mating pool, sub-population 1 stores the
excellent individuals for objective 1 and sub-population 2 keeps the
excellent individuals for the other objective, while individuals with good
PDDR-FF values are stored in A(t). For a two-objective optimization problem,
the sizes of the two sub-populations and the archive are each set to half the
population size. As a result, the mating pool has three parts: one third of
the individuals serve objective 1, one third serve objective 2, and the
remaining individuals serve both objectives.
Step 6. Genetic operations: according to the selection operator, two
individuals are selected from the mating pool as parents, and the new
generation P(t + 1) is produced by the genetic operations (crossover and
mutation).
Step 7. Updating of the archive (PDDR-FF sampling strategy): mix the new
generation P(t + 1) and the archive to form the temporary population A′(t).
Calculate the fitness value of every individual in this population, sort them
in ascending order, and choose the | A(t) | individuals with the smallest
values as A(t + 1).
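Step 7 amounts to truncating the merged population by ascending fitness. A minimal sketch, where `fitness` stands in for the PDDR-FF evaluation (smaller is better):

```python
def update_archive(population, archive, fitness, archive_size):
    """Merge P(t+1) with the archive and keep the archive_size best individuals."""
    merged = population + archive
    merged.sort(key=fitness)  # ascending: smaller PDDR-FF value is better
    return merged[:archive_size]
```

For instance, with scalar stand-in individuals and the identity as fitness, merging [3, 1] with [2, 4] and keeping the best two yields [1, 2].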
The genetic encoding in this paper is real-coded. Each gene of the chromosome
consists of a random, uniformly distributed floating-point number in [0, 1],
or the true value of the decision variable. Each chromosome is thus a vector
of floating-point numbers whose length is determined by the number of decision
variables. Real coding makes complex genetic operations easy to implement,
enhances the accuracy of the operations, and strengthens the algorithm's
ability to search large spaces.
For example, the benchmark test problem ZDT6 [10] has 10 decision variables.
Suppose the vector V = (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1) is a
solution of ZDT6; then V can be used directly as the gene sequence of the
chromosome, or mapped to the gene sequence according to the relationship in
Fig. 3.
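A real-coded chromosome for ZDT6 is just such a vector of 10 floats in [0, 1], and it can be evaluated directly. This sketch uses the standard ZDT6 definition from the benchmark literature, which we assume matches the formulation in [10]:

```python
import math

def zdt6(x):
    """Standard two-objective ZDT6 benchmark (minimization)."""
    f1 = 1 - math.exp(-4 * x[0]) * math.sin(6 * math.pi * x[0]) ** 6
    g = 1 + 9 * (sum(x[1:]) / (len(x) - 1)) ** 0.25
    f2 = g * (1 - (f1 / g) ** 2)
    return f1, f2

# The vector V from the text, used directly as a chromosome.
chromosome = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
f1, f2 = zdt6(chromosome)
```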
The crossover operator used in this section is the Simulated Binary Crossover
(SBX) proposed by Deb [1]. When real-coded chromosomes are crossed by SBX,
the distribution parameter β of the parents is calculated by the equation:
β = (2u)^(1/(η+1)),                u ≤ 0.5,
β = (1/(2(1 − u)))^(1/(η+1)),      otherwise,        (2)

where u is a random number uniformly distributed in [0, 1).
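Equation (2) translates directly into code. A sketch of SBX for real-coded parents, where η is the distribution index and `rng` is injectable so the randomness can be fixed for testing (this is an illustrative implementation, not the paper's code):

```python
import random

def sbx(parent1, parent2, eta=15.0, rng=random.random):
    """Simulated Binary Crossover: two real-coded parents -> two children."""
    child1, child2 = [], []
    for x1, x2 in zip(parent1, parent2):
        u = rng()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        # Children are symmetric blends of the parents around their midpoint.
        child1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        child2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return child1, child2
```

Note that for any u the two children preserve the parents' mean, and at u = 0.5 we get β = 1, so the children coincide with the parents.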
Fast Multiobjective Hybrid Evolutionary Algorithm 111
S = S + δωmax , (4)
where v ∈ V (0, 1) and γ > 0 represents the variance distribution index,
determined by the decision maker.
In this study, NSGA-II [3] and SPEA2 [13] use binary tournament selection as
the selection operator; SPEA2 must consider the dominance relationships
between individuals, and NSGA-II additionally considers the crowding distance
of individuals. The mating pool of MOHEA consists of two parts. The first is
the individuals that are excellent on a single objective: in the selection
process, the individuals in the current population are first sorted by each
objective, the sub-population with excellent single-objective values is
selected in order, and all sub-populations are included in the mating pool.
The other part is the non-dominated individuals in the elite population. To
ensure sufficient information exchange between individuals of the two parts,
two individuals are selected randomly from the mating pool during the genetic
operations.
Benchmark problems are used to test the MOHEA proposed in this paper, and two
classical multiobjective evolutionary algorithms, NSGA-II and SPEA2, are
tested and compared under the same conditions. The test functions are the ZDT
and DTLZ problems commonly used in the MOEA domain. The algorithms are
implemented in Java and run on a PC with a Core 2 CPU at 2.8 GHz, 2 GB of
memory, and the Windows XP operating system.
Let Sj be the solution set of each algorithm (j = 1, 2, 3). The tests consider
two common performance measures, one for convergence and one for distribution.
Coverage C(S1, S2) is the fraction of individuals in S2 that are dominated by
individuals in S1 [12]; the larger the C(S1, S2) value, the better the
convergence performance of S1.
Spacing SP is the standard deviation of the closest distances between
individuals in Sj [8]; a smaller SP value means better distribution
performance.
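The coverage metric can be sketched as follows, for minimization problems; the two fronts are hypothetical examples, not results from the paper:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(s1, s2):
    """C(S1, S2): fraction of S2 dominated by at least one member of S1."""
    return sum(any(dominates(a, b) for a in s1) for b in s2) / len(s2)

# Hypothetical non-dominated fronts from two algorithms.
front_a = [(1.0, 3.0), (2.0, 2.0)]
front_b = [(2.0, 4.0), (3.0, 3.0), (0.5, 0.5)]
```

Since C is asymmetric, C(S1, S2) and C(S2, S1) generally differ and both are usually reported when comparing two algorithms.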
The mathematical definitions of the 12 benchmark tests used in the experiments
are given in [10]. Among them, ZDT1, ZDT2, and ZDT3 have 30 decision
variables, while ZDT4 and ZDT6 have 10. The numbers of decision variables and
objectives of the DTLZ problems can be scaled arbitrarily; for the k and
| xk | values, if the number of objectives is 2, then k = 2.
The parameters used in the three algorithms are the same as in the original
papers; the detailed values are shown in the following.
The three algorithms share the same crossover and mutation operators but
differ in their fitness evaluation mechanisms and elite strategies. It should
be pointed out that NSGA-II and SPEA2 select all non-dominated solutions from
the elite set as their Pareto sets, whereas MOHEA updates the Pareto set by
choosing non-dominated solutions from both the elite set and the population.
As the stopping criterion, this paper uses the most widely adopted measure,
the number of function evaluations.
The performance of the algorithms is evaluated with the convergence index C
and the distribution index SP. Each algorithm is run 30 times and the results
are compared; the running time of each algorithm is recorded to obtain the
CPU efficiency index. Boxplots are used to summarize the 30 runs of each
algorithm [9]. The boxplot is an important tool in statistical analysis: it
clearly reflects the distribution of the data and is an effective way to
display EA results graphically. The upper and lower edges of the box
represent the upper and lower quartiles of the sample, the middle line
represents the median, the ends of the whiskers mark the maximum and minimum
values, the notch in the middle of the box represents a confidence interval,
and the scattered points represent outliers.
Table 1. C index
Table 2. SP index
5 Conclusions
In this study, a multiobjective hybrid evolutionary algorithm (MOHEA) based
on a mixed sampling strategy was proposed to solve multiobjective
optimization problems while improving convergence and distribution
performance and reducing computational time. The VEGA sampling strategy
favors the edge regions of the Pareto front but neglects the search of the
central region; the PDDR-FF sampling strategy makes up for this shortcoming.
The mixed sampling strategy can therefore converge to multiple regions of
the Pareto front, improving efficacy, and since it requires no distance
computation, MOHEA also improves efficiency. Numerical comparisons indicated
that the proposed MOHEA outperforms classical MOEAs such as SPEA2 and
NSGA-II in convergence and distribution performance as well as computational
time.
References
1. Deb K, Beyer HG (2001) Self-adaptive genetic algorithms with simulated binary
crossover. Evol Comput 9(2):197–221
2. Deb K, Goyal M (1999) A combined genetic adaptive search (geneas) for engineer-
ing design. Comput Sci Inform 26:30–45
3. Deb K, Pratap A et al (2002) A fast and elitist multiobjective genetic algorithm:
NSGA-II. IEEE Trans Evol Comput 6(2):182–197
4. Gaspar-Cunha A, Covas JA (2001) Robustness in multi-objective optimization
using evolutionary algorithms. John Wiley & Sons, Inc
5. Li H, Zhang Q (2009) Multiobjective optimization problems with complicated
pareto sets, MOEA/D and NSGA-II. IEEE Trans Evol Comput 13(2):284–302
6. Zhang W, Fujimura S (2012) Multiobjective process planning and
scheduling using improved vector evaluated genetic algorithm with archive. IEEJ
Trans Electr Electron Eng 7(3):258–267
7. Schaffer JD (1985) Multiple objective optimization with vector evaluated genetic
algorithms. In: International Conference on Genetic Algorithms, pp 93–100
8. Schott JR (1995) Fault tolerant design using single and multicriteria
genetic algorithm optimization. Master's thesis, Massachusetts Institute of
Technology
9. Tukey JW (1978) Variations of box plots. Am Stat 32(1):12–16
10. Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans
Evol Comput 3(2):82–102
11. Yu X, Gen M (2010) Introduction to Evolutionary Algorithms. Springer, London
12. Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative
case study and the strength pareto approach. IEEE Trans Evol Comput 3(4):257–
271
13. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. TIK-Report 103, ETH Zurich
Literature Mining Based Hydrogen
Fuel Cell Research
1 Introduction
Many environmental issues, such as acid rain, stratospheric ozone depletion and global climate change, have been caused by or are related to the production, transformation and use of fossil energy [5]. Energy is the material basis for a country's social development and scientific and technological progress, but the continued use of the three major fossil fuels has made energy conservation and environmental problems increasingly serious. Severe PM2.5 exceedances have given human beings a warning in the form of continuous haze weather, and the air pollution problem, being mainly and closely related to fossil fuel combustion and the transport industry, can only be settled by addressing these sources. The growing trend in the world's energy needs is expected to continue, so growth in energy generation capacity will be needed [1]. Cars are an important means of transport in people's lives, yet automobile exhaust is a major cause of increasingly serious environmental pollution. Therefore, an alternative to traditional fossil fuels needs to be found.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_9
118 L. Li et al.
Hydrogen has a high energy density, and the energy released is enough to run a vehicle's engine. Hydrogen energy systems appear to be one of the most effective solutions and can play a significant role in providing a better environment and sustainability [4,14]. A hydrogen fuel cell vehicle itself operates with no noise, no vibration, no loss, and a long service life. The electrodes serve only as the site of the chemical reaction and as conductive channels; they do not participate in the reactions. The fuel cell consumes hydrogen and oxygen, and the product is clean water, with no CO or CO2 and no sulfur or particulate exhaust. The advantages over thermal power can be seen clearly in Table 1. A hydrogen fuel cell vehicle can therefore run with zero emissions and zero pollution in the true sense, and hydrogen is regarded as a perfect vehicle fuel. Fuel cell technology based on hydrogen energy is a core automotive technology of the 21st century, with revolutionary significance for the car industry.
Table 1. Comparison of atmospheric pollution between fuel cells and thermal power
2 Literature Mining
Hydrogen fuel cells are a cutting-edge research topic in current society, and research on the issue already has a certain foundation and body of achievements. Against this background, literature mining has been shown to be a powerful method for elucidating major trends across time in the published scientific literature, so that topic maps can be built [2]. Garfield believed that academic literature citation indexing was crucial when researching similar topic areas [7]. A citation index is a synthesized result based on journal articles, keywords, publication dates and abstracts, and it is able to separate and highlight the various influences in a specific field, allowing the research with the greatest impact to be easily identified. Literature mining of the published scientific literature allows key areas and trends to be discovered [2]. CiteSpace is one of the latest developments in generic approaches to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases [3].
A total of 1271 papers were retrieved with the search limited to "TI = HFC", and all the data were imported into CiteSpace for processing. Finally, 65 categories were formed after clustering, as Fig. 1 shows.
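As a toy illustration of the clustering step (CiteSpace's own network-based algorithms are far more sophisticated; the function and threshold below are purely hypothetical stand-ins), keywords can be grouped by thresholded co-occurrence using connected components:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_clusters(keyword_lists, min_count=2):
    """Group keywords into clusters: link two keywords whenever they
    co-occur in at least `min_count` papers, then take the connected
    components of the resulting graph (union-find)."""
    pairs = Counter()
    for kws in keyword_lists:
        pairs.update(combinations(sorted(set(kws)), 2))

    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x

    for (a, b), count in pairs.items():
        if count >= min_count:
            parent[find(a)] = find(b)   # union the two components

    clusters = {}
    for kw in {k for kws in keyword_lists for k in kws}:
        clusters.setdefault(find(kw), set()).add(kw)
    return list(clusters.values())
```

On the 1271 retrieved records, per-paper keyword lists would play the role of `keyword_lists`; the real analysis additionally weights edges and labels the resulting clusters.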
2.2 Results
Keywords are important index terms used to provide access to articles that
have been published or presented in journals and conference databases and are
(Figure omitted: keyword map with clusters including solar, hydrogen, fuel cell, solar-hydrogen, renewable energy, vehicles, efficiency, HFC network, costs, two-phase mixture, transport, storage, emissions, portable, stationary, heat transfer, MCFC, PEMFC, physical layer design, two-way HFC, environment, and HFC refrigerant.)
As the schematic diagram in Fig. 3 shows, the fuel (hydrogen) and the oxidizer (oxygen) dissociate to produce electrons and ions; the ions move from one pole toward the other through the electrolyte, while the dissociated electrons are guided through an external electric circuit. The fuel (hydrogen) is supplied to the anode and the oxidizer to the cathode. The hydrogen dissociates into H+ and e−; the H+ enters the electrolyte, while the e− travels toward the positive pole through the external circuit. The H+ and the oxygen in the air react at the positive pole, absorbing e−, to form water. As long as fuel and oxygen are supplied, the fuel cell delivers a constant power output.
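The dissociation and recombination just described correspond to the standard hydrogen fuel cell half-reactions (textbook electrochemistry; these are not Eqs. (3)–(4) of the paper, which concern DMFCs):

$$\text{Anode:}\quad \mathrm{H_2 \longrightarrow 2H^+ + 2e^-}$$
$$\text{Cathode:}\quad \mathrm{\tfrac{1}{2}O_2 + 2H^+ + 2e^- \longrightarrow H_2O}$$
$$\text{Overall:}\quad \mathrm{H_2 + \tfrac{1}{2}O_2 \longrightarrow H_2O}$$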
(Figure omitted: the schematic shows fuel in at the anode and oxidant in at the cathode, with depleted fuel, depleted oxidant and product gases out, an external load carrying e−, and the charge carrier crossing the electrolyte for each type: OH− for the AFC, H+ for the PEMFC and PAFC, CO3 2− for the MCFC, and O2− for the SOFC, with H2O produced in each case.)
Fig. 3. The schematic diagram of FCs and different types FCs’ operation principle
the temperature range in which the cell operates, the fuel required, and other
factors. In turn, these chemical reaction condition characteristics affect the appli-
cations for which these cells are most suitable. There are six major types of fuel
cells presently under development, each with its own advantages, limitations,
and potential applications, as shown in Table 2. Respectively, they are proton
exchange membrane fuel cells (PEMFC), alkaline fuel cells (AFC), phosphoric
acid fuel cells (PAFC), molten carbonate fuel cells (MCFC), solid oxide fuel cells
(SOFC) and direct methanol fuel cells (DMFC). The operating characteristics
of these systems are highlighted in Table 2.
delivery and short start-up time compared with other fuel cells, characteristics which guarantee that PEMFC technology is competitive for transportation and for commercial applications such as stationary and portable power generation [13]. PEMFCs use a solid polymer electrolyte (a Teflon-like membrane), an excellent proton conductor and electron insulator, to exchange ions between two porous electrodes, and operate at temperatures below 100 °C.
The operating schematics are shown in Fig. 3. Therefore, the membrane is the
key point in the PEMFCs technology and current research on PEMFCs also
focuses on the development of a proton exchange membrane with high proton
conductivity, low electronic conductivity, good thermal stability, and low cost.
Figure 3 shows that the working principles of the PAFCs and PEMFCs are similar even though their structures are different. PAFCs use liquid phosphoric acid as the electrolyte, contained in a Teflon-bonded silicon carbide matrix, and work at about 150 to 200 °C. Unlike PEMFCs and AFCs, PAFCs are more tolerant of and insensitive to impurities in the hydrogen fuel. The advantages of PAFCs are that much less metal catalyst is required than in AFCs and the reducing agent purity requirements are significantly lower, allowing for up to 5% carbon monoxide. PAFCs are viewed as the "first generation" of modern FCs and, given the state of development, are the first mature cell type to be used commercially. Plants of 100–200 and 500 kW are typically available for stationary power generation. A 1.3 MW system has already been tested in Milan, and PAFCs have been installed at 70 sites in Europe, the USA and Japan [16].
MCFCs are high-temperature (650 °C) fuel cells which commonly use a molten carbonate salt, such as lithium carbonate, potassium carbonate or sodium carbonate, suspended in a porous ceramic matrix as the electrolyte. To date, this type has been developed for natural gas and coal-based power plants for industrial and military use. The SOFC electrolyte is a hard, non-porous ceramic compound such as yttria-stabilized zirconia, which has strong conductivity and operates at a much higher temperature of 800–1000 °C [12]. SOFCs are considered to be around 50%–60% efficient at converting fuel to electricity and have rapidly grown in popularity for stationary applications in recent years.
Unlike the five fuel cell types fueled by hydrogen, DMFCs are powered by pure methanol or alcohol, from which hydrogen can be generated via a reforming reaction, as presented in Sect. 3. Direct methanol fuel cell technology is relatively new compared with that of fuel cells powered by pure hydrogen, and DMFC research and development is roughly 3–4 years behind that of the other fuel cell types [10]. The working principle and reaction equations are shown in Eqs. (3)–(4).
Solar energy is a ubiquitous and rich source, and a kind of renewable energy with no environmental pollution. China has abundant solar energy resources: the area with annual total solar radiation greater than 1050 kWh/m2 covers more than 96% of its land. There are several methods for producing hydrogen from solar energy; currently, the most widely used is to obtain hydrogen by electrolyzing water at low temperature. The solar-hydrogen fuel cell power generation system is a new type of energy system based on solar-powered water decomposition, generating electricity by the electrochemical reaction of the hydrogen and oxygen produced. Countries are committed to developing and applying solar-hydrogen fuel cells because of the rich energy source, high energy conversion efficiency, and absence of pollution in the whole transformation process. In the future energy system, solar energy will become the main primary energy replacing coal, oil and gas, and hydrogen fuel cells will become the clean energy replacing petrol, diesel and chemical batteries.
(Figure omitted: schematic of the solar-hydrogen system, with a photovoltaic panel, MPPT, battery, diode, electrolyzer, hydrogen storage (H2), fuel cell, solar collector and hot water tank.)
As a reversible fuel cell, it gives you the freedom to invent your own clean energy applications using fuel cells and renewable hydrogen formed from sunlight and water. Figure 4 shows a schematic diagram of a hydrogen production system that
6 Conclusion
Through research on existing hydrogen fuel cell technologies and their applications, we learned that the real value of hydrogen fuel cells is far from poor. In particular, the reversible solar-hydrogen fuel cell system stands out from other designs, with a steady stream of resources and a zero-pollution cycle. The direct hydrogen fuel cell vehicle is preferred, since it is less complex, has better fuel economy, lower greenhouse gas emissions and greater oil import reductions, and would lead to a sustainable transportation system once renewable energy is used to produce hydrogen. All countries need to step up comprehensive, multi-functional analysis and research on the characteristics of hydrogen fuel cells in order to remain active in the field of energy development in the 21st century.
References
1. Acar C, Dincer I (2014) Comparative assessment of hydrogen production methods from renewable and non-renewable sources. Int J Hydrogen Energy 39(1):1–12
2. Bruijn BD, Martin J (2002) Getting to the (c) ore of knowledge: mining biomedical
literature. Int J Med Inform 67(1):7–18
3. Chen C (2006) CiteSpace II: detecting and visualizing emerging trends and tran-
sient patterns in scientific literature. J Am Soc Inform Sci Technol 57(3):359–377
4. Dincer I (2007) Environmental and sustainability aspects of hydrogen and fuel cell systems. Int J Hydrogen Energy 31(1):29–55
5. Dincer I (2012) Green methods for hydrogen production. Int J Hydrogen Energy
37(2):1954–1971
6. Joshi AS, Dincer I, Reddy BV (2010) Exergetic assessment of solar hydrogen pro-
duction methods. Int J Hydrogen Energy 35(10):4901–4908
7. Leydesdorff L (2015) Bibliometrics/citation networks. arXiv preprint arXiv:1502.06378
8. Li J, Chen L et al (2015) Finite time optimizations of a Newton's law Carnot cycle. Int J Energy Environ 5(6):517
1 Introduction
The Shift-Share Method (SSM) was put forward by the American scholars Dunn, Perloff, Lampard and Muth in the 1960s, and was summarized by Dunn in the early 1980s into the form that is widely acknowledged now [2]. The method is widely applied in regional economic assessment and industrial analysis for its comprehensiveness and dynamic character, and is mainly used to analyze regional economic efficiency. Due to the particularity of each object, every practical application has its own reference value.
SSM was introduced to China in the 1980s. Nowadays many domestic scholars apply SSM widely in inter-regional studies, and some surveys have already found it evident and suitable for assessing industrial development [1,5,9]. However, the use of SSM in existing research is based on the analysis
130 H. Lan et al.
where W and U respectively represent the structural effect index and the regional competition effect index. If Gi is relatively large and L is greater than 1, regional growth is faster than that of the surrounding area and the whole country. If Pi is relatively large and W is greater than 1, the high-growth industrial departments hold a major share; in this situation the regional overall economic structure is relatively good and the structure contributes greatly to economic growth. If Di is large and U is greater than 1, every industrial department shows an obvious increasing trend with strong competitiveness.
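The W, U and L indices build on the classical shift-share components. As a minimal sketch of that underlying decomposition (the classical Dunn form, with illustrative names; the paper's exact index formulas are not reproduced here):

```python
def shift_share(base, region_growth, nation_growth, nation_total_growth):
    """Classical shift-share decomposition per industry i:
    N = national-share component, P = structural (industry-mix) component,
    D = competitiveness (regional-shift) component, so that
    N + P + D equals the region's total change in industry i."""
    N = {i: base[i] * nation_total_growth for i in base}
    P = {i: base[i] * (nation_growth[i] - nation_total_growth) for i in base}
    D = {i: base[i] * (region_growth[i] - nation_growth[i]) for i in base}
    return N, P, D
```

A positive P marks a favorable industrial structure and a positive D a regional competitive advantage, which is the information the W and U indices aggregate.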
The GM(1,1) model treats a system with partly known and partly unknown information, and can accurately describe the state and behavior of a social economic system. On one hand, the method helps avoid the Achilles' heel of scarce relevant data; on the other hand, it avoids subjective assumptions due to personal preference, experience, knowledge and macro policy, and can thus better grasp the self-evolution law of the system.
GM(1,1) is modeled as follows:
(1) Take the original sequence $X^{(0)}$ and compute $x^{(1)}(i) = \sum_{m=1}^{i} x^{(0)}(m)$, $i = 1, 2, \cdots, n$, to create a new sequence $X^{(1)}$;
(2) Create the whitenization differential equation [12] of GM(1,1) as the predictive model for the sequence $X^{(1)}$, that is $\frac{dX^{(1)}}{dt} + aX^{(1)} = b$, where $a$ and $b$ are undetermined parameters, respectively the evolution parameter and the grey action quantity;
(3) Let $\hat{a} = (a, b)^{T}$ be the undetermined parameter vector and estimate it by least squares, $\hat{a} = (B^{T}B)^{-1}B^{T}X$, where
$$
B = \begin{bmatrix} -\frac{1}{2}[x^{(1)}(1) + x^{(1)}(2)] & 1 \\ -\frac{1}{2}[x^{(1)}(2) + x^{(1)}(3)] & 1 \\ \vdots & \vdots \\ -\frac{1}{2}[x^{(1)}(n-1) + x^{(1)}(n)] & 1 \end{bmatrix}, \quad
X = \begin{bmatrix} x^{(0)}(2) \\ x^{(0)}(3) \\ \vdots \\ x^{(0)}(n) \end{bmatrix};
$$
(4) Solve GM(1,1) to get $\hat{X}^{(1)}(i+1) = \left(x^{(0)}(1) - \frac{b}{a}\right)e^{-ai} + \frac{b}{a}$;
(5) Apply the Inverse Accumulated Generating Operation to recover the predicted value $\hat{X}^{(0)}(i+1) = \hat{X}^{(1)}(i+1) - \hat{X}^{(1)}(i)$;
(6) Check the prediction model for errors, including the residual test, the relational coefficient test and the posterior variance test [11].
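The six steps can be sketched in a few lines; the implementation below is illustrative (function and variable names are our own), using least squares for step (3) and the inverse AGO for step (5), with the error checks of step (6) omitted:

```python
import numpy as np

def gm11(x0, horizon=1):
    """Fit a GM(1,1) grey model to the sequence x0 and return the fitted
    values plus `horizon` forecast points, with the estimated a and b."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                          # step (1): AGO sequence
    z1 = 0.5 * (x1[:-1] + x1[1:])               # background values
    B = np.column_stack([-z1, np.ones(n - 1)])  # step (3): design matrix
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # step (4)
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # step (5): inverse AGO
    return x0_hat, a, b
```

For a roughly geometric series such as annual GDP, the model reproduces the trend closely; the residual, relational coefficient and posterior variance tests should still be applied before trusting the forecasts.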
The data come from the Leshan City Statistical Yearbook (1999–2009) [3], documents from relevant departments, and the Sichuan Province Statistical Yearbook (1999–2009) [8]. The data are calculated in current prices.
Table 1. 2010–2015 The prediction of industrial data of Leshan city (L) and Sichuan
province (S)
the largest: the relative prediction error between Leshan and Sichuan province in tourism is more than 20% due to the profound impact of the SARS outbreak [7]. This conclusion was also mentioned in previous studies, and the GM(1,1) model still cannot eliminate the phenomenon effectively. Therefore, acknowledging the prediction error and considering reality, the three sets of predictive data meet the precision requirement.
Calculate the correlation coefficient, the correlation degree (with ρ = 0.5, ζ > 0.6 is satisfactory) and the variance ratio (if C < 0.35 and P ≥ 0.95, the forecasting precision is regarded as superior). The results in Table 3 show that the correlation degree of every index is greater than 0.6, the variance ratio of every index is less than 0.35, and every small-error probability is 1. By these inspection standards, the prediction accuracy is high and the predicted values are highly credible.
Index Leshan 2002 2003 2004 2005 2006 2007 2008 2009
Pij GDP1 2.07 2.641 9.606 3.293 4.302 17.861 12.464 −4.083
GDP2 5.159 10.399 19.066 21.633 26.034 32.233 43.85 29.615
GDP3 3.449 4.776 9.516 8.173 9.469 13.221 12.258 21.453
W 1.0031 1.0028 1.0081 1.0086 1.004 1.0105 1.0011 1.0069
Dij GDP1 −1.15 1.224 1.969 −3.07 3.6 2.341 −3.404 2.807
GDP2 2.678 14.409 3.758 −2.488 3.218 6.75 20.826 −30.634
GDP3 3.035 0.494 −2.644 −0.083 −1.286 −1.924 5.429 9.524
U 1.0622 1.0086 0.9835 1.0127 1.0123 1.0321 0.9716 1.027
L 1.0655 1.0114 0.9915 1.0214 1.0164 1.0429 0.9727 1.0341
Table 4 indicates that during 2002–2003, 2005–2007 and 2009, regional economic growth was faster than the overall level of Sichuan province, while the growth rate was lower than the provincial level in 2004 and 2008. The chart shows that the main reason was a decline in industrial sector competitiveness (U04 = 0.9835 < 1, U08 = 0.9716 < 1). During 2002–2009, the structural effect index W was always greater than 1, which indicates that industries with potential and high-speed growth form the majority of the Leshan economy. Moreover, the regional overall economic structure is sound and contributes greatly to economic growth, while the proportions of the secondary and tertiary industries are being optimized step by step. During 2002–2003, 2005–2007 and 2009, the regional competition effect index U was greater than 1, which indicates the overall increasing trend
Leshan’s Industries Shift-Share Analysis and Prediction 135
Choose the structure deviation component P [10] as the abscissa and the competitiveness deviation component D as the ordinate to draw the SSM picture of Leshan's three industries during 2010 to 2015.
Fig. 1. 2010–2015 The SSM analysis Picture of three industries of Leshan city
Table 5 and Fig. 1 indicate that during 2010–2015 the relative growth rate (L) is greater than 1, meaning that regional economic growth will be faster than the overall level of Sichuan province during the Twelfth Five-Year period; W is
always greater than 1, which indicates the regional overall economic structure will continue its good development trend. Further study of the structure deviation component of each industrial department finds that the structure deviation component forms the majority of each department's economic contribution, proving that the departmental structure contributes greatly to economic growth and that Leshan city already has a certain scale. At the same time, compared with the secondary industry, the tertiary industry shows a growing gap in scale. The competitiveness effectiveness index shows that every sector has a strong overall growth trend during the Twelfth Five-Year period, with strong competitiveness. Further study of the competitiveness deviation component shows that, apart from the obvious advantage of the secondary industry, the primary and tertiary industries are at the average level of the whole province without superiority. The region still depends too much on the secondary industry.
In the analysis of Leshan's three industries we stress the further study of the tertiary industry, whose SSM appraisal for 2002–2009 gives the following result: the relative growth rate fluctuated, meaning there was no obvious advantage in its economic growth compared with the level of Sichuan province. The effect coefficient and the regional competition effect index of every department also fluctuated, which proves that the advantages of the tertiary industry in economic structure and competitiveness are not obvious enough. Further study finds that the competitiveness deviation components of tourism are generally negative, indicating that the relative competitiveness of Leshan's tourism has not yet formed, while the structure component of the transportation industry is being optimized step by step, although its scale needs further improvement; only then will an obvious economic effect appear. From Table 6 and Fig. 2, the growth of Leshan's tertiary industry will be slower than the provincial level over the next five years. This result follows from the analysis of the effect coefficient and the regional competition effect index. There are two reasons: first, the high-growth departments within the tertiary industry are a minority; second, there is a non-performing growth trend of
Fig. 2. 2010–2015 The SSM analysis Picture of three industries of Leshan city
Acknowledgements. This work was supported by the Humanities and Social Sciences Foundation of the Ministry of Education (Grant No. 16YJC30089) and the Sichuan Center for Science and Enterprise Development Research (Grant No. Xq16C04).
References
1. Artige L, Neuss L (2014) A new shift-share method. Growth Change 45(4):667–683
2. Fang L (2008) The shift-share analysis of Jilin deviating from the industry's competitiveness. Master's thesis, Northeast Normal University
3. Statistics Bureau of Leshan City (2015) Leshan statistical yearbook
4. Lin CT, Yang SY (2004) Forecast of the output value of Taiwan's IC industry using the grey forecasting model. Int J Comput Appl Technol 19(1):23–27
5. Liu YP (2006) Regional economics shift-share analysis of Jiangxi Province. Sci
Mosaic 3:19–20
Leshan’s Industries Shift-Share Analysis and Prediction 139
1 Introduction
The machine scheduling problem (MSP) arises in diverse areas such as flexible manufacturing systems, production planning, computer design, logistics, communication, etc. A common feature of many of these problems is that no efficient algorithm is yet known for solving them to optimality in polynomial time. The classical job-shop scheduling problem (JSP) is one of the best-known MSP models. Informally, the JSP model can be described as follows: there are a set of jobs and a set of machines; each job consists of a chain of operations, each of which needs to be processed during an uninterrupted time period of a given length on a given machine; and each machine can process at most one operation at
Scheduling Problem for Allocating Worker with Class-Type Skill 141
By using this parameter, each worker is placed at a machine, and it is possible to determine the processing time in consideration of the worker's abilities. In addition, a more realistic schedule can be created.
(1) Processing time in consideration of skill level
When assigning N workers Wn (n = 1, 2, · · · , N) to the I products Ai (i = 1, 2, · · · , I) using K machines Mk (k = 1, 2, · · · , K), the skill level Sn(k) of each worker Wn is set as the ability to operate machine Mk.
In addition, each product Ai (i = 1, 2, · · · , I) has Ji operations Oij (j = 1, 2, · · · , Ji), each to be processed by the machine MRij (Rij ∈ {1, 2, · · · , K}), which is determined in advance. The processing time given to each operation Oij (the processing time when the skill level is 1.00) is defined as the standard processing time PTij. The processing time of operation Oij by worker Wn is given by the following equation; the actual processing time expands or contracts depending on the skill level [11]:
$$pt_{ij}^{n} = \frac{PT_{ij}}{S_{n}(R_{ij})}. \qquad (1)$$
The worker skill levels in the SPAW proposed by Iima and Sannomiya [11] are determined in the range from 0.00 to 1.00 and are given in advance as a skill level table. A worker whose skill level for a machine is 0.00 cannot operate that machine. The processing time on each machine by each worker is determined using the skill level table.
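Eq. (1) is a one-line computation; the sketch below (with illustrative names of our own) also guards the skill-level-0 case just described:

```python
def processing_time(standard_time, skill_level):
    """Actual processing time pt = PT / S of an operation for a worker;
    a skill level of 0.00 means the worker cannot operate the machine."""
    if skill_level == 0.0:
        raise ValueError("worker cannot operate this machine")
    return standard_time / skill_level
```

A skill level above 1.00, possible under the class-type setting, shortens the operation relative to the standard processing time.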
(2) Setting of the Worker Skill Level
The concept of worker skill level is twofold. One is the uniform proficiency type of skill level proposed by Iima [11]; the other is the variety proficiency type proposed by Osawa [15]. With the uniform proficiency type, a worker's skill level is the same for every machine; with the variety proficiency type, each worker's skill level differs from machine to machine. An example of the skill level table is shown in Fig. 1.
The workers are divided into several classes, and the skill level for each machine is determined for each worker in each class, considering cases in which jobs can be handled in a shorter time than the standard processing time.
In the skill value table used by Kitada [12] for the nurse scheduling problem, the nurses are divided into five classes: leader, veteran, mid-level, second-year and newcomer. This paper therefore divides the workers into five classes and sets a skill level range for each class: leader from 1.00 to 1.40, veteran from 0.90 to 1.30, mid-level from 0.80 to 1.20, second-year from 0.60 to 1.00, and newcomer from 0.30 to 0.60. The skill level for each machine of each worker is determined within the range of the worker's class. In addition, the number of workers in each class is determined based on the class ratio in the skill value table of Kitada [12].
Step 1. Set i = 1.
Step 2. Select one worker at random from those not yet arranged, and place the selected worker in period i.
Step 3. Exclude the worker placed in Step 2 from the remaining choices.
Step 4. If as many workers as the total number of machines have been arranged in period i, proceed to Step 5; otherwise, return to Step 2.
Step 5. If i = 3, proceed to Step 6; otherwise, set i = i + 1 and return to Step 2.
Step 6. If the average skill level of the whole worker group over periods 1 to 3 is 0.9 or more, proceed to Step 7; otherwise, unplace all workers arranged in periods 1–3 and return to Step 1.
Step 7. Assign the worker groups of periods 1 to 3, in order, to each of the periods 4 to P, and exit when a worker group has been assigned to period P.
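Steps 1–7 above can be sketched as follows (an illustrative implementation; the function name and the per-worker average-skill input are our own):

```python
import random

def build_worker_groups(skill_avg, n_machines, n_periods, min_avg=0.9, rng=random):
    """Randomly form worker groups for periods 1-3 (Steps 1-5), redraw until
    the mean skill of all placed workers reaches min_avg (Step 6), then reuse
    the three groups cyclically for periods 4..P (Step 7)."""
    n_workers = len(skill_avg)
    assert n_workers >= 3 * n_machines, "need enough workers for three periods"
    while True:
        pool = list(range(n_workers))
        rng.shuffle(pool)
        groups = [pool[p * n_machines:(p + 1) * n_machines] for p in range(3)]
        placed = [w for g in groups for w in g]
        if sum(skill_avg[w] for w in placed) / len(placed) >= min_avg:
            break
    return [groups[p % 3] for p in range(n_periods)]
```

With 10 machines and 30 workers, as in the experiments below, the three groups partition the whole workforce.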
Class M1 M2 M3
Leader 1.01 1.28 1.15
Veteran 1.13 0.93 1.29
Mid-level 0 0.95 1.08
2nd year 0.66 0 0.98
Newcomer 0 0 0.5
In this paper, the leader and the veteran can operate all of the machines, while the mid-level, second-year and newcomer workers are set as unable to operate some of the machines (skill level 0). In the experiment, the proportions of the workers, mid-level 10–30%, second-year 20–40%, and newcomer 40–60%, are determined at random in advance.
path to confirm the search space and swap the operations in the critical block; this approach can also improve the efficiency of the algorithm in finding an active schedule (Gen [7]).
In the GA for SPAW, one solution is represented by two types of chromosomes. One is a job permutation chromosome representing the processing order of the operations of each job; the other is a worker placement chromosome representing the arrangement of the workers operating the machines in each period. Genetic operations such as crossover and mutation are therefore carried out on each of the chromosomes.
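The two-chromosome representation can be sketched as a small data structure (illustrative only; the actual encoding details of the HGA are not reproduced from the paper):

```python
import random
from dataclasses import dataclass

@dataclass
class Individual:
    """One SPAW solution: a job permutation chromosome (each job id appears
    once per operation; the k-th occurrence of job j denotes operation O_jk)
    and a worker placement chromosome (one worker group per period)."""
    job_perm: list
    placement: list

def random_individual(n_jobs, ops_per_job, n_workers, n_machines, n_periods):
    perm = [j for j in range(n_jobs) for _ in range(ops_per_job)]
    random.shuffle(perm)  # any shuffle is still a valid operation ordering
    placement = [random.sample(range(n_workers), n_machines) for _ in range(n_periods)]
    return Individual(perm, placement)
```

Crossover and mutation would then be applied separately to `job_perm` (with permutation-preserving operators) and to `placement`.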
In this paper, we developed a hybrid genetic algorithm (HGA) for solving the scheduling problem of allocating workers with class-type skills in the JSP, as shown in the pseudo code in Fig. 2.
In Osawa's [15] technique, the initial population of job permutation chromosomes is randomly generated. In a GA, superior solutions are obtained by repeating genetic operations such as crossover and mutation. The chromosomes of the initial population are therefore required to be diverse, since operations that eliminate diversity cause premature convergence.
However, when the GA starts from a randomly generated initial population, the improvement of solutions may stagnate. Furthermore, because of the random generation, it is difficult to obtain stable, excellent solutions.
4 Numerical Experiment
4.1 Experimental Data
We compare the method of Osawa [15] (oGA) with the method proposed in this paper (pGA). Because Osawa's [15] approach incorporated the worker group as data, the worker groups are created using the technique proposed in this paper.
As far as the authors know, no instances exist for scheduling that considers a variety of worker skill levels as dealt with in this paper. Therefore, JSP instances (the 10-job, 10-machine problems la16–la20 and ft10) are used in this experiment. The total number of machines is 10, so the total number of workers is set to 30. In the experiments, the workers' skill levels are randomly generated within the ranges specified by the class-type skill levels. The worker ratio of each class is set based on the class ratio in the skill value table of Kitada [5].
148 K. Ida et al.
If a worker's skill level for a machine is zero, the method of Osawa [15] shown in Subsect. 3.3 is used. In addition, the delivery time of each job is set using the delivery time coefficient F, defined as F = 1.3 by preliminary experiment.
The GA parameters are set as follows: 50 trials, population size 100, crossover probability 0.8, and mutation probability 0.2. The termination condition is that the number of evaluated individuals reaches one million, or that the total delivery delay time becomes 0. The machine used is an Intel Core i5-4430 3.00 GHz with 8.00 GB of memory, and the experiments were carried out in Microsoft Visual C++ 2010 on a PC.
The comparison uses the minimum total delivery delay time (Best) over all trials and the average of the minimum total delivery delay time over the trials (Average) for each instance. The results are shown in Table 2 (the number in parentheses is the number of times the best solution was obtained more than once).
Table 2. Experimental results
4.2 Consideration
From Table 2, solutions significantly better than those of the method of Osawa [15] are obtained on all instances. Figure 3 shows a Gantt chart of the best solution for the most improved instance.
In particular, for the instances la18 and la19 the delivery delay time is less than half that of the conventional method. In addition, the Average results show that the solution accuracy of each trial is greatly improved on all instances. However, delivery delay time still occurs on these instances. For ft10 in particular, the delivery delay is much larger than for the other instances. This is thought to be due to a feature of ft10: the machines used in the first few operations of each job are biased. Thus, even if most jobs can meet their delivery times, there remain jobs whose delivery delay time stands out. Therefore, an improvement that takes the characteristics of the instances into consideration is required.
5 Conclusion
In this paper, the scheduling problem for allocating workers (SPAW) with class-type skill in JSP was extended to a more realistic model based on Iima [11], and various improvements were made to the algorithm based on Osawa [15]. As a result, the delivery delay time was reduced on all instances in which delivery delay occurred, and schedules with improved solution accuracy could be presented. Further, by introducing the class-type skill, workers are divided into several classes and a skill level for each machine is determined for each worker in each class; by considering the case in which jobs can be handled in a shorter time than the standard processing time, the SPAW model of Iima [11] could be brought closer to a realistic model.
However, if a product is completed earlier than its delivery time, in practice it must be stored until the delivery time. This incurs extra cost, which must be considered in order to create a realistic schedule. The same applies to the total processing time. Therefore, after strictly meeting delivery times, it is necessary to introduce inventory control as the second objective and the total processing time as the third objective.
References
1. Mand AK, Hatanaka OH (2003) A heuristic algorithm for job-shop scheduling to minimize total weighted tardiness. J Jpn Ind Manage Assoc 51(3):245–253 (In Japanese)
2. Cheng R, Gen M, Tsujimura Y (1996) A tutorial survey of job-shop scheduling problems using genetic algorithms, part I: representation. Comput Ind Eng 30(4):983–997
1 Introduction
Numerical association rule mining is an interesting problem that has already been addressed by many researchers with many approaches, for instance discretization [14], distribution, and optimization [1,2,6,13,18]. The last approach is of particular interest because it is regarded as the most recent solution to this problem. According to [6,18], it can optimize multiple objective functions such as support, confidence, comprehensibility, interestingness, and amplitude without transforming the numerical dataset into a categorical one.
One optimization approach is the MOPAR method [6], which applies particle swarm optimization (PSO) to the numerical ARM problem. Tahyudin and Nambo [18] further developed this method by combining PSO with the Cauchy distribution, because PSO has the weakness of becoming trapped in local optima: as the number of iterations grows, the particle velocity tends to 0, so PSO loses the ability to find the optimal solution [12]. The combination of PSO with the Cauchy distribution (PARCD) addresses this weakness of the previous method.
An important step in this method is the particle representation used to determine the rules; it serves as the initialization step before the multi-objective functions are calculated. There are two approaches: the Pittsburgh method, in which a particle represents a rule set, and the Michigan method, in which one particle refers to one rule [6]. This research uses the second method because it suits this kind of problem. The particle representation contains three main parts: the ACN value (antecedent, consequent, or none of them), the lower bound, and the upper bound. Hence, the aim of this research is to determine the rules and to calculate the multi-objective function values, namely support, confidence, comprehensibility, and interestingness, using the PARCD method.
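The four objective functions named above can be sketched as follows; these are the definitions commonly used in the multi-objective ARM literature (e.g. Ghosh and Nath), so the exact formulas used by PARCD may differ:

```python
# Hedged sketch of the four objective measures for a rule A -> C over N records.
# Definitions follow common multi-objective ARM usage, not necessarily PARCD's.
import math

def objectives(n_A, n_C, n_AC, n_attr_A, n_attr_C, N):
    """n_A, n_C, n_AC: record counts covering A, C, and A-and-C;
    n_attr_A, n_attr_C: numbers of attributes in antecedent and consequent."""
    support = n_AC / N
    confidence = n_AC / n_A
    # fewer consequent attributes relative to the whole rule => easier to read
    comprehensibility = math.log(1 + n_attr_C) / math.log(1 + n_attr_A + n_attr_C)
    interestingness = (n_AC / n_A) * (n_AC / n_C) * (1 - n_AC / N)
    return support, confidence, comprehensibility, interestingness

s, c, comp, inter = objectives(n_A=40, n_C=50, n_AC=30, n_attr_A=2, n_attr_C=1, N=100)
print(round(s, 2), round(c, 2))   # 0.3 0.75
```

For numerical rules the counts come from interval membership, i.e. a record covers A when each antecedent attribute falls inside its [LB, UB] interval.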
The paper is organized as follows: Sect. 2 reviews recent related research; Sect. 3 presents the proposed method combining PSO and the Cauchy distribution for numerical association rule mining optimization (PARCD) and the procedure for determining the particle representation; Sect. 4 gives an experiment and discussion on determining the particle representation, along with the resulting multi-objective function values; finally, the conclusion and future work are given in Sect. 5.
2 Literature Review
Minaei-Bidgoli et al. [13] demonstrated that the numerical association rule problem can be solved by discretization, distribution, and optimization. Discretization is performed by partitioning and combining, clustering, and fuzzy logic routines [3,5]. Optimization has been approached by optimized ARM [20], differential evolution [2], genetic algorithms (GA) [10,13,15], and particle swarm optimization (PSO) [1,6,9]. With PSO techniques, important information can be extracted from numerical data without a discretization process [4,6], and some methods can automatically determine the minimum support and minimum confidence based on an optimal threshold [15,17,20].
On the other hand, the PSO method has the weaknesses that the user has to specify the number of "best rules" and that its time complexity is high [17]; it is also not robust on large data sets [7,12]. One way to diminish these weaknesses is revealed in [12]: combining PSO with Cauchy mutation has been shown to improve the results, because the mutation process can
Numerical Association Rule Mining Optimization 153
reach a wider search space, which suits large databases. In another research work [7], the combined approach was able to optimize a two-stage reentrant flexible flow shop with blocking constraints. This combination improved the interval solution by 15.60% on average, and its performance was higher than that of the Hybrid Genetic Algorithm (HGA) [16]. The combination was then used to optimize the Integration of Process Planning and Scheduling (IPPS), and the results show the effectiveness of the proposed IPPS method and the reactive scheduling method [21]. This hybrid method was developed in [7,16] to widen the search space in the mutation process by using the Cauchy distribution; the results show that the method can enhance the evolutionary process through the wider search space.
3 Proposed Method
3.1 The Particle Representation
The rules in numerical association rule mining by PARCD are obtained through the particle representation procedure. This study uses the Michigan method, in which every particle refers to one rule [6]; the data set is extracted into ACN categories based on the values of the lower bound and upper bound. The antecedent is the precondition and the consequent is the conclusion describing a rule. The PARCD method can automatically classify the ACN based on the optimal thresholds for every rule. This concept is shown in Fig. 1.
Suppose the optimal thresholds for one rule are 0 ≤ ACNi ≤ 0.33 for an antecedent, 0.34 ≤ ACNi ≤ 0.66 for a consequent, and 0.67 ≤ ACNi ≤ 1.00 for none of them; for an instance, see Table 1.
According to Table 1, attributes A and B are antecedents and attribute D is a consequent. Attribute C does not appear because it belongs to neither category. Therefore, the rule is AB → D.
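Under the thresholds quoted above, the Michigan-style decoding of one particle into a rule can be sketched as follows (attribute names and interval values are illustrative, not from the paper's data):

```python
# Sketch of Michigan-style particle decoding (illustrative, not the authors' code).
# Each attribute carries an ACN value in [0, 1] plus an interval; the ACN
# thresholds below are the ones quoted in the text.

def decode_particle(particle):
    """particle: {attr: (acn, lb, ub)} -> (antecedent, consequent) interval dicts."""
    antecedent, consequent = {}, {}
    for attr, (acn, lb, ub) in particle.items():
        if acn <= 0.33:
            antecedent[attr] = (lb, ub)
        elif acn <= 0.66:
            consequent[attr] = (lb, ub)
        # acn in (0.66, 1.0]: the attribute is excluded from the rule
    return antecedent, consequent

p = {"A": (0.10, 1.0, 2.0), "B": (0.25, 0.5, 1.5),
     "C": (0.90, 3.0, 4.0), "D": (0.50, 2.0, 6.0)}
ante, cons = decode_particle(p)
print(sorted(ante), sorted(cons))   # ['A', 'B'] ['D']  => rule AB -> D
```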
154 I. Tahyudin and H. Nambo
3.3 PSO
The PSO method discovered by Kennedy and Eberhart in 1995. They are animal
psychologist and electrical engineer respectively which observed the swarming
behaviors in flocks of birds, schools of fish, or swarms of bees, and even human
social behavior [11].
The main concept of PSO is to initialize a group of random particles (solutions) and then search for optima by updating generations. At each iteration, every particle is updated by following two "best" values. The first is the best solution (fitness) the particle has achieved so far; this value is called "pBest". The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this is a global best, called "gBest". After finding the two best values, each particle updates its velocity and position [9]. Each particle p, at some iteration t, has a position x(t) and a displacement velocity v(t). The personal best (pBest) and global best (gBest) positions are stored in the associated memory. The velocity and position are updated using Eqs. (5) and (6), respectively [9,12].
v_i^new = ω v_i^old + c1 · rand() · (pBest − x_i) + c2 · rand() · (gBest − x_i),   (5)
x_i^new = x_i^old + v_i^new,   (6)
where ω is the inertia weight; v_i^old is the velocity of the ith particle before updating; v_i^new is the velocity of the ith particle after updating; x_i is the ith, or current, particle; i is the particle's index; rand() is a random number in the range (0, 1); c1 is the individual factor; c2 is the societal factor; pBest is the particle best; and gBest is the global best. Particle velocities in every dimension are clamped to a maximum velocity Vmax [9,12].
functions as the current fitness. After that, the algorithm loops over iterations, updating pBest until it finds the gBest value as the optimal solution.
The pseudocode of PSO [7] is shown as follows, and the flowchart as Fig. 2:
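As a stand-in for that pseudocode, a minimal PSO implementing Eqs. (5) and (6) might look as follows (the parameter values and search range are illustrative assumptions, not the paper's settings):

```python
# Minimal PSO sketch matching Eqs. (5)-(6); minimizes f over R^d.
# Illustrative only: parameters w, c1, c2, vmax and the init range are assumptions.
import random

def pso(f, d, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, vmax=1.0):
    X = [[random.uniform(-5, 5) for _ in range(d)] for _ in range(n_particles)]
    V = [[0.0] * d for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(d):
                V[i][k] = (w * V[i][k]
                           + c1 * random.random() * (pbest[i][k] - X[i][k])   # Eq. (5)
                           + c2 * random.random() * (gbest[k] - X[i][k]))
                V[i][k] = max(-vmax, min(vmax, V[i][k]))   # clamp to Vmax
                X[i][k] += V[i][k]                         # Eq. (6)
            fx = f(X[i])
            if fx < pbest_f[i]:                            # update pBest, then gBest
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f
```

In PARCD, a Cauchy-distributed mutation would additionally perturb particles to escape local optima; that step is omitted here.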
4.2 Experiments
(1) The output of the particle representation
This output is generated at every run. This experiment reports the 20th run, where every run contains 2000 rules; the 20th run is the last, so it produces the optimal solution. The ACN results for the Quake data set, with the rule number, ACN, LB, and UB as column headers, are shown in Table 4. According to the results, in rule 1 the antecedent attributes are focal depth and latitude. The first rule is the same as the second; in the last rule, rule 2000, the antecedent attributes are latitude, longitude, and Richter. However, the consequent attributes are almost all none.
Table 4. ACN results for the Quake data set

Rule number (particle) | ACN (antecedent, consequent, none of them) | Lower bound (LB) < attribute < upper bound (UB)
Rule 1                 | Antecedent                                 | 290.028451 < Att1 < 316.467965
                       |                                            | 9.068816 < Att2 < 21.329102
                       | Consequent                                 | None
Rule 2                 | Antecedent                                 | 290.028451 < Att1 < 316.467965
                       |                                            | 9.068816 < Att2 < 21.329102
                       | Consequent                                 | None
...                    | ...                                        | ...
Rule 2000              | Antecedent                                 | 20.104086 < Att2 < 40.384987
                       |                                            | 33.959986 < Att3 < 71.573894
                       |                                            | 6.418029 < Att4 < 6.606453
                       | Consequent                                 | None
Note: Att1 = focal depth, Att2 = latitude, Att3 = longitude, Att4 = Richter (target).
The phenomenon in the Basketball data set (Table 5) is almost similar to the Quake data set, in that almost all rules have no consequent attributes. The first rule shows that the antecedent attributes are the number of assists per minute, height, and time played. The second rule shows the same result as the first; the last rule shows that the antecedent attributes are height and time played. The ACN results for the Quake and Basketball data sets are interesting to study more deeply, because in general an optimal solution has both antecedent and consequent attributes.
In Table 6 the ACN result for the Body Fat data set shows complete parameters, both antecedent and consequent. In rule 1 there are eight attributes in the antecedent and three in the consequent. In rule 2 the numbers of antecedent and consequent attributes are the same as in rule 1, and in the last rule they are six and two attributes, respectively. The antecedent attributes in rule 1 are case number, percent body fat using Siri's equation, density, age, adiposity index, chest circumference, abdomen circumference, and thigh circumference. The consequent attributes are percent body fat using Brozek's equation, height, and hip circumference. In rule 2 the antecedent and consequent attributes are the same as in rule 1, so rules 1 and 2 are: if (att1, att3, att4, att5, att8, att11, att12, att14) then (att2, att7, att13). In rule 2000, the antecedent attributes are percent body fat using Brozek's equation, percent body fat using Siri's equation, density, height, neck circumference, and knee circumference, and the consequent attributes are case number and weight. Therefore, rule 2000 is: if (att2, att3, att4, att7, att10, att15) then (att1, att6).
Table 7 reports the experimental results for the Bolt data set, which has eight attributes: run, speed1, total, speed2, number2, sens, time, and T20Bolt. Based on
Table 5. ACN results for the Basketball data set

Rule number (particle) | ACN (antecedent, consequent, none of them) | Lower bound (LB) < attribute < upper bound (UB)
Rule 1                 | Antecedent                                 | 0.093462 < Att1 < 0.149991
                       |                                            | 186.522680 < Att2 < 195.865376
                       |                                            | 14.763501 < Att3 < 20.155824
                       | Consequent                                 | None
Rule 2                 | Antecedent                                 | 0.093462 < Att1 < 0.149991
                       |                                            | 186.522680 < Att2 < 195.865376
                       |                                            | 14.763501 < Att3 < 20.155824
                       | Consequent                                 | None
...                    | ...                                        | ...
Rule 2000              | Antecedent                                 | 199.334064 < Att2 < 203.000000
                       |                                            | 24.076966 < Att3 < 39.254520
                       | Consequent                                 | None
Note: Att1 = assists per minute, Att2 = height, Att3 = time played, Att4 = age, Att5 = points per minute (target).
the table, the first two rules show the same result for both antecedent and consequent. The antecedent attributes are total and time, while the consequent attributes are run and speed1; therefore, the rule is: if (total, time) then (run, speed1).
The other rules are not reported in this paper because they are so numerous; only three are shown: the first, the second, and the last one, rule 2000. Rule 2000 shows that the antecedent attributes are run and speed2. However, the consequent is, as in the Quake and Basketball data sets, unknown; hence, this rule cannot be stated clearly because it has no conclusion.
Table 8 depicts the rules obtained from the Pollution data set by the PARCD particle representation method. The first and second rules are the same: the antecedent attributes are JANT, EDUC, NONW, and WWDRK, while the consequent attributes are PREC, JULT, OVR65, DENS, and HUMID. The rule is therefore: if (JANT, EDUC, NONW, WWDRK) then (PREC, JULT, OVR65, DENS, HUMID).
Rule 2000 has an ACN result that differs from the first and second rules. Its antecedent attributes are JANT, OVR65, HOUS, POOR, HC, and HUMID, while its consequent attributes are POPN, EDUC, DENS, NOX, and SO2. The final rule is: if (JANT, OVR65, HOUS, POOR, HC, HUMID) then (POPN, EDUC, DENS, NOX, SO2).
(2) The output of the multi-objective functions
Table 9 compares the output of the multi-objective functions between the proposed PARCD method and the MOPAR method. The table compares two objective functions, support and confidence, for both methods, in
Table 6. ACN results for the Body Fat data set

Rule number (particle) | ACN (antecedent, consequent, none of them) | Lower bound (LB) < attribute < upper bound (UB)
Rule 1                 | Antecedent                                 | 1.096724 < Att1 < 1.108900
                       |                                            | 57.988435 < Att3 < 69.574945
                       |                                            | 309.987803 < Att4 < 314.218245
                       |                                            | 55.294719 < Att5 < 66.896106
                       |                                            | 136.234441 < Att8 < 138.744999
                       |                                            | 40.927433 < Att11 < 41.562953
                       |                                            | 20.266071 < Att12 < 20.586850
                       |                                            | 22.220988 < Att14 < 23.180185

Table 7. ACN results for the Bolt data set

Rule number (particle) | ACN (antecedent, consequent, none of them) | Lower bound (LB) < attribute < upper bound (UB)
Rule 1                 | Antecedent                                 | 11.911616 < Att3 < 16.259242
                       |                                            | 62.782669 < Att7 < 65.562550
                       | Consequent                                 | 23.688468 < Att1 < 31.295955
                       |                                            | 5.928943 < Att2 < 6.000000
Rule 2                 | Antecedent                                 | 11.911616 < Att3 < 16.259242
                       |                                            | 62.782669 < Att7 < 65.562550
                       | Consequent                                 | 23.688468 < Att1 < 31.295955
                       |                                            | 5.928943 < Att2 < 6.000000
...                    | ...                                        | ...
Rule 2000              | Antecedent                                 | 13.621221 < Att1 < 29.817232
                       |                                            | 1.761097 < Att4 < 2.325029
                       | Consequent                                 | None
Note: Att1 = RUN, Att2 = SPEED1, Att3 = TOTAL, Att4 = SPEED2, Att5 = NUMBER2, Att6 = SENS, Att7 = TIME, Att8 = T20BOLT (target).
the column direction, while the rows list the five data sets: Quake, Basketball, Body Fat, Bolt, and Pollution. According to the table, the overall results of the PARCD method are better than those of the MOPAR method, except for the support value on the Quake data set.
The highest support value obtained by the PARCD method is 250.84% (Bolt data set), double the value of the competing method on the same data set. On the other hand, the lowest is 22.97%, obtained by the PARCD method, almost half that of the MOPAR method on the same data set. The remaining data sets (Basketball, Body Fat, and Pollution) show that PARCD obtains better values than the MOPAR method, with gaps of almost 30%, 60%, and 10%, respectively.
According to Table 9, the confidence results additionally include the standard deviation, because it shows the stability of the confidence value. Based on the results, the confidence values generally show that PARCD is better than the MOPAR method. This can be seen across all data sets from Quake to Pollution, where the gaps are about 4%, 0.02%, 40%, 8%, and 10%, respectively. Interestingly, the confidence and standard deviation values for the Basketball data set are similar for both methods.
The outputs of the comprehensibility and interestingness functions are shown in Table 10. The comprehensibility results show that PARCD is better than the MOPAR method, with higher values on three data sets: Basketball, Body Fat, and Pollution, where the gaps are almost 125%, 130%, and 40%, respectively. Comprehensibility means that the results produced by a method are easy to understand, so the percentages indicate that both methods are better than the traditional method
Table 8. ACN results for the Pollution data set

Rule number (particle) | ACN (antecedent, consequent, none of them) | Lower bound (LB) < attribute < upper bound (UB)
Rule 1                 | Antecedent                                 | 42.431841 < Att2 < 46.441110
                       |                                            | 9.675301 < Att6 < 10.303791
                       |                                            | 24.171326 < Att9 < 27.345700
                       |                                            | 42.882070 < Att10 < 44.054696
                       | Consequent                                 | 21.695266 < Att1 < 22.757671
                       |                                            | 77.760994 < Att3 < 80.221960
                       |                                            | 6.698662 < Att4 < 7.071898
                       |                                            | 7436.549761 < Att8 < 7801.004046
                       |                                            | 58.816363 < Att15 < 63.240005
Rule 2                 | Antecedent                                 | 42.431841 < Att2 < 46.441110
                       |                                            | 9.675301 < Att6 < 10.303791
                       |                                            | 24.171326 < Att9 < 27.345700
                       |                                            | 42.882070 < Att10 < 44.054696
                       | Consequent                                 | 21.695266 < Att1 < 22.757671
                       |                                            | 77.760994 < Att3 < 80.221960
                       |                                            | 6.698662 < Att4 < 7.071898
                       |                                            | 7436.549761 < Att8 < 7801.004046
                       |                                            | 58.816363 < Att15 < 63.240005
...                    | ...                                        | ...
Rule 2000              | Antecedent                                 | 39.363260 < Att2 < 46.455909
                       |                                            | 8.721294 < Att4 < 9.206407
                       |                                            | 89.212389 < Att7 < 90.700000
                       |                                            | 21.796671 < Att11 < 23.231486
                       |                                            | 606.938956 < Att12 < 648.000000
                       |                                            | 67.768113 < Att15 < 73.000000
5 Conclusions
The rules obtained by PARCD on the five data sets show good results, but two data sets, Quake and Basketball, do not obtain a consequent. This output is interesting to study further, because an optimal solution normally has a complete rule. In addition, the experiment shows that, for the multi-objective functions of support, confidence, comprehensibility, and interestingness, PARCD generally gives better results than the MOPAR method. Since the problem of numerical association rule mining is still open to improvement, future work could continue this research, for instance by combining the method with other techniques such as genetic algorithms or fuzzy algorithms.
References
1. Alatas B, Akin E (2008) Rough particle swarm optimization and its applications
in data mining. Soft Comput 12(12):1205–1218
2. Alatas B, Akin E, Karci A (2008) Modenar: multi-objective differential evolution
algorithm for mining numeric association rules. Appl Soft Comput 8(1):646–656
3. Alhajj R, Kaya M (2008) Multi-objective genetic algorithms based automated clus-
tering for fuzzy association rules mining. J Intell Inf Syst 31(3):243–264
4. Álvarez VP, Vázquez JM (2012) An evolutionary algorithm to discover quantitative
association rules from huge databases without the need for an a priori discretiza-
tion. Expert Syst Appl 39(1):585–593
5. Arotaritei D, Negoita MG (2003) An optimization of data mining algorithms used
in fuzzy association rules. In: International Conference on Knowledge-Based and
Intelligent Information and Engineering Systems. Springer, Heidelberg, pp 980–985
6. Beiranvand V, Mobasher-Kashani M, Bakar AA (2014) Multi-objective pso algo-
rithm for mining numerical association rules without a priori discretization. Expert
Syst Appl 41(9):4259–4273
7. Gen M, Lin L, Howada (2015) Hybrid evolutionary algorithms and data mining: case studies of clustering. Proc Soc Plant Eng 2015:184–196
8. Ghosh A, Nath B (2004) Multi-objective rule mining using genetic algorithms. Inf
Sci 163(1–3):123–133
9. Indira K, Kanmani S (2015) Association rule mining through adaptive parameter
control in particle swarm optimization. Comput Stat 30(1):251–277
10. Kaya M (2006) Multi-objective genetic algorithm based approaches for mining
optimized fuzzy association rules. Soft Comput 10(7):578–586
11. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the
IEEE International Conference on Neural Networks, vol 4, pp 1942–1948
12. Li C, Liu Y et al (2007) A fast particle swarm optimization algorithm with cauchy
mutation and natural selection strategy. In: International Symposium on Intelli-
gence Computation and Applications. Springer, Heidelberg, pp 334–343
13. Minaei-Bidgoli B, Barmaki R, Nasiri M (2013) Mining numerical association rules
via multi-objective genetic algorithms. Inf Sci 233:15–24
14. Narita M, Haraguchi M, Okubo Y (2002) Data abstractions for numerical
attributes in data mining. Lecture Notes in Computer Science, vol 2412, pp 35–42
15. Qodmanan HR, Nasiri M, Minaei-Bidgoli B (2011) Multi objective association rule
mining with genetic algorithm without specifying minimum support and minimum
confidence. Expert Syst Appl 38(1):288–298
16. Sangsawang C, Sethanan K et al (2015) Metaheuristics optimization approaches
for two-stage reentrant flexible flow shop with blocking constraint. Expert Syst
Appl 42(5):2395–2410
17. Sarath K, Ravi V (2013) Association rule mining using binary particle swarm
optimization. Eng Appl Artif Intell 26(8):1832–1840
Qifeng Wei(B)
1 Introduction
Conflicts in a knowledge network refer to a state of disharmony, caused by the accumulation of contradictions to a certain extent, between two or more knowledge subjects whose behaviors or goals are incompatible. According to the American behavioral scientist Pondy [19], the generation and development of conflict can be divided into five stages: latent conflict, perceived conflict, felt conflict, manifest conflict, and conflict aftermath, which together form the embryo of organizational conflict theory. There are many reasons for conflict in a knowledge network: there are many differences between knowledge subjects in organizational goals, management modes, and organizational cultures, which cause disagreement and lead to conflict;
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 13
An Analytical Framework for the Conflict Coordination Mechanism 167
The nature and scope of the interdependence among knowledge subjects are often in a dynamically reset condition, which makes conflict inevitable. Because of the complexity of knowledge cooperation among organizations, the dislocation of objectives will also lead to the formation of conflict. In addition, owing to the changing external environment of the knowledge network, inconsistencies between subjects are long-standing. Moreover, differences in knowledge level, knowledge structure [20], organizational routines [1], and values [10,11] among the knowledge subjects also inevitably lead to various conflicts.
Whether the knowledge subjects within a knowledge network can learn and improve the knowledge and skills of conflict management, and implement conflict management accurately and effectively so as to keep conflict from causing harm, directly affects the network's goal; it also concerns the subjects themselves and the survival of the knowledge network. Therefore, in the evolution of the conflict coordination mechanism, we should face the various conflicts objectively and adopt corresponding managerial strategies for conflicts of different natures. Obviously, in the generation, development, and operation of the conflict mechanism of a knowledge network, knowledge subjects, as the carriers of both the managing body and its objects, not only cooperate with the whole team to achieve a better implementation of conflict management, but also take the initiative to actively adjust and innovate their managerial behaviors. From the perspective of long-term development practice, the behavior routines and routine variation of knowledge subjects in conflict management are characterized by adaptation, specifically behavioral adaptation under the conflict management mechanism of the knowledge network.
[Figure: a three-axis diagram. X-axis: coordination benefit (from uncoordination to coordination); Y-axis: (label not recoverable); Z-axis: knowledge structure, from weakly explicit (not conducive to communication) to strongly explicit (conducive to communication).]
resources and institutions owned by the knowledge main body, which involve the organizational structure, organizational culture, social capital, human capital, capital, core competitive advantage, and so on. The overlapping of subjects is an important source of conflicts. Knowledge, as the basis of the operation and survival of the knowledge network, determines the essence of the value pursued by the knowledge subject. The development of the knowledge subject through the knowledge network promotes knowledge creation so as to form a knowledge advantage. Obviously, knowledge plays the central role in the conflict issue. Therefore, the conflict coordination of a knowledge network also focuses on these three dimensions for management intervention. Wu et al. [22] argued that the problem of cooperation and conflict between knowledge subjects is actually a problem of creating and distributing income, involving opportunistic behavior control, benefit coordination, and value creation mechanisms. The former includes a contract mechanism, a self-implementation mechanism, and a third-party conflict coordination mechanism, while the latter includes a relationship adjustment mechanism and a knowledge collaboration mechanism [8]. The framework of knowledge subjects' adaptive behaviors is also embedded in these mechanisms. The contract mechanism and the self-implementation mechanism can effectively distribute the benefits of the knowledge network and avoid opportunistic behavior by knowledge subjects, based on the coordination and interaction among them. When interaction cannot resolve the conflicts, the third-party coordination mechanism is introduced for further resolution. In the value creation mechanism, the regulation of relationships helps to improve the efficiency of cooperation among the knowledge subjects and increase the value of innovation. The knowledge collaboration mechanism enhances the collaborative benefits of the knowledge subjects and further promotes knowledge creation to form the knowledge advantage.
whether to cooperate and how to adjust the level of knowledge input; knowledge spillover loss is also a reason for conflict formation. At this point, the input level of a knowledge subject depends on the input level of its partner, showing a positive correlation.
The study [23] revealed that the effect of knowledge spillovers on cooperation among knowledge subjects is a "double-edged sword": it effectively promotes cooperation but also brings conflict. On the one hand, the innovation value brought by knowledge spillovers is the driving force for deep cooperation. The stronger the knowledge absorptive capacity of a knowledge subject, the more benefit it can get from the knowledge spillovers of its partners, and the higher its level of knowledge input. On the other hand, because of knowledge spillover, the input level of a knowledge subject is affected by the input level of its partner, and a drop in knowledge input will reduce the benefit from knowledge spillover. In addition, knowledge spillover may lead to the leakage of core knowledge assets, which seriously affects the stability of knowledge network cooperation, because the huge potential value of knowledge spillover will stimulate the opportunistic behavior of the cooperators, resulting in conflict. By raising the transparency of cooperation and the level of mutual trust among organizations, the observability of knowledge investment in the cooperation can be improved, which will effectively restrain opportunistic behaviors and enhance the interaction.
will undermine long-term cooperative relations and thereby incur the loss of future earnings as a penalty. In order to maintain the self-implementation of the contract mechanism in the knowledge network and avoid the intervention of third-party managers, which increases management cost, a rational knowledge subject in cooperation will adjust its behaviors according to the factors closely related to self-implementation, including the relationship rent, the discount factor, and the level of knowledge specificity.
Usually, the principal and the entrusted subject cooperate to create the relationship rent. The more rent there is, the more specific knowledge assets the entrusted subject invests, and the stronger the self-implementation of the relational contract. Likewise, the higher the discount factor set by the principal, the higher the input of the entrusted subject, and the stronger the self-implementation of the contract. The degree of knowledge specificity refers to the relationship rent created per unit of specific knowledge asset invested by the knowledge subject. The higher the principal's degree of knowledge specificity, the more its knowledge assets are locked into the specific cooperative relationship with the entrusted subject, and the higher the replacement cost, so the self-implementation of the relational contract is stronger. Conversely, when the entrusted subject has invested in highly specific assets, the principal must have a very high degree of knowledge specificity in order to ensure the self-implementation of the relational contract; otherwise the entrusted subject may implement opportunistic behavior that damages the principal's benefits, and the principal will have to spend more energy observing the cooperative behavior of the entrusted subject.
specific users, once the knowledge subject invests in the specific asset, its bargaining power in the cooperation is reduced, and it may face "rip-off" behaviors. Therefore, lacking confidence, the subject will generally choose universal rather than specific asset investment, although specific asset investment can create relational rent and help improve performance [4]. In addition, trust helps to ensure the openness of communication among organizations. In-depth communication between organizations means that knowledge subjects have a more comprehensive understanding of each other's strategic vision, business objectives, resource allocation, financial status, and other information, but deep communication between organizations must be premised on a high degree of mutual trust.
The trust mechanism governs cooperation between knowledge subjects and the
coordination of conflict; its essence is a trust game among organizations [8].
Under completely unconstrained conditions, suppose two knowledge subjects, A and
B, are cooperating. When they trust each other, knowledge is fully shared and
the two parties obtain values R1 and R2. When A trusts B, but B acts in bad
faith for its individual interests and takes the opportunity to implement
opportunistic behavior, A's interests are damaged by ΔR > 0, and the two sides
obtain R1 − ΔR and R2 + ΔR. If the two sides do not trust each other, no
knowledge is shared (or sharing is inefficient); assuming that, compared with
mutual trust, both sides lose 2ΔR, they obtain R1 − 2ΔR and R2 − 2ΔR
respectively. Under the unconstrained condition, the payoff matrix of trust
behaviors between knowledge subjects is shown in Table 1.
Table 1. Payoff matrix of trust behaviors (unconstrained)

                                  Knowledge subject B
                                  Trust                 Distrust
Knowledge subject A   Trust       R1, R2                R1 − ΔR, R2 + ΔR
                      Distrust    R1 + ΔR, R2 − ΔR      R1 − 2ΔR, R2 − 2ΔR
Table 2. Payoff matrix of trust behaviors (with penalty r1 + p)

                                  Knowledge subject B
                                  Trust                          Distrust
Knowledge subject A   Trust       R1, R2                         R1 − ΔR, R2 + ΔR − r1 − p
                      Distrust    R1 + ΔR − r1 − p, R2 − ΔR      R1 − 2ΔR, R2 − 2ΔR
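The stability argument behind the two payoff matrices can be checked numerically. The sketch below computes B's best reply to A's trusting move, with and without a defection penalty r1 + p; all numeric payoff values are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: best responses in the trust game of Tables 1 and 2.
# All payoff values (R1, R2, dR, penalty) are illustrative assumptions.

def payoffs(a_trusts, b_trusts, R1, R2, dR, penalty=0.0):
    """Return (payoff_A, payoff_B); `penalty` = r1 + p on a defector."""
    if a_trusts and b_trusts:
        return R1, R2
    if a_trusts and not b_trusts:          # B defects, A is exploited
        return R1 - dR, R2 + dR - penalty
    if not a_trusts and b_trusts:          # A defects
        return R1 + dR - penalty, R2 - dR
    return R1 - 2 * dR, R2 - 2 * dR        # mutual distrust

def b_best_response(a_trusts, **kw):
    """B's best reply to A's move, judged by B's own payoff."""
    return max([True, False], key=lambda b: payoffs(a_trusts, b, **kw)[1])

kw = dict(R1=10.0, R2=10.0, dR=3.0)
# Unconstrained game: defection pays, so (Trust, Trust) is not stable.
print(b_best_response(True, **kw))                 # False
# With a penalty r1 + p > dR, trusting becomes B's best reply.
print(b_best_response(True, penalty=4.0, **kw))    # True
```

The demo illustrates the point of the second matrix: once the penalty exceeds the one-shot gain ΔR, mutual trust becomes self-enforcing.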
The knowledge subject may try to find a solution to the conflict and obey the
decision of a third party to eliminate it. The third-party conflict intervention
strategy chosen by the knowledge network is mainly affected by the knowledge
subjects' perceptions of fairness, satisfaction, and effectiveness, and by the
efficiency of strategy implementation.
6 Conclusions
References
1. Argote L, Guo JM (2016) Routines and transactive memory systems: creating,
coordinating, retaining, and transferring knowledge in organizations. Res Organ
Behav 36:65–84
2. Baker G, Murphy KJ (2002) Relational contracts and the theory of the firm. Q J
Econ 117(1):39–84
3. Baker G, Gibbons R, Murphy KJ (1999) Informal authority in organizations. J
Law Econ Organ 15(1):56–73
4. Dyer JH, Singh H (1998) The relational view: cooperative strategy and sources of
interorganizational competitive advantage. Acad Manage Rev 23(4):660–679
5. Elangovan AR (1995) Managerial third-party dispute intervention: a prescriptive
model of strategy selection. Acad Manage Rev 20(4):800–830
6. Fisher RJ (2016) Ronald J. Fisher: Advancing the understanding and effectiveness
of third party interventions in destructive intergroup conflict. Springer
An Analytical Framework for the Conflict Coordination Mechanism 179
Abstract. With the accelerating pace of life, more and more travelers
are switching from traditional land travel to air travel. This has
contributed to the rapid development of the aviation industry, but it has
also produced problems that trouble airlines. Among them, the flight
delay problem has not been effectively addressed, and the cost of flight
delays remains high. This paper analyzes the causes of flight delay costs
and targets one improvable factor: unreasonable flight scheduling in the
terminal area. Based on the actual two-runway situation of Chengdu
Shuangliu International Airport, a model of flight arrival on two runways
is constructed to minimize the cost of flight arrival delays. The coding
method, selection strategy, and fitness function of gene expression
programming (GEP) are improved for this specific problem. Finally, the
improved GEP (IGEP) and simulation are used to solve the practical
problem. Compared with the traditional FCFS rule, the cost of flight
arrival delays is significantly reduced, the efficiency of flight
arrivals and runway utilization is improved, and the interests of
airlines are safeguarded. The results also show the superiority of IGEP
in addressing the two-runway flight arrival problem.
1 Introduction
With the speeding up of life rhythm, people on the quality of travel requirements
are getting higher and higher. Compared with ordinary land travel, people have
a tendency to more efficient and better services’ air travel. China’s civil aviation
passenger traffic for several years to maintain the growth rate of more than 8%
[3]. In order to satisfy the needs of more passengers, major domestic airlines
actively purchased aircraft, the number of aircraft China’s civil aviation has
been increasing for many years. At the same time, several domestic airports
have built new runways for use by airlines. Two-runway airport is becoming
more and more.
But the development of the aviation industry has also brought problems that
cannot be ignored. As air traffic flow increases, airport traffic congestion and flight
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 14
Flight Arrival Scheduling Optimization on Two Runways 181
delays have become very common. According to statistics, the flight punctuality
rate in China has been below 80% for several years and shows a declining
tendency [4]. This runs contrary to the pursuit of efficient travel. Flight
delays have shaken some people's confidence in air travel, causing them to give
it up, which reduces airlines' revenue. In general, once a flight's delay
reaches two hours the flight operates at a loss, and the longer the delay, the
higher its cost. How to control flight delays and decrease their cost has
therefore become an urgent problem.
There are numerous reasons for flight delays, such as bad weather and aircraft
faults; these accidental factors cannot be changed. In many cases, however,
unreasonable flight scheduling also leads to flight delays. Data show that
nearly 40% of flight delays are related to flight scheduling. Unreasonable
scheduling aggravates flight delays and increases their cost; on the contrary,
reasonable flight scheduling can alleviate the delay problem, reduce delay
costs, protect the interests of airlines, and improve runway utilization.
Making reasonable flight schedules has therefore attracted more and more
attention from airlines. Scholars in related fields, having noticed the
importance of flight delays, have made some achievements:
Hu and Xu [11] pointed out that flight delays and air traffic congestion are due
to the restriction of airport capacity, and used a ground holding strategy to
solve the problem. Xu and Yao [19] established a multi-runway optimization model
for the terminal area and used a genetic algorithm (GA) to solve it. Zhang et
al. [20] constructed a dynamic multi-objective optimization model for arriving
and departing flights on multiple runways, then designed a GA with a receding
horizon control strategy to handle the model's dynamic characteristics. Kafle
and Zou [12] proposed a novel analytical-econometric approach, building an
analytical model to reveal the effects of various influencing factors on flight
delays. Liang and Li [14] took flight departure time as the goal and used an
improved gene expression algorithm (IGEA) to study the departure problem of a
single-runway airport.
But the problem of two-runway flight arrival based on the cost of flight delays
has not been solved. The traditional mode of flight arrival is first come, first
served (FCFS), which does not consider the impact on other flights. Its
advantages are complete fairness among flights and simple scheduling, which
suited the early stage of China's aviation industry, when flights were few.
With the increase in the number of flights, its shortcomings have become
obvious: the lack of scheduling leads to a high cost of flight arrival delays
and low runway utilization, cannot guarantee the interests of airlines, and
cannot meet passengers' travel-time requirements. The traditional FCFS rule
therefore cannot adapt to current flight arrival scheduling, and new methods
are urgently needed. Gene expression programming (GEP) is a new adaptive
evolutionary algorithm based on biological structure and function [2]. It
evolved from GA and genetic programming (GP). It learns from GA to encode the
chromosome as a fixed-length linear symbol string, and inherits the
182 R. Wang et al.
2 Problem Description
In Table 1, the rows indicate the leading arrival flight on a runway, the
columns indicate the following flight on the same runway, and the cell at the
intersection of a row and column gives the safe wake vortex separation between
the two adjacent arrival flights on that runway.
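As a concrete illustration of how such a separation matrix drives a runway schedule, the sketch below computes the earliest arrival times of a queue of flights; the separation values are illustrative assumptions, not the actual entries of Table 1.

```python
# Hedged sketch: the separation matrix fixes arrival times on one runway
# via T_1 = 0 and T_k = T_{k-1} + Q[leader][follower]. The minute values
# below are illustrative assumptions, not the entries of Table 1.

Q = {  # safe wake vortex separation (min), leader type -> follower type
    ('H', 'H'): 2, ('H', 'M'): 2, ('H', 'L'): 3,
    ('M', 'H'): 2, ('M', 'M'): 2, ('M', 'L'): 3,
    ('L', 'H'): 2, ('L', 'M'): 2, ('L', 'L'): 2,
}

def arrival_times(sequence):
    """Earliest arrival times (= delay times) for one runway's queue."""
    times = [0]
    for leader, follower in zip(sequence, sequence[1:]):
        times.append(times[-1] + Q[(leader, follower)])
    return times

print(arrival_times(['H', 'L', 'M', 'L']))  # [0, 3, 5, 8]
```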
Table 2. The cost of flight delays per unit time for the three aircraft types
Regarding the cost of flight delays, this paper follows the literature analyzing
flight delay costs [18]. Table 2 shows the cost of flight delays per unit time
for the three aircraft types, together with the classification standards of
aircraft type.
3 Model Construction
Taking the minimum total cost of flight arrival delays as the optimization goal,
the scheduling model of flight arrival is established. In particular, minimizing
the delay cost of arriving flights on two runways requires allocating a runway
to each flight while taking into account the arrival sequence on each runway.
The arrival sequence is related to the safe wake vortex separation, and the
aircraft type and the delay cost per unit time must also be considered. These
factors jointly determine the cost of flight arrival delays.
To make the model describe the flight arrival process more accurately, the
following assumptions are made:
Assumption 1. The number of flights required to arrive during a certain period
is N, and these flights are currently required to circle in the terminal area
due to environmental impact and airport capacity constraints.
Assumption 2. From a certain moment, the waiting flights in the terminal area
can arrive in turn; this moment is marked as 0.
Assumption 3. From moment 0, an arriving flight can pass smoothly through the
runway and taxiway onto the apron. This is a continuous process, so the
allocation of flights to taxiways and access to the apron need not be
considered.
i: index of the N flights circling in the terminal area waiting for arrival
   during the period considered, labeled i = 1, 2, ..., N in order of arrival
   at the terminal area;
j: runway index, j = 1, 2;
k: position in the arrival sequence on runway j, k = 1, 2, ..., M, with M ≤ N;
X^j_ik: X^j_ik = 1 if the i-th flight in the terminal area is the k-th arrival
   on runway j; else X^j_ik = 0;
u: aircraft type of a flight; heavy aircraft are marked u = 1, medium aircraft
   u = 2, and light aircraft u = 3;
W_u: cost of delay per unit time for aircraft type u;
P^j_ku: P^j_ku = 1 if the k-th arrival on runway j is of aircraft type u; else
   P^j_ku = 0;
Q^j_{k,k−1}: safe wake vortex separation between adjacent flights on runway j;
T^j_k: arrival time of the k-th flight on runway j, which is also its delay
   time;
ΔT^j_{k,k−1}: difference in arrival time between adjacent flights on runway j;
C: total cost of flight arrival delays.
Cmin = Σ_{i=1}^{N} Σ_{j=1}^{2} Σ_{k=1}^{M} Σ_{u=1}^{3} P^j_ku W_u T^j_k X^j_ik,
       i = 1, ..., N; j = 1, 2; k = 1, 2, ..., M; u = 1, 2, 3              (1)

Σ_{j=1}^{2} Σ_{k=1}^{M} X^j_ik = 1,  i = 1, 2, ..., N                      (2)

Σ_{i=1}^{N} Σ_{j=1}^{2} Σ_{k=1}^{M} X^j_ik = N                             (3)

Σ_{u=1}^{3} P^j_ku = 1,  j = 1, 2; k = 1, 2, ..., M                        (4)

Σ_{u=1}^{3} P^j_ku W_u = P^j_k1 W_1 + P^j_k2 W_2 + P^j_k3 W_3,
       j = 1, 2; k = 1, 2, ..., M                                          (5)

T^j_k − T^j_{k−1} = ΔT^j_{k,k−1},  j = 1, 2; k = 2, ..., M                 (6)

ΔT^j_{k,k−1} ≥ Q^j_{k,k−1},  j = 1, 2; k = 2, ..., M                       (7)

T^j_1 = 0,  j = 1, 2                                                       (8)

X^j_ik ∈ {0, 1},  P^j_ku ∈ {0, 1}.                                         (9)
Among them, Eq. (1) is the objective function of the model, representing the
total cost of flight arrival delays; Eq. (2) shows that each flight arrives on
exactly one runway; Eq. (3) shows that N flights need to arrive from moment 0;
Eq. (4) indicates that each flight has exactly one of the three aircraft types;
Eq. (5) gives the delay cost per unit time of the k-th flight on runway j;
Eq. (6) defines ΔT^j_{k,k−1}, the difference in arrival time between adjacent
flights on runway j; Eq. (7) indicates that this difference is not shorter than
the safe wake vortex separation; Eq. (8) shows that the first flight on each
runway arrives at moment 0 with zero delay; Eq. (9) declares the position and
aircraft type selection variables as binary. A specific algorithm is then
designed to solve the model.
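Under stated assumptions, the objective (1) can be evaluated for a candidate schedule as follows. The separation matrix and per-minute delay costs below are illustrative placeholders for Tables 1 and 2, and flights are assumed to land as early as the wake separation allows (Eqs. (6)-(8)).

```python
# Hedged sketch: evaluating objective (1) for a candidate two-runway
# schedule. SEP and W are illustrative assumptions, and each flight is
# assumed to land as early as the separation allows (Eqs. (6)-(8)).

SEP = {('H', 'H'): 2, ('H', 'M'): 2, ('H', 'L'): 3,
       ('M', 'H'): 2, ('M', 'M'): 2, ('M', 'L'): 3,
       ('L', 'H'): 2, ('L', 'M'): 2, ('L', 'L'): 2}   # minutes
W = {'H': 200, 'M': 120, 'L': 60}                     # yuan per minute

def runway_cost(queue):
    """Delay cost of one runway's queue: sum of W_u * T_k (inner Eq. (1))."""
    t, cost = 0, 0
    for k, ftype in enumerate(queue):
        if k > 0:
            t += SEP[(queue[k - 1], ftype)]  # T_k = T_{k-1} + Q_{k,k-1}
        cost += W[ftype] * t
    return cost

def total_cost(queue1, queue2):
    """Objective C of Eq. (1), summed over both runways."""
    return runway_cost(queue1) + runway_cost(queue2)

print(total_cost(['H', 'M', 'L'], ['M', 'H']))  # 940
```

Any IGEP candidate solution can be scored this way: decode the chromosome into the two runway queues and pass them to `total_cost`.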
4 Algorithm Design
FCFS can no longer handle the flight arrival scheduling problem. GEP, as a new
evolutionary algorithm, is powerful: compared with traditional evolutionary
algorithms, its problem-solving ability is greatly enhanced. This chapter
focuses on the design and process of IGEP.
4.1 IGEP
IGEP is used to address the problem of flight arrival on two runways with
multiple taxiways. GEP must be improved according to the specific problem and
model, mainly by adjusting its coding method, genetic operator design, and so
on, so that the algorithm solves the problem more simply and efficiently.
(1) Coding Method
The coding method is built around the problem: the model involves two runways.
To simplify, a flight arriving on runway 1 is coded as "1" and a flight arriving
on runway 2 as "2", regardless of the flight's specific taxiway allocation.
There are a total of m "1"s and n "2"s for the N flights arriving on the two
runways. In arrival order, the flights on runway 1 are encoded as
"A1 , A2 , · · · , Am " and the flights on runway 2 as "B1 , B2 , · · · , Bn ".
(2) Mutation Operator
Mutation is the most efficient of the modifying operators. It can occur anywhere
in the chromosome, and the chromosome structure remains complete after mutation
[16]. Root insertion, gene insertion, and gene recombination are not used in
IGEP, and recombination and transposition are likewise excluded. In the mutation
operator, the mutation probability Pm determines the point mutations in the
chromosome. Since the coding contains only the genotypes "1" and "2", the most
effective mutation is to switch "1" and "2" at some genes. Because each gene
represents the runway of an arriving flight, the mutated individual and
population are easy to obtain, which simplifies the GEP process and increases
the efficiency of solving the problem.
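A minimal sketch of this encoding and swap mutation follows; the mutation rates used in the demo are arbitrary choices, not the paper's setting.

```python
# Hedged sketch of the runway-assignment encoding and the "1" <-> "2"
# swap mutation described above. Rates below are illustrative.
import random

def mutate(chromosome, pm=0.05, rng=random):
    """Flip '1' <-> '2' at each gene independently with probability pm,
    i.e. move that flight to the other runway."""
    flip = {'1': '2', '2': '1'}
    return ''.join(flip[g] if rng.random() < pm else g for g in chromosome)

rng = random.Random(0)
parent = '1212121212'            # 10 flights in arrival order
child = mutate(parent, pm=0.3, rng=rng)
print(parent, '->', child)
```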
(3) Selection Strategy
In GEP, one of the most common selection strategies is roulette-wheel selection,
in which individuals are chosen according to their fitness through roulette
sampling: each individual occupies a sector of the wheel proportional to its
fitness, and the higher the fitness, the greater its chance of producing
offspring. Instead of this method, a strategy more suitable for the flight
scheduling problem is used in this paper: it always preserves the optimal
individual while the remaining individuals are replaced, and it ensures that
the population size remains constant.
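One plausible reading of this strategy is simple elitism: the best individual always survives unchanged, and the rest of the fixed-size population is refilled from the offspring. A hedged sketch under that assumption (the toy cost function in the demo is also an assumption):

```python
# Hedged sketch of an elitist, constant-size selection step consistent
# with the strategy described above. `cost` is any callable where lower
# is better (e.g. the total delay cost of the decoded schedule).

def select(population, offspring, cost):
    """Keep the best individual seen so far; fill the rest from offspring."""
    size = len(population)
    elite = min(population + offspring, key=cost)
    rest = sorted(offspring, key=cost)[: size - 1]
    return [elite] + rest

pop = ['1111', '2222', '1212']
kids = ['1122', '2211', '1221']
# Toy cost: number of flights assigned to runway 2.
print(select(pop, kids, cost=lambda s: s.count('2')))
# ['1111', '1122', '2211']
```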
Parameter               Value
Number of generations   500
Population size         50
Chromosome length       10
Number of genes         10
Mutation probability    0.05
Selection range         100%
The flights are numbered according to their sequence of arrival at the terminal
area. In accordance with the FCFS rule, the odd-numbered flights arrive on
runway 1 in sequence and the even-numbered flights arrive on runway 2 in
sequence, giving two initial queues l1 and l2 according to the arrival sequence
on the two runways. Both queues contain 10 arriving flights, representing the
arrival sequences on runway 1 and runway 2 respectively; this FCFS-based
sequence is regarded as the initial flight arrival sequence L0 . From L0 , the
total cost of flight arrival delays on the two runways is obtained as C0 . The
initial sequence L0 is then optimized by IGEP according to the set parameters.
When the algorithm reaches the termination condition, the optimized flight
arrival sequence is Ln , with total delay cost Cn . After simulation, the flight
arrival delays of the two schemes are obtained, as shown in Table 4.
Comparing the flight arrival delays of the two schemes in Table 4, it can be
seen that the total cost of flight arrival delays C decreased significantly
through IGEP optimization. Compared with the traditional FCFS rule, the total
cost of flight arrival delays fell from 54,862 yuan to 30,716 yuan, a reduction
of 24,146 yuan, or 44.01%. The average delay cost per flight fell from 2743.1
yuan to 1535.8 yuan. The total flight arrival delay time dropped from 43 min to
38 min, a decrease of 5 min; the average arrival time per flight fell from
2.15 min to 1.9 min, so flight arrival efficiency increased by 11.63%. With the
same flights arriving, the reduced arrival time shows that runway utilization
has improved. From the data in Table 4, line charts of the cost of flight
arrival delays on the two runways are obtained; the abscissa represents the
arrival sequence on the runway and the ordinate the cumulative cost of flight
arrival delays, as shown in Figs. 3 and 4.
As can be seen from Figs. 3 and 4, the cost of flight arrival delays on both
runways has declined. The cost of flight delays on runway 1 decreased from
27,406 yuan to 14,590 yuan, and on runway 2 from 27,456 yuan to 16,176 yuan,
reductions of 12,816 yuan and 11,280 yuan respectively. The airlines' total
delay cost has significantly decreased and their income has been safeguarded,
which has a positive impact on the development of airlines. At the same time,
the trend of the line charts shows that once the number of arriving flights
reaches a certain scale, the gap between the cost of flight arrival delays
before and after optimization grows with the number of arrivals, and the
economic significance of optimizing the flight arrival problem becomes ever
more apparent.

Table 4. The flight arrival delays of the two schemes

Flight    The initial flight arrival sequence L0               The optimized flight arrival sequence Ln
arrival   Runway 1  T (min)  Cost     Runway 2  T (min)  Cost     Runway 1  T (min)  Cost     Runway 2  T (min)  Cost
order                        (yuan)                      (yuan)                      (yuan)                      (yuan)
1         H(1)    0      0            H(2)    0      0            H(2)    0      0            H(2)    0      0
2         L(1)    3    276            M(2)    2    472            H(1)    2    996            H(1)    2    996
3         M(1)    5   1456            L(2)    5    932            H(1)    4   2988            H(2)    4   2988
4         L(1)    8   2192            H(2)    7   4418            M(2)    6   4404            M(1)    6   4404
5         M(1)   10   4552            L(2)   10   5338            M(1)    8   6292            M(2)    8   6292
6         H(1)   12  10528            M(2)   12   8170            M(2)   10   8652            M(1)   10   8652
7         L(1)   15  11908            L(2)   15   9550            L(2)   13   9848            M(2)   12  11484
8         H(1)   17  20374            H(2)   17  18016            L(1)   15  11228            L(2)   15  12864
9         L(1)   20  22214            M(2)   19  22500            L(2)   17  12792            L(1)   17  14428
10        M(1)   22  27406            M(2)   21  27456            L(1)   19  14540            L(1)   19  16176
Delay cost (L0 ): 54862 yuan
Delay time (L0 ): 43 min
6 Conclusion
The rapid development of the social economy has changed passengers' travel
modes, and demand for the aviation industry keeps growing. This development has
also brought flight delays and high delay costs, which impose higher demands on
flight arrival scheduling. Based on the results of domestic and international
flight scheduling research, this paper studies the problem of flight arrival
delays: taking the minimum total cost of flight arrival delays as the
optimization goal, the scheduling model of flight arrival is constructed and
IGEP is designed for the specific problem and model.
References
1. Alavi A (2011) A robust data mining approach for formulation of geotechnical
   engineering systems. Eng Comput 28:242–274
2. Azamathulla H, Ahmad Z, Aminuddin A (2013) Computation of discharge through
   side sluice gate using gene-expression programming. Irrig Drain 62:115–119
3. Civil Aviation Administration of China (2016) The civil aviation industry in
   2015 statistical bulletin [EB/OL]
4. Development Planning Department of Civil Aviation Administration of China
   (2016) From the statistical view of Civil Aviation. China Civil Aviation,
   Beijing (in Chinese)
5. Deshpande V, Arikan M (2012) Impact of airline flight schedules on flight
   delays. Manuf Serv Oper Manage 14:423–440
6. Divsalar M (2012) A robust data-mining approach to bankruptcy prediction. J
   Forecast 31:504–523
7. Dorndorf U (2007) Disruption management in flight gate scheduling. Statistica
   Neerlandica 61:92–114
8. Drüe C (2008) Aircraft type-specific errors in AMDAR weather reports from
   commercial aircraft. Q J R Meteorol Soc 134:229–239
9. Fernández-Ares A (2016) Analyzing the influence of the fitness function on
   genetically programmed bots for a real-time strategy game. Entertainment
   Comput 18:15–29
10. Gandomi A (2011) A new prediction model for the load capacity of castellated
    steel beams. J Constr Steel Res 67:1096–1105
11. Hu M, Xu X (1994) Ground holding strategy for air traffic flow control. J
    Nanjing Univ Aeronaut Astronaut 26:26–30 (in Chinese)
12. Kafle N, Zou B (2016) Modeling flight delay propagation: a new analytical-
    econometric approach. Transp Res Part B Methodol 93:520–542
13. Karbasi M, Azamathulla H (2016) GEP to predict characteristics of a
    hydraulic jump over a rough bed. KSCE J Civil Eng 20:1–6
14. Liang W, Li Y (2014) Research on optimization of flight scheduling problem
    based on improved gene expression algorithm. Comput Technol Dev 7:5–8 (in
    Chinese)
15. Marques J (2016) On an analytical model of wake vortex separation of
    aircraft. Aeronaut J 120:1534–1565
16. Peng J (2015) A new evolutionary algorithm based on chromosome hierarchy
    network. Int J Comput Appl 30:183–191
17. Vasilyev I, Avella P, Boccia M (2016) A branch and cut heuristic for a
    runway scheduling problem. Autom Remote Control 77:1985–1993
18. Xie T (2009) Study of arrival flight scheduling optimizing based on delay
    cost. PhD thesis, Beijing Jiaotong University, Beijing (in Chinese)
19. Xu X, Yao Y (2004) Application of genetic algorithm to aircraft sequencing
    in terminal area. J Traffic Transp Eng 4:121–126 (in Chinese)
20. Zhang Q, Hu M, Zhang H (2015) Dynamic multi-objective optimization model of
    arrival and departure flights on multiple runways based on RHC-GA. J Traffic
    Transp Eng 2:012 (in Chinese)
A Study of Urban Climate Change Vulnerability
Assessment Based on Catastrophe Progression
Method
1 Introduction
Climate change has already had a serious impact on human beings [1]. It not only
worsens the social economy [3], increases the flood exposure of cities [6] and
the difficulty of reservoir management [14], and severely threatens human health
[11], but also destroys the diversity and stability of ecosystems [2,9]. Climate
change has therefore become an important issue for the survival and sustainable
development of human beings. To minimize the threat it poses, effective
measures need to be formulated.
2 Method
The catastrophe progression method combines catastrophe theory with fuzzy
mathematics [10]. To make the ordering of indicators more objective and reduce
the influence of subjective factors, the evaluation indexes are ranked by
weight. The catastrophe progression method can be divided into six steps.
Step 1. Establish the assessment system.
The overall goal is divided into multi-level indicators, and the number of
indicators under any single indicator is not more than four, which is expressed
by the parameter n.
Step 2. Identify the model of the system.

f(x) = x^3 + ax,                                              n = 1
f(x) = x^4 + ax^2 + bx,                                       n = 2
f(x) = (1/5)x^5 + (1/3)ax^3 + (1/2)bx^2 + cx,                 n = 3      (1)
f(x) = (1/6)x^6 + (1/4)ax^4 + (1/3)bx^3 + (1/2)cx^2 + dx,     n = 4

Here f(x) represents the potential function of the state variable x, and
a, b, c, d represent the control variables of the state variable x.
Step 3. Data standardization.

x*_ij = x_ij / max_j(x_ij),   x_ij ∈ x+_ij
x*_ij = min_j(x_ij) / x_ij,   x_ij ∈ x−_ij                               (2)
196 Y. Sun et al.
e_j = −(1 / ln m) Σ_{i=1}^{m} p_ij ln p_ij,  j = 1, 2, · · · , n,        (4)

w_j = (1 − e_j) / Σ_{j=1}^{n} (1 − e_j),  j = 1, 2, · · · , n.           (5)

Here m represents the number of evaluated objects, n indicates the number of
indicators, and the value of w_j is the weight of each index.
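A sketch of the entropy-weight computation of Eqs. (4)-(5). Since Eq. (3), which defines p_ij, is not shown here, the usual column normalization p_ij = x*_ij / Σ_i x*_ij is assumed, and the small standardized data matrix is purely illustrative.

```python
# Hedged sketch of the entropy weights of Eqs. (4)-(5). The definition
# p_ij = x*_ij / sum_i x*_ij is an assumption (Eq. (3) is not shown),
# and the data matrix X is illustrative.
import math

def entropy_weights(X):
    """X: m evaluated objects x n indicators of standardized data."""
    m = len(X)
    weights = []
    for col in zip(*X):                  # iterate over indicator columns
        s = sum(col)
        p = [v / s for v in col]         # assumed Eq. (3)
        e = -sum(q * math.log(q) for q in p if q > 0) / math.log(m)  # Eq. (4)
        weights.append(1 - e)
    total = sum(weights)
    return [w / total for w in weights]  # Eq. (5)

X = [[1.0, 0.2], [0.5, 0.9], [0.8, 0.4]]
w = entropy_weights(X)
print([round(v, 3) for v in w])
```

The weights sum to 1, and indicators whose values vary more across objects (lower entropy) receive larger weights.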
Step 5. Confirm the normalized equation of each model.

n = 1: x_a = a^(1/2)
n = 2: x_a = a^(1/2), x_b = b^(1/3)
n = 3: x_a = a^(1/2), x_b = b^(1/3), x_c = c^(1/4)                       (6)
n = 4: x_a = a^(1/2), x_b = b^(1/3), x_c = c^(1/4), x_d = d^(1/5)
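A sketch of this normalization step. The text does not state here how the normalized control-variable scores are aggregated into the parent indicator, so the common "complementarity" rule (averaging) is assumed, as are the normalization exponents 1/2, 1/3, 1/4, 1/5 and the input values.

```python
# Hedged sketch of Step 5: normalization by Eq. (6), then aggregation.
# The averaging ("complementarity") rule and the inputs are assumptions.

def catastrophe_value(controls):
    """controls: 1-4 standardized indicator values in [0, 1], ordered by
    decreasing weight. Returns the catastrophe score of the parent node."""
    exponents = [2, 3, 4, 5]             # a^(1/2), b^(1/3), c^(1/4), d^(1/5)
    scores = [v ** (1.0 / e) for v, e in zip(controls, exponents)]
    return sum(scores) / len(scores)     # complementary criterion (assumed)

print(round(catastrophe_value([0.25, 0.125]), 4))  # 0.5
```

Applying this bottom-up over the indicator hierarchy yields the zone-level vulnerability values reported below.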
3 Case Study
Sichuan Province is located in the hinterland of China's southwest, on the
eastern edge of the Qinghai-Tibetan Plateau. Due to the unique geographical
environment of the plateau, climate change in Sichuan is very obvious.
Meanwhile, Sichuan lies on the Longmenshan Fault, where geological disasters
easily happen [13]. Sudden heavy rainfall caused by climate change leads to
urban waterlogging and even geological disasters (e.g., landslides, debris
flows). The basic situation of the cities in each economic zone is shown in
Fig. 1.
[Fig. 1. GDP shares of the five economic zones: CDER 57.5%, SSER 17.1%,
NESER 16.3%, PXER 7.6%, NWSER 1.6%]
The division of assessment regions varies according to the real problem [16].
Sichuan Province is divided into five major economic zones: the Chengdu
Economic Region (CDER), Northeast Sichuan Economic Region (NESER), Northwest
Sichuan Economic Region (NWSER), Pan-Xi Economic Region (PXER), and South
Sichuan Economic Region (SSER). To make the social vulnerability index of each
economic zone objective and reasonable, the vulnerability assessment of urban
climate change disaster is established based on the actual situation of Sichuan
(Table 1). Our study takes the Sichuan Statistical Yearbook (2015) as the
source of the original data. According to Eqs. (2)–(6), the standardized data
and the importance of each indicator can be calculated; Table 1 shows the
sorted result of each layer's indexes.
In the assessment system, the number of indexes in each layer is not more than
four. According to the number of indicators at each level, the vulnerability of
all levels' indicators can be calculated; the urban vulnerability of Sichuan's
economic zones is shown in Table 2. The larger a value in the table, the more
serious the degree of vulnerability. As shown in Table 2, the disaster
vulnerability of the CDER is the lowest and its ability of disaster prevention
and mitigation the highest, while the vulnerability of the SSER is the highest
and its ability of disaster prevention and reduction the worst.
Table 2. Urban vulnerability of the Sichuan economic zones

Region  A1      A2      A3      A4      B1      B2      C1      C2      F       Rank
CDER    0.1656  0.4301  0.553   0.5751  0.4656  0.4836  0.3493  0.4725  0.6379  5
NESER   0.1917  0.5166  0.5923  0.6026  0.5081  0.6781  0.442   0.6049  0.6616  4
NWSER   0.5643  0.4379  0.3448  0.5118  0.5209  0.5456  0.6892  0.5062  0.8667  2
PXER    0.3514  0.4093  0.4762  0.5398  0.5687  0.4881  0.509   0.4058  0.7699  3
SSER    0.6636  0.6883  0.5596  0.5888  0.5209  0.5456  0.4379  0.5566  0.897   1
[Fig. 2. Comparison of vulnerability indicators across the five economic zones
(CDER, NESER, NWSER, PXER, SSER)]
Among all economic zones, the CDER's disaster vulnerability, coping ability,
sensitivity, and resilience indicators all rank lowest, so its overall situation
of climate disaster prevention is relatively benign. The values of vulnerability
indexes A3 and A4 are higher than the CDER's other vulnerability indicators,
which means disaster awareness and hospital conditions need to be improved. As
the CDER is the core area of the entire Sichuan region, its overall level of
medical care and economy is better than that of the other economic regions. In
the context of population mobility, however, the actual population of the CDER
is much larger than the Yearbook data, so the actual vulnerability is more
serious than that shown in Table 2. To minimize the urban vulnerability of the
CDER, appropriate measures must be taken, especially regarding disaster
awareness and medical conditions. The urban management of the CDER should pay
much more attention to the floating population and effectively reduce urban
climate disaster vulnerability.
This economic zone covers a larger area than the other economic zones, but
its urbanization process is slow: its ratios of GDP, urban built-up area,
and urban population are much lower than those of the other areas (Fig. 1).
As shown in Fig. 2, the sensitivity and vulnerability of the zone's cities
are very serious; the urban situation, the structure of the population,
self-rescue ability, and social security are poor.
4 Conclusion
References
1. Bachelet D, Ferschweiler K et al (2016) Climate change effects on Southern
California deserts. J Arid Environ 127:17–29
2. Bellard C, Bertelsmeier C et al (2012) Impacts of climate change on the future of
biodiversity. Ecol Lett 15(4):365–377
3. Bowen A, Cochrane S, Fankhauser S (2011) Climate change, adaptation and eco-
nomic growth. Clim Change 113(2):95–106
4. Cutter SL (1996) Vulnerability to environmental hazards. Prog Hum Geogr
20(4):529–539
5. Füssel HM, Klein RJT (2006) Climate change vulnerability assessments: an evo-
lution of conceptual thinking. Clim Change 75(3):301–329
6. Hallegatte S, Green C et al (2013) Future flood losses in major coastal cities.
Nature Clim Change 3(9):802–806
7. Mazumdar J, Paul SK (2016) Socioeconomic and infrastructural vulnerability
indices for cyclones in the Eastern Coastal States of India. Nat Hazards 82(3):1621–
1643
8. She L, Wang G, Xu J (2013) Research of influential factors of urban vulnerability
based on interpretative structural modeling: a case study of rescue and recovery in
Japan earthquake. Areal Res Develop (in Chinese) 12:18
9. Shen G, Pimm SL et al (2015) Climate change challenges the current conservation
strategy for the giant panda. Biol Conserv 190:43–50
10. Songhe X, Chuanfeng H, Lingpeng M (2015) Assessment and the key influencing
factors analysis of urban disaster prevention system vulnerability. Soft Science
9:131–134 (in Chinese)
11. Thompson TM, Rausch S et al (2014) A systems approach to evaluating the air
quality co-benefits of US carbon policies. Nature Clim Change 4(10):917–923
12. Wild M (2009) Global environmental change. J Geophys Res Atmos 114(11):733–
743
13. Xu J, Xie H et al (2015) Post-seismic allocation of medical staff in the longmen
shan fault area: case study of the Lushan earthquake. Environ Hazards 14(4):1–23
14. Yang G, Guo S et al (2016) Multi-objective operating rules for danjiangkou reser-
voir under climate change. Water Resour Manage 30(3):1183–1202
15. Yi L, Xi Z et al (2014) Analysis of social vulnerability to hazards in china. Environ
Earth Sci 71(7):3109–3117
16. Yoon DK (2012) Assessment of social vulnerability to natural disasters: a compar-
ative study. Nat Hazards 63(2):823–843
17. Zhang Q, Meng H (2014) Social vulnerability and poverty to climate change: a
summary on foreign research. J China Agric Univ (in Chinese) 31:29–33
Judging Customer Satisfaction by Considering
Fuzzy Random Time Windows in Vehicle
Routing Problems
1 Introduction
Generally, in the classical VRP there is a set of customers, each of whom has their own demand. Identical vehicles stationed at the depot deliver goods to these customers and are required to start and end their routes at the depot. The objective of the classical VRP is to minimize the total cost by designing an optimal delivery route for each vehicle. Nowadays, the VRP arises in almost every industry, such as supply chain management and transport planning. With the development of modern technology, customer satisfaction has become a hot issue in the VRP.
In the classical VRP, delivery vehicles usually need to meet the following conditions: (1) all customers are served using a minimum number of vehicles; (2) each customer is served exactly once by one vehicle; (3) each vehicle starts and ends at the depot; (4) the total customer demand on each route cannot exceed the load capacity of the vehicle. There are several VRP variants, such as the multi-depot VRP [1,9], the
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 16
Pickup and Delivery VRP [10], and the VRP with Backhauls [16]. In addition, some problems have pre-set time constraints on the period of the day in which deliveries should take place. These are known as Vehicle Routing Problems with Time Windows (VRPTW), a well-known variant of the VRP.
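To make the classical setting concrete, the route-cost objective and conditions (3) and (4) above can be sketched in a few lines. The distance matrix, demands, and capacity below are hypothetical illustration values, not data from the chapter.

```python
# Hypothetical instance: node 0 is the depot, nodes 1..4 are customers.
dist = {(0, 1): 4, (1, 2): 3, (2, 0): 5, (0, 3): 2, (3, 4): 6, (4, 0): 3}
demand = {1: 2, 2: 3, 3: 1, 4: 4}
capacity = 7

def route_cost(route):
    """Travel cost of one route; every route starts and ends at the depot (condition 3)."""
    return sum(dist[a, b] for a, b in zip(route, route[1:]))

def feasible(route):
    """Load-capacity condition (4): total demand on the route fits the vehicle."""
    return sum(demand[c] for c in route if c != 0) <= capacity

routes = [[0, 1, 2, 0], [0, 3, 4, 0]]   # each customer served exactly once (condition 2)
assert all(feasible(r) for r in routes)
total = sum(route_cost(r) for r in routes)   # objective: minimize this total cost
```

The same evaluation skeleton carries over to the time-window variants discussed next, where a satisfaction or penalty term is added per customer.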
The VRPTW is classified into two types: the VRP with hard time windows (VRPHTW), in which the time windows are hard constraints and a route is infeasible if the service for any customer does not start within the limits established by the time window [3]; and the VRP with soft time windows (VRPSTW), a relaxation of the VRPHTW in which the delivery of goods is allowed outside the time windows if a penalty is paid [4,5,12]. There are many uncertainties in real-life applications where hard time windows can be violated, and soft time windows can deal with these situations [13,14]. The travel time on each road section cannot be determined exactly because of many uncertainties, such as traffic incidents, vehicle breakdowns, work zones, special events, and driver skills and experience [6].
In the past, research treated these uncertainties as random variables [6,15], but recently researchers have increasingly applied fuzzy membership functions to characterize the service level issues associated with time window violation in a vehicle routing problem, called the VRP with fuzzy time windows (VRPFTW) [2,7,14]. This research has highlighted the fact that fuzzy factors and stochastic factors can exist at the same time in the VRPTW. In this paper, we propose to use fuzzy random theory to describe the time windows in the VRPTW, yielding the VRPFRTW.
This paper is organized as follows. In Sect. 2, we describe the problem, which includes fuzzy random time windows. Then, in Sect. 3, we discuss the methods for dealing with these uncertainties and propose a procedure to handle fuzzy random time windows. Next, in Sect. 4, we propose a membership function for customer satisfaction based on the fuzzy random time windows. Finally, some concluding remarks are outlined in Sect. 5.
The customer satisfaction level is related to the start time. If a customer requires that the service start within [e, l] in the traditional VRP with hard time windows, starting earlier than e or later than l is unacceptable and the satisfaction level is 0; otherwise, the customer is satisfied and the satisfaction level is 1. With soft time windows, starting the service earlier than e or later than l is permitted to some extent, but no earlier than EET or later than ELT [14]. Hence, if service starts between e and l, the customer satisfaction level is 1; if service starts between EET and e or between l and ELT, the customer satisfaction level takes a value in [0, 1]; otherwise, the customer satisfaction level is 0. The customer satisfaction level from hard to soft time windows can be seen in Fig. 1.
Fig. 1. The customer satisfaction level from hard to soft time windows
From previous studies, it can be seen that EET and ELT are usually treated as known and deterministic, while in practice they usually cannot be obtained as deterministic data. There are two common ways to get the data: one is reasoning and the other is consulting the client. As an example of reasoning, suppose the concreting in a construction project should start at 9:00 and the process of unloading the concrete requires 10 min. By reasoning, the ELT is supposed to be 8:50, while the ELT actually acceptable to the project manager may be 8:30. As seen, reasoning often leads to a false EET or ELT due to its lack of flexibility. As for consulting the client, deterministic data cannot be obtained easily because the client usually gives a response containing ambiguous information. The customer may make a statement such as "not too early" or "no later than 10:00". From such information, specific data obviously cannot be acquired, and if the response is treated as deterministic, part of the information which the customer provides will be lost. In sum, a response often includes not only some fuzzy information but also some random information. In this paper, the vehicle routing problem with fuzzy random time windows (VRPFRTW) is proposed, and fuzzy random theory is chosen to describe the fuzzy random time windows. EET and ELT are considered to be fuzzy random, and are denoted $\widetilde{EET}$ and $\widetilde{ELT}$.
Step 1. Consider the endurable earliness time of the customer, $\widetilde{EET}$, to be a fuzzy random variable. Using previous data and professional experience together with statistical methods, estimate the parameters $[m]_L$, $[m]_R$, $\mu_0$ and $\sigma_0^2$.

Step 2. Obtain the intermediate parameters, namely the decision-makers' degree of optimism, by using a group decision making approach. From Puri and Ralescu's definition in [11], a fuzzy random variable is a measurable function from a probability space to a collection of fuzzy variables. Roughly speaking, a fuzzy random variable is a random variable taking fuzzy values. In this paper, the fuzzy random variable $\widetilde{EET}$ is denoted as $\widetilde{EET} = ([m]_L, \rho(\omega), [m]_R)$, where $\rho(\omega) \sim N(\mu_0, \sigma_0^2)$ with probability density function
$$\varphi_\rho(x) = \frac{1}{\sqrt{2\pi}\,\sigma_0} e^{-\frac{(x-\mu_0)^2}{2\sigma_0^2}}.$$
Suppose that $\sigma$ is a probability level with $\sigma \in [0, \sup_x \varphi_\rho(x)]$, and $r$ is a possibility level with $r \in [r_l, 1]$, where
$$r_l = \frac{[m]_R - [m]_L}{[m]_R - [m]_L + \rho_\sigma^R - \rho_\sigma^L};$$
both reflect the decision-maker's degree of optimism.
Step 3. Let $\rho_\sigma$ be the $\sigma$-cut of the random variable $\rho(\omega)$. According to Xu and Liu's lemma in [17], $\rho_\sigma = [\rho_\sigma^L, \rho_\sigma^R] = \{x \in \mathbb{R} \mid \varphi_\rho(x) \ge \sigma\}$, and the values of $\rho_\sigma^L$ and $\rho_\sigma^R$ can be expressed as
$$\rho_\sigma^L = \inf\{x \in \mathbb{R} \mid \varphi_\rho(x) \ge \sigma\} = \inf \varphi_\rho^{-1}(\sigma) = \mu_0 - \sqrt{-2\sigma_0^2 \ln(\sqrt{2\pi}\,\sigma_0 \sigma)},$$
$$\rho_\sigma^R = \sup\{x \in \mathbb{R} \mid \varphi_\rho(x) \ge \sigma\} = \sup \varphi_\rho^{-1}(\sigma) = \mu_0 + \sqrt{-2\sigma_0^2 \ln(\sqrt{2\pi}\,\sigma_0 \sigma)}.$$
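As a quick numerical check of Step 3, the cut endpoints follow directly from the closed forms above. The parameters $\mu_0$, $\sigma_0$, and the level $\sigma$ below are hypothetical illustration values, not data from the paper.

```python
import math

def phi(x, mu0, sigma0):
    """Density of N(mu0, sigma0^2)."""
    return math.exp(-(x - mu0) ** 2 / (2 * sigma0 ** 2)) / (math.sqrt(2 * math.pi) * sigma0)

def rho_cut(mu0, sigma0, sigma):
    """sigma-cut [rho_L, rho_R] of rho(omega) ~ N(mu0, sigma0^2),
    valid for 0 < sigma <= sup phi = 1 / (sqrt(2*pi) * sigma0)."""
    half = math.sqrt(-2 * sigma0 ** 2 * math.log(math.sqrt(2 * math.pi) * sigma0 * sigma))
    return mu0 - half, mu0 + half

mu0, sigma0, sigma = 8.5, 0.25, 0.8    # hypothetical values (time in hours)
lo, hi = rho_cut(mu0, sigma0, sigma)
# On the cut boundary the density equals the chosen probability level sigma.
assert abs(phi(lo, mu0, sigma0) - sigma) < 1e-9
assert abs(phi(hi, mu0, sigma0) - sigma) < 1e-9
```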
Step 4. Transform the fuzzy random variable $\widetilde{EET} = ([m]_L, \rho(\omega), [m]_R)$ into the $(r, \sigma)$-level trapezoidal fuzzy variable $\tilde{\omega}_{EET(r,\sigma)}$ by the following equation:
$$\widetilde{EET} \to \tilde{\omega}_{EET(r,\sigma)} = ([m]_L, \underline{m}, \overline{m}, [m]_R).$$
Here, we have
$$\underline{m} = [m]_R - r([m]_R - \rho_\sigma^L) = [m]_R - r\big([m]_R - \mu_0 + \sqrt{-2\sigma_0^2 \ln(\sqrt{2\pi}\,\sigma_0 \sigma)}\big),$$
$$\overline{m} = [m]_L + r(\rho_\sigma^R - [m]_L) = [m]_L + r\big(\mu_0 - [m]_L + \sqrt{-2\sigma_0^2 \ln(\sqrt{2\pi}\,\sigma_0 \sigma)}\big).$$
208 Y. Ma et al.
The trapezoidal fuzzy variable can be specified by $\tilde{\omega}_{EET(r,\sigma)} = ([m]_L, \underline{m}, \overline{m}, [m]_R)$ with the membership function:
$$\mu_{\tilde{\omega}_{EET(r,\sigma)}}(x) = \begin{cases} 0 & \text{for } x > [m]_R \\ \dfrac{[m]_R - x}{[m]_R - \overline{m}} & \text{for } \overline{m} \le x \le [m]_R \\ 1 & \text{for } \underline{m} \le x \le \overline{m} \\ \dfrac{x - [m]_L}{\underline{m} - [m]_L} & \text{for } [m]_L \le x \le \underline{m} \\ 0 & \text{for } x < [m]_L. \end{cases}$$
The process of transforming the fuzzy random variable $\widetilde{EET}$ to the $(r, \sigma)$-level trapezoidal fuzzy variable $\tilde{\omega}_{EET(r,\sigma)}$ is illustrated in Fig. 2.
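The Step 4 transformation and the trapezoidal membership function can be sketched as follows. The numeric values are hypothetical, and $r \ge r_l$ is assumed so that $\underline{m} \le \overline{m}$.

```python
def trapezoid(mL, mR, rho_L, rho_R, r):
    """(r, sigma)-level trapezoidal fuzzy variable ([m]_L, m_lower, m_upper, [m]_R);
    assumes r >= r_l so that m_lower <= m_upper."""
    m_lower = mR - r * (mR - rho_L)
    m_upper = mL + r * (rho_R - mL)
    return mL, m_lower, m_upper, mR

def membership(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical values: with r = 1 the inner points collapse to the sigma-cut.
a, b, c, d = trapezoid(8.0, 9.0, 8.2, 8.8, 1.0)
assert abs(b - 8.2) < 1e-9 and abs(c - 8.8) < 1e-9
assert membership(8.5, a, b, c, d) == 1.0
assert abs(membership(8.1, a, b, c, d) - 0.5) < 1e-9
```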
Step 5. Apply the expected value operator to convert the $(r, \sigma)$-level trapezoidal fuzzy variable into a deterministic one. Based on the definitions of a fuzzy interval and its expected value by Heilpern [8], suppose that there is a fuzzy number $\tilde{N} = (a, b, c, d)$ with membership function
$$\mu_{\tilde{N}}(x) = \begin{cases} 0 & \text{for } x < a \\ f_{\tilde{N}}(x) & \text{for } a \le x \le b \\ 1 & \text{for } b \le x \le c \\ g_{\tilde{N}}(x) & \text{for } c \le x \le d \\ 0 & \text{for } x > d, \end{cases}$$
where $f_{\tilde{N}}(x)$ and $g_{\tilde{N}}(x)$ are the increasing left-side and decreasing right-side functions of the fuzzy number, respectively. Then the expected value of the fuzzy variable $\tilde{\omega}_{EET(r,\sigma)}$ is as follows:
$$EV[\tilde{\omega}_{EET(r,\sigma)}] = \frac{1}{2}\Big[\Big(\underline{m} - \int_{[m]_L}^{\underline{m}} f_{\tilde{\omega}_{EET(r,\sigma)}}(x)\,dx\Big) + \Big(\overline{m} + \int_{\overline{m}}^{[m]_R} g_{\tilde{\omega}_{EET(r,\sigma)}}(x)\,dx\Big)\Big].$$
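Step 5 can be checked numerically: for a trapezoid with linear sides, the expected value above collapses to $(a + b + c + d)/4$. The trapezoid used below is a hypothetical example.

```python
def expected_value(a, b, c, d, steps=10_000):
    """Heilpern-style expected value of a trapezoidal fuzzy number (a, b, c, d):
    EV = ((b - int_a^b f) + (c + int_c^d g)) / 2, with the side integrals
    computed by the midpoint rule."""
    h1, h2 = (b - a) / steps, (d - c) / steps
    int_f = sum((i + 0.5) * h1 / (b - a) for i in range(steps)) * h1
    int_g = sum((d - (c + (i + 0.5) * h2)) / (d - c) for i in range(steps)) * h2
    return 0.5 * ((b - int_f) + (c + int_g))

ev = expected_value(8.0, 8.2, 8.8, 9.0)   # hypothetical trapezoid
# With linear sides the result equals (a + b + c + d) / 4 = 8.5.
assert abs(ev - 8.5) < 1e-9
```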
Based on what has been discussed above, the fuzzy random time windows can be obtained; see Fig. 3.
Fig. 3. The customer satisfaction level from soft to fuzzy random time windows
where $n$ is the number of customers, $L_i(t_i)$ is the satisfaction level when serving customer $i$, and $\underline{Sa}$ is the least average customer satisfaction level. The membership function of the satisfaction level, $L_i(t_i)$, is as below [14]:
$$L_i(t_i) = \begin{cases} 0, & t_i < \widetilde{EET}_i, \\[2pt] \dfrac{t_i - \widetilde{EET}_i}{e_i - \widetilde{EET}_i}, & \widetilde{EET}_i \le t_i < e_i, \\[2pt] 1, & e_i \le t_i < l_i, \\[2pt] \dfrac{\widetilde{ELT}_i - t_i}{\widetilde{ELT}_i - l_i}, & l_i \le t_i < \widetilde{ELT}_i, \\[2pt] 0, & t_i \ge \widetilde{ELT}_i. \end{cases}$$
As discussed above, the service may start outside the time window $[e, l]$, and the bounds of acceptable earliness and lateness are described by $\widetilde{EET}$ and $\widetilde{ELT}$, respectively. Obviously, the earliness and lateness are closely related to the quality of service of the supplier. The response of a customer satisfaction level to a given service time may not be simply "good" or "bad"; instead, it may be between "good" and "bad". For example, the customer might say "it's all right" to being served within $[\widetilde{EET}, e]$ or $[l, \widetilde{ELT}]$. In either case, the service level cannot be described by only two states (0 or 1).
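The piecewise satisfaction level $L_i(t_i)$ can be sketched directly from its definition. The window bounds below are hypothetical illustration values (times in hours).

```python
def satisfaction(t, eet, e, l, elt):
    """Customer satisfaction level L_i(t_i) for a service start time t,
    given soft bounds eet < e <= l < elt."""
    if t < eet or t >= elt:
        return 0.0
    if e <= t < l:
        return 1.0
    if t < e:
        return (t - eet) / (e - eet)      # early but tolerated
    return (elt - t) / (elt - l)          # late but tolerated

# Hypothetical window: fully satisfying part [9.0, 10.0], tolerated part [8.5, 10.5].
assert satisfaction(9.5, 8.5, 9.0, 10.0, 10.5) == 1.0
assert satisfaction(8.75, 8.5, 9.0, 10.0, 10.5) == 0.5
assert satisfaction(8.0, 8.5, 9.0, 10.0, 10.5) == 0.0
```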
5 Conclusion
In this paper, we focus on a special type of VRPTW, the vehicle routing problem with fuzzy random time windows and multiple decision makers (VRPFRTW-MDM), which has seldom been considered before. We present a membership function for customer satisfaction based on fuzzy random time windows in vehicle routing problems. Since customer satisfaction is becoming more and more important for suppliers, the objective of this paper is to ensure that all customers are satisfied to an acceptable degree by judging vehicle arrival times. We also propose a method for dealing with fuzzy random time windows based on fuzzy random theory. Finally, we give a measure function for obtaining customer satisfaction based on fuzzy random time windows.
References
1. Aras N, Aksen D, Tuğrul Tekin M (2011) Selective multi-depot vehicle routing
problem with pricing. Transp Res Part C Emerg Technol 19(5):866–884
2. Brito J, Martínez F et al (2012) ACO-GRASP-VNS metaheuristic for VRP with fuzzy windows time constraints. In: Computer aided systems theory – EUROCAST 2011, Springer, pp 440–447
3. Cordeau JF, Desaulniers G et al (2002) VRP with time windows. Veh Routing Probl 9:157–193
4. Errico F, Desaulniers G et al (2016) The vehicle routing problem with hard time windows and stochastic service times. EURO J Transp Logist 1–29
5. Figliozzi MA (2010) An iterative route construction and improvement algorithm
for the vehicle routing problem with soft time windows. Trans Res Part C Emerg
Technol 18(5):668–679
6. Gao S, Chabini I (2006) Optimal routing policy problems in stochastic time-
dependent networks. Trans Res Part B Methodol 40(2):93–122
7. Ghoseiri K, Ghannadpour SF, Seifi A (2010) Locomotive routing and scheduling
problem with fuzzy time windows. In: Transportation research board 89th Annual
Meeting (No: 10-2592)
8. Heilpern S (1992) The expected value of a fuzzy number. Fuzzy Sets Syst 47:81–86
9. Lightnerlaws C, Agrawal V et al (2016) An evolutionary algorithm approach for
the constrained multi-depot vehicle routing problem. Int J Intell Comput Cybern
9(1)
10. Nair DJ, Grzybowska H et al (2016) Food rescue and delivery: a heuristic algorithm for the periodic unpaired pickup and delivery vehicle routing problem. Transp Res Board
11. Puri ML, Ralescu DA (1986) Fuzzy random variables. J Math Anal Appl
114(2):409–422
12. Qureshi A, Taniguchi E, Yamada T (2009) An exact solution approach for vehicle
routing and scheduling problems with soft time windows. Transp Res Part E Logist
Transp Rev 45(6):960–977
13. Shi C, Li T et al (2016) A heuristics-based parthenogenetic algorithm for the VRP with potential demands and time windows. Sci Program 1:1–12
14. Tang J, Pan Z et al (2009) Vehicle routing problem with fuzzy time windows. Fuzzy
Sets Syst 160(5):683–695
15. Taş D, Dellaert N et al (2012) Vehicle routing problem with stochastic travel times
including soft time windows and service costs. Comput Oper Res
16. Wassan N, Wassan N et al (2016) The multiple trip vehicle routing problem with
backhauls: formulation and a two-level variable neighbourhood search. Comput
Oper Res
17. Xu J, Liu Y (2008) Multi-objective decision making model under fuzzy random
environment and its application to inventory problems. Inform Sci 178(14):2899–
2914
Hybrid Multiobjective Evolutionary Algorithm
with Differential Evolution for Process Planning
and Scheduling Problem
1 Introduction
In an intelligent manufacturing system, a set of prismatic parts needs to be processed into products effectively and economically according to various resource constraints. The parts have operations with different features, which are related to the machines, tools and tool access directions (TADs). Moreover, the precedence relationship constraints among operations must be satisfied with respect to technological and geometrical considerations. Process planning
generates the optimal process plans, i.e., it optimizes the operations (machine, tool, tool access direction) and their sequences. Scheduling assigns the most appropriate moment to execute each operation under competitive resources. Process planning and scheduling (PPS) optimizes the process plan and the schedule simultaneously within the precedence relationship constraints and manufacturing resources. However, it is very difficult to efficiently find an optimal solution among all of the combinations of operations and manufacturing resources satisfying one or more specified objectives.
Because of the complex, multi-resource constraints and the multi-objective requirements of the PPS problem, many researchers have applied evolutionary algorithm (EA) methods to deal with it while satisfying single and multiple objectives [3,10,11,17,19]. In particular, multi-objective evolutionary algorithms (MOEAs) are well suited to solving multi-objective optimization problems (MOPs) [6,18]. Among MOEAs, the vector evaluated genetic algorithm (VEGA) divides the population into several sub-populations according to the number of objectives, each of which evolves toward a single objective [14]. As two classical MOEAs, NSGA-II [5] and SPEA2 [20] have been proven to achieve good convergence and distribution performance in solving MOPs. NSGA-II uses Pareto ranking and a crowding distance mechanism to obtain better performance. SPEA2 proposes a raw fitness assignment mechanism and a density mechanism to guarantee convergence and distribution performance. For solving the multi-objective PPS problem, Zhang et al. propose a hybrid sampling strategy-based multi-objective evolutionary algorithm (HSS-MOEA), which combines the sampling strategy of VEGA and a sampling strategy based on a new Pareto dominating and dominated relationship-based fitness function (PDDR-FF) [19]. The hybrid sampling strategies preserve both the convergence and the distribution performance while reducing the computational time.
As a real-parameter optimization method, Differential Evolution (DE) [16] has also been treated as a population-based approach, in which mutation and crossover are the variation operators used to generate new solutions. In DE, the mutation operator is executed based on the differences between individuals to guide the search direction in the current population, and a replacement mechanism is used to maintain the population. The effectiveness and simplicity of DE have attracted much research interest in single-objective and multi-objective optimization [1,2,4,7–9,12,13,15]. In particular, Iorio and Li [8] proposed three DE variants incorporating directional information to improve the optimization performance. The directional information among individuals with the same or different rank, as well as different crowding distances, can guide the search to converge towards the Pareto frontier and/or spread along the Pareto frontier, improving the convergence and distribution performance. Iorio and Li [8] thus provide an interesting avenue for combining DE with other MOEAs to improve performance, especially for the more complicated PPS problem, since applying DE alone to the PPS problem could increase the computation time for practical optimization problems.
214 C. Wang et al.
Parameters:
I: number of parts;
Ji : number of operations for part i;
M : number of machines;
L: number of tools;
D: number of TADs;
Oi : set of operations for part i, Oi = {oij |j = 1, 2, · · · , Ji };
$$t^{PRE}_{mij} = t^{MC}_{mij} + t^{TC}_{mij} + t^{SC}_{mij} \qquad (1)$$
The processing time for an operation consists of the preparation time and the machining time for the operation:
$$t^{P}_{mij} = t^{PRE}_{mij} + t^{M}_{mij}, \qquad (2)$$
completion time plus its processing time might be smaller than its completion time.
216 C. Wang et al.
Decision variables:
$$x^M_{mij} = \begin{cases} 1, & \text{if } o_{ij} \text{ is performed by machine } m, \\ 0, & \text{otherwise}; \end{cases}$$
$$x^T_{lij} = \begin{cases} 1, & \text{if } o_{ij} \text{ is performed by tool } l, \\ 0, & \text{otherwise}; \end{cases}$$
$$x^D_{dij} = \begin{cases} 1, & \text{if } o_{ij} \text{ is performed by TAD } d, \\ 0, & \text{otherwise}; \end{cases}$$
$$y_{ijkh} = \begin{cases} 1, & \text{if } o_{ij} \text{ is performed directly before } o_{kh}, \\ 0, & \text{otherwise}; \end{cases}$$
$$\Omega(X, Y) = \begin{cases} 1, & \text{if } X = Y, \\ 0, & \text{otherwise}. \end{cases}$$
The mathematical model can be formulated as the following bi-criteria nonlinear mixed integer programming (NMIP) model:
$$x^M_{mij} = 0, \ \forall (i, j) \notin A_m, \ \forall m \qquad (11)$$
$$y_{ijkh} \in \{0, 1\}, \ \forall (i, j), (k, h) \qquad (12)$$
$$x^M_{mij} \in \{0, 1\}, \ \forall m, (i, j) \qquad (13)$$
$$t^{C}_{mij} \ge 0, \ \forall m, (i, j). \qquad (14)$$
The sub-populations and the elitist population A(t) are combined to form a mating pool. In the mating pool, sub-population 1 saves the good individuals for one objective, and sub-population 2 stores the good individuals for the other objective. The elitist population (archive) holds the individuals with good PDDR-FF values. Therefore, in the mating pool one-third of the individuals serve one objective, one-third serve the other objective, and the remaining one-third serve both objectives.
Phase 3: Reproduction with Traditional Crossover and Mutation Operators
Problem-dependent crossover and mutation operators are used to reproduce new individuals to form the new population P(t+1). Moreover, a local search mechanism is applied to improve the quality of individuals after the reproduction process. The details of reproduction and local search are described in the article by Zhang et al. [19].
Phase 4: Archive Maintenance by PDDR-FF based Elitist Sampling Strategy
The individuals in A(t) and P(t) are combined to form a temporary archive A′(t). Thereafter, the PDDR-FF values of all individuals in A′(t) are calculated and sorted. If an individual is nondominated, its fitness value will not exceed one; the fitness value of a dominated individual will exceed one. Moreover, even the nondominated individuals have different PDDR-FF values: nondominated individuals will have smaller values (near 0) than the edge points (near 1). After calculating the fitness function values, the |A(t)| individuals with the smallest values in A′(t) are copied to form A(t + 1).
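A minimal sketch of a PDDR-FF-style fitness with the two properties stated above (nondominated values at most one, dominated values above one). The form q(S) + 1/(p(S) + 1) follows Zhang et al. [19]; the objective vectors below are hypothetical.

```python
def dominates(u, v):
    """u Pareto-dominates v (both objectives minimized)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pddr_ff(pop):
    """PDDR-FF value q(S) + 1/(p(S) + 1): q(S) counts individuals dominating S,
    p(S) counts individuals dominated by S; smaller is better."""
    values = []
    for s in pop:
        q = sum(dominates(o, s) for o in pop if o is not s)
        p = sum(dominates(s, o) for o in pop if o is not s)
        values.append(q + 1.0 / (p + 1))
    return values

pop = [(1, 9), (3, 3), (9, 1), (5, 5)]   # hypothetical objective vectors
vals = pddr_ff(pop)
# (5, 5) is dominated by (3, 3), so its value exceeds one; the rest do not.
assert vals[3] > 1
assert all(v <= 1 for v in vals[:3])
```

Note how dominating many solutions drives the value toward 0, which is why interior nondominated points score better than edge points.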
Phase 5: Archive Enhancement by Combining DE
In this paper, the DE operators, i.e., the mutation, crossover and selection operators, are applied only to the temporary archive A′(t+1), which acts like a local search process to improve the performance of the archive. After the DE operators, the obtained solution set (temporary archive) A′′(t+1) and A′(t+1) are combined to generate the final A(t + 1) according to the PDDR-FF values.
The DE/current/2 strategy is used to generate new mutants: for each individual x_i in A′(t + 1), three individuals r_1, r_2, r_3 are randomly picked to perform the reproduction according to the DE mutation operator.
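For illustration, the classic DE/rand/1 mutation of Storn and Price [16] can be sketched as below; the chapter's DE/current/2 operator combines the current individual x_i with the picked individuals r_1, r_2, r_3 in the same differential spirit. The population values and scale factor F are hypothetical.

```python
import random

def de_mutation(pop, i, F=0.5):
    """Classic DE/rand/1 mutation: v = r1 + F * (r2 - r3), with r1, r2, r3
    drawn from the population excluding the current individual i."""
    r1, r2, r3 = random.sample([x for j, x in enumerate(pop) if j != i], 3)
    return [a + F * (b - c) for a, b, c in zip(r1, r2, r3)]

random.seed(0)
pop = [[0.0, 0.0], [1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 0.5]]
mutant = de_mutation(pop, 0)
assert len(mutant) == len(pop[0])   # mutant lives in the same decision space
```

In the archive-enhancement phase, such mutants would then pass through crossover and PDDR-FF-based selection as described above.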
[Figure: comparison of HMOEA-DE and HSS-MOEA on the Coverage metrics C(HMOEA-DE, HSS-MOEA) and C(HSS-MOEA, HMOEA-DE), and on the Hypervolume and Spacing metrics.]
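The coverage metric C(·,·) used in the comparison above can be sketched as Zitzler's set coverage: the fraction of one solution set that is dominated by (or equal to) some member of the other. The two point sets below are hypothetical minimization fronts, not results from the chapter.

```python
def dominates(u, v):
    """u Pareto-dominates v (both objectives minimized)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def coverage(A, B):
    """Set coverage C(A, B): fraction of solutions in B dominated by
    (or equal to) at least one solution in A."""
    covered = sum(any(dominates(a, b) or a == b for a in A) for b in B)
    return covered / len(B)

A = [(1, 4), (2, 2), (4, 1)]          # hypothetical front A
B = [(2, 5), (3, 3), (5, 2), (1, 6)]  # hypothetical front B
assert coverage(A, B) == 1.0          # every point of B is covered by A
assert coverage(B, A) == 0.0          # no point of A is covered by B
```

Note that C(A, B) and C(B, A) are not complementary, which is why both directions are reported in the figure.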
5 Conclusions
References
1. Abbass HA, Sarker R (2011) The Pareto differential evolution algorithm. Int J Artif Intell Tools 11(04):531–552
2. Ali M, Siarry P, Pant M (2012) An efficient differential evolution based algorithm
for solving multi-objective optimization problems. Eur J Oper Res 217(2):404–416
3. Ausaf MF, Gao L, Li X (2015) Optimization of multi-objective integrated process
planning and scheduling problem using a priority based optimization algorithm.
Front Mech Eng 10(4):1–13
4. Babu BV, Jehan MML (2003) Differential evolution for multi-objective optimiza-
tion. In: The 2003 Congress on Evolutionary Computation, 2003, CEC 2003, vol
4, pp 2696–2703
5. Deb K, Pratap A et al (2002) A fast and elitist multiobjective genetic algorithm:
NSGA-II. IEEE Trans Evol Comput 6(2):182–197
6. Gen M, Cheng R, Lin L (2008) Network models and optimization: Multiobjective
genetic algorithm approach
7. Guo W, Yu X (2014) Non-dominated sorting differential evolution with improved
directional convergence and spread for multiobjective optimization. In: Proceedings
of the Companion Publication of the 2014 Annual Conference on Genetic and
Evolutionary Computation, pp 87–88
8. Iorio AW, Li X (2006) Incorporating directional information within a differential
evolution algorithm for multi-objective optimization. In: Conference on Genetic
and Evolutionary Computation, pp 691–698
9. Li H, Zhang Q (2006) A multiobjective differential evolution based on decompo-
sition for multiobjective optimization with variable linkages. In: Parallel Problem
Solving From Nature - PPSN Ix, International Conference, Reykjavik, Iceland,
9–13 September 2006, Proceedings, pp 583–592
10. Nayak A (2015) Multi-objective process planning and scheduling using controlled
elitist non-dominated sorting genetic algorithm. Int J Prod Res 53(6):1712–1735
11. Phanden RK, Jain A, Verma R (2013) An approach for integration of process
planning and scheduling. Int J Comput Integr Manufact 26(4):284–302
12. Qu BY, Suganthan PN, Liang JJ (2012) Differential evolution with neighborhood
mutation for multimodal optimization. IEEE Trans Evol Comput 16(5):601–614
13. Santana-Quintero LV, Coello Coello CA (2005) An algorithm based on differential
evolution for multi-objective problems. Int J Comput Intell Res 1(2):151–169
14. Schaffer JD (1985) Multiple objective optimization with vector evaluated genetic
algorithms. In: International Conference on Genetic Algorithms, pp 93–100
15. Sindhya K, Ruuska S et al (2011) A new hybrid mutation operator for multiobjec-
tive optimization with differential evolution. Soft Comput 15(10):2041–2055
16. Storn R, Price K (1997) Differential evolution – a simple and efficient heuristic for
global optimization over continuous spaces. J Glob Optim 11(4):341–359
17. Xia H, Li X, Gao L (2016) A hybrid genetic algorithm with variable neighborhood
search for dynamic integrated process planning and scheduling. Comput Ind Eng
102:99–112
18. Yu X (2010) Introduction to evolutionary algorithms. In: International Conference
on Computers and Industrial Engineering, p 1
19. Zhang W, Gen M, Jo J (2014) Hybrid sampling strategy-based multiobjective evo-
lutionary algorithm for process planning and scheduling problem. J Intell Manufact
25(5):881–897
20. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm
A New Approach for Solving Optimal Control
Problem by Using Orthogonal Function
1 Introduction
system or both contain at least one fractional order derivative term [25]. Integer order optimal control has already been well established, and a significant amount of work has been done in the field of optimal control of integer order systems. Agrawal formulated and developed a numerical scheme for the solution of FOCPs [1] in the Caputo sense. Biswas proposed a pseudo-state space representation of a fractional dynamical system, which is exploited to solve a fractional optimal control problem using a direct numerical method [21]. Sweilam et al. solved some types of fractional optimal control problems with a Hamiltonian formula using a spectral method based on Chebyshev polynomials [24]. Bernstein polynomials have been used for finding the numerical solution of FOCPs by using Lagrange multipliers [10].
Approximation by orthogonal families of basis functions is widely used in science and engineering. The main idea behind applying an orthogonal basis is the reduction of the problem under consideration to a system of algebraic equations. This is possible by truncating series of orthogonal basis functions for the solution of the problem and applying operational matrices. The orthogonal functions are classified into three main categories [23]: the first is sets of piecewise constant orthogonal functions, such as the Walsh functions and block pulse functions; the second is orthogonal polynomials, such as the Laguerre, Legendre and Chebyshev functions; and the last is sine-cosine functions. On the one hand, approximating a continuous function with piecewise constant basis functions results in a piecewise constant approximation; on the other hand, if a discontinuous function is approximated with continuous basis functions, the resulting approximation is continuous and cannot properly model the discontinuities. So neither continuous basis functions nor piecewise constant basis functions, if used alone, can efficiently model both continuity and discontinuity of phenomena at the same time. In the case that the function under approximation is not analytic, wavelet functions will be more effective.
In this paper, we propose a computational method based on the Sine-Cosine wavelet and its fractional integration and derivative operational matrices to solve the FOCP. The main idea is the reduction of the problem under consideration to a system of algebraic equations. To this end, we expand the fractional derivative of the state variable and the control variable using the Sine-Cosine wavelet with unknown coefficients.
The paper is organized as follows. In the first sections we give the definitions of fractional calculus and then present a brief review of block pulse functions and the related fractional operational matrices. In Sect. 4, we describe Sine-Cosine wavelets and their application in function approximation. In Sect. 5, the operational matrices of fractional integration and derivative for the considered wavelet are given. In Sect. 6, the proposed method is described for solving the underlying FOCP. In the last section the proposed method is applied to solve a numerical example.
with
$$f_m(t) = \begin{cases} \frac{1}{\sqrt{2}} & m = 0 \\ \cos(2m\pi t) & m = 1, 2, \cdots, l \\ \sin(2(m - l)\pi t) & m = l + 1, \cdots, 2l \end{cases} \qquad (10)$$
where $c_{n,m} = \langle f(t), \psi_{n,m} \rangle$ and $\langle \cdot, \cdot \rangle$ denotes the inner product:
$$c_{n,m} = \int_{-\infty}^{+\infty} f(t)\,\psi_{n,m}(t)\,dt. \qquad (12)$$
where $\Psi(t)$ represents the considered wavelet vector. $C$ and $\Psi(t)$ are $2^k(2l + 1) \times 1$ matrices given by:
$$C^T = [c_{00}, c_{01}, \cdots, c_{0,2l}, c_{10}, \cdots, c_{1,2l}, \cdots, c_{2^k-1,0}, \cdots, c_{2^k-1,2l}], \qquad (13)$$
$$\Psi^T = [\psi_{00}, \psi_{01}, \cdots, \psi_{0,2l}, \psi_{10}, \cdots, \psi_{1,2l}, \cdots, \psi_{2^k-1,0}, \cdots, \psi_{2^k-1,2l}]. \qquad (14)$$
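A small numerical sketch of the basis (10) and the inner product (12). The dilation-translation form $\psi_{n,m}(t) = 2^{(k+1)/2} f_m(2^k t - n)$ on $[n/2^k, (n+1)/2^k)$ is assumed here; under that assumption the family is orthonormal, which the midpoint-rule inner products below confirm.

```python
import math

def f_m(m, l, t):
    """Eq. (10): the sine-cosine basis on [0, 1)."""
    if m == 0:
        return 1 / math.sqrt(2)
    if m <= l:
        return math.cos(2 * m * math.pi * t)
    return math.sin(2 * (m - l) * math.pi * t)

def psi(n, m, k, l, t):
    """Sine-cosine wavelet (dilation-translation form assumed)."""
    if n / 2 ** k <= t < (n + 1) / 2 ** k:
        return 2 ** ((k + 1) / 2) * f_m(m, l, 2 ** k * t - n)
    return 0.0

def inner(g, h, steps=10_000):
    """Midpoint-rule inner product on [0, 1]."""
    dt = 1 / steps
    return sum(g((i + 0.5) * dt) * h((i + 0.5) * dt) for i in range(steps)) * dt

k, l = 1, 2
ip_same = inner(lambda t: psi(0, 1, k, l, t), lambda t: psi(0, 1, k, l, t))
ip_cross = inner(lambda t: psi(0, 1, k, l, t), lambda t: psi(0, 2, k, l, t))
ip_disjoint = inner(lambda t: psi(0, 1, k, l, t), lambda t: psi(1, 1, k, l, t))
assert abs(ip_same - 1) < 1e-3 and abs(ip_cross) < 1e-3 and ip_disjoint == 0.0
```

The coefficients $c_{n,m}$ of (12) are then just `inner(f, psi(...))` for an arbitrary square-integrable f.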
In this section we find the operational matrix of fractional derivative for the con-
sidered wavelet using the operational matrix of fractional integration for BPF.
For $m = 1, 2, \cdots, l$,
$$\psi_{n,m} = \frac{m'}{2^{\frac{k+1}{2}}\, 2m\pi}\Big[0, \cdots, 0,\ \psi_{n,m+l}\big(\tfrac{n(2l+1)+1}{m'}\big) - \psi_{n,m+l}\big(\tfrac{n(2l+1)}{m'}\big),\ \cdots,\ \psi_{n,m+l}\big(\tfrac{(n+1)(2l+1)}{m'}\big) - \psi_{n,m+l}\big(\tfrac{n(2l+1)+2l}{m'}\big),\ 0, \cdots, 0\Big] \times B_{m'}. \qquad (21)$$
And for $m = l + 1, \cdots, 2l$ we get
$$\psi_{n,m} = \frac{-m'}{2^{\frac{k+1}{2}}\, 2(m-l)\pi}\Big[0, \cdots, 0,\ \psi_{n,m-l}\big(\tfrac{n(2l+1)+1}{m'}\big) - \psi_{n,m-l}\big(\tfrac{n(2l+1)}{m'}\big),\ \cdots,\ \psi_{n,m-l}\big(\tfrac{(n+1)(2l+1)}{m'}\big) - \psi_{n,m-l}\big(\tfrac{n(2l+1)+2l}{m'}\big),\ 0, \cdots, 0\Big] \times B_{m'}, \qquad (22)$$
where $m' = 2^k(2l+1)$ is the number of block pulse functions.
228 A. Kheirabadi et al.
Therefore we have $\Psi(x) = \Phi_{m' \times m'} B_{m'}(x)$, where $\Phi_{m' \times m'} = \mathrm{diag}(\Phi_0, \Phi_1, \cdots, \Phi_{2^k - 1})$ and $\Phi_n$ is defined as follows (in the matrix below, $i = n(2l + 1)$):
$$\Phi_n = \begin{bmatrix}
2^{\frac{k}{2}} & \cdots & 2^{\frac{k}{2}} \\
\frac{m'}{2^{\frac{k+1}{2}}\, 2 \cdot 1\pi}\big(\psi_{n,1+l}(\tfrac{i+1}{m'}) - \psi_{n,1+l}(\tfrac{i}{m'})\big) & \cdots & \frac{m'}{2^{\frac{k+1}{2}}\, 2 \cdot 1\pi}\big(\psi_{n,1+l}(\tfrac{i+2l+1}{m'}) - \psi_{n,1+l}(\tfrac{i+2l}{m'})\big) \\
\vdots & \ddots & \vdots \\
\frac{m'}{2^{\frac{k+1}{2}}\, 2l\pi}\big(\psi_{n,2l}(\tfrac{i+1}{m'}) - \psi_{n,2l}(\tfrac{i}{m'})\big) & \cdots & \frac{m'}{2^{\frac{k+1}{2}}\, 2l\pi}\big(\psi_{n,2l}(\tfrac{i+2l+1}{m'}) - \psi_{n,2l}(\tfrac{i+2l}{m'})\big) \\
\frac{-m'}{2^{\frac{k+1}{2}}\, 2 \cdot 1\pi}\big(\psi_{n,1}(\tfrac{i+1}{m'}) - \psi_{n,1}(\tfrac{i}{m'})\big) & \cdots & \frac{-m'}{2^{\frac{k+1}{2}}\, 2 \cdot 1\pi}\big(\psi_{n,1}(\tfrac{i+2l+1}{m'}) - \psi_{n,1}(\tfrac{i+2l}{m'})\big) \\
\vdots & \ddots & \vdots \\
\frac{-m'}{2^{\frac{k+1}{2}}\, 2l\pi}\big(\psi_{n,l}(\tfrac{i+1}{m'}) - \psi_{n,l}(\tfrac{i}{m'})\big) & \cdots & \frac{-m'}{2^{\frac{k+1}{2}}\, 2l\pi}\big(\psi_{n,l}(\tfrac{i+2l+1}{m'}) - \psi_{n,l}(\tfrac{i+2l}{m'})\big)
\end{bmatrix} \qquad (23)$$
To find the operational matrix of the fractional derivative of the vector $\Psi(t)$, we first find the operational matrix of fractional integration:
$$I^\alpha \Psi(x) = I^\alpha \Phi_{m' \times m'} B_{m'}(x) = \Phi_{m' \times m'} I^\alpha B_{m'}(x) = \Phi_{m' \times m'} F^\alpha B_{m'}(x) \qquad (25)$$
$$\Rightarrow P^\alpha \Psi(x) = P^\alpha \Phi_{m' \times m'} B_{m'}(x) = \Phi_{m' \times m'} F^\alpha B_{m'}(x) \ \Rightarrow\ P^\alpha = \Phi_{m' \times m'} F^\alpha \Phi_{m' \times m'}^{-1}.$$
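For $\alpha = 1$, $F^\alpha$ reduces to the classical block pulse operational matrix of integration ($h/2$ on the diagonal, $h$ above it, with $h = 1/m'$), which gives a quick sanity check of the construction above; $m'$ and the test function below are illustrative choices.

```python
m_prime = 8                       # number of block pulse functions on [0, 1)
h = 1 / m_prime

# Classical BPF operational matrix of integration (the alpha = 1 case of F^alpha):
# h/2 on the diagonal, h above it, zeros below.
F = [[h / 2 if j == i else (h if j > i else 0.0) for j in range(m_prime)]
     for i in range(m_prime)]

# BPF coefficients (midpoint values) of f(t) = 2t; its integral is t^2.
mid = [(i + 0.5) * h for i in range(m_prime)]
f = [2 * t for t in mid]
integral = [sum(f[i] * F[i][j] for i in range(m_prime)) for j in range(m_prime)]

# Midpoint values of t^2 are matched to within the O(h^2) BPF error.
assert all(abs(integral[j] - mid[j] ** 2) <= h ** 2 for j in range(m_prime))
```

The fractional case replaces this F with the generalized matrix of Li and Sun [12]; the similarity transform $P^\alpha = \Phi F^\alpha \Phi^{-1}$ is unchanged.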
where $A$ and $B$ are constant matrices with appropriate dimensions; in the cost functional, $S$ and $Q$ are symmetric positive semi-definite matrices and $R$ is a symmetric positive definite matrix. In this section, the Sine-Cosine wavelet is used for solving the above problem. We approximate each $x_i(t)$ and $u_i(t)$ in terms of Sine-Cosine wavelets as
$$X(t) = [x_1(t), x_2(t), \cdots, x_s(t)]^T, \quad x_i(t) = \Psi^T(t)X_i = X_i^T \Psi(t), \qquad (34)$$
$$X(t) = \hat{\Psi}_s^T(t)X, \quad X = [X_1^T, X_2^T, \cdots, X_s^T], \quad \hat{\Psi}_s(t) = I_s \otimes \Psi(t), \qquad (35)$$
$$U(t) = [u_1(t), u_2(t), \cdots, u_q(t)]^T, \quad u_i(t) = \Psi^T(t)U_i = U_i^T \Psi(t), \qquad (36)$$
$$U(t) = \hat{\Psi}_q^T(t)U, \quad U = [U_1^T, U_2^T, \cdots, U_q^T], \quad \hat{\Psi}_q(t) = I_q \otimes \Psi(t). \qquad (37)$$
Equations (43) and (44) generate $2^k(2l + 1)$ sets of linear equations. These linear equations can be solved for the unknown coefficients of the vectors $X^T$ and $U^T$. Consequently, $X(t)$ and $U(t)$ can be calculated.
7 Illustrative Example
We applied the method presented in this paper to solve the following example. We want to find a control variable $u(t)$ which minimizes the quadratic performance index $J$. This problem is solved by the proposed method with $\alpha = 1$, $m = 5$ and $n = 7$; the numerical value obtained for $J$ is 0.1979, which is close to the exact solution in the case $\alpha = 1$ (0.1929).
8 Conclusion
In this paper, we derive a numerical method for fractional optimal control based on the operational matrices of fractional integration and differentiation. The procedure for constructing these matrices is summarized. An example is given to show the efficiency of the method. The obtained matrices can also be used to solve problems such as fractional optimal control with delay. Moreover, we could find these matrices using another set of orthogonal functions instead of BPFs; it seems that using a set of continuous orthogonal functions would improve the numerical results.
232 A. Kheirabadi et al.
The Sustainable Interaction Analysis of Cause
Marketing and Ethical Consumption in Electric
Business Platform: Based on Game Theory and
Simulation Analysis
1 Introduction
In recent years, the development rate of China’s e-commerce is 2–3 times of GDP
(7% 9%). China’s B2C online shopping market deals reached 609.67 billion yuan
in the second quarter of 2016. With the rise of e-commerce, the businessman
tried to introduce a cause marketing methods to get more customers focus on
their homogeneous products, for example, some businesses join the public wel-
fare plan of Taobao platform. Cause marketing dated from the public welfare
activity of “Renovation of the statue of liberty” by the cooperation of American
Express Company and Alice Island Foundation, and then it obtained the wide-
spread attention and development. Nowadays, more and more traditional cor-
porations have been successfully performed the corporate social responsibility
by the form of cause marketing. The cause marketing is not only contributing
to society, but also obtaining the business interests [1,8], for example, the cause
marketing activity of “buy a bottle of water, donate a penny” improved Nongfu
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 19
234 X. Zu and W. Yu
spring’s product sales, and Nongfu Spring raised more than 5 million yuan,
361◦ “ONE CARES ONE” also promoted sales of especially-made shoes and
expanded brand influence. Cause marketing, also known as charity Marketing or
cause-related Marketing, is a special form of donation. It depends on the pur-
chasing behavior of consumers, and donates the amount of a certain percentage
of business to the special public welfare project. Cause marketing has become
one of the important way of charity enterprises to perform social responsibility in
many countries. Cause marketing can promote sales, social responsibility image
and brand awareness [9,13]. In E-commerce platform, corporations, which suc-
cessfully continue to push the cause marketing, not only can achieve the aim of
marketing businesses, but also can effectively promote the healthy development
of Chinese social and economic. However, focusing on E-commerce situation,
businesses joined the cause marketing activities in E-commerce platform, and
didn’t get a good response of consumers.
From the practical experience of western countries, a benign interaction between enterprises and consumers is the key to successfully advancing corporate social responsibility [12]. The concept of ethical consumption holds that consumers consider not only the commercial value of a product but also the beneficial impact of their purchasing behavior on society, the environment, and so on [5], which effectively helps corporate cause marketing fulfill social responsibility better. A sustained interaction between cause marketing and ethical consumption ensures that businesses obtain more consumer approval, covering costs and promoting sales and the corporate image. According to surveys, in the U.S. market 78% of consumers are willing to buy products associated with cause marketing and 66% are willing to switch brands to support cause marketing [3], while in the Chinese market 96.6% of consumers incline toward homogeneous products from companies with a good social image [6]. However, consumers are also "rational economic men". Because of the higher perceived risk on e-commerce platforms, consumers are unwilling to bear much cost to fulfill ethical consumption when balancing benefit and cost [11], so consumers' ethical attitudes and behavior are often inconsistent.
Research on the relationship between cause marketing and ethical consumption mainly focuses on two aspects: the first is the relationship between cause marketing and ethical consumption, studied with survey and experimental methods [1,9]; the second is the cause of the inconsistency between consumers' ethical attitudes and behavior, studied with interviews and other qualitative methods [2]. However, few scholars have focused on the interaction mechanism, especially on systemic balance and dynamic evolution on e-commerce platforms. The interaction of cause marketing and ethical consumption is a kind of game behavior. This research constructs an evolutionary game model based on evolutionary game theory and systematically explores its evolution rules. The results not only enrich the existing theory of cause marketing and ethical consumption, but also provide valuable advice on sustainably promoting cause marketing and ethical consumption on e-commerce platforms.
Analysis of Cause Marketing and Ethical Consumption 235
Table 1. Payoff matrix for gaming revenue of businesses and consumers.

                                          Consumers
                                          Ethical consumption (y)   Unethical consumption (1 − y)
Businesses  Cause marketing (x)           P1 − C1 + F, V − K        −C1 + F, 0
            Non cause marketing (1 − x)   −C2, −V                   P2 − C2, 0
p^{*} = \frac{V}{2V - K}, \quad (9)

q^{*} = \frac{C_1 - F + P_2 - C_2}{P_1 + P_2}. \quad (10)
Friedman proposed that the stability of the equilibrium points of the evolution system can be obtained from a local stability analysis of the Jacobian matrix (denoted by J) [4]. Combining Eqs. (7) and (8), the Jacobian matrix is

J = \begin{pmatrix} \partial\dot{x}/\partial x & \partial\dot{x}/\partial y \\ \partial\dot{y}/\partial x & \partial\dot{y}/\partial y \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \quad (11)
and
a11 = (1 − 2x) [y(P1 + P2 ) + F − C1 − P2 + C2 ] , (12)
a12 = x(1 − x)(P1 + P2 ), (13)
a21 = y(1 − y)(2V − K), (14)
a22 = (1 − 2y) [x(2V − K) − V ] . (15)
Thus, we can get the numerical results for a11, a12, a21 and a22 at the five local equilibrium points; the results are shown in Table 2.
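Given the entries (12)–(15), each local equilibrium can be classified directly: it is an ESS when the Jacobian has a positive determinant and a negative trace. A minimal sketch with the first set of Case 2 parameters follows; note that P1 is not fixed in this excerpt, so P1 = 5 is an assumed value used only for illustration.

```python
def jacobian(x, y, P1, P2, C1, C2, F, V, K):
    """Jacobian entries (12)-(15) of the replicator system."""
    a11 = (1 - 2 * x) * (y * (P1 + P2) + F - C1 - P2 + C2)
    a12 = x * (1 - x) * (P1 + P2)
    a21 = y * (1 - y) * (2 * V - K)
    a22 = (1 - 2 * y) * (x * (2 * V - K) - V)
    return a11, a12, a21, a22

def is_ess(x, y, **params):
    """A local equilibrium is an ESS when det J > 0 and tr J < 0."""
    a11, a12, a21, a22 = jacobian(x, y, **params)
    det = a11 * a22 - a12 * a21
    tr = a11 + a22
    return det > 0 and tr < 0

# Case 2 parameters (F < C1 + P2 - C2 and V > K); P1 = 5 is an assumption.
p = dict(P1=5, P2=5, C1=4, C2=2, F=4, V=3, K=1)
stable = [pt for pt in [(0, 0), (0, 1), (1, 0), (1, 1)] if is_ess(*pt, **p)]
print(stable)  # → [(0, 0), (1, 1)]
```

The four corner equilibria are checked here; the mixed point (p*, q*) is a saddle and is never an ESS in this system.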
Table 2. The numerical results for a11, a12, a21 and a22 at the five local equilibrium points.

Table 3. The stability analysis of local equilibrium points when F > C1 + P2 − C2 and V > K.

Table 4. The stability analysis of local equilibrium points when F < C1 + P2 − C2 and V > K.

Table 5. The stability analysis of local equilibrium points when F > C1 + P2 − C2 and V < K.

Table 6. The stability analysis of local equilibrium points when F < C1 + P2 − C2 and V < K.
of income, but a strong sense of social responsibility and good support and encouragement from the electric business platform will increase businesses' additional benefits, so businesses still prefer the cause marketing strategy. The responsibility behavior of businesses and consumers will then enter a period of continuous interactive development.
Case 2: From the stability analysis of the replicated dynamic equation, (1,1) and (0,0) are ESS when F < C1 + P2 − C2 and V > K. The study substitutes F = 4, C1 = 4, P2 = 5, C2 = 2, V = 3, K = 1 and F = 1, C1 = 4, P2 = 5, C2 = 2, V = 3, K = 0.5 into the simulation model, respectively; the simulation results are shown in Figs. 2(a) and 3(a).
The simulation results confirm that there are two behavior patterns in the evolution system, namely [cause marketing, ethical consumption] and [non cause marketing, unethical consumption]. When the benefits from a good corporate brand image, the self-perception of social contribution, and commercial policy benefits make up for the profit lost to cause marketing, businesses will still choose cause marketing. Consumers, compensated for the additional cost, will still choose the ethical consumption strategy, so businesses and consumers are in a benign behavioral interaction process. When the additional benefits of cause marketing cannot make up for the lost profits, the system quickly moves in a vicious direction: businesses choose the non cause marketing strategy and consumers choose the unethical consumption strategy. The social responsibility awareness of businesses and consumers is low, and the system fails to achieve sustainable social interaction due to a lack of awareness and of support from the electric business platform.
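These trajectories follow the replicated dynamic equations implicit in (12)–(15), ẋ = x(1 − x)[y(P1 + P2) + F − C1 − P2 + C2] and ẏ = y(1 − y)[x(2V − K) − V]. A rough forward-Euler sketch with the first set of Case 2 parameters (again assuming P1 = 5, a value this excerpt does not fix) reproduces the two patterns:

```python
def simulate(x0, y0, P1=5, P2=5, C1=4, C2=2, F=4, V=3, K=1,
             dt=0.01, steps=5000):
    """Forward-Euler integration of the replicator dynamics."""
    x, y = x0, y0
    for _ in range(steps):
        dx = x * (1 - x) * (y * (P1 + P2) + F - C1 - P2 + C2)
        dy = y * (1 - y) * (x * (2 * V - K) - V)
        x += dt * dx
        y += dt * dy
    return x, y

# States starting above the mixed point evolve toward (1, 1);
# states starting below it collapse to (0, 0), as in Figs. 2(a) and 3(a).
print(simulate(0.8, 0.8))  # → close to (1, 1)
print(simulate(0.2, 0.2))  # → close to (0, 0)
```

With these parameters the mixed equilibrium is (p*, q*) = (0.6, 0.3); starting with x0 > p* and y0 > q* both brackets stay positive and the system climbs to [cause marketing, ethical consumption], while starting below both thresholds drives it to (0, 0).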
Case 3: From the stability analysis of the replicated dynamic equation, (1,0) is ESS when F > C1 + P2 − C2 and V < K. The study substitutes F = 8, C1 = 4, P2 = 5, C2 = 2, V = 3 and K = 4 into the simulation model; the simulation result is shown in Fig. 4.
The simulation results confirm that businesses remain keen on cause marketing, even though they cannot sell products, when the additional benefit exceeds the sum of the non cause marketing profit and the cause marketing cost. Consumers, as "economic men", cannot pay too much extra for ethical consumption and finally give it up. The stability of the system rests on the policy support of the electric business platform and on businesses' own economic interests; if the electronic business platform reduces its support and the economic benefits cannot make up for the cause marketing cost, the system will rapidly become unbalanced.
Case 4: From the stability analysis of the replicated dynamic equation, (0,0) is ESS when F < C1 + P2 − C2 and V < K. The study substitutes F = 1, C1 = 4, P2 = 5, C2 = 2, V = 3 and K = 4 into the simulation model; the simulation result is shown in Fig. 5.
The simulation results confirm that consumers cannot use the extra utility of ethical consumption to make up for the extra cost, so they abandon ethical consumption and gradually choose unethical consumption through game learning. Although businesses earn a certain income from cause marketing, they finally adopt the non cause marketing strategy in order to sell products for profit without the support of ethical consumption, so the system evolves in a vicious direction and rapidly becomes unbalanced.
Fig. 6. The theoretical framework for the realization of benign and sustainable interaction.
References
1. Al-Dmour H, Al-Madani S et al (2016) Factors affecting the effectiveness of cause-
related marketing campaign: moderating effect of sponsor-cause congruence. Int J
Mark Stud 8(5):114–127
2. Carrigan M, Attalla A (2001) The myth of the ethical consumer - do ethics matter
in purchase behaviour? J Consum Mark 18(7):560–578
3. Chang CT (2008) To donate or not to donate? Product characteristics and framing
effects of cause-related marketing on consumer purchase behavior. Psychol Mark
25(12):1089–1110
4. Friedman D (1991) Evolutionary games in economics. Econometrica 59(3):637–666
5. Hoffmann S, Hutter K (2012) Carrotmob as a new form of ethical consumption. The
nature of the concept and avenues for future research. J Consum Policy 35(2):215–
236
6. Huo B, Zhou Y (2014) Research on relationship of corporate social responsibility,
corporate reputation and corporate performance. J Ind. Technol Economics 1:59–
65 (in Chinese)
7. Khalifa NB, El-Azouzi R et al (2016) Evolutionary games in interacting commu-
nities. Dynamic Games Appl 1:1–26
8. Kim YJ, Lee WN (2009) Overcoming consumer skepticism in cause-related market-
ing: the effects of corporate social responsibility and donation size claim objectivity.
J Promot Manage 15(4):465–483
9. Plewa C, Conduit J, Quester PG (2015) The impact of corporate volunteering on
CSR image: a consumer perspective. J Bus Ethics 3:643–659
1 Introduction
Infrastructure construction is essential to support world economic development. In both developed and developing countries, governments have dedicated a significant share of the public budget to developing and refurbishing infrastructure [10]. However, as the global economy slows, the lack of economic resources can prevent further economic growth and the development of already planned infrastructure projects [5]. Infrastructure builders (usually public sectors) are driven to attract private investment to take part in infrastructure construction. This idea gave rise to the Public-Private Partnership (PPP) mode.
248 Y. Qiu et al.
The development and popularization of PPP bridge the gap between the need for infrastructure and the lack of budget. Worldwide, PPP has grown into one of the most important procurement mechanisms for project development.

The PPP mode holds numerous advantages, among which financial efficiency is one of the main strengths. The British government directly names PPP projects the "Private Finance Initiative", so their effect on financing is evident. The Asian Development Bank strongly promoted PPP for constructing infrastructure from the beginning of its establishment, in order to achieve value for money and promote financing efficiency [1]. Many scholars have studied the financing effects of the PPP mode, either theoretically or in practical applications. Sastoque [12] concluded that counting on the private sector's contribution to the financial issues of public projects is one of the key characteristics of PPP. Gatti indicated that PPP is an efficient financing instrument that acts through designing, structuring, and executing, playing an important role in the whole life cycle [6]. The high financing efficiency of the PPP mode allows infrastructure projects to obtain the best value for money.
However, the stakeholders of PPP need a benchmark to learn about financing efficiency quantitatively; it is a necessary prerequisite for making a sensible decision before implementation. The uncertainties implicated in PPP projects, such as lengthy durations, high transaction costs, and a lack of competition and transparency, often lead to inefficiency and ineffectiveness [2] and further drag financing efficiency down. Experts are trying to build a systematic model for the assessment of financing efficiency in PPP. However, PPP projects are multi-participant activities and multi-indicator processes, which makes this work more complicated than traditional ones, let alone making predictions and selecting the optimal scheme.
Various methods have been proposed to settle this issue. The first family of methods universally used in the assessment of PPP projects is Fuzzy Synthetic Evaluation (FSE). Verweij [14] employed FSE to analyze the satisfactory outcomes of financing in 27 PPP road constructions in the Netherlands. FSE performs well in comparing existing options but fails to propose an optimal scheme or to make predictions and simulations. The second family of methods is Analytic Hierarchy Process (AHP) analysis. Zhuang [21] applied AHP to study fiscal control in PPP city infrastructure. AHP suffers from the same limitations as FSE; besides, it is easily influenced by the subjective ratings of experts. Recently, an improved method has been applied that compares existing options, optimizes the inefficient ones, and eliminates subjective human influence: Data Envelopment Analysis (DEA). In Yu Yuanchun's study, DEA plays an important role in evaluating technology transfer efficiency in industry-university-research institutions [17]. Wang Hong investigated the efficiency of the debt risk and fiscal expenditure of local governments with an application of DEA [15]. Another strong point of DEA is that it provides target data for each indicator of a relatively inefficient project, which offers a reference for optimizing
How to Predict Financing Efficiency in Public-Private Partnerships 249
the original scheme. But it ignores the uncertainties and external influences in project implementation, which can be handled by the Monte Carlo (MC) approach.

The objective of this work is to propose a mathematical model to evaluate and predict the financing efficiency of PPP projects. We apply the DEA method to handle the difficulties caused by multiple participants and factors, and integrate the MC method to simulate the uncertainties embraced in PPP projects. It may offer new insight and an academic reference for related research.
2 Problem Statement
The motivation for the proposed method lies in the fact that the crude FSE and AHP methods are limited to giving general evaluations and selecting the optimal scheme; moreover, they are vulnerable to anthropic rating [9]. These restrictions result in a series of dilemmas in PPP projects, such as difficulty in assessing the potential influence of uncertainties and a lack of quantitative support for decision making on financing schemes. So, stakeholders in PPP projects urgently need a systematic model that offers data support for financing-strategy decisions: it should consider uncertainties, provide an optimal proposal, and be free from artificial factors.

DEA is able to achieve all the mentioned goals, except that it fails to take into account the uncertainties in the implementation and operation of PPP projects. The principle of Monte Carlo analysis is to draw stochastic values of the independent variables and then simulate and identify the possible distribution of outcomes [20]. A massive repetition of the simulation process captures the flexibility of a changing environment, which compensates for this flaw of DEA.

In this work, we propound an approach that exploits the Monte Carlo approach and integrates it with DEA. Monte Carlo serves to simulate the uncertainties in PPP projects, which allows the new method to systematically fulfill the evaluation and optimization of financing efficiency in PPP projects.
4 Model Establishment
Scholars have explored a number of avenues for calculating and optimizing productivity and operational efficiency. DEA is one of the most popular approaches in this area: it is more recent in application and draws more effective conclusions than some other methodologies, such as Stochastic Frontier Analysis (SFA) [13]. With the popularization of the PPP mode, employing DEA as a modeling method has become more and more popular in the last decade [19]. In the specific field of financing efficiency in PPP projects, however, only a few scholars have delved in, and DEA is scarcely used.

In this research, we first adopt DEA to evaluate the financing efficiency of specific PPP projects and obtain initially optimized data. Due to the limitations of theoretical mathematical models, DEA ignores the uncertainties and possible scenarios in PPP projects; therefore, Monte Carlo is designed to fill this vacancy. The integrated algorithm can be used both to give an assessment and to deliver a practically optimized scheme and its efficiency.
In the project area, the application of DEA always embodies three main parts: selecting input and output indexes, forming the decision-making unit set, and finally analyzing the calculated outcome. The financing efficiency of PPP projects involves two major areas: raising funds at low cost and using funds efficiently. In addition, franchise rights and the public service of public infrastructure are two characteristics of PPP projects. Accordingly, five indicators are selected to measure the financing efficiency of PPP projects in this study: the ratio of capital funds, the duration, and the franchise as input indexes; social influence and the turnover ratio of total capital as output indexes.

The five indicators refer to five aspects of financing efficiency in PPP projects. Their names, definitions, notations and formulas are listed in Table 1.

To make up an effective decision-making unit set, the number of decision-making units should be no less than twice the sum of the numbers of input and output indicators; otherwise, its ability to distinguish efficiency will decline [7]. So, when measuring the relative efficiency of a PPP project, the data of nine other projects are needed to compose the decision-making unit set. Then all indexes are input into the DEA model and the results can be generated. The calculating model is given in Eq. (2) [8].
Table 1. The definitions, notations, and calculation formulas of the five indexes in the DEA model. Among them, the turnover ratio of total capital Y2 evaluates management quality and the utilization efficiency of total capital: Y2 = net operating income / average total asset. Bt: excess earnings in the t-th year; m: duration of the franchise; i: discount rate.
\min\; \theta - \varepsilon\Bigl(\sum_{k=1}^{K} s_k^{-} + \sum_{l=1}^{L} s_l^{+}\Bigr)

\text{s.t.}\quad
\begin{cases}
\sum_{j=1}^{n} X_{kj}\lambda_j + s_k^{-} = \theta x_{ko}, & k = 1, 2, 3,\\
\sum_{j=1}^{n} Y_{lj}\lambda_j - s_l^{+} = y_{lo}, & l = 1, 2,\\
\sum_{j=1}^{n} \lambda_j = 1,\\
s^{-}, s^{+}, \lambda_j \ge 0, & j = 1, 2, \cdots, n.
\end{cases}
\qquad (2)
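Model (2) is a standard input-oriented, variable-returns-to-scale DEA program, so each DMU's efficiency can be computed with an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog on a toy one-input, one-output data set; the variable ordering and the value of ε are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, o, eps=1e-6):
    """Input-oriented BCC (VRS) efficiency of DMU o, per Eq. (2).

    X: (K, n) input matrix, Y: (L, n) output matrix.
    Decision vector: [theta, lambda_1..n, s^-_1..K, s^+_1..L].
    """
    K, n = X.shape
    L = Y.shape[0]
    # Objective: min theta - eps * (sum of slacks)
    c = np.concatenate(([1.0], np.zeros(n), -eps * np.ones(K + L)))
    A_eq = np.zeros((K + L + 1, 1 + n + K + L))
    b_eq = np.zeros(K + L + 1)
    for k in range(K):                      # sum_j X_kj l_j + s-_k = theta x_ko
        A_eq[k, 0] = -X[k, o]
        A_eq[k, 1:1 + n] = X[k]
        A_eq[k, 1 + n + k] = 1.0
    for l in range(L):                      # sum_j Y_lj l_j - s+_l = y_lo
        A_eq[K + l, 1:1 + n] = Y[l]
        A_eq[K + l, 1 + n + K + l] = -1.0
        b_eq[K + l] = Y[l, o]
    A_eq[K + L, 1:1 + n] = 1.0              # sum_j lambda_j = 1 (VRS)
    b_eq[K + L] = 1.0
    bounds = [(None, None)] + [(0, None)] * (n + K + L)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: DMU 0 produces the same output as DMU 1 with half the input.
X = np.array([[2.0, 4.0]])
Y = np.array([[1.0, 1.0]])
print(round(bcc_efficiency(X, Y, 0), 4))  # → 1.0
print(round(bcc_efficiency(X, Y, 1), 4))  # → 0.5
```

In the paper's setting K = 3 inputs and L = 2 outputs, and each project is evaluated against the nine reference projects forming the decision-making unit set.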
Fig. 1. The flowchart of the simulation algorithm integrating Monte Carlo with DEA
as theoretical and fixed data. During the practical development and construction of a PPP project, changes in policy, economics, the natural environment and the project itself generate flexibilities and fluctuations, which may cause a deviation from the expected optimal scheme. MC analysis is strong at simulating realistic uncertainties by randomly sampling the possible parameter sets based on their fitted distributions.
The simulation process is the vital part of Monte Carlo analysis. It consists of defining the distributions of the independent parameters, inputting randomly sampled data sets, and gathering the results to generate a fitted distribution of the dependent indicators. When integrated with the DEA model, each sampled random data set is put into the DEA model, which is run to give the efficiency indicators, until the number of simulations reaches 1000. An aggregation of all the results obtained from every simulation forms a possible financing efficiency distribution. Consequently, uncertainty is considered in the optimization. The flowchart of the simulation process of the new model is shown in Fig. 1.
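The crossover loop just described can be sketched as follows. The distribution shapes and parameters are assumptions for illustration (the case study later fits lognormal and beta shapes), and `run_dea` is a hypothetical placeholder for the DEA evaluation of one sampled data set.

```python
import random
import statistics

def run_dea(ratio_capital, duration, franchise, social, turnover):
    """Hypothetical placeholder for the DEA run on one sampled data set;
    a real implementation would solve model (2) for the sampled DMU."""
    inputs = ratio_capital + duration / 30 + franchise / 30
    outputs = social + turnover
    return min(1.0, outputs / inputs)

random.seed(42)
efficiencies = []
for _ in range(1000):                                # 1000 Monte Carlo runs
    sample = dict(
        ratio_capital=random.betavariate(4, 6),      # assumed beta fit
        duration=random.lognormvariate(1.0, 0.3),    # assumed lognormal fit
        franchise=random.lognormvariate(3.3, 0.1),   # assumed lognormal fit
        social=random.betavariate(5, 2),             # assumed beta fit
        turnover=random.betavariate(2, 5),           # assumed beta fit
    )
    efficiencies.append(run_dea(**sample))

# The aggregate of the 1000 runs forms the fitted distribution
# of financing efficiency under uncertainty.
print(round(statistics.mean(efficiencies), 3))
```

Replacing `run_dea` with a real DEA solver and the assumed distributions with ones fitted to project data yields the probability distributions of the efficiency indicators described in the case study.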
The innovative algorithm combines the data processes of MC and DEA, which makes the two methods complementary. DEA creates the raw data and algorithm for simulation in MC, and MC simulation bridges the gap between reality and the theoretical values given by DEA. With the integrated algorithm, a probability distribution of financing efficiency under potential changes in implementation can be drawn. It offers a quantitative reference for the managers of PPP projects to analyze the feasibility and risk of the optimal proposal, and then make more evidence-based and financing-efficient decisions.
5 Case Study
5.1 Project Introduction
The Australia Adelaide Water Utility project was developed to mitigate the water threat. The PPP mode was applied to ally the public and private sectors in support of more efficient, higher-quality service. The United Water Corporation (private sector) cooperated with the Water Company in South Australia (public sector) to construct the PPP project. The overall budget was 4.3 billion, including 1.6 billion of capital investment, which was raised from both main stakeholders along with other investment from banks and other social capital. During the negotiation stage, the government committed to a 27-year franchise right. The completed water utility was expected to serve 5 million citizens and generate 521 million in annual operating income.
Table 3. Comparison of initial and target data of Australia Adelaide Water Utility
Fig. 2. The hypothesized probability distributions fitted for the five independent parameters
The target values are more theoretical than feasible and practical, for in actual construction the project is surrounded by a changing environment all the time, and any uncertainty can result in a deviation from the optimal target. If we predict and simulate all possible values of the independent parameters, the possible values of the dependent parameters can be obtained; MC is designed to imitate this process. The ratio of capital funds, duration, franchise rights, social influence and turnover ratio of total capital are the five independent parameters in this algorithm. We collected data from over one hundred PPP projects so as to generate a fitted probability distribution for each parameter; the hypothesized distributions are shown in Fig. 2.
After sampling 1000 times in the Monte Carlo simulation and running the extracted data in the DEA model, the crossover process finally generates the probability distributions of pure technical efficiency θPE, scale efficiency θSE and technical efficiency θ (Fig. 3). The fitted probability curves for scale efficiency and technical efficiency are lognormal distributions, while that for pure technical efficiency is a beta distribution.

Fig. 3. The probability distributions of pure technical efficiency, scale efficiency and technical efficiency

With the optimal scheme set as the target, the expected value of pure technical efficiency reaches 0.8, an improvement of over 20% on the initial value; scale efficiency reaches 0.68, a considerable 50% improvement; and technical efficiency reaches 0.54, an 80% improvement. The blue area highlighted in the graphs reveals the most probable range of financing efficiency values, with the boundary set at half of the maximum value.
Finally, we can conclude that the practical implementation of an optimal scheme hardly reaches the relatively efficient state in the end, because of all the uncertainties and environmental changes related to PPP projects. The innovative algorithm proposed in this research offers an approach to predicting the possible outcomes and distribution of financing efficiency. It enables stakeholders to make better decisions among financing strategies and to prepare for probable consequences.
6 Conclusions
In this work, we propose an efficient and practical model for estimating and predicting financing efficiency in PPP projects. It integrates the DEA and Monte Carlo approaches to measure financing efficiency scientifically and objectively, while taking the uncertainties embodied in PPP projects into consideration. The innovative model designs a crossover run between DEA and MC, which enables the two methods to compensate for each other's weaknesses. DEA is an effective approach to measuring relative efficiency and delivers an optimal target for reaching the relatively efficient state [19]; Peter F. Wanke also applied DEA to analyze the scale efficiency of PPP projects in Brazilian ports [16]. But PPP projects are full of risks and uncertainties that cause fluctuations around the optimal target, which the intrinsic restrictions of DEA fail to take into account. A number of scholars, such as Dijun Tan and Yuzba Bahadr, have used the features of Monte Carlo to quantify and visualize volatility and flexibility [3,18]. In this manuscript, the MC method is combined with DEA to handle uncertainties through a series of simulations. The proposed algorithm offers a comprehensive method to assess and forecast the financing efficiency of PPP projects.
In the case study of the Australia Adelaide Water Utility project, a one-thousand-time simulation forms the possible probability distributions of the final pure technical efficiency, scale efficiency and technical efficiency of financing. It reveals the possible outcome of financing efficiency in this PPP project when the optimal value derived from DEA is set as the target. It shows that, although uncertainties make a perfectly efficient financing state hard to reach, the three efficiency parameters improved by 20%–80% over the initial financing scheme. This result enables the stakeholders to be fully prepared for the probable consequences.
The application of the new method integrating MC and DEA is not limited to the financing efficiency of PPP projects. It can be extended to various fields whose assessment and simulation processes involve multiple indicators and uncertainties. After corresponding changes to the input and output indexes and the fitting of their probability distributions, the innovative algorithm can be widely used in different areas.
This study proposed a new model to predict financing efficiency under the fluctuations caused by uncertainties in PPP projects, based on the integration of DEA and MC. The case study of the Australia Adelaide Water Utility project shows that this innovative method is capable of evaluating and predicting flexibilities quantitatively and visually. It also indicates that the uncertainties in PPP projects make the target values of the optimal scheme hard to reach, although in this specific project a considerable improvement in all three efficiency indicators was still realized.

Further research, such as a more comprehensive index system for DEA and the subdivision of uncertainties into different categories, is necessary to delve deeper and more practically into this topic. A more comprehensive and thorough investigation of the related factors in the DEA method will contribute to more accurate outcomes. Considering that the implementation of PPPs is affected by economic, social and environmental conditions, sorted uncertainties can facilitate a more accurate calculation. Alternatively, a series of diversified case studies needs to be carried out in the future to verify and enrich the reliability and feasibility of this new model.
Acknowledgement. The authors would like to thank the National Natural Science
Foundation of China for financially supporting this research (Grant No.: 71502011).
It is also supported by the Fundamental Funds for Humanities and Social Sciences of
Beijing Jiaotong University (Grant No.: 2015jbwj013).
258 Y. Qiu et al.
References
1. Asian Development Bank (2013) Public-private partnership operational plan 2012–2020: realizing
the vision for strategy 2020: the transformational role of public-private partnerships
in Asian Development Bank operations
2. Broadbent J, Gill J, Laughlin R (2008) Identifying and controlling risk: the problem
of uncertainty in the private finance initiative in the UK’s national health service.
Crit Perspect Account 19(1):40–78
3. Dijun T, Yixiang T (2007) Volatility modeling with conditional volatility and real-
ized volatility. In: International Conference on Management Science and Engineer-
ing Management, pp 264–272
4. Dubi A (2000) Monte Carlo applications in systems engineering. In: IEEE R&M
symposium, stochastic modeling of realistic systems: topics in reliability
& maintainability & statistics, tutorial notes
5. Franke M, John F (2011) What comes next after recession? - airline industry
scenarios and potential end games. J Air Transp Manage 17(1):19–26
6. Gatti S (2013) Project finance in theory and practice: designing, structuring, and
financing private and public projects. Academic Press, Cambridge
7. Li G, Lei Q, Yang Y (2014) The efficiency evaluation of Chinese film and televi-
sion industry listed companies based on DEA method. In: 2014 Proceedings of the
eighth international conference on management science and engineering manage-
ment. Springer, Heidelberg, pp 83–95
8. Lin Q (2013) Introduction of decision analysis. Tsinghua University Press, Beijing,
pp 45–47
9. Liu J, Yang Y (2014) Efficiency evaluation of Chinese press and publication listed
companies based on DEA model
10. Martins J, Rui CM, Cruz CO (2014) Maximizing the value for money of PPP
arrangements through flexibility: an application to airports. J Air Transp Manage
39:72–80
11. Platon V, Constantinescu A (2014) Monte carlo method in risk analysis for invest-
ment projects. Procedia Econ Finance 15:393–400
12. Sastoque LM, Arboleda CA, Ponz JL (2016) A proposal for risk allocation in social
infrastructure projects applying PPP in Colombia. Procedia Eng 145:1354–1361
13. Schøyen H, Odeck J (2013) The technical efficiency of Norwegian container ports:
a comparison to some Nordic and UK container ports using data envelopment
analysis (DEA). Marit Econ Logist 15(2):197–221
14. Verweij S (2015) Producing satisfactory outcomes in the implementation phase of
PPP infrastructure projects: a fuzzy set qualitative comparative analysis of 27 road
constructions in the Netherlands. Int J Project Manage 33(8):1877–1887
15. Wang H, Huang J, Li H (2017) Local government debt risk, fiscal expenditure
efficiency and economic growth. Springer, Singapore
16. Wanke PF, Barros CP (2015) Public-private partnerships and scale efficiency in
Brazilian ports: evidence from two-stage DEA analysis. Socio-Econ Plann Sci 51:13–22
17. Yu Y, Gu X, Chen Y (2017) Research on the technology transfer efficiency eval-
uation in industry-university-research institution collaborative innovation and its
affecting factors based on the two-stage DEA model. In: 2017 Proceedings of the
tenth international conference on management science and engineering manage-
ment. Springer, Singapore, pp 237–249
1 Introduction
Modeling alternative cost behavior based on deliberate managerial decisions is an
important issue in cost accounting [1]. The traditional behavior model assumes
that costs change linearly and proportionately with the activity level
[30]. Many studies have documented asymmetric behavior between costs and
resources from various perspectives [15,22–25,36]. The variance between the response
of costs to activity increases and their response to activity decreases is called
"asymmetry", appearing as sticky or anti-sticky costs [7]. Research on asymmetric cost
behavior also investigates the consequences of cost stickiness for actual and forecast
earnings [28].
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_21
The Moderating Effects of Capacity Utilization 261
Managers tend to save capacity when capacity is strained and demand falls,
and to add capacity when demand grows [7,13,16]. Optimistic expectations
lead managers to retain idle capacity in response to sales declines, and these
predictions produce sticky cost behavior [17]. Asymmetric slack maintenance
does not automatically lead to cost stickiness; however, if managers retain resources in
response to a sales decrease, stickiness is significant conditional on a prior sales
increase and anti-stickiness on a prior sales decrease [11]. These motivations lead
us to the question: how does capacity utilization moderate the relationship between
labor costs and capacity changes? Capacity utilization is a measurement of
the percentage of a firm's productive capacity, determined by unused resources
relative to total resources [31].
Unemployment fluctuations may make labor costs sticky in structural
parameters, because the statistics of unemployment are replicated in
sticky labor behavior as well [14]. Empirical implications create a countercyclical
wedge between the real wage and the marginal rate [19]. Moreover, labor
is neither fully flexible nor fully inflexible, because it is necessary to manage the cost
structure [6]. A moderating model also has guiding significance for the application
of big data to management decisions [37].
Prior studies identified that capacity changes asymmetrically impact manufacturing
labor costs, selling and administrative costs, and total costs [5,13].
Banker et al. [9] examined the moderating effect of prior-period sales changes on the
relationship between sales changes and cost behavior. Yet no research available in
the literature examines the impact of capacity utilization on asymmetric labor cost
behavior.
This paper aims to analyze the impact of capacity utilization on the asymmetric
behavior between labor costs and employment capacity, and to explain
scientifically the differences without and within the interaction effects.
3 Research Methodology
where MLCi,t is the manufacturing labor cost for firm i at time t, modeled as a
nonlinear function of the independent variables with parameters ϕ1, ϕ2, ϕ3, and ϕ4.
MEqi,t is the manufacturing employment capacity of employees in hours. DECi,t is an
indicator set to 1 if MEqi,t < MEqi,t−1, and set to 0 otherwise. ϕ0 is the intercept,
capturing the labor cost change unassociated with employment capacity changes. ϕ1
estimates the association between manufacturing labor cost and
employment capacity increases. ϕ2 is the asymmetry-measurement parameter,
estimating how the association between manufacturing labor cost and
employment capacity changes differs between increases and decreases. ϕ3 estimates
the association between labor cost and capacity utilization changes. ϕ4 is the
moderation parameter, estimating how capacity utilization moderates the
association between manufacturing employment capacity and labor cost behavior.
εi,t is an error term for variability in the labor cost change estimation
for firm i at time t.
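As a concrete sketch, a model of this form can be estimated by ordinary least squares on log-changes. The synthetic data, the coefficient values, and the use of the conventional log-ratio ln(X_t / X_{t−1}) for the change variables are illustrative assumptions, not the paper's dataset:

```python
import numpy as np

# synthetic firm-year panel of log-changes (the paper's data is not reproduced here)
rng = np.random.default_rng(42)
n = 5000
d_meq = rng.normal(0.0, 0.10, n)        # ln(MEq_t / MEq_{t-1})
d_qu = rng.normal(0.0, 0.05, n)         # ln(qu_t / qu_{t-1})
dec = (d_meq < 0.0).astype(float)       # DEC = 1 when employment capacity fell

# simulate sticky labor costs with assumed parameters (0.05, 0.8, -0.4, 0.2, 0.1)
d_mlc = (0.05 + 0.8 * d_meq - 0.4 * dec * d_meq
         + 0.2 * d_qu + 0.1 * dec * d_meq * d_qu
         + rng.normal(0.0, 0.01, n))

# design matrix [1, dMEq, DEC*dMEq, dqu, DEC*dMEq*dqu] and OLS fit
X = np.column_stack([np.ones(n), d_meq, dec * d_meq, d_qu, dec * d_meq * d_qu])
phi, *_ = np.linalg.lstsq(X, d_mlc, rcond=None)
# phi[2] < 0 signals stickiness; phi[4] > 0 signals the moderating effect
```

Recovering a negative ϕ2 and a positive ϕ4 from such a fit corresponds to the stickiness and moderation effects the model is designed to detect.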
The capacity utilization was computed from labor-hour measures of the
unused and total capacity for each period, as in Eq. (2) below:

qu = 1 − (unused capacity / total capacity),   (2)

where qu is the employees' capacity utilization for each activity; unused capacity,
measured in hours, cannot exceed effective capacity; and total capacity is the maximum
hours of output for which the operation and facility are designed.
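For instance, Eq. (2) amounts to the following one-liner (the hour figures below are made up for illustration):

```python
def capacity_utilization(unused_hours: float, total_hours: float) -> float:
    """qu = 1 - unused/total capacity, as in Eq. (2)."""
    return 1.0 - unused_hours / total_hours

# e.g. 200 unused hours out of 1,000 designed hours gives qu = 0.8
qu = capacity_utilization(200.0, 1000.0)
```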
ln[(S&ALCi,t − S&ALCi,t−1)/S&ALCi,t] = θ0 + θ1 ln[(SEqi,t − SEqi,t−1)/SEqi,t]
    + θ2 DECi,t ln[(SEqi,t − SEqi,t−1)/SEqi,t]
    + θ3 ln[(qui,t − qui,t−1)/qui,t]
    + θ4 DECi,t ln[(SEqi,t − SEqi,t−1)/SEqi,t] × ln[(qui,t − qui,t−1)/qui,t] + ωi,t,   (3)
where S&ALCi,t is the selling and administrative labor cost for firm i at time t,
SEqi,t is the selling and administrative employment capacity of employees in hours, and all
other variables are defined as previously.
264 A.A. Karrar et al.
ln[(TLCi,t − TLCi,t−1)/TLCi,t] = γ0 + γ1 ln[(TEqi,t − TEqi,t−1)/TEqi,t]
    + γ2 DECi,t ln[(TEqi,t − TEqi,t−1)/TEqi,t]
    + γ3 ln[(qui,t − qui,t−1)/qui,t]
    + γ4 DECi,t ln[(TEqi,t − TEqi,t−1)/TEqi,t] × ln[(qui,t − qui,t−1)/qui,t] + αi,t,   (4)
where TLCi,t is the total labor cost for firm i at time t, TEqi,t is the total
employment capacity of employees in hours, including all of the activities, and the
remaining variables are defined as previously.
These models develop the concept of asymmetric labor costs with respect to the
employment capacity structure (high utilization). The coefficients ϕ1, θ1, and γ1
measure the average percentage increase in labor costs for a one percent increase in
employment capacity, whereas the sums of coefficients (ϕ1 + ϕ2), (θ1 + θ2), and
(γ1 + γ2) in the three models measure the average percentage decrease in labor costs
for a one percent decrease in employment capacity.
Variable/Calculation | Description
ln[(MLCi,t − MLCi,t−1)/MLCi,t] | Log-change in manufacturing labor costs: all labor costs of the manufacturing process and its supporting activities, payments of industrial activity
ln[(S&ALCi,t − S&ALCi,t−1)/S&ALCi,t] | Log-change in selling & administrative labor costs: all labor costs of the selling and administrative process, payments of selling and administrative activity
ln[(TLCi,t − TLCi,t−1)/TLCi,t] | Log-change in total labor costs: all labor costs of manufacturing, supporting, selling, and administrative activities
ln[(MEqi,t − MEqi,t−1)/MEqi,t] | Log-change in manufacturing employment capacity: total hours of employees in manufacturing and supporting activity
ln[(SEqi,t − SEqi,t−1)/SEqi,t] | Log-change in selling employment capacity: total hours of employees in selling, general, and administrative activity
ln[(TEqi,t − TEqi,t−1)/TEqi,t] | Log-change in total employment capacity: total hours of employees in all activities
ln[(qui,t − qui,t−1)/qui,t] | Log-change in capacity utilization: the percentage rate of each kind of actual capacity to design capacity
Manufacturing labor cost is collected from manufacturing, engineering & services, and
quality control activities. Selling and administrative labor cost is collected from
marketing and administration activities. Practical employees' capacity is collected from a
dataset of the planning department. Unused employees' capacity is calculated from the
difference between actual and practical employees' capacity. Actual employees' capacity
is calculated by dividing labor costs by payment rates.
In the current study, we estimate three models for each hypothesis: manufacturing
labor costs, selling and administrative labor costs, and total labor costs
(Tables 2, 3 and 4). For all of these categories, the estimates are significant
and support our hypotheses. The evidence shows that labor cost behavior is
sticky on a prior capacity decrease, and indicates significant anti-sticky behavior
on a prior capacity decrease with the interactive effects of capacity utilization
(ϕ2 < 0 and ϕ4 > 0, and likewise across all models).
In the regression analysis, we find sticky behavior between labor costs and
capacity changes for all of the labor cost categories, though to different degrees:
the estimates indicate that ϕ1 is positive while ϕ2 is negative and significant (see
model (1) in each of Tables 2, 3 and 4). The results show that manufacturing, selling
and administrative, and total labor costs are sticky on average, by the magnitudes
of the prior employment capacity increases and decreases.
Table 2, panel A presents model (1): the manufacturing labor cost response to a
capacity increase is statistically greater than the response to a capacity decrease
(ϕ1 > ϕ1 + ϕ2). The coefficient is negative (−0.56%, SE = 0.18) and significantly
different from zero at the 1% level (t-statistic −3.176). On average, manufacturing
labor costs increase 0.74% per 1% increase in employment capacity (ϕ1) and decrease
by 0.18% per 1% decrease in employment capacity (ϕ1 + ϕ2). The adjusted R2 is 0.71
and the model is significant at the p < 0.001 level. This means the behavior between
employment capacity change and manufacturing labor cost is sticky (H1a). Likewise,
the selling and administrative labor cost response to a capacity increase is
statistically greater than the response to a capacity decrease (θ1 > θ1 + θ2).
The coefficient is negative and significant (−0.40%, SE = 0.12, t-statistic
−3.25). On average, the selling and administrative labor costs increase 0.71%
Table 3. Validation test of the sticky behavior: nonlinear analysis of the moderation
test among employment capacity, capacity utilization, and selling and administrative
labor costs change

Panel A: Regression analysis, direct effects (model 1). Dependent variable = selling
and administrative labor cost (S&ALC)

Variable | Parameter | Estimate | SE | Significance (t-statistic)
Intercept (?) | θ0 | 0.13 | 0.08 | 0.09 (1.70)
ln[(SEqi,t − SEqi,t−1)/SEqi,t] (+) | θ1 | 0.71 | 0.14 | 0.000 (5.19)
Asymmetric measure: DECi,t ln[(SEqi,t − SEqi,t−1)/SEqi,t] (−) | θ2 | −0.40 | 0.12 | 0.001 (−3.25)
ln[(qui,t − qui,t−1)/qui,t] (+) | θ3 | 0.16 | 0.04 | 0.093 (4.30)
Adjusted R2 = 0.704; F-value = 475.992; significance level: 0.000

Panel B: Moderation analysis, interactive effects of capacity utilization (model 2)

Variable | Parameter | Estimate | SE | Significance (t-statistic)
Intercept (?) | θ0 | 0.40 | 0.33 | 0.224 (1.22)
ln[(SEqi,t − SEqi,t−1)/SEqi,t] (+) | θ1 | 0.74 | 0.10 | 0.000 (7.11)
Asymmetric behavior: DECi,t ln[(SEqi,t − SEqi,t−1)/SEqi,t] (−) | θ2 | −0.19 | 0.05 | 0.015 (−3.67)
ln[(qui,t − qui,t−1)/qui,t] (+) | θ3 | 0.70 | 0.25 | 0.042 (2.79)
DECi,t ln[(SEqi,t − SEqi,t−1)/SEqi,t] × ln[(qui,t − qui,t−1)/qui,t] (+) | θ4 | 0.28 | 0.04 | 0.000 (6.81)
Adjusted R2 = 0.725; F-value = 395.823; significance level: 0.000

Note: t-statistics in parentheses; significance is indicated at the 1%, 5%, and 10% levels.
per 1% increase in employment capacity (θ1) and decrease by 0.31% per 1%
decrease in employment capacity (θ1 + θ2); thus selling and administrative
labor costs are, as expected, sticky (H2a). The adjusted R2 is 0.70 with an F-value of
475.99, and the model is significant at the p < 0.001 level (presented in Table 3, panel A).
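The arithmetic behind these percentages can be checked directly from the reported coefficients (the values below are taken from the estimates quoted in the text):

```python
# coefficient estimates reported above
phi1, phi2 = 0.74, -0.56        # manufacturing labor costs (Table 2, panel A)
theta1, theta2 = 0.71, -0.40    # selling & administrative labor costs (Table 3, panel A)

mfg_up = phi1                   # % cost change for a 1% capacity increase
mfg_down = phi1 + phi2          # % cost change for a 1% capacity decrease: 0.18
sa_up = theta1                  # 0.71
sa_down = theta1 + theta2       # 0.31
sticky = mfg_down < mfg_up and sa_down < sa_up   # asymmetry in both categories
```

A smaller response to decreases than to increases is exactly the stickiness the hypotheses H1a and H2a assert.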
In the extension analysis, we estimate interactive models for each of the main
components of labor costs (manufacturing labor, selling and administrative labor,
and total labor) to extend the literature on asymmetric cost behavior. The estimates
indicate significant stickiness conditional on a prior capacity decrease and
significant anti-stickiness conditional on a prior capacity decrease with the
moderating effects of capacity utilization change (ϕ2 < 0 and ϕ4 > 0, respectively),
suggesting that capacity utilization changes are related to the effects of employment
capacity changes on labor cost behavior. These findings support H1b, H2b,
and H3b.
A moderation test was conducted to examine the interacting effect of prior
sales changes on the relation between sales changes and asymmetric cost behavior [9].
The labor costs exhibit significant stickiness without the interactive effects of
capacity utilization (ϕ2 = −0.22%, SE = 0.34, t-statistic −0.65; θ2 = −0.19%,
SE = 0.05, t-statistic −3.67; γ2 = −0.48%, SE = 0.06, t-statistic −7.51,
respectively), but reveal the opposite pattern of significant anti-stickiness within
the interactive effects of capacity utilization (ϕ4 = 0.11%, SE = 0.02, t-statistic
6.05; θ4 = 0.28%, SE = 0.04, t-statistic 6.81; γ4 = 0.13%, SE = 0.06, t-statistic
2.16). These results document that labor costs exemplify a broader pattern of
asymmetric cost behavior, which extends to all the major components of labor costs
when a physical input quantity (employment capacity) is used as the driver.
In addition, Banker et al. [9] indicated that the cost response to an activity
increase is greater than the response to an activity decrease within the moderation
effects: their estimated parameter for a total employee cost increase was 0.62,
stronger than 0.42 within the interactive effects. Our findings likewise indicate that
the asymmetry-measure coefficients are smaller in magnitude within the interactive
effects (ϕ2 = −0.56, θ2 = −0.40, γ2 = −0.58 versus ϕ2 = −0.22, θ2 = −0.19,
γ2 = −0.48, respectively), and capacity utilization change was observed to strengthen
the relationship between capacity changes and labor cost behavior (ΔR2 = 0.017,
ΔR2 = 0.021, ΔR2 = 0.009; p < 0.001). The results also underscore the added value
of the interactive analysis of capacity utilization: the moderation models for the
categories of labor costs are important for drawing accurate implications about the
nature of cost behavior, much as Banker et al. [9] considered the moderation of their
two-period analysis value-added for the theory of asymmetric cost behavior.
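An incremental-fit comparison of this kind (the ΔR2 from adding the interaction term) can be reproduced on any dataset by fitting the model with and without the θ4 regressor; everything below is synthetic and illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
d_se = rng.normal(0.0, 0.10, n)           # log-change in employment capacity
d_qu = rng.normal(0.0, 0.05, n)           # log-change in capacity utilization
dec = (d_se < 0.0).astype(float)
# assumed true coefficients, including a nonzero interaction effect
y = (0.7 * d_se - 0.2 * dec * d_se + 0.15 * d_qu
     + 0.3 * dec * d_se * d_qu + rng.normal(0.0, 0.02, n))

def r2(X, y):
    """In-sample R-squared of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

base = np.column_stack([np.ones(n), d_se, dec * d_se, d_qu])       # no theta4
full = np.column_stack([base, dec * d_se * d_qu])                  # with theta4
delta_r2 = r2(full, y) - r2(base, y)      # incremental explanatory power of theta4
```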
Table 4. Validation test of the sticky behavior: nonlinear analysis of the moderation
test among employment capacity, capacity utilization, and total labor costs change

DECi,t ln[(TEqi,t − TEqi,t−1)/TEqi,t] × ln[(qui,t − qui,t−1)/qui,t] (+) | γ4 | (2.16)
Adjusted R2 = 0.100; F-value = 17.084; significance level: 0.000
Note: t-statistics in parentheses; significance is indicated at the 1%, 5%, and 10% levels.
4.3 Discussion
This research analyzed the moderating effects of capacity utilization changes
as they relate to asymmetric cost behavior at manufacturing-sector firms in Iraq.
Until recently, most studies investigated why the change in labor cost behavior
with activity is asymmetric [3,8,18,20,29]; in particular, Banker et al. [9]
studied moderating effects but ignored the important dimension of capacity
utilization and focused on a multiple-period analysis of the same variables. To
overcome these issues, this research has presented an interactive model based on data
collected for multiple, different variables. An effective maintenance management
system, such as one displaying surface temperature, is also necessary to reduce
capacity losses [32].
From the variables used in the theoretical research model, three nonlinear
relationships were proposed and tested. The results show that capacity utilization
changes can affect asymmetric cost behavior. Moreover, labor costs and their
categories are associated with employment capacity changes to different degrees,
with significant stickiness conditional on a prior capacity decrease and
significant anti-stickiness conditional on the interaction of capacity decreases
with capacity utilization increases. Finally, capacity utilization can decrease the
degree of the sticky measure. The results of the nonlinear relationship tests
confirm and align with the existing literature [1,3,5,9].
Anderson et al. [1] documented that the behavior between cost and resource
changes is asymmetric, because costs respond to resource increases more (or less)
than to resource decreases, which rejected the traditional model of fixed and
variable changes. Cost behavior is sticky when prior sales increase, whereas it is
anti-sticky when prior sales decrease; costs are also anti-sticky when capacity
utilization is unusually low [9]. On the other hand, Azeez et al. [5] examined labor
cost behavior under increasing and decreasing employment capacity changes,
establishing asymmetric cost behavior using physical output data. Finally, the
results show that capacity utilization is able to moderate the relationship between
capacity change and labor cost behavior, and suggest new avenues of exploration
for future studies.
5 Conclusion
The significance of the relationship between cost behavior and resource changes has
been an extension of the asymmetric cost behavior literature. The asymmetry
phenomenon is a line of thinking created by [1], but it is still under discussion
because those authors did not use many drivers: their findings used only sales
revenue changes and ignored physical output data, for two reasons: physical data
is often unavailable, and sales revenue is typically a more appropriate empirical
measure of activity than physical data [10]. The current study provides an empirical
examination that adds a new dimension, explaining how capacity utilization change
moderates the relationship between capacity changes and labor cost behavior. In
addition, our findings show a complex underlying design of asymmetric cost behavior
that combines two roles: labor cost behavior is sticky on a prior capacity decrease
References
1. Anderson MC, Banker RD, Janakiraman SN (2003) Are selling, general, and
administrative costs “sticky”? J Account Res 41(1):47–63
2. Anderson MC, Lee JH, Mashruwala R (2015) Cost stickiness and cost inertia:
a two-driver model of cost behavior. Social Science Electronic Publishing SSRN
2599108
3. Anderson SW, Lanen WN (2007) Understanding cost management: what can we
learn from the evidence on ‘sticky costs’ ? SSRN Electron J SSRN 975135
4. Argilés JMA, García-Blandón J (2007) Cost stickiness revisited: empirical applica-
tion for farms. Revista Española de Financiación y Contabilidad 38(187):579–605
5. Azeez KA, DongPing H, Mabula JB et al (2016) Using capacity utilization to
measure asymmetric labor cost behavior and capacity expansion. In: Management
science and engineering (ICMSE)
6. Babecky J, Caju PD et al (2012) How do European firms adjust their labour costs
when nominal wages are rigid? Labour Econ 19(5):792–801
7. Balakrishnan R, Labro E, Soderstrom NS (2014) Cost structure and sticky costs.
J Manage Account Res 26(2):91–116
8. Banker RD (2006) Labor market characteristics and cross-country differences in
cost stickiness. SSRN Electron J
9. Banker RD (2014) The moderating effect of prior sales changes on asymmetric cost
behavior. J Manage Account Res 26:221–242
10. Banker RD, Byzalov D (2014) Asymmetric cost behavior. J Manage Account Res
26(2):43–79
11. Banker RD, Basu S et al (2016) The confounding effect of cost stickiness on con-
servatism estimates. J Account Econ 61(1):203–220
12. Bugeja M, Lu M, Shan Y (2015) Cost stickiness in Australia: characteristics and
determinants. Aust Account Rev 25(3):248–261
13. Cannon JN (2014) Determinants of ‘sticky costs’: an analysis of cost behavior using
united states air transportation industry data. Account Rev 89(5):1645
14. Casares M, Moreno A, Vázquez J (2009) Wage stickiness and unemployment fluc-
tuations: an alternative approach. Series 3(3):395–422
15. Chen CX, Hai LU, Sougiannis T (2012) The agency problem, corporate gover-
nance, and the asymmetrical behavior of selling, general, and administrative costs.
Contemp Account Res 29(1):252–282
16. Chen JV, Kama I, Lehavy R (2015) Management expectations and asymmetric
cost behavior. Technical report, Working Paper, University of Illinois at Chicago,
Tel Aviv University, and University of Michigan
17. Cohen S (2015) The sticky cost phenomenon at the local government level: empir-
ical evidence from Greece. Social Science Electronic Publishing SSRN 2575530
18. Dalla Via N, Perego P (2015) Sticky cost behaviour: evidence from small and
medium sized companies. Account Finance 54(3):753–778
19. Dicecio R (2009) Sticky wages and sectoral labor comovement. J Econ Dyn Control
33(3):538–553
20. Gu Z (2016) The political economy of labor cost behavior: evidence from China.
Social Science Electronic Publishing SSRN 2786533
21. Hamermesh DS, Pfann GA (1996) Adjustment costs in factor demand. J Econ Lit
34(3):1264–1292
22. Ibrahim AEA (2015) Economic growth and cost stickiness: evidence from Egypt.
J Financ Report Account 13(1):119–140
23. Jang Y (2016) Asymmetric cost behavior and value creation in M&A deals. Social
Science Electronic Publishing SSRN 2824132
24. Kama I, Dan W (2010) Do managers’ deliberate decisions induce sticky costs?
SSRN Electron J 112(1):39–47
25. Kama I, Dan W (2013) Do earnings targets and managerial incentives affect sticky
costs? J Account Res 51(1):201–224
26. Kee RC (2003) Operational planning and control with an activity-based costing
system. Adv Manage Account 11:59–84
27. Liao Y, Shen W, Lev B (2017) Asymmetric information effect on transshipment
reporting strategy
28. Malik M (2012) A review and synthesis of ‘cost stickiness’ literature. SSRN Elec-
tron J SSRN 2276760
29. Mohammad MN, Masoumeh N (2011) Evidence of cost behavior in Iranian firms.
In: 2011 international conference on advancements in information technology, vol
20, pp 254–258
30. Nematollahi M (2013) Investigation the relation between costs and revenues in
Iranian firms. N Y Sci J 6(11):52–57
31. Nyaoga RB, Wang M, Magutu PO (2015) Does capacity utilization mediate the
relationship between operations constraint management and value chain perfor-
mance of tea processing firms? evidence from Kenya. Int Strateg Manage Rev
3(1–2):81–95
32. Ramirez IS, Munoz CQG, Marquez FPG (2017) A condition monitoring system
for blades of wind turbine maintenance management. In: Proceedings of the tenth
international conference on management science and engineering management, pp
3–11
33. Teece DJ, Pisano G, Shuen A (2009) Dynamic capabilities and strategic manage-
ment. Oxford University Press, New York
34. Von Hippel E (1994) “Sticky Information” and the locus of problem solving: impli-
cations for innovation. Manage Sci 40(4):429–439
35. Watts T, Mcnair CJ et al (2009) Structural limits of capacity and implications for
visibility. J Account Organ Change 5(2):294–312
36. Weidenmier ML, Subramaniam C (2003) Additional evidence on the sticky behav-
ior of costs. Social Science Electronic Publishing
37. Yang A, Lan X et al (2017) An empirical study on the prisoners’ dilemma of
management decision using big data. Springer, Singapore
38. Zanella F, Oyelere P, Hossain S (2015) Are costs really sticky? Evidence from
publicly listed companies in the UAE. Appl Econ 47(60):6519–6528
39. Zhuang ZY, Chang SC (2015) Deciding product mix based on time-driven activity-
based costing by mixed integer programming. J Intell Manuf 4:1–16
The Empirical Analysis of the Impact of Technical Innovation on Manufacturing Upgrading-Based on Subdivision Industry of China
1 Introduction
The manufacturing industry reflects a country's productivity level and presents its
national competitiveness; it is the cornerstone of the realization of social progress
and national prosperity. With the advantages of the reform and opening-up policy,
the labor force, and natural resources among other factors, China's manufacturing
industry has created a miracle of continued economic growth. In 2010, China's
manufacturing value added exceeded that of the United States, making China a
manufacturing country worthy of the name, as shown in Fig. 1.
Since the financial crisis in 2008, countries all over the world have paid more
and more attention to the development of the real economy. Therefore,
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_22
the development of the manufacturing industry and the expansion of exports are
key to the transformation of the world economic structure. The developed countries
have taken measures to revitalize their manufacturing sectors. For instance,
the USA hopes that "re-industrialization" will once again open a gap with the
developing countries, further promote its economic and social development, and
enhance its competitiveness in international manufacturing. Germany issued
"Industry 4.0" in 2013, proposing to establish a cyber-physical system network
and to make itself the creator and supplier of advanced intelligent manufacturing
technology; this provides opportunities for technological innovation in China's
implementation of "Made in China 2025". At the same time, some developing
countries are taking advantage of the new round of international industrial transfer
to promote their industrialization, thereby intensifying the competitive environment
of China's manufacturing.
With China’s rising labor costs and resource depletion, the advantage of
traditional manufacturing relying on cheap labor and resources is gradually lost.
In a word, we must rely on technological innovation to speed up the restructuring
of China’s manufacturing and the transformation of development mode. But how
does technological innovation affect the manufacturing upgrading of different
factors intensive industrial structure? This is an urgent problem in the process
of China’s industrialization. In view of this, based on the status of manufacturing
upgrading and the thinking of upgrading power, this paper focuses on the effect
of technical innovation on manufacturing upgrading.
2 Literature Review
276 D. Jiang and Y. Yuan
Foreign scholars have discussed technology innovation and the evolution of
industrial structure in depth, but mostly at the whole-industry level. Guan [4]
analyzed the relationship between technological innovation and industrial
competitiveness in terms of R&D personnel's learning and knowledge acquisition
and research and development ability, and the research showed that technology
innovation is the key way to promote the competitiveness of an industry.
Castellacci [2] studied the relationship between the external innovation environment
and industrial competitiveness; the results show that market-oriented economic
policies and institutional arrangements directly affect the innovation mode and play
a role in industrial competitiveness. Cohen [3] considered the Schumpeterian
hypotheses relating innovation to market structure and firm size, and examined in
more depth the role of firm characteristics and industry-level variables, broadly
characterized as reflecting demand, technological opportunity, and appropriability
conditions, in affecting firms' innovative activity and performance. Adak [1] focused
on the influence of technological progress and innovation on the Turkish economy,
and found a significant effect of technological progress and innovation on economic
growth. Hong et al. [5] found that government grants exert a negative influence on
the innovation efficiency of high-tech industries, whereas the impact of private R&D
funding is significant and positive. These studies, however, do not discuss the
problem of upgrading the manufacturing structure in detail.
Domestic scholars focus on the mechanism and path by which technological
innovation upgrades the manufacturing industry. Sun and Ye [6] analyzed the role
and mechanism of innovation in the transformation and upgrading of manufacturing
based on the connotation of the innovation drive, and considered that innovation
gives impetus to manufacturing along three dimensions: power, elements, and
competition. Zhao and Qiao [8] studied the upgrading mode of Shanghai's
manufacturing industry and pointed out that Shanghai needed to promote
technological improvement and innovation in traditional manufacturing, create
growth industries through the industrialization of high and new technology, and
push the dynamic integration of producer services with advanced manufacturing
technology, all strengthened by policy guidance, so as to realize comprehensive
manufacturing upgrading.
As discussed above, the existing literature on the influence of technological innovation on manufacturing upgrading is mostly based on regional classification; there is less research at the level of the manufacturing industry itself, so it fails to reveal the impact of technological innovation on internal structural change within manufacturing. Based on a panel data model, this paper makes an empirical analysis of the differences in the impact of scientific and technical personnel, internal R&D expenditure, sales revenue of new products, the number of effective invention patents, expenditure on technical renovation, and main business profit on the upgrading of labor-intensive, capital-intensive, and technology-intensive manufacturing industries. By analyzing the influence of technological innovation on these different factor-intensive industries, the analysis can guide China in formulating differentiated incentive policies for technological innovation, avoiding ineffective policy, and letting technological innovation better play its role in upgrading the manufacturing structure. It has certain practical significance.
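The panel regression described above can be sketched as a log-linear fixed-effects model. The snippet below is a minimal illustration on synthetic data; the variable names (`log_rd` for internal R&D expenditure, `log_st` for scientific and technical personnel) are hypothetical stand-ins for the paper's indicators, not its actual dataset.

```python
import numpy as np

# Hypothetical sketch of the log-linear fixed-effects panel regression
# described above; variable names and data are illustrative, not the paper's.
rng = np.random.default_rng(1)
n_ind, n_years = 27, 10
industry = np.repeat(np.arange(n_ind), n_years)
log_rd = rng.normal(3, 1, n_ind * n_years)       # ln(internal R&D expenditure)
log_st = rng.normal(2, 1, n_ind * n_years)       # ln(sci-tech personnel)
alpha = rng.normal(0, 0.5, n_ind)                # industry fixed effects
log_y = (alpha[industry] + 0.4 * log_rd + 0.2 * log_st
         + rng.normal(0, 0.1, n_ind * n_years))  # ln(main business profit)

# Absorb the fixed effects with industry dummies, then run least squares
D = (industry[:, None] == np.arange(n_ind)).astype(float)
X = np.column_stack([D, log_rd, log_st])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
print(beta[-2], beta[-1])   # recovered elasticities, close to 0.4 and 0.2
```

With enough observations per industry, the dummy-variable least squares recovers the assumed elasticities of profit with respect to the two innovation inputs.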
The Empirical Analysis of the Impact of Technical Innovation 277
3 Theoretical Basis
According to the classification standards of the National Bureau of Statistics,
the manufacturing industry is divided into 31 categories, due to the lack of data
during some years, this paper chooses 27 Subdivision manufacturing industry of
national above-scale industrial as the research object. Referring to the classifica-
tion method of the factor intensity and Wang and Dong’ classification method
[7] based on the structure of the manufacturing, the 27 Subdivision industry of
China’s manufacturing industry is divided into labor-intensive industry, capital
intensive industry and technology intensive industry. The specific classification
is shown in Table 1:
4 Empirical Analysis
4.1 Index Selection
The data used in this paper come from the China Statistical Yearbook, the China Industrial Statistics Yearbook, the China Labor Statistics Yearbook, and the Statistical Yearbook of Scientific and Technological Activities of Industrial Enterprises; some of the data are compiled from the original data of these statistical yearbooks.
Among them, Y_it represents the main business profit of manufacturing industry i in period t; K_it and L_it represent the fixed capital investment and the amount of labor of industry i in period t, respectively; RD_it, ST_it, EP_it, PS_it and
Before testing the data, both sides of Eq. (1) are log-transformed into a linear regression model.
The correlation matrix of the variables is:

         y          x1         x2         x3         x4         x5         x6         x7
y        1
x1       0.671***   1
x2       0.713***   0.644***   1
x3       0.707***   0.685***   0.822***   1
x4       0.739***   0.747***   0.758***   0.973***   1
x5       0.751***   0.724***   0.692***   0.883***   0.939***   1
x6       0.716***   0.585***   0.651***   0.809***   0.814***   0.841***   1
x7       0.600***   0.760***   0.523***   0.764***   0.845***   0.813***   0.575***   1
and apply system GMM estimation to eliminate endogeneity for further analysis. The regression models are as follows: Model 1 is the panel data model, and Model 2 is the dynamic panel model.
5 Conclusion
From the empirical results, the invention of new products and technological innovation can increase main business profits and promote the upgrading of the manufacturing industrial structure. China is a large manufacturing country whose economic growth is at present mainly driven by the secondary industry, of which manufacturing accounts for the largest proportion. At the same time, the manufacturing industry can absorb a large labor force, so its development is of great significance for increasing the employment rate, narrowing the gap between rich and poor, and stabilizing society. Therefore, the government should support technological innovation in the manufacturing industry, and enterprises themselves should actively pursue innovation to remain competitive in the tide of marketization and internationalization.
References
1. Adak M (2015) Technological progress, innovation and economic growth: the case of Turkey. Procedia Soc Behav Sci 195:776–782
2. Castellacci F (2008) Innovation and the competitiveness of industries: comparing
the mainstream and the evolutionary approaches. Technol Forecast Soc Change
75(7):984–1006
3. Cohen WM (2015) Innovation and technological change, economics of. Int Encycl
Soc Behav Sci 2:160–168
4. Guan JC, Yam RCM et al (2006) A study of the relationship between competitiveness and technological innovation capability based on DEA models. Eur J Oper Res 170(3):971–986
5. Hong J, Bea F (2016) Do government grants promote innovation efficiency in China's high-tech industries? Technovation 57:4–13
6. Sun S, Ye Q (2015) Mechanism and strategic choice of innovation driven manufac-
turing transformation. Ind Technol Forum 2:15–18
7. Wang ZH, Dong CT (2012) An analysis on the consistency between manufacturing
structure and quality structure of labor force in China: and the discussion of shortage of migrant workers, skilled personnel, and difficulty of graduates employment. Popul
Econ 16(3):363–368
8. Zhao X, Qiao M (2012) Research on the mode and path of Shanghai promoting manufacturing upgrading with high technology. Econ Res Shanghai 2:63–69
A Crash Counts by Severity Based Hotspot
Identification Method and Its Application
on a Regional Map Based Analytical Platform
1 Introduction
There are papers that discuss methods based on accident count or frequency [3],
papers that employ both accident rate (AR) and rate quality control [13], and
others that adopt the joint use of accident frequency and rate to flag sites with
promise [7]. To correct for the regression-to-the-mean bias associated with typi-
cal HSID methods [4], some researchers have suggested using the empirical Bayes
(EB) techniques [5]. This method combines clues from both the accident history
of a specific site and expected safety of similar sites, and has the advantage of
revealing underlying safety problems which otherwise would not be detected.
Some recent studies indicated that the severity of crashes should not be
neglected in the hotspot identification (HSID) process [12]. The hotspots cor-
responding to high crash risk locations can be quite different when considering
the crash frequency by different levels of crash severity. It is particularly impor-
tant to take into account crash severities in site ranking, because the cost of
crashes could be hugely different at different severity levels. This means that,
for instance, a road segment with higher frequency of fatal accidents may be
considered more hazardous than a road segment with fewer fatal accidents, but
more severe or minor injury accidents. Therefore, it is necessary to consider crash
severity when identifying hotspots.
While the equivalent property damage only (EPDO) method [15] is a way of
comparing severity types among each other, inconsistencies can occur when eval-
uating the HSID methods in different time periods, since the traditional EPDO
method overemphasizes sites with a low frequency of fatal or severe crashes [10].
As a result, a risk weight factor is developed in this research by combining the
average crash cost with the corresponding probability for each type of crash
severity. In order to develop a new safety performance index (SPI) and a new
potential safety improvement index (PSII), a generalized nonlinear model-based
mixed multinomial logit approach is introduced to extend the traditional empir-
ical Bayes method for estimating the probability of crashes in different severity
levels. The new method developed in this paper is compared with other hotspot
identification methods by employing four hotspot identification evaluating meth-
ods and applied on a regional map based analytical platform.
The EB method requires the use of pertinent crash prediction models. For the
purpose of this study, in order to account for unobserved heterogeneity and nonlinear effects of variables and to extract more complex relationships, a refinement of
the generalized nonlinear model-based (GNM-based) mixed multinomial logit
(MNL) approach developed by Zeng et al. [17] was used.
In this research, three categories are considered for the crash severity (i.e., prop-
erty damage only (PDO) (k = 1), injury (k = 2), and fatal (k = 3)). The crash
severity type, denoted by Y , was the response variable, whereas contributing
288 X. Xu et al.
Ye and Lord [16] demonstrated that fatal crashes should be set as the baseline
severity for the mixed MNL model. To minimize the bias and reduce the vari-
ability of a model, in this paper, fatal crashes were used as the baseline severity
category for comparison with the other categories.
In order to account for the unobserved heterogeneity, let Ω = (ω1 , ω2 , ω3 ),
as discussed previously, and note that the Ω vector has a continuous density
function f (Ω|Γ ), where Γ is a vector of parameters characterizing the density function, and ωk = [ωk1 , ωk2 , · · · , ωkJ ]T is the coefficient vector for the kth category
of the predictor vector. According to [6,9,14], the resulting mixed MNL crash
severity probabilities are as follows:
$$\Pr(Y_i = K) = \int \frac{1}{1 + \sum_{k=1}^{K-1} e^{U_{ki}\omega_k + \beta_{k0}}}\, f(\Omega|\Gamma)\, d\Omega, \quad i = 1, 2, \cdots, n, \qquad (2)$$

$$\Pr(Y_i = k) = \int \frac{e^{U_{ki}\omega_k + \beta_{k0}}}{1 + \sum_{k=1}^{K-1} e^{U_{ki}\omega_k + \beta_{k0}}}\, f(\Omega|\Gamma)\, d\Omega, \quad i = 1, 2, \cdots, n; \; k = 1, 2, \cdots, K-1, \qquad (3)$$
where P r(Yi = k) is the probability of crash severity type k; Uki =
[Uki1 (xi1 ), Uki2 (xi2 ), · · · , UkiJ (xiJ )] is the nonlinear predictor vector of obser-
vation i for contributing factors (i.e., roadway geometric characteristics, traffic
characteristics, weather conditions); βk0 is an intercept term specific to crash
severity type k. Since Uki is considered as a nonlinear predictor vector of observa-
tion i for contributing factors, Eqs. (2) and (3) are called the prediction functions
of the GNM-based mixed MNL approach.
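As a rough illustration, the probabilities of Eqs. (2) and (3) can be approximated by Monte Carlo integration over the random coefficients Ω. The sketch below assumes independent normal mixing coefficients; the utilities, dimensions, and parameter values are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_mnl_probs(U, beta0, omega_sd=0.5, n_draws=5000):
    """Monte Carlo approximation of Eqs. (2)-(3): average the logit
    probabilities over random draws of the coefficients omega.
    U: (K-1, J) predictor vectors for the non-baseline severities.
    beta0: (K-1,) intercepts. Returns probabilities with the baseline
    (fatal) class last."""
    K1, J = U.shape
    probs = np.zeros(K1 + 1)
    for _ in range(n_draws):
        omega = rng.normal(0.0, omega_sd, size=(K1, J))  # assumed iid normal mixing
        util = (U * omega).sum(axis=1) + beta0           # U_ki . omega_k + beta_k0
        denom = 1.0 + np.exp(util).sum()
        probs[:K1] += np.exp(util) / denom               # Eq. (3)
        probs[-1] += 1.0 / denom                         # Eq. (2), baseline class
    return probs / n_draws

# Toy example: K = 3 severities (PDO, injury, fatal as baseline), J = 2 factors
U = np.array([[0.5, 1.0], [0.2, 0.3]])
p = mixed_mnl_probs(U, beta0=np.array([0.1, -0.2]))
print(p.sum())   # the three probabilities sum to 1
```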
According to Eqs. (2)–(4), the expected crash density for different severity
levels can be estimated as follows.
(1) Expected PDO crash density:
$$d_{i1} = d_i \cdot \Pr(Y_i = 1) = \int e^{U_i\omega + \beta_0} f(\omega|\varphi)\, d\omega \cdot \int \frac{e^{U_{1i}\omega_1 + \beta_{10}}}{1 + \sum_{k=1}^{2} e^{U_{ki}\omega_k + \beta_{k0}}}\, f(\Omega|\Gamma)\, d\Omega, \quad i = 1, 2, \cdots, n, \qquad (5)$$
where di1 is the expected PDO crash density along segment i during a certain
time period.
(2) Expected injury crash density:
$$d_{i2} = d_i \cdot \Pr(Y_i = 2) = \int e^{U_i\omega + \beta_0} f(\omega|\varphi)\, d\omega \cdot \int \frac{e^{U_{2i}\omega_2 + \beta_{20}}}{1 + \sum_{k=1}^{2} e^{U_{ki}\omega_k + \beta_{k0}}}\, f(\Omega|\Gamma)\, d\Omega, \quad i = 1, 2, \cdots, n, \qquad (6)$$
where di2 is the expected injury crash density along segment i during a certain
time period.
(3) Expected fatal crash density:
Based on Eqs. (5)–(7), the EPDO crash frequency measure is modified and
employed to weight crashes according to severity (fatal, injury, and PDO) to
develop a combined crash density and severity score (CCDSS) for each site [15].
The weight factors are based on PDO crash costs. An EPDO value summarizes
the crash costs and severity.
In the calculations, weight factors were assessed from the crash cost estimates
developed by WSDOT in the Annual Collision Data Summary Reports (2011–
2014). Using average crash costs for motorways, fatal crashes ($2,227,851) have
a weight factor equal to 981, injury crashes ($20,439) have a weight factor equal
to 9, and PDO crashes ($2,271) have a weight factor equal to 1. However, if we
only consider the average crash costs to be the weight factor, inconsistencies can
occur when evaluating the HSID methods in different time periods, since the
traditional EPDO method overemphasizes sites with a low frequency of fatal or
severe crashes [10]. As a result, a risk weight factor is developed in this research
by combining the average crash cost with the corresponding probability for each
type of crash severity. Let Fw denote the fatality risk weight factor, Iw , the
injury risk weight factor, and Pw , the PDO risk weight factor, then they are
defined by using the following equations:
$$F_w = \frac{c_F \cdot \eta_F}{c_P \cdot \eta_P}, \qquad I_w = \frac{c_I \cdot \eta_I}{c_P \cdot \eta_P}, \qquad P_w = 1, \qquad (8)$$
where c_F = $2,227,851, c_I = $20,439, and c_P = $2,271 are the average costs for fatal, injury, and PDO crashes; η_F, η_I, and η_P are the probabilities of occurrence for fatal, injury, and PDO crashes.
Based on the preceding analysis, the expected CCDSS (ECCDSS) for road-
way segment i can be defined as:
where OCCDSSi is the observed combined crash density and severity score
(OCCDSS) for roadway segment i and is defined as below:
where σi1 , σi2 , σi3 are the observed fatal, injury, and PDO crash density along
segment i during a certain time period respectively; λi is a weighting factor that
is calculated through the following equation:
$$\lambda_i = \frac{1}{1 + \alpha_i \,\mathrm{ECCDSS}_i}, \qquad (12)$$
where αi is the overdispersion parameter, which is a constant for a given model and is derived during the regression calibration process.
The PSII was developed as the difference between the SPI and the ECCDSS, as
follows:
$$\mathrm{PSII}_i = \lambda_i\,\mathrm{ECCDSS}_i + (1 - \lambda_i)\,\mathrm{OCCDSS}_i - \mathrm{ECCDSS}_i = \mathrm{SPI}_i - \mathrm{ECCDSS}_i, \quad i = 1, 2, \cdots, n, \qquad (13)$$
when the PSII value is greater than zero, a site experiences a higher combined
frequency and severity score than expected; when the PSII value is less than zero,
a site experiences a lower combined frequency and severity score than expected.
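A minimal sketch of Eqs. (12) and (13): the SPI shrinks the observed score toward the expected score with weight λ, and the PSII is their difference. The numbers below are hypothetical.

```python
def spi_psii(eccdss, occdss, alpha):
    """Sketch of Eqs. (12)-(13): shrink the observed score toward the
    expected score with weight lambda, then take the difference as the
    potential safety improvement index."""
    lam = 1.0 / (1.0 + alpha * eccdss)           # Eq. (12)
    spi = lam * eccdss + (1.0 - lam) * occdss    # safety performance index
    psii = spi - eccdss                          # Eq. (13)
    return spi, psii

# Hypothetical site: expected score 10, observed score 16, alpha = 0.1
spi, psii = spi_psii(eccdss=10.0, occdss=16.0, alpha=0.1)
print(spi, psii)   # 13.0 3.0 -- positive PSII: worse than expected
```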
This study was performed based on crash data records collected in Washington
State from January 2011 to December 2014 (i.e., a four-year period). The
data were obtained from the Washington State Department of Transportation
(WSDOT), Highway Safety Information System (HSIS), and the Digital Road-
way Interactive Visualization and Evaluation Network (DRIVE Net) platform
at the University of Washington (UW). Four major datasets are included in this
study: crash data, roadway geometric characteristics, traffic characteristics, and
weather conditions. These datasets detail all of the information regarding crash
frequency, locations, severities, roadway segment length, average number of lanes
(NOL), horizontal curve type (HCT), curvature of the segment (COS), average
width of outer shoulder (WOS), average width of inner shoulder (WIS), average
width of median (WM), dominant lane surface type (DLST), dominant outer
shoulder type (DOST), dominant inner shoulder type (DIST), dominant median
type (DMT), average speed limit (ASL), AADT, AADT per lane, road surface
conditions (RSC, i.e., dry, wet, snow/ice/slush), and visibility (good, bad).
In this research, we consider using the proportion of the crash frequency for
each type of severity based on the collected crash data of 21,396 road segments
along I-5, I-90, I-82, I-182, I-205, I-405 and I-705 in Washington to represent
the probability of crash occurrence for each severity level in this area. Based on
the crash counts after data quality control, the total number of crashes recorded
during the data collection period was 47,657, including 134 fatal crashes, 13,824
injury crashes, and 33,699 PDO crashes. Thus, we can calculate that ηF =
0.0028, ηI = 0.29, ηP = 0.7072; then, the values of the risk weight factors are
obtained by employing Eq. (8) as Fw = 3.884, Iw = 3.691, Pw = 1.
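These risk weight factors can be recomputed directly from Eq. (8) using the reported average crash costs and severity proportions:

```python
# Recomputing the risk weight factors of Eq. (8) from the WSDOT average
# crash costs and the severity proportions reported above.
c_F, c_I, c_P = 2_227_851, 20_439, 2_271       # average crash costs ($)
eta_F, eta_I, eta_P = 0.0028, 0.29, 0.7072     # severity proportions

F_w = (c_F * eta_F) / (c_P * eta_P)
I_w = (c_I * eta_I) / (c_P * eta_P)
P_w = 1.0
print(round(F_w, 3), round(I_w, 3), P_w)   # 3.884 3.691 1.0
```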
The reference methods compared include the EB estimated crash density based on the GNM, the ARP based on the NB GLM, and the ARP based on the GNM.
Cheng and Washington [2] have developed four new evaluation tests for HSID.
In this research, the site consistency test, method consistency test, total rank
differences test, and the total score test are employed to evaluate the effectiveness
of the developed safety performance indexes and reference performance indexes.
The evaluation experiment uses the following procedure, which closely mimics
how reactive safety management programs are conducted in practice:
(1) For the purpose of comparing alternate HSID approaches, the 4-year acci-
dent data were separated into two periods, Period 1 (Years 2011–2012) and
Period 2 (Years 2013–2014).
(2) Road sections (intersections, ramps, two-lane rural roads, etc.) are segre-
gated so that the safety of similar sites can be fairly compared. In this eval-
uation, the analysis is based on the analysis of nine functional classifications
of road sections.
(3) For each HSID method, similar road sections are sorted in descending order
of estimated safety (noting that the four HSID methods rank sites according
to different criteria).
(4) Sections with the highest rankings are flagged as hotspots (in practice these
sites will be further scrutinized). Typically, a threshold is assigned according
to safety funds available for improvement, such as the top 1% of sites. In this
evaluation, both the top 1% and 5% of the locations are used as experimental
values.
The site consistency test (SCT) measures the ability of an HSID method to
consistently identify a high-risk site over repeated observation periods. The test
rests on the premise that a site identified as high risk during time period t should
also reveal an inferior safety performance in a subsequent time period t+1, given
that the site is in fact high risk and no significant changes have occurred at the
site. The method that identifies sites in a future period with the highest crash
frequency is the most consistent. In this research, the SPI developed above is
employed as the safety performance criterion in the subsequent time period. The
test statistic is given as:
$$\mathrm{SCT}_{h,t+1} = \sum_{q=n-n\gamma+1}^{n} \mathrm{SPI}_{q,h,t+1}, \quad h = 1, 2, \cdots, H, \qquad (14)$$
where h is the index of the HSID method being compared; n is the total number of roadway segments; γ is the threshold of identified hotspots (e.g., γ = 0.01 corresponds to the top 1% of the n roadway segments identified as hotspots); and nγ is the number of identified hotspots.
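The site consistency test of Eq. (14) can be sketched as follows; the scores and SPI values here are invented toy data.

```python
import numpy as np

def site_consistency_test(scores_t, spi_t1, gamma=0.05):
    """Eq. (14), a sketch: flag the top-gamma fraction of sites by the HSID
    method's period-t score, then sum those sites' SPI in period t+1.
    A higher total means a more consistent method."""
    scores_t, spi_t1 = np.asarray(scores_t), np.asarray(spi_t1)
    n_hot = max(1, int(round(len(scores_t) * gamma)))
    hotspots = np.argsort(scores_t)[-n_hot:]    # indices of the top-ranked sites
    return spi_t1[hotspots].sum()

# Toy data: 5 sites, top 40% (2 sites) flagged as hotspots
sct = site_consistency_test([5.0, 1.0, 9.0, 3.0, 7.0],
                            [4.0, 0.5, 8.0, 2.0, 6.0], gamma=0.4)
print(sct)   # sites 2 and 4 are flagged: 8.0 + 6.0 = 14.0
```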
From the site consistency test, it is shown in Table 1 that the SPI method
outperforms other HSID methods in identifying both of the top 1% and 5% of
hotspots with highest SCT values, 21521.79 and 44251.68, in Period 2, followed
closely by the EB CD (GNM) method. The ARP (NB GLM) performs the worst
in both cases, with the identified hotspots experiencing the lowest number of SCT
values, say, 19034.25 and 42106.73, respectively (although the ARP is based on
reduction potential, so the total count can be misleading).
$$\mathrm{MCT}_h = \left|\{s_{n-n\gamma+1}, s_{n-n\gamma}, \cdots, s_n\}_{h,t} \cap \{s_{n-n\gamma+1}, s_{n-n\gamma}, \cdots, s_n\}_{h,t+1}\right|, \quad h = 1, 2, \cdots, H, \qquad (15)$$
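A sketch of the method consistency test: count the sites a method flags as hotspots in both periods. The rankings below are toy data.

```python
def method_consistency_test(scores_t, scores_t1, gamma=0.05):
    """Method consistency test, a sketch: the number of sites a method
    flags as hotspots in both period t and period t+1 (the size of the
    intersection of the two hotspot sets)."""
    n = len(scores_t)
    n_hot = max(1, int(round(n * gamma)))
    hot_t = set(sorted(range(n), key=lambda i: scores_t[i])[-n_hot:])
    hot_t1 = set(sorted(range(n), key=lambda i: scores_t1[i])[-n_hot:])
    return len(hot_t & hot_t1)

# Toy rankings for one method over two periods
mct = method_consistency_test([5, 1, 9, 3, 7], [4, 9, 8, 2, 6], gamma=0.4)
print(mct)   # only one site is flagged in both periods
```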
the SPI method outperforms the other HSID methods. Also shown in Table 2
are differences between percentages (shown in the parentheses) of Column 3
and Column 4 for the eight methods. There is a consistent drop in percentages
as threshold values drop. The explanation is that the top segments suffer from
greater random fluctuations in crashes, and thus the higher is the threshold, the
larger are the random fluctuations and the likelihood of not being identified in
a prior period.
where R(qh,t ) is the rank of segment q in period t for method h. The difference
in ranks is summed over all identified segments for threshold level γ for period t.
Table 3 illustrates that the SPI method is superior in the total rank differences
test. In both the γ = 0.01 and γ = 0.05 cases, the SPI method has significantly
smaller-summed ranked differences, by about 22.6% (in the case of γ = 0.01) and
16.9% (in the case of γ = 0.05) compared with the EB CD (GNM), and by about
75.1% (in the case of γ = 0.01) and 77.4% (in the case of γ = 0.05) compared
with the ARP (NB GLM). This result suggests that the SPI method is the
best HSID method (of the eight evaluated here) for ranking roadway segments
consistently from period to period.
The total score test (TST) combines the site consistency test, the method con-
sistency test, and the total rank difference test in order to provide a synthetic
index. The test statistic is given as:
$$\mathrm{TST}_h = \frac{100}{3}\left[\frac{\mathrm{SCT}_{h,t+1}}{\max_h\{\mathrm{SCT}_{h,t+1}\}} + \frac{\mathrm{MCT}_h}{\max_h\{\mathrm{MCT}_h\}} + \left(1 - \frac{\mathrm{TRDT}_h - \min_h\{\mathrm{TRDT}_h\}}{\max_h\{\mathrm{TRDT}_h\}}\right)\right], \quad h = 1, 2, \cdots, H, \qquad (17)$$
where the test assumes that the SCT, MCT, and TRDT have the same weight.
The former three tests provide absolute measures of effectiveness, whereas the
total score test gives an effectiveness measure relative to the methods being
compared. If method h performed best in all of the previous tests, the TST value
is equal to 100. If method h performed worst in all of the tests, the TST value
is positive since all three components of the test have a positive value. Indeed,
SCT and MCT, which should be maximized by the HSID methods, are weighted
in relation to the maximum values in the tests, whereas TRDT, which should be
minimized by the HSID methods, is weighted in relation to its difference from
the minimum value in the test. Table 4 illustrates the results of total score test
of the eight HSID methods, in which SPI performed best in both γ = 0.01 and
γ = 0.05 cases, and was followed closely by EB CD (GNM) method with 93.81
score (in the case of γ = 0.01) and 92.12 score (in the case of γ = 0.05). ARP
(NB GLM) performed the worst in both cases, with 70.82 score and 73.82 score
respectively.
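The total score test of Eq. (17) can be sketched as a small function over per-method test results; the three hypothetical methods below illustrate that a method winning every component test scores exactly 100.

```python
import numpy as np

def total_score_test(sct, mct, trdt):
    """Eq. (17), a sketch: combine the three component tests into a single
    0-100 score per HSID method, with equal weights."""
    sct, mct, trdt = map(np.asarray, (sct, mct, trdt))
    parts = (sct / sct.max()
             + mct / mct.max()
             + (1.0 - (trdt - trdt.min()) / trdt.max()))
    return 100.0 * parts / 3.0

# Three hypothetical methods; the first wins every component test
tst = total_score_test(sct=[100.0, 80.0, 60.0],
                       mct=[10.0, 8.0, 5.0],
                       trdt=[200.0, 300.0, 500.0])
print(tst)   # the best method scores exactly 100
```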
Overall, the four tests reveal that the SPI method is the most consistent
and reliable method for identifying hotspots. Although it can only be applied
to roadway segments where the crash data for different levels of severity are
available, with the rapid development of intelligent transportation systems and
data collection technologies, this method could become quite useful in identifying
high-risk road sites. On several criteria, the SPI outperforms other methods by
a wide margin. This evaluation suggests that the SPI method (of the methods
compared) has a potential to become the industry standard.
A regional map based analytical platform was developed on the DRIVE Net sys-
tem to highlight the methodology developed under this project. Ultimately, the
existing safety performance analysis function under the “Safety Performance”
module was expanded. The SPI developed in the preceding is used to color-code
the regional map based on safety performance. The PSII is employed to highlight
potential safety improvements on the map. By combining the two indices on the
regional map, one can easily identify accident hotspots and the key influencing
factors to consider in an improvement package.
The interface of the safety performance module in the regional map based
analytical platform is illustrated in Fig. 1. There are three sub-functions imple-
mented on this panel: Incident Frequency (NB GLM), Estimated Crash Mean
and Potential Safety Improvement Index (ARP NB GLM). The new SPI and
PSII were added as expanded safety performance analysis options. As stated
in the previous modeling part of this report, within a selected time range and
corridor, the SPI shows a more comprehensive view of safety performance on a
given corridor. The accident/incident data is from Washington Incident Tracking
System (WITS) database. The SPI level ranges from Level A to Level F, where
Level A (light green) corresponds to the highest safety performance and Level
F (dark red) corresponds to the lowest safety performance expected as shown in
Fig. 1.
The PSII implements the EB method in the modeling part. In this function,
both the historical incident data and the characteristics of the selected corridor
are used as model inputs. The output format still uses the six different colors
representing Level A to Level F to show the potential safety improvement index
on the map, where Level A shows the segment has the least potential to improve
SPI Levels
Fig. 1. SPI level ranges from Level A to Level F in the safety performance module
its safety, and Level F shows the segment has the most potential to improve its
safety. Figure 2 shows an example of this function.
7 Conclusions
A CCS-based HSID method is developed by extending the traditional EB method
to a GNM-based mixed MNL approach in this paper. A new SPI and a new
PSII are developed by introducing the risk weight factor and compared with
traditional indexes by employing four HSID evaluating methods, including the
site consistency test, method consistency test, total rank differences test, and
the total score test. The test results showed that the new SPI derived by the
GNM-based mixed MNL approach is the most consistent and reliable method
for identifying hotspots. Finally, the new CCS-based HSID method was applied
on a regional map based analytical platform.
References
1. Cheng W, Washington S (2005) Experimental evaluation of hotspot identification
methods. Accid Anal Prev 37:870–881
2. Cheng W, Washington S (2008) New criteria for evaluating methods of identifying
hot spots. Transp Res Rec 2083:76–85
3. Deacon J, Zegeer C, Deen R (1975) Identification of hazardous rural highway
locations. Transp Res Rec 543:16–33
4. Hauer E (1980) Bias-by-selection: overestimation of the effectiveness of safety coun-
termeasures caused by the process of selection for treatment. Accid Anal Prev
12:113–117
5. Hauer E, Persaud B, Smiley A et al (1991) Estimating the accident potential of
an Ontario driver. Accid Anal Prev 23:133–152
6. Hensher D, Greene W (2003) The mixed logit model: the state of practice. Trans-
portation 30:133–176
7. Laughland J, Haefner L, Hall J et al (1975) Methods for evaluating highway safety
improvements. NCHRP 162, Transportation Research Board
8. Mannering F, Shankar V, Bhat C (2016) Unobserved heterogeneity and the statis-
tical analysis of highway accident data. Anal Methods Accid Res 11:1–16
9. McFadden D, Train K (2000) Mixed MNL models for discrete response. J Appl
Econometrics 15:447–470
10. Montella A (2010) A comparative analysis of hotspot identification methods. Accid
Anal Prev 42:571–581
11. Park B, Lord D, Lee C (2014) Finite mixture modeling for vehicle crash data with
application to hotspot identification. Accid Anal Prev 71:319–326
12. Qu X, Meng Q (2014) A note on hotspot identification for urban expressways. Saf
Sci 66:87–91
13. Stokes R, Mutabazi M (1996) Rate-quality control method of identifying hazardous
road locations. Transp Res Rec 1542:44–48
14. Train K (2003) Discrete Choice Methods with Simulation. Cambridge University
Press, Cambridge
15. Washington S, Haque M, Oh J et al (2014) Applying quantile regression for model-
ing equivalent property damage only crashes to identify accident blackspots. Accid
Anal Prev 66:136–146
1 Introduction
Monitoring bandwidth usage in a university network is indispensable, so recording and analyzing bandwidth usage is very beneficial for network administrators. The aim is to keep bandwidth well controlled, access stable, and use convenient for users. Mapping, or clustering, bandwidth usage is therefore required to support the network administrator's performance analysis.
In this study, clustering methods based on intelligent algorithms, such as Self-Organizing Maps (SOM), K-Means, and Fuzzy C-Means (FCM), are applied to bandwidth usage. These algorithms have been widely used to solve data clustering problems in a variety of fields, including economics [5], supply chains [2], engineering [6], hydrology [1,4], the internet and social media [3,8], pattern recognition [7], and so forth. Many research results have shown that these algorithms provide accurate information for solving clustering problems.
2 Research Method
In this section, brief descriptions of the K-Means and FCM models are presented.
• Recalculate the cluster centers using the new cluster memberships; each center is the centroid of its cluster.
• Assign each object to the nearest new cluster center; if any center has changed, return to step 3, otherwise clustering is complete.
• Analyze the results of the clustering process.
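The steps above can be sketched as a minimal one-dimensional K-Means, treating bandwidth measurements as scalar values:

```python
import numpy as np

def kmeans(data, k, max_iter=100, seed=0):
    """Minimal 1-D K-Means following the steps above: assign each point to
    the nearest center, recompute centers as cluster means, and repeat
    until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    labels = np.zeros(len(data), dtype=int)
    for _ in range(max_iter):
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([data[labels == i].mean() if np.any(labels == i)
                                else centers[i] for i in range(k)])
        if np.allclose(new_centers, centers):   # centers stopped moving
            break
        centers = new_centers
    return centers, labels

# Toy "bandwidth usage" data with two obvious groups
data = np.array([1.0, 1.2, 0.9, 10.0, 10.5, 9.8])
centers, labels = kmeans(data, k=2)
print(sorted(centers))   # cluster means, near 1.03 and 10.1
```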
Another advanced clustering technique in machine learning is Fuzzy C-Means (FCM) clustering, developed by Dunn in 1973 and improved by Bezdek in 1981. In principle, the FCM clustering process partitions a set of data into a number of clusters with minimum similarity between different clusters [2]. Since the introduction of fuzzy set theory by Zadeh in 1965, it has been applied in a variety of fields. FCM is a flexible fuzzy partition that improves on the common C-Means algorithm [5]. In FCM, each feature vector is given membership values in [0, 1] through the membership function, because FCM is based on the distances between clusters. In other words, FCM clustering is based on a point-prototype clustering model whose output is the centroid of the most optimal partition, where the optimal partition centroid is obtained by minimizing the objective function:
$$J_{\mathrm{FCM}}(U, V) = \sum_{j=1}^{N}\sum_{i=1}^{C} (u_{ij})^q (d_{ji})^2, \qquad (1)$$
Comparison Between K-Means and Fuzzy C-Means Clustering 303
where U is the fuzzy K-partition of the dataset, V = {v_1, v_2, · · · , v_C} ⊂ R^P is the set of prototype centroids, and

$$(d_{ji})^2 = \|x_j - v_i\|^2 = \big(x_j(\mathrm{row}) - v_i(\mathrm{row})\big)^2 + \big(x_j(\mathrm{col}) - v_i(\mathrm{col})\big)^2. \qquad (2)$$
Step 5. Recalculate Step 4, u_ij → û_ij. If max_{ij} |u_ij − û_ij| < ε, where ε is the termination criterion between 0 and 1, the iteration process is stopped; if not, go back to Step 4. The FCM algorithm can be seen in Fig. 2.
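The FCM iteration, including the Step 5 termination criterion, can be sketched as follows for one-dimensional data (a simplification of the row/column distance of Eq. (2)):

```python
import numpy as np

def fcm(data, c, q=2.0, eps=1e-5, max_iter=200, seed=0):
    """Minimal 1-D Fuzzy C-Means sketch minimizing Eq. (1): alternate
    membership updates and weighted-centroid updates until the largest
    membership change falls below eps (the Step 5 criterion)."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), c, replace=False)]
    u = None
    for _ in range(max_iter):
        d = np.abs(data[:, None] - centers[None, :]) + 1e-12   # avoid div by 0
        inv = d ** (-2.0 / (q - 1.0))
        u_hat = inv / inv.sum(axis=1, keepdims=True)           # fuzzy memberships
        if u is not None and np.max(np.abs(u - u_hat)) < eps:  # Step 5 stop rule
            u = u_hat
            break
        u = u_hat
        w = u ** q
        centers = (w.T @ data) / w.sum(axis=0)                 # weighted centroids
    return centers, u

data = np.array([1.0, 1.2, 0.9, 10.0, 10.5, 9.8])
centers, u = fcm(data, c=2)
# each row of u holds one point's membership degrees, summing to 1
```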
304 Purnawansyah et al.
The initial centroids are constructed from the range of the data:

cluster number = k
s = (maxdata − mindata)/k
m = mean(data)
C = [(m − s·k), · · · , (m − s·1), (m), (m + s·1), · · · , (m + s·k)]

$$G = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 \end{bmatrix}.$$
2.3 Datasets
In this study, 152 days (January–May 2016) of daily network traffic usage from four client datasets of the ICT unit were captured. The data were then analyzed using MATLAB R2013b. The real dataset can be seen in Table 1 (Fig. 3).
This section presents the empirical work and compares the experimental results of the K-Means and FCM algorithms on the highest-average-usage network traffic problem. The performances are measured by the objective function value given by Eq. (1).
In this experiment, the training data are grouped into 3-, 4-, and 5-cluster patterns in order to observe the best clustering. The average data value is used as one of the centroid values; the other centroid values are determined randomly within the training data space. The results of K-Means are shown in Fig. 4.
In this experiment, the same scheme is used: the training data are grouped into 3-, 4-, and 5-cluster patterns in order to observe the best clustering. The average data value is used as one of the centroid values, and the other centroid values are determined randomly within the training data space. In this test, the FCM highest, middle, and lowest data values are obtained. The results of FCM are shown in Fig. 5.
In this study, the centroid values obtained by the FCM method are more accurate than those of the K-Means method, as shown in Table 2.
4 Conclusion
In this paper we presented a comparison of the K-Means and FCM methods and compared their centroid accuracy using various performance criteria. The research used network traffic data from four units: rectorate, forestry, science, and economics. In clustering, one centroid parameter was examined with three centroid values. Our experiments showed that the FCM method produces better clustering results than K-Means. Nevertheless, we also concluded that the FCM algorithm is slower than K-Means. As future work, an optimization method to improve the accuracy of the centroids is proposed.
RDEU Evolutionary Game Model
and Simulation of the Network Group Events
with Emotional Factors
1 Introduction
Generally, society as a whole is in a period of social transformation filled with various contradictions and conflicts, including an increasing number of network mass incidents and the negative influence arising from them. A simple piece of online information often attracts tens of millions of clicks in a short period, and an infectious emotion may spread out of control like a virus, causing emotional excitement. That is to say, the development of group events on networks is often driven by the interaction between social and human factors.
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 25
312 G. Xiong et al.
The evolution of this interaction has harmful derivative effects, easily leading to systemic economic and social crises, so social stability faces a severe challenge. Therefore, keeping in touch with the emotional state of netizens in a timely manner is of great significance for grasping the law of evolution of netizen group behavior and for early warning of derivative events.
At present, most research on network mass incidents has focused on network conflict. Morales [5] held that network mass incidents refer to the large-scale pooling of netizens' opinions on the network around specific hotspot problems, thus affecting real-life group events. Levine [2] argued that cyberspace disputes and conflicts, which expand interaction among people, can activate public awareness of society, create a new public realm, and provide new environmental resources for democracy, effectively avoiding autocracy. In addition, some research focuses on the function of network conflict and its effect on real society and democratic systems. Post [6] pointed out that traditional legal control tools are confronting severe network conflict challenges: technicians cannot depend on traditional management policy to effectively regulate the network environment and reconstruct the rule system of cyberspace. Qiu [7] summarized three types of network mass incidents: the first occurs on the Internet but exerts substantial influence on real society; the second happens in reality with a virtual online organization as intermediary; the third happens through offline interaction based on online communication. Some scholars have proposed strategies for unexpected group incidents from different perspectives. From the view of communication studies, Zhu et al. [13] emphasized the fundamental effect of faith on public opinion and the guidance of the "opinion leader" on its direction. Investigating social struggle from a social-psychological point of view, Tarrow [10] argued that it was an individual's sense of deprivation that produced resentment, thus forming collective behavior. It follows that current research on unexpected network group incidents mostly applies qualitative, normative approaches from a single perspective and seldom establishes mathematical models grounded in management science. A few scholars, such as Liu [3,4], Deng et al. [1], Xie [12], Wang [11], and Sun [9], have studied traditional unexpected group incidents. Aiming at the conflict evolution mechanism of such incidents, they established models that initially revealed the mechanism of these incidents. Although evolutionary game models of unexpected group incidents deepened research on the conflict evolution mechanism by dividing social groups into advantaged and vulnerable groups, they did not consider the influence of emotional factors on the evolutionary game equilibrium reached in decision-making.
All in all, the study of the evolution mechanism of group events on networks is still in its infancy, and only qualitative methods have been used to analyze the causes, types, and countermeasures of such events. There is little research on their evolution mechanism; in particular,
RDEU Evolutionary Game Model 313
we lack direct research on game models of endogenous factors such as emotion to further explore the evolution of these events. Based on Quiggin's rank-dependent expected utility (RDEU) theory [8], this paper constructs an RDEU evolutionary game model of group events on networks, studies the evolution of netizens' strategy selection under different emotional states, and provides emergency decision support for policy selection in such events. The paper has four sections. The second section introduces the emotional function reflecting the psychological activities of netizens and establishes the RDEU game model of the behavior mechanism of group events on networks. The third section, according to the different emotions of netizens, uses MATLAB to simulate the evolutionary equilibrium of the netizens' RDEU game model under six scenarios (for example, one party rational while the other is pessimistic, both parties pessimistic, etc.). The fourth section summarizes the paper.
In a network unexpected group incident, the strategy set of the netizen groups can be divided into "fight" (represented by F) and "peace" (represented by P): the former refers to posting radical articles or comments, while the latter refers to quietly observing the course of events without taking radical action. In this game, there are three situations: (1) when both netizen groups adopt the sheep-flock strategy "F", the influence of the incident may expand accordingly, and each party gains interaction revenue "V" and extraneous revenue "S" while paying cost "C"; (2) when the two netizen groups adopt different strategies, the party choosing "P" gains interaction revenue without paying "C", while the party choosing "F" pays "C" and gains "V" and "S"; (3) when both groups apply the peaceful strategy, namely acting as onlookers or passers-by, neither gains any revenue, contributes to the incident, makes any "S", or pays any cost, so they spend nothing and gain nothing. Table 1 shows the revenue matrix of the netizens' sheep-flock effect in network unexpected group incidents.
                               Netizen B
                               Fight (F), q               Peace (P), 1 − q
Netizen A  Fight (F), p       2V + S − C, 2V + S − C     V + S − C, V
           Peace (P), 1 − p   V, V + S − C               0, 0
Table 1 shows that netizens pay only a small price when they forward or comment on an article in most cases. In conclusion, fighting brings more benefits than peace, and obviously S > C, so the parameters are ordered as: 2V + S − C > V + S − C > V > 0.
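The payoff matrix of Table 1 and the stated parameter ordering can be checked numerically; the sample values of V, S, and C below (chosen so that S > C and V > 0) are illustrative assumptions, not values from the paper.

```python
# Illustrative values satisfying V > 0 and S > C (assumptions, not from the paper)
V, S, C = 2.0, 3.0, 1.0

# Revenue matrix from Table 1: entries are (payoff to netizen A, payoff to netizen B)
payoffs = {
    ("F", "F"): (2 * V + S - C, 2 * V + S - C),  # both fight
    ("F", "P"): (V + S - C, V),                  # A fights, B stays peaceful
    ("P", "F"): (V, V + S - C),                  # A peaceful, B fights
    ("P", "P"): (0.0, 0.0),                      # both peaceful: no cost, no gain
}

# Ordering of parameters stated in the text
assert 2 * V + S - C > V + S - C > V > 0
```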
The dynamic game model above is built on the assumption that netizens are entirely rational. However, when the socio-economic environment and the decision problems become more complicated, people's rationality is clearly limited. For this reason, to preserve the applied value of the game analysis, a game of boundedly rational netizens must be analyzed. The following part therefore discusses the evolutionary game model among netizen groups under the analytical framework of a repeated random-pairing game among netizens of bounded rationality. Suppose that netizens adopting "F" occupy a proportion p of netizen group A and those adopting "P" account for (1 − p); netizens adopting "F" occupy a proportion q of netizen group B and those adopting "P" account for (1 − q), where p, q ∈ [0, 1].
In recent years, network unexpected group incidents have displayed characteristics distinct from traditional group incidents, chiefly emotional extremity. Because of the lack of an authoritative review mechanism in online communication, features such as popularization and randomness arise, making such incidents correspondingly difficult for governmental institutions to control. At the same time, because of problems such as the widening wealth gap, soaring house prices, and inappropriate governmental behavior, netizens tend to sympathize with disadvantaged groups. When the incident relates to social unfairness, netizens get agitated or go to extremes much more easily. To reflect the emotional status of both parties of the game in strategy selection, this paper applies the RDEU theory to incorporate the emotional factors of the netizen groups into the above sheep-flock game model. Suppose the emotional functions of netizen groups A and B are w_A(p) = p^{r_1} and w_B(q) = q^{r_2} respectively, with r_1, r_2 > 0 called the emotional indexes of the netizens.
From the RDEU theory we obtain the probabilities of the different revenues of netizen groups A and B under their different strategies, and the rank of each strategy portfolio's revenue among all revenue values, called the rank of the corresponding revenue. The higher the rank of a strategy portfolio, the less likely it is that the revenues of other portfolios exceed it; this is the meaning of "high revenue, low probability". The corresponding decision weights are calculated from the probabilities and ranks of the different revenues, as listed in Tables 2 and 3.
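Since Tables 2 and 3 are not reproduced here, the rank-dependent weight computation can only be sketched. In Quiggin's RDEU framework, a revenue's decision weight is the increment of the weighting function over the cumulative probability of outcomes ranked at least as high; the outcome values and probabilities below are illustrative assumptions, and w(p) = p^r is the paper's emotional function.

```python
def rdeu_weights(revenues, probs, w):
    """Rank-dependent decision weights: rank outcomes from highest revenue down;
    weight_i = w(P(rank >= i)) - w(P(rank > i))."""
    order = sorted(range(len(revenues)), key=lambda i: revenues[i], reverse=True)
    weights = [0.0] * len(revenues)
    cum = 0.0                      # probability mass of strictly better-ranked outcomes
    for i in order:
        weights[i] = w(cum + probs[i]) - w(cum)
        cum += probs[i]
    return weights

# Emotional function w(p) = p^r; r = 1 recovers the objective probabilities
revs, ps = [6.0, 4.0, 2.0, 0.0], [0.14, 0.06, 0.56, 0.24]   # hypothetical values
pi_rational = rdeu_weights(revs, ps, lambda p: p ** 1.0)
pi_emotional = rdeu_weights(revs, ps, lambda p: p ** 0.5)   # r != 1 distorts weights
```

For any r the weights still sum to w(1) − w(0) = 1; only their distribution over ranks changes with the emotional index.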
Table 2. The probability distribution, rank and decision weights of netizen group A’s
revenue
Table 3. The probability distribution, rank and decision weights of netizen group B’s
revenue
U_AF and U_AP represent the expected revenues of netizen group A adopting "F" and "P" respectively in the network unexpected group incident. Similarly, U_BF and U_BP represent the expected revenues of netizen group B adopting "F" and "P".
The bounded rationality of the netizen groups indicates that people find better strategies by learning during the game rather than finding the best strategy at the beginning. According to the theory of replicator dynamics in biological evolution, players adopting strategies that yield lower revenues will switch to (imitate) strategies yielding higher revenues. For this reason, the proportion of group members applying different strategies may change, and the speed of change of a strategy's proportion is positively related to the margin of its revenue over the average revenue. Consequently, the speed of change of the proportion p of netizens resorting to "F" in netizen group A is given by the replicator dynamic equation:
dp/dt = p^{r_1}(U_{AF} - U_A) = p^{r_1}(1 - p^{r_1})[V(pq)^{r_1} + (2V + S - C)p^{r_1} - V(p + q - pq)^{r_1}].  (4)
At the same time, U_B is taken to represent the average revenue of netizen group B. When netizen group B takes the "fight" behavior, the speed of change of the proportion of netizens can be represented by the replicator dynamics equation:
dq/dt = q^{r_2}(U_{BF} - U_B) = q^{r_2}(1 - q^{r_2})[V(pq)^{r_2} + (2V + S - C)q^{r_2} - V(p + q - pq)^{r_2}].  (6)
We then divide the emotional indexes r_1 and r_2 of the netizens into six conditions and apply the simulation software MATLAB 7.0 to simulate the changes of the network groups' strategies in network unexpected group incidents and analyze under which conditions the system reaches evolutionary equilibrium. Assume the initial ratio (p, q) = (0.2, 0.7); at t = 0, the proportion of netizens in group B applying "F" is higher than that in group A.
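Equations (4) and (6) can be integrated with a simple forward-Euler scheme to mirror the described MATLAB simulation. The revenue parameters V, S, C (chosen so that S > C), the step size, and the horizon are assumptions; the initial point (p, q) = (0.2, 0.7) is taken from the text.

```python
def simulate(r1, r2, p0=0.2, q0=0.7, V=2.0, S=3.0, C=1.0, dt=0.01, steps=5000):
    """Forward-Euler integration of the replicator equations (4) and (6)."""
    p, q = p0, q0
    for _ in range(steps):
        dp = p**r1 * (1 - p**r1) * (
            V * (p * q)**r1 + (2*V + S - C) * p**r1 - V * (p + q - p*q)**r1)
        dq = q**r2 * (1 - q**r2) * (
            V * (p * q)**r2 + (2*V + S - C) * q**r2 - V * (p + q - p*q)**r2)
        p = min(max(p + dt * dp, 0.0), 1.0)   # keep proportions inside [0, 1]
        q = min(max(q + dt * dq, 0.0), 1.0)
    return p, q
```

Under these assumed parameters, the rational case r_1 = r_2 = 1 drives p toward 0 and q toward 1, i.e. group A settles on "P" and group B on "F".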
Situation 1: Both sides of netizen groups are rational, namely r1 = r2 = 1.
Figure 1 shows the evolutionary result when all parameters conform to Situation 1. The result shows that when both netizen groups A and B are rational, the system eventually evolves to "F" for group B and "P" for group A.
Situation 2: One netizen group is rational while the other is optimistic: r_1 = 1 and r_2 ∈ (1, +∞), or r_1 ∈ (1, +∞) and r_2 = 1; suppose r_1 = 1, r_2 = 1.7 or r_1 = 1.7, r_2 = 1.
Figure 2 describes the evolutionary results of the system when all parameters satisfy Situation 2. The result shows that when group A is rational and group B is optimistic, or group A is optimistic and group B is rational, the system finally evolves to "F" for group B and "P" for group A.
Situation 3: One netizen group is rational while the other is pessimistic: r_1 = 1 and r_2 ∈ (0, 1), or r_1 ∈ (0, 1) and r_2 = 1; suppose r_1 = 1, r_2 = 0.9 or r_1 = 0.9, r_2 = 1.
Figures 3(a) and (b) show the evolutionary results of the system when all parameters conform to Situation 3. The result shows that when group A is rational and group B is pessimistic, or group A is pessimistic and group B is rational, the system finally evolves to "F" for group B and "P" for group A.
Fig. 2. (a) The result of simulation when r1 = 1, r2 = 1.7; (b) The result of simulation
when r1 = 1.7, r2 = 1
Fig. 3. (a) The result of simulation when r1 = 1, r2 = 0.9; (b) The result of simulation
when r1 = 0.9, r2 = 1
Fig. 4. The result of simulation when r1 = 0.5, r2 = 0.6. Fig. 5. The result of simulation when r1 = 1.2, r2 = 1.3
Fig. 6. (a) The result of simulation when r1 = 0.9, r2 = 1.1; (b) The result of simulation
when r1 = 1.1, r2 = 0.9
Figures 6(a) and (b) describe the evolutionary results of the system when all parameters conform to Situation 6. The result shows that when group A is pessimistic and group B is optimistic, or group A is optimistic and group B is pessimistic, the system finally evolves to fixed strategies for group B and group A.
4 Conclusion
Addressing the shortcomings of the existing literature on group events on networks and drawing on the new RDEU theory, this paper establishes an RDEU game model of group events on networks. Under six evolutionary scenarios, it analyzes the extent of and conditions for the influence of different emotions on netizens' game behavior. The paper overcomes the limitation of the traditional game model, which rests on the "full rationality" hypothesis of netizens, and considers the influence and mechanism of endogenous factors such as netizen emotion on the evolution of group events on networks. It also reveals the deeper root of the outbreak and expansion of such events. The results show that emotional factors have an important influence on the choice of
the game strategies in group events on networks. When some netizens hold a "pessimistic" emotion, they tend toward "antagonistic" behavior, and the higher the degree of pessimism, the more likely they are to choose a risky strategy; when some netizens hold an "optimistic" emotion, they readily make "concessions"; when the netizens in both groups are pessimistic, the event is likely to evolve into a conflict amplified by the herd effect.
The conclusions of this study can provide reference and inspiration for the emergency management, prevention, and control of network mass incidents. Government departments and social media should improve response mechanisms for the interests and wishes of netizens, strengthen the monitoring and analysis of netizens' emotions, and provide timely psychological intervention and emotional counseling for them.
References
1. Deng Y, O’Brien KJ (2012) Relational repression in China: using social ties to
demobilize protesters. Soc Sci Electron Publishing 215(215):533–552
2. Levine P (2001) Civic renewal and the commons of cyberspace. Nat Civic Rev
90(3):205–212
3. Liu D, Wang W, Li H (2013) Evolutionary mechanism and information supervision
of public opinions in internet emergency. Procedia Comput Sci 17:973–980
4. Liu D, Han C, Yin L (2016) Multi-scenario evolutionary game analysis of evo-
lutionary mechanism in urban demolition mass incident. Oper Res Manage Sci
1:76–84
5. Morales AJ, Borondo J et al (2014) Efficiency of human activity on information
spreading on twitter. Soc Netw 39(1):1–11
6. Post DG (2002) Against ’against cyberanarchy’. SSRN Electron J 17(4):1365–1387
7. Qiu W (2009) The study of the network group events. J Ningbo Radio TV Univ
3:1–4 (in Chinese)
8. Quiggin J (1982) A theory of anticipated utility. J Economic Behav Organ
3(4):323–343
9. Sun H, Wang X, Xue Y (2016) Stochastic evolutionary game model for unexpected
incidents involving mass participation based on different scenarios. Oper Res Man-
age Sci 4:23–30 (in Chinese)
10. Tarrow SG (1994) Power in movement: social movements and contentious politics.
Contemp Sociol 28:3–12
11. Wang X, Li Y, Sun H (2015) Evolutionary game analysis of unexpected incidents
involving mass participation based scenario inference. J Manage Sci 6:133–143 (in
Chinese)
12. Xie B, Zhang W et al (2014) Evolutionary game analysis and application to mass emergency under government coordination. Oper Res Manage Sci 5:243–249 (in Chinese)
13. Zhu J, Zhang E, Cui Y (2013) Characteristics and causes of the network group
events. Theory Res 32:96–97 (in Chinese)
Effects of Internet Word-of-Mouth of a Tourism
Destination on Consumer Purchase Intention:
Based on Temporal Distance and Social Distance
1 Introduction
As traveling becomes increasingly popular, prospects for the tourism sector in China are prosperous and its potential is substantial, with online tourism consumption projected to reach RMB 800 billion. In 2014, the country's online transaction scale was RMB 3,250 billion, of which the tourism e-market accounted for 11.3%, or RMB 367 billion, a 45.6% rise over the previous year.
The reason the Chinese tourism market has good development prospects is the great influence of word-of-mouth (WOM). A survey by the market research company
322 M. Chen and J. Chen
GfK shows that more than half of Chinese travelers rely on personal contacts, including friends, family, and colleagues, to obtain travel information, the same proportion as for online reading. In terms of actual booking, 63% of visitors choose to pay online rather than offline, which means they inevitably read some online information about destinations. 80% of consumers said their purchases were influenced by Internet word-of-mouth (IWOM), implying that researching tourism destination IWOM is of practical significance. At the same time, the platforms and channels through which customers obtain tourism information have become more diverse as the Internet and mobile clients are widely used. IWOM regarding tourism destinations on platforms such as Ctrip, Weibo, and other tourism BBS and apps spreads ever more widely, exerting a great influence.
On the one hand, the time interval between customers reading IWOM and the actual purchase differs, so their purchases will differ. On the other hand, customers form a psychological judgment when they read online word-of-mouth, and their judgment of the psychological distance between themselves and the author of the IWOM also changes their purchase intention. Therefore, this paper starts from the differing influence of word-of-mouth at different temporal and social distances, observes the resulting purchasing intentions of consumers, and explains the influence of online word-of-mouth.
2 Literature
2.1 Word-of-Mouth Research
Arndt [1] first raised the significance of word-of-mouth in marketing research and showed word-of-mouth to be a third party's influence on consumer behavior. Early studies tended to probe the reasons for word-of-mouth [7]. With the development and growing scale of the Internet, word-of-mouth research turned to Internet word-of-mouth. In the literature, Internet word-of-mouth (IWOM) [24], electronic word-of-mouth (eWOM) [22], and electronic reputation refer to the same concept. In the study of IWOM, scholars [6,13] regard IWOM as the positive and negative comments that customers or potential customers put forward about a product or business in network communities or BBS. Dong Dahai defined the concept of IWOM [8] and considered that IWOM differs from traditional word-of-mouth in three main ways: (1) it spreads more broadly; (2) the connection between receivers and senders is undetermined [4] — besides the strong ties within circles of friends, there are also weak ties; (3) commercial uncertainty — because of the anonymity of Internet behavior, Internet word-of-mouth can be manipulated by enterprises or business organizations. Thus examining IWOM differs from traditional marketing, and this gives word-of-mouth a fresh meaning.
Since the beginning of the 21st century, research on word-of-mouth in China has gradually increased. Domestic studies of how word-of-mouth drives consumer purchases take two directions. One direction discovers the emotional
Effects of Internet Word-of-Mouth of a Tourism Destination 323
Fig. 1. Online word-of-mouth → attitude → actual purchase, moderated by regulatory factors
factors of word-of-mouth. Guo [10] put forward a theoretical model of word-of-mouth's effect on consumer attitudes, as shown in Fig. 1. After the model was proposed, much research centered on positive versus negative word-of-mouth, holding that the emotion behind IWOM influences actual purchase through attitude. Baber et al. [2] found that positive online word-of-mouth brings positive change in consumers' purchase intentions. Bailey [3] proved that negative reviews have a significant impact on customers' decisions. Later studies based on the original model concentrated on moderating variables, adding adjustment factors such as trust and relationship strength. Balaji [22] found that language divergence affects positive word-of-mouth intentions.
On the other hand, Zuo [20] put forward a model of consumer purchase intention as affected by word-of-mouth quantity and quality. The existing literature has proved that the quality and quantity of word-of-mouth influence consumers' purchase intentions [23]. These studies focus on the transmission of information. Balaji [4] mentioned information quality in an article about language divergence in WOM, which also refers to interaction quality and relationship quality.
under different situations. Zhang [21] proved the effect of temporal distance on consumers' impulse consumption. Zou [25] found that customers weigh desirability and feasibility differently when choosing financial products under different temporal distances.
Preliminary studies show that temporal and social distance affect consumer purchase intention. In short, combining psychology with word-of-mouth is a relatively new line of research.
One of the biggest differences between Internet word-of-mouth and traditional word-of-mouth is its diversified access, so the quantity of electronic word-of-mouth is substantial. For tourism destination IWOM, consumers can collect information from several channels, such as tourism BBS, blogs, Micro-blog, instant messaging (WeChat, QQ), network video, Wikipedia, and travel Q&A sites. Liu [16] showed, in the movie domain, that word-of-mouth affects the relationship between product sales and reputation. With the development of the network, the study of word-of-mouth quantity has developed swiftly from single-platform to cross-platform research [11], so the total quantity of IWOM is now measured from a cross-platform viewpoint.
The quality of word-of-mouth refers in essence to its content. For tourist destination IWOM, quality is mainly evaluated comprehensively from the relevance of the conveyed information, its professionalism, and its vividness. Existing research on tourism destination IWOM shows that how interesting and vivid online word-of-mouth is greatly affects its perceived quality: high-quality IWOM is regarded by customers as more credible [12].
Synthesizing research on word-of-mouth quality and quantity, Grant [9] proved that the combined effect of quantity and quality is the most significant, and that quality in particular influences purchase intention through a different path. The resulting theoretical model of their influence on purchase intention is shown in Fig. 2.
On this basis, the paper puts forward the following two hypotheses:
Hypothesis 1: In the distant future, consumers' purchase intention is more significantly affected by the quality of word-of-mouth, while in the near future it is more prominently influenced by the quantity of word-of-mouth.
Hypothesis 2: When the social distance between receivers and senders is far, the receivers' purchase intention is more significantly affected by the quality of word-of-mouth; when the social distance is near, it is more significantly influenced by the quantity of word-of-mouth.
(1) Subjects
The participants were senior undergraduate and graduate students between 21 and 25 years old, 110 people in total, divided into two groups of 55, each with 28 women and 27 men. College students were chosen as subjects mainly for the following reasons: first, a 2013 survey found that 85.43% of surveyed college students said "I like traveling", and college students often use the Internet and like to travel; second, the high homogeneity of undergraduates and postgraduates effectively controls demographic heterogeneity between the contrast and control groups.
(2) Experiment Preparation
First, all testers were surveyed about the platforms and channels through which they obtain first-hand tourist information on the Internet. 90.9% of testers collect information from platforms carrying tourism destination IWOM, including Ctrip, Qyer, and Ma Feng Wo, tourism BBS where tourists post travel experiences; 89.1% use Weibo, 60% use WeChat Moments and subscription accounts, and 1.8% use other channels. Because of the privacy of WeChat Moments, word-of-mouth authors there are at a close social distance to their readers. The experiment therefore simulates travel-site posts and micro-blog platform scenes to measure the participants' purchase intention at different temporal and social distances.
First of all, by reviewing word-of-mouth studies, the author summarized the main attributes of word-of-mouth quality and quantity. Word-of-mouth quantity is evaluated mainly from three aspects [18]: (1) destination-related IWOM appears on multiple online platforms; (2) multiple IWOM items about the destination appear on a common platform; (3) the destination's IWOM is forwarded frequently. Word-of-mouth quality is measured by a comprehensive evaluation of relevance, integrity, interest, and professionalism.
On this basis, the author collected online word-of-mouth for related tourism destinations. To prevent destination names from interfering with subjects' purchase intention, we use A and B to denote the tourist destinations; both simulate ancient-town scenic spots, ruling out disturbance from consumer preferences. Destination A's IWOM reflects word-of-mouth quantity, and destination B's IWOM reflects word-of-mouth quality. The word-of-mouth simulates postings on three network platforms: Weibo, Ctrip, and Ma Feng Wo. Testers were shown 12 IWOM items about A (4 per platform) and 3 items about B (1 per platform). The experiment also obscured the names of the related sites.
(3) Experimental Design
The experiment has two steps: the first measures different temporal distances and the second different social distances. Subjects' purchase intentions toward destinations A and B are measured on a three-item scale designed by Schiffman et al. [17]: (1) I am willing to travel to the area; (2) I would like to recommend the region's tourism to others; (3) I would be happy to try traveling to the area.
Before answering the questionnaire, subjects read the IWOM, which simulated
the format of tourism BBS and micro-blog postings. The scale is a 7-point Likert
scale from "strongly disagree" (1) to "strongly agree" (7). For the collected
questionnaires, Cronbach's Alpha = 0.767 > 0.7, indicating that reliability is
within a reasonable range.
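For reference, Cronbach's alpha can be computed directly from an item-score matrix. The responses below are invented for illustration (the study's raw data are not public):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Invented 7-point responses to the three purchase-intention items
# (rows: subjects, columns: items).
scores = np.array([
    [6, 5, 6], [4, 4, 5], [7, 6, 6], [3, 3, 4],
    [5, 5, 5], [2, 3, 2], [6, 6, 7], [4, 5, 4],
])
alpha = cronbach_alpha(scores)   # about 0.95 for these consistent answers
```

A value above 0.7, as reported in the text, is conventionally taken as acceptable internal consistency.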
Because tourism, unlike retail purchases, must be coordinated with holidays, the
authors used the timing of the subjects' next holiday to manipulate temporal
distance: "this weekend" and "in one year" serve as the two temporal-distance
conditions. The experiment also announced the identities of the two word-of-mouth
senders: a 53-year-old office worker and a 23-year-old college student,
respectively.
Experiment 1: SPSS 19.0 was used for data analysis and processing. Demographic
variables such as gender and age did not produce any significant effect and were
therefore not included in the model. For the mixed repeated-measures analysis of
variance, Box's M test gave F(2, 108) = 0.174 (p > 0.05), so the covariance
matrices pass the homogeneity test; in Levene's test for equality of error
variances, word-of-mouth quality (p = 0.937) and word-of-mouth quantity
(p = 0.094) were both greater than 0.05 and pass the test. The descriptive
statistics in Table 1 show an obvious interaction between word-of-mouth quality
and quantity.
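As a rough illustration, the variance-homogeneity check can be reproduced with SciPy on simulated scores; the group means, standard deviations and group sizes below are invented, since the original data are not public:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated purchase-intention scores (1-7) for the quality- and
# quantity-condition groups; all parameters here are invented.
quality = np.clip(rng.normal(5.2, 1.0, 55), 1, 7)
quantity = np.clip(rng.normal(4.1, 1.0, 55), 1, 7)

# Levene's test for equality of error variances; p > 0.05 means the
# homogeneity assumption required by the mixed ANOVA is not rejected.
stat, p = stats.levene(quality, quantity)
```

The same pattern (one call per pair of groups) covers both dependent variables reported in the text.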
The interaction between Internet word of mouth and temporal distance
(η = 0.645) accounts well for the variation in the model. As shown in Table 1
and Fig. 3, in the distant-future purchase situation, the purchase intention of
subjects who read the IWOM of tourism destination B (reflecting word-of-mouth
quality) is significantly higher than that for tourism destination A (reflecting
word-of-mouth quantity); in the near-future purchase situation, the purchase
intention of subjects who read the IWOM of destination A (reflecting
word-of-mouth quantity) is significantly higher than that for destination B
(reflecting word-of-mouth quality). Hypothesis 1 is validated.
Effects of Internet Word-of-Mouth of a Tourism Destination 327

[Fig. 3: purchase intention (1–7) under the word-of-mouth quality and quantity conditions]

Because individuals perceive social distance differently, the experiment controls
social distance through the reviewers' background information. In the far
social-distance group, participants were told the commentator was a 53-year-old
office worker (a social identity different from the subjects'); in the near
social-distance group, participants were told the commentator was a 23-year-old
college student (a social identity highly similar to the participants').
Social distance was measured with the following scale: (1) I think the
word-of-mouth author and I are similar; (2) I feel psychologically close to the
word-of-mouth author; (3) I think the word-of-mouth author and I belong to the
same group. Using a 7-point Likert scale, perceived closeness to the 23-year-old
college student (M = 5.26) is significantly higher than to the 53-year-old
office worker (M = 3.44), showing that the social-distance manipulation is
effective.
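This manipulation check amounts to an independent-samples t-test. A sketch with simulated scores centered on the reported means (the standard deviation and group sizes are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated 7-point perceived-closeness scores; the group means follow the
# reported M = 5.26 (student sender) and M = 3.44 (worker sender), while
# the standard deviation and group size (55 each) are invented.
student_sender = rng.normal(5.26, 1.0, 55)
worker_sender = rng.normal(3.44, 1.0, 55)

# Independent-samples t-test as a manipulation check for social distance.
t, p = stats.ttest_ind(student_sender, worker_sender)
```

With a mean gap of 1.82 scale points, the test is comfortably significant, matching the paper's conclusion that the manipulation worked.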
Box's M test gave F(2, 108) = 0.100 (p > 0.05); in Levene's test, word-of-mouth
quality (p = 0.803) and word-of-mouth quantity (p = 0.07) were both greater
than 0.05 and pass the test. The interaction between Internet word of mouth
and social distance (η = 0.531) explains the variation in the model well.
As shown in Table 2 and Fig. 4, when the social distance between the senders
and receivers of IWOM is far, the subjects' purchase intention for tourism
destination B (reflecting word-of-mouth quality) is significantly higher than
that for tourism destination A (reflecting word-of-mouth quantity). When
readers feel the social distance is close, that is, when they believe they have
a similar social identity, the purchase intention for destination A (reflecting
word-of-mouth quantity) is significantly higher than for destination B
(reflecting word-of-mouth quality). Hypothesis 2 is validated.

Fig. 4. Impact of word-of-mouth quality and quantity on purchase intention of
consumers at different social distances
References
1. Arndt J (1967) Role of product-related conversations in the diffusion of a new
product. J Mark Res 4(3):291–295
2. Baber A, Ramayah T et al (2015) Online word-of-mouth antecedents, attitude and
intention-to-purchase electronic products in Pakistan. In: USENIX conference on
USENIX technical conference, pp 307–318
3. Bailey AA (2004) Thiscompanysucks.com: the use of the internet in negative
consumer-to-consumer articulations. J Mark Commun 10(3):169–182
4. Balaji MS, Khong KW, Chong AYL (2016) Determinants of negative word-of-
mouth communication using social networking sites. Inf Manage 53(4):528–540
5. Bar-Anan Y, Liberman N, Trope Y (2006) The association between psychological
distance and construal level: evidence from an implicit association test. J Exper
Psychol Gener 135(4):609
6. Bussière D (2015) Evidence and implications of electronic word of mouth. Springer
7. Chu SC, Kim Y (2011) Determinants of consumer engagement in electronic word-
of-mouth (eWOM) in social networking sites. Int J Advertising 30(1):47–75
8. Dong D (2012) Contrast analysis of word-of-mouth, internet word-of-mouth and
word-of-mouse. Chin J Manage 3:428–436 (in Chinese)
9. Grant R, Clarke RJ, Kyriazis E (2007) A review of factors affecting online consumer
search behaviour from an information value perspective. J Mark Manage 23(5):519–
533
10. Guo G (2007) The impact of word-of-mouth on consumer’s attitude: a theoretical
model. Manage Rev 6:20–26 (in Chinese)
11. Haixia Y (2015) The distribution of word-of-mouth across websites and online
sales: sensitivity analysis on entropy and network opinion leaders based on artificial
neural network. Econ Manage J 10:86–95 (in Chinese)
12. Haiyan C (2011) Study of communication on electronic word-of-mouth of destina-
tion. Master’s thesis, Wuhan University (in Chinese)
13. Hennig Thurau T, Gwinner KP et al (2004) Electronic word-of-mouth via
consumer-opinion platforms: what motivates consumers to articulate themselves
on the internet. J Interact Mark 18(1):38–52
14. Jinjing X (2015) The role of temporal distance and social distance in the consumer
decision-making: evidence from eye movements. Master’s thesis, Jinan University
(in Chinese)
15. Kah JA, Choongki L, Seonghoon L (2016) Spatial-temporal distances in travel
intention-behavior. Ann Tourism Res 57:160–175
16. Liu Y (2013) Word of mouth for movies: its dynamics and impact on box office
revenue. J Mark 70(3):74–89 (in Chinese)
17. Schiffman LG, Leslie LK (2000) Consumer behavior. Prentice Hall, Upper Saddle
River
18. Schubert P, Selz D (1999) Web assessment - measuring the effectiveness of elec-
tronic commerce sites going beyond traditional marketing paradigms. In: Hawaii
international conference on system sciences, p 5040
19. Trope Y, Liberman N, Wakslak C (2007) Construal levels and psychological dis-
tance: effects on representation, prediction, evaluation, and behavior. J Consum
Psychol 17(2):83–95
20. Wenming Z, Xu W, Chang F (2014) Relation between electronic word of mouth and
purchase intention in social commerce environment: a social capital perspective.
Nankai Bus Rev 04:140–150 (in Chinese)
21. Xuan Z (2012) The influence of temporal distance on consumers’ online impulse
buying behavior. Master’s thesis, Huazhong University of Science and Technology
(in Chinese)
22. Zainal NTA, Harun A, Lily J (2017) Examining the mediating effect of attitude
towards electronic word-of-mouth (eWOM) on the relation between the trust in
eWOM source and intention to follow eWOM among Malaysian travellers. Asia
Pacific Management Review
23. Zheng C, Han Q, Wang H (2015) How do paid posters’ comments affect your
purchase intention. Nankai Bus Rev 01:89–97 (in Chinese)
24. Zhu X, Li A (2014) Determinants of consumer engagement in electronic word-of-
mouth (eWOM) in social networking sites. PhD thesis, Guangdong University of
Technology (in Chinese)
25. Zou P, Hao L, Li Y et al (2014) Impacts of perceived temporal distance-based sales
promotion on purchasing behaviors under information asymmetry. J Manage Sci
01:65–74 (in Chinese)
Analysis and Prediction of Population Aging
Trend Based on Population Development Model
Jiancheng Hu(B)
1 Introduction
Population aging is a phenomenon that occurs when the median age of a country
or region increases due to rising life expectancy and/or declining fertility rates.
Population aging is becoming one of the global problems of modern times,
but with different features from region to region and from country to country.
Recent declines in fertility rates and increases in life spans are producing a
significant shift in the age distribution of the population [4]. Aging of the
population is closely associated with increased life expectancy, which is mainly
determined by better-quality health services, new discoveries in pharmaceutics
and improved treatment of childhood diseases. Recent research by the United
Nations reveals that the proportion
of people aged 60 and over is growing faster than any other age group and it
is expected to reach 1 billion by 2020 and almost 2 billion by 2050 (representing
22% of the global population). The proportion of individuals age 80 or over is
projected to rise from 1 to 4% of the global population by 2050 [5]. Figure 1
shows the change in the age structure of the world’s population over time. Each
shaded section reveals the age distribution at one point in time. The elderly
share will be much higher in 2050 than it is now.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 27
Population aging is one of the important factors that influence social and
economic development of a country. Since different age groups have different
economic needs and productive capacities, a country’s economic characteristics
may be expected to change as its population ages. Therefore, a correct forecast
of future population trends provides significant guidance for the economic
planning of national and local governments.
In recent years, many articles have paid increasing attention to predicting
population aging and to family policies for dealing with the aging problem,
using, for example, the gray recurrence dynamic model with equal dimension [9],
overlapping generations models [1,7], the logistic population model [2],
predator-prey population models [8,11,12], stochastic differential equations
[10] and so on. To analyze and predict the population trend of Sichuan, China,
this paper presents a population development equation model of the kind
previously applied to the population of China [3,6].
By the United Nations' old standard, a country with more than 10% of its
population over 60 years old is an aging society; by the new standard, the
population aged 65 and above exceeds 7% of the total. In fact, according to the
population census, by November 2016 there were 88.11 million people aged over
65 in the Chinese mainland, accounting for 6.96% of the total population, while
the population aged over 60 numbered 130 million, or 10.20%.
This paper takes the 6th census data of Sichuan as its basis and uses a
population development equation model to predict population development in
Sichuan. The model is used to analyze the short-term population aging trend in
Sichuan, and further to predict the long-term population development trend and
the change in the aging population under different total fertility rates.
Let p(r, t) denote the population density at age r in year t. Consider the
population in the age interval [r, r + Δr) from time t to t + Δt: the survivors
move into the age interval [r + Δt, r + Δr + Δt), and the number of deaths is
μ(r, t)p(r, t)ΔrΔt, where μ(r, t) is the mortality rate at age r in year t.
Then we have

p(r + Δt, t + Δt)Δr − p(r, t)Δr = −μ(r, t)p(r, t)ΔrΔt, (1)

or

[p(r + Δt, t + Δt) − p(r, t + Δt)]/Δt + [p(r, t + Δt) − p(r, t)]/Δt = −μ(r, t)p(r, t), (2)
When time increases by Δt, age also increases by Δt, that is, Δr = Δt. Letting
Δt = Δr → 0 in Eq. (2) gives

∂p(r, t)/∂r + ∂p(r, t)/∂t = −μ(r, t)p(r, t). (3)
Given the initial and boundary conditions, that is, the initial population
density p(r, 0) = p0(r) and the number of newborns p(0, t) = f(t), the
continuous model of the population development equation is obtained:

∂p(r, t)/∂r + ∂p(r, t)/∂t = −μ(r, t)p(r, t),  0 ≤ r ≤ rm, t ≥ 0,
p(r, 0) = p0(r),  0 ≤ r ≤ rm,
p(0, t) = f(t),  t ≥ 0,
p(rm, t) = 0,  t ≥ 0.  (4)
In order to study the population dynamics of each age over time, Eq. (4) is
discretized with respect to the variables r and t. Based on Eq. (1), the
population Nr(t) of age r in year t can be written as

Nr+1(t + 1) = sr(t)Nr(t),  r = 0, 1, ..., rm − 1, (5)
N0(t + 1) = s0(t)b(t), (6)

where sr(t) and s0(t) are the survival rate of the age-r population in year t
and the infant survival rate respectively, both of which can be calculated from
the population census data, and b(t) is the number of births in year t. For the
reproductive age interval [r1, r2], b(t) can be described as

b(t) = Σ_{r=r1}^{r2} br(t)kr(t)Nr(t), (7)
where br(t) is the fertility rate of childbearing-age females at age r in year
t, i.e. the average number of infants born to each r-year-old female in year t,
and kr(t) is the gender ratio, i.e. the proportion of r-year-old females in the
total population in year t.
By representing the population of each age in year t as a vector, let

N(t) = [N0(t), N1(t), · · · , Nrm(t)]ᵀ, (8)

so that the discrete model takes the matrix form

N(t + 1) = [A(t) + B(t)]N(t), (9)

where
A(t) =
⎡ 0      0      ···  0          0 ⎤
⎢ s1(t)  0      ···  0          0 ⎥
⎢ 0      s2(t)  ···  0          0 ⎥
⎢ ⋮      ⋮      ⋱    ⋮          ⋮ ⎥
⎣ 0      0      ···  srm−1(t)   0 ⎦ ,  (10)

B(t) =
⎡ 0 ··· 0  b*r1(t) ··· b*r2(t)  0 ··· 0 ⎤
⎢ 0 ··· 0  0       ··· 0        0 ··· 0 ⎥
⎢ ⋮     ⋮  ⋮           ⋮        ⋮     ⋮ ⎥
⎣ 0 ··· 0  0       ··· 0        0 ··· 0 ⎦ ,  (11)
b*r(t) = β(t)s0(t)sr(t)kr(t)hr(t),  r = r1, · · · , r2, (12)

where β(t) is the total fertility rate, i.e. the average number of infants born
to each female in year t, and hr(t) is the female fertility mode, the fertility
weight of r-year-old females, satisfying Σ_{r=r1}^{r2} hr(t) = 1. Then br(t)
can also be written as

br(t) = β(t)hr(t). (13)
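The recursion that advances N(t) through the matrices A(t) and B(t) can be sketched as follows. This is a deliberately small toy, with five age groups and invented survival and fertility rates, not the Sichuan census data:

```python
import numpy as np

def project(N0, s, bstar, steps):
    """Iterate N(t+1) = (A + B) N(t) with constant rates.

    N0    -- initial age-structured population vector
    s     -- survival rates placed on the sub-diagonal of A
    bstar -- effective fertility b*_r per age (zero outside [r1, r2]),
             forming the first row of B
    """
    m = len(N0)
    A = np.zeros((m, m))
    A[np.arange(1, m), np.arange(m - 1)] = s   # survival to the next age
    B = np.zeros((m, m))
    B[0, :] = bstar                            # newborns from each age group
    M = A + B
    N = np.asarray(N0, dtype=float)
    history = [N.copy()]
    for _ in range(steps):
        N = M @ N
        history.append(N.copy())
    return np.array(history)

# Toy example: 5 age groups, hypothetical survival and fertility rates.
hist = project(
    N0=[100.0, 95.0, 90.0, 80.0, 60.0],
    s=[0.99, 0.98, 0.95, 0.85],
    bstar=[0.0, 0.6, 0.5, 0.0, 0.0],
    steps=10,
)
totals = hist.sum(axis=1)   # total population in each projected year
```

In the paper's setting the matrices would be built from census survival rates, the gender ratio and the fertility mode rather than these made-up numbers.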
As the data provided by the 6th census of Sichuan are limited, and many factors
influence population development, it is relatively difficult to obtain the rule
by which the mortality rate function and the fertility mode change over time.
The fertility policy will not change over a relatively short period, so under
stable social conditions we suppose that the survival rate sr(t), fertility
mode hr(t) and gender ratio kr(t) remain unchanged. The data published from the
6th census of Sichuan give a current total fertility rate of β(t) = 1.075.
Under these assumptions, we use the 2010 6th-census data of Sichuan as the base
to predict the population of Sichuan over the next six years. The prediction
results are shown in Table 1.
The prediction results show that the population of Sichuan is still increasing.
After Guangdong, Shandong and Henan, Sichuan is the fourth most populous
province in the country, and also one of the provinces with the most serious
aging. Although the census data show a relatively low fertility rate, the total
population still grows continuously.
From the population proportions of each age group calculated in Table 2, we can
see that the proportions of the aged and of children still increase
continuously, while the proportion of adults continuously decreases. In 2010,
people under 14 years of age accounted for 16.97% of the total population,
0.37 percentage points above the national average; the population aged 15–64
accounted for 72.08%, 2.45 percentage points below the national average; and
the population aged 65 and over accounted for 10.95% of the province's total,
2.08 percentage points above the national average.
The aging index (AI, sometimes referred to as the elder-child ratio) is a
common indicator of changes in age structure, defined as the number of people
aged 65 and over per 100 young people aged 0–14 [10]. In 2000, only a few
countries (Germany, Greece, Italy, Bulgaria, and Japan) had more elderly than
youth (aging index above 100). To date, aging indexes are much lower in
developing countries than in the developed world, but the proportional rise in
the aging index is expected to be greater in the developing world.
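The elder-child ratio is a one-line computation, and because it is a ratio, percentage shares can stand in for head counts. Using the Sichuan 2010 shares quoted above (10.95% aged 65 and over, 16.97% aged 0–14):

```python
def aging_index(pop_65_plus: float, pop_0_14: float) -> float:
    """People aged 65 and over per 100 people aged 0-14."""
    return 100.0 * pop_65_plus / pop_0_14

# Sichuan 2010 shares from the text; the common denominator cancels.
ai = aging_index(10.95, 16.97)   # about 64.5: still more children than elderly
```

An index below 100 means children still outnumber the elderly, so Sichuan in 2010 had not yet crossed the threshold that Germany, Greece, Italy, Bulgaria and Japan crossed by 2000.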
From Fig. 3 we can see that with a high total fertility rate the total
population will increase continuously in the future. When the total fertility
rate stays low, the total population grows for a short term and then decreases
steadily. With a medium total fertility rate, the total population remains
stable over the next 50 years. Meanwhile, the predicted proportion of the aging
population over the next 65 years is shown in Fig. 4. As Fig. 4 shows, with a
low total fertility rate the proportion of the aging population will increase
continuously and finally exceed 25% in 2060. With a medium total fertility
rate, the aging population increases slowly and stabilizes around 22% after
30 years. When the total fertility rate is high, the proportion of the aging
population grows relatively slowly, and after 30 years of growth gradually
stabilizes below 20%.
The prediction results show that a relatively high fertility rate will keep
population aging at a relatively low level, but will cause rapid population
growth. For example, over the period 2020–2040, with a low total fertility rate
the proportion of the aging population rises from 16% to 26% while the total
population decreases from 83 million to 77 million. Meanwhile, with a high
total fertility rate the proportion of the aging population increases by only
6 percentage points from 16%, while the total population increases by
13 million from 83 million. The prediction indicates that only when the total
fertility rate is kept at 2.0 can a relatively good outcome appear.
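The qualitative trade-off between aging and growth can be reproduced with a deliberately crude three-group model (young, working-age, old; 20-year steps; all rates invented, not the paper's model): scaling fertility down raises the eventual old-age share, scaling it up lowers that share while inflating the total.

```python
import numpy as np

def old_share_after(beta_scale: float, steps: int = 2) -> float:
    """Project a toy three-group model and return the final old-group share.

    beta_scale rescales fertility; every rate below is invented.
    """
    survival = [0.98, 0.90]              # young -> working, working -> old
    births_per_worker = 0.9 * beta_scale
    M = np.array([
        [0.0,         births_per_worker, 0.0],
        [survival[0], 0.0,               0.0],
        [0.0,         survival[1],       0.0],
    ])
    N = np.array([30.0, 45.0, 15.0])     # initial group sizes (millions, say)
    for _ in range(steps):
        N = M @ N
    return N[2] / N.sum()

low, medium, high = (old_share_after(b) for b in (0.6, 1.0, 1.3))
# Lower fertility -> larger eventual old-age share, mirroring the scenarios.
```

The ordering low > medium > high mirrors the paper's three fertility scenarios, though the magnitudes here carry no empirical meaning.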
The rate of population aging may also be modulated by migration. Immigration
usually slows population aging, because immigrants tend to be younger and have
more children; on the other hand, emigration of working-age adults accelerates
it. The 6th census shows that in 2010 the population mobility rate of Sichuan
was 18.92%, only 0.59 percentage points below the national average. The
emigration rate of Sichuan was 23.24%, and the immigration rate was 14.59%.
4 Conclusions
This paper predicts the population structure of Sichuan over the next six years
by establishing a population development equation. The results show that the
total population of Sichuan will continue to increase and that population aging
will be increasingly aggravated. Population aging is an inevitable population
problem, and the growth of the aging population will bring a series of problems
to society; it cannot be solved through population policy alone, but also
requires old-age support policy, social security policy, adjustment of the
economic structure, etc.
In order to analyze the effect of the fertility rate on population aging, we
predict the population structure of Sichuan over the next 50 years under
different fertility rates. The results indicate that an appropriate fertility
rate will help regulate the population structure and ease population aging.
When the total fertility rate is 2.0, the future population development of
Sichuan will be relatively stable, and the trend of population aging will slow
and finally hold at the level of 22%.
References
1. Abdessalem T, Cherni HC (2016) Macroeconomic effects of pension reforms in
the context of aging populations: overlapping generations model simulations for
Tunisia. Middle East Dev J 8(1):84–108
2. Alves CO, Delgado M et al (2015) Existence of positive solution of a nonlocal
logistic population model. Zeitschrift für angewandte Mathematik und Physik
66(3):943–953
3. Cui Q, Lawson GJ, Gao D (1984) Study of models of single populations: develop-
ment of equations used in microorganism cultures. Biotechnol Bioeng 26(7):682–
686
4. DESA (2013) World population aging 2013. Technical report, PD Department of
Economic and Social Affairs (DESA), New York
5. DESA (2013) World population prospects: the 2012 revision, key findings and
advance tables. Technical report, PD Department of Economic and Social Affairs
(DESA), New York
6. Liu M, Liao D et al (2013) The correctional model of population development
equation. Model Numer Simul Mater Sci 03(4):139–141
7. Muto I, Oda T, Sudo N (2016) Macroeconomic impact of population aging in
Japan: a perspective from an overlapping generations model. IMF Econ Rev
64:408–442
8. Novozhilov AS (2004) Analysis of a generalized population predator-prey model
with a parameter of normal distribution over individuals of the predator popula-
tion. J Comput Syst Int 43(3):378–382
9. Shi LN, Huang HM et al (2009) Aging population trend prediction based on
the gray recurrence dynamic model with equal dimension. Ludong Univ J 25(4):
315–317
10. Yang F, Zhang LD, Shen JF (2013) Further discussion on population growth mod-
els by stochastic differential equations. Adv Mater Res 705:499–503
11. Zhang Y, Gao S et al (2015) On the dynamics of a stochastic ratio-dependent
predator-prey model with a specific functional response. J Appl Math Comput
48(1–2):441–460
12. Zhou A, Sattayatham P, Jiao J (2016) Analysis of a predator-prey model with
impulsive diffusion and releasing on predator population. Adv Differ Eqn 1:1–18
Evaluation of Progressive Team Intervention
on Promoting Physical Exercise Behavior
Xinyan Guo(B)
1 Introduction
In modern society, sports are ever more closely related to human survival and
development; their significance and function far exceed the original concepts
and categories. Physical activity can effectively change people's way of life
and prevent various chronic diseases, and moderate physical activity is one of
the simplest ways to improve and maintain health [3,17,18]. People pay much
more attention to the living environment, living styles and the consequent
changes in morbidity and mortality rates [16]. Research on health management
and promotion shows a close relationship between residents' living style and
physical health.
In Japan, less than 30% of people take frequent (usually for one year or more)
and regular (30 min per session, twice a week) physical exercise. A survey in
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 28
Australia showed that 53% of adults did not take regular exercise, and their
sitting time was 4.7 h per day or more [10]. High-income countries like the
U.S. and U.K. show the same pattern: in the U.K., only 39% of male and 29% of
female adults exercised regularly, and the participation rate declined with
changes in age, work and family [1,11,12].
There are many exercise behavioral intervention studies. Geoff et al. (2010)
conducted a pre-post survey of 200 British female undergraduates who did not
attend any physical exercise, and analyzed the perceived benefits and barriers
[13]. This study indicated that interventions should give priority to resolving
the individually perceived barriers to participating in physical exercise; on
the other hand, interventions should emphasize the cognition that physical
exercise benefits health, thereby promoting changes of cognition and behavior.
To study the influence of PE classes on students' physical activities, Sallis
et al. designed a trial that randomly assigned 338 senior undergraduates to
intervention or control credit courses [19]. The results demonstrated that the
total volume of activity, including leisure-time, strength and flexibility
exercises, increased in the female group due to the intervention, which
facilitated females' participation in healthy physical exercise. However, no
significant effect was found in the male group, because the baseline survey
found that male students liked sports more than females did.
Regarding applied studies of stage-based intervention, Marcus et al. performed
an exercise intervention, the "Imagine Action" campaign, on community
volunteers to compare pre- and post-intervention stages; this study indicated
that 30% of participants in Contemplation and 60% of participants in
Preparation moved into Action, while 31% in Contemplation moved into
Preparation. The study also indicated that a less expensive and less intensive
exercise intervention can still be significantly effective [14]. Bogdan et al.
evidenced the effectiveness of stage-based intervention in a study of exercise
behavioral intervention for 560 orthotic patients [2]. Other researchers have
also shown the advantage of stage-matched over stage-non-matched interventions
using other approaches such as Internet-aided intervention [4].
However, to make such interventions function well, coordination and cooperation
are needed between individuals, families, society and various departments (such
as health and education). At the same time, success also depends on scientific
methods of behavior-promotion intervention [5,7].
In this paper, a progressive intervention is adopted to understand individual
characteristics, strengthen the awareness of individual self-management and
help individuals form good behavioral habits. Then, through baseline and
tracking data, the effectiveness of the intervention is analyzed to provide an
operational framework and theoretical guidance for health promotion, especially
for the "sub-health" population.
Sample gender distribution: male 47.1%, female 52.9%.
To ensure the quality of the intervention, staff training was carried out
before the test so that the intervention process would be complete, and all of
this work was carried out in accordance with the framework of the theoretical
hypothesis. The scale uses the 5-point Likert scoring method: the
perceived-barriers items are reverse-scored, while the remaining items are
scored positively, namely "strongly disagree" = 1 point, "disagree" = 2 points,
"not sure" = 3 points, "agree" = 4 points, "strongly agree" = 5 points.
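A sketch of this scoring rule; which items are reverse-keyed is taken from the text, while the helper name is ours:

```python
def score_item(raw: int, reverse: bool = False, points: int = 5) -> int:
    """Score one Likert item; reverse-keyed items map 1..points to points..1."""
    if not 1 <= raw <= points:
        raise ValueError("raw response out of range")
    return points + 1 - raw if reverse else raw

# Perceived-barriers items are reverse-scored; all other items score positively.
barrier_score = score_item(5, reverse=True)   # "strongly agree" on a barrier -> 1
plain_score = score_item(4)                   # "agree" -> 4
```

Reverse-keying before summing keeps a higher total score consistently meaning a more favorable disposition toward exercise.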
The effectiveness of physical exercise behavior at the present stage (including
the action stage and the intensity of action) and the related characteristics
were measured [6].
Based on the exercise behavior traits and stages of the residents, the
intervention handbook was composed of introduction, interpretation and
suggestions. The content is as follows:
(1) Health, sub-health and diseases
The aim of this part is to provide residents with essential concepts about
their physical fitness, laying a good foundation for a healthy life style. For
example, health can be divided into three states: health, sub-health and
disease. Sub-health is a state of neither health nor disease; its symptoms
include lack of energy and other forms of weakness, but they cannot be
diagnosed by clinical examination.
(2) Introduction of scientific exercise methods
This part introduces the benefits of regular exercise, including the
improvement of physical fitness, adjustment of psychological factors and so on,
and then explains what regular exercise is, how to control the amount of
exercise, how to warm up and how to eliminate fatigue quickly.
(3) The most common exercise method: aerobic exercise
The most common exercise is aerobic exercise, which improves cardiorespiratory
fitness and physical fitness. Methods and skills of aerobic exercise are
introduced in this part.
(4) Other exercise methods
These exercise methods need no complex techniques, fitness facilities or extra
costs, and at the same time are effective, feasible and popular with the
public: for example, jogging, running, bicycle riding, aerobics and fitness
paths.
(5) Exercise methods for different populations
People should perform exercises suited to their physical and psychological
conditions, such as age, gender and vocation. The main content of this part is
to provide distinct advice for different populations.
The study is a quasi-experimental design, owing to the large sample size and
the limited range of the exercise behavior intervention. To ensure the
rationality and integrity of the data analysis, the empirical research is
divided into two stages: the first stage is baseline survey data analysis, and
the second stage is tracking data analysis. The test procedure is shown in
Fig. 1.
Fig. 1 outlines the procedure:
- First measurement (beginning of the first month): baseline data test; setting
up the intervention means and distributing the intervention handbook.
- Second measurement (end of the second month): tracking data test;
intervention according to the procedure.
- Third measurement (end of the fourth month): tracking data test; intervention
according to the procedure.
- Fourth measurement (end of the sixth month): tracking data test; intervention
according to the procedure.
First, the baseline survey was performed with the permission of the subjects.
The questionnaire was distributed to the residents, the characteristics of
physical exercise behavior and basic information were measured, and the
intervention staff then carried out the intervention with the experimental
group of residents.
During the six-month behavior intervention, behavioral tests were carried out
every two months, four times in total. Both groups used the same questionnaire
scale. The survey was conducted in a narrative, step-by-step way of
introduction, explanation and recommendation.
3 Intervention Process
3.1 Intervention Evaluation
First, EpiData was used to establish a database, and all data were checked
after entry. Modeling and analysis could then be carried out only after a test
of normal distribution, in which the reference values of all samples were
within the criterion range: the absolute values of the skewness and kurtosis
coefficients should lie in the 0 to 1 range, and the Shapiro-Wilk test p-value
should be greater than 0.05, which demonstrated that the formal sample meets
the requirements of normal distribution [15].
Urban residents of Sichuan were randomly selected for the survey sample
(n = 343), and the two groups showed no significant difference in exercise
behavior or behavioral characteristic variables (p > 0.05).
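These screening criteria are straightforward to reproduce with SciPy on a stand-in sample (the distribution parameters below are invented; the real scale scores are not public):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-in for the n = 343 scale scores; mean and spread are invented.
sample = rng.normal(3.5, 0.8, 343)

skew = stats.skew(sample)            # screening rule: |skewness| < 1
kurt = stats.kurtosis(sample)        # excess kurtosis; rule: |kurtosis| < 1
w, p = stats.shapiro(sample)         # Shapiro-Wilk; p > 0.05 keeps normality

passes_screen = abs(skew) < 1 and abs(kurt) < 1 and p > 0.05
```

For a genuinely normal sample of this size the skewness and kurtosis checks pass almost surely, while the Shapiro-Wilk p-value behaves like a uniform draw and rejects about 5% of the time by construction.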
3.2 Follow-up
The key points of a complete physical exercise behavior intervention can be
summed up in five words: Ask, Assess, Advise, Assist and Arrange (the 5As).
4 Intervention Outcome
Based on the theoretical hypothesis, the exercise behavior intervention exper-
iment is implemented to analyze the changes of physical exercise behavior
between intervention group and the control group before and after the test
period.
First, the homogeneity-of-slopes assumption of the analysis of covariance was tested. An analysis-of-variance model was fitted in which the post-intervention behavioral-characteristic variables were the dependent variables and the group variable (1 = intervention group, 2 = control group) was the independent variable. The results showed that the interaction between the independent variable and the covariates was not significant (P > 0.05); that is, the assumption of slope homogeneity holds (Table 3).
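The homogeneity-of-slopes check can be sketched as a nested-model F-test on the group x covariate interaction, using plain least squares (no particular statistics package is assumed); the data below are simulated for illustration.

```python
import numpy as np
from scipy import stats

def slope_homogeneity_p(y, covariate, group):
    """P value of the group x covariate interaction (nested-model F-test).
    A non-significant P (> 0.05) supports the ANCOVA homogeneity-of-slopes
    assumption."""
    y = np.asarray(y, dtype=float)
    g = (np.asarray(group) == 2).astype(float)       # 1 = intervention, 2 = control
    x = np.asarray(covariate, dtype=float)
    n = len(y)
    X_full = np.column_stack([np.ones(n), g, x, g * x])  # with interaction term
    X_reduced = X_full[:, :3]                            # without interaction term

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    df1 = 1                        # one interaction parameter tested
    df2 = n - X_full.shape[1]
    F = (rss(X_reduced) - rss(X_full)) / df1 / (rss(X_full) / df2)
    return stats.f.sf(F, df1, df2)

# Simulated example: both groups share the same slope, so the
# interaction should not be significant
rng = np.random.default_rng(0)
cov = rng.normal(3.0, 1.0, 200)          # hypothetical pre-test covariate
grp = np.repeat([1, 2], 100)             # group labels
y = 0.5 * cov + 0.3 * (grp == 1) + rng.normal(0.0, 0.5, 200)
print(f"interaction P = {slope_homogeneity_p(y, cov, grp):.3f}")
```

Only if this interaction P value exceeds 0.05 does the subsequent ANCOVA comparison of the two groups proceed.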
The results showed no significant effect for previous perceived susceptibility, previous perceived severity, previous perceived benefits, previous perceived barriers, previous behavioral attitude, previous subjective norm, or previous perceived behavioral control (P > 0.05). However, some variables' effects were significant (P < 0.05), such as previous behavioral cues and previous behavioral intention. Results are shown in Table 4.
Evaluation of Progressive Team Intervention on Promoting Physical Exercise 347
Therefore, the exercise behavior variables of the two groups were compared and analyzed (Tables 5 and 6). The results indicated that the exercise behavior of the intervention group was significantly higher than that of the control group, which means the progressive intervention is effective in promoting the formation of physical exercise habits.
Table 5. Comparative analysis of the effectiveness before and after the intervention
5 Discussion
Although the residents hold a positive attitude toward the benefits of physical exercise, their limited understanding of their own physical condition and of authoritative arguments leaves them with no clear subjective feeling of the contrast between not participating and actively participating in fitness activities. The intervention group, however, improved in these aspects.
In the four follow-up tests, conducted according to the time schedule, the main purpose of the "Arrange" step is to understand whether the residents still persist in this behavior [9]. In the process of behavior intervention, work with people who did not participate in physical exercise should be divided into stages: conducting appropriate in-depth interviews with individuals, giving psychological guidance, emphasizing the importance of exercise behavior in their consciousness, enhancing their confidence in physical exercise through role demonstration or embodied contrast, and helping them realize the change of behavior as early as possible. To secure better cooperation from individual residents and access to information about physical exercise, the intervention staff may need to make special arrangements for the environment of the conversation, for example, placing health handbooks or posters about physical exercise and displaying sports souvenirs on the desk, so as to help residents accept the health intervention and to let them feel, during the communication, that physical exercise is very important.
6 Conclusions
This study indicated that individuals had different psychological and cognitive statuses and behavioral intention levels, and faced different difficulties and obstacles. According to the baseline data, the majority of urban residents were aware of the benefits of physical exercise but did not necessarily take part in fitness. In other words, the promoting effect of perceived benefits on behavioral intention still needs to be strengthened, for residents have many worries, such as the form and intensity of physical exercise, the training environment, and the time cost. As discussed above, although residents hold a positive attitude toward the benefits of physical exercise, their limited understanding of their own physical condition leaves them without a clear subjective feeling of the contrast between not exercising and exercising; the intervention group, however, improved in these aspects.
National fitness activities are mass fitness activities intended to promote physical and mental health. Urban residents, as the main subject of national fitness activities, directly influence the development of mass sports through their views of community sports and their support for and participation in decision-making. The progressive intervention mode can help people clarify the relationship between physical exercise and health promotion and deepen their understanding of physical exercise behavior. In this study, the intervention was based on the physical exercise behavior intervention points (5As), and the results showed that progressive intervention is effective in promoting the formation of physical exercise habits.
References
1. Begg S, Khor S, Bright M (2003) Risk factor impact on the burden of disease and
injury in Queensland. Queensland Burden of Disease and Injury Circular Series
2. Bogdan RC, Biklen SK (1982) Qualitative Research for Education. Allyn and
Bacon, Boston
3. Physical Activity Guidelines for Americans Committee et al (2008) Physical Activ-
ity Guidelines for Americans. US Department of Health and Human Services,
Washington, DC
4. French KE, Nevett ME (1997) Knowledge representation and problem solution in
expert and novice youth baseball players. Res Q Exerc Sport 67(4):386–395
5. Guo X (2015) Application of diffusion of innovation theory on the intervention
research of exercise behaviors in urban residents. Mod Prev Med 31(3):202–208
6. Guo X (2015) Construction and Validation of the Integration Model of Planned
Behavior Theory and Health Belief Model. Springer, Heidelberg
7. Guo X (2016) Examination on the discontinuity of physical exercise behavior stages
and its implication for intervention. J Chengdu Sport Univ 42(3):42–47
8. Guo X, Xu J (2010) Making and testing of the scale to measure the change stage
for the exercising behavior of urban residents based on a survey into some Sichuan
urban communities. J Chengdu Sport Univ 5:71–74
9. Guo X, Zhang W (2017) Stages and Processes of Self Change of Exercise Behavior:
Toward an Integrative Model of Change. Springer, Singapore
10. Harper C (2008) Challenges and opportunities - key challenge 2: make prevention a key to reducing health inequality. In: The Health of Queenslanders 2008 - Prevention of Chronic Disease: Second Report of the Chief Health Officer, Queensland. Queensland Burden of Disease and Injury Circular Series
11. Department of Health (2005) Choosing activity: a physical activity action plan.
Healthex Specialist
12. Department of Health (2005) Choosing health: making healthy choices easier. Pri-
mary Health Care
13. Lovell GP, Ansari WE, Parker JK (2010) Perceived exercise benefits and barriers
of non-exercising female university students in the United Kingdom. Int J Environ
Res Pub Health 7(3):784–798
14. Marcus BH, Banspach SW (1992) Using the stages of change model to increase the
adoption of physical activity among community participants. Am J Health Promot
AJHP 6(6):424–429
15. Markus KA (2012) Principles and Practice of Structural Equation Modeling by Rex B. Kline. Struct Equ Model Multidisciplinary J 19(3):509–512
16. Mesbah M (2016) Measurement and analysis of quality of life related to environ-
mental hazards: the methodology illustrated by recent epidemiological studies. Int
J Manage Sci Eng Manage 11(2):1–16
17. World Health Organization (2008) Prevalence of insufficient physical activity, age
15+, age-standardized: both sexes. World Health Organization, Geneva
18. World Health Organization (2013) Bull World Health Organ. World Health Orga-
nization
19. Sallis JF, Calfas KJ (1999) Evaluation of a university course to promote physical
activity: project grad. Res Q Exerc Sport 70(70):1–10
SEM-Based Value Generation Mechanism
from Open Government Data
in Environment/Weather Sector
1 Introduction
Environmental issues are harmful effects of human activities on the biophysical environment. Major current environmental issues include climate change, pollution, environmental degradation, and resource depletion. Global warming, or climate change as it is now commonly called, is the greatest environmental threat we have ever faced. There is little doubt that climate change is contributing to the extreme weather disasters we have been experiencing, such as worsening air quality, searing heat, and more frequent extreme weather events (e.g. raging storms, ferocious fires, severe droughts, and punishing floods). It threatens our health, communities, economy, and national security [7]; for example, the climate change-related heat mortality in the first half of 2012 is a part
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 29
352 X. Song et al.
of a deadly trend. In 2011, at least 206 people died from extreme heat, up from 138 fatalities in 2010 and nearly double the 10-year average, according to the National Oceanic and Atmospheric Administration [8]. If we do not do more to reduce fossil fuel emissions and other heat-trapping greenhouse gases that are making heat waves more intense, more than 150,000 additional Americans could die by the end of this century from excessive heat. Heat-related death is just one deadly side effect of extreme weather tied to climate change [7]. Other extreme events have side effects as well: storms can cause drowning, contaminate drinking water, and trigger outbreaks of infectious diseases, while heat and ozone smog increase respiratory diseases (e.g. asthma) and worsen the health of people suffering from cardiac or pulmonary disease.
According to the NRDC (2015), global warming has six main side effects on human health: more vector-borne disease, more water-borne illness, longer allergy seasons, declining crops, air pollution, and more heat waves [7]. Therefore, as we work to stop climate change and prevent these associated impacts, we must also develop policies to minimize human suffering.
At present, nations worldwide have begun taking steps to combat this growing threat, working toward an international agreement in which every country on earth plays its part. Many of the world's largest polluters have stepped up with significant commitments, amplified by efforts from cities, businesses, sports leagues, non-governmental organizations (NGOs), and many other individuals and groups that have responded to the urgent need for climate action. For example, since carbon pollution fuels climate change, drives extreme weather, threatens communities, and cuts too many lives short, the U.S. Environmental Protection Agency has developed the Clean Power Plan, which sets the first national limits on carbon pollution from power plants and provides states with the flexibility to meet them.
The NRDC has suggested further solutions to address global warming, such as setting limits on global warming pollution; investing in green jobs and clean energy; driving smarter cars; creating green homes and buildings; and building better communities and transportation networks. These solutions will motivate lawmakers to stop ignoring climate change and start limiting the carbon pollution that is heating our planet and increasing the intensity of extreme weather. However, the government cannot be expected to do everything to protect the environment; cooperating with the private sector as a partner can make sense, based on the principles of partnership and appropriate process.
Opening data is recognized as a driver of efficiency and a vehicle for increasing
transparency, citizen participation and innovation in society. Open data can help
improve food security, healthcare, education, cities, the environment, and other
public and private services that are critical to development. The Taiwan Environmental Protection Agency (TEPA) views environmental data as a strategic asset for protecting the environment. At present, more and more actors are engaged in the open data movement for different purposes, in particular emerging private companies; only with comparable data within and across countries can new approaches for understanding the economic and social impact of open government data be generated. The Open Data 500 U.S.A. is the first comprehensive study of U.S. companies that use OGD to generate new business and develop new products and services [6]. Similarly, the Open Data 500 Korea, Open Data 150 Canada, Open Data 500 Australia, Open Data 500 Mexico, and Open Data 500 Italy projects have been launched. Many companies from different sectors (e.g. healthcare, food and agriculture, finance) have developed and generate value based on OGD from the department of defense, the department of energy, and other agencies. Our study shows that many environmental companies use OGD to generate value, including economic, social, and environmental value.
Despite the potential significance of OGD, emphasized by an abundance of anecdotal evidence, we could not identify many studies on how OGD contributes to value generation. To date, the economic and social impact of open-data policies remains largely unclear, and there are scant empirical data available on the effects of the various policy approaches, leaving policy makers without the facts they need to assess and improve these policies. Some studies have illustrated how data are used as a key resource to enhance core values [5,9,10]. However, more studies are needed to figure out how OGD is used as a key resource to generate value in specific domains, such as the environment/weather sector. This study therefore surveys the landscape of OGD-driven environmental companies in the USA, Australia, and South Korea and establishes a conceptual structural equation model for further validation.
Many countries are entering the mainstream of the open data movement. A number of open data benchmarks have been developed, such as the World Bank's Open Data Readiness Assessment (ODRA), the World Wide Web Foundation's Open Data Barometer (ODB, 2015), the Open Knowledge Foundation Network's (2014) Open Data Index (ODI), and Capgemini Consulting's (2013) Open Data Economy (ODE), to name a few global and widely used benchmarks. However, each of these benchmarks serves a different purpose and focus. Susha et al. (2014) suggest that the ODB provides a more comprehensive perspective, since it not only includes measures at various stages (readiness, implementation, and impact) but also highlights the importance of the involvement of major stakeholders and the challenges throughout the open data process [9]. According to the ODB (2016), a scaled score indicates how well a country is doing against other countries in getting the basics of open data readiness, implementation, and impact right [4]. According to the ODB (2014) technical handbook, when evaluating the ODB score, the openness of environmental data (element D14) is assessed based on data on one or more of: carbon emissions, emission of pollutants (e.g. carbon monoxide, nitrogen oxides, particulate matter), and deforestation. This study takes 34 companies from the US, Australia, and South Korea into consideration for two reasons. On the one hand, the ODB rankings (as demonstrated in Fig. 2) show that the US, Australia, and South Korea are among the top ten countries in terms of open data readiness, implementation, and impact. On the other hand, owing to the high data openness in these three countries, many environmental companies participated in the Open Data 500 USA, Open Data 500 Australia, and Open Data 500 Korea projects.
The 34 OGD-driven companies in the environment/weather sector are surveyed and partially shown in Table 1, which presents the social impact, revenue (economic impact), and environmental impact of the companies. Note that the 34 companies use OGD from various official sources, and their main purpose is to address severe environmental issues. The environmental issues to be addressed and other missions to be accomplished are also listed.
The aim of this study is to understand the causal relationship behind why OGD is beneficial for value generation from the environmental, social, and economic aspects (Table 2).
Table 1. OGD-driven companies in the environment/weather sector (partial)
1. AccuWeather (State College, PA, US), http://www.accuweather.com. Environmental impacts: environmental protection, climate change prevention. Social impacts: environment and climate change. Revenue sources: subscriptions. Data sources: National Weather Service, monitoring stations. Purposes: (1) provide weather forecasts; (2) deliver enterprise solutions; (3) build cooperative relationships between government weather agencies and the weather industry.
2. CoolClimate (Berkeley, CA, US), http://coolclimate.berkeley.edu. Environmental impacts: environmentally sustainable development. Social impacts: cleaner living environment. Revenue sources: data analysis for clients, database licensing, user fees for web or mobile access. Data sources: multiple government open data sources (e.g. environmental records, land use information). Purposes: (1) provide decision-making tools and programs; (2) design tailored climate solutions; (3) motivate low-carbon choices; (4) design climate actions and programs.
3. Earth Networks (Germantown, MD, US), www.earthnetworks.com. Environmental impacts: extreme weather (e.g. tornadoes, cyclones), climate change detection. Social impacts: environment and climate change. Revenue sources: weather data analysis for organizations. Data sources: neighborhood-level sensors, National Oceanic and Atmospheric Administration, National Weather Service. Purposes: (1) operate weather observation, lightning detection, and climate (greenhouse gas) networks; (2) enable enterprises and consumers to make informed decisions.
4. Environmental Data Resources (Milford, CT, US), http://www.edrnet.com. Environmental impacts: environmental problem reporting. Social impacts: environment and climate change. Revenue sources: data collection outsourcing, data management for clients. Data sources: U.S. Environmental Protection Agency, multiple government open data sources (e.g. environmental records, land use information). Purposes: (1) provide robust data; (2) provide smarter workflow tools for data management.
5. Bass Coast Landcare Network Inc (Bass, VIC, AU), http://www.basscoastlandcare.org.au. Environmental impacts: environmentally sustainable development. Social impacts: citizen engagement and participation, educational opportunity, good governance. Revenue sources: contributions/donations, government contracts, membership fees, philanthropic grants. Data sources: Department of Environment, government agencies, local governments, organizations. Purposes: (1) support healthy and resilient ecosystems; (2) deliver sustainable agricultural and environmental management practices.
6. Enviro-dynamics Pty Ltd (Hobart, TAS, AU), www.enviro-dynamics.com.au. Environmental impacts: environmental protection, climate change prevention. Social impacts: citizen engagement and participation, educational opportunity. Revenue sources: consulting. Data sources: Department of Environment, Australian Bureau of Statistics. Purposes: (1) provide environmental solutions; (2) help business steer through the green tape; (3) support government in delivering timely and sustainable environmental policy and practice outcomes; (4) provide agricultural and environmental data analysis.
7. Parklands Albury Wodonga Ltd (Wodonga, VIC, AU), www.parklands-alburywodonga.org.au. Environmental impacts: environmental protection, climate change. Social impacts: citizen engagement and participation. Revenue sources: contributions/donations, government. Data sources: New South Wales Government. Purposes: (1) tackle environmental challenges.
10. SBIS (Yeongdeungpo-gu, Seoul, KR), http://www.sbis.co.kr. Environmental impacts: environmental protection, climate change prevention. Social impacts: citizen engagement and participation. Revenue sources: consulting, data analysis for clients. Data sources: Korea Meteorological Administration. Purposes: (1) provide reliable systems for customers.
11. Softworx (Nowon-gu, Seoul, KR), https://play.google.com/store/apps/details?id=com.softworx.cai. Environmental impacts: environmental protection, climate change prevention. Social impacts: consumer empowerment. Revenue sources: advertising, software licensing. Data sources: Korea Environment Corporation. Purposes: (1) offer information on air pollution and fine dust.
12. w365 (Gangnam-gu, Seoul, KR), www.w365.com. Environmental impacts: environmental protection, climate change prevention. Social impacts: consumer empowerment. Revenue sources: consulting, data analysis for clients. Data sources: Korea Meteorological Administration. Purposes: (1) provide weather information.
Fig. 1. Top ten countries in the ODB 3rd edition ranking (Source: 2016 ODB global
report)
Since Bentler’s appeal to apply the technique to handle latent variables (i.e.
unobserved variables) in psychological science, structural equation modeling
(SEM) has become a quasi-routine and even indispensable statistical analysis
approach in the social sciences. The emergence and development of SEM was
regarded as an important statistical development in social sciences in recent
decades and this “second generation” multivariate analysis method has been
widely applied in theoretical explorations and empirical validations in many
disciplines. Structural Equation Models with latent variables are extensively in
measurement and hypothesis testing [1,2]. Compared with other statistical tools
such as factor analysis and multivariate regression, SEM carries out factor analy-
sis and path analysis simultaneously, since it can (1) measure and accommodate
errors of manifest variables (i.e. observed variables); (2) represent ambiguous
constructs in the form of latent variables (i.e. unobserved variables) by using
several manifest variables; and (3) simultaneously estimate both causal relation-
ships among latent variables and manifest variables. In addition, SEM can also
provide group comparisons with a holistic model, resulting in much more vivid
SEM-Based Value Generation Mechanism from Open Government Data 361
impressions than traditional ANOVA. SEM can also handle longitudinal designs
when time lag variables are involved.
As introduced above, SEM describes and tests relationships between two kinds of variables: latent variables (LVs) and manifest variables (MVs). Latent variables cannot be observed directly because of their abstract character. In contrast, observed variables capture objective facts and are easier to measure; several observed variables can reflect one latent variable. As presented in Fig. 2, a structural equation model usually consists of two main components, a structural model and several measurement models. A simple measurement model includes a latent variable, a few associated observed variables, and their corresponding measurement errors. The structural model consists of all LVs and their interrelationships. For model development purposes, some studies aim to validate their assumptions about a dimensional framework of one or several discriminant LVs, while others aim to elicit the causal relationships between the LVs. Confirmatory factor analysis (CFA) with correlated latent variables satisfies the former purpose, while for the latter these correlations must be replaced by directional relationships.
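In the conventional LISREL notation used here (ξ for exogenous LVs, η for endogenous LVs, with MVs x and y), the two measurement models and the structural model can be written as:

```latex
% Measurement models: MVs load on the LVs, with measurement errors \delta, \epsilon
x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \epsilon
% Structural model: directional relations among the LVs, with disturbance \zeta
\eta = B \eta + \Gamma \xi + \zeta
```

In a CFA, the structural equation is dropped and the LVs are simply allowed to covary; convergent validity concerns the loadings Λ, while discriminant validity concerns the correlations among the LVs.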
Figure 4 provides a simple example of a structural equation model investigating the effect of LVs ξ1, ξ2, and ξ3 on LVs η1, η2, and η3, where several MVs are used to represent the LVs. The MVs are shown in rectangles, the LVs in ellipses, and the measurement errors in circles, with arrows indicating the direction of the effects. If the directional arrow between ξ and η is replaced by a two-way correlation arrow, the model is a CFA whose purpose is to test whether the MVs represent the LVs well (i.e. convergent validity) and whether ξ and η are distinct (i.e. discriminant validity). The basic concepts and principles of SEM are now well established, thanks to early explorations by researchers, structured textbooks, and well-developed software programs (e.g. LISREL, EQS, and AMOS).
2 New offerings
Innovation is the source of value creation in Schumpeter's economic theory: it brings about novel combinations of resources, new production methods, and new products and services, which in turn transform markets and industries, thus creating value. Data-driven innovation positively affects value through the generation of new knowledge; new processes, services, and products; and new businesses.
3 Transparency & accountability
Most definitions of transparency concern the extent to which an entity reveals relevant information about its own decision processes, procedures, functioning, and performance. However, opening access to chosen public documents does not necessarily contribute to a transparent government. For companies in the environment/weather sector that utilize OGD, it is therefore important to promote the transparency and accountability of OGD from the environment sector.
(3) Constructs of values for OGD-driven companies in environment/weather
sector
Two types of value are frequently discussed: economic value, defined as the
worth of a good or service as determined by the market, and social value, which
is created when resources, inputs, processes or policies are combined to gener-
ate improvements in the lives of individuals or society as a whole. The main
difference is that our research emphasizes the environmental impacts of those
companies from the environment/weather sector as the 34 companies make great
contributions to environmental protection and disaster prevention. The environ-
mental, economic and social values are combined to achieve the sustainability
goal of the world, as shown in Fig. 3. The environmental impacts mainly include
pollution reduction, natural resource conservation and climate change resilience.
Fig. 3. Conceptual framework: OGD enablers (data quality & integrity, data disaggregation, data timeliness, data transparency & openness, data usability & curation, data protection & privacy, data governance & independence, data source & capacity) act through mechanisms (decision making, new offerings, transparency & accountability) to generate sustainable values: environmental impacts (pollution reduction, natural resource conservation, climate change resilience), economic impacts (profit, cost savings, research & development), and social impacts (citizen engagement & participation, education, health & standard of living)
The economic impacts are reflected in profits, cost savings, and research & devel-
opment. The social impacts are categorized as citizen engagement & participa-
tion, education, and health & standard of living.
As previously discussed, the conceptual SEM-based value generation methodology is established in Fig. 1.
Intensive research has been conducted to figure out the importance of big data and massive OGD, in which the value chain analysis approach and the value stream mapping method are extensively used. Both technical workers and management practitioners are key actors in finding out how big data or OGD contributes to value generation in various sectors. The first stage of our research on Open Government Data for Value Generation aims to address how OGD stimulates value generation in environment/weather companies. We attempt to analyze the causal relationship between OGD and the sustainable values it drives based on a structural equation model.
This paper, however, covers only the first phase of the planned research, in which we survey the landscape of OGD-driven environmental companies in the USA, Australia, and South Korea and establish a conceptual structural equation model for further validation. Further research is needed to enrich the current study.
References
1. Bagozzi RP, Yi Y (1988) On the evaluation of structural equation models. J Acad
Mark Sci 16(1):74–94
2. Bagozzi RP, Yi Y (2012) Specification, evaluation, and interpretation of structural
equation models. J Acad Mark Sci 40(1):8–34
3. Open Knowledge Foundation (2016) Open Data Index
4. World Wide Web Foundation (2016) Open Data Barometer, 3rd edn. http://www.opendatabarometer.org/report/about/method.html
5. Magalhaes G, Roseira C, Manley L (2014) Business models for open government data. In: Proceedings of the 8th International Conference on Theory and Practice of Electronic Governance. ACM, pp 365–370
6. Open Data 500 Network (2016) Open Data 500 U.S. http://www.opendata500.com/
7. Natural Resources Defense Council (NRDC) (2015) Our stories. https://www.nrdc.org/
8. National Oceanic and Atmospheric Administration (NOAA) (2015) Climate data and reports. http://www.noaa.gov/
9. Susha I, Zuiderwijk A et al (2015) Benchmarks for evaluating the progress of open
data adoption. Soc Sci Comput Rev 33(5):613–630
10. Zuiderwijk A, Janssen M (2013) Open data policies, their implementation and
impact: a framework for comparison. Gov Inf Q 31(1):17–29
Impact of Management Information Systems
Techniques on Quality Enhancement Cell’s
Report for Higher Education Commission
of Pakistan
1 Introduction
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 30
368 F.U. Khan and A. Kamran
Quality is a term that has always been confused with other concepts. The word derives from the Latin "qualis", meaning "what sort of", and it has a range of different meanings. Quality is something we recognize intuitively: some things are simply better than others. Quality has also been described as a high evaluation accorded to an educative process [11], where it has been demonstrated that, through the process, the students' educational development has been enhanced: not only have they achieved the particular objectives set for the course but, in doing so, they have also fulfilled the general educational aims of autonomy, of the ability to participate in reasoned discourse, of critical self-evaluation, and of coming to a proper awareness of the ultimate contingency of all thought and action [15].
From the above discussion, it is clear that quality is undefined and context-dependent: its meaning stretches from "standard" to "perfection". Both senses are deeply rooted in the different ways quality is operationalized in individual institutions, and national practice will be examined later. "Values can be justified as a lowest 'threshold' by which accuracy is judged" [4]. It is quite possible for higher education institutions to reach a common understanding in formulating teaching and research standards of high quality.
The service dimension of quality is probably more akin to educational processes. Unlike physical products, services are ephemeral in that they can be consumed only for as long as the activity or process continues. A prominent quality problem in higher education was identified in the Medium Term Development Framework (MTDF) [12]. Therefore, to enhance the quality of output and the efficiency of higher education learning systems, a mechanism for establishing Quality Enhancement Cells (QECs) was developed by the Quality Assurance Committee to improve the quality standards of higher education in an organized way, with consistency across institutions. The QEC prepares the self-assessment report (SAR), a systematic process of collecting, reviewing, and using relevant quantitative and qualitative data and information about educational programs from multiple and diverse sources, for the purpose of improving student learning and evaluating academic and learning standards. To produce the required reports, the process of completing the SAR is complicated [2]. The fundamental issue in preparing this report is the use of manual techniques, which leads to reports with repeated errors and is, obviously, unwieldy to manage. The answer to this issue, as one may have observed in this era, is a rapid move toward automation [13]. MIS techniques were applied by utilizing software development tools, database integration, and widely used reporting tools. Utilizing the above-mentioned tools and techniques will enable QA staff and users to perform their jobs effectively, reduce redundant data, save users' time, and enhance clarity [3].
in place to achieve the program objectives. The extent to which these objec-
tives are achieved through continuous assessment and improvements must be
demonstrated [10].
• Describe how the program outcomes support the program objectives. Table 3
shows the outcomes that are aligned with each objective.
• Describe the means for assessing the extent to which graduates are
achieving the stated program outcomes/learning objectives. This should be
accomplished by the following (Table 2):
– Conducting a survey of graduating seniors every semester.
– Conducting a survey of alumni every two years.
– Conducting a survey of employers every two years.
370 F.U. Khan and A. Kamran
The data obtained from the above sources should be analyzed and presented
in the assessment report.
It is recommended that the above surveys should be conducted, summarized
and added to the self-study assessment report. Departments should utilize the
results of the surveys for improving the program as soon as they are available.
An example follows:
• Foundation
• Skills and Tools
• Awareness and Professional Ethics
1 Objective 1
To provide students with a strong foundation in engineering sciences and design
methodologies that emphasizes the application of the fundamental mathemati-
cal, scientific and engineering principles in the areas of engineering.
2 Objective 2
To provide students with skills to enter the workplace well-prepared in the core
competencies listed below:
3 Objective 3
To provide students with knowledge relevant to engineering practice, including
ethical, professional, social and global awareness, the impact of engineering on
society, the importance of continuing education and lifelong learning in both
technical and non-technical areas.
2 Problem Statement
The main problem of preparing this report is the use of manual methods, which is
the cause of producing reports with repetitive errors and obviously, cumbersome
to manage. The proposed solution to this problem is, as one may have observed
in this era, rapid move towards automation.
Impact of Management Information Systems Techniques 373
3 Method of Research
This research aims to automate the process of preparing the Self-Assessment
Report by using technology, for example user-friendly software; this will
help enhance the QA reporting process for the HEC. It is therefore a
descriptive research in which the method of research is observational
research. Since the respondents in this research work in a natural
environment, this method is convenient for assessing multiple users
performing their jobs.
Using a detailed investigation methodology adds a great deal to the
investigation, for instance:
In view of these variables and hypothesis, the Research Design followed for this
thesis is Qualitative Research Design. Reasons for choosing this design:
4 Conceptual Model
The diagram above is a conceptual model of all entities participating in the
process of producing the SAR. This model only shows how the whole process
currently works without being automated. From this diagram, one can begin to
figure out how software can be introduced into the system to achieve the
intended outcomes.
The process starts when an institution wishes to have a program approved by
the HEC. QEC personnel send a request to the HEC, which in return supplies
the 'SAR Manual' for preparing the SAR report. The QEC user enters the
mission, objectives, and outcomes based on CRITERION-1 of the Self-Assessment
Report Manual. Finally, the report is generated in the HEC format.
4.2 Languages
For the above purpose, the C# and VB languages have been selected to process
the SAR report. C# and VB are multi-paradigm programming languages
encompassing strong typing and imperative, declarative, functional, generic,
object-oriented (class-based), and component-oriented programming
disciplines. Both are programming languages designed for the Common Language
Infrastructure.
• Makes it easy to maintain and modify existing code as new objects can be
created with small differences to existing ones.
• Provides a good framework for code libraries where supplied software com-
ponents can be easily adapted and modified by the programmer. This is par-
ticularly useful for developing graphical user interfaces [1].
4.4 Platform
Visual Studio is a complete set of development tools for building C# & VB
applications, XML Web Services, desktop applications, and mobile applications.
Visual Basic, Visual C#, and Visual C++ all use the same integrated develop-
ment environment (IDE), which enables tool sharing and eases the creation of
mixed-language solutions [14].
4.5 Database
The Visual Studio Report Designer provides a user-friendly interface for creating
robust reports that include data from multiple types of data sources [5] (Fig. 6).
‘Visual Studio’ reports let you slice and dice your data and present it in
detail or summary form regardless of how the data is stored or sorted in the
underlying tables. It offers a great deal of power and flexibility to analyze and
present results [16].
User authentication and password protection: managers and QEC directors are
authorized to change all aspects of the SAR report they are working with.
They are also authorized to access the whole report, with the exception of a
few sections that are in read-only mode.
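The access rules above can be sketched as a simple role check. This is only an illustrative sketch: the role and section names here are hypothetical and not taken from the described system.

```python
# Hypothetical sketch of the SAR access rules described above.
# Role and section names are assumed for illustration only.
READ_ONLY_SECTIONS = {"appendix"}  # sections served to directors in read-only mode

def can_edit(role: str, section: str) -> bool:
    """Managers may edit everything; directors may edit all but read-only sections."""
    if role == "manager":
        return True
    if role == "director":
        return section not in READ_ONLY_SECTIONS
    return False  # all other users have no edit rights
```

A real implementation would attach such a check to the report-editing endpoints, so the read-only exception is enforced centrally rather than in each form.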
5 Results
• Figure 2 illustrates the research framework and demonstrates the function
of the moderating variable.
• Figure 4 shows the process of the Self-Assessment Report through the
conceptual model.
• Figure 5 demonstrates the software design of the Self-Assessment Report.
• TIME: the ESAR process will significantly save the user's time.
• MIS TECHNIQUES: the above advantages may well be achieved using MIS
techniques.
• REDUCTION OF REPETITIVE ERRORS: achieved using an RDBMS.
• FLEXIBILITY IN FUTURE MODIFICATION: achieved using OOP techniques.
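As a minimal sketch of how an RDBMS can curb repetitive-entry errors, a uniqueness constraint makes the database, rather than the user, reject duplicate records. The table and column names below are hypothetical, not taken from the E-SAR system.

```python
import sqlite3

# Hypothetical objectives table; the UNIQUE constraint rejects repeated entries.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE objective (program TEXT, title TEXT, UNIQUE (program, title))")
con.execute("INSERT INTO objective VALUES ('BS-CS', 'Foundation')")
try:
    con.execute("INSERT INTO objective VALUES ('BS-CS', 'Foundation')")  # duplicate
except sqlite3.IntegrityError:
    pass  # the RDBMS, not the user, catches the repetition
rows = con.execute("SELECT COUNT(*) FROM objective").fetchone()[0]
```

The same idea generalizes: declaring constraints once in the schema removes a whole class of manual-entry errors from every report built on the data.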
6 Conclusions
Quality assurance is a portal to high-quality education. In order to evaluate
program quality, the Higher Education Commission has created predefined
criterion forms, and the Quality Enhancement Cell departments work under
these predefined criteria. Each criterion is met using forms with the
relevant information. This will result in improved interaction with QA
clients using the E-SAR. Another major advantage is a better-organized
report. In this manner, the real difficulty QA clients face in following QEC
procedures can be considerably eased using the proposed solution. In effect,
repetitive errors are better managed with it. A well-designed application is
most suitable for accomplishing the proposed results. In future work,
alternative HCI design techniques can be used to provide the user with a
further enhanced interaction in SAR preparation.
In discussing the process of the QEC's SAR, we clearly observe that the whole
process is currently based on manual work. There is a dire need to automate
the SAR process, which is reflected in the analysis of survey data collected
from QEC users. The remaining criteria of the SAR can easily be addressed by
applying MIS technology and software development tools for automation.
References
1. Afaf (2016) Advantages of OOP. https://www.researchgate.net/post/What is
adventages of Object Oriented Programming
2. Alvi NA, Alam A (2004) Pakistan institute of quality control
3. Arshad M (2003) Attitude of teachers of higher education towards their profession.
Ph.D. thesis, Allama Iqbal Open University, Islamabad
4. Ashcroft K (2005) Criteria standards and quality. https://books.google.com.pk/
books?isbn=113571990X
5. Codd EF (1990) Keys and referential integrity. https://www.seas.upenn.
edu:codeblab.com/wp-content/uploads/2009/12/rmdb-codd.pdf
6. Codd EF (1990) The relational model for database management: version 2.
Addison-Wesley Longman Publishing Co., Inc.
7. Coulter N (2015) Advantages and disadvantages of OOP
8. ISO/IEEE 830-1998 (1998) Recommended practice for software requirements spec-
ifications. doi:10.1109/IEEESTD.1998.88286
9. Indicators E (2015) Expenditure on education as % of total government expendi-
ture (%). http://data.worldbank.org/indicator/SE.XPD.TOTL.GB.ZS
10. Institute BS (2015) Learning and growth perspective. http://balancedscorecard.
org/Learning-and-Growth-Perspective
11. James Hutton DP (2010) Corporate governance-good governance in the university
context. http://www.minterellison.com/Pub/NL/201003 HEFd/
12. Khan MA, Usman M (2015) Education quality and learning outcomes in higher
education institutions in Pakistan. Springer, Singapore, pp 449–463
13. Michal Pietrzak JP (2015) The application of the balanced scorecard (bsc) in
the higher education setting of a polish university. Online J Appl Knowl Manag
3(1):151–164
14. Microsoft (2008) Introducing visual studio. https://msdn.microsoft.com/en-us/
library/fx6bk1f4(v=vs.90).aspx
15. Mishra S (2007) Concept of quality. http://oasis.col.org/bitstream/handle/11599/
101/QAHE Intro.pdf?sequence=1
16. Ralph P, Wand Y (2017) A proposal for a formal definition of the design concept.
In: Lyytinen K, Loucopoulos P et al (eds) Design Requirements Workshop. LNBIP,
vol 14. Springer, Heidelberg, pp 103–136
17. WikiPedia (2008) Microsoft sql server. https://en.wikipedia.org/wiki/Microsoft
SQL Server
A Priority-Based Genetic Representations
for Bicriteria Network Design Optimizations
Abstract. Network design is one of the most important and most fre-
quently encountered classes of optimization problems, lying at the
intersection of combinatorial optimization and graph theory. Many opti-
mization problems in network design arise directly from everyday practice in
engineering and management. Furthermore, network design problems are
also important for complexity theory, an area in the common intersec-
tion of mathematics and theoretical computer science that deals with
the analysis of algorithms. Recent advances in evolutionary algorithms
(EAs) have stimulated interest in solving such practical network problems.
However, various network optimization problems typically cannot be solved
analytically; usually a different algorithm must be designed for each
type of network optimization problem, depending on the characteristics of
the problem. In this paper, we survey the recent related research, and
design and validate effective priority-based genetic representations for
the typical network models, such as shortest path models (node selec-
tion and sequencing), spanning tree models (arc selection), and maximum
flow models (arc selection and flow assignment), which together cover
most features of network optimization problems. We thereby validate that
EA approaches can be effectively and widely used in network design
optimization.
1 Introduction
Many real-world problems from operations research (OR) and management science
(MS) are very complex in nature and quite hard to solve by conventional opti-
mization techniques. Since the 1960s, there has been increasing interest
in imitating living beings to solve such hard optimization problems.
Simulating the natural evolutionary process results in stochastic
optimization techniques called evolutionary algorithms (EAs) that can often
outperform conventional optimization methods when applied to difficult real-world
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 31
A Priority-Based Genetic Representations for Bicriteria Network Design 383
$$R_{NDS}(S_j) = \frac{\left|S_j - \{x \in S_j \mid \exists\, r \in S^* : r \prec x\}\right|}{|S_j|},$$

$$D1_R(S_j) = \frac{1}{|S^*|} \sum_{r \in S^*} \min\{d_{rx} \mid x \in S_j\},$$

where $d_{rx}$ is the distance between a current solution $x$ and a reference solution $r$ in the 2-dimensional normalized objective space, and $f_i$ denotes the objective function for each objective $i = 1, 2, \cdots, q$:

$$d_{rx} = \sqrt{\sum_{i=1}^{q} \left(f_i(r) - f_i(x)\right)^2}.$$
The smaller the value of D1R (Sj ) is, the better the solution set Sj is. This
measure explicitly computes a measure of the closeness of a solution set Sj from
the set S ∗ .
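The two measures can be coded directly from the definitions above. This is a generic pure-Python sketch, not the authors' implementation; solutions are represented as tuples of the q objective values, assumed already normalized.

```python
import math

def dominates(r, x):
    """True if r Pareto-dominates x (all objectives minimized)."""
    return all(a <= b for a, b in zip(r, x)) and any(a < b for a, b in zip(r, x))

def rnds(S_j, S_star):
    """Ratio of solutions in S_j not dominated by any reference solution in S_star."""
    dominated = [x for x in S_j if any(dominates(r, x) for r in S_star)]
    return (len(S_j) - len(dominated)) / len(S_j)

def d1r(S_j, S_star):
    """Average, over the reference set, of the distance to the nearest solution in S_j."""
    def dist(r, x):
        return math.sqrt(sum((ri - xi) ** 2 for ri, xi in zip(r, x)))
    return sum(min(dist(r, x) for x in S_j) for r in S_star) / len(S_star)
```

For example, with a reference set containing only the ideal point (0, 0) and a solution set {(1, 1), (0, 0)}, exactly one of the two solutions is dominated, so RNDS = 0.5 and D1R = 0.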
(4) Reference Set S∗
To generate a large number of solutions for the reference set S∗, the first
step calculates solution sets using special GA parameter settings and a much
longer computation time for each approach used in the comparison experiments;
the second step combines these solution sets to compute the reference set S∗.
In the future, comparison experiments with combinations of small but
reasonable GA parameter settings will be conducted, thereby ensuring the
effectiveness of the reference set S∗.
(Fig. 1. An example directed network with 11 nodes, source s = node 1 and
sink t = node 11; each arc is labeled with two weights, e.g. 16, 40.)
where constraint (3) expresses a conservation law observed at each of the
nodes other than s or t. That is, what goes out of node i,
$\sum_{j=1}^{n} x_{ij}$, must be equal to what comes in,
$\sum_{k=1}^{n} x_{ki}$.
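The conservation law can be checked mechanically for a candidate flow. In this sketch, `x` is a hypothetical dict mapping arcs (i, k) to flow values; the function simply compares outflow and inflow at one node.

```python
def conserves_flow(x, node, n):
    """Constraint-(3) check: outflow of `node` equals its inflow.
    x: dict mapping arcs (i, j) to flow values; n: number of nodes."""
    out = sum(x.get((node, j), 0) for j in range(1, n + 1))
    inn = sum(x.get((k, node), 0) for k in range(1, n + 1))
    return out == inn

# Example: 3 units routed 1 -> 2 -> 3; node 2 conserves flow, node 1 (the source) does not.
flow = {(1, 2): 3, (2, 3): 3}
```
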
problems, the simple GA approach was difficult to apply directly. There are
two special difficulties in using GAs for creating a path: (1) different
paths contain variable numbers of nodes; (2) a random sequence of nodes
usually does not correspond to a path.
Zhang et al. [48] extended this variable-length chromosome to solve the
SPR problem. However, crossover may generate infeasible chromosomes that
introduce loops into the routing paths. It must then be checked at each
generation that none of the chromosomes is infeasible, which is not suitable
for large networks and entails unacceptably high computational complexity for
real-time communications involving rapidly changing network topologies. An
example of a generated variable-length chromosome and its decoded path are
shown in Fig. 2(a) and (c), respectively, for the directed network shown in
Fig. 1.
We proposed a priority-based encoding method. As is known, a gene in a
chromosome is characterized by two factors: its locus, i.e., the position of
the gene within the structure of the chromosome, and its allele, i.e., the
value the gene takes. In this encoding method, the position of a gene
represents a node ID and its value represents the priority of that node for
constructing a path among candidates. A path can be uniquely determined from
this encoding. An example of a generated priority-based chromosome is shown
in Fig. 2(b). At the beginning, we try to find a node for the position next
to source node 1. Nodes 2, 3 and 4 are eligible for the position, which can
easily be fixed according to the adjacency relation among nodes. Their
priorities are 1, 10 and 3, respectively. Node 3 has the highest priority and
is put into the path. The possible nodes next to node 3 are nodes 4, 6 and 7.
Because node 6 has the largest priority value, it is put into the path. We
then form the set of nodes available for the next position and select the one
with the highest priority among them. These steps are repeated until we
obtain a complete path, (1-3-6-5-8-11). Considering the characteristics of
the priority-based chromosome, we proposed a new crossover operator, called
weight mapping crossover (WMX), and adopted insertion mutation and
immigration operators.
(Fig. 2. (a) Variable-length chromosome: locus 1–6, node IDs 1 3 6 5 8 11.
(b) Priority-based chromosome: locus (node ID) 1–11, priorities
11 1 10 3 8 9 5 7 4 2 6. (c) Decoded path: 1-3-6-5-8-11.)
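The decoding procedure described above can be sketched as follows. The priorities are those of the chromosome in Fig. 2(b); the adjacency list is only a partial, assumed reconstruction of the network in Fig. 1 (enough to trace the example path), not the full arc set.

```python
def decode_path(priority, adj, source, sink):
    """Decode a priority-based chromosome into a path.
    priority: dict node -> priority value (the chromosome)
    adj: dict node -> list of successor nodes"""
    path = [source]
    node = source
    while node != sink:
        # eligible next nodes: successors not yet on the path
        candidates = [v for v in adj.get(node, []) if v not in path]
        if not candidates:
            return None  # dead end: no feasible extension
        node = max(candidates, key=lambda v: priority[v])  # highest priority wins
        path.append(node)
    return path

# Priorities from Fig. 2(b): position = node ID, value = priority.
priority = {1: 11, 2: 1, 3: 10, 4: 3, 5: 8, 6: 9, 7: 5, 8: 7, 9: 4, 10: 2, 11: 6}
# Partial, assumed adjacency of the Fig. 1 network.
adj = {1: [2, 3, 4], 3: [4, 6, 7], 6: [5, 9], 5: [8], 8: [11]}
```

Running `decode_path(priority, adj, 1, 11)` reproduces the example path (1-3-6-5-8-11) traced in the text.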
Table 1. The ABS of 50 runs by different GA parameter settings with different
genetic representations
ahnGA priGA
ID Optimal Para1 Para2 Para3 Para1 Para2 Para3 Auto-timing
1 47.93 47.93 47.93 47.93 47.93 47.93 47.93 47.93
2 210.77 232.38 234.36 244.64 224.82 234.91 228.72 224.09
3 1.75 2.69 2.71 2.83 2.68 2.73 2.79 2.64
4 17.53 37.6 39.43 47.26 36.1 35.3 34.08 34.6
5 54.93 60.77 62.26 65.35 57.26 57.42 58.5 56.87
6 234.45 276.72 288.71 295.77 269.23 268.52 273.16 270.66
7 1.83 2.4 2.66 3.31 2.01 2.27 2.32 1.98
8 22.29 47.29 49.58 57.04 41.48 45.89 44.17 41.9
9 70.97 - - - 72.29 75.74 77.27 70.97
10 218.78 - - - 276.56 276.15 284.85 272.1
11 3.82 - - - 5.85 6.91 6.41 5.78
12 20.63 - - - 60.14 57.52 61.53 52.18
“-” means out of memory error
The Minimum Spanning Tree (MST) problem is one of the best-known network
optimization problems; it attempts to find a minimum-cost tree network that
connects all the nodes in the communication network. The links or edges have
associated costs that could be based on their distance, capacity, quality of
line, etc.
In the real world, the MST is often required to satisfy additional
constraints when designing communication networks, such as capacity
constraints on any edge or node, degree constraints on nodes, and the type of
services available on the edge or node. Such an additional constraint often
makes the problem NP-hard. In addition, one often has to consider multiple
criteria simultaneously in determining an MST, because there are multiple
attributes defined on each edge; this case has become the subject of
considerable attention. Almost every important real-world decision-making
problem involves multiple and conflicting objectives [17].
In this paper, we consider a bicriteria spanning tree (bST) model. The bST
problem is to find a set of links with the two conflicting objectives of
minimizing communication cost and minimizing transfer delay, while meeting
the network capacity constraint. This problem can be formulated as the
multiobjective capacitated minimum spanning tree (mcMST) problem, and it is
NP-hard.
(Figure: an example 12-node network and one of its spanning trees.)
$$\min\; z_1(x) = \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}x_{ij} \qquad (5)$$
$$\min\; z_2(x) = \sum_{i=1}^{n}\sum_{j=1}^{n} d_{ij}x_{ij} \qquad (6)$$
$$\text{s.t.}\quad \sum_{i=1}^{n}\sum_{j=1}^{n} x_{ij} = n-1 \qquad (7)$$
$$\sum_{i\in S}\sum_{j\in S} x_{ij} \le |S|-1 \quad \text{for any set } S \text{ of nodes} \qquad (8)$$
$$\sum_{j=1}^{n} w_{ij}x_{ij} \le u_i, \quad \forall i \qquad (9)$$
In this formulation, the 0-1 variable xij indicates whether we select edge (i, j)
as part of the chosen spanning tree (note that the second set of constraints with
|S| = 2 implies that each xij ≤ 1). The constraint (7) is a cardinality constraint
implying that we choose exactly n − 1 edges, and the packing constraint (8)
implies that the set of chosen edges contain no cycles (if the chosen solution
contained a cycle, and S were the set of nodes on a chosen cycle, the solution
would violate this constraint). The constraint (9) guarantees that the total
link weight of each node i does not exceed the upper limit u_i.
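Constraints (7) and (8) can be verified for a candidate edge set with a union-find sketch. This is a generic feasibility check, not the authors' implementation: (7) is the cardinality test, and (8) is enforced by rejecting any edge that would close a cycle.

```python
def is_spanning_tree(n, edges):
    """Check constraints (7)-(8): exactly n-1 edges and no cycles.
    n: number of nodes (labeled 1..n); edges: list of (i, j) pairs."""
    if len(edges) != n - 1:          # constraint (7): cardinality
        return False
    parent = list(range(n + 1))      # union-find forest over nodes 1..n

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri == rj:                 # edge inside one component -> cycle (violates (8))
            return False
        parent[ri] = rj
    return True
```

With n − 1 edges and no cycle, the edge set is necessarily connected, i.e., a spanning tree; checking constraint (9) in addition only requires summing the weights w_ij of the chosen edges at each node.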
node id k: 2 3 4 5 6 7 8 9 10 11 12
chromosome v(k): 5 1 3 8 9 8 4 5 12 8 11
T = {(1, 3), (2, 5), (3, 4), (4, 8), (5, 8), (5, 9), (6, 9), (7, 8), (8, 11), (10, 12), (11, 12)}
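The mapping from this chromosome to the edge set T can be sketched as follows, under the reading (assumed from the example) that each gene v(k) names the node joined to node k by a tree edge:

```python
def decode_tree(genes):
    """Decode a tree chromosome: each gene v(k) pairs node k with node v(k).
    Returns the edge set as sorted (small, large) tuples."""
    return sorted(tuple(sorted((k, v))) for k, v in genes.items())

# Chromosome from the example above: node id k -> v(k).
genes = {2: 5, 3: 1, 4: 3, 5: 8, 6: 9, 7: 8, 8: 4, 9: 5, 10: 12, 11: 8, 12: 11}
T = decode_tree(genes)
```

Applied to the chromosome shown above, this reproduces exactly the edge set T listed in the text.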
each test problem 10 times, and gives the average results of the 3
performance measures (i.e., the number of obtained solutions |Sj|, the ratio
of nondominated solutions RNDS(Sj), and the average distance measure D1R). In
Table 4, better results on all performance measures were obtained with the
i-awGA than with the other fitness assignment approaches.
5 Conclusion
References
1. Abbasi S, Taghipour M (2015) An ant colony algorithm for solving bi-criteria
network flow problems in dynamic networks. IT Eng 3(5):34–48
2. Ahuja RK, Magnanti TL, Orlin JB (1993) Network flows. Prentice Hall, New Jersey
3. Climaco JCN, Craveirinha JMF, Pascoal MMB (2003) A bicriterion approach for
routing problems in multimedia networks. Networks 41(4):206–220
4. Craveirinha J, Maco J et al (2013) A bi-criteria minimum spanning tree routing
model for mpls/overlay networks. Telecommun Syst 52(1):1–13
5. Davis L, Orvosh D et al (1993) A genetic algorithm for survivable network design.
In: Proceedings of 5th international conference on genetic algorithms, pp 408–415
6. Deb BK (2010) Optimization for engineering design: algorithms and examples.
Prentice-Hall, New Delhi
7. Deb K (1989) Genetic algorithms in multimodal function optimization. Master’s
thesis, University of Alabama
8. Deb K, Thiele L et al (2001) Scalable test problems for evolutionary multiobjective
optimization. Wiley, Chichester
9. Dorigo M (1992) Optimization, learning and natural algorithms. PhD thesis,
Politecnico di Milano
10. Duhamel C, Gouveia L et al (2012) Models and heuristics for the k-degree
constrained minimum spanning tree problem with node-degree costs. Networks
60(1):1–18
11. Fawcett H (2014) Manual of political economy. Macmillan and co., London
12. Fogel LJ, Owens AJ, Walsh MJ (1966) Artificial intelligence through simulated
evolution. Wiley, New York
13. Fonseca C, Fleming P (1995) An overview of evolutionary algorithms in multiob-
jective optimization. IEEE Trans Evol Comput 3(1):1–16
396 L. Lin et al.
14. Garey MR, Johnson DS (1979) Computers and intractability: a guide to the theory
of NP-completeness. W.H. Freeman, New York
15. Gen M (2006) Genetic algorithms and their applications. Springer, London
16. Gen M, Cheng R, Oren SS (2000) Network design techniques using adapted genetic
algorithms. Springer, London
17. Gen M, Cheng R, Lin L (2008) Network models and optimization: multiobjective
genetic algorithm approach. Springer, London
18. Goldberg DE (1989) Genetic algorithms in search, optimization, and machine
learning. Addison-Wesley Pub. Co., Boston
19. Hansen P (1979) Bicriterion path problems. In: Proceedings of 3rd conference
multiple criteria decision making theory and application, pp 109–127
20. Hao X, Gen M et al (2015) Effective multiobjective EDA for bi-criteria stochastic
job-shop scheduling problem. J Intell Manufact 28:1–13
21. Holland J (1975) Adaptation in Natural and Artificial Systems. MIT Press, Ann
Arbor
22. Holland JH (1976) Adaptation. In: Rosen R, Snell FM (eds) Progress in theoretical
biology IV
23. Hwang CL, Yoon K (1994) Multiple attribute decision making. Springer, Heidel-
berg
24. Ishibuchi H, Murata T (1998) A multi-objective genetic local search algorithm and
its application to flowshop scheduling. Comput Ind Eng 28(3):392–403
25. Kennedy J, Eberhart R (2011) Particle swarm optimization. In: Proceeding of the
IEEE international conference on neural networks, Piscataway, pp 1942–1948
26. Kennedy J, Eberhart R (2011) Particle swarm optimization. Morgan Kaufmann,
San Francisco
27. Koza JR (1992) Genetic programming, the next generation. MIT Press, Cambridge
28. Koza JR (1994) Genetic programming II (videotape): the next generation. MIT
Press, Cambridge
29. Liang W, Schweitzer P, Xu Z (2013) Approximation algorithms for capacitated
minimum forest problems in wireless sensor networks with a mobile sink. IEEE
Trans Comput 62(10):1932–1944
30. Lin L (2006) Node-based genetic algorithm for communication spanning tree prob-
lem. IEICE Trans Commun 89(4):1091–1098
31. Lin L, Gen M (2008) An effective evolutionary approach for bicriteria shortest path
routing problems. IEEJ Trans Electron Inf Syst 128(3):416–423
32. Mathur R, Khan I, Choudhary V (2013) Genetic algorithm for dynamic capacitated
minimum spanning tree. Comput Technol Appl 4(3):404
33. Medhi D, Pioro M (2004) Routing, flow, and capacity design in communication
and computer networks. Morgan Kaufmann Publishers, San Francisco
34. Piggott P, Suraweera F (1995) Encoding graphs for genetic algorithms: an investi-
gation using the minimum spanning tree problem. Springer, Heidelberg
35. Raidl GR, Julstrom B (2003) Edge-sets: an effective evolutionary coding of span-
ning trees. IEEE Trans Evol. Comput. 7(3):225–239
36. Rechenberg I (1973) Optimieriung technischer Systeme nach Prinzipien der biolo-
gischen Evolution. Frommann-Holzboog, Stuttgart
37. Ruiz E, Albareda-Sambola M et al (2015) A biased random-key genetic algorithm
for the capacitated minimum spanning tree problem. Comput Oper Res 57:95–108
38. Schaffer JD (1985) Multiple objective optimization with vector evaluated genetic
algorithms. In: International conference on genetic algorithms, pp 93–100
39. Schwefel HPP (1995) Evolution and Optimum Seeking: The Sixth Generation.
Wiley, New York
1 Introduction
The county government has long been the basis of administrative management
and state governance in China. It directly faces and serves the grass roots;
thus, the efficiency of its operation can greatly influence the overall
quality of people's lives and production. Nowadays, as an effective way of
improving administrative efficiency and transparency, e-government has been
highly praised by many countries, and China has run the Government Online
Project since 1999. In fact, the construction of e-government at the county
level is a very important part of China's e-government development, as well
as the terminal node of provincial-level e-government, and governments should
pay great attention to e-government construction at the county level in order
to construct a mature e-government system structure [7]. To promote the healthy and
County Level Government’s E-Government Efficiency Evaluation 399
in Sect. 3 which includes the samples selection, data collection, the establishment
of evaluation index system and the model solution. On the basis of the solution,
the analysis of the current situation in Sichuan Province and the suggestions for
the ineffective county-level governments are given in Sect. 4. Finally, we
conclude with a summary of this paper and prospects for the evaluation of
e-government in Sect. 5.
in Sichuan Province, this paper selects the 147 districts and counties (including
county level cities) in Sichuan Province as the objects of the research.
In accordance with the requirements of the DEA method, we need to estab-
lish an index system of input and output based on the comprehensive consider-
ation of region informatization level, economic development, cultural and edu-
cational level and other factors. There are nine categories of input indexes in
this paper which specifically include fixed telephone subscribers, the number of
mobile phone users, the value of the tertiary industry income, GDP, total fis-
cal revenue, governmental investment in science and education, the proportion
of urban population, the number of students in colleges and universities, the
number of colleges and universities. These index data are mainly obtained from
“Sichuan Province Statistical Yearbook” (2014) [16] and “The Statistics Bulletin
of the National Economy and Social Development” (2013) issued by the various
districts and counties (including county level cities).
To select proper output indicators, we referred to the objective indexes
derived from the CCID Consulting assessment of government website performance
and from the United Nations global e-government assessment. This study builds
on these authoritative evaluation results to screen the indexes. The five
main evaluation indexes consist of government information publicity, work
services, public participation, website management, and the application of
new technology. Output index data are mainly obtained from "The Total Report
of the Sichuan Government Website Performance Evaluation" (2013) [6].
The principal component analysis of 9 input indexes and 5 output indexes data
were conducted by SPSS software (Tables 1 and 2).
The principal component loadings are calculated from the extracted principal
components by the formula:
$$l_{ij} = p(z_i, x_j) = \sqrt{\lambda_i}\, a_{ij} \quad (i = 1, 2, \cdots, k,\ k \text{ the number of principal components};\ j = 1, 2, \cdots, 147) \qquad (5)$$
Then we can get the principal component index data of inputs and outputs.
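As an illustration of the loading formula (a toy sketch for a 2×2 correlation matrix, not the authors' SPSS computation): the loadings l_ij = √λ_i · a_ij combine each eigenvalue λ_i with the entries a_ij of its eigenvector, and the squared loadings on each variable sum to that variable's unit variance.

```python
import math

def loadings_2x2(rho):
    """Loadings l_ij = sqrt(lambda_i) * a_ij for R = [[1, rho], [rho, 1]].
    For this matrix the eigenvalues are 1 + rho and 1 - rho, with
    eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2)."""
    lam = [1 + rho, 1 - rho]
    a = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
         [1 / math.sqrt(2), -1 / math.sqrt(2)]]
    return [[math.sqrt(lam[i]) * a[i][j] for j in range(2)] for i in range(2)]
```

A quick sanity check of the formula: for each variable j, the squared loadings over all components sum to 1 (its standardized variance), and the cross-products recover the correlation ρ.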
Table 3. The results of e-government input and output efficiency evaluation (1)
Table 3. (Continued)
where nmax and nmin represent the maximum and minimum values of the column
data respectively (Tables 3 and 4).
Table 4. The results of e-government input and output efficiency evaluation (2)
Table 4. (Continued)
Table 5. The results of e-government input and output efficiency evaluation (3)
Table 5. (Continued)
5 Conclusions
Owing to the wave of informatization and the popularization of the network,
the conditions for the construction of e-government are getting better and
better. Governments at all levels are also vigorously investing money and
human resources in the construction of e-government to promote
informatization. To guide this construction, how to accurately evaluate and
scientifically observe the efficiency of e-government is a hot and difficult
topic in the current e-government field. Although there are many foreign
standards to reference, different countries have different national
conditions, and truly effective e-government construction has to be combined
with the actual situation. Drawing on the views of scholars and organizations
at home and abroad, and based on observation and analysis, we use different
indexes, principal component analysis, and a DEA model to evaluate the
e-government construction efficiency of all districts and counties of Sichuan
Province. We hope to be able to scientifically evaluate the comprehensive
efficiency of the districts and counties of Sichuan Province in the
construction of e-government. On the whole, local governments' information
departments should focus on improving their investment scale and technical
efficiency to enhance the efficiency of e-government.
However, this paper also has some limitations. Firstly, we take both input
and output into consideration, and both have been rotated, extracted, and
standardized; therefore, this paper could not use the projection algorithm to
obtain improvement solutions that give target values and scope for the
non-DEA-effective units. Secondly, DEA focuses on relative efficiency, which
means that different evaluation ranges may yield different results, so how to
choose the evaluation range is a topic worth exploring. In later studies, we
will work on diminishing the above limitations and optimizing the solution.
Comparing Visitors’ Behavior Through Mobile
Phone Users’ Location Data
Masahide Yamamoto(B)
Abstract. In recent years, so-called "big data" have been attracting the
attention of companies and researchers. This study aims to identify the
number of visitors in each period and their characteristics based on the
location data of mobile phone users collected by a mobile phone company.
The study sites are tourist destinations in Ishikawa Prefecture and Toyama
city, including Kanazawa city, which became nationally popular after the
Hokuriku Shinkansen opened in 2015. The opening of the Hokuriku
Shinkansen brought more visitors to many areas; however, it also led to
fewer visitors in some areas. The positive effect was remarkable in
Kanazawa.
1 Introduction
2 Method
This study used "MOBILE KUUKAN TOUKEI™" (mobile spatial statistics)
provided by NTT DoCoMo, Inc. and DoCoMo Insight Marketing, Inc. to collect
the location data of mobile phone users in order to count the number of
visitors at specific tourist destinations and examine their characteristics.
MOBILE KUUKAN TOUKEI™ is statistical population data created from a mobile
phone network. Using this service, it is possible to estimate the population
structure of a region by gender, age, and residence.
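As an illustration of the kind of aggregation such mobile spatial statistics enable, the following sketch groups location records by area and hour and tabulates them by gender. The record fields and counts are invented for this example; the actual MOBILE KUUKAN TOUKEI™ data format is not reproduced here.

```python
from collections import Counter

# Hypothetical, already-aggregated and anonymized records:
# (area, hour, gender, estimated_count).
records = [
    ("Kanazawa Station", 14, "F", 1200),
    ("Kanazawa Station", 14, "M", 1100),
    ("Kanazawa Station", 8, "F", 700),
    ("Wakura", 8, "M", 300),
    ("Wakura", 8, "F", 260),
]

def visitors_by_area_hour(records):
    """Sum estimated visitor counts per (area, hour, gender)."""
    totals = Counter()
    for area, hour, gender, count in records:
        totals[(area, hour, gender)] += count
    return totals

totals = visitors_by_area_hour(records)
print(totals[("Kanazawa Station", 14, "F")])  # 1200
```

Real analyses of this kind would of course operate on the provider's aggregated statistics rather than raw records.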
The sites studied in this survey are tourist destinations in Ishikawa Prefecture
and Toyama city, including Kanazawa city, which became nationally popular
when the Hokuriku Shinkansen (high-speed railway) opened in 2015. Moreover,
the locations and characteristics of the individuals obtained herein are derived
through a non-identification process, aggregation processing, and concealment
processing. Therefore, it is impossible to identify specific individuals.
The survey areas are presented in Table 1 and Fig. 1. A regional mesh code
is a code for identifying a regional mesh, i.e., an area obtained by dividing
the country into nearly equal squares (meshes) based on latitude and longitude
for statistical purposes. There are three basic types of regional meshes:
primary, secondary, and tertiary. The length of one side of a primary mesh is
about 80 km, and those of secondary and tertiary meshes are about 10 km and
1 km, respectively.
In addition, split regional meshes also exist, which provide a more detailed
regional division. A half-regional mesh is a tertiary mesh divided into two
equal pieces in both the vertical and horizontal directions; the length of one
side is about 500 m. Furthermore, the lengths of one side of a quarter mesh and
a 1/8 regional mesh are about 250 m and 125 m, respectively.
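Regional mesh codes of this kind are standardized in Japan (JIS X 0410). As a rough sketch of how a tertiary (about 1 km) mesh code can be derived from latitude and longitude, assuming the standard's usual construction (a primary mesh divided 8 × 8 into secondary meshes, and a secondary mesh divided 10 × 10 into tertiary meshes), one could write:

```python
def tertiary_mesh_code(lat: float, lon: float) -> str:
    """Derive an 8-digit tertiary (approx. 1 km) regional mesh code
    from latitude/longitude, following the JIS X 0410 scheme."""
    # Primary mesh: 40' of latitude x 1 degree of longitude.
    p = int(lat * 1.5)            # first two digits
    u = int(lon) - 100            # next two digits
    # Secondary mesh: primary divided 8x8 (5' x 7.5').
    lat_min = lat * 60 - p * 40   # leftover latitude, in minutes
    lon_min = (lon - int(lon)) * 60
    q = int(lat_min // 5)
    v = int(lon_min // 7.5)
    # Tertiary mesh: secondary divided 10x10 (30" x 45").
    r = int((lat_min - q * 5) * 60 // 30)
    w = int((lon_min - v * 7.5) * 60 // 45)
    return f"{p:02d}{u:02d}{q}{v}{r}{w}"

# Around central Tokyo (35.65 N, 139.74 E) the primary mesh is 5339.
print(tertiary_mesh_code(35.65, 139.74)[:4])  # "5339"
```

The exact codes used in the survey would come from the statistics provider, but this shows how a mesh pins a point to a fixed square.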
This study analyzed the location data collected from NTT DoCoMo, Inc.
to consider the effect of the opening of the Hokuriku Shinkansen on the survey
areas.
3 Previous Research
Previous tourism marketing research has primarily focused on the ways service
promises are made and kept, mostly generating frameworks to improve man-
agerial decisions or providing insights on associations between constructs [4].
Big data have become important in many research areas, such as data mining,
machine learning, computational intelligence, information fusion, the semantic
Web, and social networks [3]. To date, several attempts have been made to use
large-scale data or mobile phone location data in tourism marketing studies.
Most studies dealing with big data in tourism were published after 2010.
Fuchs et al. [5] presented a knowledge infrastructure that has recently been
implemented at the leading Swedish mountain tourism destination, Åre. Using
a Business Intelligence approach, the Destination Management Information Sys-
tem Åre (DMIS-Åre) drives knowledge creation and application as a precondition
of organizational learning at tourism destinations. Xiang et al. [9] tried to apply
big data to tourism marketing. The study aimed to explore and demonstrate the
utility of big data analytics to better understand important hospitality issues,
namely, the relationship between hotel guest experience and satisfaction. Specif-
ically, the investigators applied a text analytical approach to a large number of
4 Results
In general, the number of visitors has been increasing since the Hokuriku
Shinkansen was launched on March 14, 2015, with the exception of Nanao station.
It should be noted that these "visitors" also include the residents living
there, because the data cannot exclude them. Of course, I tried to exclude
residential areas as much as possible when I specified the regional mesh codes.
However, it was rather difficult to do that, because the mesh codes are
square-shaped.
First, I compared the results of two larger cities, Kanazawa and Toyama (see
Figs. 2 and 3). Both these cities have a station at which the Hokuriku Shinkansen
stops. It should be noted that Kanazawa city and Toyama attracted more
visitors in the afternoons, whereas Wakura and Wajima, which are located on
the Noto Peninsula, had more visitors in the mornings (8:00 a.m.–9:00 a.m.).
Visitors
to Toyama Station demonstrated approximately the same trend as those visit-
ing Kanazawa Station. However, there were fewer visitors on holidays than on
weekdays in Toyama.
I then examined the data of Kanazawa city (see Figs. 4 and 5). There were
three survey areas in this city: Kanazawa station, Kenrokuen Park and Higashi
Chayagai.
Fig. 10. Visitors' gender distribution at Kenrokuen and Wajima (12:00 a.m.–1:00 p.m. on holidays in October 2015)

Fig. 11. Visitors' gender distribution at Wakura hot springs and Yamanaka hot springs (12:00 a.m.–1:00 p.m. on holidays in October 2015)

Fig. 12. Visitors' gender distribution at Kanazawa station and Toyama Station (12:00 a.m.–1:00 p.m. on holidays in October 2015)
Despite the fact that Toyama is nearer to Tokyo than Kanazawa, the latter
successfully attracted more visitors from Tokyo. Both prefectures, Ishikawa
and Toyama, have many wonderful tourist attractions; as for the two cities,
however, Kanazawa appears to be the more attractive to tourists.
5 Conclusion
This study attempted to identify the number of visitors at two points in time
at various places in Japan and their characteristics using the location data of
mobile phone users collected by the mobile phone company.
As explained above, the opening of the Hokuriku Shinkansen increased the
number of visitors to many areas. However, it also led to fewer visitors in some
other areas. Its positive effect was remarkable in Kanazawa.
Numerous events have recently been held in Japan to attract visitors. In
addition to using "MOBILE KUUKAN TOUKEI™" on its own, combining it with
other ICT services, such as Google Trends, can help better predict the number
of visitors at new events. Specifically, by combining "MOBILE KUUKAN
TOUKEI™" with the transition of the search results for a particular tourist
destination, it would be possible to predict the number of tourists more
accurately. If more accurate demand forecasting can be realized, it would be
possible to optimize the necessary goods and the number of non-regular
employees in advance. Moreover, understanding consumers' characteristics
beforehand could enable us to optimize services, which could influence
customer satisfaction.
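As a minimal sketch of the combination suggested above, one could fit visitor counts against a search-interest index so that future counts can be extrapolated from search trends. All numbers below are invented for illustration; they are not data from this study.

```python
# Least-squares fit of weekly visitor counts against a search-interest
# index (e.g., a Google Trends-style score). Illustrative numbers only.
search_index = [20, 35, 50, 65, 80]             # hypothetical weekly scores
visitors = [11000, 17500, 24000, 31000, 37000]  # hypothetical counts

n = len(search_index)
mean_x = sum(search_index) / n
mean_y = sum(visitors) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(search_index, visitors))
den = sum((x - mean_x) ** 2 for x in search_index)
slope = num / den
intercept = mean_y - slope * mean_x

def predict(index_value):
    """Predicted visitor count for a given search-interest score."""
    return intercept + slope * index_value

print(round(predict(70)))  # 32833
```

In practice, one would validate such a fit on held-out weeks and account for seasonality and special events before using it for staffing or stocking decisions.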
References
1. Japan Tourism Agency (2014) Keitaidenwa kara erareru ichijouhou tou wo
katuyousita hounichi gaikokujin doutaichousa houkokusho [Foreign visitors
dynamics research report utilizing mobile phone location information].
Technical report, Japan Tourism Agency
2. Ahas R, Aasa A et al (2008) Evaluating passive mobile positioning data for tourism
surveys: an Estonian case study. Tourism Manage 29(3):469–486
3. Bello-Orgaz G, Jung JJ, Camacho D (2016) Social big data: recent achievements
and new challenges. Inf Fusion 28:45–59
4. Dolnicar S, Ring A (2014) Tourism marketing research: past, present and future.
Ann Tourism Res 47:31–47
5. Fuchs M, Höpken W, Lexhagen M (2014) Big data analytics for knowledge gener-
ation in tourism destinations - a case from Sweden. J Destination Mark Manage
3(4):198–209
6. Gao H, Liu F (2013) Estimating freeway traffic measures from mobile phone location
data. Eur J Oper Res 229(1):252–260
7. Liu F, Janssens D et al (2013) Annotating mobile phone location data with activity
purposes using machine learning algorithms. Expert Syst Appl 40(8):3299–3311
Mingcong Wu1, Yong Huang1(B), Yang Song2, Liang Zhao1, and Jian Liu1
1 Business School, Sichuan University, Chengdu 610065, People's Republic of China
huangyong scu@aliyun.com
2 Department of Mathematics, Nanjing University of Aeronautics and Astronautics,
Nanjing 210016, People's Republic of China
1 Introduction

With the development of the internet, the internet-based web service industry
is growing explosively, and web services of higher quality are required. The
basic application structure of a web service is shown in Fig. 1. In this system,
users can request services according to their own needs through the internet.
After a service request is passed to the web server, the web server interacts
with the application server and the database server as the request demands, and
processes the request accordingly. Finally, the result of the request is
returned to the user. The function of the application server is to guarantee
the business logic of the activities and to coordinate the information exchange
among users. The task of the database server is to carry out the query,
storage, modification, and other operations on web service data in the
database.
A web service system needs an adapted queueing model to handle the order of
its users, but traditional queueing models cannot accurately capture this
scene. When an arriving user finds that the server is temporarily unable to
provide service, he leaves the server and repeats the request after a while;
this is called the retrial strategy. Considering server utilization, managers
often allow the server to serve at a relatively low rate when there are no
users in the system, but when the number of users requesting service increases,
the service rate must return to a higher level immediately; this is called the
working vacation interruption strategy. Besides, users who conduct business on
the internet tend to give up receiving service and leave the system because
they do not want to wait too long; this is called the nonpersistent customers
strategy. Scholars have studied these three cases separately, but there is no
study of a queueing model that possesses all three strategies at the same time.
This paper studies the combination of these three strategies and constructs a
queueing model of a web service system under these conditions.
This paper is organized as follows. Section 2 reviews the relevant literature.
Section 3 is dedicated to formulating a queueing model to solve the problem
mentioned above. Section 4 presents a series of numerical analyses and
optimizes the performance of the model. Section 5 gives some conclusions.
2 Literature Review
Queueing theory studies the working process of random service systems. It
originated from telephone traffic in the early 20th century. In 1909–1920,
Erlang, the Danish mathematician and electrical engineer, studied the problem
of telephone conversations using the methods of probability theory, which
created this branch of applied mathematics. Queueing systems are divided into
continuous-time and discrete-time queueing systems. Compared with the former,
research on discrete-time queueing systems started relatively late, and the
early research results are few. However, with the development of computers and
communications, in which time is slotted, the discrete-time queueing system has
become one of the research hotspots of queueing theory in recent years.
Meisling [10] made pioneering work on the discrete-time queueing system, and a
series of subsequent studies deepened and developed the analysis of this
classical model. Kobayashi and Konheim [3] studied the application of
discrete-time queueing systems in the field of computer communication networks,
and pointed out that discrete-time queueing systems are more suitable for the
modeling and analysis of computer networks, which has played a positive role in
the research and application of discrete-time queueing systems.
Retrial queueing systems are an important part of queueing theory research.
Their characteristic is that a customer who requests service enters a queue
called the orbit and retries after a random time when he finds all servers
busy. Kosten [4] first proposed a retrial service system. Yang and Li [18] were
the first to study a discrete-time retrial queue; they analyzed the
steady-state queue size distribution of the discrete-time Geo/G/1 retrial
queue. Retrial queueing systems have since been widely used in telephone
switching systems, computer networks, and communication networks, and these
research areas are still vibrant; some of the latest developments are
Dimitriou [2], Rajadurai [13], and so on. If the server is occupied, the
customer retrying from the orbit may leave the queueing system or return to the
orbit, which is called the nonpersistent customers phenomenon. Palm [11] first
proposed the issue of nonpersistent customers. Then, many scholars introduced
nonpersistent customers into various queueing systems. Liu and Song [9]
introduced nonpersistent customers into a Geo/Geo/1 retrial queue with working
vacations. Phung-Duc [12] introduced two types of nonpersistent customers into
M/M/c/K (K ≥ c ≥ 1) retrial queues. These models are also widely used in real
life. In fact, the introduction of nonpersistent customers is of great
significance for imitating the impatience of customers in reality.
Queueing systems with vacations and working vacations have always been a
research hotspot. A queueing system with vacations is an extension of the
classical queueing system, which allows the server not to serve customers
during certain periods called vacations. For more information about queueing
systems with vacations, see Tian and Zhang [16]. If the server does not
completely stop serving during a vacation, but rather serves at a slower rate
than during the regular busy period, this vacation policy is called a working
vacation. Servi and Finn [14] first introduced the queueing system with working
vacations when they studied a model of a communication network; they introduced
the working vacation mechanism into the M/M/1 queue. Then, many scholars
introduced working vacations into various queueing systems. Tian et al. [17]
studied a discrete-time queueing system with working vacations. Li and Tian [7]
studied a discrete-time Geo/Geo/1 queue with a single working vacation. On the
basis of working vacations, the working vacation interruption strategy was
introduced, which means that once a certain indicator (such as the number of
customers in the system) reaches a certain value during a working vacation, the
server can stop the working vacation and return to the normal service level
immediately. Li and Tian [6] first proposed the working vacation interruption
strategy. They introduced working vacation and vacation interruption into the
M/M/1 queue, and then studied the corresponding discrete-time queue [5].
Recently, studies on working vacation interruption have continued: Li et al.
[8] introduced working vacation interruption into the discrete-time Geo/Geo/1
retrial queue, and Gao et al. [15] studied an M/G/1 queue with a single working
vacation and vacation interruption under a Bernoulli schedule.
Previous studies cannot address the real-world scenes of web services mentioned
above. Therefore, this paper introduces working vacation interruption and
nonpersistent customers into the discrete-time Geo/Geo/1 retrial queue to
provide a solution.
3 Method

In order to deal with the retrial, working vacation interruption, and
nonpersistent customers phenomena in web services, this section constructs an
adapted queueing model and studies the resulting queueing system. Section 3.1
puts forward a series of assumptions about the queueing model (for convenience,
the number of servers is assumed to be one) and obtains the corresponding
transition probability matrix for the subsequent derivation. The research on
the queueing system must be based on the stability of the system, so Sect. 3.2
derives the condition under which the system is stable, together with the rate
matrix R needed for the derivation of the stationary distribution. The main
purpose of studying the queueing system is to analyze its states and indexes
when the system is stable, so Sect. 3.3 derives the stationary distribution and
some performance measures using the matrix-analytic method.
3.1 Model

The Geo/Geo/1 retrial queue with working vacation interruption and
nonpersistent customers is assumed as follows:

First of all, the rules of service and customer arrival are assumed. Assume
that the beginning and ending of service occur at the slot division points
t = n, n = 0, 1, · · · . The service time Sb in a regular busy period follows a
geometric distribution with parameter μb. The service time Sv in a working
vacation period follows a geometric distribution with parameter μv. Note that
0 < μv < μb < 1. Assume that customer arrivals occur at the slots t = n,
n = 0, 1, · · · . The inter-arrival times, which form an independent and
identically distributed sequence, follow a geometric distribution with
parameter λb during a regular busy period and a geometric distribution with
parameter λv during a working vacation period. Also note that 0 < λv < λb < 1.
Then, the rules of the retrial and nonpersistent customers strategies are
assumed. A customer from the orbit of infinite size requests a retrial at the
slots t = n, n = 0, 1, · · · , and the time between two successive retrials
follows a geometric distribution with parameter α. If a customer arrival and a
retrial occur at the same instant when the server is not occupied, we assume
that the arriving customer receives the service. Due to the introduction of the
nonpersistent customers strategy, suppose the probability that the retrial
customer leaves the system is q (0 < q < 1), and the probability that the
retrial customer returns to the orbit is 1 − q.

Finally, the working vacation and vacation interruption are assumed. The
server begins a working vacation each time there is no customer in the system,
i.e., the server is free and there is no customer in the orbit. Assume that the
beginning and ending of a working vacation occur at the slots t = n+,
n = 0, 1, · · · . At the end of each working vacation, the server begins a new
vacation only if there is no customer in the system. The vacation time V
follows a geometric distribution with parameter θ (0 < θ < 1). If a service is
completed at slot t = n− and there are customers in the system at slot t = n+,
the server stops the working vacation immediately and returns to the normal
service level, i.e., the working vacation interruption strategy.
In this paper, for any real number x ∈ [0, 1], denote x̄ = 1 − x. Assume that
the inter-arrival times, service times, and working vacation times are mutually
independent.
After completing the above assumptions, the occurring order of the random
events can be described in Fig. 2 using a number axis.

Meanwhile, the structure of the queueing model in the web service is shown in
Fig. 3. A user accesses the web server through the internet. He joins the orbit
if the web server is busy; otherwise, he receives service immediately and
leaves the system after the service is completed. A retrial user from the orbit
receives service immediately if the web server is free; otherwise, he may leave
the system or come back to the orbit.
Let Qn be the number of customers in the orbit at slot n+, and Jn the state of
the server at slot n+. There are four possible states of the server:

(1) The state Jn = 0 denotes that the server is free in a working vacation period at slot n+.
(2) The state Jn = 1 denotes that the server is busy in a working vacation period at slot n+.
(3) The state Jn = 2 denotes that the server is free in a regular busy period at slot n+.
(4) The state Jn = 3 denotes that the server is busy in a regular busy period at slot n+.
Then {(Qn, Jn)} is a two-dimensional Markov chain, and its transition
probability matrix has a block-tridiagonal form built from the blocks B0, B1,
A0, A1, and A2, each a 4 × 4 matrix whose entries are products of λv, λb, μv,
μb, θ, α, q and their complements.
It can be seen from the block structure of the transition probability matrix
that {(Qn, Jn)} is a quasi-birth-and-death (QBD) process.

Then, substituting into the inequality above and performing a series of
algebraic operations, the QBD process {(Qn, Jn)} is positive recurrent if and
only if μ̄b(λbᾱ − 1)(αq̄ − λ̄b) < μb(λ̄b + λbq)α, which is the stability
condition of the queueing system.
Next, before deriving the stationary distribution of the system, we need to
obtain the minimal non-negative solution R of the matrix quadratic equation

R = R²A2 + RA1 + A0. (1)

Theorem 2. If μ̄b(λbᾱ − 1)(αq̄ − λ̄b) < μb(λ̄b + λbq)α, the matrix equation
above has the minimal non-negative solution

R = ( 0  0  0  0
      0  r1 r2 r3
      0  0  0  0
      0  0  r4 r5 ),
where r1, r2, r3, r4, and r5 are explicit closed-form expressions in λv, λb,
μv, μb, θ, α, q and their complements; in particular, r1 is the smaller root of
a quadratic equation with discriminant D, and r2 and r3 are expressed through
r1 (and r4, r5) by means of auxiliary quantities M, N, L, F, and G.
Research on Geo/Geo/1 Retrial Queue 429
Proof. Noticing the structure of A0, A1, and A2, assume

R = ( R11 R12
      0   R22 ),

where R11, R12, and R22 are all 2 × 2 matrices. Substituting R into Eq. (1)
yields three matrix equations for R11, R12, and R22. From the first and third
equations, we get R11 = [0 0; 0 r1] and R22 = [0 0; r4 r5], respectively. Then,
substituting R11 and R22 into the second equation, we finally obtain
R12 = [0 0; r2 r3], with r1, . . . , r5 as shown in Theorem 2.
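Equation (1) is the standard fixed-point characterization of the rate matrix in the matrix-analytic method: its minimal non-negative solution can also be computed numerically by iterating R ← A0 + RA1 + R²A2 from R = 0. The sketch below demonstrates this on a small generic QBD with 2 × 2 blocks; the blocks are made-up stochastic blocks for illustration, not the matrices of this paper's model:

```python
# Fixed-point iteration for the minimal non-negative solution of
# R = A0 + R*A1 + R^2*A2, the rate-matrix equation of a QBD process.
# Illustrative 2x2 blocks: A0 + A1 + A2 is row-stochastic, and the mean
# downward drift exceeds the upward drift, so the chain is stable.
A0 = [[0.1, 0.0], [0.0, 0.1]]   # up transitions (level k -> k+1)
A1 = [[0.5, 0.1], [0.2, 0.4]]   # same-level transitions
A2 = [[0.3, 0.0], [0.1, 0.2]]   # down transitions (level k -> k-1)

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

R = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(500):
    R = mat_add(A0, mat_add(mat_mul(R, A1), mat_mul(mat_mul(R, R), A2)))

# The residual of R = A0 + R*A1 + R^2*A2 should be essentially zero.
rhs = mat_add(A0, mat_add(mat_mul(R, A1), mat_mul(mat_mul(R, R), A2)))
residual = max(abs(R[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(residual < 1e-10)  # True
```

The closed-form solution of Theorem 2 plays the same role as this iteration, but exploits the zero pattern of A0, A1, and A2 to obtain R explicitly.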
Theorem 3. If μ̄b(λbᾱ − 1)(αq̄ − λ̄b) < μb(λ̄b + λbq)α, the stationary
distribution of (Q, J) is given by

πk0 = 0, k ≥ 1,
πk1 = π01 r1^k, k ≥ 0,
πk2 = π01 [r2 r1^(k−1) + (r3 r4/(r5 − r1))(r5^(k−1) − r1^(k−1))] + π03 r4 r5^(k−1), k ≥ 1,   (2)
πk3 = π01 (r3/(r5 − r1))(r5^k − r1^k) + π03 r5^k, k ≥ 0,

where

π00 = {1 + [(1 + r2)/(1 − r1) + r3(1 + r4)/((1 − r1)(1 − r5))] Y + [(1 + r4)/(1 − r5)] U}^(−1),

Y and U are explicit constants determined by λv, μv, μb, θ, α, q and r1, and
π01 = Y π00, π03 = U π00.
Taking

Rk = ( 0  0  0  0
       0  r1^k  r2 r1^(k−1) + (r3 r4/(r5 − r1))(r5^(k−1) − r1^(k−1))  (r3/(r5 − r1))(r5^k − r1^k)
       0  0  0  0
       0  0  r4 r5^(k−1)  r5^k ),  k ≥ 1,

into Eq. (3), Eq. (2) is obtained. Meanwhile, π0 satisfies two boundary
equations, Eqs. (4) and (5). From Eq. (4), we get π01 and π03 as shown in
Theorem 3. Then, taking π01, π02 = 0, and π03 into Eq. (5), we obtain π00 as
shown in Theorem 3. So far, Theorem 3 is proved.
Using the stationary distribution derived above, some performance measures can
be obtained as follows.

(1) The probability that the server is busy:

Pb = Σ_{k=0}^∞ πk1 + Σ_{k=0}^∞ πk3 = π01 (1 − r5 + r3)/((1 − r1)(1 − r5)) + π03/(1 − r5).
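The closed form above is simply two geometric series summed. With illustrative values of r1, r3, r5, π01, and π03 (chosen arbitrarily, not taken from the model), the identity can be checked numerically:

```python
# Verify that summing the stationary probabilities pi_k1 and pi_k3 term by
# term matches the closed-form expression for Pb.
r1, r3, r5 = 0.3, 0.1, 0.5      # illustrative rate-matrix entries
pi01, pi03 = 0.2, 0.1           # illustrative boundary probabilities

# Truncated series:
#   pi_k1 = pi01*r1^k
#   pi_k3 = pi01*(r3/(r5 - r1))*(r5^k - r1^k) + pi03*r5^k
series = sum(pi01 * r1**k for k in range(200))
series += sum(pi01 * (r3 / (r5 - r1)) * (r5**k - r1**k) + pi03 * r5**k
              for k in range(200))

# Closed form from the text.
closed = pi01 * (1 - r5 + r3) / ((1 - r1) * (1 - r5)) + pi03 / (1 - r5)

print(abs(series - closed) < 1e-12)  # True
```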
Remark 1. The parameter values below are chosen to satisfy the stability
condition and to display the trends in the graphs clearly.
Fig. 4. (a) The probability that the server is busy versus θ for different α (b) The probability that the server is busy versus θ for different q (c) The probability that the server is busy versus θ for different μv
Fig. 5. (d) The probability that the server is busy versus α for different q (e) The probability that the server is busy versus α for different θ (f) The probability that the server is busy versus α for different μv
Fig. 6. (g) The probability that the server is busy versus q for different α (h) The probability that the server is busy versus q for different μv (i) The probability that the server is busy versus q for different θ
Figure 4 illustrates the effect of θ on the probability that the server is
busy. In all three panels, Pb increases gradually as θ increases, regardless of
the changes in α, q, and μv. What causes this trend is the fact that the
average service rate becomes smaller as θ increases. Note that the effect of θ
on Pb becomes weaker as μv increases. Meanwhile, because Pb + Pf = 1, the
probability that the server is free, Pf, shows the opposite trend (the same
below).
Figure 5 illustrates the effect of α on the probability that the server is busy.
Figure 5(d) is for different q, Fig. 5(e) is for different θ and Fig. 5(f) is for
Fig. 7. (j) The probability that the server is busy versus μv for different α (k) The probability that the server is busy versus μv for different q (l) The probability that the server is busy versus μv for different θ
different μv . Among these three figures, Pb decreases with the rate α increas-
ing regardless of the change of α, q and μv . This is because the number of the
customers who give up receiving service increases with μv increasing.
Figure 6 illustrates the effect of q on the probability that the server is
busy. Figure 6(g) is for different α, Fig. 6(h) is for different μv, and
Fig. 6(i) is for different θ. Obviously, Pb decreases as q increases in all
three figures. This is because the number of customers who leave the system due
to failed retrials increases as q increases.
Figure 7 illustrates the effect of μv on the probability that the server is
busy. Figure 7(j) is for different α, Fig. 7(k) is for different q, and
Fig. 7(l) is for different θ. Obviously, Pb decreases as μv increases, because
the average service rate increases with μv. This trend is consistent across the
changes in the other three parameters. Besides, the effect of μv on Pb also
becomes weaker as θ increases.
(2) The numerical analysis of E[L]

Figures 8, 9, 10, and 11 illustrate the effects of the parameters on the mean
number of customers in the orbit; the effects and their causes are discussed
below.
Fig. 8. (m) The mean number of customers in the orbit versus θ for different α (n) The mean number of customers in the orbit versus θ for different q (o) The mean number of customers in the orbit versus θ for different μv
Fig. 9. (p) The mean number of customers in the orbit versus α for different q (q) The mean number of customers in the orbit versus α for different θ (r) The mean number of customers in the orbit versus α for different μv
Fig. 10. (s) The mean number of customers in the orbit versus μv for different α; (t) versus μv for different q; (u) versus μv for different θ
Fig. 11. (v) The mean number of customers in the orbit versus q for different α; (w) versus q for different μv; (x) versus q for different θ
Next, the optimization model above is applied to optimize the Geo/Geo/1 retrial queue with working vacation interruption and nonpersistent customers established in Sect. 3 in the setting of a web service system. Assume that the average waiting cost of each customer per unit time is 5; that the average service cost of each server per unit time is 10 in a regular busy period and 6 in a working vacation period; that the probability that a user requests service is 0.6 during a regular busy period and 0.3 during a working vacation period; that the retrial rate is 0.6; and that the probability that a user gives up receiving service after a failed retrial is 0.2. Substituting these parameters into the optimization model and solving it with LINGO gives μv = 0.277, μb = 0.739, θ = 0.125 and M = 11.359. That is, the minimum average total cost of the system per unit time, i.e., the minimum sum of the average waiting cost and the average service cost, is 11.359, attained when the service rates in a regular busy period and in a working vacation period are 0.739 and 0.277 respectively, and the vacation period rate θ is 0.125. In other words, for the web service system to perform best, managers should adjust the actual service capability of the server so that the service time follows a geometric distribution with parameter 0.739 in a regular busy period and with parameter 0.277 in a working vacation period, and the working vacation time follows a geometric distribution with parameter 0.125.
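The paper solves this model with LINGO; as a rough cross-check, the same kind of search can be sketched with a coarse grid. Everything below other than the unit costs (5, 10, 6) and the busy-period request probability (0.6) is an assumption: in particular, `mean_orbit_length` is an M/M/1-style stand-in for the model's exact E[L], so the numbers it yields are not comparable to μv = 0.277, μb = 0.739, θ = 0.125.

```python
import itertools

LAM = 0.6                  # request probability in a regular busy period
C_W, C_B, C_V = 5, 10, 6   # waiting, busy-period and vacation-period costs

def mean_orbit_length(mu_v, mu_b, theta):
    # Stand-in approximation, NOT the paper's exact E[L]: an M/M/1-style
    # formula with an effective service rate that blends mu_v and mu_b by
    # an assumed vacation fraction p_vac = 1 - theta.
    p_vac = 1.0 - theta
    mu_eff = p_vac * mu_v + (1.0 - p_vac) * mu_b
    rho = LAM / mu_eff
    return rho * rho / (1.0 - rho) if rho < 1.0 else float("inf")

def total_cost(mu_v, mu_b, theta):
    # waiting cost per orbiting customer plus service costs per unit rate
    return C_W * mean_orbit_length(mu_v, mu_b, theta) + C_B * mu_b + C_V * mu_v

grid = [i / 100 for i in range(5, 100, 5)]      # 0.05, 0.10, ..., 0.95
mu_v_opt, mu_b_opt, theta_opt = min(
    itertools.product(grid, grid, grid), key=lambda x: total_cost(*x))
```

A solver such as LINGO replaces this grid with a proper nonlinear search, but the structure of the decision (trade faster, costlier service against waiting cost) is the same.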
4 Conclusions
With the development of the Internet, the web service industry has posed many new requirements for the design of queueing models. This paper provides a queueing solution for web services. After presenting three practical problems that web services encounter in real life, a queueing model that can handle these problems is constructed and a series of necessary analyses is carried out on it. Based on these results, numerical analysis and model optimization are then conducted, which offer valuable guidance for web service management under the corresponding circumstances. In addition, since the model proposed in this paper has a single server, which diverges somewhat from the multi-server reality of web services, the number of servers can be left open in future research to better match actual situations.
Research on Geo/Geo/1 Retrial Queue 437
Pan-Tourism Urbanization Model Based
on System Dynamics: A Case Study of Barkam
1 Introduction
In recent years, the model of new urbanization has attracted wide attention. The tourism industry and related industries develop interactively to promote new urbanization and the coordinated development of new rural construction. Experts and scholars at home and abroad have therefore proposed the model of pan-tourism industry integration and industrial cluster development as an effective way to promote the development of urbanization.
Some scholars have already pursued these research ideas. Alexandrova [1] reviews tourism cluster initiatives in Russia, particularly tourism cluster formation processes. Jackson [6] takes four towns in Australia as examples, exploring the effectiveness of correlation factors in these four regions and studying the competitive advantage suggested by cluster theory. However, these works concentrate only on evaluating existing tourism industrial clusters and rely on descriptive language to assess or predict, which lacks a practical and objective evaluation system.
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 35
Pan-Tourism Urbanization Model Based on System Dynamics 439
Jackson [5] probes into the development of regional tourism clusters and confirms the theoretical validity of Porter's [12] model. Tourism clusters may become a way to convert comparative advantage into competitive advantage and to make better use of existing tourism resources in China, being well adapted to the characteristics of Chinese regions. Qian [13] questions Mullins's [10] classical theory of tourism urbanization, arguing that the process of urbanization driven by modern tourism is not a product of the post-modern expression of urban culture but of tourism-related services based on relatively standardized regulations. These two scholars focus on comparative analysis of relevant theoretical models and existing data. The research models they put forward have positive effects on the development of domestic tourism, but they ignore a systematic description of tourism urbanization.
This paper is organized as follows. In Sect. 2, pan-tourism urbanization is introduced, and a differential pan-tourism system model is proposed with industrial cluster theory. In Sect. 3, on the basis of the pan-tourism system and the actual situation in Barkam, we collect the relevant data, including the number of tourists, tourism revenue and other urbanization data. After carrying out regression analysis, we obtain simulation results for the pan-tourism urbanization system, analyze them and give recommendations. In Sect. 4, we draw conclusions about the effectiveness of the model and discuss its shortcomings.
2 Modeling
2.1 Concept Model
System dynamics is a science of the dynamic complexity of systems developed by Professor Forrester of MIT, which emphasizes considering the system as a whole and the interactions among its components. It is designed to deal with high-order, nonlinear, multi-feedback problems in complex systems [17]. Based on qualitative and quantitative analysis and system dynamic simulation experiments, which can forecast dynamic change, decision makers can observe the simulation results and take different measures in various contexts.
In this paper, the process of system dynamics modeling is divided as follows: first, according to the characteristics of the problem, we clarify the purpose of modeling and determine the system boundaries so that the system architecture is neither too large nor too small; second, we analyze the system structure to divide the system into hierarchies and sub-modules, determine the causal feedback loops and define the interaction mechanisms among elements; third, through regression analysis of the data, we establish the complete system dynamics mathematical model; finally, we analyze the simulation results obtained from running the model and evaluate the model in order to identify problems and improve it.
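The four modeling steps above end in a model that is integrated step by step, which is the essence of the DYNAMO/Vensim approach. The toy stock-flow sketch below illustrates only the mechanics; its variables and rate constants are illustrative and are not the calibrated Barkam model.

```python
def run_sd(years=15, tourists0=50.58):
    """Toy stock-flow model of a tourism-driven feedback loop.
    All rate constants are illustrative, not calibrated coefficients."""
    tourists, pollution, cluster = tourists0, 0.0, 10.0
    history = []
    for _ in range(years):
        # causal loop: tourism feeds the industrial cluster, the cluster
        # pollutes, and pollution feeds back negatively on tourist growth
        growth = 0.15 * tourists * max(1.0 - pollution / 100.0, 0.0)
        emission = 0.02 * cluster
        absorption = 0.05 * pollution
        cluster_growth = 0.10 * tourists / 10.0
        tourists += growth                     # stock update (Euler step)
        pollution += emission - absorption
        cluster += cluster_growth
        history.append((round(tourists, 2), round(pollution, 2),
                        round(cluster, 2)))
    return history

history = run_sd()
```

A real SD package adds table functions, delays and unit checking on top of exactly this kind of per-step stock update.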
Based on feedback control theory and the concept of industrial cluster development, this paper constructs a dynamic analysis model of the pan-tourism
440 L. Hu et al.
urbanization system to identify the underlying causes, key factors, changing rules and internal relations. The research ideas are shown in Fig. 1.
tourism. The second promotion model refers to industries that cannot integrate with tourism to produce new products or formats, but whose development is promoted when tourism develops vigorously. For instance, the lodging, catering and transportation industries supply supporting elements for tourism, while the wholesale and retail trade, real estate, warehousing and postal services, construction and industry offer corollary facilities for tourism. The third is the interaction mode, referring to the integration of agriculture and tourism, which can form new tourism products and formats, enrich tourism content and enhance the value of tourism products. Not only does this mode further develop the new tourism market, it also promotes the industry and the tourism industry to develop collaboratively.
The development of the tourism industry leads to the development of related industries and forms pan-tourism industrial clusters; this brings about the agglomeration of population and production elements and promotes the process of urbanization in the local area. Urbanization is the process of evolution of the economic structure, mode of production, life style and social concepts after the rural population and various factors continuously gather in the city. The level of urbanization is usually measured by the proportion of urban population in the total population.
Actually, urbanization is not just a regional change in the nature of the population. Its connotation is very rich: it denotes a regional change process based on the non-agricultural population and the non-agricultural economy [16], and it is an overgeneralization to measure the urbanization level only by the population ratio. Following Fang [3], Sun [14] and other scholars' definitions of the degree of urbanization, and combining them with the development characteristics of pan-tourism urbanization, this paper defines the urbanization degree of the pan-tourism urbanization system by two indicators: population urbanization and economic urbanization.
The population urbanization rate is usually used to measure the level of urbanization and is an important symbol of a country's level of economic development [7]. The integrated development of tourism and related industries influences changes in the industrial structure, continuously reducing employment in the primary industry and increasing employment in the secondary and tertiary industries. In addition, urban employment increases through the transfer of labor, while farmers' identities are gradually transformed, completing the in-situ urbanization of the population.
The economic urbanization rate reflects the non-agricultural transformation of the overall economic structure, characterized by a gradually increasing proportion of the secondary and tertiary industries and a gradually decreasing proportion of the primary industry. Tourism drives pan-tourism industrial clusters, whose output values are calculated from the secondary and tertiary industrial output values, location entropies and contribution rates [18]. The economic urbanization rate is the proportion of cluster output value in GDP.
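As a small illustration of the two indicators, the sketch below computes a standard location quotient and an economic urbanization rate. The rule for combining sector outputs into a cluster output value is hypothetical, since reference [18]'s exact formula is not reproduced in this excerpt.

```python
def location_entropy(region_sector, region_total, nation_sector, nation_total):
    """Standard location quotient LQ = (e_i / e) / (E_i / E)."""
    return (region_sector / region_total) / (nation_sector / nation_total)

def economic_urbanization_rate(secondary, tertiary, lq2, lq3,
                               contrib2, contrib3, gdp):
    # Hypothetical combination rule (the paper does not spell one out here):
    # weight each sector's output by its location quotient capped at 1 and
    # by the tourism contribution rate, then divide by GDP.
    cluster_output = (secondary * min(lq2, 1.0) * contrib2
                      + tertiary * min(lq3, 1.0) * contrib3)
    return cluster_output / gdp
```

For example, a region producing 30 of a nation's 200 units of a sector while holding 100 of 1000 units of total output has a location quotient of 1.5.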
[Figure: causal loop diagram of the pan-tourism urbanization system, linking tourist amount, pollution, tourism development, tourism reception, level of three-industry development, pan-tourism industry development, change of employment structure, industrial cluster, level of urbanization and migration; the accompanying stock-flow diagram uses the variables POV, GDP, SLE, TR, ICTW, EUR, TIR, ICET, IPT, WROV, REOV, TCR, TN, ICTE, TIA, TRA, PIOV, TOV, ICTP, TRAV, OSOV, ICTTIR and ICTO.]
3 Case Analysis
3.1 Data Sources
In this paper, Barkam city in the northwest Tibetan area of Sichuan is selected as the study object; the urbanization mode driven by tourism is well reflected there. Based on the actual situation of Barkam, the pan-tourism urbanization system is simulated [9]. The primary data are collected from Barkam's statistical yearbook and the Sichuan Province statistical yearbook. Although some values, such as the total population and urban population, are uncertain because they are strongly affected by the social environment and natural disasters, they can be explained reasonably so that the model reflects the general trend of the pan-tourism system.
The simulation step length of the model is set to 1 year, and the simulation period runs from 2011 to 2025. The initial values for 2011 are determined from Barkam's statistical yearbook: the number of tourists was 505,800 and tourism revenue was 434.48 million Yuan. Table 3 shows the values of the parameters determined by regression analysis, table functions and other methods.
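The regression step can be sketched as an ordinary least-squares fit of log tourist numbers against time, which yields an annual growth factor usable as a table-function input. Only the 2011 value (50.58 × 10^4 persons) comes from the yearbook figures quoted above; the later values in the series are hypothetical placeholders.

```python
import math

# Hypothetical yearbook series (10^4 persons); only the 2011 value 50.58
# comes from the paper, the rest are invented for illustration.
years = [2011, 2012, 2013, 2014, 2015]
tourists = [50.58, 57.0, 64.9, 73.1, 83.0]

# Fit log(tourists) = a + b * t by ordinary least squares,
# so exp(b) is the implied annual growth factor.
t = [y - years[0] for y in years]
ly = [math.log(v) for v in tourists]
n = len(t)
tbar = sum(t) / n
ybar = sum(ly) / n
b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, ly))
     / sum((ti - tbar) ** 2 for ti in t))
a = ybar - b * tbar
growth = math.exp(b)   # implied annual growth factor of the toy series
```

Parameters estimated this way feed the rate equations of the system dynamics model; table functions handle the relationships that a single regression cannot capture.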
[Figures: simulated tourism revenue (10^4 Yuan) and number of tourists (10^4 persons) of Barkam, 2012-2026, shown for simulation step lengths of 1, 0.5 and 0.25 year.]
from 2011 to 2023, reaching 1.5 million by 2019 and peaking at the end of 2023. After that, there will be a slight decline. A speculative reason is that pollution caused by the industrial clusters may dampen tourists' enthusiasm for traveling to Barkam, even though the development of the pan-tourism industry is driven by tourism. At the same time, Barkam's tourism revenue also grows steadily and will exceed 20 billion Yuan by 2022. The simulation results show that Barkam's booming tourism industry will be on a steady upward trend over the next decade. Therefore, in order to maintain the vitality of the tourism industry and promote the development of industrial clusters to propel urbanization, the government should pay more attention to the tourism industry while also concentrating on the impact of resource depletion and pollution from local industrial development [11].
(2) Simulation Analysis of Economy
As shown in Fig. 7, the three industrial output values grow steadily over the period from 2011 to 2025. It is projected that the tertiary industry output value of Barkam will exceed 3 billion Yuan, while GDP will exceed 4 billion Yuan. Compared with the primary and secondary industries, the tertiary industry output value increases significantly. The main reason is that tourism promotes the development of related industries, such as transportation, lodging, catering and related social public services, so that the tertiary industry becomes the leading industry.
By contrast, the growth rate of the primary industry is very slow, presumably due to destruction of the local ecological environment and obstruction of rural construction in the process of realizing urbanization. Local governments should take corresponding measures to strengthen the linkage of the tourism industry with the primary and secondary industries, vigorously promote the development of agricultural and industrial tourism, strenuously extend the chain of business opportunities, and create an advantageous platform to promote economic development effectively.
[Fig. 7: simulated primary, secondary and tertiary industry output values and GDP of Barkam (10^4 Yuan), 2012-2026.]
Fig. 8. Barkam's urban population simulation
Fig. 9. Barkam's urbanization rate simulation
Fig. 10. Barkam's cluster output simulation
Fig. 11. Barkam's economic urbanization rate simulation
industries so that the development of the local three industries will be promoted
harmoniously.
4 Conclusion
In this paper, the quantitative relationships were analyzed with SPSS, and the DYNAMO language and Vensim were used to establish the system dynamics model of pan-tourism urbanization. The three subsystems of tourism, industrial cluster and urbanization are interrelated through visitor numbers, tourism revenue, cluster output value and employed population. The interactive development relationship among the three reflects the industrial cluster driven by tourism development and the process of rural labor transfer. The model systematically analyzes the operating mechanism of pan-tourism urbanization, and it provides a scientific basis and theoretical support for promoting urban and rural development, improving the level of characteristic urbanization and constructing new towns. Because the selected city, Barkam, has certain particularities, the system elements may not be comprehensive. In addition, given differences in local development and economic level, the pan-tourism urbanization model may not suit all cities and towns, so specific elements and coefficients need to be adjusted for a specific place in order to make a more scientific analysis.
References
1. Alexandrova A, Vladimirov Y (2016) Tourism clusters in Russia: what are their key features? The case of Vologda region. Worldwide Hospitality Tourism Themes 8(3):346–358
2. Eberlein RL, Peterson DW (1992) Understanding models with Vensim. Eur J Oper Res 59:216–219
3. Fang C, Jing W (2013) A theoretical analysis of interactive coercing effects between
urbanization and eco-environment. Chin Geogr Sci 23(2):147–162
4. Gimzauskiene E, Duoba K et al (2015) Tourism clusters in eastern Poland: analysis of selected aspects of the operation. Procedia Soc Behav Sci 213:957–964
5. Jackson J (2006) Developing regional tourism in China: the potential for activating business clusters in a socialist market economy. Tourism Manag 27(4):695–706
6. Jackson J, Murphy P (2006) Clusters in regional tourism: an Australian case. Ann Tourism Res 33(4):1018–1035
7. Jian ML, Chi FL (2016) A qualitative study of urbanization effects on hotel devel-
opment. J Hospitality Tourism Manag 29:135–142
8. Liu RX, Kuang J et al (2001) Principal component regression analysis with SPSS.
Comput Methods Programs Biomed 71(2):141–147
1 Introduction
With the rapid development of the Internet, online sales have become a mode through which manufacturers and retailers can increase sales; at the same time, more and more products have gradually come to be affected by network externalities. Introducing an online channel allows the manufacturer to increase sales and avoid large retailers' monopoly of the offline channel, while also providing opportunities for the retailer to expand the market and increase sales. Many well-known manufacturers (such as Lenovo, Apple, etc.) have chosen to introduce an online channel on top of the offline channel, and many famous retailers (such as Alibaba, Wal-Mart Stores, etc.) have likewise introduced online channels one after another. Nevertheless, most studies of dual channels
A Study on the Three Different Channel Models in Supply Chain 453
focused on the manufacturer's choice, internal channel coordination and channel pricing, but ignored the impact of network externalities and the choices of retailers in a dual-channel situation.
A network externality means that a customer obtains more utility when more suppliers are willing to join the network, and vice versa. In making strategies for channel markets, however, sellers (including the manufacturer and the retailer) have typically relied on assumptions and paradigms that apply to business without network effects, so the strategies they make may not match the economics of their changing industries. Many studies have examined pricing strategies for dual channels. Liu [13] pointed out that, compared with the traditional single channel, when the manufacturer brings in an online channel, pricing strategies have different effects on the channel. Leider [9] studied experimentally bargaining in a multiple-tier supply chain with horizontal competition and sequential bargaining between tiers, finding that the structural issue of cost differentials dominates personal characteristics in explaining outcomes, with profits in a tier generally increasing with decreased competition in that tier and with decreased competition in alternate tiers. Naeem, Charles and Rafay [3] considered a dynamic problem of joint pricing and production decisions for a profit-maximizing firm producing multiple products, presenting a solution approach more general than previous approaches that require a specific demand function. Chuang and Yang [7] modeled the pricing strategy for deteriorating goods with reference price effects and solved the dynamic problem with theoretical results.
Some scholars carried out research on dual-channel coordination. Shamir [14] studied the operational motivation of a retailer and showed that by making forecast information publicly available to both his manufacturer and the competitor, a retailer is able to credibly share his forecast information, an outcome that cannot be achieved by merely exchanging information within the supply chain. Yang and Zhuo et al. studied a two-period supply chain consisting of a retailer and a supplier and found that a revenue-sharing contract significantly affects the retailer's payment behavior and the supplier's wholesale price [16]. Zhong, Xi and Jing [17] analyzed coordinating mechanisms for a single-period supply chain comprising one supplier and one retailer, showing that a transfer payment plus a contract can mitigate the downside-risk effect on supply chain performance. Yue and Allen et al. [15] studied a supply chain consisting of one supplier and n retailers and derived pricing strategies under different situations. Wenyi et al. [6] presented an integrated model of marketing choices and supply chain design decisions and developed a closed-loop supply chain. Yulan and Stein et al. reviewed operational models in service supply chain management (SSCM) and examined the definitions of SSCM [6]. Panda et al. [12] analyzed coordination of a manufacturer-distributor-retailer supply chain where the manufacturer exhibits corporate social responsibility, building a manufacturer-Stackelberg game setting to propose a contract-bargaining process and resolve channel conflict. Jing, Hui and Ying [5] analyzed
454 J. Yan et al.
the channel effects of four kinds of dual-channel pricing strategies. Aimin and Liwen analyzed stochastic demand, joint promotion, and the price competition and coordination problems between the manufacturer and the retailer [1]. Ravi and Rajib [11] studied co-opetition between differentiated platforms in two-sided markets; highlighting the importance of technology, they pointed out that collaboration might provide incentives for a dominant platform. In this paper, we consider platform/channel ownership to be independent, belonging neither to the seller nor to the consumer.
Some scholars focus on network externalities in competitive strategy. Blocher [4] showed that amplifying network externalities among mutual funds can explain substantial flow-based effects documented in the literature. Liu [10] identified the valid mechanism for the alternative range of profit-sharing contracts and analyzed the effect of the product substitutability coefficient and network externalities on the alliance and the profit-sharing contract. Hagiu and Spulber [8] studied first-party content and coordination in two-sided markets and found that the strategic use of first-party content by two-sided platforms is driven by two key factors, among them the nature of buyer and seller expectations. Edward and Anderson investigated platform performance investment in the presence of network externalities [2], carrying out a full analysis of three distinct settings: monopoly, price-setting duopoly and price-taking duopoly, and characterizing the conditions under which offering a platform with lower performance but greater availability of content can be a winning strategy.
The literature above has analyzed the pricing and coordination strategies of dual channels and the influence of network externalities on firms' pricing and competitive strategies, respectively. The main problem of pricing and coordination is to coordinate the distribution of benefits between manufacturers and retailers in structures where the manufacturer chooses the dual channel. Network externalities can increase the strength of the sharing economy, bring about positive cycle effects, and provide guidance for firms' pricing and coordination. However, these two strands of research have the following deficiencies.
The retailer's behavior can also have a great impact on the dual channel in the supply chain system, yet most studies of dual channels and network externalities are limited to the behavior of the manufacturer. In addition, although network externalities also greatly influence pricing and decision making, much of the dual-channel literature has largely ignored their effect on consumers and on the whole supply chain system, while many studies of network externalities consider only their effect in a single-channel mode and do not apply them to a dual-channel mode.
On the premise that both entities' (the manufacturer's and the retailer's) optimal profits must be greater than zero, this paper considers products with network externalities and builds three channel models under the condition that the retailer is the leader of the structural models. This paper names them the single channel model (SCM), the dual-channel model of manufacturers
Assume that the manufacturer produces only one kind of product with network externality. Considering that the sale period is not too long, consumers buy the goods only from the manufacturer or the retailer on the channel, and there is no intersection between the consumption of different channels.
The retailer is the leader of the structure in all models. In the single channel model, first, given the retailer's marginal profit m1, the manufacturer decides the wholesale price ω1 to maximize its own profit. Then the retailer adjusts its revenue to maximize its own marginal profit according to the wholesale price, and sets the sales price of the product p1 = ω1 + m1. In the dual-channel model of manufacturers, first, given the retailer's marginal profit m2, the manufacturer decides the wholesale price ω1 and the online sale price p22 to maximize its own profit; second, the retailer adjusts its revenue to maximize its own margin according to the wholesale price; finally, the retailer sets the sales price of the product. The dual-channel model of retailers is similar: the manufacturer decides the wholesale price ω2 and the online sale price p22, and the main difference is that the retailer also needs to decide its own sale price on the online channel.
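The retailer-led sequence described above can be sketched by backward induction on a price grid: the leader's margin is chosen anticipating the manufacturer's wholesale-price response. The linear demand function, the parameter values and the omission of the network-externality term are simplifying assumptions, not the paper's model.

```python
# Illustrative parameters; linear demand is an assumption, and the
# network-externality term is omitted for clarity.
A = 1.0    # market size
c = 0.2    # manufacturer's unit production cost
GRID = [i / 200 for i in range(201)]   # candidate prices/margins in [0, 1]

def demand(p):
    return max(A - p, 0.0)

def manufacturer_best_omega(m):
    # stage 2: given the retailer's margin m, the manufacturer picks the
    # wholesale price maximizing (omega - c) * D(omega + m)
    return max(GRID, key=lambda w: (w - c) * demand(w + m))

def retailer_profit(m):
    # stage 1: the leader anticipates the manufacturer's best response
    return m * demand(manufacturer_best_omega(m) + m)

m_star = max(GRID, key=retailer_profit)
w_star = manufacturer_best_omega(m_star)
p_star = m_star + w_star
```

With these illustrative numbers the grid search lands near m1 ≈ 0.4, ω1 ≈ 0.4 and p1 ≈ 0.8, the familiar double-marginalization outcome for linear demand; the paper's dual-channel models add the online price as a further stage-2 decision.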
Consumers can buy goods through different channels, but they obtain different utilities from them, and the estimated value of the product on the offline and online channels also differs. For example, in a physical store consumers can immediately obtain product information and then purchase products, while through the network channel consumers can reach more products no matter how far away they are, and with the development of modern logistics the waiting time becomes very short. We therefore denote the degree of substitution or monopoly between online and offline channels by β (0 ≤ β < 1), so that 1 − β indicates the degree to which the online channel cannot monopolize; similarly, γ refers to the manufacturer's credit standing in the mind of the consumer, and 1 − γ correspondingly to the retailer's.
In the single channel model, the consumer's estimated value of the product on the offline channel is V11; in the dual-channel model of manufacturers, the consumer's estimated values of the product on the offline and online channels are V21 and V22; in the dual-channel model of retailers, the consumer's estimated value of the product on the offline channel is V31, while the estimated values of the manufacturer's and the retailer's products on the online channel are V32 and V33.
A consumer selects a channel only when his utility is greater than zero. Without loss of generality, assume customers are heterogeneous and their types follow a uniform distribution. In model one, θ11 denotes the consumers' preference distribution on the offline channel; it represents the consumer's valuation weight for each potential transaction. In model two, we use θ21 instead of θ11 and denote the consumers' preference distribution for the manufacturer's online channel by θ22. In model three, we use θ31 and θ32 instead of θ21 and θ22, and denote the consumers' preference distribution for the retailer's online channel by θ33.
The sales cost of a unit product for the manufacturer on the offline channel is denoted by c1, and for the retailer by c2. Because the cost of the online channel is lower than that of the offline channel, we define k1 as the manufacturer's allocation efficiency relative to the offline channel, and k2 as the retailer's. The smaller the k, the lower the cost; to be representative, k ∈ (0, 1).
α1 expresses the strength of network effects of consumers who select retailer’s
offline channel; α2 expresses the strength of network effects of consumers who
select manufacturer’s online channel; α3 expresses the strength of network effects
of consumers who select retailer’s online channel. Without loss of generality,
α ∈ (0, 1).
In the single channel model, A11 represents the largest potential market demand on the offline channel and Nr1 denotes the number of retailers on the offline channel. In the dual-channel model of manufacturers, we use A21, Nr2 instead of A11, Nr1; A22 represents the largest potential market demand on the online channel and Ns2 denotes the number of manufacturers on the online channel. Due to the widening of the market, A21 + A22 > A11. At the same time, because the manufacturer adds the online channel and attracts some consumers, the number of retailers inevitably decreases, so Nr2 ≤ Nr1. In the dual-channel model of retailers, similarly, A31 replaces A21; A32 represents the largest potential market demand for the manufacturer on the online channel; A33 represents the largest potential market demand for the retailer on the online channel; Nr3 denotes the number of retailers on the online channel; Ns3 denotes the number of manufacturers on the online channel; Nr4 denotes the number of retailers on the offline channel. Similarly, A32 + A33 > A22, Ns3 ≤ Ns2 and Nr4 ≤ Nr2.
3 The Model
Based on the above assumptions, we study the single channel model firstly. The
supply chain mode is shown in Fig. 1.
Assume that the utility function of the consumer on the single channel is:

U11 = V11 + Nr1 θ11 α1 − p11. (1)

According to Eq. (1), for this channel, the consumer's individual rationality (participation) constraint is:

U11 ≥ 0. (2)

So

θ11 ≥ (p11 − V11) / (Nr1 α1). (3)
As a result, according to Eq. (3), the number of consumers who join the retailer's offline channel is:

D11 = A11 ∫_{(p11 − V11)/(Nr1 α1)}^{1} dθ = (Nr1 α1 − p11 + V11) A11 / (Nr1 α1), (4)

D11 ≥ 0. (5)

(1) The manufacturer's profit of the offline platform is:

ΠM1 = (ω1 − c1) D11 = A11 (ω1 − c1)(Nr1 α1 − ω1 − m1 + V11) / (Nr1 α1). (6)
Proposition 1. The manufacturer's profit function is a concave function of the wholesale price, so when the manufacturer's profit reaches its maximum, the optimal wholesale price is:

ω1* = (Nr1 α1 − m1 + c1 + V11) / 2. (7)
(2) The retailer's profit of the offline platform is:

ΠR1 = (p11 − ω1 − c2) D11 = A11 (m1 − c2)(Nr1 α1 − m1 − c1 + V11) / (2Nr1 α1). (8)
Proposition 2. The retailer's profit function is a concave function of the earning of a unit product on the offline channel, so when the retailer's profit reaches its maximum, the optimal margin is:

m1* = (Nr1 α1 − c1 + c2 + V11) / 2. (9)
In the single channel model, the final optimal values for these parameters are as follows:

ω1* = (Nr1 α1 + 3c1 − c2 + V11) / 4, (10)

p11* = (3Nr1 α1 + c1 + c2 + 3V11) / 4, (11)

ΠM1* = (Nr1 α1 − c1 − c2 + V11)² A11 / (16Nr1 α1), (12)

ΠR1* = (Nr1 α1 − c1 − c2 + V11)² A11 / (8Nr1 α1), (13)

Π1* = ΠM1* + ΠR1* = 3(Nr1 α1 − c1 − c2 + V11)² A11 / (16Nr1 α1). (14)
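As a quick sanity check, the closed-form solutions (9)-(13) can be evaluated numerically; the parameter values below are purely illustrative and not taken from the paper.

```python
# Illustrative check of Eqs. (9)-(13); all parameter values are made up.
N, a, c1, c2, V, A = 10.0, 0.5, 1.0, 0.5, 4.0, 100.0  # Nr1, alpha1, c1, c2, V11, A11

m_star = (N * a - c1 + c2 + V) / 2                     # Eq. (9)
w_star = (N * a + 3 * c1 - c2 + V) / 4                 # Eq. (10)
p_star = (3 * N * a + c1 + c2 + 3 * V) / 4             # Eq. (11)
pi_m = (N * a - c1 - c2 + V) ** 2 * A / (16 * N * a)   # Eq. (12)
pi_r = (N * a - c1 - c2 + V) ** 2 * A / (8 * N * a)    # Eq. (13)

# Consistency: the retail price decomposes as p11 = omega1 + m1, and
# Eqs. (12)-(13) imply the retailer's profit is twice the manufacturer's.
assert abs(p_star - (w_star + m_star)) < 1e-9
assert abs(pi_r - 2 * pi_m) < 1e-9
```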
As consumers have their own preferences for online and offline channels,
assume that the utility function of the consumer on the offline channel and
online channel respectively are:
U21 = V21 + Nr2 θ21 α1 − p21 − (1 − β)V21 V22 , (15)
U22 = V22 + Ns2 θ22 α2 − p22 − βV21 V22 . (16)
On the basis of Eqs. (15) and (16), the consumer’s individual rationality
(participation) constraints for these two channels are:
U21 ≥ 0, (17)
U22 ≥ 0. (18)
So

θ21 ≥ (p21 + (1 − β)V21 V22 − V21) / (Nr2 α1), (19)

θ22 ≥ (p22 + βV21 V22 − V22) / (Ns2 α2). (20)
A Study on the Three Different Channel Models in Supply Chain 459
As a result, according to Eqs. (19) and (20), the numbers of consumers who
join retailer’s offline channel and online channel respectively are:
D21 ≥ 0, (23)
D22 ≥ 0. (24)
The manufacturer decides the wholesale price and the online channel’s price
of unit product to maximize their own profits. The manufacturer’s profit is:
ΠM 2 ≥ Π M 1 . (35)
Let (Nr1 α1 − c1 − c2 + V11)² A11 / (16Nr1 α1) = C1, A21 [Nr2 α1 − (1 − β)V21 V22 − c1 − c2 + V21]² / (16Nr2 α1) = C2 and βV21 V22 − k1 c1 + V22 = C3. So according to formula (35), we can get:

α2 ≥ {4(C1 − C2)/A22 + 2Ns2 C3 + √([4(C1 − C2)/A22 + 2Ns2 C3]² − 4Ns2² C3²)} / (2Ns2²). (36)
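The threshold in (36) is straightforward to evaluate once the constants are fixed; the values of C1, C2, C3, A22 and Ns2 below are made-up placeholders chosen only so that the square root is real.

```python
from math import sqrt

# Illustrative evaluation of the lower bound on alpha2 in Eq. (36);
# all parameter values are invented assumptions, not from the paper.
C1, C2, C3, A22, Ns2 = 5.0, 4.5, 0.2, 8.0, 2.0

X = 4 * (C1 - C2) / A22 + 2 * Ns2 * C3     # the repeated bracket in (36)
disc = X ** 2 - 4 * Ns2 ** 2 * C3 ** 2     # term under the square root
alpha2_min = (X + sqrt(disc)) / (2 * Ns2 ** 2)

assert disc >= 0           # the bound is real for these values
assert 0 < alpha2_min < 1  # consistent with alpha in (0, 1)
```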
In this model, consumers have not only different choices of channels but also preferences for the entities within the channels. Therefore, the net utilities to a consumer on the retailer's offline platform, the manufacturer's online platform and the retailer's online platform are given by:
Similarly, the numbers of consumers who join retailer’s offline channel, man-
ufacturer’s online channel and retailer’s online channel respectively are:
D31 ≥ 0, (42)
D32 ≥ 0, (43)
D33 ≥ 0. (44)
Similar to model two, the manufacturer decides the wholesale price and the online channel's price of a unit product to maximize its own profit. The manufacturer's profit in this model is:
Therefore, in the dual-channel model of retailers, the final optimal values for these parameters are as follows:

ω3* = (Nr4 α1 − (1 − β)V31 (V32 + V33) − c2 + 3c1 + V31) / 4, (51)

p31* = (3Nr4 α1 − 3(1 − β)V31 (V32 + V33) + c2 + c1 + 3V31) / 4, (52)

p32* = (Ns3 α2 − βγV31 (V32 + V33) + k1 c1 + V32) / 2, (53)

p33* = (Nr3 α3 − β(1 − γ)V31 (V32 + V33) + k2 c2 + V33) / 2, (54)
ΠM3* = A33 [Nr4 α1 − (1 − β)V31 (V32 + V33) − c1 − c2 + V31]² / (16Nr4 α1) + A34 [Ns3 α2 − βγV31 (V32 + V33) − k1 c1 + V32]² / (4Ns3 α2), (55)

ΠR3* = A33 [Nr4 α1 − (1 − β)V31 (V32 + V33) − c1 − c2 + V31]² / (8Nr4 α1) + A35 [Nr3 α3 − β(1 − γ)V31 (V32 + V33) − k2 c2 + V33]² / (4Nr3 α3). (56)
Finally, consider the prerequisite: the manufacturer’s profit in this model
should be greater than in model one and the retailer’s profit in this model should
be greater than in model two. So for the manufacturer, the optimal profit must satisfy:
ΠM 3 ≥ Π M 1 . (57)
Let A33 [Nr4 α1 − (1 − β)V31 (V32 + V33) − c1 − c2 + V31]² / (16Nr4 α1) = C4 and βγV31 (V32 + V33) + k1 c1 − V32 = C5. So:

α2 ≥ {4(C1 − C4)/A32 + 2Ns3 C5 + √([4(C1 − C4)/A32 + 2Ns3 C5]² − 4Ns3² C5²)} / (2Ns3²). (58)
At last, for the retailer, the optimal profit must satisfy:
Let β(1 − γ)V31 (V32 + V33) + k2 c2 − V33 = C6. According to formula (57):

α3 ≥ {8(C2 − C4)/A33 + 2Nr3 C6 + √([8(C2 − C4)/A33 + 2Nr3 C6]² − 4Nr3² C6²)} / (2Nr3²). (59)
Comparing the three models when these prerequisites are met, we can draw the following conclusions. As the manufacturer and then the retailer choose the online channel, the price of a unit product on the offline channel gradually decreases. In model three, the online channel prices of a unit product for the manufacturer and the retailer are both lower than the online channel price in model two. With the introduction of the online channel, the wholesale price gradually decreases, and the number of consumers who join the retailer's offline channel decreases. When the retailer also chooses the online channel, the number of consumers who join the manufacturer's online channel decreases as well, but the total number of consumers increases. Compared to model one, the manufacturer's profit reaches its maximum when the manufacturer alone chooses the online channel; when the retailer also uses the online channel, the manufacturer's profit falls, but remains higher than in model one. For the retailer, profit reaches its minimum in model two, when only the manufacturer uses the online channel; when the retailer also chooses the online channel, its profit increases above the model-one level. Along with the introduction of the online channel, the overall profit of the channel increases.
4 Numerical Example
This section presents numerical simulations to illustrate the findings. First, we maintain the following assumptions over the simulations. In the single channel model, in order to achieve profitability, the variables must satisfy A11 ≥ p11 ≥ ω1 ≥ c1. In model two, in order to prevent retailers from ordering goods through the online channel and to ensure profitability, A21 ≥ p21 ≥ ω2 ≥ c1, p22 ≥ ω2 ≥ c1 and A22 ≥ p22 ≥ k1 c1 must hold. Similarly, in model three, A31 ≥ p31 ≥ ω3 ≥ c1, p32 ≥ ω3 ≥ c1, A32 ≥ p32 ≥ k1 c1, p33 ≥ ω3 ≥ c1 and A33 ≥ p33 ≥ k2 c2.
According to the formulas A11 ≥ p11 ≥ ω1 ≥ c1, A21 ≥ p21 ≥ ω2 ≥ c1, A31 ≥ p31 ≥ ω3 ≥ c1 and formulas (2), (21), (22), (42), (43) and (44), the prerequisite is:
Comparing the three models, the wholesale price of a unit product gradually decreases with the introduction of the online channel, which benefits the retailer (Figs. 6 and 7).
With the increase of the network externality, the number of consumers who join the offline and online channels increases in all three models. Comparing the three models, due to the introduction of the online channel some consumers transfer to the online channel, so the number of consumers who join the offline channel decreases. When the retailer chooses to introduce the online channel, the number of consumers who join the manufacturer's online channel also decreases. As shown in Fig. 7, the total number of consumers increases with the network externalities and decreases with the degree of substitution or monopoly between the online and offline channels. With the introduction of the online channel, the total number of consumers increases in turn. At the same time, consumers have more channels to choose from, manufacturers and retailers face more demand for products, and these behaviors can increase consumption and create more social value.
As Fig. 8 shows, all profits increase with the network externalities. In model one, the manufacturer's profit is the lowest. In model two, when the manufacturer chooses the online channel, its profit reaches the maximum. In model three, when the retailer also chooses the online channel, the manufacturer's profit falls somewhat but remains higher than in model one. In contrast, the retailer's profit in model one is lower than in model three but higher than in model two. In model two, when the manufacturer chooses the online channel, the retailer's profit reaches the minimum. In model three, when the retailer chooses the online channel, its profit increases and reaches the maximum. The total profit increases with the introduction of the online channel. That means that when manufacturers and retailers both use the online channel, they create more social value.
5 Conclusions
This paper considers the choice of the retailer, based on the choice of the manufacturer, under network externalities. We focus on the influence of the entities' different choices. By contrasting the single channel model, the dual-channel
1 Introduction
In this section we briefly explain Braille and degraded Braille.
Some Japanese Braille characters are constructed with a prefix, as in the last three examples in Fig. 1. Although Japanese text may be written vertically or horizontally, Japanese Braille is written horizontally only.
The most primitive tool for writing Braille is the slate and stylus. To use it, the user presses the tip of the stylus down through a small rectangular hole to make Braille dots. The user is therefore required to punch the dots as a mirror image, in reverse order. Nowadays, for publishing a book in Braille, texts are input according to Braille grammar using a Braille text editor, revised, and printed in Braille, as in Fig. 2.
Braille is a tactile reading system: one reads the raised dots with the pad of a finger. Figure 3 is a typical page of a Braille book. Braille books that are read
Construction of Restoration System for Old Books Written in Braille 471
frequently become dirty, holes open in dots, and dots collapse. Figures 4 and 5 show fresh dots and a part of a page containing both normal cells and mirror images of cells, respectively. Figures 6, 7, 8 and 9 show degraded Braille cells: a collapsed cell, one with a hole opened, a dirty one and a distorted one, respectively. Tears, creases and stains in pages are shown in Figs. 10, 11 and 12, respectively.
2 Restoration System
In this subsection we explain our project of the restoration system for old books
written in Braille.
As noted above, the system has two main components, a machine learning system
for recognition of Braille and an error correction system.
Construction of Restoration System for Old Books Written in Braille 473
Fig. 13. Normal dot Fig. 14. Mirrored dot Fig. 15. Background
image image image
OpenCV is a well-known computer vision library with object detection functions. Detectors for faces, eyes, mouths and so on are provided, and one can also train detectors for arbitrary objects. To build an object detector, one prepares many images of the target object and runs machine learning: characteristic features are extracted from the images and learned by the machine. The resulting set of learned features is called a cascade classifier. OpenCV provides several algorithms for building such classifiers. In our system, we adopt the traincascade routine with LBP features to search for dots in scanned pages.
As the cells are regularly aligned on the paper, we can correct the skew of a page by using the alignment of the dots or the edge of the paper. The distribution of the detected dots then determines the row lines in the page.
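The row-line step can be sketched in a few lines: dots in the same Braille row share similar vertical coordinates, so clustering the detected y values with a gap threshold recovers the rows. The function name and the threshold value are our own illustrative choices, not part of the system's published code.

```python
# Sketch: estimate Braille row lines from the y coordinates of detected
# dots by clustering sorted values whose gaps stay below a threshold.
def row_lines(y_coords, gap=10):
    rows, current = [], []
    for y in sorted(y_coords):
        if current and y - current[-1] > gap:
            rows.append(sum(current) / len(current))  # row centre
            current = []
        current.append(y)
    if current:
        rows.append(sum(current) / len(current))
    return rows
```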
It is necessary to treat old Braille books carefully because of the degradation of the cells in them, as mentioned above. Therefore we do not adopt a flatbed scanner but a noncontact scanner, the Fujitsu ScanSnap SV600, which operates by taking an elevated view (see Fig. 16) at a resolution of 600 dpi in grey scale. If a Braille book is printed on both sides and all pages are scanned, every printed cell is scanned twice: once as a normal image and once as a mirror image. This gives us redundancy for the interpretation of cells.
The next step is the dot detection by OpenCV mentioned above. The result
is given in Figs. 17 and 18.
474 Y. Shimomura et al.
Fig. 17. Dot detection by OpenCV Fig. 18. Close up of detected dots
As Braille books are usually printed on both sides, we must distinguish between normal dots and mirrored ones. Both cell images are given again in Figs. 19 and 20. We have already obtained many dot images, so we use them as reference data for dot classification in a deep neural network. This procedure gives us a normal dot distribution and a mirrored distribution. By flipping the mirrored distribution left to right, we obtain a second candidate for the normal dot distribution.
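The left-right flip amounts to reflecting each dot's x coordinate about the page width; a minimal sketch (function and variable names are ours):

```python
# Sketch: reflect dots detected on the reverse side so that they align
# with the front-side coordinate system (x measured from the left edge).
def unmirror(dots, page_width):
    return [(page_width - x, y) for (x, y) in dots]
```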
Fig. 21. “1xxxxx” cell Fig. 22. “x2xxxx” cell Fig. 23. “12xxxx” cell
Fig. 24. “1234 × 6” cell Fig. 25. “12345x” cell Fig. 26. “123456” cell
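The cell labels in Figs. 21, 22, 23, 24, 25 and 26 name the raised dots of a cell (digits for raised dots, "x" for absent ones). Such a cell can be encoded as a 6-bit code, which also happens to index the standard Unicode Braille Patterns block; a sketch, with an illustrative function name of our own:

```python
# Sketch: encode a Braille cell given its raised dot numbers (1-6).
# Dot i maps to bit i-1; the same 6-bit code indexes the Unicode
# Braille Patterns block starting at U+2800.
def cell_to_char(dots):
    code = 0
    for d in dots:
        code |= 1 << (d - 1)
    return chr(0x2800 + code)
```

For example, the "12xxxx" cell has dots 1 and 2 raised, giving code 0b000011.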
Finally the codes are converted into ink-spots expressing Braille and are output (see Fig. 27). For codes with the error flag, the ink-spots and candidate Japanese characters are output and corrected by hand.
3 Concluding Remarks
In this paper, we explained our project to restore old Braille books by converting them into machine-readable electronic data. Each Braille book is scanned as images. With machine learning, Braille dots are detected by image recognition technology, classified and identified. Furthermore, error corrections are executed. Finally, the machine-readable character codes are stored.
Shimomura and colleagues first studied Braille 35 years ago. We resumed our study of Braille at the request of the Japan Braille Library and the Ishikawa Braille Library. In recent years, Braille books have been converted into electronic data, stored, and printed by computer processing. However, older books have not been converted into electronic data and remain only as Braille books; for some of them the original text can no longer be found. The request was to convert these Braille books into digital data before they degrade and become unreadable. Responding to this request, we resumed our research, and we want to restore degraded Braille books quickly.
References
1. Shimomura Y, Mizuno S (1981) Translation of machine readable catalog/cataloging
into braille and vice versa. Bull Kanazawa Women’s Junior Coll 23:111–117
2. Shimomura Y, Mizuno S, Hasegawa S (1983) Automatic translation of machine read-
able data into braille. The Institute of Electronics, Information and Communication
Engineers, IEICE Technical report ET82-9:57–62
3. National Association for Providing Facilities for the Visually Impaired (2003) Braille
transcription guide, 3rd edn
Enterprise Operation Management
Discrimination of Personal Opinions
from Commercial Speech Based
on Subjectivity of Text
1 Introduction
of individuals, social networking services like Twitter and customer review pages
in E-commerce sites. Owing to the importance of personal opinions on the Web,
many opinion mining techniques have been proposed [3,5–9]. One of the major
topics in opinion mining is to identify the polarity of a given text whether the
expressed opinion in a text is positive, negative, or neutral.
However, texts collected from web sites are often contaminated with com-
mercial speech. Commercial speech is generated by companies and individuals
for the intent of making a profit, which is essentially different from personal
opinions. Therefore, differentiating between personal opinions and commercial
speech is important for obtaining useful results in opinion mining.
In our previous study, we proposed a method for discriminating personal
opinions from commercial speech [1,2]. In the previous study, we assumed that
subjective expressions such as negative meaning words like “bad” and emoticons
like “:)” are more likely to be used for describing personal opinions than commer-
cial speech. Under the assumption, we modeled the total subjectivity score of a
given text for discriminating between personal opinions and commercial speech.
However, in the previous method, the dictionary of subjective expressions was
generated by human labor and it was strongly dependent on the Japanese gram-
mar. It is, therefore, difficult to apply the method to other languages.
In this paper, we propose a language-independent method for discriminating
personal opinions from commercial speech. The proposed method focuses on the
statistical difference of appearance ratio of each word between personal opinions
and commercial speech corpora. Based on the difference, the subjectivity of each
word is estimated. Assuming that words having high subjectivity are frequently
used in personal opinions, the proposed method judges whether a given text is
personal opinions or commercial speech.
The rest of the paper is organized as follows. In Sect. 2, we explain the outline
of related works. In Sect. 3, we describe the details of the proposed method. In
Sect. 4, we explain experiments using corpora written in different languages. In
the section we show that the proposed method can accurately discriminate the
two types of documents independently from languages.
2 Related Works
The process of opinion mining can be divided into three phases: data collection,
polarity analysis and post processing such as visualization of mining results. In
this paper we focus on the data collection phase. Many studies on opinion mining aim at analyzing the polarity (positive, negative or neutral) of a given sentence or text [3,5–9]. The text corpora used in these studies are collected from various sources such as social networking service sites like Twitter and the customer review pages of E-commerce sites. These studies implicitly assume that all the analyzed documents are personal opinions. However, the texts gathered from these sources are often contaminated with commercial speech.
There is an obvious difference between the intents of the writers of personal opinions and of
Discrimination of Personal Opinions from Commercial Speech 483
commercial speech. Therefore, for people who want to collect real public impressions about subjects, personal opinions should be discriminated from commercial speech.
In our previous study [1,2], we proposed a method for discriminating personal
opinions from commercial speech. In the previous method, in order to discrim-
inate the two types of documents, we focused on the four kinds of subjective
expressions: negative meaning expressions, sentence-final particles, interjections,
and specific symbols such as emoticons. Measuring the appearance ratios of these
expressions, we defined the subjectivity score of a given text. Using the scoring
model, we judged texts having high subjectivity scores as personal opinions.
However, the four kinds of subjective expressions were selected by human labor, and these expressions are strongly dependent on Japanese grammar. Therefore, it is difficult to apply the previous method to other languages.
In this paper, we propose a language-independent method, where the dictio-
nary of subjective expressions is automatically generated by analyzing two kinds
of corpora: texts of personal opinions and ones of commercial speech. Based on
the statistical difference in the appearance ratio of each word between the two corpora, the subjectivity score of each word is defined. Subjective words frequently occurring in personal opinions can thus be detected automatically. Since the scoring model is independent of language, the proposed method can be applied to any language by simply changing the corpora.
3 Proposed Method
In order to discriminate personal opinions from commercial speech, the proposed
method focuses on the difference of appearance ratios of words between two text
corpora: corpus Cp and corpus Cn . Here, corpus Cp consists of text documents
describing personal opinions and corpus Cn consists of text documents describing
commercial speech. Using the two corpora, the subjectivity score of word w is
defined as
s (w) = sp (w) − sn (w) , (1)
where sp (w) and sn (w) are defined as
sp(w) = rp(w) / (rp(w) + rn(w)), (2)

sn(w) = rn(w) / (rp(w) + rn(w)), (3)

here, rp(w) and rn(w) are the appearance ratios of word w in the corpora Cp and Cn, respectively, defined as follows:

rp(w) = Σ_{d∈Cp} f(w, d) / Σ_{d∈Cp} Σ_{w′∈W(d)} f(w′, d), (4)

rn(w) = Σ_{d∈Cn} f(w, d) / Σ_{d∈Cn} Σ_{w′∈W(d)} f(w′, d), (5)
484 Y. Seino and T. Hayashi
here, W (d) is the set of words occur in document d, and f (w, d) is the frequency
of word w in document d.
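Equations (1)-(5) can be sketched in a few lines of code; the corpora used in the example are illustrative toys, not the paper's data.

```python
from collections import Counter

# Minimal sketch of Eqs. (1)-(5). A corpus is a list of documents,
# each a list of words. Note that s(w) = sp(w) - sn(w) simplifies
# algebraically to (rp - rn) / (rp + rn).
def appearance_ratios(corpus):
    counts = Counter(w for doc in corpus for w in doc)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def subjectivity(word, r_p, r_n):
    p, n = r_p.get(word, 0.0), r_n.get(word, 0.0)
    return (p - n) / (p + n) if p + n > 0 else 0.0
```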
Since subjective words like negative-meaning words are more frequently used in personal opinions than in commercial speech [1,2], sp(w) is expected to be larger than sn(w) if word w is a subjective expression. The subjectivity scores of such words are expected to be positive (s(w) > 0). On the other hand, the subjectivity scores of words used in commercial speech tend to be negative.
Eliminating extremely-common words from a document is an important pre-
process for text mining because these words give little meaningful information.
In order to detect such common words, the document frequency of each word in
a corpus is calculated. If word w occurs in over 99% of the documents in both corpus Cp and corpus Cn, the frequencies fp(w) and fn(w) are set to 0.
Based on the subjectivity scores of words, the total subjectivity of a document d can be calculated as follows:

S(d) = (1 / |W(d)|) Σ_{w∈W(d)} s(w), (6)
where W (d) is the set of words which appear in document d. If the subjectivity
of a document is positive (S(d) > 0), the proposed method judges the text
as personal opinions; if otherwise (S(d) ≤ 0), the method judges the text as
commercial speech.
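The judgment rule of Eq. (6) reduces to averaging word scores over the distinct words of a document; a sketch, where `word_scores` is assumed to hold precomputed s(w) values:

```python
# Sketch of Eq. (6): mean subjectivity over the set W(d) of distinct
# words; positive means personal opinion, otherwise commercial speech.
def judge(document_words, word_scores):
    distinct = set(document_words)
    s_d = sum(word_scores.get(w, 0.0) for w in distinct) / len(distinct)
    return "personal" if s_d > 0 else "commercial"
```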
4 Experimental Results
In order to confirm that the proposed method works independently of language and subject, we conducted six tasks (T1–T6) using different corpora. For corpus Cp (personal opinions) and corpus Cn (commercial speech)
corpora. As corpus Cp (personal opinions) and corpus Cn (commercial speech)
used in each task, we collected texts from different websites. Table 1 shows the
source websites, the number of documents in each corpus, the language of the
texts, and the topic of the texts. As shown in the table, in task T2, T3 and T4,
the texts in corpus Cp and corpus Cn (personal opinions and commercial speech)
were collected from the same websites. In these websites, documents are catego-
rized into user reviews and professional reviews. In these tasks, the user reviews
were assigned as the texts of corpus Cp (personal opinions) and the professional
reviews were assigned as the texts of corpus Cn (commercial speech). There is an obvious difference between the intents of the writers of the two types of documents: professional writers tend to emphasize positive comments about subjects for their sponsors and seldom report negative comments, while personal writers report any comments about subjects. For people who want to collect real public impressions about subjects, user reviews would be more useful than professional comments. Therefore, professional reviews were regarded as commercial speech in these tasks.
The purpose of the experiments is to confirm whether the texts in corpus
Cp (personal opinions) and the text in corpus Cn (commercial speech) were
The F-value is the harmonic mean of the precision P and the recall R, i.e., F = 2PR / (P + R). The precision P and the recall R are defined as

P = |Drelevant ∩ Dclassified| / |Dclassified|, (8)

R = |Drelevant ∩ Dclassified| / |Drelevant|, (9)
where Drelevant is the set of relevant texts, i.e., the texts of personal opinions in
the test set, and Dclassified is the set of documents classified as personal opinions
by the proposed method.
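The precision, recall and F-value can be computed directly from the two document sets; a sketch using set operations (the sets in the usage example are illustrative):

```python
# Sketch of Eqs. (8)-(9) and the F-value (harmonic mean of P and R).
def precision_recall_f(relevant, classified):
    hit = len(relevant & classified)
    p = hit / len(classified)   # Eq. (8)
    r = hit / len(relevant)     # Eq. (9)
    return p, r, 2 * p * r / (p + r)
```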
The experimental results are shown in Table 2. In the table, the minimal,
maximal and average F-values across the 10 rounds are shown for each task.
From the table, we can confirm that high classification performance (F̄ ≥ 0.90) was obtained in all the tasks (T1–T6). These results indicate that the proposed method can discriminate personal opinions from commercial speech independently of language and subject.
Figure 1 shows the distributions of the subjectivity scores of the texts in corpus Cp and corpus Cn used in each task. As shown in the figure, personal opinions
and commercial speech can be clearly separated by the subjectivity score. These
results indicate the effectiveness of the proposed scoring.
We analyzed frequently occurring words in personal opinions and commercial speech, respectively. Table 1(a) and (b) shows typical words in personal opinions and commercial speech, respectively. We have found that the words listed in Table 1(a) have high subjectivity scores and the words in Table 1(b) have negative subjectivity scores. As shown in the table, in personal opinions, subjective expressions, sentence-final particles (in Japanese), emoticons (in Japanese), and first-person singulars (in English and French) frequently occur. On the other hand, in commercial speech, first-person plurals (in English and French) and neutral expressions, such as words just explaining the specification of a product, frequently occur. While negative-meaning expressions such as "bad" frequently occur in personal opinions regardless of language, the appearance ratios of these words are
Fig. 1. The distributions of the subjectivity scores of the texts in corpus Cp (personal opinions) and corpus Cn (commercial speech) used in each task
5 Conclusion
References
1. Hayashi T, Abe K, Onai R (2008) Retrieval of personal web documents by extract-
ing subjective expressions. In: International conference on advanced information
networking and applications - workshops, pp 1187–1192
2. Hayashi T, Abe K, Roy D, Onai R (2009) Discrimination of personal web pages by
extracting subjective expressions. Int J Bus Intell Data Min 4(1):62–77
3. Khan FH, Bashir S, Qamar U (2014) Tom: Twitter opinion mining framework using
hybrid classification scheme. Decis Support Syst 57(3):245–257
4. Kohavi R (1995) A study of cross-validation and bootstrap for accuracy estimation
and model selection. In: International joint conference on artificial intelligence, pp
1137–1143
5. Rao S (2016) A survey on sentiment analysis and opinion mining. In: International
conference on advances in information communication technology & computing,
p 53
6. Ravi K, Ravi V (2015) A survey of opinion mining and sentiment analysis. J Knowl
Based Syst 89(C):14–46
7. Selvam B, Abirami S (2013) A survey on opinion mining framework. Int J Adv Res
Comput Commun Eng 3(9):3544–3549
8. Vinodhini G, Chandrasekaran RM (2012) Sentiment analysis and opinion mining:
a survey. Int J Adv Res Comput Sci Softw Eng 2:282–292
9. You Q (2016) Sentiment and emotion analysis for social multimedia: Methodologies
and applications. In: ACM on multimedia conference, pp 1445–1449
Shift in the Regional Balance of Power from
Europe to Asia: A Case Study of ICT Industry
Zahid Latif¹(B), Jianqiu Zeng¹, Shafaq Salam¹, Zulfiqar Hussain¹, Lei Wang¹, Nasir Jan², and Muhammad Salman³
¹ School of Economics and Management, Beijing University of Posts and Telecommunications, Beijing 100876, People's Republic of China
zahid25latif@yahoo.com
² School of Economics, Beijing Normal University, Beijing 100088, People's Republic of China
³ Department of Economics, Gomal University, Khyber Pakhtunkhwa, Pakistan
Abstract. Over the last two decades, the ICT sector has become one of the
most innovative service sectors, affecting the living standards of people
all over the world. At the beginning of the 21st century, several Asian
countries reformed their ICT sectors and invested enormous amounts in the
development of this sector. Developed countries in the European Union (EU),
on the other hand, faced a series of crises that badly affected the
diffusion of this sector. Consequently, the EU countries lost their leading
position in the field of information technology, and emerging Asian
countries such as China, India, and South Korea overtook the EU in this
field. These countries now have strong IT infrastructures, R&D sectors, and
IT research centers working for the development of ICT. This paper
investigates the reasons for this shift of the balance of digital power
from Europe to Asia.
1 Introduction
The global economy faces many challenges as advanced economies decline and
emerging economies continue to expand. The level and duration of these
developments are unprecedented. Recent developments in information and
communication technology (ICT), such as cloud computing, also affect economic
trends. We can describe this trend as a shift in the regional balance of
power. The world's economic balance of power is shifting quickly, and the
trend has been accelerated by the global recession. China has overtaken the
United States as the world's largest economic power, and India has entered
the race as another big economy. After the great
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 39
global recession, the world's balance of economic power in terms of gross
domestic product (GDP) gradually shifted to the South and East [9]. The
western industrialized countries are currently trying to regain growth
momentum after the recession, but they have not fully recovered yet. The
developing nations, including the Asian countries, on the other hand,
suffered comparatively less than the industrialized countries and recovered
rapidly after the recession. In the following years, the developing Asian
countries accelerated their industrial growth, particularly in the
information and communication sector [12,25].
A steep decline began with 9/11, which set off a steady attrition of
economic certainty that culminated in the great geopolitical setback of the
western financial crisis [1,7]. It has become a truth of the 21st century
that the western world (Europe) is rapidly losing its superiority in the
field of ICT and is being replaced by a new international system shaped by a
geographical entity known as Asia [7]. Asia has the greatest number of
internet users in the world, accounting for almost half of the total: based
on internet usage and population statistics, Asia covers 45.7% of the
world's internet users, while the rest of the world covers the remaining
54.3% (see Table 1 and Fig. 1).
The above analysis shows internet penetration and usage in Asia and the rest
of the world. The figures clearly describe the shift of technology towards
Asia. Although, due to low GDP and poverty, many people in Asian countries
do not have access to the internet, the figures show a rising tendency for
these countries. China has one of the biggest communication networks in
Asia, as well as in the whole world. The ICT semiconductor industry has
become one of the key industries of the People's Republic of China, which
has huge potential to do business in ICT. This industry has undergone rapid
growth and development during the past decade [15]. The ICT sector in Asia
includes goods and services that process, transmit or receive information.
It includes technologies such as hardware, software, computer services,
microelectronics, e-learning, e-business, e-health and multimedia, as well
as emerging technologies such as photonics, fixed and mobile network
convergence, life sciences, environmental sciences, the Internet of Things,
the mobile internet, cloud computing and digital imaging [26].
2 Research Background
According to the realist approach, the potential for clashes among great
powers is an old story in the international system [4,8,19]. European
Commission research scholars, writing in the EU monthly magazine, predicted
a multi-polar world by 2025. They described it as likely that the world
would become truly multi-polar, reflecting a new balance of power and the
loss of US leadership. They further predicted that, even if the US remains
the first military power, the scientific and technological advancement of
some Asian states in new irregular war tactics such as cyber-war and
cyber-attacks will weaken US hegemony in information and communication
technology [4,6,11,23]. Some authors attached great importance to ICT even
four decades ago, predicting the rise of a post-industrial society that
would be information-centered and marked by a shift from production to
service jobs [18,27,31].
The world's economic balance of power is shifting rapidly from North to
South, and the trend has been accelerated by the global recession [1,9,30].
Kenichi Ohmae developed the concept of a "Triad", predicting that the world
economy, including information technology, would be led by the United
States, Japan, and the European Union [13]. In present circumstances,
however, Ohmae's concept seems to have been superseded by a new order
consisting of China, the United States, and India in the field of ICT. Uri
Dadush presented a model in a chapter of the book "Handbook of Emerging
Economies", in which he argued that the shift of the technological balance
from North to South has accelerated since the global recession. He
considered technology a more important element for the economic growth of
both groups than any other factor [16,24].
The diffusion of ICTs, like that of other innovative technologies, proceeds
through three stages: introduction, growth and maturity [17]. ICT skills and
infrastructure vary from country to country and are analyzed according to
these three stages.
Figure 3 shows that as a country develops, not only does the mass of ICT
users and practitioners increase, but the necessary infrastructure for ICT
must also be
developed. During the growth stage the diffusion rate is fast and remains so
until maturity is attained. The conceptual model of ICT diffusion will
enable governments and policy makers to adjust their goals and strategies in
order to move from one stage to the next. The proposed model will help
policy makers improve ICT skills and infrastructure so that a country can
reach the next stages of the model.
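The three-stage pattern above is commonly formalized as a logistic (S-shaped) adoption curve. The sketch below is illustrative only: the parameters and the 10%/90% stage cut-offs are our own assumptions, not taken from [17].

```python
import math

def logistic_diffusion(t, saturation=1.0, rate=1.0, midpoint=0.0):
    """S-shaped adoption curve: slow introduction, fast growth, then maturity."""
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

def diffusion_stage(adoption, saturation=1.0):
    """Classify adoption into the three stages; the 10%/90% thresholds are illustrative."""
    share = adoption / saturation
    if share < 0.10:
        return "introduction"
    if share < 0.90:
        return "growth"
    return "maturity"
```

For example, a country well before the inflection point of the curve sits in the introduction stage, while one at the inflection point (50% adoption) is in the fast-diffusing growth stage.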
Recent ICT development in the Asian region has accelerated the GDP growth of
the emerging Asian countries. The region also receives massive investment in
ICT, both local and foreign; most Asian countries attract substantial
investment through the ICT sector. In this regard, the People's Republic of
China is at the top among the emerging states, followed by Russia, Brazil,
and India. China is the biggest supplier of ICT equipment, while India has
developed an IT city, Bangalore, known as Silicon City.
The share of ICT in GDP has been decreasing gradually in Europe over the
last several years, dropping from 6.6% in 2009 to 5.9% in 2013. The emerging
countries (mostly Asian and Asia-Pacific countries), by contrast, accounted
for a third of telecom services revenue in 2009 and for almost 40% of total
world telecom revenue in 2013 [3,5]. As a result, the gap in ICT growth and
development between the developed and the emerging countries widened in
2013. In that year the balance of power in ICT kept shifting to the emerging
countries, which came to account for 80% of global growth and contributed
more than a third of the world ICT market [29]. The comparison of ICT market
growth and GDP is shown in Table 2.
Table 3 shows the regional breakdown of the digiworld, along with some
predictions about the global ICT market. Figure 4 gives a graphical picture
of the percentage shares of the ICT market throughout the world. It clearly
shows that the emerging Asian countries surpass Europe in ICT market share,
following the US.
Some scholars also state that the rise of Asia, particularly China, will
enable Beijing to pressure the US and the other western countries not only
through military power but in the field of technology as well, leading to an
"open and intense geopolitical rivalry"; China will rise and become a new
power throughout Asia [10,22,28]. According to an analysis by the European
Commission Joint Research Centre, China, Korea and Taiwan are the three
countries most highly specialized in ICT manufacturing, and they have
continued to strengthen their positions in the global ICT market [14]. China
is the most obvious rising power in the present world scenario regarding
ICT. In this study, China is the subject matter, not only because it belongs
to the group of very rapidly growing emerging economies [2], but also
because it took the lead among the other regional states through both its
specialization in the manufacturing of IT equipment and its economic size.
Moreover,
Fig. 5. Shift in the regional balance of power from Europe to Asia (ICT)
India and the other emerging Asian states are now promoting their ICT
sectors and can possibly surpass a few major western states in the coming
decades. The transfer of digital (ICT) power from the West to Asia is
gaining momentum and may soon change the perspective for dealing with global
challenges. The West is already aware of Asia's increasing potency in ICT.
The key lesson from the international point of view is that China, Korea and
Taiwan, having large ICT manufacturing sectors, have important potential for
growth, particularly as they command important resources such as
expenditures and qualified staff; in other words, they have strengthened
their ICT and R&D sectors. The US, meanwhile, has dominance over the EU due
to its high productivity in manufacturing as well as its high R&D resources,
which allow it to retain its competitiveness in the global ICT market
[20,21]. The detailed comparison of the contributions of the Asian countries
and Europe to the world ICT market is given in Fig. 5.
5 Conclusion
The facts and figures outlined in this paper show that the balance of power
in information and communication technology (ICT) is drifting slowly and
gradually from Europe towards Asia. Asia is a densely populated region with
more than 2.8 billion people, living in a land fertile for the IT and
communication industry, with cheap labor and capital, rich raw resources and
a tremendous capacity to absorb new technologies and developments. All these
factors are molding the situation in favor of the emerging Asian countries
and shifting the regional balance of power in ICT towards Asia. The
development of ICT has been a main objective of most Asian countries seeking
to shift the global communication and technology environment towards the
Asian region; the New Silk Road is an example of this strategy. The US, as
the ICT giant, has its own strategies for occupying the information and
communication market, which will require the sacrifice of core national
interests in the short run. At the same time, China and the other emerging
Asian countries can achieve dominance in the ICT sector only if, in the long
run, they ensure the interests of other countries in their own ways.
References
1. Altman RC (2009) The great crash, 2008: a geopolitical setback for the west.
Foreign Aff 88(1):2–14
2. Bannister F, Connolly R (2014) ICT, public values and transformative government:
a framework and programme for research. Gov Inf Q 31(1):119–128
3. Bastarrica F (2014) Mergers and acquisitions HR index. SSRN Electron J. http://
dx.doi.org/10.2139/ssrn.2484775
4. Buzan B (2013) China and the US: comparable cases of ‘peaceful rise’ ? Chin J Int
Polit 6(2):109–132
5. Chesbrough HW (2006) The open innovation model: implications for innovation in
Japan
6. Coen D (2009) Business lobbying in the European Union. Lobbying the European
Union Institutions Actors & Issues
7. Cox M (2012) Power shifts, economic change and the decline of the west? Int Relat
26(4):369–388
8. Cox M, Booth K, Dunne T (1999) The interregnum: controversies in world politics
1989–1999. Cambridge University Press, Cambridge
9. Dadush U, Stancil B (2010) The world order in 2050
10. Friedberg AL (2000) Arming China against ourselves. Commentary 108(1):27–33
11. Halliday F (1996) Book review: Michael COX, US foreign policy after the cold war:
superpower without a mission? Millenn J Int Stud 25(1):174–175 (London: Royal
institute of international affairs and pinter, 1995, 148 pp, no price given)
12. Hawksworth J, Tiwari A (2011) The world in 2050: the accelerating shift of global
economic power: challenges and opportunities. PwC-PricewaterhouseCoopers LLP,
London
13. Hodara J, Ohmae K (1987) Beyond national borders: reflections on Japan and the
world. Foreign Aff 65(5):1103
14. Hoge JF (2004) A global power shift in the making: is the United States ready?
Foreign Aff 83(4):2–7
15. Kunigami A, Navas-Sabater J (2010) Options to increase access to telecommuni-
cations services in rural and low-income areas. Access & Download Statistics
16. Looney RE (2014) Handbook of emerging economies
17. Mamaghani F (2010) The social and economic impact of information and commu-
nication technology on developing countries: an analysis. Int J Manage 18(7):79–79
18. Masuda Y (1980) The information society: as post-industrial society. Institute for
the Information Society
19. Mearsheimer JJ (2001) The tragedy of great power politics. Foreign Aff 80(6):173
20. Nepelski D, Prato GD (2012) Internationalisation of ICT R&D: a comparative
analysis of Asia, the European Union, Japan, United States and the rest of the
world. Asian J Technol Innov 20(2):219–238
21. Nepelski D, De Prato G, Stancik J (2011) Internationalisation of ICT R&D. Insti-
tute for Prospective Technological Studies, Joint Research Centre, European Com-
mission
22. Ross RS (2006) Balance of power politics and the rise of China: accommodation
and balancing in East Asia. Secur Stud 15(3):355–395
23. Ross RS (2012) The geography of the peace: East Asia in the twenty-first century.
Int Secur 23(4):81–118
24. Schaub M (2009) Foreign investment in China-entry, operation and exit strategy.
CCH Hong Kong Limited
25. Singh A (1997) Financial liberalisation, stockmarkets and economic development.
Econ J 107(442):771–782
26. Sudan R, Bank W (2010) The global opportunity in it-based services: assessing
and enhancing country competitiveness. World Bank Publications, pp 214–219
27. Toffler A (1991) Power shift: Knowledge, wealth, and violence at the edge of the
21st century. Powershift Knowl Wealth Violence Edge Century 5(4):91–92
28. Williams F (1989) Measuring the information society 4(1):82–83
29. Wohlers M, Giansante M et al. (2014) Shedding light on net neutrality: towards
possible solutions for the Brazilian case. In: International Telecommunications Soci-
ety Conference, pp 87–99
30. WorldBank (1997) Global economic prospects and the developing countries. World-
Bank
31. Zhao Y (2014) Communication, crisis, & global power shifts: an introduction. Int
J Commun 8(2):26
The Empirical Evidence of the Effect
on the Enterprises R&D from Government
Subsidies, Political Connections
and Rent-Seeking
1 Introduction
enterprises are trying to gain more financial resources or policy benefits
through such "recessive capitals", namely the PCs [23,29,30].
In these situations, enterprises in China must have a strong motivation to
establish PCs by rent-seeking [4]. Therefore, the following hypothesis is
proposed:
Hypothesis 1: Rent-seeking can help enterprises establish PCs.
At present, formal institutions in China are weak [5]. Corruption and
rent-seeking distort fiscal and monetary policy [3,8]. Entrepreneurs are
more likely to spend time and resources on rent-seeking rather than on
productive activities in order to influence governments [4]. Under weak
institutional contexts, enterprises have to satisfy the government's
requirements, which aggravates the burden on enterprises [14]. In this
condition, government subsidies can deteriorate corporate performance.
Quevedo [7] proposed that government S&T subsidies have a negative effect on
R&D expenditure, namely the crowding-out effect. Huang [9] made a case study
of Chinese listed companies between 2001 and 2007, finding that PCs have a
promoting effect on the performance of SOEs but a negative impact on private
listed enterprises.
Following the normal logic of enterprise behavior, the following hypothesis is
proposed:
Hypothesis 2: Enterprises intend to establish PCs by rent-seeking in order to
obtain more government S&T subsidies, but this leads to a crowding-out
effect on R&D input.
3.1 Sample
The original sample contains the non-financial private A-share companies
listed on the Shanghai and Shenzhen Stock Exchanges during 2008–2012. After
basic sample processing, the final sample comprises 361 listed companies
with 1,614 firm-year observations in total.
The major data used in the empirical part include: detailed government
subsidies data, PCs data, basic enterprise data, post-crime data at the
provincial level and excess administration expense data. The data were
derived from the CSMAR and WIND databases, the China inspection yearbook and
the China statistical yearbook. Most of these required manual calculation.
On the basis of model (3), model (4) adds the PCs variable Poli and runs the
sub-sample regression for companies with local PCs. In addition, the
regressions are divided into groups according to the corruption level of the
provinces. The corruption level is measured by the number of corruption and
bribery cases registered per 10,000 public officers, which is exactly the
ratio of registered post criminal cases at the provincial level to the
number of civil servants.
In models (1) and (2), X is a vector of control variables including: the
logarithmic company size (log Asset), return on assets (ROA), debt-to-assets
ratio (Dbassrt), establishment years (EstAge), the logarithm of the top
three executive compensations (LogMane3Pay), and the duality of CEO and
chairman
Table 1. Descriptive statistics of variables from groups with different PCs (10,000
yuan)

Variable                            | Without PCs             | With PCs                | With local PCs
                                    | Mean    sd      Med     | Mean    sd      Med     | Mean    sd       Med
S&T subsidies                       | 2.62    6.52    0       | 4.54    31.21   0       | 4.75    33.4     0
Operating income                    | 3037.4  3993.23 1607.68 | 6495.62 13721.8 2260.71 | 5423.21 12418.35 1929.38
R&D input                           | 45.08   89.64   14.06   | 77.18   248.51  16.44   | 59.19   241.09   12.15
Overheads                           | 187.82  224.04  119.54  | 298.49  554.39  136.26  | 247.64  451.23   114.78
Top three executive compensation    | 1.18    1.17    0.9     | 1.34    1.36    1.05    | 1.16    0.9      0.97
Excess administration expense       | −0.01   0.12    −0.01   | 0       0.15    −0.01   | 0       0.07     −0.01
Top three directors' compensation   | 1       0.97    0.75    | 1.27    1.61    0.88    | 1.04    1.03     0.79
Proportion of independent directors | 0.37    0.05    0.36    | 0.36    0.05    0.33    | 0.36    0.04     0.33
Duality of CEO and chairman         | 0.19    0.39    0       | 0.18    0.38    0       | 0.24    0.43     0
Corruption level                    | 24.08   5.67    24.31   | 24.52   6.78    24.31   | 24.4    7.34     24.31
Table 2. Descriptive statistics of variables between groups with different excess
administration expense
On the basis of Table 2, although the enterprises with high excess
administration expense can obtain more S&T subsidies, they do not show a
significantly higher ratio of R&D input to overheads, yet they do show a
higher ratio of overheads to operating income. This may be caused by the
crowding-out effect of rent-seeking behavior: the existence of corruption
and rent-seeking reduces the efficiency of the use of government S&T
subsidies.
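The group comparisons in Tables 1 and 2 rest on per-group means, standard deviations, and medians. A stdlib-only sketch of that computation follows; the record layout and field names are hypothetical, not the paper's data format.

```python
import statistics

def describe_by_group(records, group_key, value_key):
    """Mean, sample sd, and median of one variable within each group of firms."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec[value_key])
    return {g: (statistics.mean(vals), statistics.stdev(vals), statistics.median(vals))
            for g, vals in groups.items()}
```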
enterprises have to invest great manpower and resources in rent-seeking,
which prevents the full use of S&T subsidies and crowds out scientific
research and innovation input. In addition, the coefficient of Loca is
significantly negative in column 2, suggesting that local PCs strengthen
this side effect, which is consistent with the results in Table 4.
Variable        | logDelpexp
                | High corruption          | Low corruption
log SciSubsidy  | −0.044     −0.119∗∗∗     | 0.106∗∗    0.095∗
                | (0.032)    (0.046)       | (0.058)    (0.066)
Rent            | −6.180∗∗∗  −5.567∗∗∗     | −0.020     −0.115
                | (0.818)    (1.051)       | (1.859)    (2.053)
log Asset       | 0.595∗∗    0.507∗∗       | 0.611∗∗∗   0.855∗∗∗
                | (0.232)    (0.247)       | (0.204)    (0.250)
ROA             | 3.687      −1.426        | 5.704∗     3.225
                | (3.745)    (2.402)       | (3.713)    (3.424)
Dbassrt         | −2.181     −1.514        | −0.651     −0.966
                | (1.741)    (1.533)       | (1.207)    (1.209)
EstAge          | 0.034      0.011         | 0.009      0.024
                | (0.054)    (0.066)       | (0.018)    (0.024)
log Mane3Pay    | 0.994∗∗∗   0.948∗∗∗      | 0.159      0.084
                | (0.324)    (0.280)       | (0.300)    (0.319)
ChairmanCEO     | −0.364     0.204         | 0.000      −0.057
                | (0.703)    (0.732)       | (0.479)    (0.409)
IndeDirRatio    | 1.163      −2.755        | −4.325     −5.091
                | (2.960)    (2.324)       | (5.482)    (5.076)
Poli            | −0.175     -             | 0.257      -
                | (0.393)    -             | (0.379)    -
Loca            | -          −1.136∗∗∗     | -          0.412
                | -          (0.430)       | -          (0.449)
cons            | −8.135     −2.900        | 1.771      −2.137
                | (7.067)    (7.119)       | (5.826)    (6.425)
Year            | Yes        Yes           | Yes        Yes
Industry        | Yes        Yes           | Yes        Yes
R2              | 0.505      0.584         | 0.479      0.434
N               | 62         62            | 82         82
Note: Robust standard errors in parentheses; ∗∗∗ significant at 0.01,
∗∗ significant at 0.05, ∗ significant at 0.1.
5 Research Conclusions
We find evidence that rent-seeking behavior facilitates private listed
enterprises in establishing PCs and thus obtaining more S&T subsidies.
However, it is shown that rent-seeking activities ultimately result in a
crowding-out effect on R&D input, especially for enterprises located in
provinces with higher levels of corruption. For such companies, the PCs are
closely related to the S&T subsidies, which breeds severe rent-seeking
corruption and weakens the effectiveness of the subsidies. Our results
support the idea that PCs can result in rent-seeking corruption, because
enterprises are motivated to ingratiate themselves with government officials
through rent-seeking activities in order to get more S&T subsidies, and the
crowding-out effect of such activities eventually undermines the
effectiveness of the subsidies. The PCs and S&T subsidies do not effectively
promote corporate performance or the development of society as a whole, but
serve only a few entrepreneurs and politicians. The government should
regulate the allocation of government subsidies and exercise reasonable
control over local government powers in order to improve the efficiency of
the whole market and stimulate enterprise innovation.
References
1. Allen F, Qian J, Qian M (2005) Law, finance, and economic growth in China. J
Financ Econ 77(1):57–116
2. Claessens S, Feijen E, Laeven L (2008) Political connections and preferential access
to finance: the role of campaign contributions. J Financ Econ 88(3):554–580
3. Dimakou O (2009) Bureaucratic corruption and the dynamic interaction between
monetary and fiscal policy. Eur J Polit Econ 40:57–78
4. Du J, Mickiewicz T (2015) Subsidies, rent seeking and performance: being young,
small or private in China. J Bus Ventur 31(1):22–38
5. Estrin S, Korosteleva J, Mickiewicz T (2013) Which institutions encourage entre-
preneurial growth aspirations? J Bus Ventur 28(4):564–580
6. Faccio M (2006) Politically connected firms. Am Econ Rev 96(1):369–386
7. García-Quevedo J (2004) Do public subsidies complement business R&D? A meta-
analysis of the econometric evidence. Kyklos 57(1):87–102
8. Hessami Z (2014) Political corruption, public procurement, and budget composi-
tion: theory and evidence from OECD countries. Eur J Polit Econ 34(6):372–389
9. Huang MC, Wu CC (2016) Facts or fates of investors’ losses during crises? Evidence
from reit-stock volatility and tail dependence structures. Int Rev Econ Financ
42:54–71
10. Ketata I, Sofka W, Grimpe C (2015) The role of internal capabilities and firms’
environment for sustainable innovation: evidence for Germany. R&D Manage
45(1):60–75
11. Kleer R (2010) Government R&D subsidies as a signal for private investors. Res
Policy 39(10):1361–1374
12. Lee EY, Cin BC (2010) The effect of risk-sharing government subsidy on corporate
R&D investment: empirical evidence from Korea. Technol Forecast Soc Change
77(6):881–890
13. Levin RC, Reiss PC (1984) Tests of a Schumpeterian model of R&D and market
structure. University of Chicago Press, pp 175–208
14. Marquis C, Qian C (2013) Corporate social responsibility reporting in China: sym-
bol or substance? Organ Sci 25(1):127–148
15. Mellahi K et al (2016) A review of the nonmarket strategy literature: toward a
multi-theoretical integration. J Manage 42(1):143–173
16. Meuleman M, Maeseneire WD (2012) Do R&D subsidies affect SMEs’ access to
external financing? Soc Sci Electron Publishing 41(3):580–591
17. Minggui Y, Yafu H, Hongbo P (2010) Political connection, rent seeking and the
effectiveness of fiscal subsidies by the local government. Econ Res J 3:65–77
18. Richardson S (2006) Over-investment of free cash flow. Rev Acc Stud 11(2):159–189
19. Schiederig T, Tietze F, Herstatt C (2012) Green innovation in technology and
innovation management-an exploratory literature review. R&D Manage 42(2):180–
192
20. Shen J, Luo C (2015) Overall review of renewable energy subsidy policies in China-
contradictions of intentions and effects. Renew Sustain Energy Rev 41:1478–1488
21. Su ZQ, Fung HG et al (2014) Cash dividends, expropriation, and political connec-
tions: evidence from China. Int Rev Econ Financ 29(1):260–272
22. Wu A (2016) The signal effect of government R&D subsidies in China: do ownership
matter? Technol Forecast Soc Change 117:339–345
23. Xu C (2011) The fundamental institutions of China’s reforms and development. J
Econ Lit 49(4):1076–1151
24. Xu E, Xu K (2013) A multilevel analysis of the effect of taxation incentives on
innovation performance. IEEE Trans Eng Manage 60(1):137–147
25. Xu K, Huang KF, Xu E (2014) Giving fish or teaching to fish? An empirical
study of the effects of government research and development policies. R&D Manage
44(5):484–497
26. Yu F, Guo Y et al (2016) The impact of government subsidies and enterprises’
R&D investment: a panel data study from renewable energy in China. Energy
Policy 89:106–113
27. Zhang H, Li L et al (2014) Political connections, government subsidies and firm
financial performance: evidence from renewable energy manufacturing in China.
Renew Energy 63(1):330–336
28. Zhang J, Marquis C, Qiao K (2016) Do political connections buffer firms from or
bind firms to the government? A study of corporate charitable donations of Chinese
firms. Organ Sci 27:1307–1324
29. Zhang Y (2015) The contingent value of social resources: entrepreneurs’ use of
debt-financing sources in western China. J Bus Ventur 30(3):390–406
30. Zhou W (2013) Political connections and entrepreneurial investment: evidence from
China’s transition economy. J Bus Ventur 28(2):299–315
Scenario-Based Location Arc Routing Problems:
Introducing Mathematical Models
1 Introduction
One of the most important costs for most companies is logistics, which can
be reduced through the appropriate planning and design of supply chains.
Distribution networks in supply chains are highly important [9]. The
location problem aims at finding optimal locations for facilities or
centers; indeed, one of the most important factors in the success of a
production unit is determining appropriate locations for sites and, on a
smaller scale, production facilities. Therefore, finding optimal or
near-optimal solutions is essential. Vehicle routing, on the other hand, is
among the most challenging issues in supply chain management. These two
issues affect the cost of networks and supply chains. Hence, the
location-routing problem (LRP) is defined to determine the location and
routing decisions simultaneously. It is a combination of the location and
routing optimization problems, whose decisions are made at the same
time. In recent years, several LRPs have been presented; by considering the
LRP, better logistics systems can be designed [4].
In general, routing problems may be defined on a graph consisting of a
number of nodes and arcs. The LRP deals with routing problems where demands
are on nodes. A related issue, considered in recent years by some
researchers, is the routing problem where demands belong to arcs instead of
nodes. Arc routing is a special type of routing problem. In this context,
researchers have developed a variety of models and methods for solving
these problems that take into account the conditions and restrictions of
real applications. Ghiani and Laporte [5] addressed such a problem and
solved it based on the concept of the rural postman problem (RPP), in one
of the earliest studies on arc routing problems. There is much research in
this field. For example, Pia and Filippi [8] addressed a capacitated arc
routing problem (CARP) for a waste collection problem, and Beullens et al.
[1] employed heuristics for a periodic CARP.
The location arc routing problem (LARP) emerges when arc routing decisions
are made along with location ones. The difference between the LRP and the
LARP is displayed in Fig. 1. Waste collection, mail delivery, and
telecommunication network design are among the most well-known problems to
which the LARP can be applied [7]. Also, some node routing problems can be
considered as arc routing ones. For instance, when there are supermarkets at
the nodes of demand, one can convert a node routing problem into an arc
routing problem, because a street with N supermarkets is an arc with their
aggregated demand. This may reduce the complexity of the problem. It means
that a LARP can be defined
for many real applications. Levy and Bodin [2] were the first to consider a
LARP, through which a mail delivery problem was taken into account. Then,
Ghiani and Laporte [6] employed heuristics to deal with a LARP. Lopes et al.
[11] developed a mathematical model for the LARP. In addition, Hashemi and
Seifi [3] developed two mathematical models for single-depot and
multi-depot LARPs, respectively. One of the latest works, by
Riquelme-Rodríguez et al. [10], deals with a LARP with inventory
constraints.
This paper addresses a LARP by developing a mathematical model. Given the
uncertain nature of data and information in the real world, we also build
our research on this fact: to the best of our knowledge, there is a gap in
considering uncertainties in the LARP. Therefore, two scenario-based
approaches are employed in the developed mathematical model. Section 2
defines the problem and notation and develops a deterministic mathematical
model for the LARP. Section 3 deals with two scenario-based approaches and
modifies the mathematical model for each. In Sect. 4 a numerical example is
analyzed based on the mathematical models. Finally, we conclude the paper.
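This excerpt does not spell out the two scenario-based approaches. Two common ways of aggregating scenario objectives in stochastic optimization — expected value and worst case — can be sketched as follows; treating these as the paper's two approaches is our assumption, not stated in the source.

```python
def expected_cost(scenario_costs, probabilities):
    """Expected-value aggregation: weight each scenario's objective by its probability."""
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * c for p, c in zip(probabilities, scenario_costs))

def worst_case_cost(scenario_costs):
    """Robust (min-max) aggregation: plan against the worst scenario."""
    return max(scenario_costs)
```

Under expected-value aggregation a solution is judged by its average cost over scenarios; under min-max aggregation, by its cost in the single worst scenario.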
There is a complete graph G = (V, A), where V and A denote the set of
vertices and the set of arcs, respectively. V consists of K, the set of
potential depots, and R, the set of extremities of arcs: V = R ∪ K. Each arc
is denoted by (i, j), meaning that the arc begins at vertex i and ends at
vertex j. A given number of homogeneous vehicles, forming a set P, is
available for each potential depot; each depot is thus able to have |P|
separate tours. Some arcs have positive demands, and the demand of arc
(i, j) equals the demand of arc (j, i); the two need not both be serviced,
and it is sufficient to meet the demand of one of them. The problem is to
find the optimal depots among the potential ones, assign vehicles and arcs
to them, and traverse those arcs with optimal routing. It is assumed that
each arc with positive demand must be serviced exactly once by one vehicle,
and split delivery of demands is not allowed. The number of deadheaded
traversals of each arc is not limited. In addition, tours must begin at a
depot and end at the same one. In the deterministic model all parameters are
given in advance. The following notation is introduced to develop the
mathematical model.
Parameters:
Variables:
x^{pk}_{ij}: binary variable equal to 1 if required arc (i, j) is served by vehicle p of depot k;
y^{pk}_{ij}: number of deadheaded traversals of arc (i, j);
f^{pk}_{ij}: flow departing from arc (i, j);
v_{ij}: binary variable equal to 1 if arc (i, j) has a positive demand;
q_{pk}: binary variable equal to 1 if vehicle p of depot k is established;
g_{pk}: flow departing from depot k by vehicle p;
z_k: binary variable equal to 1 if depot k is established.
Given the assumptions and notation, the mathematical model for the
deterministic LARP is developed as below. As a matter of fact, this model is
inspired by the one developed in [3].
min θ = Σ_{k∈K} z_k F_k + Σ_{i,j,p,k} x^{pk}_{ij} C^k_{ij} + Σ_{i,j,p,k} y^{pk}_{ij} H^k_{ij} + Σ_{p,k} q_{pk} W    (1)

s.t.  Σ_{k∈K, p∈P} (x^{pk}_{ij} + x^{pk}_{ji}) = v_{ij},    ∀(i, j) ∈ R    (2)

Σ_{k∈K, p∈P} (x^{pk}_{ij} + x^{pk}_{ji}) = 0,    ∀(i, j) ∈ K    (3)

Σ_{i∈V} (x^{pk}_{ij} + y^{pk}_{ij}) − Σ_{i∈V} (x^{pk}_{ji} + y^{pk}_{ji}) = 0,    ∀j ∈ V, ∀p ∈ P, ∀k ∈ K    (4)

q_{pk} ≤ z_k,    ∀p ∈ P, ∀k ∈ K    (5)

Σ_{j∈R} (x^{pk}_{kj} + y^{pk}_{kj}) = q_{pk},    ∀p ∈ P, ∀k ∈ K    (6)

Σ_{j∈R} (x^{pk}_{jk} + y^{pk}_{jk}) = q_{pk},    ∀p ∈ P, ∀k ∈ K    (7)

x^{pk}_{ij} + y^{pk}_{ij} ≤ M q_{pk},    ∀(i, j) ∈ V, ∀p ∈ P, ∀k ∈ K    (8)

x^{pk}_{ii} + y^{pk}_{ii} = 0,    ∀i ∈ V, ∀p ∈ P, ∀k ∈ K    (9)

Σ_{i∈K, i≠k} y^{pk}_{ij} = 0,    ∀p ∈ P, ∀k ∈ K    (10)

x^{pk}_{ij} ≤ D_{ij},    ∀(i, j) ∈ R, ∀p ∈ P, ∀k ∈ K    (11)

D_{ij} ≤ M v_{ij},    ∀(i, j) ∈ R    (12)

Σ_{p∈P} q_{pk} ≤ |P|,    ∀k ∈ K    (13)

g_{pk} ≥ Σ_{i,j} D_{ij} x^{pk}_{ij} − M(1 − q_{pk}),    ∀p ∈ P, ∀k ∈ K    (14)

g_{pk} ≤ Σ_{i,j} D_{ij} x^{pk}_{ij} + M(1 − q_{pk}),    ∀p ∈ P, ∀k ∈ K    (15)

x^{pk}_{ij}, v_{ij}, z_k, q_{pk} ∈ {0, 1},    ∀(i, j) ∈ V, ∀p ∈ P, ∀k ∈ K    (22)

y^{pk}_{ij} ≥ 0 and integer,    ∀(i, j) ∈ V, ∀p ∈ P, ∀k ∈ K    (23)

f^{pk}_{ij}, g_{pk} ≥ 0,    ∀(i, j) ∈ V, ∀p ∈ P, ∀k ∈ K    (24)
Objective function (1) minimizes the total costs of opening depots, traversing,
and hiring vehicles. Constraint (2) guarantees that required arc (i, j) must be
served by one vehicle from one depot in one direction, i.e., from i to j or from
j to i. Constraint (3) ensures that traveling between depots is not allowed.
Constraint (4) represents the continuity of tours, and Constraint (5) ensures
that vehicles can be assigned to a depot only if that depot is opened.
Constraint (6) indicates that each vehicle must leave its depot toward one arc
of its tour at the beginning of the tour and, correspondingly, Constraint (7)
shows that the vehicle must return to the depot from one arc of its tour at the
end of the tour. Constraint (8) states that x^{pk}_{ij} and y^{pk}_{ij} can be
positive only if they are assigned to vehicle p of depot k, where M is a
sufficiently big positive number.
Constraint (9) guarantees that there is no arc between a vertex and itself.
Constraint (10) declares that tours are not allowed to end at a depot different
from the one at which they begin. Constraints (11) and (12) identify the arcs
with positive demands. Regarding Constraint (13), the number of vehicles
assigned to each depot cannot exceed the total number of vehicles. Constraints
(14) and (15) measure the total demand served by each vehicle of each depot if
it is opened. Constraints (16) and (17) impose the capacity limitations of
opened depots and selected vehicles. Constraint (18) guarantees that the flow
leaving arc (i, j) equals the flow entering that arc minus its demand if there
is one servicing traversal of that arc in that tour. Also, Constraint (19)
516 A. Amini et al.
shows that the flows of arc (i, j) are zero if there is no traversal. Although
the flows themselves are not required to be determined, Constraints (18) to
(21) ensure that no sub-tours occur. Finally, Constraints (20) and (21) state
that the flow leaving each opened depot by each vehicle equals the total demand
of its tour, and that the flow entering that depot is zero.
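As a concrete illustration, the serving requirement of Constraint (2) can be checked on a candidate solution with a few lines of Python. The data layout, function name, and the tiny instance below are illustrative assumptions, not part of the paper's method:

```python
# Sketch: check Constraint (2) of the deterministic LARP on a candidate
# solution. Every required arc (an arc with positive demand, v_ij = 1) must
# be served exactly once, in exactly one direction, by one vehicle of one
# depot. All names and data here are illustrative.

def constraint2_satisfied(required_arcs, x):
    """required_arcs: set of frozensets {i, j} with positive demand.
    x: dict mapping (i, j, p, k) -> 0/1, the service variables x^{pk}_{ij}."""
    for arc in required_arcs:
        i, j = tuple(arc)
        served = sum(val for (a, b, p, k), val in x.items()
                     if {a, b} == {i, j})
        if served != 1:  # unserved, served twice, or served in both directions
            return False
    return True

# Tiny example: required arc {4, 5} served once by vehicle 1 of depot 1.
x = {(4, 5, 1, 1): 1, (5, 4, 1, 1): 0}
print(constraint2_satisfied({frozenset({4, 5})}, x))  # True
```

A full feasibility checker would add the analogous tests for Constraints (3) to (15); this one only demonstrates the single-direction service rule.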
3 Scenario-Based LARP
In real-world problems, data might not be exactly known, so many problems have
uncertain parameters. This paper employs two scenario-based approaches to
handle uncertainty: (i) minimization of the maximum regret; (ii) minimization
of the mean and deviation of the objective function value. The scenario-based
LARP is called SLARP in this paper, so the first approach is named SLARP_R and
the second one SLARP_MD. Also, let s denote the index of scenarios from their
set S from now on.
Considering the LARP model, Eq. (25), and the new variables, the following
model is developed as the integrated model for SLARP_R.
min θ_R = r_max    (26)

s.t.  r_max ≥ θ_s − θ*_s,    ∀s ∈ S    (27)

x^{pk}_{ij} ≤ D_{ijs},    ∀(i, j) ∈ R, ∀p ∈ P, ∀k ∈ K, ∀s ∈ S    (28)

D_{ijs} ≤ M v_{ij},    ∀(i, j) ∈ R, ∀s ∈ S    (29)

g_{pks} ≥ Σ_{i,j} D_{ijs} x^{pk}_{ij} − M(1 − q_{pk}),    ∀p ∈ P, ∀k ∈ K, ∀s ∈ S    (30)

g_{pks} ≤ Σ_{i,j} D_{ijs} x^{pk}_{ij} + M(1 − q_{pk}),    ∀p ∈ P, ∀k ∈ K, ∀s ∈ S    (31)
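The regret criterion behind (26) and (27) can be sketched as follows: given each candidate solution's cost θ_s under every scenario and the per-scenario optimal values θ*_s, the solution whose worst-case regret is smallest is selected. All numbers below are illustrative:

```python
# Sketch of the minimax-regret criterion of (26)-(27): among candidate
# solutions evaluated under every scenario, pick the one whose worst-case
# regret theta_s - theta*_s is smallest. All numbers are illustrative.

def max_regret(costs_by_scenario, optimal_by_scenario):
    return max(c - opt for c, opt in zip(costs_by_scenario, optimal_by_scenario))

def minimax_regret(candidates, optimal_by_scenario):
    """candidates: dict name -> list of costs theta_s per scenario."""
    return min(candidates,
               key=lambda name: max_regret(candidates[name], optimal_by_scenario))

theta_star = [1168.0, 2137.0, 1757.0, 2545.0, 2244.0]  # assumed per-scenario optima
candidates = {
    "A": [1811.0, 2201.0, 2533.0, 2802.0, 3263.0],     # regrets: 643, 64, 776, 257, 1019
    "B": [1900.0, 2300.0, 2600.0, 2900.0, 3400.0],     # regrets: 732, 163, 843, 355, 1156
}
print(minimax_regret(candidates, theta_star))  # A (max regret 1019 < B's 1156)
```

In the integrated model (26)-(31) this comparison is, of course, performed implicitly over all feasible solutions by the solver rather than over an enumerated candidate list.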
4 Numerical Example
A numerical example is considered to analyze the developed models. This example
consists of three potential depots, two vehicles for each, and six extremities
for arcs. We assume that there is a link between each pair of extremities and
between each potential depot and each extremity. Potential depots belong to
set {1, 2, 3} and extremities are in set {4, 5, 6, 7, 8, 9}. In this example
there are positive demands between extremities (4, 5), (4, 7), (4, 9), (5, 7),
(5, 8), (6, 7), (6, 8), (6, 9), and (8, 9). Arcs with positive demands must be
served in one direction. For instance, one of the arcs 4-5 and 5-4 must be
chosen to meet the demand between extremities 4 and 5. Five parameters of the
model are generated with the patterns W ∼ Uniform(100, 200),
F ∼ Uniform(400, 800), C^k_{ij} ∼ Uniform(50, 100), H^k_{ij} ∼ Uniform(10, 50),
and D_{ij} ∼ Uniform(100, 1000). In addition, the capacity parameters are
generated with respect to the demands in such a way that infeasibility does
not occur.
Also, five scenarios are defined. Scenario S1 has the lowest demands among all
scenarios, and we assume that it also has the lowest capacities and costs;
accordingly, scenario S5 has the highest value of each parameter. All
parameters of a scenario are generated by the above-mentioned patterns through
dividing the range of each parameter. For example, the range (100, 200) is
assumed for W. This range is divided into five sub-ranges to generate the
scenarios: the ranges (100, 120), (121, 140), (141, 160), (161, 180), and
(181, 200) serve as the ranges of the uniform distribution of W for scenarios
S1 to S5, respectively. This process is applied to each parameter.
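The range-splitting scheme above can be sketched in a few lines. Note one simplification: the paper uses integer-stepped sub-ranges such as (121, 140), while this sketch splits the range continuously; function names are illustrative:

```python
import random

# Sketch of the scenario-generation scheme: the range of a parameter
# (e.g. W in (100, 200)) is split into five consecutive sub-ranges, and
# scenario s draws its value uniformly from the s-th sub-range. Unlike the
# paper's integer-stepped ranges, this sketch splits continuously.

def scenario_ranges(lo, hi, n_scenarios=5):
    width = (hi - lo) / n_scenarios
    return [(lo + s * width, lo + (s + 1) * width) for s in range(n_scenarios)]

def draw_parameter(lo, hi, scenario, n_scenarios=5, rng=random):
    a, b = scenario_ranges(lo, hi, n_scenarios)[scenario]
    return rng.uniform(a, b)

# Five equal sub-ranges of (100, 200): (100,120), (120,140), ..., (180,200).
print(scenario_ranges(100, 200))
```

A draw such as `draw_parameter(100, 200, 0)` then yields a value of W for scenario S1, i.e. from the lowest sub-range.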
Table 2. OFVs of the models. Columns S1-S5 report θ_s under each scenario; the
last four columns report the equivalent θ_R and θ_MD at λ = 0, 0.5, and 1.

Model  λ    Pset  S1       S2       S3       S4       S5       θ_R      θ_MD(λ=0)  θ_MD(λ=0.5)  θ_MD(λ=1)
S1     -    -     1168.00  -        -        -        -        -        -          -            -
S2     -    -     -        2137.00  -        -        -        -        -          -            -
S3     -    -     -        -        1757.00  -        -        -        -          -            -
S4     -    -     -        -        -        2545.00  -        -        -          -            -
S5     -    -     -        -        -        -        2244.00  -        -          -            -
θ_R    -    -     1811.00  2201.00  2533.00  2802.00  3263.00  1019.00  2522.00    3554.00      4586.00
θ_MD   0    α1    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2498.40    2713.56      2928.72
θ_MD   0    α2    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2243.70    2455.32      2666.94
θ_MD   0    α3    1750.00  2161.00  2531.00  2763.00  3296.00  1052.00  2326.50    2508.00      2689.50
θ_MD   0    α4    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2497.40    2637.62      2777.84
θ_MD   0    α5    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2656.90    2823.56      2990.22
θ_MD   0    α6    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2765.90    2978.74      3191.58
θ_MD   0.5  α1    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2498.40    2713.56      2928.72
θ_MD   0.5  α2    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2243.70    2455.32      2666.94
θ_MD   0.5  α3    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2327.10    2505.36      2683.62
θ_MD   0.5  α4    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2497.40    2637.62      2777.84
θ_MD   0.5  α5    1757.00  2173.00  2531.00  2747.00  3297.00  1053.00  2657.40    2821.16      2984.92
θ_MD   0.5  α6    1811.00  2201.00  2533.00  2802.00  3263.00  1019.00  2773.40    2974.96      3176.52
θ_MD   1    α1    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2498.40    2713.56      2928.72
θ_MD   1    α2    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2243.70    2455.32      2666.94
θ_MD   1    α3    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2327.10    2505.36      2683.62
θ_MD   1    α4    1752.00  2169.00  2520.00  2753.00  3298.00  1054.00  2497.40    2637.62      2777.84
θ_MD   1    α5    1757.00  2173.00  2531.00  2747.00  3297.00  1054.00  2657.40    2821.16      2984.92
θ_MD   1    α6    1740.00  2188.00  2542.00  2759.00  3290.00  1046.00  2769.00    2977.40      3185.80
of each scenario. We can find that the scenarios do not yield the same results
as each other; that is, the values of the parameters affect the optimal
decisions even when the arcs with positive demands are the same. It is also
found that, if the traversing costs of each tour depend on its depot, decision
makers cannot simply declare that having multiple tours from one depot is
better than having multiple depots. For example, although scenarios S1, S3,
and S5 prefer to use two tours for the opened depot rather than opening an
extra one, S4 opens two depots with one tour each even though it could assign
another vehicle to one of them and remove the other. In fact, the parameters
of the problem determine the best solution.
The SLARP_R and SLARP_MD models have been taken into account after obtaining
the optimal solutions of the scenarios. Regarding the objective function of
SLARP_MD, a value of λ is required; this example assumes three different
levels of λ: 0, 0.5, and 1. This model also needs to know the probability of
each scenario in advance. Considering different conditions, six sets, called
Pset, are given for the probabilities of the scenarios:
α1 = {0.2, 0.2, 0.2, 0.2, 0.2}, α2 = {0.4, 0.2, 0.2, 0.1, 0.1},
α3 = {0.2, 0.4, 0.2, 0.1, 0.1}, α4 = {0.1, 0.2, 0.4, 0.2, 0.1},
α5 = {0.1, 0.1, 0.2, 0.4, 0.2}, α6 = {0.1, 0.1, 0.2, 0.2, 0.4},
where each set is defined as {Ps1, Ps2, Ps3, Ps4, Ps5}. Table 2 indicates the
OFVs of the models under different values of λ and sets of probabilities,
together with the equivalent OFVs for the scenarios and the other models. For
instance, the optimal solution of θ_R is fixed, and its equivalent OFVs for
the scenarios and the SLARP_MD model are measured. It is worth mentioning that
we assume the scenarios have equal probabilities when we fix the SLARP_R
optimal values of the decision variables in SLARP_MD.

Table 3. Optimal depots and routings for the SLARP_R model and two cases of
the SLARP_MD model
It can be seen that each OFV is at its lowest level when its respective model
is optimized. Obviously, some other problems reach the same value; for
example, θ*_R equals the equivalent value of θ_R when the SLARP_MD model is
optimized with λ = 0.5 and α6. Some optimal decisions can also be found in
Table 3, which shows the effect of the parameters and models on the optimal
solutions. When the maximum regret over scenarios is minimized, depots 1 and 2
are opened, while depots 1 and 3 are opened when the SLARP_MD model is
considered. In addition, λ and Pset affect the routing of the arcs. Minimizing
total costs, the models avoid deadheading arcs as much as possible; most
deadheaded traversals relate to the linking arcs between depots and
extremities.
5 Conclusion
References
1. Beullens P, Muyldermans L, Cattrysse D et al (2003) A guided local search heuristic
for the capacitated arc routing problem. Eur J Oper Res 147(3):629–643
2. Bodin L, Levy L (1989) The arc oriented location routing problem. INFOR Inf
Syst Oper Res 27(1):74–94
3. Doulabi SHH, Seifi A (2013) Lower and upper bounds for location-arc routing
problems with vehicle capacity constraints. Eur J Oper Res 224(1):189–208
4. Drexl M, Schneider M (2014) A Survey of the Standard Location-Routing Problem.
Publications of Darmstadt Technical University Institute for Business Studies
5. Ghiani G, Laporte G (1999) Eulerian location problems. Networks 34(4):291–302
6. Ghiani G, Laporte G (2001) Location-arc routing problems. Opsearch 38(2):151–
159
7. Nagy G, Salhi S (2007) Location-routing: issues, models and methods. Eur J Oper
Res 177(2):649–672
8. Pia AD, Filippi C (2006) A variable neighborhood descent algorithm for a real
waste collection problem with mobile depots. Int Trans Oper Res 13(2):125–141
9. Prodhon C, Prins C (2014) A survey of recent research on location-routing prob-
lems. Eur J Oper Res 238(1):1–17
10. Riquelme-Rodríguez JP, Gamache M, Langevin A (2016) Location arc routing
problem with inventory constraints. Comput Oper Res 76:84–94
11. Lopes RB, Plastria F et al (2014) Location-arc routing problem: heuristic
approaches and test instances. Comput Oper Res 43(3):309–317
The Impact of Industrial Structure Change
on Economic Growth
1 Introduction
Industrial structure and regional economic growth are interrelated and
mutually reinforcing: the upgrading of the industrial structure promotes
regional economic growth, regional economic growth is accompanied by
industrial structure change, and a reasonable evolution of the industrial
structure has become an important symbol of economic upgrading and
modernization, one that can unlock the potential of regional economic growth.
Domestic and foreign scholars have carried out much research on industrial
structure and economic growth. In the late 1960s, Kuznets explored the process
of economic growth in the United States from 1948 to 1966, concluding that
industrial structural change was an important driving force of economic
growth. In the 1980s, Chenery et al. found that economic growth was
unbalanced, that production factors would flow from low-yield sectors to
high-yield sectors, and that this industrial-structure effect could drive
economic growth [1]. At the beginning of this century, Peneder argued that
differences in productivity and its trends among production sectors would
lead to the flow of production factors among sectors, and the
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_42
The Impact of Industrial Structure 523
resulting structural dividend could cause economic growth [6]. Perez and
Freeman indicated the inherent relationship between the industrial structure
and the overall model and level of economic and social development [2].
Jorgenson discussed the origins of US economic growth in international
comparison and revealed the role of the evolution of the industrial structure
[5]. Gan et al. used a stochastic frontier production function and found that
industrial structure and institutional factors could directly affect the
economic scale, and could also act indirectly, through the resource allocation
function, on the efficiency of output, thereby promoting regional economic
growth [3].
2 Index Selection
2.1 Measurement of Industrial Structure Rationalization
Rationalization of the structure refers to the coupling relationship of inputs
and outputs among industries, reflecting the rational use of resources and the
degree of coordination among the various industries. The Theil index
originated from the entropy concept in information theory and was used by
Theil in 1967 to measure the degree of income inequality; it is also
scientifically sound to use this index to measure the rationalization of the
structure. Referring to the related research of Chunhui Gan et al. [3], this
paper adopts the Theil index, with the following formula:
TL = Σ_{i=1}^{n} (Y_i / Y) ln[(Y_i / Y) / (L_i / L)] = Σ_{i=1}^{n} (Y_i / Y) ln[(Y_i / L_i) / (Y / L)],

where Y_i and L_i denote the output and employment of industry i, Y and L the
total output and employment, and n the number of industries.
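A minimal sketch of this computation, assuming Y_i and L_i are industry-level output and employment (the interpretation the index carries in the references cited above):

```python
import math

# Minimal sketch of the Theil index of industrial structure rationalization.
# Y: list of industry outputs Y_i; L: list of industry employments L_i.
# When every industry's labor productivity Y_i/L_i equals the economy-wide
# average Y/L, TL = 0 (a fully "rationalized" structure); any mismatch
# between output shares and employment shares pushes TL away from zero.

def theil_index(Y, L):
    Y_tot, L_tot = sum(Y), sum(L)
    return sum((Yi / Y_tot) * math.log((Yi / Y_tot) / (Li / L_tot))
               for Yi, Li in zip(Y, L))

print(theil_index([30, 30, 40], [30, 30, 40]))  # 0.0: shares coincide
print(theil_index([50, 30, 20], [30, 30, 40]) > 0)  # True: structural mismatch
```

The yearly TL values reported later in the paper would be obtained by applying such a computation to Hebei's industry-level output and employment series.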
This paper chooses Gross Domestic Product (GDP) as the measure of economic
growth. GDP refers to the market value of the final products and services that
a country or region produces with its production factors in a given period. It
not only reflects a country's economic performance but also its national
strength and wealth.
3 Model Construction
ln G_i = β1 + β2 ln TL_i + β3 ln TS_i + u_i.

In the model, G is regional GDP; TL is the Theil index, referring to the
rationalization of the industrial structure; and TS is the output ratio of the
tertiary to the secondary industry, measuring the advancement of the
industrial structure. In order to mitigate heteroscedasticity, each variable
is taken in natural logarithms; u is the random error term, and β1, β2, β3
are parameters.
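As a sketch of how such a log-linear regression can be estimated, the OLS normal equations (X'X)b = X'y can be solved directly. The series below are invented for illustration, not the paper's Hebei data:

```python
import math

# Hedged sketch: estimate ln G = b1 + b2 ln TL + b3 ln TS + u by OLS via the
# normal equations (X'X) b = X'y, solved with plain Gaussian elimination.
# The TL/TS/G series below are made up for illustration.

def ols(X, y):
    k = len(X[0])
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    Xty = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    A = [row + [v] for row, v in zip(XtX, Xty)]  # augmented system
    for c in range(k):                           # forward elimination, pivoting
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * z for x, z in zip(A[r], A[c])]
    b = [0.0] * k                                # back substitution
    for c in reversed(range(k)):
        b[c] = (A[c][k] - sum(A[c][j] * b[j] for j in range(c + 1, k))) / A[c][c]
    return b

TL = [0.22, 0.25, 0.20, 0.18, 0.15]
TS = [0.50, 0.60, 0.65, 0.66, 0.67]
G = [400, 700, 1200, 1900, 2600]
X = [[1.0, math.log(tl), math.log(ts)] for tl, ts in zip(TL, TS)]
y = [math.log(g) for g in G]
b1, b2, b3 = ols(X, y)
```

In practice the paper uses EViews for estimation; this sketch only makes the log-linear specification and the least-squares algebra concrete.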
This paper selects the relevant data from 1978 to 2012 for Hebei and applies
EViews 7.0 to process them. The calculated results for industrial structure
rationalization and advancement are shown in Table 1.
Table 1 shows that the mean of the Theil index is 0.2324, its standard
deviation is 0.0517, and its maximum and minimum are 0.3508 and 0.1466. Over
the 28 years, the degree of industrial structure rationalization remained
steady. TS shows an overall upward trend: its mean is 0.6596, its standard
deviation is 0.0709, and its maximum and minimum are 0.8160 and 0.5001,
showing a volatile rise (Table 2).
Figure 1 shows that from 1985 to 2012, TL moved from initially above 0.2 to
later below 0.2, showing that the industrial structure gradually trended
toward rationality. At the same time, TS moved from between 0.4 and 0.6
initially to between 0.6 and 0.8 later, so the industrial structure of Hebei
Province advanced to a higher level. In addition, it should be emphasized that
the correlation between TL and TS is not strong (as shown in Table 1) and that
their trends differ. Thus, the interaction of TL and TS need not be considered
when analyzing the impact of industrial structure change on economic growth
(Table 3).
Year  TL      TS       Year  TL      TS
1985 0.2252 0.5001 1999 0.2275 0.6943
1986 0.2268 0.511 2000 0.2518 0.6777
1987 0.2317 0.5012 2001 0.2409 0.7069
1988 0.2679 0.6667 2002 0.2383 0.7383
1989 0.2831 0.671 2003 0.2455 0.7139
1990 0.2706 0.7249 2004 0.2156 0.6607
1991 0.3481 0.816 2005 0.2216 0.6337
1992 0.3398 0.7821 2006 0.2219 0.6375
1993 0.3508 0.6383 2007 0.1924 0.6388
1994 0.2551 0.648 2008 0.1969 0.6063
1995 0.2023 0.6769 2009 0.179 0.6773
1996 0.1858 0.6533 2010 0.1713 0.6653
1997 0.197 0.6502 2011 0.1667 0.6462
1998 0.2065 0.6626 2012 0.1466 0.6702
                            TL     TS
TL  Pearson correlation     1      0.094
    Sig. (two-tailed)       -      0.693
    N                       28     28
TS  Pearson correlation     0.094  1
    Sig. (two-tailed)       0.693  -
    N                       28     28
The ADF test results show that the t-statistics of ln GDP, ln TL, and ln TS
are smaller than the critical value at the 10% significance level, indicating
that the null hypothesis is rejected at a confidence level of at least 90%:
ln GDP, ln TL, and ln TS are integrated of order one. Thus, the long-term
relationship between ln GDP and ln TL, ln TS can be examined by a
co-integration test.
(2) Co-integration Test
In the co-integration test, the Johansen maximum likelihood method is used to
test the co-integration relationships among the variables. For a set of
non-stationary time series, if some linear combination of them is stationary,
there exists a long-term equilibrium co-integration relationship among these
variables. The trace statistic and the maximum eigenvalue statistic can be
used for the test. The results are shown in Table 5.
The co-integration results in Tables 5 and 6 show that the trace statistic and
the maximum eigenvalue statistic are larger than the critical values at the 5%
significance level, indicating that the null hypothesis can be rejected at the
95% confidence level and that co-integration relationships exist between the
variables. Thus, there are more than two co-integration relationships among
the three variables (ln GDP, ln TL, ln TS) at the 5% significance level.
Table 7 shows the estimated parameters of the VEC model, where the value of
CointEq1 represents the coefficient estimate of the error correction term.
The error equation can be derived from Table 7:

CointEq1 = D(ln GDP) + 0.23 ln GDP(−1) + 0.22 ln TL(−1) − 0.09 ln TS(−1) + 0.12,
D(ln GDP) = CointEq1 − 0.23 ln GDP(−1) − 0.22 ln TL(−1) + 0.09 ln TS(−1) − 0.12.
This error equation shows that the impact of ln GDP(−1) on D(ln GDP) is −0.23
units, the impact of ln TL(−1) is −0.22 units, and the impact of ln TS(−1) is
0.09 units. The Theil index and GDP are negatively correlated: when ln TL
rises by one unit, D(ln GDP) drops by 0.22 units. TS and GDP are positively
correlated: when ln TS rises by one unit, D(ln GDP) rises by 0.09 units. This
indicates that the economy of Hebei Province grows with both the
rationalization and the advancement of the industrial structure, but the
coefficients show that the influence of industrial structure rationalization
on the economy is higher than that of the advancement of the industrial
structure in Hebei.
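To make the arithmetic of the error-correction equation concrete, here is a small sketch that evaluates D(ln GDP) from the quoted coefficients; the input values are made up for illustration:

```python
# Sketch evaluating the error-correction equation quoted in the text: given
# the CointEq1 term and last period's log levels, it returns the implied
# change in ln GDP. Input values below are illustrative, not Hebei data.

def d_ln_gdp(cointeq1, ln_gdp_lag, ln_tl_lag, ln_ts_lag):
    return cointeq1 - 0.23 * ln_gdp_lag - 0.22 * ln_tl_lag + 0.09 * ln_ts_lag - 0.12

# Ceteris paribus, a one-unit rise in ln TL(-1) lowers D(ln GDP) by 0.22:
base = d_ln_gdp(0.5, 7.0, -1.5, -0.4)
bumped = d_ln_gdp(0.5, 7.0, -0.5, -0.4)
print(round(base - bumped, 2))  # 0.22
```

The sign pattern reproduces the interpretation above: the TL term enters with −0.22 and the TS term with +0.09.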
(4) Granger Causality Test
Table 8 shows that ln GDP and ln TL have a bidirectional causal relationship:
there is a mutually reinforcing relationship between industrial structure
rationalization and regional economic growth. By contrast, industrial
structure advancement and economic growth have only a one-way causal
relationship: economic growth can promote the advancement of the industrial
structure, but such advancement does not in turn promote economic growth. GDP
in Hebei increased from 39.675 billion RMB in 1985 to 2.65751 trillion RMB in
2012, while the output values of the secondary and tertiary industries in
Hebei Province maintained a growing trend, indicating that Hebei's economic
growth can make the industrial structure more rational. From 1985 to 2012, the
secondary industry in Hebei still dominated the economy, but over time the
proportion of the primary industry declined and the proportion of the tertiary
industry continued to rise, so the industrial structure became more
rationalized. This shows that the stability of economic growth is mainly due
to the rationalization of the structure, while the advancement of the
industrial structure is only one aspect of the growth effect and does not
promote sustained economic growth.
5 Conclusion
References
1. Chenery HB, Robinson S, Syrquin M et al (1986) Industrialization and growth: a
comparative study, vol 4, pp 591–596
2. Freeman C, Perez C (1988) Structural crises of adjustment, business cycles
and investment behaviour. In: Technical change and economic theory, pp 38–66
3. Gan C, Zheng R, Yu D (2011) The impact of China's industrial structure change on
economic growth and volatility. Struct Change Econ Dyn 5(4):4–18 (in Chinese)
4. Ghosh SK (2015) The effect of imperfect production in an economic production
lotsize model. Int J Manag Sci Eng Manag 10(4):288–296
5. Jorgenson DW, Ho MS, Stiroh KJ (2003) Lessons from the us growth resurgence. J
Policy Model 25(5):453–470
6. Peneder M (2003) Industrial structure and aggregate growth. Struct Change Econ
Dyn 14(4):427–448
7. Gao T (2009) Econometric analysis methods and modeling: EViews applications
and examples. Tsinghua University Press, Beijing, pp 126–165
8. Tripathi RP (2016) Economic ordering policies under credit financing for deteriorat-
ing items with stock-dependent demand rate using discounted cash flow approach.
Int J Manag Sci Eng Manag 12:111–118
Disclosure Behavior of Annual Reports of Listed
Companies Under Digital Worship
Xiaojing Xu
1 Introduction
Digital worship refers to people's avoidance of, or fondness for, particular
numbers [2]. Digital worship is widespread in China, for example in the choice
of telephone numbers or of business opening dates. Under this influence,
people regard the figure "8" as homophonic with the word for "getting rich,"
implying fortune, while the number "4" is homophonic with "death" and is
widely avoided. Thus the 2008 Beijing Olympic Games opened on August 8th at
8:08 pm, and Alibaba Group was originally scheduled to list on the United
States stock market on August 8th, 2014. Many studies have shown that worship
of the number 8 exists in China's stock market. The question posed in this
paper is whether this digital worship exists in the timing of listed
companies' annual report disclosures and, if so, whether it has a significant
impact on the market. Specifically, on the one hand, if listed companies are
affected by digital worship, will they choose dates with lucky numbers to
publish their annual reports? On the other hand, if investors' behavior is
also affected by digital worship, will listed companies exploit this by
selecting auspicious dates to disclose annual reports so as to attract more
investor attention? If digital worship has an impact on disclosure behavior,
it constitutes a new market anomaly. The research in this paper has reference
value for investment decision-making, risk management, and understanding the
behavior of listed companies
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_43
Disclosure Behavior of Annual Reports of Listed Companies 531
and investors. The article uses empirical analysis, taking dates ending in 8
and in 4 as the starting point, to examine whether listed companies' timing of
annual report disclosure is affected by digital worship. The remainder of the
paper is structured as follows: the second part briefly reviews the
literature; the third part describes the data and states the hypotheses; the
fourth part introduces the econometric model and the empirical analysis; the
fifth part summarizes the paper and makes recommendations.
2 Related Literature
Cao, Li et al. [2] noted that China is a country that pays great attention to
"homophonic culture": 8 is homophonic with "getting rich," implying fortune,
while 4 is homophonic with "death" and is deliberately avoided by many people.
Xu [8] pointed out that Chinese people believe numbers can foretell good and
bad luck; with social development, numbers acquire new referential and more
distinct meanings, and the number "8" is repeatedly sought after. Shen [6]
studied the symbolization and illusion of money and discussed the influence of
digital worship on modern wealth. In the stock market, Zhao and Wu [9]
selected 569 non-financial stocks on the Shanghai stock market as their object
of study and found that stocks whose codes end in 8 are priced high: their
price-earnings ratios on the IPO day and in the following year are higher than
those of stocks with other mantissas. Philip and Jason [1] found a clustering
effect of digital worship in the stock valuations of the Shanghai and Shenzhen
stock markets: price medians containing 8 appear at twice the frequency of
those containing 4, because of the inherent Chinese belief that 8 is an
auspicious number and 4 an unlucky one. This literature shows that auspicious
numbers contained in stock codes and prices have an impact on stock prices.
Liu, Li and Yang [4] found that the preference for auspicious numbers
significantly affects the relationship between the systematic risk of a stock
price collapse and the expected rate of return. The above literature proves
that digital worship exists in many areas of the Chinese stock market and has
a significant influence on stock prices and expected returns. However, there
is no literature on whether digital worship affects the information disclosure
of listed companies and investors' decision-making behavior, and this is the
research direction of this paper.
The human decision-making process is not purely rational; it involves a series
of cognitive biases [5]. In a given context, people's decisions depend not
only on the problem itself but also on the way the problem is expressed. Han
and Xu [3] found that listed companies that disclose their annual reports
notably late or early receive more investor attention, revealing that the
choice of disclosure time is an important channel of market value management
for listed companies. Zhou and Huang [10] found that annual report disclosure
on China's stock market exhibits a "weekday preference" and a "clustering
preference": good news disclosed on Tuesday earns a higher excess stock
return. But the disclosure
532 X. Xu
of bad news on Saturday is also associated with a higher excess rate of
return, and choosing a time when many annual reports are published
simultaneously to disclose bad news, so as to escape investors' attention,
has a positive impact on excess returns. Wang and Wang [7] studied the
information disclosure strategies of listed companies from the perspective of
impression management theory and analyzed how listed companies can maximize
their own benefits by managing information disclosure. Thus, listed companies
behave strategically in choosing disclosure timing, seizing the right moment
to attract or distract investors' attention. This paper attempts to verify
whether listed companies use the digital worship effect and select dates whose
mantissas are auspicious numbers in order to attract investor attention. In
short, digital worship and disclosure strategy belong to the category of
behavioral finance, and they also constitute a kind of market anomaly on the
stock market.
As shown in the figure above, dates with mantissa 8 are chosen most often by
listed companies to publish their annual reports, accounting for 15.01%,
while dates with mantissa 4 are chosen least often, accounting for 4.72%. The
results show that the dates listed companies choose to publish annual reports
are imbalanced: a phenomenon of digital worship exists. However, how much does
this phenomenon affect the stock market? In particular, if investors exhibit
digital worship, a listed company may choose an auspicious date to disclose
its annual report in order to get more attention from them. How can the
investor attention attracted after the release of an annual report be
measured? This paper selects the 5-day rate of return, which reflects
short-term price volatility in the stock market. If the 5-day rate of return
is greater than zero, the stock received investors' attention and investment,
pushing its price up in the short term; conversely, the opposite holds.
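A minimal sketch of this attention measure, with illustrative prices (the data layout and function name are assumptions, not the paper's code):

```python
# Sketch of the attention measure described above: the cumulative rate of
# return over the five trading days after disclosure. Prices are illustrative.

def five_day_return(prices):
    """prices[0]: close on the disclosure day; prices[5]: close five trading
    days later. Returns the simple 5-day rate of return."""
    if len(prices) < 6:
        raise ValueError("need the disclosure-day close plus 5 later closes")
    return prices[5] / prices[0] - 1.0

r = five_day_return([10.0, 10.2, 10.1, 10.4, 10.6, 10.5])
print(r > 0)  # True: a positive 5-day return signals investor attention
```

Under the paper's reading, r > 0 after disclosure is interpreted as the report having attracted investor attention and buying.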
To facilitate the study, this paper takes dates with mantissa 8 as auspicious
dates and dates with mantissa 4 as unlucky dates, and analyzes whether digital
worship over dates affects the disclosure timing of listed companies. This
paper therefore puts forward the following hypotheses.
Hypothesis 1 (H1): If a listed company chooses a date with mantissa 8 to
publish its annual report, the five-day stock return will show a positive
effect.
Hypothesis 2 (H2): If a listed company chooses a date with mantissa 4 to
publish its annual report, the five-day return will show a reverse effect.
On the other hand, listed companies themselves are also subject to the digital
worship of praying for good fortune, especially when the annual report carries
bad news; they may choose to disclose the annual report on an auspicious date
so that corporate earnings appear in a positive light. The article defines bad
news as a net profit margin less than or equal to last year's, and good news
as a net profit margin higher than last year's. Therefore,
Hypothesis 3 (H3): If a listed company with good news in its annual report
selects a date with mantissa 8 for disclosure, this will not only have a
positive effect on the company's five-day yield but will also produce a higher
five-day yield than on other dates.
Hypothesis 4 (H4): If the listed company which has good news in annual
report select the date of mantissa 8 to disclose annual report, not only have
a positive effect on the company’s five-day yield, but also have higher five-day
yield compared to other days.
4 Empirical Analysis
The data selection follows these principles: (1) As annual reports are always
published between January and April of the following year, the financial data are
taken from the 2010–2014 annual reports, and the yields are the five-day yields
following each report's publication, covering 2011–2015. For example, since the
2014 annual report is published in 2015, the report data come from 2014 and the
corresponding yield data are collected in 2015 after its release. (2) Outliers and
missing values are removed. (3) The maxima and minima of the independent
variables are winsorized at the 3% level. All data come from the Guotai'an
database, and the empirical analysis is carried out in EViews 6.0.
This paper uses a hybrid panel data model estimated with the cross-section
weighting method; the empirical results are shown in Table 2.
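The general shape of these hypothesis tests can be sketched with a simplified pooled OLS on synthetic data; this is not the authors' hybrid panel model or their cross-section weighting, and the simulated data and coefficient are illustrative assumptions. `DV1` mirrors the paper's dummy for a disclosure date ending in 8.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the paper's data): disclosure day-of-month
# and 5-day returns, where dates ending in 8 carry a positive premium.
days = rng.integers(1, 29, size=400)
dv1 = (days % 10 == 8).astype(float)      # dummy: date mantissa is 8
returns = 0.2 + 0.9 * dv1 + rng.normal(0, 0.5, size=400)

# Pooled OLS of the 5-day return on the DV1 dummy (intercept + dummy).
X = np.column_stack([np.ones_like(dv1), dv1])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
print(f"DV1 coefficient: {beta[1]:.3f}")  # close to the simulated 0.9 premium
```

A positive and significant `DV1` coefficient corresponds to evidence for Hypothesis 1.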
Based on the empirical results in Table 2, the research hypotheses are analyzed
as follows:
Firstly, the DV1 coefficient in Model 1 is 0.9320, indicating that publishing the
annual report on a date ending in 8 has a positive effect on a listed company's
5-day rate of return, raising the 5-day yield by 0.9320 units compared with other
dates. Hypothesis 1 is thus supported.
536 X. Xu
Secondly, Model 2 shows that good news announced on a date ending in 8 earns a
five-day rate of return 1.0161 units higher than on other dates, so good news
announced on such dates attracts more investor attention. Hypothesis 3 is thus
supported.
Thirdly, Model 3 shows that bad news disclosed on a date ending in 8 increases
the yield by 0.8512 units, which means that a date ending in 8 weakens the
negative effect of bad news on the stock market. Hypothesis 4 is thus supported.
Finally, for announcements on dates ending in 4, the DV2 coefficient in Model 4
is −1.6410, indicating that publishing the annual report on a date ending in 4
reduces the rate of return by 1.6410 units; Hypothesis 2 is thus supported.
Model 5 shows that good news released on a date ending in 4 reduces the yield by
1.5512 units compared with other dates, and Model 6 shows that bad news released
on such a date reduces the yield by 1.8321 units compared with other dates.
Energy Finance in Promoting Renewable Energy
Sustainable Development in China:
The Channels and Means of Financing
1 Introduction
Renewable energy (RE) has become the fundamental direction and core content of
the global energy transformation. Because renewable energy is environmentally
friendly and low-carbon, it is the main force behind the technological revolution
in energy and has become a major growth point for emerging industries. China has
enacted a series of policies to promote sustainable RE development.
The Energy Foundation and the National Development and Reform Commission
predicted that from 2005 to 2020 China would need to invest 18 trillion CNY in
the energy sector, with new energy, energy saving and environmental protection
measures requiring about 7 trillion CNY [6]. Government subsidies and investment
alone are therefore clearly insufficient, so full use needs to be made of market
financing through financial channels to resolve the funding shortages. It has
been pointed out that the development of appropriate financing channels and
instruments for both end users and the industry is one of the drivers of
increased investment [2]. This issue is analyzed in this part of the paper.
International financing sources have played a major role as incubator funds for
the development of rural renewable energy in China, with many renewable energy
projects having been financed by multilateral and bilateral organizations.
Internationally funded projects in China have included "Capacity building for the
rapid commercialization of renewable energy", a "China renewable energy
development project", a "solar village project", a "wind power project in China's
Hubei" and a "clean energy research project". International financing channels
for China's renewable energy sector include loans from international financial
institutions, inter-government loans, non-government loans, foreign investment,
overseas Chinese investment in the territory, the Clean Development Mechanism
(CDM) and the Carbon Abatement Fund.
initiatives and the provision of financial incentives. The Chinese Township
Electrification Program was the first nationally based deployment of renewable
technologies to supply electricity to rural populations and is one of the largest
renewable energy-based rural electrification programs in the world. The
development of such projects demonstrates the robust and sustainable renewable
energy infrastructure in China, especially for solar energy.
tricity users by the project operators to pay for the lease financing. In this
way, a project sponsor can reduce its capital pressure as well as the project's
financial risk.
6 Conclusion
Investment, financing and the corresponding policies have played a significant
role in sustainable RE development. Investment and financing means, together with
the related laws, regulations and major incentive mechanisms in China, have
progressed significantly in recent years. Incentive mechanisms such as
preferential RE fiscal policies and R&D fund support for the commercialization of
technologies have gone some way toward decreasing the development costs of
renewable energy projects. Fiscal and tax incentives have significantly relieved
the financial burdens on renewable energy power generation enterprises;
preferential tax policies for VAT, income tax and import duties, together with
strong financial subsidies, have provided invaluable support for renewable power
projects. Tariff incentives, in particular, have ensured reasonable profits for
renewable energy power generation enterprises. However, investment and financing
policies for renewable energy, as well as joint financing and support mechanisms
for renewable power generation, need to be strengthened further.
546 L. Xu et al.
References
1. Abolhosseini S, Heshmati A (2014) The main support mechanisms to finance
renewable energy development. Renew Sustain Energy Rev 40:876–885
2. Bobinaite V, Tarvydas D (2014) Financing instruments and channels for the
increasing production and consumption of renewable energy: Lithuanian case.
Renew Sustain Energy Rev 38:536–545
3. Cao X, Kleit A, Liu C (2016) Why invest in wind energy? Career incentives and
chinese renewable energy politics. Energy Policy 99:120–131
4. National Renewable Energy Center (2016) The national renewable energy development report
5. Geng W, Ming Z et al (2016) China’s new energy development: status, constraints
and reforms. Renew Sustain Energy Rev 5:885–896
6. Koseoglu NM, van den Bergh JC, Lacerda JS (2013) Allocating subsidies to R&D
or to market applications of renewable energy? Balance and geographical relevance.
Energy Sustain Dev 17:536–545
7. Ming Z, Song X et al (2013) New energy bases and sustainable development in
China: a review. Renew Sustain Energy Rev 20:169–185
8. Ng TH, Tao JY (2016) Bond financing for renewable energy in Asia. Energy Policy
95:509–517
9. Shen J, Luo C (2015) Overall review of renewable energy subsidy policies in
China—contradictions of intentions and effects. Renew Sustain Energy Rev
41:1478–1488
10. Yang XJ, Hu H et al (2016) China’s renewable energy goals by 2050. Environ Dev
20:83–90
11. Zhao ZY, Zuo J et al (2011) Impacts of renewable energy regulations on the struc-
ture of power generation in China—a critical analysis. Renew Energy 36:24–30
The Study on Factors Affecting the Attraction
of Microblog – An Empirical Study Based on
Sina Microblogging Site
1 Introduction
In China, Sina Weibo, similar to Twitter, is currently the most widely used
microblogging platform, with the largest number of users. As of June 2016, Weibo
had 242 million users and a utilization rate of 34%, which had gradually
recovered compared with the end of 2015 [4]. The widespread use of Sina Weibo has
drawn enterprises' attention to microblog marketing: enterprises need to use
microblogs for brand promotion and product marketing.
With the rapid development of Web 2.0-based social networks, microblogging has
become an important social platform whose advantage in information sharing has
attracted the participation of many network users. As the Internet has spread
into many fields, most enterprises have begun to use it for marketing: a
microblog account can gather fans and enable real-time information sharing and
the dissemination of business information. As an independent form of social
media, the commercial value of microblogging comes from the platform's large
user base and huge traffic, which enterprises can exploit by implementing
marketing activities suited to their own characteristics. This paper therefore
seeks to find what kind of microblog is more likely to attract people's
attention, and thereby help enterprises conduct better social marketing.
2 Related Work
Since Twitter became an important social networking platform and an important
marketing channel for enterprises, there has been a growing body of research on
it, and the characteristics of Twitter and its applications have been studied in
various fields. Kwak et al. [18] studied the topological features of Twitter and
indicated that it serves more as a news medium than a social network. Fischer and
Reuber [5] used an inductive, theory-building methodology to develop propositions
about how effectuation processes are affected when entrepreneurs adopt Twitter.
Chen [3] studied how active Twitter use gratifies a need to connect with others,
along with the characteristics of user behavior.
Many scholars have studied user behavior characteristics on social networks like
Twitter, as well as the effect of user behavior on user influence, information
spread and information credibility. Cha et al. [1,2] studied the effects of user
characteristics such as the number of fans and users' authoritativeness on
microblog diffusion, and found that microblogs of opinion leaders, celebrities,
politicians and entrepreneurs spread more widely. Räbiger and Spiliopoulou [16]
proposed a complete framework for supervised separation of influential from
non-influential users in a social network, finding predictive properties
associated with users' activity level and their involvement in communities.
These studies focus on the influence of user identity (such as occupation and
authoritativeness) and user behavior characteristics (such as the number of fans
and the degree of contact between users and fans); they say little about the
influence of tweet content. Besides,
user influence, information credibility and information spread are correlated
with the popularity or attraction of a message, but they are not the same thing.
Messages from influential or credible users may not be attractive, and attractive
messages may not spread well (e.g., a tweet that receives many comments and likes
but few retweets can be called attractive yet have little spread effect).
The largest microblogging website in China, Sina Weibo, launched in 2009, has
also attracted much attention from researchers. Comparing Twitter and Sina Weibo,
Gao et al. [7] analyzed the textual features, topics and sentiment polarities of
posts on the two microblogging websites, revealing significant differences
between them. Liu et al. [14] analyzed the spatial distribution pattern of
Chinese tourism through microblogs and revealed the factors affecting it. Gao [6]
discussed celebrity microblog advertisements on Sina Weibo. Combining the
features of government microblogs, Xie et al. [13] constructed a theoretical
model and used a structured questionnaire to validate it. Yu et al. [17] examined
the key trending topics on Sina Weibo and compared them with observations on
Twitter, finding that trends on Sina Weibo emerge almost entirely from reposts of
entertainment content such as jokes and images, whereas trends on Twitter are
mostly due to current global events and news stories.
There is also some research on the user behavior behind popular microblogs.
Qiu [9] qualitatively analyzed the factors influencing popular microblogs from
five aspects: disseminator, dissemination content, audience, dissemination
channels and dissemination skills. Liu et al. [15] found that source
trustworthiness, source expertise, source attractiveness and the amount of
multimedia have significant effects on information retweeting. Guan, Gao and
Yang [8] found that male users are more likely to be involved in hot events, and
that messages containing pictures and those posted by verified users are more
likely to be retweeted, while those with URLs are less likely. Lun et al. [10]
found that retweeting and commenting are distinct microblogging behaviors:
retweeting aims to disseminate information, in which source credibility and
posts' informativeness play important roles, whereas commenting emphasizes social
interaction and conversation, in which users' experience and posts' topics matter
more. Most of these studies regard a microblog as popular when it is retweeted or
commented on many times; few consider the number of "likes" as a factor in
measuring popularity.
On the basis of previous studies, this paper combines the numbers of retweets,
comments and likes into a "microblog attraction" score to measure the degree of
attention a microblog receives, and uses correlation analysis and association
analysis to identify the factors affecting microblog attraction.
550 Y. He et al.
Table 1. The partial correlation coefficients of the number of “likes” and comments
with retweeting times respectively.
Table 1 shows that the partial correlation coefficient of the number of "likes"
is α = 0.457 and that of the number of comments is β = 0.52.
Eventually, the "Attraction Index of Microblog" (MAI) formula was:
Microblogs with pictures, video and other multimedia information deliver more
visual or auditory stimuli, so such microblogs may be more attractive. If a user
is authenticated, his or her microblogs are more reliable and authoritative, and
thus more likely to be commented on or retweeted, so their attraction may also be
higher.
We used the Apriori model in Clementine 12.0 to analyze the relationship between
MAI and microblog multimedia information and users' authoritativeness,
respectively, setting the minimum support at 5% and the minimum confidence at 10%.
Association rules are used to find relevance hidden in a database and to express
it in the form of rules shaped like an implication X ⇒ Y. Association rule
mining first identifies the sets of attribute values that frequently appear
together in the data set, known as frequent item-sets; association rules are then
built from these frequent sets. In association rules, count(X) is the number of
tuples in the data set D that contain the set X. The support of set X is:

Support(X) = count(X) / |D|.  (6)

Confidence is the proportion of the number of tuples that contain both X and Y
to the number of tuples that contain X:

Confidence(X ⇒ Y) = count(X ∩ Y) / count(X).  (7)
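Formulas (6) and (7) can be computed directly; a minimal sketch over a toy transaction set, in which the attribute names are illustrative, not the paper's actual attributes:

```python
def support(db, items):
    """Support(X) = count(X) / |D|: fraction of tuples containing all of X."""
    items = set(items)
    return sum(items <= set(t) for t in db) / len(db)

def confidence(db, x, y):
    """Confidence(X => Y) = count(X and Y) / count(X)."""
    x, y = set(x), set(y)
    count_x = sum(x <= set(t) for t in db)
    count_xy = sum((x | y) <= set(t) for t in db)
    return count_xy / count_x

# Toy data set: each tuple lists a microblog's attributes
db = [
    {"multimedia", "verified", "attractive"},
    {"multimedia", "attractive"},
    {"verified"},
    {"multimedia", "verified", "attractive"},
]
print(support(db, {"multimedia"}))                     # 0.75
print(confidence(db, {"multimedia"}, {"attractive"}))  # 1.0
```

The Apriori algorithm keeps only item-sets whose support exceeds the minimum (5% here) and only rules whose confidence exceeds the minimum (10% here).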
4 Results
4.1 Descriptive Statistics of MAI
For the 107,328 microblogs, each microblog's MAI was calculated according to
formula (5). To limit the value of MAI to [0, 1], it was standardized using
formula (8), giving the final MAI of each microblog:

x_new = (x_i − x_min) / (x_max − x_min).  (8)

The average MAI is 6.62 × 10⁻³; the minimum value is 0 and the maximum is 1. The
MAI of most microblogs is low: 65,270 microblogs (60.8% of the total) have an MAI
equal to 0, and 14,726 (13.7%) have an MAI greater than the average. The MAI
distribution is shown in Fig. 1. Because the MAI values are small, they were
multiplied by 100 for display (microblogs with MAI greater than 0.1 are few, so
Fig. 1 is not shown in full).
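The min-max standardization of formula (8), together with the above-average attractiveness split used below, can be sketched as follows (the raw scores are illustrative, not the paper's MAI values):

```python
def min_max(values):
    """Rescale values to [0, 1] via x_new = (x - x_min) / (x_max - x_min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

raw_mai = [0.0, 0.2, 0.5, 1.0, 4.0]   # illustrative raw MAI scores
scaled = min_max(raw_mai)
print(scaled[0], scaled[-1])  # 0.0 1.0

# Microblogs whose standardized MAI exceeds the average count as attractive
mean_mai = sum(scaled) / len(scaled)
attractive = [m for m in scaled if m > mean_mai]
```

Because the distribution is highly skewed (most values are 0), the mean sits well above the median, so only a small fraction of microblogs end up in the attractive class, matching the 13.7% reported above.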
According to the MAI, microblogs were divided into two categories: attractive and
unattractive. Microblogs whose MAI is greater than the average are attractive,
while those whose MAI is less than the average are unattractive. The following
analysis is based on this division.
• Whether a microblog contains multimedia information does not affect its
attraction.
• Whether the user is authenticated does affect microblog attraction: microblogs
posted by authenticated users are more attractive than those posted by
non-authenticated users.
Table 3. The Spearman correlation coefficients between MAI and the number of user’s
fans, followers and microblogs respectively
Table 3 shows that the effect of user behavior on microblog attraction differs
across user types. For authenticated users, the number of fans has a great effect
on microblog attraction, with a correlation coefficient of 0.522, followed by the
number of microblogs; the number of followees has little effect. For
non-authenticated users, the number of fans has less effect on microblog
attraction, and the numbers of followees and microblogs have almost no effect.
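The Spearman coefficients in Table 3 are Pearson correlations computed on ranks; a self-contained sketch, where the fan counts and MAI values are illustrative, not Table 3's data:

```python
def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Ties receive average ranks; assumes len(xs) == len(ys) >= 2.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average rank for the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Illustrative: fan counts vs. MAI for five authenticated users
fans = [120, 4500, 300, 90000, 15000]
mai = [0.01, 0.05, 0.02, 0.40, 0.10]
print(round(spearman(fans, mai), 3))  # 1.0 (perfectly monotonic toy data)
```

Spearman correlation is the appropriate choice here because fan counts and MAI are heavily skewed, and rank correlation is robust to such skew.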
Because the absolute numbers of total microblogs and attractive microblogs differ
greatly, it is not suitable to compare the absolute values directly. We therefore
calculated the hourly proportions of total microblogs and of attractive
microblogs. For example, the proportion of total microblogs in a given hour is
the number of microblogs posted in that hour divided by the number of all
microblogs (107,328); the proportion of attractive microblogs is calculated
likewise. The posting-time distributions of total and attractive microblogs are
shown in Fig. 3.
As Fig. 3 shows, the posting-time distribution of total microblogs confirms the
guess that more microblogs are posted at noon (11:00–13:00) and in the evening
(21:00–23:00) than at other times, and few microblogs are posted between 1:00 and
7:00 am. We can also see that more attractive microblogs are posted between
8:00–11:00 and 19:00–21:00 than at other times, and their fluctuation is more
obvious.
Fig. 4. Proportion of the attractive microblogs within all microblogs per hour
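The hourly proportions behind Figs. 3 and 4 can be sketched as follows (the posting hours are illustrative, not the paper's 107,328-microblog sample):

```python
from collections import Counter

def hourly_proportions(post_hours):
    """Fraction of posts in each hour 0-23, relative to all posts."""
    total = len(post_hours)
    counts = Counter(post_hours)
    return {h: counts.get(h, 0) / total for h in range(24)}

# Illustrative posting hours for a handful of microblogs
hours = [12, 12, 22, 8, 12, 22, 3, 21, 11]
props = hourly_proportions(hours)
print(round(props[12], 3))  # 0.333
```

Running the same function on the attractive subset and on the full sample yields the two comparable curves plotted in Fig. 3.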
• We only studied the effects of a microblog's post form and its user's
information on microblog attraction, without considering the influence of
microblog content. Future research can examine the effects of microblog content
on attraction, including microblog sentiment and topic type.
• For the analysis of each factor affecting microblog attraction, we used only
one method. Different methods, whose principles and perspectives differ, may lead
to different results. In future studies, the same problem can be analyzed with
different methods and the results compared in order to choose the more
appropriate methods and conclusions.
• The current study examined microblogging behavior using Sina Weibo in the
Chinese context. Other social networking sites such as Twitter and Facebook are
similar to microblogging sites in many respects but differ in others, including
technology, culture, and user behavior. It would therefore be unacceptably risky
to conclude that the findings of this study represent a cross-contextual
evaluation of microblogging behavior. Future researchers should make extensive
efforts to explore differences in user behavior across social networking
platforms.
References
1. Cha M, Haddadi H et al (2010) Measuring user influence in twitter: the million
follower fallacy. In: International conference on weblogs and social media, ICWSM
2010, Washington, DC, USA, May 2010
2. Cha M, Benevenuto F et al (2012) The world of connections and information flow
in Twitter. IEEE Trans Syst Man Cybern Part A Syst Hum 42(4):991–998
3. Chen GM (2011) Tweet this: a uses and gratifications perspective on how active
twitter use gratifies a need to connect with others. Comput Hum Behav 27(2):755–
762
4. CNNIC (2016) The 38th statistical report on internet development in China. Tech-
nical report. http://www.cnnic.net.cn/ (in Chinese)
5. Fischer E, Reuber AR (2011) Social interaction via new social media: (how)
can interactions on twitter affect effectual thinking and behavior? J Bus Ventur
26(1):1–18
6. Gao J (2016) Research on the advertising value of celebrity micro-blog. PhD thesis,
Jilin University, Changchun (in Chinese)
7. Gao Q, Abel F et al (2012) A comparative study of users’ microblogging behavior
on Sina Weibo and Twitter. Springer, Heidelberg
8. Guan W, Gao H et al (2014) Analyzing user behavior of the micro-blogging website
Sina Weibo during hot social events. Phys A Stat Mech Appl 395(4):340–351
9. Jin Q (2013) An analysis of the influencing factors on popular micro-blog. PhD
thesis, Zhengzhou University (in Chinese)
10. Kim JM, Jung YS et al (2011) Partial correlation with copula modeling. Comput
Stat Data Anal 55(3):1357–1366
11. Ko J, Kwon HW et al (2014) Model for Twitter dynamics: public attention and
time series of tweeting. Phys A Stat Mech Appl 404:142–149
12. Li F, Du TC (2014) Listen to me - evaluating the influence of micro-blogs. Decis
Support Syst 62(2):119–130
13. Liang CC (2015) Factors influencing office-workers’ purchase intention though
social media: an empirical study. Int J Customer Relat Mark Manage 5:109–116
(in Chinese)
14. Liu DJ, Jing HU et al (2015) Spatial pattern and influencing factors of tourism
micro-blogs in China: a case of tourism Sina micro-blogs. Sci Geogr Sin 35(6):717–
724
15. Liu Z, Liu L, Li H (2012) Determinants of information retweeting in microblogging.
Internet Res 22(4):443–466
16. Räbiger S, Spiliopoulou M (2015) A framework for validating the merit of proper-
ties that predict the influence of a Twitter user. Expert Syst Appl 42(5):2824–2834
17. Yu L, Asur S, Huberman BA (2011) What trends in Chinese social media? arXiv
preprint arXiv:1107.3522
18. Yu S (2014) Microblogging becomes the first listed Chinese social media.
People's Daily. http://tech.sina.com.cn/i/2014-04-18/09309329858.shtml
A Genetic Algorithm Approach for Loading
Cells with Flow Shop Configuration
1 Introduction
The goal of this paper is to provide a methodology that most effectively utilizes
available manpower and optimizes the scheduling of products with respect to
multiple performance measures. The proper utilization of resources is crucial to
the success and survivability of any company, and the proposed methodology seeks
to improve resource utilization in multi-stage, labor-intensive manufacturing
environments. Using a shoe manufacturing plant as a case study, this paper
addresses the problems of manpower allocation, cell loading, and flowshop
scheduling. The shoe manufacturing plant has several cell groups, each consisting
of a lasting cell (LC), a rotary machine cell (RMC), and a finishing cell (FC),
as shown in Fig. 1. The LC and FC are labor-intensive manufacturing cells
consisting of several sequential operations. A simplified version of the RMC is
used in this paper, so the majority of the focus is on the LC and FC. There are
fewer operation types in the LC and FC than there are workers available,
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_46
560 P. Gannon and G.A. Süer
1.1 Products
This plant produces shoes with a variety of designs, colors, and materials. The
shoes are available in two sole types, full shot or midsole. For either sole
type, the shoes can be made using either PVC or TPR and are available in three
colors (black, honey, and nicotine). The breakdown of this product structure is
shown in Fig. 2.
The demand for shoes depends on their size. The sizes range from size 5 to size
15, and demand is greatest near the middle of the size range. The demand curve
shown in Fig. 3 demonstrates the relationship between size and demand.
The Rotary Machine Cell (RMC) is the bottleneck of the manufacturing cell. The
RMC has six available positions, each of which can process a pair of shoes. The
rotary machine also has a tank to hold the material to be injected. Unprocessed
shoes are loaded into the machine and injected with either PVC or TPR to form the
soles. A setup period is needed to switch the material or color to be injected,
since the machine possesses only one tank to hold the material.
The rotary machine consists of a loading/unloading station, an injection station,
and four cooling stations; Fig. 4 illustrates its setup. A shoe's processing time
on the rotary machine is influenced by the mold type (full shot or midsole), the
size of the shoe, and whether it is designed for males or females. The average
injection molding time for full shot shoes is 0.356 min for male shoes and
0.3 min for female shoes. Midsole shoes require less processing, with average
injection molding times of 0.286 min for male shoes and 0.25 min for female
shoes. Table 1 shows injection times for the various designs at the available
sizes. Times are classified as full shot (FS), midsole (MS), male (M), and
female (F).
Shoes loaded into the rotary machine cannot be unloaded until all shoes in the
machine have been processed. As a result, shoes with lower processing times spend
more extra time in the machine than shoes with higher processing times, and this
extra time leads to extra material being injected into their molds. It is
therefore beneficial to minimize variation in the processing times of shoes that
are loaded together. However, this problem has been addressed in prior works
[6,12,16], so for the purposes of this paper, one size is assigned at a time in
the rotary machine.
1.3 Molds
Different molds are needed for processing shoes depending on their size and type
(full shot or midsole). The molds carry a prohibitive cost, which limits their
availability. As shoes of varying size and type are processed in the RMC, the
molds have to be changed constantly to process the corresponding shoe, and they
have to be cleaned when the material or color is changed. The problem of a
limited number of molds for each size has been addressed previously in the
literature; for this study, it is assumed that there are no mold availability
restrictions.
2 Literature Review
This paragraph briefly discusses some prior work on cell loading and manpower
decisions. Süer, Saiz, Dagli, and Gonzalez [14] expanded research in the cellular
control area to multi-cell environments, which necessitate cell loading
considerations. Süer [13] developed a two-phase methodology to find optimal
manpower allocation and cell loads simultaneously. An extension of this work by
Süer [13] added lot-splitting considerations to the two-phase approach and found
that lot splitting is advantageous when setup times are negligible. Akturk [1]
addressed the production planning problem in cellular manufacturing systems; by
using capacity constraints to evaluate the impact of cell loading decisions on
lower levels, they were able to portray the cellular manufacturing system and its
operation more accurately. Saad [9] developed a multi-objective optimization
technique to load and schedule cellular manufacturing systems using simulation
and a tabu search algorithm. Babayigit [2] analyzed the problems of manpower
allocation and cell loading with a mathematical model and a genetic algorithm,
further extending the previous works by Süer [13,14].
Süer [15] proposed a three-phase methodology to address cell loading and product
sequencing in labor-intensive cells: optimal manpower allocation and cell loads
are found in the first two stages, and the third stage treats product sequencing
as a traveling salesman problem to minimize intra-cell manpower transfers. Some
scheduling work from the literature is discussed in this paragraph.
Nakamura [8] considered group scheduling on a single stage to minimize total
tardiness and found that scheduling problems of moderate size can be ordered to
reduce the number of schedules to be searched. Sinha [10] found that a desired
level of throughput and optimal work-in-progress in a cell can be achieved
through sequencing, reduced batch sizes, and period batch control. Gupta [5]
developed an optimization algorithm and heuristic procedures to schedule jobs on
two identical machines, with the objective of finding a schedule with optimal
total flowtime that gives the smallest makespan. Kim [3] used a restricted tabu
search algorithm to schedule jobs on parallel machines in order to minimize the
maximum lateness of jobs. Gupta [4] studied single-machine group scheduling with
sequence-independent family setup times in order to minimize total tardiness; the
heuristics developed were shown to be effective.
Subramanian [11] focused on cell loading and job scheduling at a shoe man-
ufacturing plant. The objective of this research was to develop heuristics for cell
loading and combine these with simple scheduling methodologies to load all jobs
without exceeding capacity. This capacity was set using a MaxCap methodol-
ogy, which is the maximum capacity to be loaded. This methodology is critical
in determining the number of cells required and how the load is spread across
different cells in the facility. An interesting finding from this research is that
the optimal result from the cell loading stage may not remain optimal after the
cell scheduling stage. Urs [6] developed three scheduling heuristics using basic
machine scheduling methodology. The heuristics aimed to minimize makespan in
rotary machine scheduling. The best performing heuristic was the minimum
564 P. Gannon and G.A. Süer
difference in cycle time heuristic. This research stands out from prior works by
assuming a limited number of molds for each size, meaning multiple sizes can be
run on the Rotary Molding Machine simultaneously. Mese [7] studied cell load-
ing and family scheduling in a cellular manufacturing environment. This study
is distinguished from prior research by giving every job in a family an individual
due date. A focus of this study was the tradeoff between meeting due dates of
jobs and reducing total setup time. Setup times are reduced if all jobs of a family
are scheduled together, but this could cause delays and tardiness in other jobs in
other families. A way to offset this is to allow family splitting, but this increases
setup times and in return may increase the number of tardy jobs. The author
used mathematical modeling and a genetic algorithm to solve this problem. The
mathematical model was slow and impractical to use for larger problems. The
GA developed was both effective in finding optimal or near optimal solutions
and efficient.
Süer [16] developed a three-phase methodology to perform cell loading and
scheduling in a shoe manufacturing company. This research used three family
definitions (sub-families, families, and superfamilies) in the cell loading process.
These different family definitions are used at different stages of the planning and
scheduling process. Superfamily definition was used to determine the number of
cells of each type. Family definitions are used after product-to-cell assignments.
These families allow the number of setups to be minimized. Finally, the subfamily
definition is used in the implementation of heuristics before cell loading begins.
This research also used the MaxCap methodology used by Subramanian [12]. A
valuable conclusion from this research is that the best post loading results did
not give the best post scheduling results. This demonstrated the need to consider
that isolating a single level of a multilevel problem may not result in the best
solution for the overall problem. Only the injection molding cell is covered in this
research, leaving further research to be done regarding the lasting and finishing
cells in the shoe manufacturing company.
3 General Methodology
The three-phase methodology proposed to solve the problem is described in the
following sections.
Phase 1 - Manpower Allocation
Thirty-five workers are split between the LC and FC. The six worker splits
evaluated are 15/20, 16/19, 17/18, 18/17, 19/16, 20/15, where 15/20 represents
15 workers available for allocation in the LC and 20 available in the FC. The
workers allocated to the LC and FC are then allocated among the operations
within those cells. A mathematical model is used to optimally allocate manpower
in order to maximize production rates. The model is run separately for each
product at each worker level in both the LC and FC.
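The role of the allocation model in this phase can be illustrated with a small sketch. The following brute-force stand-in (not the authors' ILOG OPL formulation) assumes that the production rate of an operation equals the number of assigned workers divided by its unit processing time, and that the cell rate is limited by the bottleneck operation; the processing times and the per-operation worker cap are hypothetical.

```python
from itertools import product

def best_allocation(proc_times, total_workers, max_per_op=4):
    """Enumerate worker allocations over the operations and return the one
    that maximizes the cell production rate (bottleneck operation rate).
    Assumes rate at an operation = workers / unit processing time."""
    n_ops = len(proc_times)
    best = (0.0, None)
    for alloc in product(range(1, max_per_op + 1), repeat=n_ops):
        if sum(alloc) != total_workers:
            continue
        rate = min(w / t for w, t in zip(alloc, proc_times))
        if rate > best[0]:
            best = (rate, alloc)
    return best

# hypothetical unit processing times (min/unit) for five operations
rate, alloc = best_allocation([0.8, 1.1, 0.9, 1.0, 0.7], total_workers=15)
```

The optimal allocation places extra workers on the slowest operations until the bottleneck rate cannot be raised further, which is exactly the behavior the relationship in Eq. (2) enforces in the exact model.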
Phase 2 - Cell Loading
Products are assigned to cells in order to maximize the machine-level based
similarity among products in the cells and also minimize the number of cells
opened. A genetic algorithm is used to assign products to cells.
A Genetic Algorithm Approach for Loading Cells 565
Equation (1) shows the objective function of the mathematical model, which
is to maximize the production rate. The relationship between the number of workers
at a station and the production rate is determined in Eq. (2), ensuring enough
workers are assigned to each operation to meet the desired production rate.
Equation (3) establishes an upper limit on the number of workers allowed at an operation.
Equation (4) ensures that the total number of workers assigned to the stations
does not exceed the total number of workers in the system. An example will be
used to explain each phase of the problem. An example featuring 10 products
each with five operations in both the LC and FC will be used. Unit processing
times are given in Table 3. The mathematical model described earlier in this
section was run using these processing times. The optimal worker allocations for
product 1 at all worker levels are shown in Table 4. Table 5 shows optimal worker
allocations for all 10 products at the 15/20 worker level.
The mathematical model was solved using ILOG OPL 6.3. The optimal pro-
duction rates are determined in both the LC and FC at the given worker level. For
example, product 2 at the 15/20 worker level for the Lasting Cell requires three
workers at each operation and results in a production rate of 3.52 units/minute.
Table 6. Pairwise similarity coefficients among the 10 products

Products   1     2     3     4     5     6     7     8     9     10
1          -     0.76  0.67  0.5   0.58  0.67  0.76  0.67  0.67  0.76
2          -     -     0.76  0.58  0.76  0.88  0.67  0.67  0.88  0.76
3          -     -     -     0.5   0.58  0.67  0.58  0.58  0.67  0.67
4          -     -     -     -     0.58  0.67  0.58  0.58  0.67  0.5
5          -     -     -     -     -     0.88  0.58  0.58  0.88  0.67
6          -     -     -     -     -     -     0.67  0.67  1.00  0.76
7          -     -     -     -     -     -     -     0.76  0.67  0.58
8          -     -     -     -     -     -     -     -     0.67  0.5
9          -     -     -     -     -     -     -     -     -     0.76
10         -     -     -     -     -     -     -     -     -     -
The capacity requirements are calculated using the production rates found
in phase 1 and demand. Each product’s demand and capacity requirements are
shown in Table 7. Capacity requirements are shown in minutes and as a fraction
of the total minutes available in each cell. Capacity requirements are deter-
mined for both the LC and FC, and cells are loaded so that neither exceeds the
allowable utilization, 100% for the example problem. An example solution for
the problem is shown in Table 8. Cell 1 is assigned products 10, 7, 6, and 2.
The utilization in the LC is 0.88 and the utilization for the FC is 0.71.
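The capacity check behind this loading rule can be sketched as a simple next-fit pass: walk the product sequence and keep adding products to the current cell until either the LC or the FC utilization would exceed the cap, then open a new cell. This is only an illustrative sketch, not the paper's genetic algorithm, and the per-product utilization fractions below are hypothetical.

```python
def load_cells(lc_util, fc_util, max_util=1.0):
    """Next-fit cell loading: add each product (in sequence) to the current
    cell unless doing so would push LC or FC utilization over max_util,
    in which case a new cell is opened."""
    cells = []  # each cell: [lc_total, fc_total, [0-based product positions]]
    for p, (lc, fc) in enumerate(zip(lc_util, fc_util)):
        if cells and cells[-1][0] + lc <= max_util and cells[-1][1] + fc <= max_util:
            cells[-1][0] += lc
            cells[-1][1] += fc
            cells[-1][2].append(p)
        else:
            cells.append([lc, fc, [p]])
    return cells

# hypothetical per-product LC and FC utilization fractions for a 10-product sequence
cells = load_cells(
    [0.24, 0.3, 0.28, 0.17, 0.22, 0.2, 0.2, 0.15, 0.23, 0.26],
    [0.27, 0.3, 0.17, 0.3, 0.25, 0.22, 0.18, 0.15, 0.23, 0.18],
)
```

Note that a cell is closed as soon as either side's cap would be violated, so a product can be rejected even when one of the two cells still has slack.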
t_j ≥ c_{3j} − d_j,  j = 1, 2, · · · , n   (13)
M × w_j ≥ t_j,  j = 1, 2, · · · , n   (14)
y_{1i} − y_{1j} + M × z_{ij} ≥ p_{1j},  i < j;  i, j = 1, 2, · · · , n   (15)
y_{1j} − y_{1i} − M × z_{ij} ≥ −M + p_{1i},  i < j;  i, j = 1, 2, · · · , n   (16)
y_{2i} − y_{2j} + M × b_{ij} ≥ p_{2j},  i < j;  i, j = 1, 2, · · · , n   (17)
The various objective functions which can be used depending on the desired
performance measure are shown in Eqs. (6), (7), (8), which seek to minimize
number of tardy jobs, total tardiness, and makespan, respectively. Equation (9)
sums the jobs' tardiness to define total tardiness. Equation (10) establishes the
relationship between each job's completion time and the makespan, ensuring that
makespan is equal to the completion time of the last job. Equation (11) ensures that a job has to
finish processing in its current cell before it can start in the following cell. In a
similar manner, Eq. (12) ensures that a job must complete processing in the final
cell before it can be labeled complete. The relationship between completion times,
due dates, and tardiness is established in Eq. (13). Equation (14) assigns a value
of either zero, for on time jobs, or one, for tardy jobs. Equations (15) and (16)
perform a pairwise comparison between jobs in cell 1 to determine which job is
processed first. This comparison is done between all jobs and Eq. (17) through
Eq. (20) show the comparison being performed in the second and third cells.
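The big-M pattern behind these pairwise sequencing constraints can be checked directly. The sketch below uses the standard big-M disjunction shape (an assumption about the exact form, not copied from the paper), with hypothetical start and processing times, to show how the binary variable selects which job of a pair precedes the other:

```python
def pair_sequencing_ok(y_i, y_j, p_i, p_j, z, M=1000):
    """Standard big-M disjunction for one job pair on one machine.
    z = 1 forces job i to finish before job j starts; z = 0 the reverse.
    y_* are start times, p_* are processing times."""
    c1 = y_i - y_j + M * z >= p_j       # binding when z = 0: j precedes i
    c2 = y_j - y_i - M * z >= -M + p_i  # binding when z = 1: i precedes j
    return c1 and c2

ok = pair_sequencing_ok(y_i=0, y_j=5, p_i=5, p_j=3, z=1)       # i then j, no overlap
overlap = pair_sequencing_ok(y_i=0, y_j=4, p_i=5, p_j=3, z=1)  # j starts before i finishes
```

With z fixed, exactly one of the two inequalities is binding while the other is satisfied trivially by the large constant M, which is how the disjunction "i before j, or j before i" is encoded linearly.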
The schedules for the part families from the example with regard to
makespan are shown in Table 9. For example, the sequence for part family 1
is 2-7-6-10 and the makespan is 53. A Gantt chart of each part family's
makespan schedule is shown in Fig. 5.
Part family    Schedule    Makespan
1              2-7-6-10    53
2              8-3-1       60
3              5-4-9       50
Product sequence   1     2     3     4     5     6     7     8     9     10
Product            2     4     1     5     6     8     3     9     10    7
LC utilization     0.24  0.3   0.28  0.17  0.22  0.2   0.2   0.15  0.23  0.26
FC utilization     0.27  0.3   0.17  0.3   0.25  0.22  0.18  0.15  0.23  0.18
Cell               1     1     1     2     2     2     2     3     3     3
Total LC util      0.24  0.54  0.82  0.17  0.39  0.59  0.79  0.15  0.38  0.64
Total FC util      0.27  0.57  0.74  0.3   0.55  0.77  0.95  0.15  0.38  0.56
Parent 1   4 2 3 6 5 8 7 1        Parent 1   8 1 7 2 6 3 5 4
Parent 2   2 7 6 5 1 3 8 4        Parent 2   2 5 8 7 6 3 4 1
Child 1    4 2 3 6 7 5 8 1        Child 1    8 1 7 2 5 6 3 4
Child 2    2 7 4 6 1 3 5 8        Child 2    1 2 8 7 6 3 5 4
After crossover and mutation have been performed, products are reassigned
to cells in the same manner as in the original chromosome to ensure the maximum
utilizations are not exceeded. This study also investigates the effect of performing
mutation before crossover. The classical genetic algorithm flow applies crossover
before mutation. By reversing the traditional flow, it is possible that better parent
chromosomes will be created, and performing crossover with these better parents
will result in improved offspring chromosomes. In this study, selection is done by
choosing the best-ranked chromosomes from all the parent and offspring
chromosomes to move on to the next generation.
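The reversed operator order and the elitist selection described above can be sketched generically. This outline uses assumed operators (a simple one-cut order crossover and a swap mutation on permutation chromosomes) and a toy cost function; it is not the authors' implementation.

```python
import random

def order_crossover(a, b):
    """One-cut order crossover: keep the head of one parent, fill the rest
    in the order the missing genes appear in the other parent."""
    cut = len(a) // 2
    c1 = a[:cut] + [g for g in b if g not in a[:cut]]
    c2 = b[:cut] + [g for g in a if g not in b[:cut]]
    return [c1, c2]

def swap_mutation(p):
    """Swap two randomly chosen positions of a permutation (returns a copy)."""
    p = list(p)
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def evolve(pop, fitness, generations=50, mutation_first=True, mut_rate=0.2):
    """Elitist GA: optionally mutate parents before crossover (the reversed
    flow discussed above), then keep the best of parents + offspring."""
    size = len(pop)
    for _ in range(generations):
        parents = [swap_mutation(p) if mutation_first and random.random() < mut_rate
                   else p for p in pop]
        random.shuffle(parents)
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            offspring.extend(order_crossover(a, b))
        if not mutation_first:
            offspring = [swap_mutation(c) if random.random() < mut_rate else c
                         for c in offspring]
        pop = sorted(pop + offspring, key=fitness)[:size]  # elitist selection
    return pop

# toy demo: minimize total displacement of a permutation (stand-in for makespan)
random.seed(0)
cost = lambda p: sum(abs(i - g) for i, g in enumerate(p))
start = [list(range(8)) for _ in range(10)]
for p in start:
    random.shuffle(p)
best = evolve(start, cost)[0]
```

Because selection pools parents and offspring, the best chromosome found so far can never be lost, regardless of whether mutation is applied before or after crossover.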
Manpower levels            15/20   16/19   17/18   18/17   19/16     20/15
Overall best GA strategy   GA2     GA4     GA3     GA3     GA2/GA4   GA3
Best MS                    60      60      60      61      59        58
Best nT                    8       8       9       9       8         8
Best TT                    137     122     120     135     111       116
Best # Machines            274     272     276     247     279       278
References
1. Akturk MS, Wilson GR (1998) A hierarchical model for the cell loading problem
of cellular manufacturing systems. Int J Prod Res 36(7):2005–2023
2. Babayiǧit C (2003) Genetic algorithms and mathematical models in manpower
allocation and cell loading problem [electronic resource]. Master’s thesis, Ohio Uni-
versity
3. Chang OK, Shin HJ (2003) Scheduling jobs on parallel machines: a restricted tabu
search approach. Int J Adv Manuf Technol 22(3):278–287
4. Gupta JND, Chantaravarapan S (2008) Single machine group scheduling with fam-
ily setups to minimize total tardiness. Int J Prod Res 46(6):1707–1722
5. Gupta JND, Ho JC (2001) Minimizing makespan subject to minimum flowtime on
two identical parallel machines. Comput Oper Res 28(7):705–717
6. Huang J, Süer GA, Urs SBR (2012) Genetic algorithm for rotary machine schedul-
ing with dependent processing times. J Intell Manuf 23(5):1931–1948
7. Mese E (2009) Cell loading and family scheduling for jobs in a shoe manufacturing
company. Master’s thesis, Ohio University
8. Nakamura N, Yoshida T, Hitomi K (1978) Group production scheduling for mini-
mum total tardiness part (i). IIE Trans 10(2):157–162
9. Saad SM, Baykasoglu A, Gindy NNZ (2002) A new integrated system for loading
and scheduling in cellular manufacturing. Int J Comput Integr Manuf 15(1):37–49
10. Sinha RK, Hollier RH (1984) A review of production control problems in cellular
manufacture. Int J Prod Res 22(5):773–789
11. Subramanian AK (2004) Cell loading and scheduling in a shoe manufacturing
company. Master’s thesis, Ohio University
12. Süer G, Subramanian A, Huang J (2009) Heuristic procedures and mathematical
models for cell loading and scheduling in a shoe manufacturing company. Comput
Ind Eng 56(2):462–475
13. Süer GA (1996) Optimal operator assignment and cell loading in labor-intensive
manufacturing cells. Comput Ind Eng 31(1–2):155–158
14. Süer GA, Saiz M et al (1995) Manufacturing cell loading rules and algorithms for
connected cells. Manuf Res Technol 24(24):97–127
15. Süer GA, Cosner J, Patten A (2008) Models for cell loading and product sequencing
in labor-intensive cells. Comput Ind Eng 56(1):97–105
16. Süer GA, Ates OK, Mese EM (2014) Cell loading and family scheduling for
jobs with individual due dates to minimise maximum tardiness. Int J Prod Res
52(19):5656–5674
Research on the Competitive Strategy
of Two Sided Platform Enterprises
Based on Hotelling Model
1 Introduction
In recent years, the two-sided market has become an important topic in indus-
trial organization and competitive strategy research, and the distinctive compet-
itive strategies of platform businesses have attracted the attention of scholars.
The earliest group buying website appeared in the United States (Groupon).
It mainly exploits consumers' desire for larger discounts to form an effective
network interaction, with the group buying site earning service fees from busi-
nesses. Influenced by this operating mode, Chinese group purchase networks
developed rapidly after 2010. From group purchase websites to mobile APP
applications, Meituan, Public Comment network (Dianping), Juhuasuan, Baidu
Nuomi, Wowo Group and other large professional group buying platforms
provide abundant products involving almost every aspect of people's lives, such
as food, clothing, cosmetics and beauty, fitness, hotels, tourism and so on.
Consumers are attracted by lower prices and larger discounts; with the increase
in the number of consumers, more and more merchants register on the enterprise
platforms, and the volume of business brings profit to the platform enterprise.
According to Chinese group purchase market statistics from June 2015, by the end of June
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 47
578 Z. Wang and Y. Guo
2015, Chinese network group purchase transaction volume reached 16.74 billion
yuan, a year-on-year growth of 182.5%, and the number of participants reached
250 million person-times, a year-on-year growth of 161.8%. As a typical platform
enterprise, the group purchase platform faces increasingly fierce competition,
and not everything goes smoothly. Due to the diversity of businesses and con-
sumers, some businesses and consumers choose a single platform, while others
register on several different network platforms. The choice behavior of differ-
ent businesses and consumers influences the pricing direction and profit of the
platform enterprise. Studying the choice behavior of consumers and businesses,
in order to develop a reasonable pricing mechanism and competitive strategy
for the different types of choice, is of great significance for the development of
the group buying platform.
2 Related Works
Entering the twenty-first century, two-sided market theory has attracted wide
attention with the rapid development of the information economy, and its impor-
tance as a leading theory of industrial organization has been increasing. The
earliest examples of two-sided markets can be traced back to the "penny press"
movement led by the New York Sun, in which readers no longer needed to cover
the full cost of the newspaper because the operator earned its main profit from
advertising revenue [1]. Rochet and Tirole [7] described a two-sided market as
one with two independent sides whose ultimate benefit comes from transactions
through the platform. They established a two-sided market platform model that
has become the research basis of two-sided market theory. Cheng and Sun [6]
proposed that a two-sided market has a dumbbell-type structure, with the par-
ticipating party B representing consumers and the participating party S repre-
senting businesses, as shown in Fig. 1.
Chen [4] proposed a network group purchase system and gave a more com-
prehensive description of network group purchase (Fig. 2). As a two-sided plat-
form enterprise, a group buying site also brings two different types of partici-
pants together to trade through the platform, and it likewise has the specific
dumbbell-type structure of a two-sided market.
Through the study of a large body of literature, most scholars agree that the
two-sided market has the following characteristics. Firstly, the platform has at
least two different types of user (businesses and consumers) in the market, and provides
Competitive Strategy of Two Sided Platform Enterprises 579
different products or services for each type of user, or enables deals between
the different types of user through the platform. Secondly, the various types of
user interact via the platform and need each other; the platform helps them
complete transactions and improves transaction efficiency. Thirdly, network
externalities exist between the different types of client on the platform: an
increase in the number of users on one side (for example, an increasing number
of consumers) will increase the value of the platform for the other types of
user [5].
In recent years, with the advent of the era of big data, new research on two-
sided market platform enterprises has appeared. Bardey, Helmuth and Lozach-
meur [2] studied competition in two-sided markets with a common network
externality rather than with the standard inter-group effects. Behringer and
Filistrucchi [3] showed that a two-sided monopolist may find it short-run profit-
maximizing to charge a price below marginal cost on one side of the market,
hence showing that a price below marginal cost on one side of a two-sided
market cannot be considered a sign of predation. They then argue for a two-sided
Areeda-Turner rule that takes into account price-cost margins on both sides of
the market. Roger [8] studied a duopoly in which two-sided platforms compete
with differentiated products in a two-sided market. Zhu [10] discussed platforms
moving from a one-sided market to a combined one-sided and two-sided market,
taking JD.com as an example. Scholars have discussed the competitive relation-
ships of the two-sided market from different angles, and the research is becoming
more and more comprehensive. This paper discusses the competitive relation-
ship in the two-sided market based on the users' homing (ownership) behavior.
Some consumers and businesses register on or select only one platform for con-
sumption; this user behavior belongs to single homing. Other consumers and
businesses register on or select more than one platform; this behavior belongs
to multi-homing. User behavior on group purchase platforms is mixed, because
homing behavior is not uniform. Poolsombat and Vernasca called this behavior
in the two-sided market partial multi-homing: one part of the users belongs to
single homing and the other part belongs to multi-homing. In reality, the users
on both ends of platform companies, such as most group purchase platforms,
belong to partial multi-homing. Taking customer loyalty into account, the hom-
ing behaviors of users are divided into three types: in type one, all consumers
and businesses belong to single homing; in type two, some consumers or busi-
nesses belong to partial multi-homing while the other businesses and consumers
belong to single homing; in type three, both consumers and businesses belong
to partial multi-homing. According to the characteristics of the two-sided mar-
ket and the external factors that influence it, this paper studies the following
problems.
(1) Differences between the two sides of the group platform and the impact of
    network externalities lead businesses and consumers to have different homing
    behaviors. The type of the users' homing behavior can greatly affect the
    differences between group buying platforms. The question is whether it can
    reduce the impact of network externalities and obtain larger profits for the
    group buying platforms.
(2) Businesses and consumers can freely choose which group purchase platform
    to belong to. The different homing behaviors of businesses and consumers
    will affect the pricing of the group purchase platform enterprise. Because
    consumers and businesses have different homing types, group purchase
    platform pricing may differ.
(3) In a two-sided market where businesses and consumers can freely choose
    their homing behavior, corporate profits will be affected by the different
    homing types of consumers and businesses. As a result, group purchase
    platform gains may differ.
(4) In order to develop, a group purchase platform may adopt different strate-
    gies, for example using market segmentation to attract different consumers
    and businesses. By adopting a differentiation strategy, the homing behavior
    of consumers and businesses may be changed. Guiding the homing direction
    of users may bring more profits for the development of the platform
    enterprise.
First of all, we define symbols for the convenience of the analysis and then
discuss the classification. In this model, the cost of the service that the group
buying platform provides to users is ignored.
Notation:
i, j: the two platform enterprises with a competitive relationship;
k, m: indices of businesses and consumers, located on the two sides of the group
      buying platform and uniformly distributed on the line, k, m = 1, 2;
v_k^i, v_k^j: net utility of a group-k user registered on group buying platform i
      and platform j;
p_k^i: price charged by the group buying platform to group-k users;
t: unit transportation cost of a user to the platform (its economic meaning is
      the degree of difference between the two platforms);
β: network externality between the user groups;
v_0: basic utility users obtain from platform i or platform j; if all users are to
      register on at least one platform and transact, v_0 tends to infinity;
n_k^i, n_k^j: number of single-homing users of user group k on group buying
      platform i and platform j;
N_k^i: total number of single-homing and multi-homing users of user group k on
      group buying platform i;
π: profit of the group buying platform, equal to the number of users multiplied
      by the price;
ss: both sides are single homing;
sm: one side is single homing, the other side is partial multi-homing;
mm: both sides are partial multi-homing.
A and B are the indifference points on the two sides of the two group purchase
platforms; at these two points users obtain the same utility from either platform.
The number of users on platform 1 is n_k^1 = x, with n_m^1 + n_m^2 = 1, so:
Because both sides are single homing, n_1^1 + n_1^2 = 1 and n_2^1 + n_2^2 = 1,
which gives n_1^2 and n_2^2.
The profit functions of the group buying platforms are:

π_1 = p_1^1 n_1^1 + p_2^1 n_2^1
π_2 = p_1^2 n_1^2 + p_2^2 n_2^2.
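The indifference conditions behind these profit functions can be illustrated numerically. The sketch below assumes the net-utility specification used in this section (v_k^1 = v_0 − p_k^1 − t·x + βN on platform 1, and symmetrically at distance 1 − x on platform 2), from which the indifferent user of group k satisfies x_k = 1/2 + (p_k^2 − p_k^1)/(2t) + β(2x_m − 1)/(2t), m being the other group. This algebra is my own reading of the setup, not a formula quoted from the paper.

```python
def ss_shares(p1, p2, t, beta, iters=200):
    """Fixed-point iteration for platform 1's market shares when both
    sides single-home. p1 / p2 hold (price to group 1, price to group 2)
    on platforms 1 and 2; iteration contracts when beta < t."""
    x = [0.5, 0.5]  # platform 1's share of each user group
    for _ in range(iters):
        x = [0.5 + (p2[k] - p1[k]) / (2 * t) + beta * (2 * x[1 - k] - 1) / (2 * t)
             for k in (0, 1)]
    return x

# symmetric prices: each platform serves half of each side
x_sym = ss_shares(p1=(1.0, 0.8), p2=(1.0, 0.8), t=1.0, beta=0.4)
# platform 1 raises its price to group 1 and loses share on both sides
x_up = ss_shares(p1=(1.1, 0.8), p2=(1.0, 0.8), t=1.0, beta=0.4)
profit1 = 1.1 * x_up[0] + 0.8 * x_up[1]  # pi_1 = p_1^1 n_1^1 + p_2^1 n_2^1
```

The cross-group term β(2x_m − 1)/(2t) is what makes a price increase on one side erode shares on both sides, which is the network externality effect the text emphasizes.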
Now let one side of the market be single homing (as described for Fig. 3 under
the setting of Fig. 1) and the other side be partial multi-homing; the situation
is as shown in Fig. 4:
Fig. 4. Homing ranges of users on the two sides across the two competing
platforms
On each side the total number of users is 1, so N_k^1 + n_k^2 = 1 and N_k^2 + n_k^1 = 1.
The net utility of a single-homing group-k user on platform 1 is
v_k^1 = v_0 − p_k^1 − tx + βN_2^1. For single-homing users on platform 2, the
indifferent user lies at distance y from platform 1, where
y = 1 − n_2^2 = β/t − p_2^1/t − (β/t)n_1^2.
Solving the system simultaneously:
n_1^1 = 1/2 − t(p_1^1 − p_1^2)/(2(t² − β²)) − β(p_1^1 − p_1^2)/(2(t² − β²))

n_2^1 = 1/2 − β/(2t) − β(p_1^1 − p_1^2)/(2(t² − β²)) − [β²/(2t(t² − β²))] p_2^1 − [(2t² − β²)/(2t(t² − β²))] p_2^2

n_2^2 = 1/2 − β/(2t) + β(p_1^1 − p_1^2)/(2(t² − β²)) − [(2t² − β²)/(2t(t² − β²))] p_2^1 − [β²/(2t(t² − β²))] p_2^2.
Under the condition t > β, the profits of group buying platform 1 and group
buying platform 2 reach their maximum values. With p_1^1 = p_1^2 = p_1 and
p_2^1 = p_2^2 = p_2, the group buying platform prices are:
p_1^sm = (t² − β²)/t,  p_2^sm = 0.
Under the same symmetry conditions p_1^1 = p_1^2 = p_1, p_2^1 = p_2^2 = p_2,
we can obtain the prices of the group buying platforms:

p_1^mm = p_2^mm = β(t − β)/(2t − β).
The number of single-homing users belonging to each of the two group purchase
platforms is:

n_1^{1,mm} = n_1^{2,mm} = n_2^{1,mm} = n_2^{2,mm} = (2t² − β²)/((t + β)(2t − β)).

Similarly, in the case t > β, the maximum profit is:

π^mm = 2tβ²(t − β)/((t + β)(2t − β)²).
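Plugging sample parameter values into the closed forms quoted above gives a quick numerical reading (this only evaluates the stated expressions; it is not an independent re-derivation):

```python
def mm_outcomes(t, beta):
    """Evaluate the mm-case closed forms as stated in the text (t > beta):
    symmetric price, single-homing share, and maximum profit."""
    price = beta * (t - beta) / (2 * t - beta)
    share = (2 * t**2 - beta**2) / ((t + beta) * (2 * t - beta))
    profit = 2 * t * beta**2 * (t - beta) / ((t + beta) * (2 * t - beta) ** 2)
    return price, share, profit

price, share, profit = mm_outcomes(t=1.0, beta=0.5)
```

For instance, at t = 1 and β = 0.5 the price evaluates to 1/6 and the profit to 2/27; as β → t both the price and the profit vanish, consistent with the role of platform differentiation t − β in the formulas.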
This conclusion is consistent with the general pricing rules of the group buying
platform and the basic characteristics of the two-sided market. Platform enter-
prises may also adopt differential pricing; the differential pricing method is one
of the strategies of competition between enterprises.
A group buying platform obtains the highest profit when users on both sides
belong to single homing. The case where users on one side belong to single
homing and users on the other side belong to partial multi-homing takes second
place. When users on both sides belong to partial multi-homing, platform profits
are reduced. On the one hand, single homing on one side combined with partial
multi-homing on the other side can raise platform enterprise pricing; on the
other hand, single homing on both sides is the most profitable. This conclusion
may seem contrary to conclusion (2) above. In fact, by using various initiatives
to prevent users' multi-homing behaviors, group purchase platforms enhance
user loyalty to the platform, which is an internal incentive factor for the
development of platform enterprises.
Differentiating, ∂π^mm/∂t has denominator (t + β)²(2t − β)³. When t > β and
β ≥ (2/3)t, ∂π^mm/∂t > 0; when β ≤ (2/3)t, the sign of ∂π^mm/∂t is uncertain.
From the above, the cases where users on both sides belong to single homing,
and where users on one side belong to single homing while the other side belongs
to partial multi-homing, show that the differentiation between group buying
platforms has a greater influence than the network externalities; in these two
cases, platform pricing and profit will be higher. When users on both sides
belong to partial multi-homing, the platform profit is uncertain. This shows that
a platform may not be able to obtain higher profits if it only pursues unilateral
profits and higher prices in the process of competition; it needs a reasonable
price for platform competition.
6 Conclusion
The users' homing behavior is a complicated economic behavior, which has an
important influence on the development of competitive strategy. Based on the
above analysis, the following suggestions are put forward:
(1) The platform should develop differentiated and personalized competitive
    strategies based on the needs of each type of customer, and avoid the risks
    that the impact of network externalities brings to the platform.
(2) On online group buying platforms, the multi-homing behavior of users will
    reduce the fees that can be charged to merchants. Malignant price compe-
    tition and rapid expansion produced many homogeneous group buying
    platforms, such as lashou.com, which eventually lost competitiveness. In
    operating the platform, attention should be paid to both businesses and
    consumers on the two sides, trying to improve customer loyalty for the
    development of the platform and enterprise profitability.
(3) Users choosing only one group buying platform can enhance that platform's
    competitive advantage. The single-homing behaviors of consumers and
    businesses yield a distinct competitive advantage: for a group purchase
    website, a better reputation brings more consumer users, which attracts
    more business enterprises to cooperate with the site, forming a virtuous
    circle and crowding out competitors.
(4) The multi-homing behavior of users across interconnected platforms can
    weaken platform profit. Payment cooperation with a number of group
    purchase websites is not only convenient for consumers but also attracts
    more business cooperation. Although the consumer exhibits multi-homing
    behavior, from the platform's point of view it appears as single homing for
    the group purchase website. As a whole, platform interconnection can
    weaken the multi-homing behavior of consumers and improve the competi-
    tiveness of the platform enterprise.
References
1. Amery E, Amery M, South LR (2009) American journalism history: the history
of mass media interpretation, 9th edn. Renmin University of China Press, Beijing
Lu Li1,2(B)
1
Tourism School of Sichuan University, Chengdu 610065, People’s Republic of China
342920007@qq.com
2
School of Economics and Management of Chengdu Technological University,
Chengdu 610031, People’s Republic of China
1 Introduction
2 Literature Review
2.1 Tourist Experience and Satisfaction
Research on the tourist experience is an important and difficult problem. Tourist
experiences show both individual differences and common features. The differ-
ences mean that the scope, content, and depth of each tourist's experience are
not the same. But from the point of view of tourist groups, their experiences
have a certain commonality and show similar regularity.
In theory, the ultimate goal of travel is to obtain a high quality tourism expe-
rience. Therefore, tourism destinations and their managers try to achieve this
aim by offering high quality tourist experiences [2].
From the perspective of management and marketing, the tourist experience
is reflected in tourist satisfaction. From the managers' perspective, the tourist
experience is formed at the travel destination, so tourists are the main source of
information for destination management. Tourist evaluations of the destination
constitute important feedback, and managers adopt the corresponding manage-
ment improvements and marketing strategies according to this feedback [16].
This reflects the current state of research on the visitor experience. Although
no consensus has been reached on the ontology of the tourist experience, most
scholars have formed their own understandings from different angles and pre-
sented their views of the tourist experience from different aspects, correspond-
ingly giving measurement methods and management tools for the tourist expe-
rience [1,11,15].
festival tourism between the United States and Canada. Getz [7] and Hoyle [10]
studied the problem of festival tourism marketing, and Mayfield and Crompton
[13] discussed the marketing concept of festival organizers.
This shows that existing research mainly focuses on the influence of large
tourism festivals, particularly in terms of the relationship between festival
tourism and the city [6]. Chinese scholars studying tourism festival activities
focus on the following aspects: the impact on the destination, festival tourism
operation models, the present development situation and countermeasures, and
the perception of festival tourism by destination residents and tourists, etc.
The purpose of this study was to apply IPA analysis to festival tourist expe-
rience research. The Chengdu temple fair festival was selected as the empirical
research object, and the five most prominent factors of the cultural festival were
extracted. Random questionnaires were distributed to visitors of the target group
during the 2016 Chengdu temple fair, and SPSS 19.0 software and the IPA
analysis method were used to analyze tourist experience and satisfaction.
592 L. Li
This study first used the traditional IPA method to analyze the important fac-
tors of tourists' satisfaction. Then a correlation test between the importance and
satisfaction evaluations was carried out. Thirdly, the raw IPA data were trans-
formed in the way proposed by Deng [4,5]: extended importance was calculated
from the satisfaction evaluations, and each factor was analyzed through the
transformed data. Finally, the two analysis results were compared to draw cor-
responding conclusions and suggestions.
The specific steps of the research are as follows. In the first step, the mean
satisfaction value of the five elements was determined; then, from the importance
evaluation data, we calculated the percentage and the average value of importance.
We drew a horizontal and a vertical axis, where the horizontal axis represents
satisfaction and the dashed vertical axis represents importance. In the second
step, we located the 5 evaluation factors in the four quadrants according to
their actual average importance and satisfaction. In the third step, according
to the characteristics of the factors in the different quadrants, we put forward
the corresponding countermeasures and suggestions.
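As a sketch, the quadrant assignment described in these steps can be implemented as follows; the factor scores and labels here are illustrative only, not the survey data of this study:

```python
import numpy as np

def ipa_quadrants(importance, satisfaction, labels):
    """Classify evaluation factors into the four IPA quadrants.

    The quadrant midpoints are the mean importance and mean satisfaction
    across all factors, as in the steps described above.
    """
    imp = np.asarray(importance, dtype=float)
    sat = np.asarray(satisfaction, dtype=float)
    imp_mid, sat_mid = imp.mean(), sat.mean()
    names = {
        (True, True): "Keep up the good work",    # high importance, high satisfaction
        (True, False): "Concentrate here",        # high importance, low satisfaction
        (False, True): "Possible overkill",       # low importance, high satisfaction
        (False, False): "Low priority",           # low importance, low satisfaction
    }
    return {lab: names[(bool(i >= imp_mid), bool(s >= sat_mid))]
            for lab, i, s in zip(labels, imp, sat)}

# Illustrative scores only (not the questionnaire data from this study).
result = ipa_quadrants(
    importance=[4.5, 4.4, 3.6, 3.5, 3.4],
    satisfaction=[4.3, 4.2, 3.5, 4.1, 3.3],
    labels=["folk performances", "Three Kingdoms theme",
            "fashion innovation", "facilities", "ticket price"],
)
```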
In addition, in order to eliminate the correlation between the importance and
satisfaction factors, many scholars have suggested replacing self-stated importance
with extended importance. Both measures are analyzed in this paper. For the
indirectly derived importance score, following Wei-Jaw Deng, the natural logarithm
is computed from the partial correlation coefficients between overall satisfaction
and satisfaction with each element; this yields the extended importance score.
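A minimal sketch of one reading of this extended importance follows: the partial correlation of each element's satisfaction with overall satisfaction, controlling for the remaining elements. The natural-log transformation of Deng's revised IPA is omitted here for simplicity, and the data are synthetic:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation of x and y after removing the linear effect of controls."""
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

def extended_importance(attr_scores, overall):
    """Implicitly derived importance of each attribute: its partial
    correlation with overall satisfaction, controlling for the other
    attributes (one reading of the extended-importance idea)."""
    attr_scores = np.asarray(attr_scores, dtype=float)
    overall = np.asarray(overall, dtype=float)
    out = []
    for j in range(attr_scores.shape[1]):
        others = np.delete(attr_scores, j, axis=1)
        out.append(partial_corr(attr_scores[:, j], overall, others))
    return np.array(out)
```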
4 Research Results
4.1 Tourists’ Demographic and Tourism Characteristics
The demographic characteristics show that, of the 204 visitors polled, the male
to female ratio was essentially balanced: men accounted for 51.2% and women for
48.8%. Ages were concentrated in the 18 to 30 range, accounting for 75%.
Education was mostly at the college/university level (67.5%). Family income was
concentrated in the 3000–8000 yuan interval (62.3%). Slightly more tourists came
from outside Chengdu, at 59.3%. Please refer to Table 1 for the relevant
information.
This study also surveyed the tourists' travel characteristics, including travel
patterns, purpose of visit, and travel expenses. In view of the traditional
cultural nature of the festival, we also added visit frequency as a survey item.
The statistics show that first-time visitors accounted for 59.6%, but close to
half had visited many times; the relatively high revisit rate of the Chengdu
Temple Fair reflects, to a certain extent, the regional influence of this
cultural festival. Travel was mainly with family or with friends, accounting for
42.2% and 42.2% respectively. As for the purpose of the visit, relaxation
accounted for up to 66.7%, followed by the scenery at 46.6%. Travel expenses
(including tickets, transportation, catering and shopping, etc.) averaged about
300 yuan. The relevant information is shown in Table 2.
satisfaction as the longitudinal axis, and the mean of each dimension as the
midpoint of the four quadrants, as shown in Fig. 2. The 2 elements situated in
the first quadrant mean "Keep up the good work", i.e. both their satisfaction
and importance are high: element 1 "Traditional folk performances and
activities" and element 2 "The Three Kingdoms culture theme features". The
element situated in the second quadrant means "Possible overkill", i.e.
importance is low but satisfaction is high; element 4 "Supporting facilities and
services" is such an element. The 2 elements situated in the third quadrant
indicate "Low priority", i.e. both importance and satisfaction are low: element
3 "Fashion elements and innovation" and element 5 "Ticket price setting".
(2) On the basis of extended importance
According to the results of the questionnaire, the mean value of overall tourist
satisfaction with the Chengdu temple fair is 4.00. We then calculated the
extended importance of the various factors in accordance with the method of
Deng [4] (see Table 4).
The element situated in the third quadrant means "Low priority", implying that
its extended importance and satisfaction are both low: element 5 "Ticket price
setting". Element 3 "Fashion elements and innovation" is situated in the fourth
quadrant, with lower satisfaction but higher importance, which means
"Concentrate here".
(3) Comparison of the two kinds of IPA analysis conclusions
The results show that the conclusions of the two kinds of IPA analysis were
basically the same, with slight differences in two respects. Firstly, in the
ranking of extended importance, the "Three Kingdoms culture theme" exceeded
"Traditional folk performances and activities" to take first place, while the
extended importance of "Supporting facilities and services" fell below "Ticket
price setting", ranking at the bottom. Secondly, the IPA analysis diagram shows
that, under the extended importance analysis, the "Fashion elements and
innovation" element was situated in the fourth quadrant, which means
"Concentrate here".
From the conclusions of the two kinds of IPA analysis, it can be seen that the
overall tourist satisfaction with the Chengdu temple fair is good. The core
competitive elements are the traditional folk activities and the Three Kingdoms
culture theme features; tourists' attention to and satisfaction with both
elements are high, so we need to "Keep up the good work" on these two. The
evaluation of supporting facilities and services differed from that of
traditional scenic spots: tourists showed lower attention but higher
satisfaction, so this element was situated in the "Possible overkill" quadrant;
the length of the festival experience and the distance of the tourist source
region may be related reasons. Ticket price lies in the range of low attention
and low satisfaction, which reflects the low sensitivity to the present ticket
price. In addition, the "fashion and innovation elements" fall in different
quadrants under the different analysis methods. On the one hand, this reflects
that the innovation elements had a substantial impact on overall satisfaction;
on the other hand, it indirectly shows the high attention paid to traditional
culture. Visitors still hope the traditional festival activities can reflect
more of the pure original flavor and essence of traditional culture.
Acknowledgements. This study was supported by a grant from Higher School Cul-
tural and Social Science Fund of Sichuan Education Department (to DENG Qingnan)
(No. WHCY2016B02).
References
1. Arnould EJ, Price LL (1993) River magic: extraordinary experience and the
extended service encounter. J Consum Res 20(1):24–45
2. Bongkoo L, Shafer CS (2002) The dynamic nature of leisure experience: an appli-
cation of affect control theory. J Leisure Res 34(3):290–310
3. Chen X (2013) The modified importance-performance analysis method and its
application in tourist satisfaction research. Tourism Tribune 28(11):59–66
4. Deng WJ, Lee YC (2007) Applying Kano model and IPA to identify critical service
quality attributes for hot springs hotel in Peitou. J Qual 14(1):99–113
5. Deng WJ, Pei W (2007) The impact comparison of likert scale and fuzzy linguistic
scale on service quality assessment. Chung Hua J Manag 8(4):19–37
6. Fengying L, Lucang W (2007) Literature review of festival tourism research. For-
ward Position 7(8):33–35
7. Getz D (1997) Event management & event tourism. Ann Tourism Res 10:22–40
8. Haemoon O (2001) Revisiting importance-performance analysis. Tourism Manag
22(6):617–627
9. Hollenhorst S, Gardner L (1994) The indicator performance estimate approach to
determining acceptable wilderness conditions. Environ Manag 18(6):901–906
10. Hoyle LH (2002) Event marketing: how to successfully promote events, festivals,
conventions, and expositions. Wiley, New York
11. Manfredo MJ, Driver BL, Brown PJ (1983) A test of concepts inherent in expe-
rience based setting management for outdoor recreation areas. J Leisure Res
15(3):263–283
12. Martilla JA, James JC (1977) Importance-performance analysis. J Mark 41(1):77–
79
13. Mayfield TL, Crompton JL (1995) The status of the marketing concept among
festival organizers. J Travel Res 33(4):14–22
14. Meyer P (1970) Festival: USA and Canada. Ives Washbum, New York
15. Williams DR (1989) Great expectations and the limits to satisfaction: a review of
recreation and consumer satisfaction research. In: Outdoor Recreation Benchmark
1988: Proceedings of the National Outdoor Recreation Forum, pp 422–438
16. Yu Xy, Zhu GX, Qiu H (2006) Review on tourist experience and its research
methods. Tourism Tribune
How Cross Listing Effects Corporate
Performance: Measurement by Propensity Score
Matching
Yingkai Tang1 , Huang Huang1 , Hui Wang1 , Yue Liu2 , and Jinhua Xu3(B)
1
Business School, Sichuan University, Chengdu 610064, People’s Republic of China
2
Urban Vocational College of Sichuan, Chengdu 610064, People’s Republic of China
3
School of Management, Guangdong University of Technology,
Guangzhou 510520, People’s Republic of China
Xujinhua@gdut.edu.cn
1 Introduction
As economic globalization accelerates and the international capital market
becomes increasingly integrated, many Chinese companies listed overseas have
returned to the domestic A-share market and become cross-listed. Meanwhile, in
recent years, domestic and foreign scholars have paid more attention to the
change of corporate performance after cross listing. However, empirical results
have not confirmed that cross listing improves corporate performance. Instead,
foreign studies showed that after cross listing, corporate internal and external
performance declined [8,15,26].
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 49
Research on the change of the corporate internal environment after cross listing
showed that cross listing can improve stock liquidity, reduce agency costs
[6,7], relax financing constraints and reduce the
cost of financing constraints [22]. Past research on cross listing often focused
on the effect of cross listing on corporate performance, setting a dummy
variable for whether the company was cross-listed and running OLS regressions,
which ignores systematic bias; at the same time, there is selection bias between
the samples of cross-listed and non-cross-listed companies. No relevant research
on the impact mechanism of cross listing has been found.
This paper, based on a large sample from China's A-share market, uses Propensity
Score Matching (PSM) to examine the difference in operating performance between
cross-listed and non-cross-listed companies, effectively avoiding systematic
bias in the study of cross listing. It then uses a Structural Equation Model to
explore the impact mechanism of cross listing on operating performance.
The structure of this paper is as follows: the first section is the
introduction; the second section is the literature review; the third section
describes the research design; the fourth section presents the empirical
results; the last section gives conclusions and suggestions.
2 Literature Review
Jaffee and Russell [12] and Stiglitz and Weiss [27] pointed out that there is
information asymmetry in the credit market: companies may be able to complete
their projects, but banks do not know this. Most Chinese listed companies obtain
external financing from the credit market, where they encounter serious
financing constraints. Cross listing has therefore always been regarded as an
important signal that a company focuses on protecting investors, strictly
discloses information and standardizes corporate governance, which can improve
the company's public awareness and information environment [2], increase the
level of information disclosure in the market, and strengthen investor
protection [3,28], thereby reducing the degree of information asymmetry and
relaxing financing constraints. Lins et al. [18] showed that cross listing can
loosen corporate financing constraints, reduce financing costs, and reduce
dependence on internal cash; Doidge et al. [7] argued that, compared with
non-cross-listed companies, cross-listed companies can expand their financing
channels and prevent management from appropriating corporate resources; in
China, the empirical results of Pan and Dai [23] also showed that after cross
listing, corporate investment is much less sensitive to cash flow, so financing
constraints are effectively loosened. The empirical study of Liu [19] showed
that returning to the A-share market can ease corporate financing constraints to
a certain extent.
602 Y. Tang et al.
3 Research Design
3.1 Research Methodology and Procedures
P (X) = Pr [D = 1 |X ] = E [D |X ] . (1)
D is the treatment variable: the sample is divided into an A+H (treatment) group
and a control group, with D = 1 if a company is cross-listed and D = 0
otherwise; P is the probability that a company is A+H cross-listed, that is, the
propensity score; X is the set of factors influencing cross listing, that is,
the matching variables.
Then, following Dehejia, this paper uses a binary Logit regression model for
estimation:
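A minimal sketch of this two-stage procedure, a logit model for the propensity score as in Eq. (1) followed by nearest-neighbor matching, might look like the following; the data are synthetic and the variable names are illustrative, not the study's dataset:

```python
import numpy as np

def fit_logit(X, d, iters=25):
    """Estimate Pr(D=1|X) with a logit model via Newton-Raphson,
    mirroring Eq. (1): P(X) = Pr[D=1|X]."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        grad = Xb.T @ (d - p)                       # score vector
        H = (Xb * (p * (1 - p))[:, None]).T @ Xb    # observed information
        beta += np.linalg.solve(H + 1e-8 * np.eye(len(beta)), grad)
    return beta, 1.0 / (1.0 + np.exp(-Xb @ beta))

def nearest_neighbor_match(pscore, d):
    """For each treated unit (d=1), return the index of the control
    unit (d=0) with the closest propensity score (with replacement)."""
    treated = np.flatnonzero(d == 1)
    controls = np.flatnonzero(d == 0)
    return {i: controls[np.argmin(np.abs(pscore[controls] - pscore[i]))]
            for i in treated}
```

After matching, the treated and matched-control outcomes can be compared directly, which is the univariate comparison reported later in the paper.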
4 Empirical Results
4.1 The Screening of Matching Models
To make the matching effective, the regression results of five Logit models are
listed in Table 2. The results show that the larger a company's size, Tobin's Q
ratio or market value, the more willing it is to be cross-listed. The higher the
fixed asset ratio, the lower the company's current asset holdings and the higher
its demand for external financing; therefore, a higher fixed asset ratio also
makes a company more willing to cross-list. Higher ownership concentration
likewise makes a company more willing to cross-list. Although cross listing
undoubtedly dilutes the power of the controlling shareholders, examining the
change in the balance of shareholder power after cross listing, we find that
cross listing reduces the power of controlling shareholders but nearly doubles
the shareholding ratio of the second and third largest shareholders; as a
result, the ownership concentration index is significantly positive, indicating
that major shareholders other than the controlling shareholders have great power
to decide whether to cross-list. The L/A ratio and debt per share have a
negative effect on the tendency to cross-list. At the same time, state-owned
enterprises (SOEs) are more likely to choose cross listing: as Hung et al. [11]
found, because of China's special capital market environment, leaders of SOEs
try to promote their own reputation and benefit their political careers through
overseas listing. Meanwhile, the total asset growth rate has a negative effect
on the tendency to cross-list, because controlling shareholders of companies
with strong growth ability reject cross listing for fear of the dilution of
their power [5].
Following Lian et al. [17], and considering the significance of the variables in
the Logit regressions, this paper uses the Pseudo-R2 value and the AUC value
(the area under the ROC curve) to measure the fit of the Logit regression models
[29]. All Pseudo-R2 values of the 5 Logit regression models exceed 0.38, and all
AUC values exceed 0.9. When conducting PSM with Logit regression, an AUC value
above 0.8 indicates that the indicators of the regression model are suitable;
since all AUC values here exceed 0.9, the indicators are suitable. Considering
the Pseudo-R2 values and the significance of the variables, Model 4 is chosen as
the matching model.
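Both fit measures can be computed directly from the fitted probabilities. A minimal sketch follows, using McFadden's version of the pseudo-R2 and the rank formula for AUC (assuming no ties among the scores):

```python
import numpy as np

def mcfadden_r2(d, p):
    """McFadden pseudo-R2: 1 - logL(model) / logL(null)."""
    eps = 1e-12  # guard against log(0)
    ll = np.sum(d * np.log(p + eps) + (1 - d) * np.log(1 - p + eps))
    pbar = d.mean()  # the null model predicts the sample mean
    ll0 = np.sum(d * np.log(pbar) + (1 - d) * np.log(1 - pbar))
    return 1 - ll / ll0

def auc(d, p):
    """Area under the ROC curve via the rank (Mann-Whitney) formula:
    the probability that a randomly chosen treated unit is scored
    higher than a randomly chosen control unit. Assumes no ties in p."""
    order = np.argsort(p)
    ranks = np.empty(len(p))
    ranks[order] = np.arange(1, len(p) + 1)
    n1 = d.sum()
    n0 = len(d) - n1
    return (ranks[d == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
```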
After matching, the distribution curve of the propensity scores of the control
group shifts significantly to the right, and the difference in the probability
distribution of propensity scores between the two groups decreases
significantly, indicating that the matching significantly corrects the deviation
in the probability distribution of propensity scores; the matching is effective,
and the common support hypothesis is verified.
After the first phase of PSM, we match cross-listed companies with
non-cross-listed companies, and use univariate tests to examine their operating
performance and corporate governance. In Panel A, all samples in the treatment
group and the control group are compared, with the following results: after
cross listing in both the A- and H-share markets, the company's cash asset ratio
significantly decreases by 0.0406, and ownership concentration significantly
increases, indicating that after cross listing, major shareholders increase
their stakes to guard against the dilution of their power. The L/A ratio
increases by 0.0398, indicating that external debt financing is easier to obtain
after cross listing. Meanwhile, this paper chooses ROA and TBQ as proxy
variables for corporate value, both of which decrease by different degrees after
cross listing. In Panel B, the state-owned enterprise samples in the treatment
and control groups are compared; the results show that the cash asset ratio
decreases slightly at the 10% significance level, while none of TBQ,
ROA, ownership concentration and the L/A ratio passes the significance test. In
Panel C, the private enterprise samples in the treatment and control groups are
compared; the change trend of each indicator is in accordance with that in Panel
A, and all indicators are significant at the 1% level. However, after cross
listing, the ownership concentration and L/A ratio of private enterprises are
apparently higher than in Panel A, the cash asset ratio decreases more than in
Panel A, and ROA and TBQ also decrease more than in Panel A. This indicates
that after cross listing, the major shareholders of private enterprises have a
stronger awareness of the risk of ownership dispersion, are more willing to
increase their stakes, and are more prone to expand the ratio of debt financing
and use financial leverage to gain profits (Table 4).
Above all, cross listing worsens operating performance. This paper will explore
the impact mechanism of cross listing on operating performance through debt
structure, asset liquidity and ownership structure.
• Debt Structure. Changes in the financing environment certainly affect a
  company's asset structure, and thus its operating performance.
• Asset Liquidity. The mitigation of financing constraints after cross listing
  can change the company's cash holdings, and thus affect operating performance.
• Ownership Structure. After cross listing, the financing market is larger, the
  ownership structure changes and corporate governance improves, which in turn
  affects operating performance.
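The contribution of each path can be decomposed in the usual mediation way: the indirect effect along a path is the product of its two coefficients, and a path's contribution is its share of the total indirect effect. A minimal sketch with hypothetical coefficients (not the estimates reported in Fig. 3):

```python
# Hypothetical path coefficients for illustration only:
# a: cross listing -> mediator, b: mediator -> performance.
paths = {
    "debt structure": (0.40, -0.50),
    "asset liquidity": (-0.30, 0.35),
    "ownership structure": (0.20, -0.25),
}

# Indirect effect along each path is the product a * b; each path's
# contribution is its share of the total indirect effect.
indirect = {k: a * b for k, (a, b) in paths.items()}
total = sum(indirect.values())
share = {k: v / total for k, v in indirect.items()}
```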
(2) Path effect analysis
We use the Structural Equation Model with an iterative maximum likelihood scheme
on the conceptual model of Fig. 2; the path analysis results are shown in
Fig. 3. The probability of the whole model in the adaptability test is
P = 0.902 > 0.1, which fails the significance test, so the null hypothesis is
accepted, indicating that the theoretical model fits the sample data. The other
adaptability indicators are RMSEA = 0.000 < 0.05, CFI = 1.000, TLI = 1.025,
IFI = 1.002, RFI = 1.000 and NFI = 1.000, indicating that the fit of the model
is excellent.
1 The Effect of Cross Listing on Operating Performance
As shown in Fig. 3, cross listing has no significant direct effect on proxy
variables of operating performance ROA and TBQ. Results of significant path
effect are listed in Table 6.
corporate value after cross listing, path of cash flow, 34.0%, and path of owner-
ship, 14.4%. When TBQ is the proxy variable of corporate value, path of debt
structure contributes 65% to the decrease of corporate value, path of cash flow,
30.5%, and path of ownership, 4.5%, which is relatively lower (Table 7).
5 Conclusion
This paper discusses the development status of companies cross-listed in both
the A- and H-share markets in China, examines the effect of cross listing on
operating performance using Propensity Score Matching (PSM), and then uses a
Structural Equation Model and path analysis to explore the impact mechanism and
specific effect of cross listing on operating performance through the paths of
debt structure, asset liquidity and ownership structure. The results show:
(1) Compared with non-cross-listed companies, companies cross-listed in both the
A- and H-share markets have witnessed worse operating performance, a significant
increase in ownership concentration and in the liabilities-to-assets (L/A)
ratio, and a significant decrease in asset liquidity. (2) Cross listing has
affected operating performance mainly through debt structure and asset
liquidity, and less through ownership structure. The main contributions of this
paper are: (1) using Propensity Score Matching (PSM) to address the problem of
endogeneity in cross listing research; (2) further analyzing the impact
mechanism and specific effect of cross listing on corporate performance.
The suggestions are as follows. Firstly, A+H cross listing has not improved the
operating performance of Chinese companies, because most A+H cross-listed
companies choose to list in the H-share market first and then in the A-share
market; compared with the capital market in Hong Kong, the capital market in
mainland China is not mature enough, so after A+H cross listing, companies'
governance environment has not improved. The top priority is therefore to
improve the capital market system. Secondly, domestic listed companies should be
encouraged to go out into developed capital markets for financing, making use of
good markets in developed countries so as to improve the domestic market
indirectly. After cross listing, companies' ownership concentration and ratio of
debt financing significantly increase, indicating that debt financing becomes
more important and companies then face greater investment risk. As the debt path
is the main factor worsening operating performance, companies should adjust
their ratio of debt financing to a reasonable range in order to avoid excessive
leverage risk. After cross listing, with fewer financing constraints, companies
tend to reduce their cash holdings and then make blind investments; as the
decrease of asset liquidity is also a main factor reducing corporate
profitability, companies should hold appropriate liquid assets in case of need.
References
1. Aslan H, Kumar P (2016) Controlling shareholders, ownership structure and bank
loans
2. Baker H, Nofsinger J, Weaver D (2002) International cross-listing and visibility. J
Finan Quant Anal 37:495–521 (in Chinese)
3. Barzuza M (2006) Lemon signaling in cross-listing
4. Berger A, Di Patti E (2013) Computation of discharge through side sluice gate
using gene-expression programming. Irrig Drain 62:115–119 (in Chinese)
5. Busaba W, Guo L et al (2015) The dark side of cross-listing: a new perspective on
cross-listing. J Bank Finan 57:1–16 (in Chinese)
6. Coffee J (2002) Racing towards the top? The impact of cross-listings and stock
market competition on international corporate governance. Columbia Law Rev
102:1757–1831 (in Chinese)
7. Doidge C, Karolyi G, Stulz R (2004) Why are foreign firms that are listed in the
U.S. worth more. J Finan Econ 71:205–238 (in Chinese)
8. Durand R, Gunawan F, Tarca A (2006) Does cross-listing signal quality. J Contemp
Acc Econ 2:48–67 (in Chinese)
9. Fernandes N (2014) On the fortunes of stock exchanges and their reversals: evidence
from foreign listings. J Finan Intermediation 23:157–176 (in Chinese)
10. Gu N, Sun J (2009) Financing constraints, cash flow volatility and corporate pre-
cautionary cash holdings. J Bus Econ 4:73–81 (in Chinese)
11. Hung M, Wong T, Zhang T (2012) Political considerations in the decision of Chinese
SOEs to list in Hong Kong. J Acc Econ 53:435–449 (in Chinese)
12. Jaffee D, Russell T (1976) Imperfect information, uncertainty, and credit rationing.
Q J Econ 90:651–666 (in Chinese)
13. Jensen M (1986) Agency costs of free-cash-flow, corporate finance, and takeovers.
Am Econ Rev 76:323–329 (in Chinese)
14. Kalcheva I, Lins K (2007) International evidence on cash holdings and expected
managerial agency problems. Rev Finan Stud 20:1087–1112 (in Chinese)
15. King M, Segal D (2005) Are there longer horizon benefits to cross-listing untangling
the effects of investor recognition, trading and ownership
16. Kot H, Tam L (2016) Are stock price more informative after dual-listing in emerg-
ing markets evidence from Hong Kong-listed chinese companies. Pacific-Basin
Finan J 36:31–45 (in Chinese)
17. Lian Y, Su Z, Gu Y (2011) Evaluating the effects of equity incentives using PSM:
evidence from china. Front Bus Res China 5:266–290 (in Chinese)
18. Lins K, Strickland D, Zenner M (2005) Do non-US firms issue equity on us stock
exchanges to relax capital constraints? J Finan Quant Anal 40:109–133 (in Chinese)
19. Liu X, Tian M, Zhang C (2016) Has returning to the A-share market eased the corpo-
rate financing constraints? Based on the analysis of cash-cash flow sensitivity
20. Margaritis D, Psillaki M (2010) Capital structure, equity ownership and firm per-
formance. J Bank Finan 34:621–632 (in Chinese)
21. Myers S, Majluf N (1984) Corporate financing and investment decisions when firms
have information that investors do not have. J Finan Econ 13:187–221 (in Chinese)
22. Pagano M, Roel A, Zechner J (2002) The geography of equity listing: why do
companies list abroad? J Finan 57:2651–2694 (in Chinese)
23. Pan Y, Dai Y (2008) Dual listing and financing constraints-an empirical evidence
based on Chinese companies with 'A+H' dual-listing. China Ind Econ 5:139–149 (in
Chinese)
24. Porta R, Lopez-de Silanes F et al (2002) Investor protection and corporate valua-
tion. J Finan 57:1147–1170 (in Chinese)
25. Rosenbaum P, Rubin D (1983) The central role of the propensity score in obser-
vational studies for causal effects. Biometrika 70:41–55 (in Chinese)
26. Sarkissian S, Schill M (2009) Are there permanent valuation gains to overseas
listing. Rev Finan Stud 22:371–412 (in Chinese)
27. Stiglitz J, Weiss A (1981) Credit rationing and markets with imperfect information.
Am Econ Rev 71:393–410 (in Chinese)
28. Wojcik D, Clark G, Bauer R (2004) Corporate governance and cross-listing: evi-
dence from European companies
29. Zhang H, Liu J (2015) Does high tendency for cash holdings hurt corporate value
of private enterprises? A study based on financing constraint hypothesis or free
cash flow hypothesis. Macroeconomics 5:100–109 (in Chinese)
30. Zhang J, Cheng Z, Zhang J (2011) The impact of a cross-listing on the level and
the value of cash holdings-evidence from Chinese listed companies. J Shanxi Finan
Econ Univ 11:108–115 (in Chinese)
Factors Affecting Employee Motivation Towards
Employee Performance: A Study on Banking
Industry of Pakistan
1 Introduction
In the current business scenario, employees have become the primary strength of
any business: they deliver continuous effort to turn the organization's
decisions into action and accomplish its objectives, and the motivation of
employees is becoming part of organizational strategy. Motivation is a factor
that helps an individual to select or reject a job, and to continue and work
proficiently in that job [16].
616 A. Khan et al.
2 Literature Review
2.1 Motivation
Motivation is widely identified as one of the most influential predictors of
individual behavior and a key determinant of performance [22]. It is therefore
not surprising that motivation appears in a wide range of management papers
[15], and that scholars and executives take great interest in understanding
personal motivation to use social media and tools on behalf of companies [14].
Motivational factors (such as achievement, recognition, responsibility, effort,
advancement and personal development) relate to the content of the job, whereas
hygiene factors (such as company policy and administration, interpersonal
relations, working environment, pay, job security, status and benefits) relate
to the employment context [6]. Herzberg treated hygiene factors as extrinsic
features, which have an unsettling effect on staff attitudes toward the job and
eventually make them dissatisfied in their profession when these needs are not
sufficiently met. Motivational factors, in turn, are intrinsic features, which
make staff satisfied when their needs are met but do not make them dissatisfied
in their absence [21]. The study further affirms that motivation exists only
when an individual perceives a positive link from effort to work performance and
from work performance to incentives [6]. Moreover, goal-setting theory, which
emerged from the propositions of expectancy theory, views the goal as an
essential instrument, an immediate regulator of individual action [17] that
directs individuals toward attaining the objective; the goal influences
performance by directing attention, organizing effort, raising persistence and
inspiring planning. Earlier research [3] offers insight into an individual's
attitudes, prospects, plans and aims in life. Such research helps a business
understand its employees' objectives in both their professional and private
lives while identifying what is needed to stimulate them, and at the same time
appears to improve the participation, loyalty and commitment of staff to their
businesses. Clearly, understanding the factors that influence staff motivation
is a huge challenge for businesses, leaders and executives [23], especially
since people value different aims in life and may have diverse needs and wants,
values, ambitions, objectives and prospects [1]. Motivational theory attempts to
clarify the factors that have a direct or indirect influence on motivation and
business performance, such as worker incentives and other motivational factors
[23].
Further, it is important that staff are trained and know how to perform
job-related duties with the proper equipment, since inadequate use of tools can
result in accidents or deviations in performance no matter how suitable the
tools are. Individuals must also be trained in the good use of protective
equipment and in personal safety [4].
2.3 Benefits
Benefits can affect employee performance in many ways. Benefits are often not
subject to the same assessment as wages and are consequently cheaper for an
employer to obtain through the market [2]; cheaper benefits ought therefore to
amplify employee performance. Benefits can also act as a substitute for salary:
examining employer survey data, [2] established that employees accepted reduced
salaries once numerous benefits had been granted to them over a few years.
Employees thus view benefits and salary as substitutes, and are disposed to give
up salary in exchange for extra benefits. Although benefits form a significant
part of the employee compensation package, they have mostly been treated as
control variables in previous research rather than as the key object of
analysis.
Furthermore, in the literature, rewards shape the benefits that individuals
obtain from their work [11] and are an important part of a worker's job
attitudes, such as organizational commitment, motivation and employee
performance [12]. Thus, in any business, rewards play a significant role in
structuring and supporting the obligation between workers that guarantees
average job performance and employee loyalty. Under the exchange premise between
employee and organization, an employee enters the corporation with a specific
set of proficiencies directed at the required goals, and expects in return
decent working conditions in which to use those skills and attain the desired
goals [18]. Rewards enhance the effectiveness and efficiency of individuals in
their jobs and, as a result, improve organizational performance [18].
2.5 Recognition
“Recognition is the appreciation given to employees for their level of performance and for successes or contributions towards a goal. It can be private or public, informal or formal. It is always in addition to pay.” [20]. Employees need recognition: people like to share the celebration of their successes with others and want to be acknowledged within the organization. When this need is satisfied, recognition works as an excellent motivator. However, if employers rely on pay alone to recognize contribution and success, it is most probable that employees' goals will narrow to protecting their pay and nothing more, which leads to a tainted organizational culture. Used correctly, recognition is a cost-effective way of increasing success that allows employees to feel involved in the corporate culture [20].
3 Problem Statement
Today, banks face the problem of low employee productivity in all sectors. In the banking industry in particular, poorly motivated employees are numerous because of the nature of the work. This study seeks to establish the relationship among the factors benefits, job environment, empowerment, and recognition, and to identify which factor best enhances employees' motivational level. The study investigates the impact of these factors on employees' motivation in their jobs and, in turn, the effect on their morale and individual performance in banks.
• To identify the motivational factors that best support individuals' intention to perform.
• Which factor most strongly motivates employees in the banking industry?
• Do the motivational factors enhance individual performance?
620 A. Khan et al.
4 Hypothesis
4.1 Motivational Factors
5 Methodology
part of my study. The questionnaire is the primary tool used to gather data for my dissertation. Human behaviour is coherent and rests on reasons, yet each individual has his or her own explanation for the specific benefit obtained from a given area and its services, even while exploiting certain personal skills; this is the key reason for choosing the questionnaire as the primary tool of data collection.
References
1. Ahmad K, Fontaine R (2011) Management from Islamic perspective. Pearson Malaysia Sdn Bhd (Pearson custom publishing)
2. Baughman R, Dinardi D, Holtz-Eakin D (2003) Productivity and wage effects of
“family-friendly” fringe benefits. Int J Manpow 24(3):247–259
3. Bhatti K (2015) Impact of Islamic piety on workplace deviance. PhD thesis, Inter-
national Islamic University Kuala Lumpur, Malaysia
4. Bratton J (2012) Human resource management: theory and practice. Palgrave
Macmillan, Basingstoke
5. Fetterman DM, Wandersman A (2005) Empowerment evaluation principles in prac-
tice. Eval Pract 15:1–15
6. Griffin RW (2012) Management eleventh edition. Cengage Learning, Hampshire
7. Hui MK, Au K, Fock H (2004) Reactions of service employees to organization-
customer conflict: a cross-cultural comparison. Int J Res Mark 21(2):107–121
8. Igalens J, Roussel P (1999) A study of the relationships between compensation
package, work motivation and job satisfaction. J Organ Behav 20(7):1003–1025
9. Jones G, George J (2011) Essentials of contemporary management 11(2):81–86
10. Jones PH, Chisalita C, Veer GCVD (2005) Cognition, technology, and work: spe-
cial issue on collaboration in context: cognitive and organizational artefacts. Cogn
Technol Work 7(2):70–75
11. Kalleberg AL (1977) Work values and job rewards: a theory of job satisfaction.
Am Sociol Rev 42(1):124
12. Kressler H (2004) Motivate and reward: performance appraisal and incentive sys-
tems for business success. Palgrave Macmillan, New York
13. Lai C (2009) Motivating employees through incentive programs. Jyvaskyla Univer-
sity of Applied Sciences
14. Leftheriotis I, Giannakos MN (2014) Using social media for work: losing your time
or improving your work? Comput Hum Behav 31(31):134–142
15. Levin MA, Hansen JM (2008) Clicking to learn or learning to click: a theoretical
and empirical investigation. College Stud J 42:665–674
16. Lin PY (2008) The correlation between management and employee motivation in
sasol polypropylene business, South Africa. Polypropylene Business
17. Locke EA, Latham GP (1991) A theory of goal setting and task performance. Acad
Manag Rev 15:367–368
18. Mottaz CJ (1988) Determinants of organizational commitment. Hum Relat
41(6):467–482
19. Mullins LJ (2011) Management and organisational behaviour. Oxford University
Press, Oxford
20. Robbins SP (2005) Organizational behavior. Prentice Hall, Upper Saddle River
21. Simons T, Enz CA (1995) Motivating hotel employees: beyond the carrot and the
stick. Cornell Hospitality Q 36(1):20–27
22. Steers RM, Shapiro DL (2004) The future of work motivation theory. Acad Manag
Rev 29(3):379–387
23. Sulaiman M, Ahmad K (2014) The perspective of Muslim employees towards moti-
vation and career success
24. Wang Y (2004) Observations on the organizational commitment of Chinese employ-
ees: comparative studies of state-owned enterprises and foreign-invested enter-
prises. Int J Hum Resour Manag 15(4–5):649–669
25. Wong S, Siu V, Tsang N (1999) The impact of demographic factors on Hong Kong hotel employees’ choice of job-related motivators. Int J Contemp Hospitality Manag 11(5):230–242
How Content Marketing Can Help the Bank
Industrial: Experience from Iran
1 Introduction
Content marketing is the process of creating high-quality, valuable content to
attract, inform, and engage an audience, while also promoting the brand itself.
Content marketing is a kind of marketing that uses words, sounds, and photos to improve customers' knowledge and to introduce brands and new products. Content marketing is about giving away information to build relationships and earn trust, but gating some of your best content is an acceptable and valuable practice. In fact, this is not its only benefit: content marketing also costs less than other types of marketing. Content marketing is a new topic in Iran. Previously, the banking industry simply tried to give better services to customers in the hope of introducing their brands. We
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 51
How Content Marketing Can Help the Bank Industrial 627
are trying to help the banking industry use content marketing to increase its customers. Content marketing is comparable to what media companies do as their core business, except that in place of paid content or sponsorship as a measure of success, brands define success by ultimately selling more products or services [13]. Content marketing is the marketing and business process for creating and distributing relevant and valuable content to attract, acquire, and engage a clearly defined and understood target audience, with the objective of driving profitable customer action [10]. “Strong brands are based on a story that communicates who the company is, and what you really are” [2]. Therefore, content marketing should be based on the company's values.
Also, beyond the quality of the content, which is the most important part of digital marketing, the choice of the frequency of promotion and of the right social media plays a significant role in the success of a content marketing campaign. When building such a strategy, it is important to always keep in mind all the social business strategy factors. All content has to work together; all the groups need to work together. There are six essential components of content marketing strategy: creation, curation, optimization, social media, amplification, and analysis [1]. Content marketing aids brand recognition, trust, authority, credibility, loyalty, and authenticity. It can help accomplish these tasks for a variety of constituencies and on several levels: for the organization it represents, for a company's products and services, and for the employees who represent the business or service [8]. In the development of a content marketing strategy, there are numerous opportunities to be more relevant and effective by planning content that is meaningful to the customers you are trying to engage [9]. Content marketing is anything an individual or an organization creates and/or shares to tell their story. What it isn't: a warmed-over press release
served as a blog post. It is conversational, human and doesn’t try to constantly
sell to you. It also isn't a tactic that you can just turn on and off in the hope that it will be successful; it has to be a mindset that is embraced and encouraged. You've got to start thinking like a publisher and use that mindset to plan and execute your entire marketing plan, of which content of any variety should be a part [3].
Making professionally produced creative content available online is proving to
be a high-risk business, because of market fragmentation, high development and
production costs and the need to fund as yet unprofitable new services from the
declining revenue streams of “traditional” analogue and physical distribution
[12]. The marketing concept is the marketing management philosophy which holds that achieving organizational goals depends on determining the needs and wants of target markets and delivering the desired level of satisfaction more effectively and efficiently than competitors [7]. The forms of content marketing are constantly changing as
new tools to create, publish and share that content are launched and others are
shut down. Enhancements and new functionality are added to content publishing
tools every day, which means the tools you are using to create, publish and share
content today might not be the tools you are using tomorrow [5]. The paper has
two main research objectives: (1) Study the influential factors on content mar-
keting (2) Study the influential factors on content marketing in Bank Mellat.
628 S. Zomorodian and Y. Lu
The paper is structured as follows: a literature review shows how our hypotheses derive from previous studies; the research methodology describes our survey of the elements; the results explain why we accept the hypotheses; and the discussion presents the final results and gives suggestions for future research.
2 Literature Review
In today's market, content marketing can solve some problems for companies. In Iran this kind of marketing is new, and in this article we try to show how it can help the banking industry attract more customers. We therefore draw on previous studies to find elements related to content marketing, use a questionnaire to collect data from customers, analyze the data, and form our hypotheses. A second questionnaire is then used to collect the data with which the hypotheses are accepted or rejected. Content marketing is defined
as a marketing process of creating and properly distributing the content in order
to attract, make communication with, and understand other people so that they
can be motivated to do beneficial activities [4]. More specific solutions are out-
lined in content marketing strategies ‘where I focus on quality of the content,
its creation, distribution and evaluation’ [1]. In addition to monitoring mentions
and share, engaging with people who responded to the content can be a very
powerful way to spread your reach and to connect with potential prospects or
industry stakeholders [10]. The explanation is that “When a brand uses specific
words or stories that resonate with a consumer, they can dig deeper into who
they are as a consumer. By utilizing content marketing, brands can cater campaigns and stories around buying patterns and personalities” [2]. Based on these studies we form this hypothesis: quality of content has an effect on bank customers. To be relevant to your audience and create a powerful brand
you must win their trust and admiration. With the creation of valuable con-
tent you build interest that transforms into lasting relationships [11]. Due to
the characteristics of these emerging technologies, the digital content market is
growing rapidly and traditional content providers face service transformation
decisions. While a majority of the previous technology adoption studies have
focused on the viewpoints of users and customers, cost reduction, or electronic
channel related technologies, in this research we analyze the emerging technol-
ogy adoption decisions of competing firms for providing new content services
from a strategic perspective. Utilizing game theoretical models, we examine the
effects of market environments (technology cost, channel cannibalization, brand
power, brand extension, information asymmetry and market uncertainty) on
firms’ adoption decisions [6]. Based on these studies we form this hypothesis: sensitivity of content has an effect on bank customers. However, by implicitly restricting its focus to a buyer’s perspective, the resource-based and
relational views also leave the question of which resources and competencies add
greater value-for-customer largely unanswered [14]. According to some authors,
the contribution of different benefits to a relationship’s value will vary along the
relationship life-cycle. Specifically, the supplier’s know-how of the supply mar-
ket, the adaptation and improvement of extant products and the development
3 Research Methodology
To find the elements of content marketing relevant to the banking industry, we took elements from previous studies and administered a questionnaire in different branches of Mellat Bank; after the customers answered, we analyzed the data. The questionnaire consists of 40 questions divided into two sections. The first, consisting of 4 multiple-choice questions, collects general information about the sample (gender, age, years of service, and education level). The second, whose reliability is measured by Cronbach's alpha, asks:
• Does quality of content have an effect on banking customers?
• Does sensitive content have an effect on banking customers?
• Does new content have an effect on banking customers?
The area of this research is Tehran, the capital of Iran. The statistical population is the customers of Mellat Bank in 2016. A five-point scale from “very large impact” to “very little impact” is used to rate the items in the questionnaire. A total of 370 questionnaires were collected, as shown in Table 1.
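As a rough illustration of the reliability step, the sketch below computes Cronbach's alpha on synthetic five-point responses; the data, the item count, and the random seed are placeholders, not the actual Mellat Bank sample.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of
    Likert answers (here 1..5, 'very little' to 'very large' impact)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder for the 370 collected questionnaires: a shared tendency per
# respondent plus per-item noise, clipped back to the 1..5 scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(370, 1))
answers = np.clip(base + rng.integers(-1, 2, size=(370, 36)), 1, 5)
alpha = cronbach_alpha(answers)
```

Because the simulated items share a common underlying tendency, the resulting alpha is high, the kind of value that justifies treating a scale as internally consistent.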
4 Result
In the first analysis the factor analysis was exploratory, and in the second it was confirmatory. Confirmatory factor analysis tests whether certain hidden factors lie behind the observed variables; the expected factor structure is specified in advance, and the researchers test hypotheses about that particular structure. In this analysis we tried to describe and explain the empirical data with a model that assumes a relatively small number of parameters. The reliability statistics are shown in Table 2, and the KMO and Bartlett's test results in Table 3. SPSS was used for the exploratory factor analysis of the data and AMOS for the confirmatory analysis; the result is shown in Fig. 1.
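For readers without SPSS, the exploratory step can be approximated by an eigenvalue check on the item correlation matrix (the Kaiser criterion, an assumption here, since the paper does not state its extraction rule). The sketch uses simulated data with three planted factors; it does not reproduce the paper's sample or AMOS model.

```python
import numpy as np

# Simulated responses with three planted latent factors (e.g. quality,
# sensitivity, and novelty of content), four questionnaire items each.
rng = np.random.default_rng(1)
latent = rng.normal(size=(370, 3))
loadings = np.kron(np.eye(3), np.ones((1, 4)))   # 12 items, 4 per factor
items = latent @ loadings + 0.5 * rng.normal(size=(370, 12))

# Kaiser criterion: retain factors whose correlation-matrix eigenvalue > 1.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]         # descending order
n_factors = int((eigvals > 1.0).sum())           # recovers the 3 factors
```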
5 Discussion
In content marketing, customers are not presented with a direct sales proposal; instead they receive other information that is useful to them, and in the end they come to trust the content provider. Content marketing can also be used together with other kinds of marketing.
In this paper, we examined customer opinion to analyze elements that can make content marketing effective. In Iran this kind of marketing is new, so we try to show the banking industry that it can be used to improve business.
Our results show that three elements are related to content marketing: sensitive content, quality of content, and new content. In conclusion, content marketing can be very effective in today's marketing because it costs less than other marketing methods. In Iran's banking industry, where many banks compete and brand diversity is very high, customers are often in doubt when choosing a bank.
The simplest and lowest-cost way to attract customers is to create content that informs them of the bank's services and of the bank's competitive advantages over other banks. In doing so, care must be taken that the content remains purely informational and does not try too directly to attract customers, because a direct approach can backfire and fail to attract them.
References
1. Augustini M (2014) Social media and content marketing as a part of an effective online marketing strategy. Diploma thesis, Masaryk University, Faculty of Informatics, Brno
2. Baltes LP (2015) Content marketing-the fundamental tool of digital marketing.
Bull Transilvania Univ Brasov Econ Sci Ser V 8(2):111
3. Elisa R, Gordini N (2014) Content marketing metrics: theoretical aspects and
empirical evidence. Eur Sci J 10(34):92–104
1 Introduction
Entrepreneurial activities have shown increasingly close relationship with job
creation in both developed and developing countries, and have become a strong
driving force to world’s economic development. Meanwhile, entrepreneurship has
occupied a prominent position on the agendas of policymakers and researchers
[14]. In complex environments, Triple Helix has been operationalized in differ-
ent ways, spaces, and contexts where those agents are transforming their roles
in the development and strengthening of national innovation and entrepreneurial ecosystems [3]. Unfortunately, at an early stage, failure always accompanies entrepreneurial projects. According to the 2015 annual report of GEM (Global Entrepreneurship Monitor), the success of the 20%–30% of startups in China comes at the cost of a 70%–80% failure rate. Even in developed countries, 35% of startups fail in their first year, and almost half disappear within five years [15]. Numerous reasons exist, but the main one is that the entrepreneur started with an inappropriate project. For instance, Nabil provided a starting point for a stronger
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 52
The Establishment and Application of AHP-BP Neural Network Model 635
(1) Goal oriented principles. The index system should be designed according
to the needs of entrepreneurs, and keep in line with policy guidance, and
it should be able to evaluate the market prospects of the entrepreneurial
project.
(2) Scientific principle. Evaluation indicators must be able to reflect and evaluate
the entrepreneurial projects correctly.
(3) Systematic principle. The indicators should be able to reflect the horizontal
and vertical relationship between different levels.
(4) Operational principles. The selected indexes should be easy to apply in practice. The index system should be simple, and able to provide both qualitative and quantitative evaluation of projects.
According to the principles above, the factors that influence the normal operation of an enterprise are classified into several parts. By sending questionnaires, interviewing experts, and referring to the literature at home and abroad [2,17,20,22], the framework is subdivided, supplemented, and pruned in detail. Finally, the index system is built, as shown in Table 1.
636 K. Wu and X. Li
The connection weight between input node i and hidden node j is wij , and the connection weight between hidden node j and output node k is vkj ; the threshold of hidden layer node j is θj , and the threshold of output layer node k is θk . The model is provided with N learning samples (XP , YP ) (p = 1, 2, · · · , N ). XP = (xp0 , xp1 , · · · , xp(n−1) )T is the input vector for learning sample P , and YP = (yp0 , yp1 , · · · , yp(n−1) )T is the ideal output vector for learning sample P .
In the input layer, let the input equal the output, that is:
Opi = xpi (i = 0, 1, · · · , n − 1). (1)
The operating characteristic of a middle (hidden) layer node is:
netpj = Σ_{i=0}^{n−1} wji Opi − θj , (2)
Opi , Opj , and Opk are the outputs of the input layer, hidden layer, and output layer nodes, respectively. netpj and netpk are the inputs of the hidden layer and output layer, respectively. The sigmoid function is used as the transfer function f (·): f (x) = 1/(1 + e−x ).
For convenience, in the Eqs. (2), (4), let:
wjn = −θj , Opn = 1, vkr = −θk , Opr = 1.
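Under these definitions, a single forward pass of the network can be sketched as follows. The 15-1-2 shape matches the empirical study later in the paper, but the weights and input here are random placeholders, not trained or real values.

```python
import numpy as np

def sigmoid(x):
    # Transfer function f(x) = 1 / (1 + e^(-x)) from the text.
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w, theta_hidden, v, theta_out):
    """One forward pass: the input layer echoes x (Eq. (1)); each hidden and
    output node applies f to its weighted input minus its threshold."""
    o_hidden = sigmoid(w @ x - theta_hidden)  # Eq. (2), then f(net_pj)
    return sigmoid(v @ o_hidden - theta_out)

# A 15-1-2 network: 15 inputs, 1 hidden node, 2 outputs.
rng = np.random.default_rng(2)
w = rng.uniform(-1.0, 1.0, size=(1, 15))
v = rng.uniform(-1.0, 1.0, size=(2, 1))
out = forward(rng.uniform(0.0, 1.0, size=15), w, np.zeros(1), v, np.zeros(2))
```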
The learning process over the input and output samples in the traditional BP network adjusts the network weights according to the error between the actual output and the sample output; it is in fact a nonlinear optimization problem solved with gradient descent, the most common method in mathematical programming:
w(n + 1) = w(n) − b · ∂E/∂w ,    v(n + 1) = v(n) − b · ∂E/∂v .
In the above equations, b is the learning rate. In the traditional BP algorithm b is constant, and the value of b directly affects the convergence and convergence rate of the algorithm.
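A minimal sketch of this update rule on a toy error surface; the momentum coefficient a (which Step 1 initializes) is an extension beyond the plain rule shown above, and the quadratic error function is illustrative, not the network's actual error.

```python
def sgd_step(param, grad, velocity, b=0.1, a=0.1):
    """One update w(n+1) = w(n) - b * dE/dw, with a momentum term a
    folded in (an assumption; the text lists a but shows the plain rule)."""
    velocity = a * velocity - b * grad
    return param + velocity, velocity

# Minimise E(w) = (w - 3)^2 as a stand-in for the network error surface.
w, vel = 0.0, 0.0
for _ in range(100):
    grad = 2.0 * (w - 3.0)          # dE/dw
    w, vel = sgd_step(w, grad, vel)
```

With a stable learning rate the iterate settles at the minimum w = 3; too large a b would make the same loop diverge, which is the sensitivity the text describes.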
Step 1. Based on the actual problem, set the number of input and output layer nodes and initialize the parameters, including the learning accuracy ε, the prescribed number of iterative steps M0 , the upper limit of hidden layer nodes R, the learning parameter b, and the momentum coefficient a; the initial number of hidden nodes is 1. The admissible region H of the initial weights is divided into N equal parts β1 , β2 , · · · , βN .
Step 2. Input the learning samples and scale the sample parameter values into [0, 1].
Step 3. Give random numbers between −1 and 1 to the initial weighting matrices βi (βi ⊂ H) (i = 1, 2, · · · , N ).
Step 4. Train the network with the traditional BP method.
Step 5. Judge whether the number of iterative steps exceeds M0 . If yes, continue; if no, turn to Step 7.
Step 6. Judge whether r + 1 exceeds the upper limit R. If yes, end; if no, turn to Step 3.
Step 7. Determine whether the learning accuracy meets the requirement. If yes, continue; if no, turn to Step 4.
Step 8. Call the deletion-and-merger algorithm for hidden nodes. If there is a deletion or merger, turn to Step 4; otherwise, the algorithm is complete.
i (i = 1, 2, · · · , 9) stands for the 9 experts, and vui stands for the grade given by expert i. u1 ∈ [0, 1] stands for “poor”, u2 ∈ (1, 2] for “bad”, and so on up to u5 ∈ (4, 5] for “excellent”.
(1) Input Layer. First, evaluation values are assigned to the indexes selected by the AHP method, and these are used as the input layer variables of the neural network. From what has been stated above, there are 15 input layer variables in the neural network.
(2) Hidden Layer. According to the improved BP learning algorithm proposed by Tian Jingwen et al. [16], the number of hidden layer nodes is initially set to 1; the network then learns by itself until the appropriate number of nodes is reached.
(3) Output Layer. The comments for entrepreneurial project evaluation are defined as two categories, “choose it” and “give it up”, which can be represented by the output vectors (1, 0) and (0, 1). So the number of output layer nodes is 2.
AHP and BP neural network methods are combined to create the BP-AHP
model of EPS, and the basic steps of the algorithm are as follows:
Step 1. Simplify the evaluation index of EPS; the important indexes are selected as the input layer of the neural network.
Step 2. Set the number of neural network output layer nodes and initialize the network parameters (including the given learning accuracy ε, the prescribed iteration number M0 , the upper limit of hidden nodes r, and the learning parameter η; the initial number of hidden nodes is set to 1).
Step 3. Input the learning samples and scale the sample parameter values into [0, 1].
Step 4. Give random numbers between −1 and 1 to the initial weighting matrix.
Step 5. Train the network with the modified BP method, and determine the weight matrix between each pair of layers.
Step 6. Judge whether the number of iterative steps exceeds the prescribed limit. If yes, end; if no, go back to Step 5 and continue learning.
Step 7. Collect the values of the indexes and scale them into [0, 1].
Step 8. Input the processed data into the trained BP neural network and get the output.
Step 9. According to the output results, combined with the review set of entrepreneurial project indexes, choose the right entrepreneurial projects.
5 Empirical Research
We chose 21 entrepreneurial cases from Sichuan University over the past 3 years, of which 10 projects were successful and 11 ended in failure. The BP-AHP model is used to evaluate the selected entrepreneurial projects.
The cases are: computer training (Q1 ), online digital products (Q2 ), development
and production of management software (Q3 ), computer DIY and maintenance
(Q4 ), website construction and service (Q5 ), computer cleaning services (Q6 ),
water saving car washing (Q7 ), reward supervision social software (Q8 ), online
customized travel (Q9 ), advertisement designing (Q10 ), anime 3D printing (Q11 ),
a beverage shop (Q12 ), fruit fast delivery (Q13 ), creative souvenirs design (Q14 ),
fast dry cleaning service (Q15 ), household service (Q16 ), cozy cafe (Q17 ), creative
home decoration (Q18 ), children’s art training (Q19 ), architectural design (Q20 ),
online second hand market (Q21 ). Q1 , Q2 , Q3 , Q5 , Q6 , Q10 , Q12 , Q16 , Q17 , Q19 ,
Q21 failed in the first half year; the others operated smoothly. We used the first 16 entrepreneurial projects as training samples for the BP-AHP model and the last 5 as prediction (evaluation) samples.
vk = (1/9) Σ_{i=1}^{9} vui .
The output for successful projects is (1, 0), and for failed projects (0, 1). The BP neural network architecture is 15-1-2 (15 input layer nodes, 1 hidden layer node, 2 output layer nodes). We initialize the network (learning accuracy ε = 0.0002, learning rate η = 0.5, inertia parameter a = 0.1), scale xk (k = 1, 2, · · · , 15) into [0, 1] by dividing by 10, and input the processed data into the BP network model. The network is then trained with the modified BP learning algorithm, and its architecture becomes 15-9-2. At the same time, we obtain the optimized network weight matrix.
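The preprocessing just described, averaging the nine expert grades and then scaling into [0, 1] by dividing by 10, can be sketched as follows; the grades are illustrative values, not the study's data.

```python
import numpy as np

# Grades v_ui in [0, 5] from the 9 experts for one index (illustrative).
grades = np.array([4.5, 4.0, 3.5, 4.0, 4.5, 3.0, 4.0, 4.5, 4.0])
v_k = grades.mean()      # v_k = (1/9) * sum over the 9 experts
x_k = v_k / 10.0         # scale into [0, 1] before feeding the network
```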
Now it is time to evaluate Q17 , Q18 , Q19 , Q20 , and Q21 with the trained neural network. Assign values to the indexes xk of the 5 entrepreneurial projects, and scale xk into [0, 1] by dividing by 10. Input the processed data into the trained
BP neural network to obtain the outputs shown in the second row of Table 2. According to the principle of maximum membership degree, we can determine the evaluation result of each project; see the third row of Table 2.
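The maximum-membership decision amounts to taking the larger of the two output activations; a sketch with made-up network outputs (not the values in Table 2):

```python
import numpy as np

# Made-up outputs of the trained 15-9-2 network for five held-out projects.
outputs = np.array([[0.93, 0.08],
                    [0.11, 0.90],
                    [0.88, 0.15],
                    [0.05, 0.96],
                    [0.07, 0.91]])

labels = ["choose it", "give it up"]
# Maximum membership degree: the larger activation decides the verdict.
verdicts = [labels[int(np.argmax(o))] for o in outputs]
```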
As can be seen from Table 2, the projects whose evaluation is “give it up” failed in actual operation, while the “choose it” ones succeeded. The predictions of the BP neural network accord completely with the practical outcomes, which indicates that this risk early-warning model is feasible and effective.
6 Conclusions
EPS is a comprehensive prediction problem affected by many factors, so it is important to choose an appropriate evaluation method. According to the characteristics of entrepreneurial projects, this paper analyzes the factors that affect EPS and constructs an evaluation model of EPS based on AHP and a BP neural network, combining subjective and objective analysis. This model can not only extract the main attributes of entrepreneurial projects and reduce the input variables of the neural network, but also reduce the complexity of the network and its training time while improving its generalization, reasoning, and classification abilities. The model is therefore feasible and effective, and it has important reference value in guiding entrepreneurs to choose the right project.
References
1. Brennan MJ, Schwartz ES (1985) Evaluating natural resource investments. J Bus
58(2):135–157
2. Deng Z, Yang P, Yang X (2015) The way of college students’ entrepreneurial project
selection. China Manag Informationization 18(21):235–236 (in Chinese)
3. Guerrero M, Urbano D (2016) The impact of triple helix agents on entrepreneurial
innovations’ performance: an inside look at enterprises located in an emerging
economy. Technol Forecast Soc Change
4. Hagan MT, Demuth HB, Beale M (1996) Neural Network Design. PWS Publishing
Company, Thomson Learning, Boston
5. He C, Li X, Yu H (2002) The new improvement of BP neural network and its application. Math Pract Theory 32(4):555–561
6. Jain R, Ali SW (2013) A review of facilitators, barriers and gateways to entrepre-
neurship: direction for future research. South Asian J Manage 20:122–163
7. Khelil N (2015) The many faces of entrepreneurial failure: insights from an empir-
ical taxonomy. J Bus Ventur 31(1):72–94
8. Kumaraswamy MM, Rahman MM (2002) Risk management trends in the construc-
tion industry: moving towards joint risk management. Eng Constr Architectural
Manage 9(2):131–151
9. Lockett G, Stratford M et al (1986) Modelling a research portfolio using ahp-a
group decision process. R&D Manage 16(2):151–160
10. Lu X (2010) The main strategies for college students to choose the entrepreneurial
projects. Innov Entrepreneurship Educ 6:21–23 (in Chinese)
11. Nielsen SL, Lassen AH (2011) Images of entrepreneurship: towards a new catego-
rization of entrepreneurship. Int Entrepreneurship Manage J 8(1):35–53
12. Pena I (2002) Intellectual capital and business venture success. J Intellect Capital 3:180–198
13. Ren P, Xu Z, Liao H (2016) Intuitionistic multiplicative analytic hierarchy process
in group decision making. Comput Ind Eng 101:513–524
14. Schelfhout W, Bruggeman K, Maeyer SD (2016) Evaluation of entrepreneurial
competence through scaled behavioural indicators: validation of an instrument.
Stud Educ Eval 51:29–41
15. Schwab K (2015) The Global Competitiveness Report, 2014–2015. World Economic
Forum, Geneva
16. Tian J, Gao M (2006) Artificial Neural Network Algorithm Research and Applica-
tion. Beijing Institute of Technology Press (in Chinese)
17. Urban B, Nikolov K (2013) Sustainable corporate entrepreneurship initiatives: a
risk and reward analysis. Technol Econ Dev Econ 19:383–408
18. Wen L, Li L, Liao S (2013) Analysis on influencing factors of college students’
entrepreneurial project selection. Project Manage Technol 11(6):89–92 (in Chinese)
19. Xie Q, Yuan X (2006) A review of the evaluation system of venture investment
projects. Sci Technol Manage Res 8:182–187 (in Chinese)
20. Yan Y (2011) Application of fuzzy comprehensive evaluation method in the
selection of college students’ entrepreneurial projects. Project Manage Technol
9(12):51–55 (in Chinese)
21. Zhang N, Yan P (1998) Neural Networks and Fuzzy Control, 1st edn. Tsinghua
University Press, Beijing (in Chinese)
22. Zhu Y (2014) Selection of college students’ entrepreneurial projects from the per-
spective of the long tail theory. Innov Entrepreneurship Manage 5(6):52–54 (in
Chinese)
Improving the User Experience and Virality
of Tourism-Related Facebook Pages
1 Introduction
Along with the spread of smartphones, the number of people using social network services (SNSs) has been increasing. The number of monthly active users of Facebook worldwide (those who use Facebook at least once per month) reached 1.79 billion as of 30 September 2016 [1]. In addition, the number of monthly active users is reported to be 0.6 billion for Instagram (as of 15 December 2016) [4], 0.313 billion for Twitter (as of 30 June 2016) [12], and 0.2184 billion for Line (as of 31 March 2016) [6]. According to the 2015 White Paper on Information and Communications in Japan published by the Ministry of Internal Affairs and Communications [5], the SNSs used in Japan in the past year include Line (37.5%), Facebook (35.3%), and Twitter (31.0%). The number of people using SNSs generally tends to decrease with age: approximately 50% of people aged 20 or younger, less than 40% of people in their 30s and 40s, and more than 20% of people aged 60 or older use Facebook. Thus, Facebook is used by people across a wide range of age groups.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference on Management Science and Engineering Management, Lecture Notes on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_53

With this background, a variety of research projects on Facebook have been carried out. Wilson et al. classified 412 sociology papers on Facebook into five
categories: descriptive analysis of users, motivations for using Facebook, identity
presentation, the role of Facebook in social interaction, and private information
disclosure [13].
Recently, both the number of companies that use Facebook Pages to advertise
their activities and the number of organizations that use them to disseminate
information have increased. Ohara et al. clarified the characteristics of photos
that increase the responses of users of Facebook Pages of fast-food companies
by principal component analysis [8]. In tourism, disseminating local information
on Facebook Pages and increasing the attractiveness of pages are expected to
lead to the promotion of regional tourism industries. By multiple regression
analysis, Sabate et al. investigated the factors that increase the attractiveness of
Facebook Fan Pages (currently, Facebook Pages) of five travel agencies in Spain
using the numbers of “Likes” and comments regarding each post as indices of
attractiveness [9]. The results indicate that the number of “Likes” is affected
by the presence of video images and photos in the post and that the number of
comments is affected by the presence of photos and the time period of posting.
However, the methods of using Facebook are rapidly advancing and most recent
posts almost always include photos or video images.
Research on Facebook and its usefulness for those involved in the promotion
of regional tourism industries is limited, although how-to books on posting on
Facebook Pages [3] and marketing strategies using Facebook [10] have been pub-
lished. Consequently, Sawada et al. [11] analyzed the content of tourism-related
posts on Facebook Pages with high attractiveness and clarified the following.
The responses of users of Facebook Pages offering information related to Christ-
mas, flowers, and foods were high, whereas those for pages offering information
on cultural events tended to be low. The responses of users of Facebook Pages
offering real-time information, such as information on the start of the bloom-
ing of cherry blossoms, tended to be high. In addition, it was found that the
responses of users were affected by the type of photos posted.
To increase the attractiveness of Facebook Pages, not only the posts made by organizations but also the feelings that lead users to post comments need to be analyzed. The purpose of this study is to analyze the current state of the use of Facebook Pages in Japanese tourism and the sentiments behind the comments on Facebook Pages with high attractiveness, and thereby to clarify the feelings of the users who posted comments and the factors behind the increase in the attractiveness of these pages.
In Sect. 2, the current state of the use of Facebook Pages offering regional
tourism information is analyzed in terms of the number of fans, management,
and the responses of users to find the characteristics of Facebook Pages with high
attractiveness. In Sect. 3, user comments on these Facebook Pages are analyzed.
In Sect. 4, the factors behind the increase in the attractiveness of pages are
discussed on the basis of the results of analyzing user comments. Section 5 is the
conclusion.
646 A. Sawada and T. Yoshida
2 Current State of the Use of Facebook Pages Offering Regional Tourism Information

The authors browsed the 842 target pages in the travel category and classified them according to their contents. Figure 1 shows the results. Among the
842 pages, 30% of the pages are managed by accommodation facilities, 14% by
tourist facilities such as aquariums and museums, 7% by travel agencies, and
3% by transportation organizations. Some of these pages offer regional tourism
information such as festivals, events, and the start of the autumn foliage season.
However, these pages are basically used as a public relations tool of these facil-
ities and organizations. Other pages disseminate regional tourism information
rather than advertising the activities of organizations. The management of these
pages varies from companies and individuals to volunteer groups. Twenty-one
percent of the pages (178 pages) focus on information about particular regions;
3% focus on information about specific topics such as one-day hot spring trips,
famous places for flowers, and Buddhist statues; and 6% focus on information
about foreign countries. In this study, we targeted the 178 pages focusing on
information about particular regions.
Fig. 2. Number of fans of the 178 Facebook Pages offering regional tourism information
(2) Management
Figure 3 shows the classification of the management of the 178 pages offering regional tourism information. Among these pages, 36%, 24%, and 9% are managed by individuals, private companies, and local governments, respectively. The others include regional tourist associations in the form of general incorporated associations, incorporated nonprofit organizations (NPOs), and volunteer groups.
Figure 4 shows the classification of the management of Facebook Pages with
more than 10,000 fans among the 178 pages. Considering all the Facebook Pages
offering regional tourism information, more than 30% of the pages are managed
by individuals, as shown in Fig. 3; however, only 11% (2 pages) of Facebook Pages
with more than 10,000 fans are managed by individuals, as shown in Fig. 4.
An organization that manages one of the Facebook Pages with more than 40,000 fans was interviewed on 30 October 2015. The management consists of nine members in total: one young staff member and eight people who provide regional information. They have adopted a management rule of posting once or twice per day. The interviewee said that finding new material every day is laborious even with nine members. When a Facebook Page is managed by an individual, finding raw material and collecting information require much time and expense, making it difficult to post at a high frequency.
(3) Facebook Pages with High Attractiveness
Table 1 shows a summary of the 19 Facebook Pages with more than 10,000 fans. Facebook Pages for areas across Japan, from Hokkaido to Okinawa, attract many fans; two pages on Yokohama are also included in the list. For these 19 Facebook Pages, the posts, the presence or absence of photos and video images, the presence or absence of links to other homepages, and the numbers of "Likes", "Shares", and user comments for each post were obtained for the period between January 1 and December 31, 2015, via the Graph API.
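This data collection step can be sketched as follows. The page id, access token, and API version below are placeholders, and the field list is an assumption based on public Graph API conventions, not the authors' actual query:

```python
from urllib.parse import urlencode

def posts_url(page_id: str, access_token: str) -> str:
    """Build a Graph API request URL for a page's posts in 2015,
    requesting the reaction counts summarized in Table 1."""
    params = {
        "fields": "message,likes.summary(true),shares,comments.summary(true)",
        "since": "2015-01-01",
        "until": "2015-12-31",
        "access_token": access_token,
    }
    return f"https://graph.facebook.com/v2.5/{page_id}/posts?" + urlencode(params)

# Placeholders: substitute a real page id and token before use
url = posts_url("<PAGE_ID>", "<ACCESS_TOKEN>")
```

Paging through the `next` links in each response would then yield the full year of posts.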
Engagement rate is an index for evaluating the attractiveness of pages on SNSs. The conventional engagement rate of a Facebook Page is given by

    Engagement rate per fan (%) = (total "Likes" + total "Shares" + total comments) / (number of posts × number of fans) × 100, (1)

where the total numbers of "Likes", "Shares", and comments on the page are used in the calculation. Therefore, in this study, the conventional engagement rate calculated using Eq. (1) was used to determine the engagement rate per fan in Table 1. According to the Facebook Engagement Survey 2014 [2], the mean engagement rate of Facebook Pages with ≥ 10,000 and < 50,000 fans is 1.996% and that of Facebook Pages with ≥ 50,000 and < 100,000 fans is 1.499%. The mean engagement rate of Facebook Pages in the travel/leisure category is 1.56% (mean number of fans: 30,642).
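As a sanity check, the per-fan rate in Table 1 can be recomputed from the raw counts. The helper below assumes the engagement rate is the mean number of reactions per post expressed as a percentage of the fan count (a reconstruction that reproduces the tabulated figures); the numbers are the "Kyushu Tourism Information" row:

```python
def engagement_rate_per_fan(likes, shares, comments, posts, fans):
    """Engagement rate: mean reactions per post as a percentage of fans."""
    per_post = (likes + shares + comments) / posts
    return 100 * per_post / fans

# "Kyushu Tourism Information" row of Table 1 (Jan. 1 - Dec. 31, 2015)
rate = engagement_rate_per_fan(856_559, 38_515, 5_493, posts=371, fans=70_105)
print(f"{rate:.1f}%")  # -> 3.5%, matching the tabulated value
```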
The numbers of Facebook Pages with engagement rates per fan of < 1%, ≥ 1% and < 2%, ≥ 2% and < 3%, ≥ 3% and < 4%, and ≥ 8% and < 9% are 5, 2, 4, 7, and 1, respectively. The Facebook Page on Shirakawa-go accepts posts from users, so its number of posts is extremely high; consequently, its engagement rate per post is small, as is its engagement rate per fan.
In this study, both the engagement rate per fan and the number of fans are
used as indices of attractiveness. Three Facebook Pages with an engagement rate
of ≥ 3% and a large number of fans, namely, the Facebook Pages of “Kyushu
Tourism Information”, “Akita Vision”, and “Yokohama Tourism Information”,
were selected from Table 1 as Facebook Pages with high attractiveness to be
analyzed.
3 Analysis of User Comments

In this section, the user comments on the pages with high attractiveness are classified into ten types to clarify the feelings that cause users to post comments. For the Facebook Pages of "Kyushu Tourism Information", "Akita Vision", and "Yokohama Tourism Information", user comments collected over a period of one year (2015) were analyzed to avoid seasonal bias.
First, the engagement rates of all the posts on the three pages in 2015 were calculated. We read all the user comments on the posts with the top 100 engagement rates (hereafter, user comments for the top 100 posts) and on the posts with the lowest 100 engagement rates (hereafter, user comments for the lowest 100 posts) and classified them according to the feelings that caused users to post them. For this classification, the ten feelings in the Dictionary on Expression of Feelings [7] were used; namely, "joy (pleasure)", "anger (unpleasantness)", "sadness (crying)", "fear (apprehension)", "shame (humiliation)", "liking (longing, nostalgia)", "disgust (loathing, regret)", "sensation (inspiring, excitement)", "ease (calm)", and "surprise (amazement)". Table 2 shows examples of user comments classified into these ten feelings. Examples 1–9 are each classified as one of the ten feelings. When multiple feelings are included in one comment, as in example 10, the comment is classified under multiple feelings. Comments expressing the intention to "share" are classified as "share". Comments that do not belong to any of the ten feelings are classified as "others". Stamps without any comment are classified as "stamp".
Table 1. Summary of the 19 Facebook Pages with more than 10,000 fans (posts between Jan. 1 and Dec. 31, 2015)

Rank | Facebook Page | Management | Fans | "Likes" | "Shares" | Comments | Engagement per post | Engagement rate per fan | Posts
---- | ------------- | ---------- | ---- | ------- | -------- | -------- | ------------------- | ----------------------- | -----
1 | Kyushu Tourism Information | General incorporated association | 70,105 | 856,559 | 38,515 | 5,493 | 2,427 | 3.5% | 371
2 | Shirakawa-go | Private company | 67,049 | 1,125,034 | 44,538 | 6,147 | 450 | 0.7% | 2,612
3 | Akita Vision | Local government | 44,780 | 621,495 | 26,001 | 4,637 | 1,513 | 3.4% | 431
4 | The Heart of Osaka - Visit Osaka Japan | Local government | 43,673 | 27,622 | 553 | 151 | 44 | 0.1% | 651
5 | Yokohama Tourism Information | Public interest incorporated foundation | 42,655 | 651,163 | 23,136 | 3,370 | 1,433 | 3.4% | 473
6 | Yokohama China Town | Cooperative association | 32,748 | 149,274 | 4,468 | 1,054 | 499 | 1.5% | 310
7 | Gooood Place!! in Shiga | Private company | 32,566 | 236,962 | 7,080 | 1,297 | 1,141 | 3.5% | 215
8 | Otaru Fan | Private company | 22,830 | 31,289 | 27 | 9 | 870 | 3.8% | 36
9 | Exchange of Kyoto Information -Kyoto Now- | Private company | 18,592 | 99,287 | 5,473 | 436 | 408 | 2.2% | 258
10 | Hyogo Tourism Guide | Local government | 16,470 | 18,832 | 529 | 115 | 81 | 0.5% | 239
11 | Okinawa Diving | Private company | 13,870 | 1,520 | 18 | 13 | 111 | 0.8% | 14
12 | Okayama Great Spot Net | Public interest incorporated association | 12,547 | 76,720 | 3,081 | 721 | 323 | 2.6% | 249
13 | Fukuoka No Machi | Individual | 12,472 | 4,241 | 51 | 28 | 103 | 0.8% | 42
14 | I LOVE TOKUSHIMA | Individual | 12,267 | 62,946 | 3,432 | 797 | 1,050 | 8.6% | 64
15 | Hokkaido Fan Magazine | Private company | 12,053 | 179,615 | 10,050 | 1,129 | 305 | 2.5% | 625
16 | Let's Visit Nara | Private company | 11,857 | 40,560 | 1,590 | 157 | 188 | 1.6% | 225
17 | Kamakura Block | Private company | 11,275 | 58,557 | 1,301 | 671 | 348 | 3.1% | 174
18 | Web magazine [Shikoku Tairiku] | Private company | 11,026 | 58,428 | 2,468 | 362 | 286 | 2.6% | 214
19 | Nagasaki Tourism Promotion Section | Local government | 10,657 | 58,284 | 2,647 | 384 | 383 | 3.6% | 160
“others” is 1,200. The percentage of “liking (longing, nostalgia)” was the highest
(29.8%), followed by “share” (11.3%) and “sensation” (8.2%). There are no user
comments for either the top or the lowest 100 posts classified as “shame”.
The numbers of user comments for the top and lowest 100 posts in each feeling classification were compared. As shown in Table 3, for each feeling we developed a 2 × 2 contingency table summarizing the number of user comments classified as that feeling and the total number classified otherwise (the other feelings, "share", "stamp", and "others"), and carried out Fisher's exact test on each table. Table 4 shows a summary of the data and results. A significant difference between the user comments for the top and lowest 100 posts was observed for six feelings as well as for "share", "stamp", and "others". The underlines in Table 4 indicate which of the top or lowest 100 posts has a number of user comments above the expected value for each classification with a significant difference.
The number of user comments for the top and lowest 100 posts classified as
“joy” is small. Although a significant difference was observed between the two,
the content of comments is an expression of gratitude, such as “thank you for the
information on xxx”, “I’m looking forward to attending xxx” for the announce-
ment of an event, and “I’m looking forward to eating xxx” for seasonal food.
652 A. Sawada and T. Yoshida
Table 3. Contingency table summarizing the number of particular feelings for the user comments for the top and lowest 100 posts.

                               Top   Lowest
Feeling X                       a      b
Feelings other than feeling X   c      d

a: number of user comments for the top 100 posts classified as feeling X
b: number of user comments for the lowest 100 posts classified as feeling X
c: number of user comments for the top 100 posts classified as a feeling other than X
d: number of user comments for the lowest 100 posts classified as a feeling other than X
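Fisher's exact test on such a 2 × 2 table can be computed directly from the hypergeometric distribution. This is a generic textbook implementation, not the authors' code, and the example counts are hypothetical:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test on the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins whose
    probability does not exceed that of the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)

    def prob(x):  # hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: comments classified as feeling X vs. all others,
# for the top and lowest 100 posts
p = fisher_exact_two_sided(120, 60, 880, 540)
```

A p-value below 0.05 would then indicate a significant difference between the top and lowest 100 posts for that feeling.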
The number of user comments for the top and lowest 100 posts classified
as “anger” is small. Although a significant difference was observed between the
two, there are few user comments expressing anger and unpleasantness.
The number of user comments for the lowest 100 posts classified as "sadness" is small. For the top 100 posts, most of the user comments classified as "sadness" express sadness that the user cannot visit a place because of work despite a desire to do so, or that photos of their hometown evoke a longing to return.
The number of user comments for the lowest 100 posts classified as “fear”
is small. Examples of the user comments for the top 100 posts include concern
regarding going to a place introduced in the post and the fear of unconventional
designs and ideas in the post.
There are no user comments classified as “shame” for either the top or the
lowest 100 posts.
The number of user comments classified as "liking" is the largest among the user comments for both the top and lowest 100 posts. Although a significant difference was observed between the two, the comments express liking, such as "I like xxx hot spring" and "I love xxx!", longing, such as "I would like to visit xxx once in the future", and nostalgia evoked when old memories overlap with the post. People often express liking when they read posts related to their past experience and knowledge, such as a previously eaten food, a place they would like to visit in the future, and memories of childhood.

Table 4. Number of comments for the top and lowest 100 posts classified into each feeling category and the results of Fisher's exact test
The user comments for the top and lowest 100 posts classified as "disgust" mostly express feelings of regret, such as "I would like to go but I cannot". Only a few user comments express undesirable or unfavorable feelings.
The number of user comments for the top 100 posts classified as "sensation" tended to be large, and a significant difference was observed between the user comments for the top and lowest 100 posts. Among the user comments for the top 100 posts, those classified as "sensation" are the second most numerous after those classified as "liking". Users are moved by photos of beautiful scenes and excited by photos of delicious-looking food for both categories of posts.
The number of user comments for the top and lowest 100 posts classified as "ease" is small. Users feel relieved or soothed upon viewing the posted photos.
Regarding “surprise”, for user comments on the top 100 posts, users are
surprised at the scale and impact of photos or amazed by the beauty of photos.
User comments for the lowest 100 posts express surprise because they found
unexpected content in the post, such as “I never imagined that such a place
existed there”.
Some users whose comments are classified as "share" feel empathy with the post and express their intent to share it; in many cases, however, users simply click the share button or leave only a short comment expressing this intention. A significant difference was observed between the user comments for the top and lowest 100 posts, and the number of comments classified as "share" is greater for the lowest 100 posts. We presume that users related to the page's management, as well as its fans, share posts regardless of their feelings toward the contents.
The number of user comments for the top and lowest 100 posts classified as
“stamp” is small. A significant difference was observed between the two, and the
number of user comments for the lowest 100 posts classified as “stamp” tended
to be larger.
The number of user comments classified as “others” is largest for both the
top and lowest 100 posts. Examples of user comments include questions, such as
example 12 in Table 2, reports of past experience, such as “I have visited there”
and “I ate it yesterday”, and expressions of intent and plans, such as “I will visit
there next” and “I will go to eat there again this year”. We suggest that the
comments reporting past experiences are triggered by the feeling of liking the
post. The comments expressing intent and plans are possibly triggered by the
feeling of sensation (users are moved or excited about the post) and the feeling
of joy. However, we cannot detect the true feeling of users, and we classified user
comments that do not include the direct expression of feelings as “others” in
this study.
4 Discussion
In this section, directions for increasing the attractiveness of Facebook Pages for those involved in promoting regional tourism industries are discussed by referring to actual examples of posts for which many user comments expressed the feelings of users.
4.1 User Comments for the Top 100 Posts Classified as Liking
Example 1 is a post related to seasonal fish. Users report that the fish is deli-
cious and describe the characteristic taste of the fish during breeding season.
Example 2 is a post reporting the arrival of the season for tourist boats with
kotatsu, a small table having an electric heater underneath and covered by a
quilt. Comments 2-1 and 2-2 indicate that users have enjoyed this type of boat
before, recommend it to others, and express their intent to experience it again.
The user of comment 2-3 had experienced the boat without kotatsu and expresses
a willingness to experience the boat with kotatsu this season.
Among the user comments for the top 100 posts, those classified as "liking" are the most numerous. To increase the attractiveness of Facebook Pages, it is indispensable to post about famous tourist spots and foods that many users like, want to visit, and want to experience. For well-known
• Comment 2-1 (“liking”) The boat with kotatsu is only in winter and was
quaint. It was really nice (∧ ∧ )/!
• Comment 2-2 (“liking”) We ate steamed eel after enjoying a boat with
kotatsu!! I’d like to visit again.
• Comment 2-3 (“liking”) I visited there about 5 years ago because it’s my
husband’s hometown(∧ o∧ )/. It was December, but the boat didn’t have a
kotatsu. I’d like to try it when I visit again(o∧∧ o).
4.2 User Comments for the Top 100 Posts Classified as Sensation
4.3 User Comments for the Lowest 100 Posts Classified as Liking
Example 4 is a post related to local sweets. From comments 4-1 and 4-2, we can
see that users feel nostalgia for their favorite sweets. In contrast, comments 4-3
and 4-4 indicate that the sweets are not famous; only a small number of people
know about the sweets. Example 5 is a post related to a local restaurant. From
the post and comments 5-1 and 5-2, it is clear that the users know the restaurant
introduced in the post and agree on its attractiveness. From comments 5-3 and
5-4, it is clear that the users would like to visit the restaurant and eat the dishes
described in the post.
Among the user comments for the lowest 100 posts, the number classified as "liking" was the largest (357). Users who have eaten the sweets or visited the restaurant before, as in Examples 4 and 5, are thought to feel empathy for the posts and to comment out of nostalgia and memories of the food and service, whereas users unfamiliar with the contents of the posts showed no interest and did not respond. In addition, users connected to the region are interested in the posts because the posts provide information that is new to them. This finding suggests that posts about places and foods familiar to locals, even if not famous, can attract a certain number of responses from users, leading to an increase in the attractiveness of Facebook Pages.
5 Conclusion
In this study, we carried out an analysis of the feelings expressed in user com-
ments on Facebook Pages with many fans offering regional tourism information,
and clarified the feelings of the users who posted the comments and the factors
behind the increase in the attractiveness of these Facebook Pages.
Compared with websites, Facebook Pages can be started easily and have no startup costs; therefore, many pages are managed by individuals. When focusing on Facebook Pages with many fans, however, the percentage managed by individuals is small; Facebook Pages managed by organizations such as regional tourist associations and companies offering regional tourism information are the mainstream. However, such pages are not always actively accessed by many visitors. The information presented in this paper may be useful for increasing the attractiveness of Facebook Pages.
The engagement rate of the Facebook Page "I LOVE TOKUSHIMA" (Table 1) is extremely high (8.6%). The number of posts per year is 64, which corresponds to a posting frequency of about 1.2 posts per week. According to Locowise Ltd. (UK), the engagement rate of Facebook Pages with 1-4 posts per week is the highest. The survey by Locowise Ltd. targeted all Facebook Pages, not only those of the tourism industry. In future studies, we will investigate the relationship between the posting frequency and the engagement rate of tourism-related Facebook Pages to determine the optimal posting frequency.
Acknowledgments. This study was supported by Japan Society for the Promotion
of Science (JSPS) KAKENHI (Grant Number 15K00476).
References
1. Facebook (2017) Statistics of Facebook. http://newsroom.fb.com/company-info/
2. Facenavi (2017) Facebook Engagement Survey 2014. http://facebook.boo.jp/facebook-engagement-survey-2014
3. Comnico Inc (2014) Getting Fans! Know-How of Posting on Facebook. Shoeisha
4. Instagram (2017) Statistics of Instagram. https://www.instagram.com/press/
5. Ministry of Internal Affairs and Communications (2017) 2015 White Paper on Information and Communications in Japan. http://www.soumu.go.jp/johotsusintokei/whitepaper/ja/h27/pdf/
6. Line (2017) January-March 2016 business report. https://linecorp.com/ja/pr/news/ja/2016/1347
7. Nakamura A (1993) Dictionary on Expression of Feelings. Books Tokyodo, Tokyo
8. Ohara S, Kajiyama T, Ouchi N (2015) Extraction of characteristics of photos to increase the engagement rate of Facebook Pages of companies: analysis of photos of products of fast-food companies. IEICE Trans A J98-A(1):41-50
9. Sabate F, Berbegal-Mirabent J, Cañabate A et al (2014) Factors influencing popularity of branded content in Facebook fan pages. Eur Manag J 32(6):1001-1011
10. Saito S (2014) Facebook Marketing [Business Technique]: Technique of Obtaining Valuable "Likes". Shoeisha
11. Sawada A, Yoshida T, Murakami K (2016) Factors behind increasing attractiveness of tourism-related Facebook Pages. In: Proceedings of the 13th Workshop of the Society for Tourism Informatics, pp 17-20
12. Twitter (2017) Twitter usage/company facts. https://about.twitter.com/ja/company
13. Wilson RE, Gosling SD, Graham LT (2012) A review of Facebook research in the social sciences. Perspect Psychol Sci 7(3):203-220
Meta-analysis of the Factors Influencing
the Employees’ Creative Performance
Yang Xu¹, Ying Li¹(✉), Hari Nugroho², John Thomas Delaney³, and Ping Luo¹

¹ Business School, Sichuan University, Chengdu 610064, People's Republic of China
liyinggs@scu.edu.cn
² Department of Sociology, Gedung F, Faculty of Social and Political Sciences, Universitas Indonesia, Depok 16424, Indonesia
³ Kogod School of Business, American University, 4400 Massachusetts Ave, Washington, DC 20016-8044, USA
1 Introduction
Creative performance refers to the products, processes, methods, and ideas created by employees that are novel, practical, and valuable to an organization. Although creativity varies from person to person and cannot be generalized, it is not fixed; in other words, creativity fluctuates constantly. Likewise, creative performance fluctuates with employees' moods, which are shaped by the working environment. What are the factors affecting employees' creative performance? Do they have positive or negative impacts on staff? Which of these factors have a more profound influence, and which have less? These issues are
Creative performance refers to the products, processes, methods, and ideas created by employees that are novel, practical, and valuable to an organization. Work passion, a core characteristic of self-identification, refers to an employee's strong love for a certain occupation, which he or she considers vital and into which he or she tends to invest time and effort.

Autonomous orientation refers to an individual's desire to act based on his or her own interests and values. More precisely, such an individual treats incidents as opportunities and challenges and takes active steps to seize these opportunities and shoulder the responsibilities, showing strong intrinsic and extrinsic motivation.
Control orientation refers to an individual’s desire to act, based on other
individuals’ attitudes rather than his or her own ideas, because he or she is
vulnerable to the external environment.
Mastery approach goal orientation refers to an orientation in which an individual concentrates on abilities, learning, and tasks during development. Performance approach goal orientation refers to an individual's emphasis on performance-related matters.

Performance avoidance goal orientation refers to an individual's avoidance of performance-related matters.
Cognitive re-evaluation refers to an individual’s control of the emotional
response through changing the impact of events on personal awareness.
Inhibition of expression refers to an individual’s control of the emotional
response by suppressing the emotional expression.
Feedback consistency refers to the consistency between an employee’s expec-
tation of the boss’s feedback on his or her work performance and the actual
feedback this individual receives.
in each essay, are roughly the same. In terms of research methods, however, these papers principally adopt empirical research and scenario simulation. As for investigations of the influencing factors, they largely highlight only one factor at a time. Based on the literature above, this paper utilizes a different approach, meta-analysis, seeking to examine the factors that affect employees' creative performance holistically.
2 Research Method
3 Research Results
3.1 Description of the Research Data
Fig. 1. Distribution of the quantity of effect values from different article types
selected documents are included in the table, even though they may influence
each other. The specific data are shown in Table 3.
The distribution of the values of R and Z is illustrated in the line graph of Fig. 2. For R, the average is 0.35, the standard deviation is 0.201, the maximum is 0.71, the minimum is −0.058, and the skewness is −0.219, demonstrating negative skewness. For Z, the average is 0.62, the standard deviation is 0.410, the maximum is 1.506, the minimum is −0.086, and the skewness is 0.421, indicating positive skewness (see Table 4). Finally, the paper calculates the overall effect value across all the literature, which is 0.551, generally indicating a positive correlation between the variables of the selected research articles and employees' creative performance.
The methods for testing homogeneity include the Q test, the H test, the chi-square test, and the forest plot. Using the Q test, the paper first calculates the Q value through the following equations:

    Q = \sum_{i=1}^{k} w_i (r_i - \bar{R})^2,  (1)

    \bar{R} = \sum_{i=1}^{k} w_i r_i \Big/ \sum_{i=1}^{k} w_i,  (2)

where k represents the number of effect values in the selected literature, r_i is the i-th effect value, and w_i denotes the weight of the i-th effect value. The calculated values of \bar{R} and Q are 0.313575684 and 0.040220883, respectively.
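The Q computation can be sketched as follows; the effect values and weights below are hypothetical, not the 25 effect values used in the paper:

```python
def cochran_q(effects, weights):
    """Weighted mean effect (Eq. (2)) and Cochran's Q statistic (Eq. (1))."""
    r_bar = sum(w * r for w, r in zip(weights, effects)) / sum(weights)
    q = sum(w * (r - r_bar) ** 2 for w, r in zip(weights, effects))
    return r_bar, q

# Hypothetical effect values and weights
r_bar, q = cochran_q([0.30, 0.35, 0.28, 0.40], [1.0, 1.5, 1.2, 0.8])
print(round(r_bar, 4), round(q, 4))  # -> 0.3291 0.0084
```

Q is then compared against the chi-square critical value with k − 1 degrees of freedom; a Q below the critical value indicates homogeneity.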
The Meta results in Table 3 show that the 25 effect values are homogeneous. The Q statistic follows the chi-square distribution with K − 1 degrees of freedom; hence, when K is 25, the critical chi-square value is 28.241 at a significance level of 0.05. In other words, since Q = 0.040220883 < \chi^2_{0.025}(24) = 28.241, the selected research articles are homogeneous.
A bias test is considered essential in meta-analysis. Publication bias refers to the different opportunities for publication, and the corresponding impacts on the results, caused by researchers', evaluators', and editors' preferences regarding the direction and intensity of a study during the submission, acceptance, and publication process. According to the book Conducting Meta-Analysis through Stata, there are multiple approaches to testing for publication bias, such as the funnel plot, the Egger linear regression method, and the Begg rank correlation method. This paper uses the funnel plot, the most commonly used visual method for a qualitative assessment of publication bias, first proposed by Light et al. in 1984. Figure 3 shows the funnel plot derived from the meta-analysis of Table 3.

According to Fig. 3, there is no publication bias among the articles and theses used in this study, since the funnel plot is relatively symmetric.
It can be seen from the overall effect values in Table 3 that, when the
influencing factors are considered as a whole (positive mood, negative mood,
autonomous orientation, control orientation, time pressure, psychological
satisfaction, supervisor support, autonomous need, relationship motive,
internal motivation, external motivation, control motivation, working
motivation, team performance, team identity, charismatic leadership,
concentration, mastery approach goal orientation, performance approach goal
orientation, performance avoidance goal orientation, cognitive reevaluation,
inhibition of expression, and feedback consistency), they are positively
correlated with the employees' creative performance (R = 0.551). To enhance
creative performance, managers may improve employees' intrinsic motivation
through incentive policies or through executives' encouragement and support.
Since most employees will be affected by these factors, this study may, to
some extent, serve as a reference for scholars and managers. Nevertheless,
due to the possible interaction between these factors, their estimated
relationship with the employees' creative performance may be weak or
inaccurate, which is a limitation of this paper. The authors hope that future
scholars in this field will analyze this interaction explicitly.
We propose the following suggestions. First, future researchers may
investigate the interaction between the factors impacting the employees'
creative performance and explore the corresponding effects, because adjusting
or changing one factor may indirectly influence other determinants and
thereby produce a large change in the employees' creative performance;
conversely, changing one factor may cancel out the effect of several others,
causing a negative change in creative performance. Second, future researchers
may examine the proportion of the effect that each individual factor has on
creative performance. This would help managers prioritize incentives so that
the enterprise can maximize profit at the lowest cost. Overall, joint efforts
are needed to enhance the employees' creative performance.
The Impact of Mixed Ownership Reform
on Enterprise Performance–An Empirical Study
Based on A-Share Listing Corporation in China
Abstract. Using data on 486 listed companies over the 12-year span from 2003
to 2014, this paper finds that, in the Fixed-Effect Model, the proportion of
non-state-owned shares is negatively correlated with the profitability of the
company and positively correlated with the company's development capacity;
the proportion of circulating stocks has a significant positive impact on
corporate debt solvency and development ability; the proportion held by the
largest shareholder and the proportion held by the top ten shareholders are
significantly correlated with the total asset growth rate, while the
separation rate of the two rights and the total asset growth rate are
negatively correlated. Moreover, by using the Difference-in-Difference Model
to test the reform policies, the regression results show that the
restructuring has a significantly positive impact on corporate profitability
and development capacity. The consistency of the two methods confirms that
the mixed ownership reform policy plays a positive role in promoting the
profitability, debt solvency and development capability of state-owned listed
companies.
1 Introduction
The core reforms of state-owned enterprises (SOEs) operate on three main
levels: state-owned assets supervision, state-owned enterprise equity, and
state-owned enterprise operation. Among the three, SOE equity is the core of
state-owned enterprise restructuring. Its main task is to actively develop a
mixed ownership economy, allowing the state-owned economy and economies of
other ownership types to combine into a mixed ownership economy, allowing
mixed-ownership enterprises to implement employee shareholding, and finally
forming co-ownership of capital and labor. Therefore, the exploration of the
relationship between the different ownership structures of various property
natures and the performance of enterprises in the first restructuring process
and the "second revision" is a
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 55
2 Literature Review
Since the reform and opening up, China has been experiencing a gradual
economic transition. From the mid-to-late 1990s, following the establishment
of the modern enterprise system emphasized by the Third Plenary Session of
the 14th CPC Central Committee, the inefficiency of SOEs drew the attention
of many Chinese scholars to research on SOE efficiency. Yao [15], based on
data from the third industrial census, concluded that the rise of non-state
economic components may promote the overall improvement of China's industry.
Liu [7], using the 1995 national industrial census data, found that among
enterprises of various ownership types, SOEs have the lowest efficiency.
Liu and Li [8] used data from 451 competitive enterprises (1994–1999) in an
empirical study, found a negative impact of state ownership on corporate
performance, and concluded that non-state capital is positively correlated
with firm performance. During this period, many scholars studied the impact
of ownership changes on corporate performance [4,13]. Song and Yao [12] also
found a clear time trend: the effect of restructuring is most stable for
enterprises with a moderate length of restructuring history and
672 S. Ma et al.
for the enterprises that implemented restructuring between 1997 and 1999. Bai
and Tao [1] used panel data on national industrial enterprises from 1998 to
2003 to analyze the economic and social benefits of SOE restructuring, and
concluded that the economic benefits of state-controlled enterprises are
better while the social benefits of non-state-owned enterprises are better,
and that the restructuring effect will continue for some time. Yang et al.
[14] found that although the number of employees decreased after the
restructuring of collective enterprises, per-capita wage benefits and
taxation increased significantly, indicating that in general the
restructuring of collective enterprises had a positive effect on social
welfare. Sheng [11] used a multiplier method based on propensity score
matching to analyze micro-data on Chinese industrial enterprises from 1999 to
2007, and concluded that marketization and the introduction of a competition
mechanism allowed the reform of SOEs to promote social welfare. In the
ongoing process of enterprise restructuring, some scholars have used the
Difference-in-Difference Model to compare the effects of the restructuring
policy. Li and Qiao adopted the Difference-in-Difference Model for China's
industrial data from 1999 to 2006, and found that the economic performance of
state-owned enterprises improved significantly in 2003, and that the overall
economic performance of SOEs improved. Chen and Tang, based on the national
industrial data from 1999 to 2007, studied the social burden and the policy
burden on enterprises with the Difference-in-Difference Model, and found that
the mixed ownership reform can reduce the policy burden of SOEs, and that the
reform efficiency of mixed ownership in monopoly industries is higher than
that in competitive industries.
This paper uses 486 companies listed in China's A-share market from 2003 to
2014, covering all of the SFC's 2014 industry classifications, with a total
of 5832 effective observations, and sets dummy variables for industry and for
the three economic zones. According to the "State Council's work report on
state-owned enterprise reform and development" published in 2012, more than
90% of state-owned enterprises have completed the shareholding reform.
Debt Asset Ratio (DAR): the percentage of total liabilities at the end of the
period divided by the total amount of assets. It is an important measure of
the level of corporate liabilities and the degree of risk. 3 The growth
capacity is measured by the sustainable growth rate (SGR) and the total asset
growth rate (TAGR). The SGR is the highest growth rate that can be achieved
without issuing new shares while maintaining the current operating efficiency
and financial policy, so this indicator represents a suitable pace of
development. The TAGR is the ratio of the increase in total assets during the
year to the total assets at the beginning of the year, so it reflects the
growth of the assets. The growth of assets is an important aspect of the
development of an enterprise; enterprises with a high growth rate can
maintain steady asset growth.
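As a minimal sketch of these two indicator definitions (the balance-sheet figures below are hypothetical):

```python
def debt_asset_ratio(total_liabilities, total_assets):
    """DAR: end-of-period total liabilities divided by total assets."""
    return total_liabilities / total_assets

def total_asset_growth_rate(assets_begin, assets_end):
    """TAGR: the year's increase in total assets relative to the
    beginning-of-year total assets."""
    return (assets_end - assets_begin) / assets_begin

# Hypothetical figures (in million yuan)
dar = debt_asset_ratio(600.0, 1000.0)           # liabilities are 60% of assets
tagr = total_asset_growth_rate(1000.0, 1120.0)  # assets grew 12% over the year
```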
(2) Explanatory Variables
1 Mixed ownership restructuring index: the proportion of non-state-owned
shares (nonNSOS). Because the GTA CSMAR database contains the capital
structure of all listed companies, including the total number of shares and
the number of state-owned shares, the proportion of non-state shares can be
calculated.
2 The proportion of circulating capital shares (LTBL): the ratio of the cir-
culating shares to the total shares of the company. The greater the proportion
of circulating shares, the more the stock reflects the true value of the company.
3 Proportion of sponsor shares (POP) is the ratio of the total number of
sponsor shares to the total capital shares of the company. The sponsor shares
refer to the special shares offered by the listed company to the founder(s) of the
company.
4 The proportion of the largest shareholder holdings (POFLS): the ratio
of the number of shares held by the largest shareholder to the total number of
shares.
5 The proportion of the top ten shareholders holding (POTTS): the ratio
of the number of shares held by the top ten shareholders to the total number of
shares.
6 The separation rate of two rights (SRTR): the difference between the
control rights of the actual controller of the listed company and his or her
ownership rights.
We use the Herfindahl index to measure the concentration of the industrial
market. The higher the Herfindahl index, the higher the degree of market con-
centration, and the higher the degree of monopoly. Other explanatory variables
include total assets, total liabilities, paid-in capital, gross operating income, total
operating costs, equity multiplier, equity ratio, flow ratio, quick ratio, total num-
ber of shareholders, and income tax.
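A small sketch of the Herfindahl computation described above (the market shares are hypothetical):

```python
def herfindahl_index(firm_sales):
    """Herfindahl index: the sum of squared market shares of all firms
    in an industry.  Values near 1 indicate monopoly; values near 0
    indicate a fragmented, competitive market."""
    total = float(sum(firm_sales))
    return sum((s / total) ** 2 for s in firm_sales)

# Four equal-sized firms -> HHI = 4 * 0.25^2 = 0.25
competitive = herfindahl_index([10, 10, 10, 10])
# A single firm -> HHI = 1.0 (pure monopoly)
monopoly = herfindahl_index([40])
```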
The listed companies that we selected essentially cover the SFC industry
classification of 2014, and they are registered across the three major
economic zones of the eastern, central and western regions. For the study of
industries, several major industry categories are focused on, such as
(1) electricity, heat, gas and water production and supply, (2) manufacturing,
(3) real estate and (4) wholesale and retail. For the study of the regions,
which cover the three
major economic zones in the eastern, central and western regions, two dummy
variables are set as (East) for the East region and (West) for the West region.
Of the 486 listed companies selected, 268 are located in the eastern region, 114
are in the central region and 106 are in the western region.
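The two regional dummies can be coded as in this sketch (a hypothetical sample of firms; the central region serves as the omitted baseline category):

```python
# Hypothetical registration regions for five sample firms
regions = ["East", "Central", "West", "East", "West"]

# East = 1 for eastern-region firms, West = 1 for western-region firms;
# central-region firms get 0 on both dummies (the reference category)
east = [1 if r == "East" else 0 for r in regions]
west = [1 if r == "West" else 0 for r in regions]
```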
In order to measure the policy impact changes of the test group and the
control group over the same time period, we can make the following adjustment
to obtain the difference caused by the restructuring policies:
Table 2. An analysis of the effect of mixed ownership reform policy on the performance
of listed companies (difference-in-difference model regression).
{E(Y |D1i = 1) − E(Y |D1i = 0)} − {E(Y |D2i = 1) − E(Y |D2i = 0)}. (4)
The net effect of this restructuring policy measures not only the impact of
the policy before and after its implementation, but also the policy
differences between the test group and the control group. So we have the
following Difference-in-Difference regression model (Model 2):
Yit is the performance index of listed company i in year t; D1i is the group
dummy variable, where D1i = 1 denotes the test group; D2i is the time dummy
variable; and D3i is the interaction term, D3i = D1i × D2i. β3 is the
Difference-in-Difference statistic, that is, the difference brought about by
the policy. Since the other variables in Model 2 are derived from Model 1,
their definition and interpretation are not repeated here. For brevity we
only list the regression results for several variables related to the
proportion of non-state equity; the results and discussion of the other
variables are omitted. The results of Model 2 are shown in Table 2.
Table 2 shows the regression results for Model 2. The comparison between the
companies that implemented mixed ownership reforms and those that did not
shows that the difference-in-difference statistic for the interaction term D3
is highly significant for the total net profit margin, and relatively
significant for the total assets growth rate. In addition, the
difference-in-difference coefficient β3 is significantly positive, indicating
that, compared to enterprises without reform, enterprises that implemented
reforms have greatly improved their profitability and development capability.
The effect of the mixed ownership restructuring policy on the β3 coefficient
is not significant in the debt solvency regression, which indicates that the
effect of the mixed ownership reform policy is not reflected there. From
Table 2, we can also see that with the increase in the proportion of
non-state-owned shares, the decline in profitability and sustainable growth
rate is alleviated relative to the results in Table 1, indicating that the
restructuring policy has played a role in improving the profitability of
enterprises.
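As a sketch of the Model 2 idea, the double difference in Eq. (4) can be recovered as the interaction coefficient in an OLS regression. The data below are synthetic, with a known policy effect of 0.5 (everything here is hypothetical, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

d1 = rng.integers(0, 2, n)   # group dummy: 1 = firms that restructured
d2 = rng.integers(0, 2, n)   # time dummy: 1 = post-reform period
d3 = d1 * d2                 # interaction term D3 = D1 * D2

# Synthetic performance index with a known policy effect beta3 = 0.5
y = 1.0 + 0.2 * d1 + 0.3 * d2 + 0.5 * d3 + rng.normal(0.0, 0.1, n)

# OLS: y = b0 + b1*D1 + b2*D2 + b3*D1*D2; b3 is the DiD statistic
X = np.column_stack([np.ones(n), d1, d2, d3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_effect = beta[3]  # recovers a value close to the true effect of 0.5
```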
References
1. Bai D, Lu J, Tao Z (2006) An empirical study on the effect of state-owned enterprise
reform. Econ Res 8:4–13 (in Chinese)
2. Djankov S, Murrell P (2002) Enterprise restructuring in transition: a quantitative
survey. J Econ Lit 40(3):739–792
3. Frydman R, Gray C, Hessel M et al (1998) When does privatization work? The
impact of private ownership on corporate performance in the transition economies.
Q J Econ 114(4):1153–1191
4. Hu Y, Song M, Zhang J (2005) The relative importance and interrelationship of
the three theories of competition, property right and corporate governance. Econ
Res 9:44–57 (in Chinese)
5. Kang SW, Lee KH (2016) Mainstreaming corporate environmental strategy in
management research. Benchmarking 23(3):618–650
6. Kim J, Song HJ et al (2017) The impact of four CSR dimensions on a gaming
company’s image and customers’ revisit intentions. Int J Hospitality Manage 61:73–
81
7. Liu X (2000) The impact of the ownership structure of china’s industrial enterprises
on the efficiency difference - an empirical analysis of the census data of national
industrial enterprises in 1995. Econ Res 2:17–25 (in Chinese)
8. Liu X, Li L (2005) An empirical analysis of the impact of restructuring on corporate
performance. China Ind Econ 3:5–12 (in Chinese)
9. Megginson WL, Nash RC, Netter JM et al (2004) The choice of private versus
public capital markets: evidence from privatizations. J Finan 59(6):2835–2870
10. Pagano M (2005) The political economy of corporate governance. CSEF Working
Pap 95(4):1005–1030
11. Sheng D (2013) The restructuring of state-owned enterprises, the degree of compe-
tition and social welfare - a study based on the corporate cost mark-up percentage.
Economics (Q) 4:1465–1490 (in Chinese)
12. Song L, Yao Y (2005) The impact of restructuring on corporate performance. Soc
Sci China 2:17–31 (in Chinese)
13. Wu C, Li D (2005) Enterprise behavior in mixed market. Dongyue Tribune 1:38–47
(in Chinese)
14. Yang Z, Lu J, Tao Z (2007) Political asylum and reform: research on the restruc-
turing China’s collective enterprise. Econ Res 5:104–114 (in Chinese)
15. Yao Y (1998) Effects of non-state ownership on technological efficiency of industrial
enterprises in China. Econ Res 12:29–35 (in Chinese)
Online Impulse Buying on “Double Eleven”
Shopping Festival: An Empirical Investigation
of Utilitarian and Hedonic Motivations
1 Introduction
With the growth of e-commerce, online shopping is becoming popular across the
world, and China has become the largest e-commerce economy in the world.
Consumers increasingly prefer to buy goods online, especially on special
occasions. Most of
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 56
the time, when they make decisions about online shopping, they act
impulsively. One-click ordering, easy access to products, convenient
delivery, rich product information, and time savings all encourage impulsive
online buying. Many factors influence impulse buying behavior. An impulse
purchase is explained as compelling, hedonically complex and unplanned buying
behavior [35]. With the tremendous growth of e-commerce and the rapid
development of information technology, online impulse purchasing has become
pervasive. It is also noteworthy that about 40% of all online shoppers'
spending is attributable to online impulse purchasing [24,36]. The online
shopping setting is more encouraging of impulse buying behavior than its
offline counterpart, as the online shopping environment liberates consumers
from the constraints (e.g., social pressure from staff, inconvenient store
locations, limited operating hours, and time costs) that they might
experience during physical shopping events [13].
[Fig. 1 (chart): number of Chinese netizens (in 10,000 persons) and internet
penetration rate, 2005–2015. Netizens grew from 11,100 in 2005 to 68,826 in
2015, while the penetration rate rose from 8.5% to 50.3%.]
According to the CNNIC report, in December 2015 China had 688 million
netizens, up 39.51 million over the previous year. The internet penetration
rate reached 50.3%, up 2.4 percentage points from the end of 2014
(see Fig. 1) [8]. The survey results show that in 2015 the mobile phone was
the most popular device for internet access among new internet users, used by
71.5% of them, up 7.4 percentage points from the end of 2014. Among the new
internet users in 2015, 46.1% were under 19, 46.4% were students, and
entertainment and communication were their two biggest reasons to access the
internet. An online impulse purchase is defined as an immediate and sudden
online purchase with no pre-shopping intention [31]. Previous studies report
that unplanned buying accounts for up to 60% of all purchases [19,27], and
according to [18,21], 40% to 60% of purchases are impulsive, depending on the
product category. A number of studies have examined impulse buying in offline
settings, while little attention has focused on online impulse buying.
Recently, in online buying settings, scholars have studied how to better
682 U. Akram et al.
appeal to impulse shoppers to take advantage of the behavior that has helped
brick-and-mortar retailers flourish for decades [9,23,36,37]. Irrespective of
context, a main purpose in retailing is to increase attraction in order to
improve sales [6,21]. Due to the pervasiveness and practical implications of
impulse purchasing, retailers have focused significant effort on facilitating
the behavior [9,23]. This study is not only beneficial for online retailers;
it also offers future directions and guidelines for the scholars who have
studied impulse buying behavior and produced a number of studies in the last
decade.
In this regard, it is important to understand the comparative benefits of
buying online over offline shopping. Therefore, retailers should be aware of
the motivations behind consumers' online buying processes. Motivations are an
essential factor in defining an individual's behavior. They stem from unmet
needs and represent, through concrete actions, the benefits which people hope
to achieve [33]. Two types of motivational factors influence online shopping:
utilitarian and hedonic motivations [26]. When making consumption decisions,
it is reasonable to distinguish between utilitarian and hedonic motives.
These motives construct two aspects of attitudes with respect to behavioral
acts. Hedonic motivations are related to the experiential or emotional
aspects which make shopping pleasurable, while utilitarian motivations are
associated with rational, functional, practical, economic or extrinsic
benefits. In this research, we focus on how these two motivations influence
online impulse shopping.
Table 1. 2009–2016 Alipay “Double Eleven” day trading volume (unit: 100 million
yuan)
In China, the online shopping market has shown great potential due to the
rapid growth of Chinese netizens and online consumers (see Fig. 1). In order
to achieve competitive advantages, a number of businesses are joining the
e-commerce industry. They must understand consumer needs and try to meet
their demands, and they should craft marketing strategies (e.g., low-priced
products, augmented services, speedy transaction processes, quality products)
to earn profit and attract netizens. In this regard, the "Double Eleven"
online shopping festival is an emerging trend. Taobao introduced the "Double
Eleven" shopping festival on November 11, 2009, on "Singles' Day". The
"Double Eleven" online shopping festival is the most popular event among
Chinese netizens and the e-commerce industry, and it is the largest
commercial activity. By early 2016, the festival had been held successfully
seven times. This event has produced vast economic benefits, which have
increased year by year, with each year setting a new record for single-day
net purchases (see Table 1). The aforesaid studies about online impulse
buying behavior show that there is a great opportunity to identify how online
impulse
purchases are affected by this event, i.e., "Double Eleven", because to the
best of the authors' knowledge no study has been conducted in this context.
Only a few researchers have examined the success factors of the "Double
Eleven" model from different perspectives. Liu [24] reported that this
setting can effectively motivate online shoppers toward impulse buying
behavior (buying behavior that goes beyond what has been planned).
This study provides a better understanding of the motivational factors that
influence online impulse buying adoption in China from the perspective of the
"Double Eleven" shopping festival. Little work has investigated the
relationship between motivational factors and online impulse buying, and no
single study has yet investigated these factors in the context of the
"Double Eleven" shopping event. The current study fills this gap by
incorporating the concepts of the motivational factors, i.e., utilitarian and
hedonic, into online impulse buying behavior in the context of the "Double
Eleven" shopping activity. Following the discussion above, the purpose of
this manuscript is threefold: (1) to highlight the major findings regarding
the "Double Eleven" shopping festival, (2) to investigate the relationship
between online impulse buying and utilitarian motivations with regard to the
"Double Eleven" shopping activity, and (3) to explore the relationship
between hedonic motivations and online impulse buying with regard to the
"Double Eleven" shopping activity. The rest of the manuscript is organized as
follows. First we discuss the core concepts of online impulse buying and the
motivational factors, i.e., utilitarian and hedonic, and their relationship;
based on their connection we develop the hypotheses and draw the research
model. In the next section, we test the hypotheses based on data collection
and data analysis. Finally, the last section presents the research outcomes,
managerial implications and future directions.
H1 (HWB → OIB): Hedonic web browsing is positively related to online impulse
buying in the context of "Double Eleven" shopping events.
H2 (UWB → OIB): Utilitarian web browsing is positively related to online
impulse buying in the context of "Double Eleven" shopping events.
Fig. 2. A structure model for Hedonic web browsing, Utilitarian Web browsing and
Online impulse buying
Mathematical Model
4 Research Methodology
The aim of this study is to examine the effect of motivational factors on OIB
during the "Double Eleven" shopping festival. Data were collected from six
different districts of Beijing: Haidian, Chaoyang, Fengtai, Changping,
Tongzhou, and Dongcheng. The survey was conducted on the date of the Double
Eleven shopping festival, 11 November 2016. A convenience sampling method was
utilized to collect a large sample.
4.1 Questionnaire
Paper questionnaires and online survey techniques were used for data
collection. The questionnaire was initially developed in English and
subsequently translated into Chinese, then back-translated into English to
check accuracy. The services of Chinese language experts and translators were
employed for the translation and back-translation, which were validated by
the International Chinese Training Center, Beijing. The first part of the
questionnaire was designed to ensure that the respondents had online buying
experience on the "Double Eleven" shopping festival and to collect
information on the respondents' characteristics. The second section was
designed to examine the relationships among online impulse buying behavior
and hedonic and utilitarian motivations. Pilot testing was conducted
(n = 65). In total, 470 questionnaires were distributed among online
shoppers, of which n = 426 were valid for data analysis, a response rate of
90.63%. Notebooks costing about 5 RMB (0.7 USD) were provided to the
respondents to motivate them and reward their patience.
4.2 Instrumentation
Five items were adapted from [36] to measure online impulse buying behavior
(e.g., "My purchases were spontaneous"). To measure utilitarian web browsing,
five items were adopted from [9] (e.g., "I browse to buy better items in
price and quality"), and to measure hedonic web browsing, four items were
adopted from [9] (e.g., "While web browsing, I am able to forget my problems
and feel relaxed"). Rezaei [32] also used these three instruments in his
study.
Composite reliability was measured with Jöreskog's rho,
ρ = (Σᵢ₌₁ᵏ λᵢ)² / [(Σᵢ₌₁ᵏ λᵢ)² + Σᵢ₌₁ᵏ (1 − λᵢ²)],
where λᵢ is the standardized loading of item i and k is the number of items.
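The composite-reliability formula can be sketched in a few lines, assuming the standard form of Jöreskog's rho (the loadings below are hypothetical):

```python
def joreskog_rho(loadings):
    """Composite reliability: (sum of loadings)^2 divided by the same
    quantity plus the summed error variances (1 - loading^2)."""
    s = sum(loadings)
    error = sum(1.0 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Four hypothetical standardized loadings of .8 each
rho = joreskog_rho([0.8, 0.8, 0.8, 0.8])  # about .88, above the .80 cutoff
```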
Before examining the structural model, the multicollinearity issue was
assessed through the variance inflation factor (VIF) produced by SPSS
version 21. The VIFs were 1.34 for online impulse buying, 2.70 for
utilitarian web browsing and 1.76 for hedonic web browsing; all stated values
are lower than the threshold value of 10 [10]. Therefore, the results show no
serious multicollinearity problem in our study. Missing values in a data set
are a big challenge for social science scholars in areas such as information
systems, human resource management and marketing. Many techniques can be
applied to treat missing values, but multiple imputation is considered
effective [32]. In this study, missing values were imputed using the
expectation maximization algorithm (EMA) in SPSS. For the missing value
treatment, Little's missing completely at random (MCAR) test, a chi-square
test for data missing completely at random, was applied: χ² = 334.270,
df = 332, significance level = .444. Furthermore, before the factor analysis,
the Kaiser-Meyer-Olkin (KMO) test was assessed for sampling adequacy,
yielding a KMO value of .867, which exceeds the commonly used
benchmark [16].
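The VIF check described above can be reproduced outside SPSS. This sketch (plain numpy, synthetic data) computes VIF_j = 1/(1 − R_j²) by regressing each predictor on the others:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of X.  Column j's VIF
    is 1 / (1 - R_j^2), where R_j^2 comes from an OLS regression of
    column j on the remaining columns plus a constant."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
a = rng.normal(size=500)
b = rng.normal(size=500)            # independent of a: VIFs near 1
c = a + 0.1 * rng.normal(size=500)  # nearly collinear with a: large VIF
low_vifs = vif(np.column_stack([a, b]))
high_vifs = vif(np.column_stack([a, c]))
```

Values near 1 indicate no inflation; values above the threshold of 10 cited in the text would signal a multicollinearity problem.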
To examine the structural model, confirmatory factor analysis (CFA) was used
to test the hypotheses and the measurement model using AMOS version 21. After
removing problematic items, we reran the CFA; the results indicated a
satisfactory fit (CMIN/DF = 1.82; p < .001; NFI = .98; IFI = .96; GFI = .887;
CFI = .91; RMSEA = .05). All CFA values were satisfactory and met the
threshold values. Reliability and validity tests were performed on the model.
Convergent validity was examined using three parameters: (1) factor loadings
greater than .70 with statistical significance, (2) composite reliability
(CR) larger than .80, and (3) average variance extracted (AVE) higher than
.50 [16]. Table 3 indicates that all factor loadings are greater than .70;
in addition, all constructs show a high level of internal-consistency
reliability, with CR and Cronbach's alpha values ranging from .84 to .98. The
AVE values are greater than the benchmark value of .50. Thus, we attained a
good level of convergent validity and reliability. Table 3 reports all
statistical values of the means, standard deviations and discriminant
validity. We used Hair's criterion to evaluate discriminant validity, which
requires that the square root of the AVE for each construct be higher than
its correlations with the other constructs [16]. Discriminant validity among
all variables was confirmed based on Hair's criterion (see Table 4).
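Hair's criterion can be expressed as a small check. The AVE and correlation values below are hypothetical placeholders, not the values from Tables 3–4:

```python
import math

def discriminant_validity_ok(ave, correlations):
    """Hair's (Fornell-Larcker) criterion: the square root of each
    construct's AVE must exceed that construct's correlation with
    every other construct."""
    for (a, b), r in correlations.items():
        if math.sqrt(ave[a]) <= abs(r) or math.sqrt(ave[b]) <= abs(r):
            return False
    return True

# Hypothetical AVEs and inter-construct correlations
ave = {"OIB": 0.64, "UWB": 0.58, "HWB": 0.61}
corr = {("OIB", "UWB"): 0.45, ("OIB", "HWB"): 0.50, ("UWB", "HWB"): 0.40}
ok = discriminant_validity_ok(ave, corr)  # True: every sqrt(AVE) > correlation
```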
Table 5 depicts the path coefficients (β), which indicate the hypothesized
associations among the constructs; Figure 2 presents the results of the
structural model. H1, which proposes a positive relationship between hedonic
web browsing and online impulse buying, was supported, with a path
coefficient (β) of 0.823, a t-statistic of 21.4 and a standard error of
0.067. The results demonstrate that hedonic web browsing positively and
significantly influences online impulse buying on the "Double Eleven"
shopping festival. H2 proposed that utilitarian web browsing positively
influences online impulse buying (utilitarian web browsing → online impulse
buying); with a coefficient (β) of 0.764, a standard error of 0.089 and a
t-statistic of 32.1, H2 was also supported. Furthermore, Table 4 shows the R²
values for the relationships among the constructs. The R² value is .541 for
(UWB → OIB) and .424 for (HWB → OIB), meaning that utilitarian web browsing
explains 54.1% of the variance in online impulse buying while hedonic web
browsing explains 42.4%. Judging by the path coefficients, hedonic web
browsing influences online impulse buying more strongly than utilitarian web
browsing. The findings confirm that UWB and HWB strongly influence online
impulse buying.
The results of this study demonstrate that both hedonic and utilitarian web
browsing influence online impulse buying behavior during the "Double Eleven"
shopping festival. The findings also support the broadened theory of impulse
purchase behavior [5], which suggests that web-browsing motivation is key to
stimulating online impulse purchases of apparel from both hedonic and
utilitarian perspectives. In line with the findings of this study, Kim and
Eastin argued that hedonic and utilitarian browsing behave very differently in
online shopping. From the utilitarian perspective, consumers focus on
completing consumption goals [3,4,7,29], whereas from the hedonic perspective
they attend more to entertainment, emotion, and fun while browsing [7,22].
Park [9] found that hedonic web browsing positively influences OIB whereas
utilitarian web browsing influences it negatively. In an online shopping
context, this evidence supports the hedonic nature of online impulse
behavior [6,30].
E-tailers should build strategies around the hedonic and utilitarian web
browsing motives of e-shoppers, for instance by ensuring an easy purchase
process, a good selection of merchandise, and an elegant and secure website;
all of these factors encourage online impulse shopping. This study has
important practical implications for OIB during the "Double Eleven" shopping
festival. Its findings will help e-tail managers and web hosts associated with
online selling exploit the exogenous factors HWB and UWB to convert web
traffic into impulse sales. Utilitarian and hedonic web browsing play a
significant role in motivating e-shoppers to buy impulsively, especially
during the "Double Eleven" event. In China, the number of netizens and the
internet penetration rate are growing rapidly, and more and more individuals
prefer to buy online. The outcomes of this study will be useful for China's
two big e-tail players (www.jd.com and www.taobao.com) as well as for small
e-tail businesses. Marketers should therefore focus on these strategies to
increase impulse buying, which is a major contribution to the field of
e-commerce retailing and marketing (Table 6).
Despite its strengths and potential contributions, the study has limitations
that future research should address before generalizing the findings. First,
data were collected only in Beijing and only around the "Double Eleven"
shopping festival; the results could be strengthened by expanding the sample
and incorporating other festivals known to increase online sales. Second,
this study considered only two motivational factors, UWB and HWB; future
scholars may examine other factors related to website personality and
emotional factors such as perceived usefulness and perceived ease of use, and
the outcomes could be used to improve other types of e-commerce websites.
Third, the study is cross-sectional and based on quantitative data; given the
online shopping characteristics of the Double Eleven festival, longitudinal
and qualitative methods (e.g., interviews and focus groups) could deepen
understanding of the online shopping environment. Pre- and post-purchase
behavior around the "Double Eleven" shopping festival may also be analyzed by
future researchers. Lastly, future studies are encouraged to test the proposed
model in different e-tail cultural environments.
Appendix
References
1. Akram U, Peng H et al (2017) Impulsive buying: a qualitative investigation of the
phenomenon. Springer, Singapore
2. Akram U, Peng H et al (2016) Impact of store atmosphere on impulse buying
behaviour. Moderating effect of demographic variables. Int J U-and E-Serv Sci
Technol 9(7):43–60
3. Babin BJ, Griffin M (1994) Work and/or fun: measuring hedonic and utilitarian
shopping value. J Consum Res 20(4):644–656
4. Babin LA, Babin BJ, Boles JS (1999) The effects of consumer perceptions of the
salesperson, product and dealer on purchase intentions. J Retail Consum Serv
6(6):91–97
5. Baumeister RF (2002) Yielding to temptation: self-control failure, impulsive pur-
chasing, and consumer behavior. J Consum Res 28:670–676
6. Beatty SE, Ferrell ME (1998) Impulse buying: modeling its precursors. J Retail
74(2):161–167
7. Choi J, Li YJ et al (2014) The odd-ending price justification effect: the influence of
price-endings on hedonic and utilitarian consumption. J Acad Mark Sci 42(5):545–
557
8. CNNIC (2016) Statistic report on internet development in China
9. Dholakia UM (2000) Temptation and resistance: an integrated model of consump-
tion impulse formation and enactment. Psychol Mark 17(11):955–982
10. Diamantopoulos A, Siguaw JA (2006) Formative versus reflective indicators in
organizational measure development: a comparison and empirical illustration. Br
J Manage 17(4):263–282
11. Dittmar H, Long K, Meek R (2004) Buying on the internet: gender differences in
on-line and conventional buying motivations. Sex Roles 50(5):423–444
12. Eastin SK, Matthew S (2011) Hedonic tendencies and the online consumer: an
investigation of the online shopping process. J Internet Commer 10(1):68–90
13. Eroglu SA, Machleit KA, Davis LM (2001) Atmospheric qualities of online retail-
ing: a conceptual model and implications. J Bus Res 54(2):177–184
14. Gohary A, Hanzaee KH (2014) Personality traits as predictors of shopping motiva-
tions and behaviors: a canonical correlation analysis. Arab Econ Bus J 10(9):166–
174
15. Ha S, Stoel L (2012) Online apparel retailing: roles of e-shopping quality and
experiential e-shopping motives. J Serv Manage 23(2):197–215
16. Hair JF, Black WC et al (2010) Multivariate data analysis: a global perspective
17. Hair JF, Sarstedt M (2011) PLS-SEM: indeed a silver bullet. J Mark Theor Pract
19(2):139–151
18. Hausman A (2000) A multi-method investigation of consumer motivations in
impulse buying behavior. J Consum Mark 17(5):403–426
19. Inman JJ, Ferraro R, Winer RS (2004) Where the rubber meets the road: a model
of in-store consumer decision making
20. Jones MA, Reynolds KE et al (2003) The product-specific nature of impulse buying
tendency. J Bus Res 56(7):505–511
21. Kacen JJ, Hess JD, Walker D (2012) Spontaneous selection: the influence of prod-
uct and retailing factors on consumer impulse purchases. J Retail Consum Serv
19(6):578–588
22. Kaltcheva VD, Weitz BA (2013) When should a retailer create an exciting store
environment. J Mark 70(1):107–118
23. Kervenoael RD, Aykac DSO, Palmer M (2009) Online social capital: understanding
e-impulse buying in practice. J Retail Consum Serv 16(4):320–328
24. Liu Y, Li H, Hu F (2013) Website attributes in urging online impulse purchase: an
empirical investigation on consumer perceptions. Decis Support Syst 55(3):829–837
25. Madhavaram SR, Laverie DA (2004) Exploring impulse purchasing on the internet.
Adv Consum Res 31:59–66
1 Introduction
Multiple attribute decision making (MADM) aims to find the ranking position
of alternatives in the presence of multiple incommensurate attributes. Many
MADM problems take place in environments in which the information about
attribute weights is incompletely known and attribute values take the form of
intervals and fuzzy numbers [16,24,29].
Grey relational analysis (GRA) is part of grey system theory [3], which is
suitable for solving a variety of MADM problems with both crisp and fuzzy data.
The application of GRA with fuzzy data has recently attracted the attention of
many scholars [5,8,25].
GRA solves MADM problems by aggregating incommensurate attributes for
each alternative into a single composite value while the weight of each attribute is
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 57
696 M.S. Pakkar
2 Methodology
2.1 Fuzzy Multiple Attribute Grey Relational Analysis
The reference sequence of the ideal alternative is defined attribute-wise as

$$u_{0j}^- = \max\{r_{1j}^-, r_{2j}^-, \cdots, r_{mj}^-\} \quad \forall j, \qquad (4)$$

$$u_{0j}^+ = \max\{r_{1j}^+, r_{2j}^+, \cdots, r_{mj}^+\} \quad \forall j. \qquad (5)$$
To measure the degree of similarity between $r_{ij} = [r_{ij}^-, r_{ij}^+]$ and
$u_{0j} = [u_{0j}^-, u_{0j}^+]$ for each attribute, the grey relational
coefficient, $\xi_{ij}$, can be calculated as follows:

$$\xi_{ij} = \frac{\min_i \min_j \left\|[u_{0j}^-, u_{0j}^+] - [r_{ij}^-, r_{ij}^+]\right\| + \rho \max_i \max_j \left\|[u_{0j}^-, u_{0j}^+] - [r_{ij}^-, r_{ij}^+]\right\|}{\left\|[u_{0j}^-, u_{0j}^+] - [r_{ij}^-, r_{ij}^+]\right\| + \rho \max_i \max_j \left\|[u_{0j}^-, u_{0j}^+] - [r_{ij}^-, r_{ij}^+]\right\|}, \qquad (6)$$

where $\rho$ is the distinguishing coefficient, generally $\rho = 0.5$. It should be noted that the final results of GRA
for MADM problems are very robust to changes in the values of ρ. Therefore,
selecting different values of ρ would only slightly change the rank order of
the alternatives [13]. To find an aggregated measure of similarity between alternative
Ai , characterized by the comparability sequence Ri , and the ideal alternative
A0 , characterized by the reference sequence U0 , over all the attributes, the grey
relational grade, Γi , can be computed as follows:
$$\Gamma_i = \sum_{j=1}^{n} w_j \xi_{ij}, \qquad (7)$$
where $w_j$ is the weight of attribute $C_j$ and $\sum_{j=1}^{n} w_j = 1$. In practice, expert
judgments are often used to obtain the weights of attributes. When such infor-
mation is unavailable equal weights seem to be a norm. Nonetheless, the use
of equal weights does not place an alternative in the best ranking position in
comparison to the other alternatives. In the next section, we show how DEA can
be used to obtain the optimal weights of attributes for each alternative in GRA.
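For the crisp special case (in which each interval $[r^-, r^+]$ collapses to a single number), Eqs. (4)-(7) can be sketched as follows; the decision matrix and weights are hypothetical illustration values, not data from the paper:

```python
import numpy as np

def grey_relational(R, weights, rho=0.5):
    """Grey relational coefficients (Eq. (6)) and grades (Eq. (7)) for an
    m x n matrix R of normalized benefit values; crisp special case in
    which each interval [r-, r+] collapses to a single number."""
    R = np.asarray(R, dtype=float)
    u0 = R.max(axis=0)                    # ideal reference sequence, Eqs. (4)-(5)
    dist = np.abs(u0 - R)                 # deviation from the ideal alternative
    dmin, dmax = dist.min(), dist.max()
    xi = (dmin + rho * dmax) / (dist + rho * dmax)   # Eq. (6)
    gamma = xi @ np.asarray(weights, dtype=float)    # Eq. (7)
    return xi, gamma

# Two alternatives, two attributes, equal weights (hypothetical data)
xi, gamma = grey_relational([[0.6, 1.0], [1.0, 0.4]], weights=[0.5, 0.5])
print(np.round(gamma, 4))   # [0.7143 0.6667]
```

An alternative that matches the ideal on every attribute would obtain coefficients of 1 throughout and hence the maximum grade of 1.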
Since all the grey relational coefficients are benefit (output) data, a DEA-based
GRA model can be formulated similar to a classical DEA model without explicit
inputs [15]:
$$\Gamma_k = \max \sum_{j=1}^{n} w_j \xi_{kj}, \qquad (8)$$

$$\text{s.t.} \quad \sum_{j=1}^{n} w_j \xi_{ij} \le 1 \quad \forall i, \qquad (9)$$
where Γk is the grey relational grade for alternative under assessment Ak (known
as a decision making unit in the DEA terminology). k is the index for the
alternative under assessment where k ranges over 1, 2, · · · , m. wj is the weight
of attribute Cj . The first set of constraints (9) ensures that when the computed
weights are applied to the group of m alternatives (i = 1, 2, · · · , m), no
alternative attains a grade larger than 1. The process of solving the model is repeated
to obtain the optimal grey relational grade and the optimal weights required
to attain such a grade for each alternative. The objective function (8) in this
model maximizes the ratio of the grey relational grade of alternative Ak to
the maximum grey relational grade across all alternatives for the same set of
weights (max Γk /maxi=1,··· ,m Γi ). Hence, an optimal set of weights in the DEA
based-GRA model represents Ak in the best light in comparison to all the other
alternatives. It should be noted that the grey relational coefficients are
normalized data; consequently, the weights attached to them are also
normalized. In addition, adding the constraint $\sum_{j=1}^{n} w_j = 1$ to the
DEA-based GRA model is not recommended here. In fact, the sum-to-one constraint
is a non-homogeneous constraint (i.e., its right-hand side is a non-zero free
constant), which can lead to underestimation of the grey relational grades of
alternatives or to infeasibility in the DEA-based GRA model (see [22]).
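A minimal sketch of model (8)-(9) for a small two-attribute example: instead of calling an LP solver, it enumerates the vertices of the feasible region, which is exact for problems of this size. The coefficient matrix is hypothetical:

```python
import itertools
import numpy as np

def dea_gra_grade(xi, k):
    """Grade of alternative k under model (8)-(9): maximize w . xi_k subject
    to w . xi_i <= 1 for all i and w >= 0. Solved by enumerating vertices of
    the feasible region; a real implementation would call an LP solver."""
    xi = np.asarray(xi, dtype=float)
    m, n = xi.shape
    # Candidate active constraints: the m grade bounds plus the n sign bounds.
    rows = [xi[i] for i in range(m)] + [np.eye(n)[j] for j in range(n)]
    rhs = [1.0] * m + [0.0] * n
    best = 0.0
    for idx in itertools.combinations(range(m + n), n):
        A = np.array([rows[i] for i in idx])
        b = np.array([rhs[i] for i in idx])
        if abs(np.linalg.det(A)) < 1e-12:
            continue                       # constraints not independent
        w = np.linalg.solve(A, b)          # candidate vertex
        if (w >= -1e-9).all() and (xi @ w <= 1 + 1e-9).all():
            best = max(best, float(xi[k] @ w))
    return best

# Hypothetical 3-alternative, 2-attribute coefficient matrix
xi = np.array([[0.5, 0.9], [0.9, 0.5], [0.6, 0.6]])
print([round(dea_gra_grade(xi, k), 3) for k in range(3)])   # [1.0, 1.0, 0.857]
```

Each alternative picks the weights that show it in the best light: the first two reach the maximum grade of 1, while the third peaks at 6/7 under its own most favorable weights.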
Let Γk∗ denote the optimal grade obtained from the DEA-based GRA model. We want
the grey relational grade Γk (w), calculated from the vector of weights
w = (w1 , · · · , wn ), to be as close as possible to Γk∗ . Our definition
of "closest" is that the largest distance is at its minimum. Hence we choose
the form of the minimax model: minw maxk {Γk∗ − Γk (w)} to minimize a single
deviation which is equivalent to the following linear model:
$$\min \ \theta \qquad (11)$$

$$\text{s.t.} \quad \Gamma_k^* - \sum_{j=1}^{n} w_j \xi_{kj} \le \theta, \qquad (12)$$

$$\sum_{j=1}^{n} w_j \xi_{ij} \le \Gamma_i^* \quad \forall i, \qquad (13)$$

$$\theta \le 1, \qquad (14)$$

$$\theta, \ w_j \ge 0 \quad \forall j. \qquad (15)$$
The combination of (11)-(15) forms a minimax DEA-based GRA model that
identifies the minimum grey relational loss θmin needed to arrive at an optimal
set of weights. The first constraint ensures that each alternative loses no more
than θ of its best attainable relational grade Γk∗. The second set of constraints
ensures that the relational grades of all alternatives stay at or below their
upper bounds Γi∗. It should be noted that for each alternative the minimum
grey relational loss is θ = 0; therefore, the optimal set of weights obtained from
the minimax DEA-based GRA model is identical to that obtained from
the DEA-based GRA model.
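The feasibility logic of constraints (12)-(15) can be illustrated as follows; the coefficient matrix, grades, and weights below are hypothetical values of the kind the DEA stage would produce, not data from the paper:

```python
import numpy as np

# Hypothetical DEA-stage data for a 3-alternative, 2-attribute example
xi = np.array([[0.5, 0.9], [0.9, 0.5], [0.6, 0.6]])
gamma_star = np.array([1.0, 1.0, 6.0 / 7.0])   # best attainable grades
w = np.array([5.0 / 7.0, 5.0 / 7.0])            # optimal weights for k = 2

def minimax_feasible(xi, gamma_star, w, k, theta):
    """Check feasibility of (theta, w) in the minimax model (11)-(15): the
    assessed alternative k loses at most theta of its best grade (Eq. (12)),
    and no alternative's grade exceeds its upper bound (Eq. (13))."""
    loss_ok = gamma_star[k] - xi[k] @ w <= theta + 1e-9
    bounds_ok = (xi @ w <= gamma_star + 1e-9).all()
    return bool(loss_ok and bounds_ok and 0 <= theta <= 1)

# The DEA-optimal weights remain feasible with zero loss, i.e. theta_min = 0
print(minimax_feasible(xi, gamma_star, w, k=2, theta=0.0))   # True
```

With equal weights instead of the DEA-optimal ones, constraint (12) is violated at θ = 0, which is why the minimax model reproduces the DEA weights at θmin = 0.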
On the other hand, the priority weights of the attributes are defined outside
the internal mechanism of DEA, by AHP. To demonstrate more clearly how
AHP is integrated into the newly proposed minimax DEA-based GRA model,
this research presents an analytical process in which the attribute weights are
bounded by the AHP method. The AHP procedure for imposing weight bounds
can be broken down into the following steps:
Step 1. A decision maker constructs a pairwise comparison matrix of the
        different attributes, denoted by B with entries bhq (h, q = 1, 2, · · · , n).
        The comparative importance of the attributes is provided by the decision
        maker using a rating scale; Saaty [23] recommends a 1-9 scale.
Step 2. The AHP method obtains the priority weights of the attributes by
        computing the eigenvector $e = (e_1, e_2, \cdots, e_n)^T$ of matrix $B$
        associated with the largest eigenvalue $\lambda_{\max}$:

$$Be = \lambda_{\max} e. \qquad (16)$$
To determine whether or not the inconsistency in a comparison matrix
is reasonable, the random consistency ratio, C.R., can be computed by
the following equation:

$$C.R. = \frac{\lambda_{\max} - N}{(N - 1)\, R.I.}, \qquad (17)$$
where R.I. is the average random consistency index and N is the size
of a comparison matrix.
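Steps 1-2 and the consistency check of Eq. (17) can be sketched as follows, using a hypothetical (perfectly consistent) 3x3 judgment matrix; the R.I. values are Saaty's published averages:

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}   # Saaty's random consistency indices

def ahp_weights(B):
    """Priority weights (principal eigenvector of B, Eq. (16)) and the
    consistency ratio C.R. (Eq. (17))."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    eigvals, eigvecs = np.linalg.eig(B)
    k = np.argmax(eigvals.real)                 # Perron (largest) eigenvalue
    lam_max = eigvals.real[k]
    e = np.abs(eigvecs[:, k].real)
    e = e / e.sum()                             # normalized priority weights
    cr = (lam_max - n) / ((n - 1) * RI[n]) if n > 2 else 0.0
    return e, cr

# Hypothetical 1-9 scale judgments: attribute 1 is twice as important as 2, etc.
B = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
e, cr = ahp_weights(B)
print(np.round(e, 3))   # [0.571 0.286 0.143]
print(cr)
```

Because this matrix is perfectly consistent, λmax = N and C.R. is (numerically) zero; in practice a C.R. below 0.10 is usually considered acceptable.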
In order to estimate the maximum relational loss θmax necessary
to achieve the priority weights of attributes for each alternative, the
following set of constraints is added to the minimax DEA-based GRA
model:
where Zk∗ (θ) is the optimal value of the objective function for 0 ≤ θ ≤ θmax . We
define Δk (θ) as a measure of closeness which represents the relative closeness
Fuzzy Multi-Attribute Grey Relational Analysis 701
of each alternative to the weights obtained from the minimax DEA-based GRA
model in the range [0, 1] after adding the set of constraints (18) to it.
Increasing the parameter θ reduces the deviations between the two systems of
weights obtained from the minimax DEA-based GRA model before and after adding the
set of constraints (18). This may lead to different ranking positions for each
alternative in comparison to the other alternatives. It should be noted that in a
special case where the parameter θ = θmax = 0, we assume Δk (θ) = 1.
Table 4. Results of grey relational coefficient for nuclear waste dump site selection
the measure of relative closeness to the AHP weights for the site under assess-
ment is Δk (θmin ) = 0. On the other hand, solving the minimax DEA-based GRA
model for the site under assessment after adding the set of constraints (18), we
adjust the priority weights of attributes (outputs) obtained from AHP in such
a way that they become compatible with the weights’ structure in the minimax
DEA-based GRA model. This results in the maximum grey relational loss, θmax ,
for the site under assessment (Table 5). In addition, this implies that the mea-
sure of relative closeness to the AHP weights for the site under assessment is
Δk (θmax ) = 1.
Table 6 presents the optimal weights of the attributes as well as the scaling
factor for all nuclear waste dump sites. It should be noted that the priority
weights of AHP (Table 3), used for imposing bounds on the attribute weights,
are obtained as $e_j = w_j / \alpha$.
Going one step further in the solution process of the parametric goal
programming model, we estimate the total deviations from the AHP weights for
each site as the parameter θ varies over 0 ≤ θ ≤ θmax . Table 7 presents
the ranking position of each site based on the minimum deviation from the pri-
ority weights of attributes for θ = 0. It should be noted that in a special case
where the parameter θ = θmax = 0, we assume Δk (θ) = 0. Table 7 shows that
Wells is the best alternative in terms of the grey relational grade and its relative
closeness to the priority weights of attributes.
Nevertheless, increasing the value of θ from 0 to θmax has two main effects
on the performance of the other sites: improving the degree of deviations and
reducing the value of the grey relational grade. This, of course, is a
phenomenon one expects to observe frequently. The graph of Δ(θ) versus θ, as shown
in Fig. 1, is used to describe the relation between the relative closeness to the
priority weights of attributes, versus the grey relational loss for each site. This
Table 5. Minimum and maximum grey relational losses for each nuclear waste dump
site
Table 6. Optimal weights of minimax DEA-based GRA model for all nuclear waste
dump sites bounded by AHP
w1 w2 w3 w4 α
0.1516 0.6308 0.3183 0.0579 1.1575
Table 7. The ranking position of each site based on the minimum distance to priority
weights of attributes
may result in different ranking positions for each site in comparison to the other
sites. In order to clearly discover the effect of grey relational loss on the rank-
ing position of each nuclear dump site, as shown in Table 8 in Appendix, we
performed a Kruskal-Wallis test. The Kruskal-Wallis test compares the medi-
ans of rankings to determine whether there is a significant difference between
them. The result of the test reveals that its p-value is quite smaller than 0.01.
Therefore, we conclude that increasing grey relational loss in the whole range
[0.0, 0.33] changes the ranking position of each site significantly. Note that at
[Figure: Δ(θ) versus θ (0 to 0.35) for the twelve sites: Wells, Rock Sprgs,
Anaheim, Duckwater, Nome, Duquesne, Yakima, Epcot, Newark, Gary, Turkey, and
Santa Cruz.]
Fig. 1. The relative closeness to the priority weights of attributes [Δ(θ)], versus grey
relational loss (θ) for each site
θ = 0 sites can be ranked based on Zk∗ (0) from the closest to the furthest from
the priority weights of attributes. For instance, at θ = 0, Nome, Newark and
Rock Sprgs with grey relational grades of one, are ranked in 12th , 4th and 2nd
places, respectively (Tables 5 and 7). However, with a small grey relational loss
at θ = 0.01, Nome, Newark and Rock Sprgs take 9th , 10th and 5th places in the
rankings, respectively. Using this example, as a guideline, it is relatively easy
to rank the sites in terms of distance to the priority weights of attributes. At
θ = 0.02, Newark moves up into 9th place while Nome and Rock Sprgs drop
to 10th and 6th places, respectively. It is clear that both measures, Zk∗ (0) and
Δk (θ), are necessary to explain the ranking position of each nuclear dump site.
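The Kruskal-Wallis comparison used above can be sketched as a plain implementation of the H statistic with tie correction; the group data below are hypothetical samples, not the paper's rankings:

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic with tie correction: pool and rank all
    observations (average ranks for ties), then compare rank sums."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    rank, ties, i = {}, 0, 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + j + 1) / 2.0   # average of ranks i+1 .. j
        ties += (j - i) ** 3 - (j - i)
        i = j
    h = 12.0 / (n * (n + 1)) * sum(
        sum(rank[x] for x in g) ** 2 / len(g) for g in groups) - 3 * (n + 1)
    return h / (1.0 - ties / float(n**3 - n))

# Three hypothetical groups of rankings
h = kruskal_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
print(round(h, 3))   # 7.2
```

H is compared against a chi-square distribution with k − 1 degrees of freedom; with k = 3 groups the 1% critical value is about 9.21, so a p-value below 0.01 corresponds to H exceeding that threshold.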
4 Conclusion
Appendix
Table 8. The measure of relative closeness to the priority weights of attributes [Δk (θ)]
versus grey relational loss [θ] for each nuclear waste dump site
θ Nome Newark Rock Sprgs Duquesne Gary Yakima Turkey Wells Anaheim Epcot Duckwater Santa Cruz
0 0 0 0 0 0 0 0 0 0 0 0 0
Rank N/A N/A N/A N/A N/A N/A N/A 1 N/A N/A N/A N/A
0.01 0.0451 0.0445 0.2054 0.2051 0.1409 0.0921 0.0369 1.0000 0.7774 0.0418 0.2693 0.8839
Rank 9 10 5 6 7 8 12 1 3 11 4 2
0.02 0.0870 0.0882 0.4108 0.5901 0.2019 0.1596 0.0725 1.0000 0.8644 0.0835 0.4711 0.9474
Rank 10 9 6 4 7 8 12 1 3 11 5 2
0.03 0.1263 0.1310 0.6162 0.8726 0.2586 0.2270 0.1068 1.0000 0.9292 0.1251 0.6439 1.0000
Rank 10 9 6 4 7 8 12 1 3 11 5 2
0.04 0.1627 0.1734 0.8217 1.0000 0.3100 0.2941 0.1405 1.0000 0.9940 0.1667 0.7656 1.0000
Rank 11 9 5 3 7 8 12 1 4 10 6 1
0.05 0.1967 0.2158 1.0000 1.0000 0.3550 0.3610 0.1743 1.0000 1.0000 0.2082 0.8284 1.0000
Rank 11 9 5 1 8 7 12 1 4 10 6 1
⋮ (rows for intermediate values of θ omitted)
0.29 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9070 1.0000 1.0000 1.0000 1.0000 1.0000
Rank 1 1 1 1 1 1 12 1 1 1 1 1
0.3 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9335 1.0000 1.0000 1.0000 1.0000 1.0000
Rank 1 1 1 1 1 1 12 1 1 1 1 1
0.31 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9600 1.0000 1.0000 1.0000 1.0000 1.0000
Rank 1 1 1 1 1 1 12 1 1 1 1 1
0.32 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9865 1.0000 1.0000 1.0000 1.0000 1.0000
Rank 1 1 1 1 1 1 12 1 1 1 1 1
0.33 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Rank 1 1 1 1 1 1 1 1 1 1 1 1
References
1. Birgün S, Güngör C (2014) A multi-criteria call center site selection by hierarchy
grey relational analysis. J Aeronaut Space Technol 7(1):1–8
2. Cooper WW, Seiford LM, Zhu J (2011) Handbook on data envelopment analysis.
Springer, US
3. Deng JL (1982) Control problems of grey systems. Syst Control Lett 1(5):288–294
4. Dyer JS (1990) Remarks on the analytic hierarchy process. Manage Sci 36(3):249–
258
5. Goyal S, Grover S (2012) Applying fuzzy grey relational analysis for ranking the
advanced manufacturing systems. Grey Syst 2(2):284–298
6. Hashimoto A, Wu DA (2004) A DEA-compromise programming model for com-
prehensive ranking. J Oper Res Soc Jpn 2(2):73–81
7. Hatami-Marbini A, Saati S, Tavana M (2010) Data envelopment analysis with
fuzzy parameters: an interactive approach. Core Discuss Pap Rp 2(3):39–53
8. Hou J (2010) Grey relational analysis method for multiple attribute decision mak-
ing in intuitionistic fuzzy setting. J Converg Inf Technol 5(10):194–199
9. Javanbarg MB, Scawthorn C et al (2012) Fuzzy AHP-based multicriteria decision
making systems using particle swarm optimization. Expert Syst Appl 39(1):960–
966
1 Introduction
Supplier selection is one of the most challenging issues in the supply chain.
To develop a sustainable supply chain, companies need to select the right
suppliers: appropriate supplier selection helps companies provide high-quality
products at the right price and at the right time. Supply chain management
(SCM) can be defined as the process of sourcing raw materials, converting raw
materials into finished goods, and delivering products to final customers. The
purpose of SCM is to reduce inventory and production cycle time and to
increase production, while achieving the firm's long-term goals with respect
to customer satisfaction. Appropriate supplier selection is an important task
of the purchasing department in
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 58
A Fuzzy Multi-criteria Decision Making Approach 709
2 Literature Review
Recently, supply chain management and the supplier selection process have
gained great attention in the literature. Supplier selection is a
multi-criteria problem, but traditionally it was based on a single criterion:
price. The supplier selection process is one of the most important tasks for
every firm in establishing an effective supply chain. In a competitive
environment, selecting the right supplier is critical, because a capable
supplier helps an organization produce high-quality products at a reasonable
price. Various methodologies have been used in the past to select suppliers.
One study [1] proposed an integrated data envelopment analysis (DEA) and group
analytical hierarchy process (GAHP) methodology to evaluate and select the
most efficient supplier. Chatterjee et al. [12] proposed a case-based
reasoning and decision support system, including multi-attribute analysis, to
measure supplier management performance. Supplier evaluation usually considers
price, quality, and flexibility, but in that work some environmental factors
were also incorporated into the supplier selection process using a
knowledge-based system.
Humphreys et al. [18] presented two methodologies for the supplier selection
problem and compared the relative performance of organizations: VIKOR and
outranking methods were used to select suppliers and compare their relative
performance. A multi-criteria decision-making model was applied in the
construction industry for supplier evaluation and selection [21]; it enabled
the industry to build good relationships among its suppliers, managers, and
partners.
The balance and ranking method has been used to select suppliers based on
multiple criteria: supplier profitability, technological capability,
conformance quality, and relationship closeness. Applying the model involves
three steps: construct the outranking matrix, determine the relative frequency
of each supplier, and then triangularize the outranking matrix with the
balancing method to obtain an implicit order of the suppliers [24]. The
activity-based costing (ABC) technique has been proposed for vendor evaluation
and selection; it helps calculate the total cost a vendor causes in a
company's production process and thus judge supplier performance [19]. A
multi-criteria group decision model for supplier selection, based on ELECTRE
IV and VIP analysis, has also been applied: in the first stage, ELECTRE IV
determines the ranking of the different criteria; in the second stage, VIP
analysis selects among the alternatives [2]. A dimensional analysis technique
has been proposed to measure supplier performance and derive a vendor
performance index (VPI); both qualitative and quantitative criteria were used
in the evaluation, and a fuzzy approach was applied to overcome human
blindness and vagueness [16]. Supplier selection is a crucial task for every
organization. The analytical network process (ANP) has been proposed to select
the best supplier based on three criteria: the supplier's business structure,
manufacturing capability, and quality system; these three main criteria were
further divided into 45 sub-criteria [10,27].
A neural network (NN) approach has been used to select potential suppliers;
the determinant factors of quality, performance history, geographical location,
and price were used in the supplier selection process.
An AHP model was proposed for vendor selection in a telecommunications
company. Vendor selection is a complex, multi-criteria decision-making
problem; in the AHP application, two strategic criteria, cost and quality,
were selected for vendor evaluation and further categorized into 26
sub-criteria [23]. Shyur and Shih [22] developed an integrated model for
vendor selection, a hybrid of the analytical network process (ANP), TOPSIS,
and the nominal group technique.
A fuzzy multi-objective integer programming method was developed for the
supplier selection problem, incorporating three objectives: cost minimization,
on-time delivery maximization, and quality maximization [14]. A
strategy-aligned fuzzy simple multi-attribute rating technique (SMART) has
been proposed to solve the supplier selection problem in the supply chain; it
involves three stages: define the objective and strategy, develop the supply
chain strategy, and identify the criteria for supplier selection [6,15]. A
fuzzy analytical hierarchy process method has also been proposed with the
concept of benefits, opportunities, costs, and risks (BOCR); with this
approach, suppliers can be evaluated from various aspects based on
quantitative and qualitative criteria, and fuzzy set theory is applied to
overcome human ambiguity in the decision-making process.
In this paper, the fuzzy AHP method is applied to select the best supplier. In
fuzzy AHP, linguistic variables and triangular fuzzy numbers are used as the
pairwise comparison scale to determine the priorities of the main criteria and
sub-criteria. The extent analysis method is then applied to calculate the
final weight priorities of the main decision criteria, sub-criteria, and
alternatives.
3 Fuzzy AHP
The analytical hierarchy process (AHP) was introduced by [20]. AHP is a very
useful method for solving multi-criteria decision-making problems: it offers a
hierarchical procedure that uses both subjective and objective measures. The
hierarchical model of the supplier selection problem consists of four levels:
the objective, the criteria, the sub-criteria, and finally the decision among
alternatives.

The advantages of AHP are its simplicity and ease of use, but its most
important disadvantage is its crisp nine-point scale, which cannot handle the
uncertainty inherent in comparing decision variables. Since traditional AHP
cannot solve uncertain decision-making problems, linguistic variables and
triangular fuzzy numbers are used to decide the priorities of the decision
variables, and the extent analysis method is applied to calculate the priority
weights from the triangular fuzzy numbers. The fuzzy analytical hierarchy
process is an extension of AHP that handles the fuzziness of the supplier
selection process based on both quantitative and qualitative criteria.
Fuzzy set theory and fuzzy number
712 A. Sarwar et al.
Fuzzy set theory was proposed by [29]. It helps to overcome human vagueness,
imprecise data, and uncertainty in decision making; it is used to represent
human vagueness and uncertainty mathematically in decision problems. A fuzzy
set is a class of elements with a continuum of grades of membership, where the
membership value of each object lies between 0 and 1. A triangular fuzzy
number is a fuzzy number whose membership function is defined by three
numbers [9], where the parameter m denotes the most promising value and l and
u indicate the lower and upper bounds of the fuzzy event.
The membership function $\mu_{\tilde{A}}$ is defined as

$$\mu_{\tilde{A}}(x) = \begin{cases} 0, & x < l, \\ (x - l)/(m - l), & l \le x \le m, \\ (u - x)/(u - m), & m \le x \le u, \\ 0, & x > u. \end{cases} \qquad (1)$$
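Eq. (1) translates directly into code; the sample evaluations use an arbitrary triangular fuzzy number for illustration:

```python
def tri_membership(x, l, m, u):
    """Membership degree of x in the triangular fuzzy number (l, m, u), Eq. (1)."""
    if x < l or x > u:
        return 0.0
    if x <= m:
        return (x - l) / (m - l) if m > l else 1.0   # rising edge
    return (u - x) / (u - m)                          # falling edge

print(tri_membership(3.5, 2.5, 3.5, 4.5))   # peak value m -> 1.0
print(tri_membership(3.0, 2.5, 3.5, 4.5))   # halfway up the rising edge -> 0.5
print(tri_membership(5.0, 2.5, 3.5, 4.5))   # outside the support -> 0.0
```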
The extent analysis method was introduced by [25] and is applied here to the
supplier selection problem. In traditional AHP, a nine-point scale is used to
determine the priority of one criterion over another; in fuzzy AHP, linguistic
variables and triangular fuzzy numbers are used instead. Let
$X = \{x_1, x_2, \cdots, x_n\}$ be the object set of alternatives and
$P = \{p_1, p_2, \cdots, p_m\}$ be the goal set of supplier selection. Each
object is taken in turn and extent analysis is performed for each goal
respectively [5]. Then the $m$ extent analysis values of each object can be
obtained, written as:
$$M_{g_i}^1, M_{g_i}^2, \cdots, M_{g_i}^m, \quad i = 1, 2, \cdots, n, \qquad (2)$$

where $M_{g_i}^j$ ($j = 1, 2, \cdots, m$) are triangular fuzzy numbers. The
steps of the extent analysis method proposed by [25] are as follows:
Step 1. The value of the synthetic extent analysis with respect to the ith object is defined as

S_i = Σ_{j=1}^{m} M_gi^j ⊗ [ Σ_{i=1}^{n} Σ_{j=1}^{m} M_gi^j ]^{−1}.  (3)
To obtain Σ_{j=1}^{m} M_gi^j, perform the fuzzy addition operation on the m extent analysis values for a particular matrix, such that

Σ_{j=1}^{m} M_gi^j = ( Σ_{j=1}^{m} l_j, Σ_{j=1}^{m} m_j, Σ_{j=1}^{m} u_j ),  (4)
A Fuzzy Multi-criteria Decision Making Approach 713
and to obtain [ Σ_{i=1}^{n} Σ_{j=1}^{m} M_gi^j ]^{−1}, perform the fuzzy addition operation on the M_gi^j (j = 1, 2, ..., m) values, such that

Σ_{i=1}^{n} Σ_{j=1}^{m} M_gi^j = ( Σ_{i=1}^{n} l_i, Σ_{i=1}^{n} m_i, Σ_{i=1}^{n} u_i ),  (5)

and then calculate the inverse of the vector in Eq. (5), such that

[ Σ_{i=1}^{n} Σ_{j=1}^{m} M_gi^j ]^{−1} = ( 1/Σ_{i=1}^{n} u_i, 1/Σ_{i=1}^{n} m_i, 1/Σ_{i=1}^{n} l_i ).  (6)
Step 3. The weight vector is given by

W′ = (d′(A_1), d′(A_2), ..., d′(A_n))^T,  (11)

where A_i (i = 1, 2, ..., n) are the n elements.
Step 4. Via normalization, the normalized weight vector is

W = (d(A_1), d(A_2), ..., d(A_n))^T,  (12)

where W is a non-fuzzy number [25]. It gives the priority weight of one alternative over another.
4 Numerical Example
[Fig.: Hierarchy of the supplier selection problem. Goal: selection of supplier; criteria include product price, production capacity flexibility, and product durability.]
Price = (2.5, 3.5, 4.5) ⊗ (1/12.67, 1/9.67, 1/6.86) = (0.20, 0.36, 0.66),  (13)
Quality = (3.17, 4, 5.5) ⊗ (1/12.67, 1/9.67, 1/6.86) = (0.25, 0.41, 0.80),  (14)
Flexibility = (1.9, 2.17, 2.67) ⊗ (1/12.67, 1/9.67, 1/6.86) = (0.15, 0.22, 0.39).  (15)
The next step is to calculate the degree of possibility using Eqs. (7) and (8):
V(Price ≥ Quality) = (0.25 − 0.66)/((0.36 − 0.66) − (0.41 − 0.25)) = 0.89,  (16)
V(Price ≥ Flexibility) = 1,  (17)
V(Quality ≥ Flexibility) = 1,  (18)
V(Quality ≥ Price) = 1,  (19)
V(Flexibility ≥ Price) = (0.20 − 0.39)/((0.22 − 0.39) − (0.36 − 0.20)) = 0.57,  (20)
V(Flexibility ≥ Quality) = (0.25 − 0.39)/((0.22 − 0.39) − (0.41 − 0.25)) = 0.42.  (21)
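The values in Eqs. (16)–(21) follow from the standard degree-of-possibility formula of the extent analysis method; a minimal sketch (function name illustrative):

```python
def possibility(m2, m1):
    """Degree of possibility V(M2 >= M1) for TFNs (l, m, u) in extent analysis."""
    l1, mid1, u1 = m1
    l2, mid2, u2 = m2
    if mid2 >= mid1:
        return 1.0
    if l1 >= u2:
        return 0.0
    return (l1 - u2) / ((mid2 - u2) - (mid1 - l1))

# Synthetic extent values from Eqs. (13)-(15).
price = (0.20, 0.36, 0.66)
quality = (0.25, 0.41, 0.80)
flexibility = (0.15, 0.22, 0.39)

print(round(possibility(price, quality), 2))        # Eq. (16): 0.89
print(possibility(quality, price))                  # Eq. (19): 1.0
print(round(possibility(flexibility, quality), 2))  # Eq. (21): 0.42
```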
Table 10. Fuzzy pairwise comparison matrix of alternatives with respect to production
capacity flexibility
Table 11. Fuzzy pairwise comparison matrix of alternatives with respect to customiza-
tion flexibility
5 Conclusion
Supplier selection is a broad and complex task for firms. In this paper, the multi-criteria decision-making approach fuzzy AHP, which includes both quantitative and qualitative criteria, has been used to measure supplier performance. In this model, linguistic variables and triangular fuzzy numbers are used to overcome the vagueness and uncertainty of decision makers. Multiple criteria help decision makers measure the overall performance of suppliers more efficiently. The advantage of this approach is that it takes less time and is more accurate in solving the supplier selection problem. Furthermore, the fuzzy AHP model can be applied in any manufacturing company to choose the best supplier; its application is significantly effective and easily implemented.
Acknowledgements. The authors would like to express their great appreciation to the editors
and anonymous referees for their helpful and constructive comments and suggestions,
which helped to improve this paper. This research is supported by the Youth Program
of National Natural Science Foundation of China (Grant No. 71501137), the Gen-
eral Program of China Postdoctoral Science Foundation (Grant No. 2015M572480),
the International Postdoctoral Exchange Fellowship Program of China Postdoctoral
Council (Grant No. 20150028), the Project of Research Center for System Sciences
and Enterprise Development (Grant No. XQ16B05), and Sichuan University (Grant
No. SKQY201647).
References
1. Ahadian B, Abadi AGM, Chaboki RM (2012) A new DEA-GAHP method for
supplier selection problem. Manage Sci Lett 2(7):2485–2492
2. Alencar LH, Almeida AT (2008) Multicriteria decision group model for the selection
of suppliers. Pesquisa Operacional 28(2):321–337
3. Ayhan MB (2013) A fuzzy AHP approach for supplier selection problem: a case
study in a gear motor company. Int J Manag Value Supply Chains 4(3):11–23
4. Boer LD, Labro E, Morlacchi P (2001) A review of methods supporting supplier
selection. Eur J Purchasing Supply Manage 7(2):75–89
5. Chang DY (1992) Extent analysis and synthetic decision. Optim Tech Appl 1:352–
355
6. Chou SY, Chang YH (2008) A decision support system for supplier selection based
on a strategy-aligned fuzzy smart approach. Expert Syst Appl 34(4):2241–2253
7. Demirtas EA, Ustun O (2009) Analytic network process and multi-period goal
programming integration in purchasing decisions. Comput Ind Eng 56(2):677–690
8. Dickson GW (1966) An analysis of vendor selection systems and decision. In: Mate-
rials science forum, pp 1377–1382
9. Dubois D, Henri P (1980) Systems of linear fuzzy constraints. Fuzzy Sets Syst
3(1):37–48
10. Gencer C, Grpinar D (2007) Analytic network process in supplier selection: a case
study in an electronic firm. Appl Math Model 31(11):2475–2486
11. Ho W, Xu X, Dey PK (2010) Multi-criteria decision making approaches for supplier
evaluation and selection: a literature review. Eur J Oper Res 202(1):16–24
12. Humphreys P, Mcivor R, Chan F (2003) Using case-based reasoning to evaluate
supplier environmental management performance. Expert Syst Appl 25(2):141–153
13. Kahraman C, Tijen E, Buyukozkan G (2006) A fuzzy optimization model for QFD
planning process using analytic network approach. Eur J Oper Res 171(2):390–411
14. Kumar M, Vrat P, Shankar R (2006) A fuzzy programming approach for vendor
selection problem in a supply chain. Int J Prod Econ 101(2):273–285
15. Lee AHI (2009) A fuzzy supplier selection model with the consideration of benefits,
opportunities, costs and risks. Expert Syst Appl 36(2):2879–2893
16. Li CC, Fun YP, Hung JS (1997) A new measure for supplier performance evalua-
tion. IIE Trans 29(9):753–758
17. Pal O, Gupta AK, Garg R (2013) Supplier selection criteria and methods in supply
chains: a review. Int J Soc Behav Educ Econ Bus Ind Eng 7(10):2667–2673
18. Prasenjit C, Poulomi M, Shankar C (2011) Supplier selection using compromise
ranking and outranking methods. J Ind Eng Int 7(14):61–73
19. Roodhooft F, Konings J (1995) Vendor selection and evaluation an activity based
costing approach. Eur J Oper Res 96(1):97–102
20. Saaty TL (1980) The analytic hierarchy process: planning, priority setting, resource
allocation. McGraw-Hill, New York
21. Schramm F, Morais DC (2012) Decision support model for selecting and evaluating
suppliers in the construction industry. Pesquisa Operacional 32(32):643–662
22. Shyur HJ, Shih HS (2006) A hybrid mcdm model for strategic vendor selection.
Math Comput Model 44(7–8):749–761
23. Tam MCY, Tummala VMR (2001) An application of the AHP in vendor selection
of a telecommunications system. Omega 29(2):171–182
24. Vahdani B, Zandieh M, Alemtabriz A (2008) Supplier selection by balancing and
ranking method. J Appl Sci 8(19):3467–3472
25. Veerabathiran R, Srinath KA (2012) Application of the extent analysis method on
fuzzy AHP. Int J Eng Sci Technol 4(7):649–655
26. Weber CA, Current JR, Benton WC (1991) Vendor selection criteria and methods.
Eur J Oper Res 50(1):2–18
27. Wei S, Zhang J, Li Z (1997) A supplier-selecting system using a neural network. In:
IEEE international conference on intelligent processing systems, vol 1, pp 468–471
28. Yahya S, Kingsman B (1999) Vendor rating for an entrepreneur development pro-
gramme: a case study using the analytic hierarchy process method. J Oper Res
Soc 50(9):916–930
29. Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
Cement Plant Site Selection Problem
with Carbon Emission Trading Mechanism
1 Introduction
Cement is the most widely used essential construction material and the main component of concrete; in addition, the cement industry is one of the most energy-intensive industries [4,5]. The cement industry has always been among the largest CO2 emission sources: almost 5–7% of global CO2 emissions are caused by cement plants, and about 900 kg of CO2 is emitted to the atmosphere for every ton of cement produced [3].
As carbon emission trading becomes accepted by many countries, how the cement industry should plan production and select plant sites to maximize profit and customer satisfaction under a carbon dioxide trading mechanism has become one of its most significant and practically relevant decision-making challenges. As a consequence, the cement industry worldwide is facing growing challenges in conserving material and energy resources, as well as reducing its CO2 emissions [7]. Because of the high carbon emissions of the cement industry, a series of studies have attempted to reduce them. Benhelal [2] presented a new design of the pyro-processing unit in a cement
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 59
722 L. Fan et al.
factory and showed that the novel process can reduce CO2 emissions by 66% compared to the existing process. Madlool [6] found that carbon emissions can be reduced by applying energy efficiency measures in the cement industry.
However, there are few studies on the cement plant site selection problem in the literature. Ataei [1] used a multi-criteria decision-making method to rank alternative plant locations and found the best site. Existing work on the location of cement plants has mainly considered the cement industry's perspective. In fact, the customers' purchase-choosing behavior should be taken into account in the cement plant location problem. It is often difficult to estimate cement sales revenues because these are decided by the customers; in turn, the cement industry can also influence the customers' purchase-choosing behavior. Therefore, the cement plant location problem is formulated as a bi-level model with two decision makers: the cement industry and the customers. Because the purchase-choosing behavior among the customers is uncertain, it is more reasonable to regard the customers' demands as fuzzy random variables. Therefore, this study aims to develop a fuzzy random bi-level model for the cement plant location problem with a carbon emission trading mechanism.
The structure of the paper is as follows. Section 2 states the problem. A fuzzy random bi-level multi-objective model for the location of cement plants with a carbon emission trading mechanism is introduced in Sect. 3, and a solution algorithm based on the KKT conditions and a GA (genetic algorithm) is presented in Sect. 4. Section 5 gives a numerical example to illustrate the application of the model. Finally, some conclusions are presented in Sect. 6.
2 Problem Statement
The problem considered in this paper is a decision-making problem between the cement industry and its consumers under a carbon emission trading mechanism. In the cement plant location problem, because each consumer is an independent individual, the cement plant site selection plan is not only determined by the cement industry but also influenced by the consumers. The cement industry chooses some points at which to establish cement plants and decides the output and sale price; after learning the cement price, the consumers make their decisions in an optimal manner to meet their demand while minimizing their individual costs. Therefore, the cement industry influences the consumers' purchase-choosing behavior by adjusting the production plan and cement sale price, whereas each consumer tries to meet their demand at minimal cost based on the decisions made by the cement industry.
This paper considers the cement plant site selection problem as a leader-follower game in which the cement industry is the leader and the consumers, who can freely choose the cement plant from which to purchase cement, are the followers. In this situation, the cement plant site selection problem can be represented as a bi-level problem in which the cement industry is the upper level and the consumers are the lower level (see Fig. 1).
Cement Plant Site Selection Problem 723
Fig. 1. The bi-level relationship between the cement plant and customers
3 Modeling Formulation
In this section, the relevant assumptions are first outlined, and then the bi-level model for the cement plant site selection problem under the carbon emission trading mechanism with fuzzy random variables is constructed. The mathematical description of the problem is given as follows.
3.1 Assumptions
To construct a model for the location problem of cement plant, the following
assumptions are adopted:
(1) If customers buy cement, the cement plant must transport the cement for them free of charge.
(2) The total number of alternative points is fixed.
(3) There are no old cement plants before the new ones are built.
3.2 Notations
The following mathematical notations are used to describe the cement plant
location problem.
Index:
Certain parameters:
Uncertain parameters:
Decision variables:
It is not easy for the cement industry to make decisions that maximize profits and service satisfaction at the same time. Moreover, since cement plants are among the major contributors to carbon emissions, the location problem is more meaningful to study under a carbon emission mechanism.
(1) Objective Functions
In this paper, we consider the cement plant, which has two objectives, as the upper level. In order to maximize economic profit, both improving sales and saving costs are taken into account.
1 Sale revenues
Cement industry revenues come from cement sales, which are related to the sale quantity and cement price. So the expected value of the sales revenue is Σ_{n=1}^{N} P_n Σ_{i=1}^{I} Σ_{j=1}^{J} x_ijn.
2 Carbon emission trading cost
The expected carbon emission trading cost is

( Σ_{i=1}^{I} [ (11/10) Σ_{n=1}^{N} O_in β_n1 + (11/6) Σ_{n=1}^{N} O_in β_n2 + α_i1 Σ_{n=1}^{N} O_in + α_i2 Σ_{n=1}^{N} Σ_{j=1}^{J} x_ijn ] − R ) E_d(η̃).  (3)
3 Construction cost
The construction costs take into account the fixed cost E_d(f̃_i) and the variable costs, which are related to the cement quantity: Σ_{n=1}^{N} O_in E_d(ṽ_i1) + Σ_{n=1}^{N} Σ_{j=1}^{J} x_ijn E_d(ṽ_i2).
According to the aforementioned information, the equivalent of the total economic profit of the cement industry is established as:
max F = Σ_{n=1}^{N} P_n Σ_{i=1}^{I} Σ_{j=1}^{J} x_ijn − E_d(η̃) [ Σ_{i=1}^{I} ( (11/10) Σ_{n=1}^{N} O_in β_n1 + (11/6) Σ_{n=1}^{N} O_in β_n2 + α_i1 Σ_{n=1}^{N} O_in + α_i2 Σ_{n=1}^{N} Σ_{j=1}^{J} x_ijn ) − R ] − Σ_{i=1}^{I} u_i E_d(f̃_i) − Σ_{i=1}^{I} ( Σ_{n=1}^{N} O_in E_d(ṽ_i1) + Σ_{n=1}^{N} Σ_{j=1}^{J} x_ijn E_d(ṽ_i2) ).
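The upper-level objective can be evaluated directly for a candidate plan. The following sketch uses hypothetical toy data (one site, one customer, one cement type); the function name and all numbers are illustrative, not from the paper's case study, and the expected values E_d(·) are passed in as plain numbers.

```python
def expected_profit(P, x, O, u, beta1, beta2, alpha1, alpha2, R, eta, f, v1, v2):
    """Expected total economic profit F of the cement industry (upper-level objective).
    x[i][j][n] is the quantity of cement type n sold by plant i to customer j."""
    I, J, N = len(O), len(x[0]), len(P)
    # Sales revenue: price times quantity over all plants, customers, cement types.
    sales = sum(P[n] * x[i][j][n] for i in range(I) for j in range(J) for n in range(N))
    # Emissions from production (the 11/10, 11/6 and alpha terms of the objective),
    # net of the allowance R and priced at eta = E_d of the allowance price.
    emissions = sum(11/10 * sum(O[i][n] * beta1[n] for n in range(N))
                    + 11/6 * sum(O[i][n] * beta2[n] for n in range(N))
                    + alpha1[i] * sum(O[i][n] for n in range(N))
                    + alpha2[i] * sum(x[i][j][n] for j in range(J) for n in range(N))
                    for i in range(I))
    carbon_cost = eta * (emissions - R)
    fixed_cost = sum(u[i] * f[i] for i in range(I))
    variable_cost = sum(sum(O[i][n] for n in range(N)) * v1[i]
                        + sum(x[i][j][n] for j in range(J) for n in range(N)) * v2[i]
                        for i in range(I))
    return sales - carbon_cost - fixed_cost - variable_cost

# One plant, one customer, one cement type (toy numbers).
F = expected_profit(P=[2.0], x=[[[5.0]]], O=[[10.0]], u=[1],
                    beta1=[0.1], beta2=[0.0], alpha1=[0.0], alpha2=[0.0],
                    R=0.5, eta=1.0, f=[3.0], v1=[0.1], v2=[0.2])
```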
(2) Constraints
The cement industry will choose no more than M points from the total alternative points at which to build cement plants, so we have:

Σ_{i=1}^{I} u_i ≤ M,  (4)

u_i = 0 or 1,  i = 1, 2, ..., I.  (5)

The variable and fixed costs, which quickly become sunk costs, must not exceed G:

Σ_{i=1}^{I} ( Σ_{n=1}^{N} O_in E_d(ṽ_i1) + Σ_{n=1}^{N} Σ_{j=1}^{J} x_ijn E_d(ṽ_i2) ) + Σ_{i=1}^{I} E_d(f̃_i) ≤ G.  (6)
The lower-level problem represents the customers' choice behavior and how the demand is distributed among cement plants; that is, each customer assigns his demand among the cement plants to minimize his total cost.
(1) Minimum Cost Objective
The lower-level objective is the minimum cost, determined by the cement price and purchase quantity; the customers' total cost is as follows:

min L = Σ_{j=1}^{J} Σ_{n=1}^{N} P_n Σ_{i=1}^{I} x_ijn.  (7)
(2) Constraints
The purchase plan must satisfy the cement demand:

Σ_{i=1}^{I} x_ijn ≥ E_d(D̃_jn).  (8)

The total purchase quantity from cement plant i must not exceed its capacity:

Σ_{j=1}^{J} x_ijn ≤ O_in.  (9)
The lower-level customers choose the cement plants and the amounts of cement bought from them. The upper-level cement industry chooses some points from the fixed set of points to maximize its profit and improve service satisfaction; the customers try to influence the cement industry's decisions to minimize their costs. On the one hand, the lower the cement price, the lower the cost to the customers; on the other hand, the higher the cement price, the higher the profit to the cement industry. The location of cement plants can thus be represented as a game in which each side can influence, but not control, the other.
The complete location problem involves the cement industry's output and sale price decisions and the consumers' purchase decisions. Therefore, the global model includes bi-level objectives and constraints. In a noncooperative environment, the bi-level programming problem can be expressed as:
consider bi-level programming problems and it can be expressed as:
⎧
⎪ max F = N P I J x − E (η̄˜) I
⎪ 11
N
O β 11
N
O β
⎪
⎪ n ijn d 10 in n1 + 6 in n2
⎪
⎪ Oin ,Pn i=1 n=1
⎪
⎪
n=1 i=1 j=1 n=1
⎪
⎪ N N J I
⎪
⎪ +αi1 Oin + αi2 xijn − R − ˜
ui Ed (f¯i )
⎪
⎪
⎪
⎪
n=1 n=1 j=1 i=1
⎪
⎪
⎪
⎪ I N N J
⎪
⎪ − Oin Ed (v̄˜i1 ) + xijn Ed (v̄˜i2 )
⎪
⎪
⎪
⎪ ⎧ i=1 n=1 n=1 j=1
⎪
⎪
⎪
⎪ ⎪ I
⎪
⎪
⎪ ⎪
⎪ ui ≤ M i = 1, 2, · · · , I
⎪
⎪ ⎪
⎪
⎪
⎪ ⎪
⎪
i=1
⎨ ⎪ ui = i = 1, 2, · · · , I
⎪
⎪
⎪ 0 or 1,
⎪ ⎪
⎪ I N N J I (10)
⎪
⎪ ⎪
⎪ Oin Ed (v̄˜i1 ) + xijn Ed (v̄˜i2 ) + Ed (f˜¯i ) ≤ G
⎪
⎪ ⎪
⎪
⎪
⎪ ⎪
⎪ i=1 n=1 n=1 j=1 i=1
⎪
⎪ ⎪
⎪
⎪
⎪ ⎨ where x ijn solve
⎪
⎪
⎪ s.t. ⎪
⎪
J N I
⎪
⎪ ⎪
⎪ min L = Pn xijn
⎪
⎪ ⎪ ijn⎧ j=1 n=1
⎪ x
⎪
⎪ ⎪
⎪
i=1
⎪
⎪ ⎪
⎪ ⎪ I x
⎪
⎪ ⎪
⎪ ⎪
⎪ ijn ≥ Ed (Djn )
˜
¯ j = 1, 2, · · · , J, n = 1, 2, · · · , N
⎪
⎪ ⎪
⎪ ⎪
⎨ i=1
⎪
⎪ ⎪
⎪
⎪
⎪ ⎪
⎪ s.t. J
⎪
⎪ ⎪
⎪ ⎪ xijn ≤ Oin i = 1, 2, · · · , I, n = 1, 2, · · · , N
⎪
⎪ ⎪
⎪ ⎪
⎪
⎪
⎩ ⎪
⎩ ⎪
⎩ j=1
xijn ≥ 0 i = 1, 2, · · · , I, j = 1, 2, · · · , J, n = 1, 2, · · · , N
4 Solution Approach
The proposed model (10) is a bi-level decision-making problem, which reflects the interactive relationship between the cement industry and the consumers. Many algorithms are available for solving bi-level programming. Here, the bi-level model (10) is converted into a single-level model with additional constraints by using the KKT optimality conditions.
The KKT conditions turn cement sales from a game between the cement industry and the consumers into a single decision-making problem of the cement industry alone. The cement industry must try to control the sale price to gain more market share, while the consumers make purchase decisions based on the cement price, which reflects the competition.
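To make the leader-follower structure concrete, the following toy sketch enumerates the leader's open/close decisions and solves the follower exactly. All data are hypothetical, carbon trading is omitted for brevity, and exhaustive enumeration stands in for the paper's KKT/GA machinery. Note that because the unit price here does not depend on the plant, any allocation meeting demand exactly is follower-optimal.

```python
import itertools

# Toy instance: one cement type, two candidate sites, two customers (hypothetical data).
DEMAND = [30.0, 20.0]      # expected demand of each customer
CAPACITY = [40.0, 40.0]    # output capacity of each site, if opened
PRICE = 2.0                # sale price per unit
FIXED = [25.0, 30.0]       # expected fixed construction cost per site
VAR = [0.5, 0.4]           # expected variable cost per unit at each site

def follower(u):
    """Lower level: allocate demand to open plants. The unit price is the same at
    every plant, so any allocation meeting demand exactly minimizes the customers' cost."""
    left = [CAPACITY[i] if u[i] else 0.0 for i in range(len(u))]
    x = [[0.0] * len(u) for _ in DEMAND]
    for j, d in enumerate(DEMAND):
        for i in range(len(u)):
            take = min(d, left[i])
            x[j][i] = take
            left[i] -= take
            d -= take
        if d > 1e-9:
            return None  # infeasible: open plants cannot cover this customer's demand
    return x

def leader_profit(u):
    """Upper level: sales revenue minus fixed and variable costs for site plan u."""
    x = follower(u)
    if x is None:
        return float("-inf")
    sales = PRICE * sum(map(sum, x))
    cost = sum(FIXED[i] * u[i] for i in range(len(u)))
    cost += sum(VAR[i] * sum(x[j][i] for j in range(len(DEMAND))) for i in range(len(u)))
    return sales - cost

best = max(itertools.product([0, 1], repeat=2), key=leader_profit)
```

In this tiny instance only opening both sites is feasible, so the enumeration returns `best = (1, 1)`.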
As a case study, the Conch cement company is considered. The average sales growth rate of Conch cement was 31.01% over the last three years, ranking 471st among 1710 listed companies and 9th of 29 in the construction industry. At the Copenhagen climate conference in 2009, the Chinese government pledged that China would reduce its carbon emission intensity (i.e. carbon emissions per unit of GDP) by 40%–45% from 2005 levels by 2020. In order to achieve this target, the government has proposed policies to control carbon emissions. As one of the largest companies in the construction industry, Conch cement must be particularly cautious when dealing with decision problems such as the cement plant selection problem under a limited carbon emission allowance.
Table 1. Ca and Mg percentages of cement type n

Cement n | Ca | Mg
n = 1 | β11 = 12.19% | β12 = 20.31%
n = 2 | β21 = 23.16% | β22 = 38.59%
airport, house; the relevant information for these consumers, such as their demand for cement, is shown in Table 4. The data on the Ca and Mg percentages in Table 1 are from the Conch cement company. Most data in Tables 2 and 3 were estimated based on expert consultations as well as the data in Table 4. The "allocation cap" was calculated to be 849.6 × 10^4 tonnes, based on the total maximum output of 944 × 10^4 tonnes. The allowance price is (20, γ, 60), where γ ∼ N(27.5, 10.36).
Almost 5–7% of global CO2 emissions are caused by cement plants; therefore, the government allocates the allowance to the cement industry proportionally. To test the efficiency of the proposed model, the following three scenarios for the carbon emission allowance R are considered.
6 Policy Implications
The above results indicate that the carbon emission allowance plays an important role in the cement plant site selection problem. When the government allocates a low-level allowance (i.e. 1 × 10^6 tonnes) to the cement company, the company controls its output to just meet the consumers' demand, decreases its cost by limiting carbon emissions, and sets a higher sale price. When the cement company receives a medium-level allowance (i.e. 4 × 10^6 tonnes), it expands its scale suitably, increases output, and adopts a lower sale price policy to improve the sales quantity. When the government allocates a high-level allowance (i.e. 8 × 10^6 tonnes), the company builds more cement plants and adopts a low-price policy to arouse extra consumer demand and sell more cement.
As one of the largest contributors to carbon emissions, the cement industry makes different decisions under different allowances. On the one hand, with a high-level allowance it builds new cement plants and exploits new markets: the demand for cement will increase rapidly as the population grows, and the cement company needs to produce more cement to meet consumers' demand. On the other hand, the cement industry will control its output or conduct more research to develop new technologies that decrease carbon emissions in the cement production and transportation processes. Otherwise, if the cement company increases output without new technology, it will face huge emission costs.
7 Conclusion
In this paper, a bi-level programming model with fuzzy random variables is proposed to deal with the cement plant site selection problem under a carbon emission trading mechanism. Maximizing total profit and minimizing total cost are the targets of the cement company (the upper level) and the consumers (the lower level), respectively. In contrast to previous studies, this research takes the natural release and the energy consumption of both the production and transportation processes as the total carbon emissions of the cement company. To solve the complex programming model, a KKT conversion technique and a multi-cut-point-based genetic algorithm are adopted as the solution method. Finally, the results of the case study on the Conch cement plant site selection problem demonstrate the efficiency of the optimization model and method.
Acknowledgements. We are thankful for financial support from the Research Center
for Systems Science & Enterprise Development, Key Research of Social Sciences Base
of Sichuan Province (Grant No. Xq15C01), the National Natural Science Foundation
of China for Younger Scholars of China (Grant No. 71601134) and Project funded by
China Postdoctoral Science Foundation.
References
1. Ataei M (2005) Multicriteria selection for an alumina-cement plant location in East
Azerbaijan province of Iran. J S Afr Inst Min Metall 105(7):507–513
2. Benhelal E, Zahedi G, Hashim H (2012) A novel design for green and economical
cement manufacturing. J Cleaner Prod 22(1):60–66
3. Benhelal E, Zahedi G et al (2013) Global strategies and potentials to curb CO2
emissions in cement industry. J Cleaner Prod 51(1):142–161
4. Kim SH (2015) Ventilation impairment of residents around a cement plant. Ann
Occup Environ Med 27(1):3
5. Lamas WDQ, Palau JCF, Camargo JRD (2013) Waste materials co-processing in
cement industry: ecological efficiency of waste reuse. Renew Sustain Energy Rev
19(1):200–207
6. Madlool NA, Saidur R et al (2013) An overview of energy savings measures for
cement industries. Renew Sustain Energy Rev 19(1):18–29
7. Schneider M, Romer M et al (2011) Sustainable cement production present and
future. Cem Concr Res 41(7):642–650
8. Stefanović GM, Vučković G et al (2010) CO2 reduction options in cement industry -
The Novi Popovac case. Therm Sci 14(3):671–679
9. Strazza C, Borghi AD et al (2011) Resource productivity enhancement as means for
promoting cleaner production: analysis of co-incineration in cement plants through
a life cycle approach. J Cleaner Prod 19(14):1615–1621
10. Usón AA, López-Sabirón AM et al (2013) Uses of alternative fuels and raw mate-
rials in the cement industry as sustainable waste management options. Renew
Sustain Energy Rev 23(4):242–260
11. WBCSD (2002) The Cement Sustainability Initiative: Our Agenda for Action.
World Business Council for Sustainable Development (WBCSD), Switzerland
Representation and Analysis
of Multi-dimensional Data to Support Decision
Making in the Management
1 Introduction
• The coding and structure of the database should ensure flexible navigation through the database, targeting the generation of ad hoc queries and presentation of the results in the form of various reports. Their display should be based on the topological realization of algorithms for spatial analysis;
• The multi-dimensional data manipulations should allow an easy organization of aggregated information from the storage in the self-saved form of a hypercubic model. This model would provide comfortable visualization and analysis;
• Along with the original data, it contains information aggregated over all subsets of measurements;
• Points in the data cube can be aggregated versions of the starting points of the original data. For example, the dimension "quarter" of the cube of initial data, which determines the information about the progress for each quarter of a year, may be reduced to the dimension "half of a year".
The data hypercube consists of the array of all cells H(D, M), corresponding to the sets D and M. A subset of H(D, M), corresponding to some subsets of fixed values D′, M′, is denoted by H′(D′, M′). Each cell h of the data hypercube H(D, M) corresponds to only one possible set of fact measurements M_h ⊂ M. A cell can be empty (contain no data) or contain the value of an index, a measure. The set of all measures included in the hypercube H(D, M) is denoted by V(H).
In a data hypercube one can perform either of the following operations (or
manipulations) with data:
• operation “slicing”;
• operation “rotation”;
• operation “convolution and detail”;
• operation “Data Aggregation”;
• operation “sampling”.
Consider the hierarchical dimension D with L levels (Fig. 2). The primary data (events, facts) correspond to the lower level of the hierarchy (l = 0). Computation of aggregates is made in accordance with the applicable method of aggregation. In the case of summation, the values X_j^1 of the units at level l = 1 of the hierarchy can be calculated by the formula X_j^1 = Σ_{i=1}^{M_j} x_i^0, where M_j is the number of values corresponding to the facts that are subsidiary to label j. In general,

X_j^l = Σ_{i=1}^{M_j} x_i^{l−1},  l = 1, ..., L,  j = 1, ..., N_l.
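In code, the summation aggregation above amounts to a grouped sum of the lower-level values by their parent label; the names and data below are illustrative.

```python
def aggregate_level(values, parent_label):
    """Sum the level-(l-1) values x_i up to their parent labels j at level l,
    as in the formula above. parent_label[i] is the level-l label of value i."""
    totals = {}
    for x, j in zip(values, parent_label):
        totals[j] = totals.get(j, 0) + x
    return totals

# Monthly figures rolled up to quarters (illustrative data).
months = [10, 12, 8, 20, 5, 7]
quarters = ["Q1", "Q1", "Q1", "Q2", "Q2", "Q2"]
print(aggregate_level(months, quarters))  # {'Q1': 30, 'Q2': 32}
```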
The number of aggregated units in a data hypercube for a single dimension is N_A = Σ_{i=1}^{L} N_i. Generalizing to the case of an arbitrary number of dimensions D, for the number of aggregated units we obtain

N_A = Π_{j=1}^{D} Σ_{i=1}^{L_j} N_ij − Π_{j=1}^{D} N_0j,

where N_ij is the number of labels at the ith level of the hierarchy of dimension j, j = 1, ..., D, and L_j is the number of hierarchical levels of dimension j.
(4) The operation "convolution and detail" (drill down)
Drilling down is a specific analytical technique in which the user navigates among levels of data, ranging from summarized data at a higher level to more detailed data at a lower level. This operation is carried out thanks to the hierarchical structure of the dimensions. Readings (events, measurements) may be combined into a hierarchy consisting of one or more levels. For example:
Day ⇒ Month ⇒ Quarter ⇒ Year;
Manager ⇒ Unit ⇒ Region ⇒ Company ⇒ Country.
742 S. Ayupov and A. Arifjanov
The set of edges of the graph, V, is a set of pairs of vertices v_jk^i = {s_j^i, s_k^{i+1}}. The edge v_jk^i ∈ V thus connects the jth vertex s_j^i of level i to the kth vertex s_k^{i+1} of level i + 1. The final vertex S_0^{n+1} corresponds to the end of the procedure of forming a request and obtaining access to the cell h(w) of the data hypercube. Here w is the path selected by the user's query on the graph G(S, V).
In a diluted (sparse) data hypercube, the requested cell h(w), generally speaking, may be empty. The time spent on forming such a user query is then "wasted", because the query result is empty. The problem of optimizing the formation of user queries by step-by-step fixing of the fact measurements of the hypercube H(D, M) is thus reduced to finding the set of paths W_true on the network graph G(S, V) that lead mainly to non-empty cells [7].
Unfortunately, the use of the described multidimensional data models and operations on the hypercube has shown poor performance on large volumes of data. For example, if the hypercube contains information about sales in one year and has only three dimensions, Customers (250), Products (500) and Dates (365), we obtain a matrix of facts of size 250 × 500 × 365 = 45 625 000. The total number of non-empty cells in such a matrix may in practice be only a few thousand; moreover, the more dimensions there are, the sparser the matrix becomes. Therefore, to work with such matrices it is preferable to create and use special methods for processing data in sparse matrices. The problem can also be addressed by preliminary "cleaning" of the data before using it to build the cubes, but this approach is not always applicable. Another drawback is that choosing a higher level of detail when creating a hypercube can greatly increase the size of the multidimensional database. For these and some other reasons, commercially available multidimensional database management systems are unable to handle large amounts of data. It is advisable to use the multidimensional model when the database is small and has a set of measurements homogeneous in time.
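A common way to avoid materializing the full 45 625 000-cell matrix is to store only the non-empty cells, e.g. in a dictionary keyed by coordinate tuples; the sketch below (illustrative names and data) also shows the "slicing" operation over such a sparse cube.

```python
# Sparse hypercube: only non-empty cells occupy memory.
cube = {
    ("customer7", "productA", "2008-03-01"): 120.0,
    ("customer7", "productB", "2008-03-02"): 80.0,
    ("customer9", "productA", "2008-03-01"): 45.0,
}

def slice_dim(cube, axis, value):
    """OLAP 'slicing': fix one dimension to a single value."""
    return {k: v for k, v in cube.items() if k[axis] == value}

sales_c7 = slice_dim(cube, 0, "customer7")
print(sum(sales_c7.values()))  # 200.0
```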
In contrast to multidimensional databases, relational database management systems are capable of storing huge amounts of data, but they lose this advantage in the speed of executing analytical queries. When relational database management systems are used, the principles of data storage are organized in a special way. The most commonly used is the so-called radial (or "star") schema. In this schema, two types of tables are used: (1) a fact table and (2) several reference tables (dimension tables). The fact table usually contains the data most intensively used for the analysis of data in the cube. Drawing an analogy with the multidimensional model, a record in the fact table corresponds to a cell in the hypercube. The reference tables list all the possible values of each dimension of the hypercube; each dimension is described by its own lookup table. The fact table is indexed on a composite key built from the individual keys of the reference tables, which links the reference tables with the fact table by suitable sets of key attributes.
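A minimal star schema can be sketched with SQLite (the table names, columns and data below are illustrative, not the paper's example): one fact table keyed by dimension-table ids, queried through a join.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Dimension (reference) tables: one per dimension of the hypercube.
cur.execute("CREATE TABLE dim_manager(id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE dim_model(id INTEGER PRIMARY KEY, name TEXT)")
# Fact table: one row per non-empty hypercube cell, keyed by dimension ids.
cur.execute("CREATE TABLE fact_sales(manager_id INT, model_id INT, year INT, amount INT)")
cur.executemany("INSERT INTO dim_manager VALUES (?, ?)", [(1, "M1"), (2, "M2")])
cur.executemany("INSERT INTO dim_model VALUES (?, ?)", [(1, "Nexia"), (2, "Tico")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 2007, 7), (1, 2, 2007, 4), (2, 1, 2008, 7)])

# Analytical query: total sales of one model, resolved through the dimension table.
cur.execute("""SELECT SUM(f.amount) FROM fact_sales f
               JOIN dim_model d ON f.model_id = d.id
               WHERE d.name = 'Nexia'""")
total = cur.fetchone()[0]
print(total)  # 14
```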
To shorten the response time in an analytical system, one can use special tools. Powerful relational database management systems typically include query optimizers, whose presence is of particular importance when creating a data warehouse based on such systems. Optimizers analyze a request and determine the best sequence of operations, with regard to some criterion, for accessing the database to implement this specific request. For example, in this way one can minimize the number of physical disk accesses for the query. Query optimizers use sophisticated statistical algorithms that operate on the number of entries in the tables, the ranges of keys, etc.
Each of the models described above has both advantages and disadvantages. The multidimensional model allows fast analysis of data but cannot store large amounts of information. The relational model, by contrast, has virtually no limit on the amount of accumulated data, but such a database does not provide the desirable speed in running analytical queries that a multidimensional database does.
database. The main advantage of OLAP lies in the wide possibility of forming
ad-hoc queries for analytical databases. However, our studies show that, from
a theoretical point of view, this unlimited opportunity to form ad hoc queries
on an OLAP hypercube can, under certain conditions, lead to complex problems
associated with ensuring the integrity of the multidimensional data. A similar
problem arises, for example, when using methods of additive decomposition of
a hypercube's structure into sub-cubes of smaller size. In such cases an ad
hoc query addressed to the complete hypercube (as a whole) is first processed
on the sub-cubes, and the partial results are then consolidated into the
overall result. With ad-hoc queries the indexing of the hypercube becomes
arbitrary, varies from request to request, and may introduce errors into the
consolidation of the results from the sub-cubes. We have investigated this
issue and present a new method for its solution, which eliminates data
corruption despite the non-additive indexing of additively decomposed
hypercubes. The conditions for indexing data in sub-cubes were analyzed, and
an assertion was stated that allows one to identify any subset of undistorted
data Xsec and the potentially faulty data subsets X'sec at the intersections
of indices in the following form:
$$X_{sec} \equiv X \setminus X'_{sec} = X \setminus \bigcup_{i=1}^{n-1} \bigcup_{i<j\le n} \left( X_i \cap X_j \right),$$
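One plausible reading of this assertion can be sketched in code: the potentially faulty indices are those that appear in more than one sub-cube, and the remainder of the hypercube's index set is undistorted. The function and the toy index sets below are our own illustration, not the paper's algorithm.

```python
def split_clean_suspect(subcubes):
    """Given the index sets X_1..X_n of an additively decomposed
    hypercube, return (clean, suspect): 'suspect' collects all pairwise
    intersections X_i & X_j (i < j), 'clean' is everything else."""
    all_ids = set().union(*subcubes)
    suspect = set()
    for i in range(len(subcubes) - 1):
        for j in range(i + 1, len(subcubes)):
            suspect |= subcubes[i] & subcubes[j]
    return all_ids - suspect, suspect

clean, suspect = split_clean_suspect([{1, 2, 3}, {3, 4}, {4, 5, 6}])
print(sorted(suspect))  # indices shared by two sub-cubes: [3, 4]
```

Only the cells indexed by the suspect set need the special consolidation treatment; the rest can be aggregated additively.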
5 Example
As an example, consider the formation of an OLAP cube from an existing
relational fact table of car sales (the Nexia, Tico and Damas models) by
managers M1, M2 and M3 in the years 2007, 2008 and 2009. To do this, one needs
to select the values of all measures, carry out the aggregation of all the
obtained values, and record them in the cube. Consider the application of the
obtained models on the example of a cube with 3 dimensions (Fig. 5).
Here the dimensions are: the car model (products, a categorical variable),
the sales person (manager, another structural attribute), and the year of
production (the time dimension), i.e. the triplet C, M, T.
The dimension members are: the car models (Nexia, Tico, Damas), labeled
(N, T, D); the managers (Manager 1, Manager 2, Manager 3), labeled
(M1, M2, M3); and the years recorded (2007, 2008, 2009), labeled (Y4, Y5, Y6).
The measure is the volume of sales (7, 7, 4, etc.).
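The aggregation step described above can be sketched as follows; the sales figures in the toy fact table are illustrative, not the paper's data.

```python
from collections import defaultdict

# Toy fact table: (manager, car model, year, units sold).
fact_table = [
    ("M1", "Nexia", 2007, 7), ("M1", "Tico", 2007, 7),
    ("M2", "Damas", 2008, 4), ("M2", "Nexia", 2008, 5),
    ("M3", "Tico", 2009, 3), ("M1", "Nexia", 2009, 2),
]

# Aggregate the measure into cube cells keyed by the (C, M, T) triplet.
cube = defaultdict(int)
for manager, model, year, qty in fact_table:
    cube[(manager, model, year)] += qty

# A roll-up along the time dimension: totals per (manager, model).
rollup = defaultdict(int)
for (manager, model, _year), qty in cube.items():
    rollup[(manager, model)] += qty

print(rollup[("M1", "Nexia")])  # -> 9
```

Each key of `cube` is one cell of the 3-dimensional hypercube; the roll-up shows the kind of ad-hoc aggregate an OLAP query would compute.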
6 Conclusions
It has been shown that the conventional multidimensional data models and
operations on the hypercube do not provide the desired speed on large volumes
of data. This problem can be solved by pre-processing the data before they are
used to build the data cubes. Another drawback is that choosing a high level
of detail when creating a hypercube can significantly increase the size of the
multidimensional database. For these and some other reasons, commercially
available multidimensional database management systems are not able to handle
large amounts of data. It is advisable to use the standard (conventional)
multivariate model if the database is small and has a homogeneous set of
dimensions. The largest combined gain in performance and scalability is
provided by a strategy that couples partitioning with parallel processing of
the partitions: the available storage capacity then grows in proportion to
the number of partitions of the fact table, and query performance increases
several times.
It was also found that, owing to the specifics of multidimensional analysis,
the methods for partitioning the fact table cannot be transferred directly to
the hypercube. This is due to the difference between the processes of indexing
data in hypercubes and in relational tables.
Fuzzy Logic Applied to SCADA Systems
1 Introduction
The generation of electricity through wind energy systems is demanded by
industry in order to increase the competitiveness of the business. The
maintenance of these systems is very complex because the stochasticity of the
random loads can cause catastrophic failures [3,12]. The optimization of
maintenance planning is a crucial factor for the efficiency of wind farms [15].
With this purpose, wind turbines (WT) can be monitored by SCADA or condition
monitoring systems to detect failures [13,14] and risks, and to take the
necessary actions online [8]. Several studies have proved the efficiency of
these systems in WTs [9] and in other types of industries [7,13,19].
False alarms generated by the SCADA system are an important problem because
they cause unnecessary stops, false interventions by the maintenance team,
loss of production and, consequently, extra costs [11,17,18]. Some studies aim to
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 61
750 T. Benmessaoud et al.
eliminate false alarms in WTs [5]. Chen et al. [5] present a study on the
treatment of SCADA alarms using an artificial neural network (ANN). Qiu et al.
propose two methods (sequential and probabilistic) for the analysis of alarms
from two large onshore wind farms, considering defects of the pitch and
converter systems [20].
This paper proposes a new approach based on fuzzy logic for controlling wind
farms that considers the big data collected by the SCADA system and gives a
complementary response that strengthens the response of the SCADA. Figure 1
shows the general structure of a fuzzy system [1].
[Fig. 1. General structure of a fuzzy system: the input passes through a
fuzzification interface, a decision-making unit applies the rule base to the
fuzzy values, and a defuzzification interface produces the crisp output.]
In the fuzzification part, each numeric input value is converted into a degree
of membership, between 0 and 1, of a linguistic value. The fuzzy system
requires membership functions for each input:
• Input (1): Variable (1). Subsets: low, average and maximum.
• Input (2): Variable (2). Subsets: low, average and maximum.
• Input (3): Variable (3). Subsets: low, average and maximum.
• · · ·
• Input (n): Variable (n). Subsets: low, average and maximum.
Fuzzy inference is a method that interprets the values in the input vector
and, based on fuzzy rules, assigns values to the output vector [16]. With this
purpose, it is necessary to establish the rules that will drive the
defuzzification process to produce the output. Fuzzy rules (IF antecedent
THEN consequent) in an expert system usually take the following form [10]:
IF Var(1) is A11 and/or Var(2) is A21, · · · THEN y is B1
else
IF Var(1) is A12 and/or Var(2) is A22, · · · THEN y is B2
else
IF Var(1) is A1n and/or Var(2) is A2n, · · · THEN y is Bn
where Var(1), Var(2), · · · , Var(n) are the fuzzy input (antecedent)
variables, y is a single output (consequent) variable, and A11, · · · , A1n
are the fuzzy sets [5]. In general, there are n input variables, each with 3
fuzzy linguistic values; therefore, a total of 3^n rules covers all possible
combinations of the input variables.
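The fuzzify–infer–defuzzify pipeline of Fig. 1 can be sketched for two inputs as follows. The triangular membership functions, the rule base, and the crisp output levels are assumptions chosen for illustration, not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x):
    """Membership degrees of one input (normalized to [0, 1]) in the
    three linguistic subsets named in the text."""
    return {
        "low": tri(x, -0.5, 0.0, 0.5),
        "average": tri(x, 0.0, 0.5, 1.0),
        "maximum": tri(x, 0.5, 1.0, 1.5),
    }

def alarm_probability(x1, x2):
    """Evaluate an assumed two-input rule base: AND = min, rule
    aggregation = max, and weighted-average defuzzification over assumed
    crisp output levels 0.1 / 0.5 / 0.9."""
    m1, m2 = fuzzify(x1), fuzzify(x2)
    r_high = min(m1["maximum"], m2["maximum"])
    r_mid = max(min(m1["average"], m2["average"]),
                min(m1["maximum"], m2["low"]),
                min(m1["low"], m2["maximum"]))
    r_low = min(m1["low"], m2["low"])
    den = r_low + r_mid + r_high
    return (0.1 * r_low + 0.5 * r_mid + 0.9 * r_high) / den if den else 0.0

print(alarm_probability(0.9, 0.9))  # two high inputs -> high alarm probability
```

With n real inputs one would enumerate all 3^n antecedent combinations instead of the three hand-written rule strengths above.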
2 Methodology Proposed
[Figure: Tbear (ºC) versus time (10-min intervals); the observations fall
into Good, Acceptable and Unacceptable regions separated by the thresholds
Ta and Tu.]
The data considered for this real case study are obtained from the European
Project entitled OPTIMUS [4]. The database used in this paper comes from a
SCADA system that provides different measures every 10 min over the period
from 01/01/2015 to 28/03/2015. The system measures 37 physical variables, but
in this paper only 4 parameters have been considered to simplify the example.
These parameters correspond to the speed of the main shaft (Vel), vibration
at the main shaft (Vibr), oil temperature (Toil) and bearing temperature
(Tbear). Figure 4 shows these parameters.
Figure 5 shows the preprocessing of the variable Tbear before it is input to
the fuzzy system. The blue line corresponds to the “Temperature of bearing
(Tbear)”. The red line is the moving average (MA) calculated over a period of
2 h. The black line represents the absolute value of the difference between
the blue and the red lines. This new variable will be named “Difference of
Tbear” and denoted (DTbear).
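The preprocessing just described can be sketched as follows: a 2-h moving average over 10-min samples (window of 12) and the absolute deviation from it. The handling of the window at the start of the series is our own choice.

```python
WINDOW = 12  # 2 h of 10-min samples

def moving_average(series, window=WINDOW):
    """Trailing moving average; the window shrinks at the start of the series."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def dtbear(series, window=WINDOW):
    """Absolute deviation of each sample from its moving average."""
    return [abs(x - m) for x, m in zip(series, moving_average(series, window))]

temps = [20.0] * 11 + [32.0]  # a sudden 12-degree jump in the last sample
print(dtbear(temps)[-1])      # -> 11.0
```

A sudden jump produces a large DTbear while slow seasonal drift does not, which is exactly why the deviation, rather than the raw temperature, is fed to the fuzzy system.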
[Fig.: T (ºC) versus samples, showing Tbear, its 2-h moving average, and
their absolute difference (DTbear).]
The same procedure will be applied to the rest of the variables. The new
variables will be obtained by the following equations:
Fig. 7. Rule based Fuzzy Inference System for generation of probabilistic alarms
The red line of each variable shows its current value. Displacements of these
lines to the left or the right generate a new position of the output and,
therefore, a new probability of alarm. Once the fuzzy system is defined, it is
possible to represent surfaces that explain the behavior of the system under
different conditions. For example, Fig. 8 represents the probability of alarm
depending on DTbear and DVibr.
Fig. 8. Surface view of alarms probability with respect to DTbear and DVibr
While DTbear and DVibr remain below certain values, the probability takes
very small values; once these values are exceeded, the probability of alarm
increases exponentially.
The system built has been tested by performing a simulation with the SCADA
data. The results of the simulations are shown in Fig. 9. The system provides
a certain probability of alarm according to the inputs.
A total of 10768 inputs have been analyzed through the created fuzzy system.
The outcomes are:
• 10407 normal measures, representing 96.7% of the total data.
• 361 orange alarms, representing 3.3% of the total data.
• No red alarms: in the period studied there is no value exceeding the limits
of a critical alarm.
This method makes it possible to transform the data collected by the SCADA
system into probabilities of alarm, which can be useful information for
complementing decision making. The methodology can help to reduce false
alarms because, when a critical alarm arises from the SCADA system, the
response of the fuzzy system can reinforce that alarm.
4 Conclusions
In this paper, a new methodology based on fuzzy logic is proposed in order to
analyze the SCADA data and provide an alternative decision support. The main
purpose is to process the signals collected by the SCADA system from a different
perspective. These signals are converted into new variables to be inputted in the
fuzzy system. A fuzzy system has been created using several standards and
considering the signals of the SCADA system from a statistical point of view.
The creation of the fuzzy systems implies the definition of a set of fuzzy rules.
In this case, the variables considered correspond to the distance of the value
measured by the SCADA to the simple moving average. The more distance to
the average, the more probability of being an abnormal measure, and therefore,
the more probability to generate an alarm.
Three different outcomes of the fuzzy system have been considered. Firstly,
the values are in the range of normal behavior and no actions are required.
Secondly, orange alarms, where the probability of alarm exceeds a defined
threshold but is not yet critical. Finally, red alarms, when the probability
is unacceptable and the system needs urgent action.
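The three outcomes above amount to a threshold classification of the fuzzy output. The paper does not state its numeric cut-offs, so the values below are illustrative only.

```python
# Hypothetical cut-offs mapping the fuzzy output probability to the
# three outcomes; illustrative values, not the paper's thresholds.
ORANGE_THRESHOLD = 0.5
RED_THRESHOLD = 0.9

def classify(prob):
    if prob >= RED_THRESHOLD:
        return "red"      # unacceptable: urgent action required
    if prob >= ORANGE_THRESHOLD:
        return "orange"   # above the alarm threshold, not yet critical
    return "normal"       # no action required

print(classify(0.62))  # -> orange
```

Counting the classes over the 10768-sample run is then a one-line aggregation over the classified outputs.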
The results of this methodology can become a statistical support for the
generation of alarms. The methodology can also be used as complementary
information for evaluating the priority of each alarm.
References
1. Abreu GD, Ribeiro FJ (2003) On-line control of a flexible beam using adaptive
fuzzy controller and piezoelectric actuators. Sba Controle & Automação Sociedade
Brasileira De Automatica 14(4):319–338
2. Acciona (2017) Wind power evolved aw1500 technical specifications. http://
www.acciona-windpower.es/media/1390114/11052015-aw11051500 inusa
abril-11052012.pdf
3. Benmessaoud T, Mohammedi K, Smaili Y (2013) Influence of maintenance on the
performance of a wind farm. Przeglad Elektrotechniczny 89(3):174–178
4. Camacho E, Requena V et al (2014) Demonstration of methods and tools for the
optimisation of operational reliability of large-scale industrial wind turbines. In:
International conference on renewable energies offshore renew
5. Chen B, Qiu YN et al (2011) Wind turbine scada alarm pattern recognition. In:
Renewable power generation, pp 1–6
6. Errichello R, Muller J (1994) Application requirements for wind turbine gearboxes.
NASA STI/recon Technical report N 95
7. García Márquez FP, Chacón Muñoz JM (2012) A pattern recognition and data
analysis method for maintenance management. Int J Syst Sci 43(6):1014–1028
8. González-Carrato RRDLH, Márquez FPG, Dimlaye V, Ruiz-Hernández D (2014)
Pattern recognition by wavelet transforms using macro fibre composites transduc-
ers. Mech Syst Sig Process 48(1–2):339–350
9. González-Carrato RRDLH, Márquez FPG, Dimlaye V (2015) Maintenance man-
agement of wind turbines structures via MFCS and wavelet transforms. Renew
Sustain Energy Rev 48:472–482
10. Khanna V, Cheema RK et al (2013) Fire detection mechanism using fuzzy logic.
Int J Comput Appl 65(12):82–97
11. Marquez FPG (2006) An approach to remote condition monitoring systems man-
agement. In: The institution of engineering and technology international conference
on railway condition monitoring, pp 156–160
Sachiko Oshima
1 Introduction
In this paper, the contents and expressions of a previous paper published by
the first author [8] are reused for the readers' convenience and
understanding. The criticality accident that occurred at the JCO company in
1999 (called Tokai-mura JCO-rinkai-jiko in Japanese) must be among the most
serious, and it is not easy to understand why the nuclear chain reactions
could proceed in the small volume of the sedimentation tank for a finite
period of time. In this sense, it is quite important to carry out a careful
examination of criticality accidents from the nuclear-physics point of view.
It is, of course, difficult to claim that the JCO accident can be a target of
scientific study, as it is impossible to study JCO-type accidents
experimentally. However, we believe that the basic mechanism of the
criticality accident should be clarified, namely why it could occur naturally
in such a small volume.
Here, we briefly explain the theoretical examination of the criticality
accident, which is carried out in terms of multiple scattering theory, and we
show why the nuclear chain reactions can proceed in the small volume [8]. In
particular, the nuclear fission reactions (nucleon-nucleon collisions
together with nuclear fission) are traced one by one, and the microscopic
processes of why and how the criticality accident occurred are clarified. As
a result, this paper provides specific reasons why the chain reactions can
proceed, and this is done by making use of the mean free path, which is a
result of nuclear multiple scattering theory.
Once the causal event of this criticality accident is identified, another
question, namely why the criticality stopped, must be analyzed. This study
tries to find an answer to this question, though not necessarily a sufficient
one. The mechanism of stopping the criticality may be related to the quick
settling of the uranium compound. The calculation in this paper indicates a
possibly dangerous situation that would have been due to the eighth batch,
had it been carried into the sedimentation tank: the estimated energy release
after this virtual eighth batch would have been of the same order of
magnitude as the Chernobyl nuclear accident.
Finally, a procedure is designed to prevent criticality accidents in advance
from the standpoint of the three manufacturing resources, that is, Human,
Machine and Material [11], grounded in the above theoretical physical
discussion.
2 Research Background
2.1 Nuclear Chain Reactions
The nuclear fission reaction induced by incident neutrons can be written as [1]
n + 235 U → A1 + A2 + (2 ∼ 3)n, (1)
where A1, A2 are the new nuclei produced in the reaction. There are two
important points about this reaction. The first concerns the two or three
neutrons produced in the reaction. The second is that the probability of this
nuclear reaction depends strongly on the incident neutron energy, and the
cross section is largest for incident neutrons with almost zero (thermal)
energy.
Chain reactions require that the produced neutrons be absorbed by other
235 U nuclei so that the nuclear fission can proceed further. If the chain
reactions continue to proceed without the aid of external neutron sources,
the situation is called the criticality stage. In reactors, this criticality
must be maintained by controlling the number of neutrons involved in the
chain reactions.
In normal reactors, slightly enriched uranium (in which 235 U makes up a few
percent of the total) is commonly used, but in the JCO accident, 18.8%
enriched uranium was used, and this high enrichment is one of the strong
reasons why the nuclear reactions ran out of control.
The derivation of the mean free path (2) is based on the Glauber theory [4],
and this theoretical framework is well examined in atomic and nuclear
reactions [2,3]. Here, ρ denotes the number density of 235 U in solution and
σf corresponds to the nuclear fission cross section of 235 U induced by
neutrons. In fact, the number density of 235 U in one batch of solution is
ρ ≈ 1.5 × 10^20 cm^-3, which is constant. On the other hand, the fission
cross section σf of 235 U induced by neutrons depends crucially on the
incident neutron energy. The energy dependence of the observed cross section
σf can be written as [7]
$$\sigma_f \approx \begin{cases} 585\,\mathrm{b}, & E_n \approx 0.025\,\mathrm{eV} \\ 1\,\mathrm{b}, & E_n \approx 1\,\mathrm{MeV}, \end{cases} \qquad (3)$$
k = p cos θ. (6)
Since the observed scattering cross section does not depend on the scattering
angles, we can make an average over the angles, and we obtain the average
energy after the scattering
$$\bar{E}_n = \frac{1}{\pi}\int_0^{\pi} \frac{k^2}{2M}\, d\theta = \frac{1}{\pi}\int_0^{\pi} \frac{p^2}{2M}\cos^2\theta\, d\theta = \frac{1}{2} E_n. \qquad (7)$$
This means that a neutron should lose a half of its energy in each scattering
process.
(2) The Mean Free Path of Neutrons inside Water
Now we calculate the mean free path of neutrons after scattering with protons
in one batch of solution. The number density of protons in one batch of
solution is ρp ≈ 4.9 × 10^22 cm^-3. The neutron-proton cross section at low
energy is observed to be σnp ≈ 20 b [6], and thus the mean free path of a
neutron in one batch of solution becomes
$$\lambda_p = \frac{1}{\rho_p \sigma_{np}} \approx 1\ \mathrm{cm}. \qquad (8)$$
Therefore, after it travels around 25 cm, a prompt neutron with 1 MeV energy
should have its energy reduced to
$$E_n = 1\,\mathrm{MeV} \times \left(\frac{1}{2}\right)^{25} \approx 0.03\,\mathrm{eV}. \qquad (9)$$
2
This neutron does not have to travel linearly, but in any case, it should become
a thermal neutron.
(3) Mean Free Path of Thermal Neutron in the n − 235 U Fission Process
We can easily calculate the mean free path of the thermal neutron before the
nuclear fission in one batch solution. Since σf = 585b, we find
$$\lambda_f = \frac{1}{\rho \sigma_f} \approx 11\ \mathrm{cm}. \qquad (10)$$
From these considerations, we see that prompt neutrons with 1 MeV energy
travel around 25 cm before they become thermal neutrons, and that after they
travel a further 11 cm they can induce nuclear fissions. Thus, if one carries
50 of the uranium nitrate solution into the sedimentation tank, with its
45 cm diameter and 25 cm height, then nuclear chain reactions may well start
quickly and proceed further.
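The mean-free-path arithmetic of Eqs. (8) and (10), and the count of energy-halving collisions behind Eq. (9), can be checked directly from the densities and cross sections quoted above:

```python
BARN = 1e-24  # cm^2

# Number densities quoted in the text for one batch of solution
rho_u235 = 1.5e20  # 235U nuclei per cm^3
rho_p = 4.9e22     # protons per cm^3

# Cross sections: n-p scattering at low energy, 235U thermal fission
sigma_np = 20 * BARN
sigma_f = 585 * BARN

# Mean free path: lambda = 1 / (rho * sigma)
lam_p = 1.0 / (rho_p * sigma_np)    # Eq. (8): ~1 cm between n-p scatterings
lam_f = 1.0 / (rho_u235 * sigma_f)  # Eq. (10): ~11 cm before a thermal fission

# Each n-p scattering halves the neutron energy on average (Eq. 7),
# so count the halvings from 1 MeV down to the ~0.03 eV thermal scale.
E, k = 1.0e6, 0  # energy in eV, collision counter
while E > 0.03:
    E /= 2.0
    k += 1

print(f"lambda_p = {lam_p:.2f} cm, lambda_f = {lam_f:.1f} cm")
print(f"{k} collisions to thermalize")  # 25 collisions x ~1 cm each
```

The 25 collisions at roughly one mean free path each reproduce the ~25 cm slowing-down distance used in the text.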
(4) Reaction Time of Neutrons
We now see that when prompt neutrons have traveled 36 cm, they can induce
nuclear fissions. Therefore, we should estimate the time needed to travel
these 36 cm. Since the nuclear reaction time must be smaller than 10^-15 s,
we can ignore it. Since a prompt neutron with 1 MeV spends
τ0 ≈ 7.6 × 10^-10 s to travel 1 cm, its energy becomes
On Causal Analysis of Accident and Design of Risk-Proof Procedure 763
After that, the neutron becomes thermal, and it proceeds 11 cm before the
nuclear fission. Since the thermal neutron has an energy of about 0.03 eV,
this takes τth ≈ 46 µs. Thus, the total time necessary for a prompt neutron
to induce a fission reaction is Ttotal ≈ 61 µs.
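The 46 µs figure for the thermal leg follows from non-relativistic kinematics, v = c·sqrt(2E/mc²), which is an adequate approximation at these energies:

```python
import math

C = 3.0e10      # speed of light, cm/s
M_N = 939.57e6  # neutron rest energy, eV

def speed(energy_ev):
    """Classical neutron speed v = c * sqrt(2E / mc^2)."""
    return C * math.sqrt(2.0 * energy_ev / M_N)

# Thermal leg: 11 cm covered at ~0.03 eV
tau_thermal = 11.0 / speed(0.03)
print(f"thermal leg: {tau_thermal * 1e6:.0f} microseconds")  # ~46 us
```

A 0.03 eV neutron moves at only ~2.4 × 10^5 cm/s, which is why the thermal leg dominates the total time Ttotal.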
Further, we evaluate the number of neutrons at the beginning; these neutrons
come from the spontaneous fission of 238 U. The number of neutrons in one
batch of solution must be around 20, and half of them are assumed to
contribute to the subsequent reactions. The energy release from nuclear
fission is around 200 MeV per reaction, and therefore the total energy becomes
The duration of these nuclear reactions can be estimated to be around
Tf ≈ 2.4 s, which should correspond to the time in which the uranium compound
settles to the bottom.
(2) Nuclear Fission in the Sixth Batch
The same calculation can be carried out for the sixth batch. In this case, we
find that the total energy must be 1000 times smaller than in the
seventh-batch case. This is not very large, but at the sixth batch the
nuclear chain reactions had already started, and indeed there was a small
burst.
From this calculation, we now understand why the criticality stopped: once
the uranium settled at the bottom of the tank, the nuclear chain reaction
could not proceed further, since the prompt neutrons could not lose their
energy owing to the lack of water.
(3) Nuclear Fission in the Eighth Batch
From here on, we only present a possible scenario of a nuclear accident, had
the eighth batch been carried into the tank. In this case, the number of
uranium nuclei involved in the nuclear fission must be proportional to the
height of the solution, and thus it should be 22.9/19.7 times that of the
seventh batch. Thus, the number becomes
$$N = 40000 \times \frac{22.9}{19.7} \approx 46500. \qquad (15)$$
This means that the number of nuclear fissions is also increased, and the
total number grows accordingly. The resulting energy of 4.8 × 10^10 J
corresponds to 11 tons of TNT, which is quite a serious explosion. The
accident at the Chernobyl nuclear power plant is believed to correspond to
around 100 tons of TNT; therefore, if the eighth batch had been poured in,
the accident would have been more than serious.
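The Eq. (15) scaling and the TNT equivalence are simple enough to verify numerically (1 ton of TNT is conventionally 4.184 × 10^9 J):

```python
N7 = 40000            # fission-related count for the seventh batch
h8, h7 = 22.9, 19.7   # solution heights [cm] after the 8th / 7th batch
N8 = N7 * h8 / h7     # Eq. (15)
print(round(N8))      # ~46500

E_RELEASE = 4.8e10    # estimated energy release [J]
TNT_TON = 4.184e9     # conventional energy of one ton of TNT [J]
print(round(E_RELEASE / TNT_TON, 1))  # ~11 tons of TNT
```

The ratio of roughly 11 tons to Chernobyl's ~100 tons is what places the hypothetical eighth-batch release within an order of magnitude of that accident.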
5 Concluding Remarks
This study aims to examine a risk-proof procedure for the handling of nuclear
materials through a causal analysis of the JCO accident of 1999. In the
analysis, the basic mechanism of the accident is discussed in terms of
nuclear multiple scattering theory. As a result, specific reasons why the
chain reactions could proceed, and why the criticality could stop, are
identified. Control factors for the operational resources are then extracted
with reference to the physical phenomena of fission.
This paper aims to assist managers and academic researchers in the area of
atomic power generation. As atomic power plants rely on many human
operations, managers have to balance operational effectiveness and safety;
therefore, they have to provide systematic procedures based on the proposed
methodologies.
In particular, this paper proposes control factors to prevent accidents.
Since this approach is at an early stage, we hope that related studies will
be advanced by researchers interested in various areas.
References
1. Bohr A, Mottelson BR (1998) Nuclear Deformations. Nuclear Structure
2. Fujita T, Hüfner J (1979) Inclusive hadron-nucleus scattering at high energy. Phys
Lett B 87(4):327–330
3. Fujita T, Hüfner J (1980) Momentum distributions after fragmentation in nucleus-
nucleus collisions at high energy. Nucl Phys A 343(3):493–510
4. Glauber RJ (1959) Lectures in Theoretical Physics. Interscience
5. JCO accident investigation committee (2005) Criticality accident in JCO; its solu-
tion, fact, cause and study (jco rinkaijika; sonozenbou no kai jijitsu youin taiou).
In: Iinkai JJC, Gakkai NG (eds) Atomic Energy Society of Japan. Tokai University
Press, Kanauawa
6. Nuclear Data Center (2016). http://wwwndc.jaea.go.jp/j40fig/jpeg/h001 f1.jpg
7. Okajima S, Kugo T, Mori T (2012) Reactor Physics (Genshiro Butsurigaku).
Ohmsha, Tokyo
1 Introduction
Many difficult inventory problems involve production control and asset
management, but those that involve logistics systems are especially important
and have garnered strong interest in recent years. Particular focus has been
paid to inventory problems. Dobos has presented many papers on inventory
problems. In 2001, he presented the problem of reverse logistics, adjusting
the relationship among holding, production, and disposal costs [4]. In 2005,
he investigated production inventory adjustment [5]. In 2007, he presented a
paper on adjusting the total production cost of two companies [6]. Minner et
al. [18] have also presented papers on inventory problems. In 2003, they
evaluated reverse logistics, where the inventory has several supply methods.
In 2005, they presented a paper on the problem of adjusting shipping,
replenishment, and lost sales opportunities for two inventories [19]. In
2008, Thangam et al. [28] presented a paper on how to determine replenishment
for Poisson demand. In 2003, Mahadevan et al. [16] treated a facility where
returned products are remanufactured as an inventory problem. In 2004,
Miranda et al. [20] analyzed the inventory decision problem using the
Lagrangian relaxation and subgradient methods; their ordering-point method
was based on the economic order quantity. In 2009, Rieksts et al. [24]
analyzed the inventory problem with ordering intervals using power-of-two
policies.
There are many other inventory problems, including the reduction of the
total safety inventory quantity, or on-hand inventory [8], and the calculation of
the inventory value at each step, or echelon inventory [12].
In this study, we propose a new model where inventories are managed at
distribution centers (DCs), taking actual conditions into account. Holding
costs only occur in DCs, but additional inventory costs are incurred for the
product value, because a product near the dealers has more value. Among
inventory costs other than the holding cost, there is the interest that could
be earned if the product were exchanged for cash; another factor is the
product value lost to age depreciation. We calculated the annual supply and
demand values, and created demand data based on Poisson demand with
time-series fluctuations. Logistics models such as these are known to be
NP-hard [3]. Soft computing methods such as simulated annealing, neural
networks, and genetic algorithms (GA) are well suited to solving this
problem [1,17,26,29].
In this study, we adopted a random key-based genetic algorithm (rk-GA) with a
distributed environment scheme (des-rkGA), an improved version of rk-GA, as
the proposed method; we compared the proposed method with rk-GA and spanning
tree-based GA (st-GA) to confirm its suitability [9].
We propose a model that addresses many of the different inventory problems
studied earlier; we apply the des-rkGA algorithm to solve the multi-logistics
inventory problem and present computational results demonstrating the
effectiveness of the proposed algorithm.
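The random-key idea at the heart of rk-GA can be sketched briefly: a chromosome is a vector of real-valued keys, and decoding sorts the keys so that their order is read as a priority sequence (for example, the order in which facilities are served). The distributed-environment scheme of des-rkGA, which evolves several such populations with different GA parameters, is not reproduced here.

```python
def decode(keys):
    """Return the permutation of positions induced by ascending keys;
    this is the standard random-key decoding step."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

print(decode([0.3, 0.1, 0.9, 0.5]))  # -> [1, 0, 3, 2]
```

Because any real-valued vector decodes to a valid permutation, ordinary real-coded crossover and mutation never produce infeasible offspring, which is the main appeal of random keys for routing and assignment problems.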
Thus the inventory cost without holding costs, on the one hand, and the
product value, inventory value, inventory time, and inventory holding ratio,
on the other, have a trade-off relation. In this section, we describe the
inventory holding ratio, safety inventory, production adjustment, pipeline
inventory, inventory in the production process, lot-size inventory, and DC
inventory, as well as a summary of the inventory problems evaluated in this
study. Although there are many kinds of inventories, we adopted the idea of a
value chain for the inventories; we used the pipeline, lot-size,
production-process, and DC inventories.
The general outline of the logistics model used for the inventory in this
study is shown in Fig. 2.
Indices:
i = 1, 2, · · · , I : index of suppliers;
j = 1, 2, · · · , J : index of plants;
k = 1, 2, · · · , K : index of DCs;
l = 1, 2, · · · , L : index of customers;
t = 1, 2, · · · , T : index of cycles in the logistics system. Tcycle intervals are
described later.
Parameters:
A : number of unit parts to constitute;
c1ij : shipping cost of unit parts or material from supplier i to plant j
[yen];
c2jk : shipping cost of unit production from plant j to DC k [yen];
c3kl : shipping cost of unit production from DC k to customer l [yen];
g1ij : shipping time of unit parts or material from supplier i to plant j [h];
g2jk : shipping time of unit production from plant j to DC k [h];
g3kl : shipping time of unit production from DC k to customer l [h];
Tplant : producing time of plants [h];
α : safety inventory coefficient;
σ : standard deviation of demand;
Tcycle : cycle time in the logistics system. It shows how much time it takes
for a load to be moved once [h];
h : inventory holding ratio (see Sect. 2.1, value chain);
H : holding cost of unit production in a DC [yen];
ek : inventory cost of DC k [yen];
Mcost : material cost [yen];
rj1 : fixed cost for operating plant j [yen];
rk2 : fixed cost for operating DC k [yen];
Ui1 : upper limit of supply for parts and materials in supplier i;
Uj2 : upper limit of supply for production in plant j;
Uk3 : upper limit of supply for production in DC k;
pjconst : unit production cost at steady state [yen];
pjexceed : unit production cost at excess state [yen];
pjshortage : unit production cost at shortage state [yen];
Itotal : total inventory cost without holding cost;
Ndelay : number of delays for plant production;
W1 : number of plants that can be operated;
W2 : number of DCs that can be operated;
Raverage : production quantity for one period of cycle time as calculated
from the annual average production quantity;
Si1 : supplier production ratio; ratio of supplier i to gross supplier product
capability;
Sj2 : plant production ratio; ratio of plant j to gross plant product
capability;
zklevel : base inventory level in DC k.
772 H. Inoue et al.
Decision Variables:
u1i (t) : supply amount of parts and materials in supplier i at cycle t;
u2j (t) : supply amount of production in plant j at cycle t;
u3k (t) : supply amount of production in DC k at cycle t;
u4l (t) : demand amount of production in customer l;
zk (t) : inventory volume for DC k at cycle t [yen];
Sk3 (t) : DC demand ratio; demand ratio of DC k to the gross DC demand
quantity;
Rj2 (t) : shipping quantity in plant j;
Rk3 (t) : receive cargo quantity in DC k;
Bl (t) : back order quantity in customer l;
Dl(t) : order quantity at this period in customer l;
DlDC (t) : request quantity to DC at this period in customer l;
Δzk (t) : difference of inventory quantity and base inventory level;
Δzmax (t) : maximum of difference of inventory quantity and base inventory
level;
zkreq (t) : request quantity in DC k;
p1val : product value when delivery is completed to plants [yen];
p2val : product value when delivery is completed to DCs [yen];
p3val : product value when delivery is completed to customers [yen];
Ujexceed : threshold for determination when exceeding production;
Ujshortage : threshold for determination when reducing production;
pj (u2j ) : producing cost of unit production in plant j [yen];
x1ij (t) : amount supplied of unit parts or material from supplier i to plant j
at cycle t;
x2jk (t) : amount supplied of production from plant j to DC k at cycle t;
x3kl (t) : amount supplied of production from DC k to customer l at cycle t;
p1j (t) : operating flag for plant j at cycle t (= 1 when plant j is used,
= 0 otherwise);
p2k (t) : operating flag for DC k at cycle t (= 1 when DC k is used,
= 0 otherwise).
When considering the product value and inventory holding ratio in addition to
the holding cost, the inventory cost becomes high near the customer (dealer),
as shown in Fig. 3 [23]. The inventory holding ratio is determined by factors such as the interest rate applied when the product is converted to cash and the decrease in product value. In this study, we treated the inventory holding ratio as uniform because we were dealing with engineered products, such as automobiles, that do not experience degradation. We adopted the value chain concept for all products.
Multi-stage Logistics Inventory for Automobile Manufacturing 773
The safety inventory is the inventory required to prevent stockouts. The safety inventory and the service level have a trade-off relationship. The formula for the safety inventory is shown below.

I_{safety} = \alpha \sigma \sqrt{T_{cycle} + \max_{j,k}(g_{jk})}  (2)
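Reading Eq. (2) as the standard safety-stock expression, with the review period plus the longest transit lead time under the radical (the radical itself is an assumption about the original typesetting), the calculation can be sketched as follows; the function and argument names are illustrative, not from the paper:

```python
import math

def safety_inventory(alpha, sigma, t_cycle, g_max):
    """Safety inventory I_safety per Eq. (2): service factor alpha times the
    demand standard deviation sigma times the square root of the cycle time
    plus the longest transit lead time max(g_jk).  The square root is an
    assumption about the garbled typesetting; names are illustrative."""
    return alpha * sigma * math.sqrt(t_cycle + g_max)

# e.g. alpha = 1.65 (~95% service level), sigma = 40 units, 1 cycle + 3 transit
print(safety_inventory(1.65, 40.0, 1.0, 3.0))
```

Raising the service factor alpha (a higher service level) increases the safety inventory, which is the trade-off the text describes.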
Order fluctuations mean that factories have to adjust production. Extra charges include extra pay, etc., if production is over the steady state, and extra fixed costs for the equipment if production is under the steady state. The formula for production adjustment is shown below (cf. Eq. (25)).

P_j(u^2_j(t)) = P^{cont}_j + \max(P^{exceed}_j (u^2_j(t) - U^{exceed}_j), 0) + \max(P^{shortage}_j (U^{shortage}_j - u^2_j(t)), 0)  (3)
The pipeline inventory denotes the inventory during shipping. There is a trade-off between the pipeline inventory and the shipping cost. In this study, we constructed a model based on the idea of an inventory holding ratio. We treated each product during shipping between a supplier and plant and between a DC and customer as pipeline inventory. The formula for the pipeline inventory cost I_{pipeline} is shown below.
I_{pipeline} = h \sum_{t=1}^{T} \Big[ p^1_{val} \sum_{i=1}^{I} \sum_{j=1}^{J} g^1_{ij} x^1_{ij}(t) + p^2_{val} \sum_{j=1}^{J} \sum_{k=1}^{K} g^2_{jk} x^2_{jk}(t) + p^3_{val} \sum_{k=1}^{K} \sum_{l=1}^{L} g^3_{kl} x^3_{kl}(t) \Big]  (4)
The product value when shipped to DCs is approximated by its value when
shipped to plants and the average shipping cost.
p^2_{val} = p^1_{val} + \frac{\sum_{t=1}^{T} \sum_{j=1}^{J} \sum_{k=1}^{K} c^2_{jk} x^2_{jk}(t)}{\sum_{t=1}^{T} \sum_{j=1}^{J} \sum_{k=1}^{K} x^2_{jk}(t)}  (6)
The product value when shipped to dealers is approximated by its value when
shipped to DCs and average shipping cost.
p^3_{val} = p^2_{val} + \frac{\sum_{t=1}^{T} \sum_{k=1}^{K} \sum_{l=1}^{L} c^3_{kl} x^3_{kl}(t)}{\sum_{t=1}^{T} \sum_{k=1}^{K} \sum_{l=1}^{L} x^3_{kl}(t)}  (7)
The lot size inventory is the inventory of the completed products during shipping.
We treated completed products as lot size inventory. All products are shipped
from plants to DCs in the same interval. The lot size inventory cost was cal-
culated as half the product of the shipping values, production values in plants,
and inventory holding ratio, as shown in Fig. 4; we used this formula because
all products were shipped from a plant to a DC in the same time intervals. The
formula is shown below.
I_{lotsize} = h \sum_{t=1}^{T} \sum_{j=1}^{J} \sum_{k=1}^{K} \frac{g^2_{jk} x^2_{jk}(t) (p^1_{val} + P_j(u^2_j(t)))}{2}  (9)
2.7 DC Inventory
As mentioned above, the formula of the total inventory cost without the holding
cost used in this study consists of the pipeline, production process, lot size, and
DC inventory costs; it is shown as follows.
\cdots + p^3_{val} \sum_{k=1}^{K} \sum_{l=1}^{L} g^3_{kl} x^3_{kl}(t) \Big) + \sum_{j=1}^{J} T_{plant} u^2_j(t) (p^1_{val} + P_j(u^2_j(t))/2)
+ \sum_{t=1}^{T} \sum_{j=1}^{J} \sum_{k=1}^{K} \frac{g^2_{jk} x^2_{jk}(t) (p^1_{val} + P_j(u^2_j(t)))}{2} + T_{cycle} p^2_{val} \sum_{t=1}^{T} \sum_{k=1}^{K} z_k(t) + H T_{cycle} \sum_{t=1}^{T} \sum_{k=1}^{K} z_k(t)  (12)
3.1 Assumptions
In this study, we constructed the logistics model with the following assumptions.
A1. The transit times are known between suppliers and plants, plants and DCs,
and DCs and customers.
A2. The shipping costs are known between suppliers and plants, plants and
DCs, and DCs and customers.
A3. The supplies are delivered without delay to a factory in proportion to the planned production.
A4. In this model, suppliers provide multipurpose parts for effective optimization. The parts are examined and classified for easy assembly work, on the assumption that assembling one car requires a package of A parts. Other parts are not targeted in this model because the supply route is fixed from the outset when the supplier side is entrusted with delivery, and few of the detailed parts have management value in the logistics system.
A5. The products made at a plant are shipped to a DC by lot size.
A6. The DCs have space to accept products from plants.
A7. We consider only the inventory costs in DCs.
A8. The customer addresses are known.
A9. The existence of inventory is known in advance through an inventory check;
orders for inventory that is out of stock are treated as reservations.
A10. A product has the same value when received by any factory, DC, or dealer.
A11. All products are delivered within the time limit of one Tcycle.
Generally, when a logistics system is viewed in terms of its functional constitution, it is modeled as a three-stage production and distribution network, called a supply chain network. The first stage is the supplier phase, which involves parts and a supplier. The second stage is the plant phase, which consists of a production plant or outsourcing. The third stage is the DC phase, which consists of a distribution center or storehouse. A sample of an actual automobile company's logistics system is shown in Fig. 6. In many cases, the actual logistics system is comprised of three stages. A three-stage logistics system model is shown in Fig. 7. The logistics model used in this study had 14 suppliers, 2 factories, 4 DCs, and 22 customers. The mathematical model used in this study is shown below.
min Z_1 = A \Big[ \sum_{t=1}^{T} \Big( \sum_{i=1}^{I} \sum_{j=1}^{J} c^1_{ij} x^1_{ij}(t) + \sum_{j=1}^{J} \sum_{k=1}^{K} c^2_{jk} x^2_{jk}(t) + \sum_{k=1}^{K} \sum_{l=1}^{L} c^3_{kl} x^3_{kl}(t) \Big) \Big]
+ h \sum_{t=1}^{T} \Big( p^1_{val} \sum_{i=1}^{I} \sum_{j=1}^{J} g^1_{ij} x^1_{ij}(t) + p^2_{val} \sum_{j=1}^{J} \sum_{k=1}^{K} g^2_{jk} x^2_{jk}(t) + p^3_{val} \sum_{k=1}^{K} \sum_{l=1}^{L} g^3_{kl} x^3_{kl}(t) \Big)
+ \sum_{j=1}^{J} T_{plant} u^2_j (p^1_{val} + P_j(u^2_j(t))/2)
+ \sum_{t=1}^{T} \sum_{j=1}^{J} \sum_{k=1}^{K} \frac{g^2_{jk} x^2_{jk}(t) (p^1_{val} + P_j(u^2_j(t)))}{2}
+ T_{cycle} p^2_{val} \sum_{t=1}^{T} \sum_{k=1}^{K} z_k(t) + H T_{cycle} \sum_{t=1}^{T} \sum_{k=1}^{K} z_k(t)
+ \sum_{t=1}^{T} \sum_{j=1}^{J} u^2_j(t) P(u^2_j(t)) + \sum_{j=1}^{J} r^1_j p^1_j(t) + \sum_{k=1}^{K} r^2_k p^2_k(t)  (14)
s.t.
\sum_{j=1}^{J} x^1_{ij}(t) \le u^1_i(t), \forall i, t  (15)
\sum_{i=1}^{I} x^1_{ij}(t) \le u^2_j(t) p^1_j(t), \forall j, t  (16)
\sum_{j=1}^{J} x^2_{jk}(t) \le u^3_k(t) p^2_k(t), \forall k, t  (17)
\sum_{k=1}^{K} x^3_{kl}(t) \ge u^4_l(t), \forall l, t  (18)
\sum_{i=1}^{I} x^1_{ij}(t) = \sum_{k=1}^{K} x^2_{jk}(t), \forall j, t  (19)
\sum_{j=1}^{J} x^2_{jk}(t) = z_k(t-1) - z_k(t) + \sum_{l=1}^{L} x^3_{kl}(t), \forall k, t  (20)
\sum_{j=1}^{J} p^1_j \le W_1  (21)
\sum_{k=1}^{K} p^2_k \le W_2  (22)
x^1_{ij}(t), x^2_{jk}(t), x^3_{kl}(t), z_k(t) \ge 0, \forall i, j, k, l, t  (23)
p^1_j(t), p^2_k(t) \in \{0, 1\}, \forall j, k, t  (24)
P_j(u^2_j) = P^{cont}_j + \max(P^{exceed}_j (u^2_j(t) - U^{exceed}_j), 0) + \max(P^{shortage}_j (U^{shortage}_j - u^2_j(t)), 0)  (25)
p^1_{val} = M_{cost} + \frac{\sum_{t=1}^{T} \sum_{i=1}^{I} \sum_{j=1}^{J} c^1_{ij} x^1_{ij}(t)}{\sum_{t=1}^{T} \sum_{i=1}^{I} \sum_{j=1}^{J} x^1_{ij}(t)}  (26)
p^2_{val} = p^1_{val} + \frac{\sum_{t=1}^{T} \sum_{j=1}^{J} \sum_{k=1}^{K} c^2_{jk} x^2_{jk}(t)}{\sum_{t=1}^{T} \sum_{j=1}^{J} \sum_{k=1}^{K} x^2_{jk}(t)}  (27)
p^3_{val} = p^2_{val} + \frac{\sum_{t=1}^{T} \sum_{k=1}^{K} \sum_{l=1}^{L} c^3_{kl} x^3_{kl}(t)}{\sum_{t=1}^{T} \sum_{k=1}^{K} \sum_{l=1}^{L} x^3_{kl}(t)}  (28)
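The piecewise production-cost function of Eq. (25), which charges a penalty whenever a plant's output leaves its steady-state band, translates directly into code. A minimal sketch with illustrative names and values (not the authors' implementation):

```python
def production_cost(u, p_cont, p_exceed, p_shortage, u_exceed, u_shortage):
    """Unit production cost P_j(u) per Eq. (25): a constant steady-state cost
    p_cont, plus a penalty proportional to the overrun when output u exceeds
    the threshold u_exceed, plus a penalty proportional to the shortfall when
    u drops below u_shortage.  Argument names are illustrative."""
    return (p_cont
            + max(p_exceed * (u - u_exceed), 0.0)
            + max(p_shortage * (u_shortage - u), 0.0))

print(production_cost(90, 10.0, 0.5, 0.8, 100, 80))   # inside the steady band: 10.0
print(production_cost(120, 10.0, 0.5, 0.8, 100, 80))  # 20 units over: 20.0
print(production_cost(70, 10.0, 0.5, 0.8, 100, 80))   # 10 units short: 18.0
```

Within the band [U^shortage, U^exceed] both max terms vanish and only the steady-state cost remains, which is why order fluctuations beyond the band raise the objective value.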
Gen and Lin [7] surveyed genetic algorithms in the Wiley Encyclopedia of Computer Science and Engineering, and recently many researchers have applied GAs to various areas in logistics systems. Inoue and Gen [10] reported a multistage logistics system with inventory considering demand change by hybrid GA; Neungnatcha et al. [22] reported an adaptive genetic algorithm (AGA) for solving the sugarcane loading stations with multi-facility services problem; Jamrus et al. [11] reported discrete particle swarm optimization (PSO) approaches and an extended priority-based HGA for solving multistage production distribution under uncertain demands; and Lee et al. [13] reported a multi-objective hybrid genetic algorithm (MoGA) to minimize the total cost and delivery tardiness in reverse logistics.
Lin and Gen [15] proposed a random key-based genetic algorithm (rk-GA) for solving the AGV (automatic guided vehicle) dispatching problem in flexible manufacturing systems (FMS). Here, we use it for the multistage logistics system with inventory and define the following example of the cost matrix:
The algorithm created using the rk-GA technique has three logistics stages.
Figure 7 shows the third stage process. Figure 8 shows a sample cost matrix.
Figure 9 shows a sample rk-GA chromosome.
Gen and Cheng successfully applied rk-GA encoding to the shortest path
and project scheduling problems in 2000 [2]. For transportation problems, a
chromosome consists of the priorities of sources and depots, which make up a
transportation tree; its length is equal to the total number of sources m and
depots n, i.e., m + n. The transportation tree corresponding to a given chromosome is generated by adding arcs sequentially between sources and depots. At each step, a single arc is added to the tree by selecting the source (depot) with the highest priority and connecting it to a depot (source) so as to minimize cost.
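The decoding step described above can be sketched as follows. This is a simplified reading of the Gen-and-Cheng procedure, assuming total supply covers total demand; the function name and data layout are illustrative:

```python
def decode_priority(priorities, supply, demand, cost):
    """Priority-based decoding of a random-key chromosome into a
    transportation tree (after Gen and Cheng).  The chromosome holds one
    priority per source (positions 0..m-1) and per depot (positions
    m..m+n-1).  At each step the highest-priority node with remaining
    quantity is picked and connected to its cheapest feasible partner; the
    shipped amount saturates one side.  A simplified sketch, not the
    authors' exact procedure."""
    m, n = len(supply), len(demand)
    supply, demand = list(supply), list(demand)
    shipments = {}
    while sum(demand) > 0:
        live = [p for p in range(m + n)
                if (p < m and supply[p] > 0) or (p >= m and demand[p - m] > 0)]
        node = max(live, key=lambda p: priorities[p])
        if node < m:  # a source: ship to the cheapest depot with open demand
            i = node
            j = min((j for j in range(n) if demand[j] > 0),
                    key=lambda j: cost[i][j])
        else:         # a depot: pull from the cheapest source with supply left
            j = node - m
            i = min((i for i in range(m) if supply[i] > 0),
                    key=lambda i: cost[i][j])
        q = min(supply[i], demand[j])
        shipments[(i, j)] = shipments.get((i, j), 0) + q
        supply[i] -= q
        demand[j] -= q
    return shipments

# two sources (priorities 0.9, 0.1) and two depots (0.5, 0.6)
print(decode_priority([0.9, 0.1, 0.5, 0.6], [5, 5], [4, 6], [[1, 3], [2, 1]]))
```

Because the chromosome stores only priorities, any vector of random keys decodes to a feasible shipment plan, which is what makes the representation robust under crossover and mutation.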
Figure 13 shows the brief decoding at each stage. Figure 14 shows the process of the multistage logistics problem. In this study, we used one-point crossover, which is the simplest method when using rk-GA, together with insertion and swap mutations. For selection we used the roulette wheel approach, which selects chromosomes in ascending order of fitness. Examples of one-point crossover, insertion mutation, and swap mutation are shown in Figs. 10, 11 and 12, respectively.
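The three operators above are standard and can be sketched in a few lines; these are generic textbook versions under assumed conventions, not the paper's code:

```python
import random

def one_point_crossover(p1, p2, rng):
    """One-point crossover: the child takes p1's genes up to a random cut
    point and p2's genes after it (the simplest operator for rk-GA)."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def insertion_mutation(chrom, rng):
    """Insertion mutation: remove one gene and re-insert it at another
    random position."""
    c = list(chrom)
    gene = c.pop(rng.randrange(len(c)))
    c.insert(rng.randrange(len(c) + 1), gene)
    return c

def swap_mutation(chrom, rng):
    """Swap mutation: exchange the genes at two distinct random positions."""
    c = list(chrom)
    i, j = rng.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

rng = random.Random(42)
child = one_point_crossover([0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], rng)
```

Note that both mutations only rearrange the existing keys, so the decoded solution always remains feasible.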
Each newly generated chromosome is evaluated. From the parent and newly generated chromosomes, popSize individuals are selected in ascending order of fitness to form the next generation.
Inversion and displacement mutations are used in the st-GA. The inversion mutation selects two positions within a chromosome at random and then inverts the substring between these two positions. The displacement mutation selects a substring at random and inserts it at a random position.
The total logistics system process can be explained as follows. Figure 15 shows the whole logistics system process over the logistics cycle periods. The safety inventory is given first in each cycle period, and the inventory quantity is renewed last in each cycle period. The plant product shipping quantity is based on past demand because of the production time in plants. In this study, this lag is called the number of delays for plant production, Ndelay, where the plant production time is expressed as a number of order cycles. This is shown below.
N_{delay} = \frac{T_{plant}}{T_{cycle}}  (29)
Here, we use an example of the shipping products when the customer's total demand quantity changes from 600 before the delay cycle times (Ndelay) to 1200. A pull-type demand quantity is applied to DC-customer and supplier-plant product distribution, based on the demand quantity at that time; the shipping quantity is 1200. A push-type demand quantity is applied to plant-DC product distribution, based on the demand quantity before the number of delay times for plant production (Ndelay); the shipping quantity is 600.
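The worked example above can be sketched as a hypothetical helper, assuming the ratio in Eq. (29) is rounded up to whole cycles (an assumption; the names are illustrative):

```python
import math

def plant_shipping_quantity(demand_history, t, t_plant, t_cycle):
    """Push-type plant->DC shipping quantity: ship against the demand
    observed N_delay cycles earlier, with N_delay = T_plant / T_cycle per
    Eq. (29), rounded up to whole cycles (an assumption).  Illustrative
    names; not the paper's code."""
    n_delay = math.ceil(t_plant / t_cycle)
    return demand_history[t - n_delay]

# demand jumps from 600 to 1200 at cycle 2; with T_plant = 2 cycles, at
# cycle 3 the plant (push) still ships against 600 while DCs (pull) ship 1200
history = [600, 600, 1200, 1200]
print(plant_shipping_quantity(history, 3, 2.0, 1.0))  # 600
```

The gap between the push quantity (600) and the pull quantity (1200) is exactly the demand shock that the DC safety inventory has to absorb.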
The load of the customer’s demand is shared by the total inventory quantity
in DCs and total production in plants and suppliers. An example process is
shown in Fig. 16. Step 1 shows a renewal of the demand quantity (u4l ) and back
order quantity (Bl ). Step 2 calculates the planned order quantity (u2j (t)). Step 3
calculates the order quantity in suppliers (u1i (t)). Step 4 calculates the planned
shipping quantity (Rj2 (t)). Step 5 calculates the quantity of cargo received in
DCs (Rk3 (t)).
(Figure: island model, showing the immigration of a nominated group of genes among islands A-D in the same generation.)
It is easy to disperse the processing across several PCs. There are also two processing methods for the GA: the synchronous method synchronizes the timing of each generation, whereas the asynchronous method does not. We adopted the asynchronous method because the synchronous method would require waiting for the slowest island. We created nine islands whose crossover probability (PC) and mutation probability (PM) differed from those of the center island. Two of the nine islands were chosen at random at the generation timing of the center island. The direction of immigration was decided at random, and the populations immigrated at a 0.1 immigration rate: 10% of each island's worst chromosomes were destroyed, and 10% of the other island's best chromosomes were adopted as immigrants. The concrete values of PC and PM used in this study are discussed in detail in the next section. We performed the experiment using parallel processing with three PCs.
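One immigration event of the scheme just described can be sketched as follows. This is a simplified, single-process sketch under assumed conventions (populations as lists of (fitness, chromosome) pairs); the asynchronous timing and nine-island layout are not modeled:

```python
import copy
import random

def migrate(islands, rate=0.1, rng=random):
    """One immigration event in the island model described above: two
    islands are chosen at random; in each, the worst `rate` fraction of the
    population is destroyed and replaced by copies of the other island's
    best individuals.  Populations are lists of (fitness, chromosome) pairs
    with fitness minimized.  A simplified sketch, not the paper's code."""
    a, b = rng.sample(range(len(islands)), 2)
    n = max(1, int(len(islands[a]) * rate))
    best_a = sorted(islands[a])[:n]
    best_b = sorted(islands[b])[:n]
    for isl, best in ((islands[a], best_b), (islands[b], best_a)):
        isl.sort(reverse=True)         # worst individuals first
        isl[:n] = copy.deepcopy(best)  # immigrants replace them
```

With a 0.1 rate and populations of 100, ten chromosomes move each way per event, matching the 10% destroy/adopt rule in the text.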
5 Numerical Experiments
We performed prior experiments to determine PC and PM for the center island; the solutions obtained are shown in Table 1. We adopted PC = 0.4 and PM = 0.6 as the center island values for st-GA and PC = 0.6 and PM = 0.4 as the center island values for rk-GA because they provided the best solutions.
We adopted PC = 0.6 and PM = 0.4 as the center island values for des-rkGA,
because it is based on rk-GA. We created 9 islands with PC = (0.4, 0.6, 0.8) and
PM = (0.2, 0.4, 0.6) for the experiments.
popSize denotes the population size. maxGen is the maximum generation number, used as the terminating condition for the experiments. We performed experiments, repeated 20 times, using maxGen = 5000 and popSize = (20, 50, 100). However, we regarded maxGen = 1000 as sufficient for the evolutionary process because no further improvement was detected after 1000 generations. Table 2 shows the best value Z_1 reached by each method at gen = (300, 500, 1000).
Figure 18 shows the evolutionary processes for each method when popSize was 100. The proposed des-rkGA produced the best final result because its evolutionary
Table 1. Prior experiment results for PC and PM (best Z_1 values)

Method  PM    PC = 0.2      PC = 0.4      PC = 0.6      PC = 0.8
st-GA   0.2   451,461,818   442,315,078   447,088,045   454,269,696
        0.4   445,044,804   440,918,667   444,501,564   444,782,298
        0.6   442,653,511   440,602,712   445,820,848   452,372,968
        0.8   445,865,777   443,723,937   451,882,261   454,711,777
rk-GA   0.2   412,925,351   406,037,962   408,355,139   413,251,110
        0.4   406,280,823   405,783,627   405,476,817   405,988,745
        0.6   408,604,249   406,269,725   405,763,570   413,679,127
        0.8   412,120,741   406,777,058   411,116,273   413,073,894
Table 2. Best Z_1 value by generation and population size

Method    popSize   Gen = 300     Gen = 500     Gen = 1000
st-GA     20        551,429,175   547,095,287   541,769,426
          50        499,464,718   485,094,520   482,185,269
          100       447,629,126   423,064,081   402,412,188
rk-GA     20        467,164,557   463,804,677   433,653,257
          50        452,496,499   434,436,525   394,435,691
          100       366,335,225   365,780,289   364,869,285
des-rkGA  20        418,809,097   396,053,541   389,092,924
          50        353,999,460   346,130,700   346,030,174
          100       348,536,438   345,863,047   345,695,925
Table 3. Best Z_1 value of each island by generation (popSize = 100)

Island    Gen = 10      Gen = 20      Gen = 50      Gen = 70
A         590,914,974   505,586,248   442,219,078   407,869,638
B         545,289,874   519,883,420   431,958,888   411,047,134
C         596,086,519   530,215,824   413,252,989   399,855,446
D         589,080,440   496,206,544   433,337,814   386,073,858
E         569,117,032   535,673,932   419,654,636   412,582,256
F         556,063,137   497,222,107   454,229,946   381,231,373
G         582,061,836   535,413,007   412,034,067   417,752,697
H         548,377,594   520,688,854   446,742,473   378,950,050
I         572,220,316   493,015,494   415,259,267   382,084,372
des-rkGA  545,289,874   493,015,494   412,034,067   378,950,050
speed was faster than that of the other compared methods. As shown in Table 2, Z_1 was 402,412,188 with st-GA, 364,869,285 with rk-GA, and 345,695,925 with des-rkGA. des-rkGA thus showed 16.41% and 5.55% improvements compared to st-GA and rk-GA, respectively. des-rkGA was also confirmed to provide stable results, because its standard deviation was only 3210, compared to 20,338 with st-GA and 7780 with rk-GA. Table 3 shows the evolutionary process of each island when popSize was 100, as in Table 2. The highlight shows the best solution for each generation, of which the best was obtained with des-rkGA. In a prior experiment, the PC and PM of island E were the best combination; however, in the evolutionary process for each island, many of the best solutions were produced at other islands. Table 4 shows the solution at each immigration rate, with the best solution by rk-GA shown as a reference. The experimental results show that when the immigration rate surpassed 50%, the results worsened. Figure 19, created from Table 4, shows that the best value was produced at a 10% immigration rate.
was 100. As shown in Table 2, there were differences when the number of gener-
ations was small.
In this experiment, we used three PCs of the same kind (dual-core AMD 1212, 2.0 GHz/2 MB); the memory size was 2 GB, and the development language was C#.
The computational time is shown in Table 6. We experimented with the test data for 90 days. When converted to one day, the computation times were 35.78, 19.98, and 20.45 s using st-GA, rk-GA, and des-rkGA, respectively. The average generation number when the solution arrived at the maxGen value was 685, 467, and 78 using st-GA, rk-GA, and des-rkGA, respectively. The generation time is shown in Table 6 as the time to arrive at the maxGen value: 55.56, 11.07, and 5.74 s using st-GA, rk-GA, and des-rkGA, respectively. There was no drastic improvement because the total CPU processing time was 9.45 s, but the CPU processing could be distributed. Moreover, the average generation number at which the des-rkGA value became better than the maxGen value of rk-GA was 165, which took 3.15 s. We thus also confirmed the advantage of des-rkGA in terms of computation time. This was due to using parallel processing with three PCs of the same kind. We believe that the PC environment affects the solutions.
6 Conclusions
Using data from an actual automobile company, we proposed a random key-based genetic algorithm with a distributed environment scheme (des-rkGA) for a multi-stage logistics system that calculates inventory values for many different cases. We proposed a logistics system that keeps the safety inventories only in DCs and that can cope with the location allocation problem. We performed numerical experiments with test data that were based on the disclosed data of a
References
1. Altiparmak F, Gen M, Lin L (2004) A priority-based genetic algorithm for supply
chain design. Comput Ind Eng 13:22–25
2. Gen M, Cheng R (2000) Genetic algorithms and engineering optimization. Wiley, Hoboken
3. Gen M, Cheng R (1997) Genetic algorithms and engineering design. Wiley, New
York
4. Dobos I (2001) Production strategies under environmental constraints: continuous-
time model with concave costs. Int J Prod Econ 71(1–3):323–330
5. Dobos I (2005) The effects of emission trading on production and inventories in
the Arrow-Karlin model. Int J Prod Econ 93–94(1):301–308
6. Dobos I (2007) Tradable permits and production-inventory strategies of the firm.
Int J Prod Econ 108(1–2):329–333
7. Gen M, Lin L (2009) Genetic algorithms. In: Wah B (ed) Wiley encyclopedia of
computer science and engineering. Wiley, pp 1367–1381
8. Graves SC, Willems SP (2003) Supply chain design: safety stock placement and
supply chain configuration. Handbooks Oper Res Manage Sci 11:95–132
9. Inoue H (2008) Utilization of genetic algorithm for transportation planning
improvement in multi-objective logistics system. J Soc Plant Eng Jpn 19:252–259
(in Japanese)
10. Inoue H, Gen M (2012) A multistage logistics system design problem with inventory
considering demand change by hybrid genetic algorithm. IEEJ Trans Electron Inf
Syst 95(5):56–65
11. Jamrus T, Chien CF et al (2015) Multistage production distribution under uncer-
tain demands with integrated discrete particle swarm optimization and extended
priority-based hybrid genetic algorithm. Fuzzy Optim Decis Making 14(3):265–287
12. Lagodimos AG, Koukoumialos S (2008) Service performance of two-echelon supply
chains under linear rationing. Int J Prod Econ 112(2):869–884
13. Lee JE, Chung KY et al (2015) A multi-objective hybrid genetic algorithm to
minimize the total cost and delivery tardiness in a reverse logistics. Multimedia
Tools Appl 74(20):9067–9085
14. Lin L, Gen M (2008) An effective evolutionary approach for bicriteria shortest path
routing problems. IEEJ Trans Electron Inf Syst 128(3):416–423
15. Lin L, Gen M (2009) A random key-based genetic algorithm for agv dispatching
in FMS. Int J Manufact Technol Manage 16(1):58–75
16. Mahadevan B, Pyke DF, Fleischmann M (2003) Periodic review, push inventory
policies for remanufacturing. Soc Sci Electron Publ 151(3):536–551
17. Min H, Zhou G (2002) Supply chain modeling: past, present and future. Comput
Ind Eng 43:231–249
1 Introduction
employer’s place of business, under the employer’s control, and with the mutual
expectation of continued employment”; any arrangement that lacks one or more
of these attributes is nonstandard [7]. There are many different forms of non-
standard work arrangements, such as temporary, contract, part-time and tempo-
rary agency work, to enhance organizational flexibility and reduce employment-
related cost [7]. Temporary employment, as the most widely adopted form of nonpermanent employment, refers to employment featuring shorter contract lengths and lower expectations of continued employment.
According to the latest report by the United Nations International Labour Organization, in the United States one in four employees worked part time in 2014, up from 19.6% in 2009. In 33 European countries, an average of 12.3% of employees were on temporary contracts in 2014. In Asia, the proportion of temporary workers is even higher, ranging from 24% in the Philippines to 67% in Viet Nam. The percentage is sizeable in China, India, Indonesia, and Malaysia.
In response to the growing use of nonstandard workers, more scholars are
conducting empirical and theoretical research on this phenomenon. Many of these studies focus on the differences between standard and nonstandard employees in
work-related attitudes, including satisfaction, commitment, loyalty, and in-role
& extra-role behaviors, like organizational citizenship behavior, turnover and
work performance [2,5,12,14].
In contrast to its wide acceptance in management practice, scholars have argued that temporary employment has quite a few drawbacks. Many studies have reported that temporary jobs are associated with lower work status, characterized by low wages, low welfare, and high stress [4]. Scholars have also observed that temporary employees often put in less effort than standard employees; hence, alternative work arrangements increase the difficulty of human resource management [12].
Previous studies in the U.S. and Europe have examined the relationship between temporary employment and employees' work attitudes and performance. Christin and Linn [14] recruited a sample of 350 entry-level employees from six restaurants in the United States and found that part-time workers did show some organizational citizenship behavior (OCB) differences compared to their full-time colleagues.
Thorsteinson [5] conducted a meta-analysis demonstrating that full-time employees were more involved in their jobs than part-time employees, and that there was little difference between full-time and part-time employees in job satisfaction, organizational commitment, and turnover intention.
Compared with the abundant empirical studies in western countries, research on temporary employment has not attracted enough attention in China. To bridge the gap in the literature, we use a two-wave panel design to examine whether different work arrangements influence employees' OCB and turnover intention. This study makes two major contributions to the literature. First, it investigates the relationship between temporary employment status, OCB, and turnover intention in the Chinese cultural context. Our research fills this gap in the literature and advances the current understanding of blended workforce arrangements and employees' OCB and turnover intention.
The Impact of Temporary Employment on Employees’ Organizational 793
3 Methodology
3.1 Data
Town Hospital, Sunduan Town Hospital and Fushan Community Hospital. All
the hospitals are public community hospitals with more than 150 employees,
providing service to people living in the town or the community.
The sample consists of both standard employees and temporary employees,
including doctors, nurses, and other employees, such as drivers and accountants, who provide support services. Using information obtained from the leaders of the four hospitals, we distributed questionnaires to each individual separately. We finally obtained 209 valid questionnaires, for a response rate of 80.2% (shown in Table 1).
Table 1. Sample

                 N     %
Valid sample     209   57.89
Invalid sample   152   42.11
Sum              361   100
3.2 Measurement
Unless otherwise noted, we use a 5-point Likert scale (“strongly disagree” = 1 to
“strongly agree” = 5) for all scales. Each scale’s coefficient α is noted below in
Table 5. Because the questionnaires are distributed in Chinese organizations, we
follow the strict procedure of translating, back-translating and cultural adjust-
ment of the original scales to generate a Chinese version.
(1) Employment Status
We collect information on subordinates' employment status with a three-step procedure. First, the supervisors are asked to report the subordinates' employment status two weeks before the survey. Then, in the first-wave survey, the subordinates report their own employment status. The survey question is "You are currently 1 'permanent employee', 2 'contract employee', or 3
796 X. Qian et al.
about other job opportunities” (Cronbach’s α = .94). All of items of the turnover
intention scale are listed in Table 3.
(4) Organizational identification
Organizational identification was measured by using a 6-item scale developed
by Mael and Ashforth [9]. Examples are: “When someone criticizes my team, it
feels like a personal insult”, “I am very interested in what others think about
my team” (Cronbach’s α = .91). All the items of the turnover intention scale
are listed in Table 4.
(5) Control Variables
We controlled five demographic variables, which are age, gender, education back-
ground, organizational tenure, and employees’ difference. Age and tenure were
measured in years. Gender was measured as a dummy variable, 1 for male and
0 for female. As for education background, we used 1 to represent bachelors
and above, 2 to represent technical school graduates, 3 to represent senior high
school graduates and below. Treatment difference is employees’ cognition on the
organization’s treatment differences between temporary workers and permanent
workers on income level, welfare, uniform, training opportunities, etc.
4 Results
Tables 5 and 6 provide the descriptive statistics and correlations of the variables
in this study. In Table 5, the mean, standard deviation, minimum, and maximum value of each variable are reported. As the figures in Table 5 show, in the final sample (N = 209), 26.3% are male. The average age of the participants is 37.6 years (SD = 8.76). The average organizational tenure is 11.7 years (SD = 8.83). As for education background, we used 1 to represent bachelors and above, 2 to represent technical school graduates, and 3 to represent senior high school graduates and below. The average education background of participants is 1.41 (SD = 0.69).
Table 6 reports the correlation matrix of the research variables. As expected, the independent variable of employment status is significantly correlated with organizational identification (r = 0.17), turnover intention (r = -0.12), and OCB (r = 0.12). The significance of the correlations between employment status and organizational identification, turnover intention, and OCB indicates that permanent employees show higher organizational identification, less turnover intention, and more OCB, as compared to temporary employees.
The Cronbach’s alphas of major variables are listed in Table 6. The Cron-
bach’s alphas of organizational identification, OCB and turnover intention’s
Cronbach’s alphas are 0.91, 0.91 and 0.94. All the figures exceed the accept-
able criteria, which is 0.70, which suggests that the validity of the scales chosen
in this study is acceptable.
To further determine the factor structure of our data set, we conducted confirmatory factor analyses of OCB, turnover intention, and organizational identification. For OCB, the factor loadings range from 0.60 to 0.85. For turnover
Variable                 1         2         3        4         5         6        7         8         9
1. Age                   1         -         -        -         -         -        -         -         -
2. Edu                   0.47***   1         -        -         -         -        -         -         -
3. Gender                0.27***   0.128*    1        -         -         -        -         -         -
4. Tenure                0.67***   0.30***   0.09     1         -         -        -         -         -
5. Treatment difference  0.03      0.14**    0        -0.06     1         -        -         -         -
6. ES                    -0.13**   -0.53***  -0.03    0.03      -0.32***  1        -         -         -
7. OI                    -0.06     -0.11     -0.15**  -0.02     -0.08     0.17**   (0.91)    -         -
8. TUR                   -0.20***  -0.03     0.08     -0.23***  -0.01     -0.12**  -0.25***  (0.94)    -
9. OCB                   0.02      -0.09     0.03     -0.05     -0.01     0.12*    0.57***   -0.35***  (0.91)
Note: N = 209; * p < 0.1, ** p < 0.05, *** p < 0.01. Values on the diagonal in parentheses are Cronbach's alphas.
intention, the factor loadings range from 0.82 to 0.95. For organizational identification, the factor loadings range from 0.67 to 0.90. The detailed factor loadings of each OCB item are listed in Table 7 as an illustration. Since the component reliability of each item is higher than the criterion of 0.45, the measurement in this study is acceptable.
We first test Hypotheses 1 and 2. Then, after mean-centering the variables, we use moderated regression analyses to test Hypotheses 3 and 4.
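The moderated-regression step just described (mean-center the moderator, then regress on the predictor, the centered moderator, and their product) can be sketched as follows. The data here are simulated stand-ins, since the study's data are not public, and the real models also include the control variables:

```python
import numpy as np

def moderated_ols(y, x, m):
    """Moderated regression sketch: regress y on x, the mean-centered
    moderator m, and their interaction via ordinary least squares.
    Returns [intercept, b_x, b_m, b_interaction].  Illustrative only;
    the study's models also include control variables."""
    m_c = m - m.mean()
    X = np.column_stack([np.ones_like(y), x, m_c, x * m_c])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simulated stand-ins: employment status dummy (ES), organizational
# identification (OI), and turnover intention, with n matching the study.
rng = np.random.default_rng(0)
n = 209
es = rng.integers(0, 2, n).astype(float)
oi = rng.normal(3.5, 0.8, n)
tur = 3.0 - 0.3 * es - 0.2 * es * (oi - oi.mean()) + rng.normal(0.0, 0.1, n)
print(np.round(moderated_ols(tur, es, oi), 2))
```

A significant coefficient on the interaction term is what supports a moderation hypothesis such as Hypothesis 3.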
Table 7. Confirmatory factor analysis results for OCB items

Path        Standardized Coef.   OIM Std. Err.   z       P > |z|   95% Conf. Interval
m56 <- OCB  0.598                0.049           12.28   0         [0.502, 0.693]
  cons      4.190                0.216           19.37   0         [3.766, 4.614]
m57 <- OCB  0.732                0.036           20.27   0         [0.661, 0.803]
  cons      4.265                0.220           19.41   0         [3.834, 4.695]
m58 <- OCB  0.695                0.040           17.52   0         [0.617, 0.773]
  cons      4.222                0.218           19.39   0         [3.796, 4.649]
m59 <- OCB  0.758                0.034           22.54   0         [0.692, 0.824]
  cons      4.471                0.229           19.49   0         [4.022, 4.921]
m60 <- OCB  0.685                0.040           16.98   0         [0.606, 0.764]
  cons      3.715                0.194           19.11   0         [3.334, 4.096]
m61 <- OCB  0.846                0.024           34.88   0         [0.799, 0.894]
  cons      4.375                0.225           19.45   0         [3.934, 4.816]
m62 <- OCB  0.762                0.033           22.81   0         [0.696, 0.827]
  cons      4.834                0.246           19.62   0         [4.351, 5.317]
m63 <- OCB  0.812                0.028           28.53   0         [0.756, 0.868]
  cons      4.697                0.240           19.58   0         [4.227, 5.165]
Note: LR test of model vs. saturated: chi2(20) = 107.71, Prob > chi2 = 0.0000
5 Conclusion
In this study, we investigated the relationships between temporary employment, employees' OCB, and turnover intention. We also proposed that employees' organizational identification moderates these relationships. Results show that, if education is not controlled, standard employees show more OCB than temporary employees. Compared with temporary employees, standard employees also show less turnover intention. Organizational identification positively moderates the relationship between temporary employment and turnover intention, indicating that if a standard employee has higher organizational identification, the impact of employment form on turnover intention is much stronger.
Acknowledgements. The authors gratefully acknowledge the support of the National Science Foundation of China (Grant No. 71402108), the Humanities and Social Sciences Foundation of the Ministry of Education (Grant No. 14YJC630103), the Fundamental Research Funds for the Central Universities (No. 2016JJ019), and the Young Faculty Research Fund of Beijing Foreign Studies University (2016JT003).
Integration of Sound and Image Data
for Detection of Sleep Apnea
1 Introduction
Sleep apnea syndrome (SAS) is a disorder in which breathing repeatedly stops
during sleep. SAS can cause daytime sleepiness, which may lead to traffic
accidents. In addition, SAS induces serious circulatory disease [6]. SAS symptoms
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_65
2 Methods
In this study, we have improved the previously proposed method. We used PSG
data and evaluated the accuracy of the improved method to classify SAS and
non-SAS more precisely by integrating the results of sound and image processing.
Overnight PSG data were collected from the sleep studies of 55 patients (nine
females, 61.2 ± 12.4 years) who were suspected to suffer from SAS and referred
to Kanazawa Medical University (Ishikawa, Japan). The subjects provided
written consent to participate in the sleep studies, which were conducted in 2014.
Four subjects were diagnosed as normal and 51 were diagnosed with different
levels of SAS. To obtain more non-SAS subject data, overnight pulse oximeter
data were collected from 24 employees of the Industrial Research Institute of
Ishikawa or CosmoSummit Co., Ltd. in 2014–2015. These employees, none of
whom complained about sleep apnea symptoms, also provided written consent and
recorded their own data according to a data acquisition manual. From the analysis
of the pulse oximeter data, three subject datasets were considered to indicate
SAS, and 24 subject datasets were considered to indicate non-SAS. Subject
information, such as age, gender, height, and weight, was also collected (Table 1).
The overnight sounds of the participants were recorded using a microphone
(SONY ECM-360) placed near the subject’s head. The sampling rate was
11025 Hz at 16-bit resolution. Overnight video of the participants was recorded
using a camera (IDS UI-1220LE-M-GL) with a lens (TAMRON12VM412ASIR)
that is sensitive to infrared light. The camera was placed at the side of the bed.
An example video frame is shown in Fig. 1.
806 T. Kasahara et al.
Since the recorded sound data contain various sounds, e.g., subject and bedding
movements and coughs, we used the unsupervised method proposed by Azarbarzin
et al. [1] to extract snore sounds. The unsupervised snoring-sound extraction
was performed as follows. After applying a band-pass filter (BPF, 150–5000 Hz)
to the recorded data, each section in which the volume remained greater than a
predefined threshold for 0.4 to 2.0 s was extracted as an episode. The extracted
episodes include snoring episodes and episodes due to noise other than snoring.
For each episode, a short-time Fourier transform (STFT) was computed using
50-ms windows with 50% overlap. The STFT results for an episode were averaged,
and the spectral intensities in the range 0–5000 Hz (in practice, 150–5000 Hz
after the BPF) were summed every 500 Hz to create a 10-dimensional vector.
Principal component analysis was performed on the 10-dimensional vectors of all
episodes, reducing the feature space to two dimensions, and classification was
performed using fuzzy c-means clustering. In this study, we adopted the three
classes used in Azarbarzin's method. Among the three classes, the cluster with
the greatest number of episodes was considered the snoring cluster, and episodes
in this cluster were considered snoring episodes.
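The pipeline above (episode detection by volume threshold, 50-ms STFT windows with 50% overlap, 500-Hz band energies, PCA to two dimensions) can be sketched as follows. This is a simplified illustration, not the authors' implementation; the threshold handling, window shape, and all parameter values are assumptions:

```python
import numpy as np

def extract_episodes(x, fs, threshold, min_s=0.4, max_s=2.0):
    """Return (start, end) sample indices of sections whose short-term volume
    stays above `threshold` for 0.4 to 2.0 s (one candidate episode each)."""
    frame = int(0.05 * fs)                       # 50-ms volume frames
    n = len(x) // frame
    vol = np.abs(x[:n * frame]).reshape(n, frame).mean(axis=1)
    active = vol > threshold
    episodes, start = [], None
    for i, a in enumerate(np.append(active, False)):   # trailing False closes a run
        if a and start is None:
            start = i
        elif not a and start is not None:
            dur = (i - start) * frame / fs
            if min_s <= dur <= max_s:
                episodes.append((start * frame, i * frame))
            start = None
    return episodes

def band_energy_vector(x, fs, n_bands=10, band_hz=500, win_s=0.05):
    """Average STFT magnitude over 50-ms windows (50% overlap), then sum the
    spectral intensity in each 500-Hz band -> 10-dimensional feature."""
    win = int(win_s * fs)
    hop = win // 2                               # 50% overlap
    frames = [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]
    spec = np.mean([np.abs(np.fft.rfft(f * np.hanning(win))) for f in frames], axis=0)
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    return np.array([spec[(freqs >= b * band_hz) & (freqs < (b + 1) * band_hz)].sum()
                     for b in range(n_bands)])

def pca_2d(features):
    """Project the 10-dim episode features onto their top two principal axes."""
    X = features - features.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(X.T))
    return X @ v[:, np.argsort(w)[::-1][:2]]
```

Fuzzy c-means with three clusters would then be run on the 2-D projections, and the largest cluster taken as snoring.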
Integration of Sound and Image Data for Detection of Sleep Apnea 807
3 Results
In sound processing, non-SAS data with a small snore count were greatly devi-
ated from the origin when plotted on 2D feature coordinate graph, and learning
in the SVM did not provide proper results. The SVM result obtained using all
data is shown in Fig. 2. Since snoring was not detected sufficiently, data with
than five snores per hour were assumed to be non-SAS, and data classified by
the SVM for the remaining data were taken as the sound processing result. The
SVM result using data with less than five snores per hour is shown in Fig. 3:
In image processing, MSE analysis was performed to calculate the Bhattacharyya
distance, and the parameters sorted in descending order are shown in Table 2.
The top two parameter sets, (m, r, t) = (4, 0.10, 30) and (2, 0.25, 34), were
adopted for the SVM. The sample entropy when (m, r) = (4, 0.10) is shown in
Fig. 4, and the SVM result in Fig. 5.
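Sample entropy SampEn(m, r) [8], on which the MSE analysis rests, counts pairs of length-m template vectors that match within tolerance r and asks how often the match survives extension to length m + 1. A minimal sketch of one common formulation (r expressed as a fraction of the series' standard deviation, self-matches excluded; details such as the exact template count vary between implementations):

```python
import numpy as np

def sample_entropy(x, m, r):
    """Sample entropy SampEn(m, r) of a 1-D series: -log(A/B), where B counts
    template pairs matching at length m and A those still matching at m + 1."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()                    # tolerance as a fraction of the SD

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templates) - 1):        # pairs i < j: no self-matches
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += int(np.sum(d <= tol))             # Chebyshev distance <= tol
        return c

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float('inf')
```

A regular signal (e.g. a sine wave) yields a low value, while white noise yields a high one, which is what makes the (m, r) parameter choice in Table 2 discriminative.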
Accuracy was calculated by leave-one-out cross validation. With only image
processing, the accuracy was 81.7%, sensitivity was 81.5%, and specificity was
82.1%. With only sound processing, the accuracy was 85.4%, sensitivity was
85.2%, and specificity was 85.7%. These results show that sound processing
outperformed image processing.
Fig. 3. SVM result in sound processing with more than five snores per hour
In the integrated result, the accuracy was 89.0%,
sensitivity was 92.6%, and specificity was 75.0%, which shows a decrease in
specificity compared to the sound processing results; however, improvements to
accuracy and sensitivity were observed. Table 3 compares classification results
and the final results for image and sound processing.
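For reference, the three figures reported above follow directly from the confusion counts of the leave-one-out predictions (each subject is held out, classified, and the counts accumulated). A minimal sketch, with label 1 = SAS:

```python
def screening_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels (1 = SAS)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate: SAS correctly flagged
        "specificity": tn / (tn + fp),   # true-negative rate: non-SAS correctly cleared
    }
```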
Overnight PSG was used to obtain the number of apnea events and the
number of hypopnea events. Subjects were classified as apnea or hypopnea type
by comparing these events. In the sound processing results, there were eight
false negatives out of 54 positive cases; however, in the integrated results, false
negatives were reduced to four. In this case, among the four subjects that changed
from false negatives to true positives, three subjects were hypopnea type. Of the
19 apnea type subjects, only one was mistaken as a false negative with sound
processing. Thus, sound processing is effective for detecting apnea type SAS but
less effective for hypopnea type SAS. However, by integrating the sound and
image processing results, the false negatives were reduced by half.
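The integration rule suggested by these observations — trust the sound-based result when snoring is frequent, fall back on the image-based result otherwise — can be sketched as follows. The five-snores-per-hour cutoff is taken from the text; the function name and argument layout are illustrative assumptions:

```python
def integrated_screening(snores_per_hour, sound_pred, image_pred, snore_cutoff=5):
    """Combine the two classifiers (predictions are 1 = SAS, 0 = non-SAS):
    rely on the sound-based SVM when enough snores were extracted, and fall
    back on the image-based result when snoring is too sparse to be reliable."""
    if snores_per_hour >= snore_cutoff:
        return sound_pred    # frequent snoring: sound features are informative
    return image_pred        # few snores: use body-movement (image) features
```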
Simple screening methods that can detect SAS are in demand. Therefore, we
analyzed sound and image data obtained using microphones and cameras, which
are inexpensive and familiar sensors, and proposed a screening method to
classify SAS and non-SAS subjects. In this study, we focused on the number of
snores per hour. When the number of extracted snores was large, we used sound
processing because of its good accuracy. However, sound processing gave low
accuracy when the number of snores was small; thus, we proposed a complementary
method using image processing. The experimental results demonstrate that the
proposed integrated method (sound and image data) can screen with higher
accuracy than sound or image data alone. Furthermore, the proposed method is
effective for screening hypopnea-type SAS.
References
1. Azarbarzin A, Moussavi Z (2011) Automatic and unsupervised snore sound extrac-
tion from respiratory sound signals. IEEE Trans Biomed Eng 58:1156–1162
2. Azarbarzin A, Moussavi Z (2012) Snoring sounds variability as a signature of
obstructive sleep apnea. Med Eng Phys 27:479–485
3. Costa M, Goldberger A, Peng C (2002) Multiscale entropy analysis of complex
physiologic time series. Phys Rev Lett 89:068102
4. Gederi E, Clifford G (2012) Fusion of image and signal processing for the detec-
tion of obstructive sleep apnea. In: 2012 IEEE-EMBS International Conference on
Biomedical and Health Informatics, pp 890–893
5. Kailath T (1967) The divergence and Bhattacharyya distance measures in signal
selection. IEEE Trans Commun Technol 15:52–60
6. Kario K (2009) Obstructive sleep apnea syndrome and hypertension: ambulatory
blood pressure. Hypertens Res 32:428–432
7. Nomura K, Kasahara T et al (2015) Development of a system for detecting
obstructive sleep apnea by combining image and sound processing. J Nanjing Univ
Aeronaut Astronaut 27:12–20
8. Richman JS, Moorman JR (2000) Physiological time-series analysis using approx-
imate entropy and sample entropy. Am J Physiol Heart Circ Physiol
278:H2039–H2049
9. Tanigawa T, Tachibana N et al (2004) Relationship between sleep-disordered
breathing and blood pressure levels in community-based samples of japanese men.
Hypertens Res 17:479–484
10. Yang C, Cheung G et al (2016) Sleep apnea detection via depth video & audio
feature learning. IEEE Trans Multimedia 1–5
11. Young T, Palta M et al (1993) The occurrence of sleep-disordered breathing among
middle-aged adults. New Engl J Med 17:1230–1235
Pricing Strategy Study
on Product Crowdfunding
1 Introduction
Product crowdfunding means that investors put their money into developing a
product or service under an agreement with the fundraisers; when the product
or service begins to be pre-sold or meets the conditions for external sale, the
fundraisers provide the developed product or service to the investors. Product
crowdfunding needs a third-party platform to reach the public. The relevant
information about the products, services, and producers can be obtained through
the platforms, which saves transaction costs while achieving a certain
advertising effect.
2 Literature Review
On the platforms of product crowdfunding, pre-sale is generally the main mode;
it serves price discovery well and can help project launchers or entrepreneurs
create favorable conditions for price discrimination. Pigou [8] defines price
discrimination as firms charging different prices for different markets,
consumers, or purchase quantities, and puts forward the theory of three degrees
of price discrimination. In itself, price discrimination is simply a category
of pricing strategy, carrying no value judgment. In industries with more
competition, price discrimination is widely used in a variety of flexible forms.
It is an effective pricing strategy, not only helping to enhance the
competitiveness of enterprises to achieve their business objectives, but also
adapting to the psychological differences of consumers to meet needs at
multiple levels. Rochet and Thanassoulis [10] declare that price discrimination
is an economic phenomenon and a trading strategy popularly used by sellers;
in economic terms, implementing a price discrimination strategy is positive for
the sellers' welfare and can thus expand their profits.
Nocke et al. [7] studied the relation between product crowdfunding and price
discrimination in the context of asymmetric information. In an intertemporal
setting in which individual uncertainty is resolved over time, advance-purchase
discounts can serve as price discrimination between consumers with different
expected valuations for the product. Consumers with a high expected valuation
will purchase the product before learning their actual valuation at the offered
advance-purchase discount; consumers with a low expected valuation will wait
and purchase the goods at the regular price only when their realized
valuation is high.
816 M. Luo et al.
When product quality is unknown, consumers with higher expectations who choose
to participate should enjoy discounts and pay lower prices than regular
customers. Enterprises have to target customers with different expectations.
Belleflamme et al. [11] isolated
some important features of crowdfunding on the basis of a unique, hand-collected
dataset and proposed a model of crowdfunding that encompasses several of these
key features. By constructing the profit function, they obtained the pricing expres-
sion of the product and the ownership of the product. On this basis, the enterprise
financing limit is given, and the uncertainty of product quality and information
is elaborated. Hu et al. [2] and Ping [9] built optimization models maximizing
producers' profits as the objective function to analyze the pricing mechanism of
crowdfunding products, drawing the conclusion that the model helps open up the
initial public offering of the product to the market and that manufacturers'
profits can be expanded at the same total sales.
Lawton and Marom [5] pointed out that crowdfunding is not only a new
source of funding for projects, but also a way to quickly attract the attention of
users, form early user communities; and because of the large number of media
reports and discussions, pre-release has become a common corporate marketing
strategy, so crowdfunding is still a very effective marketing tool. Based on the
choices of consumers, Bayus [1] applied a profit-maximization model to jointly
determine timing and pricing decisions from the consumer utility function.
Zhang et al. [14] and Luo et al. [6] constructed a two-stage pricing model based
on the impact of Internet word-of-mouth (IWOM) on customers' perceived value of
the product. IWOM can change customers' perceived value of products; customers
with deep insight seek to maximize the overall effectiveness of product
purchasing and brand spreading, and dominant sellers can maximize two-stage
total profits through reasonable pricing strategies.
Huang [3] and Krishnan et al. [4] elaborated pricing strategies and analyzed
four strategies, such as new-product pricing and discount strategies, which can
help enterprises make full use of pricing strategy to offer reasonable market
prices and maximize competitiveness. However, all the above studies address only
traditional pricing strategies and cannot explain today's popular "free"
phenomenon. The 1P business model proposed by Wang [12,13] breaks the
traditional pricing space by integrating a third party to obtain a new
competitive advantage: it puts pricing in the central position and innovatively
stresses positioning and locking in a third party to pay, which makes it
possible to profit with a price lower than average cost and to promote with a
price higher than customers' expectations.
In this paper, through in-depth research on price discrimination theory and the
crowdfunding pricing mechanism, we not only consider the influence of the
crowdfunding price on demand but also further study the effects of online brand
spreading and advertising, constructing the maximum-profit function of product
crowdfunding in the two cases to obtain the pricing expression under the
financing-amount constraint. This paper also studies the pricing theory of the
business model proposed by Wang [13] and applies it to the crowdfunding business
model, drawing the corresponding pricing strategies.
Qd = d(P ).
Only if the total payment of the participants is not less than the total
financing amount K can the raisers get the money from the platform smoothly.
This requires:

Σ_{i=1}^{n} Pi Qi ≥ K.
To maximize the profits of crowdfunding, it is required to consider how to
design the relations between various levels of prices and quantities, that is, to
figure out the maximum profit objective function with some constraints
max R = Σ_{i=1}^{n} Pi Qi − K − β Σ_{i=1}^{n} Qi

s.t.  Pn = An − Ep Qn,
      Σ_{i=1}^{n} Pi Qi ≥ K,
      Pi ≥ 0,
where the function relation between the price and the demand is
Qn = (An − Pn )/Ep .
An refers to the maximum price that the crowdfunding consumers are willing to
pay, and Ep is the price elasticity of demand.
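For the single-price case, substituting the demand relation Qn = (An − Pn)/Ep into the profit gives R(P) = (P − β)(A − P)/Ep − K, which is maximized at P* = (A + β)/2. The following sketch checks this numerically by grid search; all parameter values are illustrative assumptions, not figures from the paper:

```python
def profit(P, A=100.0, Ep=0.5, beta=20.0, K=1000.0):
    """Single-price crowdfunding profit R = (P - beta) * Q - K with linear
    demand Q = (A - P) / Ep.  All parameter values here are illustrative."""
    Q = (A - P) / Ep
    return (P - beta) * Q - K

# Grid search over prices satisfying the financing constraint P * Q >= K
# confirms the analytic optimum P* = (A + beta) / 2.
prices = [p / 10.0 for p in range(0, 1001)]                        # 0.0 .. 100.0
feasible = [p for p in prices if p * (100.0 - p) / 0.5 >= 1000.0]  # revenue >= K
best = max(feasible, key=profit)
print(best)   # 60.0 = (100 + 20) / 2
```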
Assume that β is the variable cost of producing a unit of this product and that
the initial funding amount K is the fixed cost needed for the firm to produce
the required product smoothly. The total cost of the product is

C = K + Kc + β Σ_{i=1}^{n} Qi.
Only if the total payment of the participants is not less than the total
financing amount K can the raiser get the money from the platform to produce
smoothly. This requires:

Σ_{i=1}^{n} Pi Qi ≥ K.
Taking the advertising effect into account, the maximum profit of the
crowdfunding is obtained by maximizing the profit objective function under the
constraints:

max R = Σ_{i=1}^{n} Pi Qi − K − Kc + (Gr − β) Σ_{i=1}^{n} Qi.
i=1 i=1
The introduction of third-party investors into the product pool means that the
unit product price P no longer equals the target customer's payment PC; it must
be augmented by the price PB paid by the third-party investor B, that is,
P = PC + PB, so PC = P − PB. When the profit is zero, π = (PC + PB) − AC = 0,
hence PC = AC − PB < AC. This shows that even if the price of the product sold
to the target customer is less than the average cost, the sale can still be
profitable; how large the profit is depends on the price paid by the third
party and the share of the cost it covers.
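A tiny numerical illustration of this arithmetic (the figures are assumptions for illustration, not from the paper):

```python
def unit_profit(P_C, P_B, AC):
    """Unit profit in the 1P model: the seller collects the customer price P_C
    plus the third-party payment P_B, against average cost AC."""
    return (P_C + P_B) - AC

# Illustrative numbers: average cost 10, third party pays 4.
AC, P_B = 10.0, 4.0
P_C = 7.0                          # customer price below average cost ...
print(unit_profit(P_C, P_B, AC))   # 1.0 -> ... yet each unit is profitable
```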
The theoretical 1P business model, whose pricing space breaks the upper and
lower limits of traditional pricing, is shown in Fig. 1.
Obviously, price discrimination can benefit the sellers as much as possible,
since consumer surplus that would belong to the buyers is transferred to the
sellers. Price discrimination is economically efficient: through it, the
sellers' maximum benefit equals the value of social-welfare maximization. Of
course, the sellers must be able to distinguish the buyers' characteristics for
price discrimination to work; the differences may lie in demand intensity,
purchase quantity, or price elasticity of demand. The basic principle of
implementing price discrimination is that the marginal revenues in different
markets are equal, and equal to marginal cost. Crowdfunding managers can set a
higher price for the market with lower price elasticity of demand, implementing
a strategy of "less sales but more profit", while a lower price can be set for
the market with greater price elasticity of demand, obtaining the maximum
profit through "less profit but more sales".
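With linear demands, the rule "marginal revenue equal in every market and equal to marginal cost" gives each segment's price in closed form: MR_k = a_k − 2·b_k·q_k = c yields q_k = (a_k − c)/(2·b_k) and P_k = (a_k + c)/2. A sketch with illustrative numbers (all parameters are assumptions for the example):

```python
def discriminate(a1, b1, a2, b2, c):
    """Third-degree price discrimination across two markets with linear inverse
    demands P_k = a_k - b_k * q_k and constant marginal cost c: choose each
    quantity so that marginal revenue MR_k = a_k - 2*b_k*q_k equals c."""
    q1, q2 = (a1 - c) / (2 * b1), (a2 - c) / (2 * b2)
    p1, p2 = a1 - b1 * q1, a2 - b2 * q2
    return (p1, q1), (p2, q2)

# Market 1 has the higher willingness to pay (lower elasticity at the optimum).
(p1, q1), (p2, q2) = discriminate(a1=120, b1=1.0, a2=80, b2=1.0, c=20)
print(p1, p2)   # 70.0 50.0 -> the less elastic market pays the higher price
```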
them will be placed on the product experience, then the high-quality product
will win a good name, and the manufacturers will thus obtain rich profits.
In short, one needs to take the actual situation of crowdfunding products into
account when choosing skimming, penetration, or satisfactory pricing.
Considering the costs of advertising and trial-and-error, the maximum product
profit can be gained by selling through crowdfunding with a sound pricing
strategy, given the novelty and advertising needs of these products.
5 Conclusion
5.1 Theoretical Contribution
In this paper, we studied in depth the price discrimination theory and the
pricing mechanism of crowdfunding. By considering the effect of the
crowdfunding price on demand to compose the profit function and the financing
constraint conditions, we derived the crowdfunding pricing expression under
price discrimination. Compared with the traditional sales modes, the total
sales of the crowdfunding products remain the same,
while the proportions of crowdfunding consumers and conventional consumers
change in opposite directions. The crowdfunding mode can distinguish the
different preferences of different consumer groups, from which the market's
attitude towards the products can be judged. The unit price given by
crowdfunding consumers is higher than that of conventional consumers. The
crowdfunding raisers face a funding threshold, and if it is not reached, the
crowdfunding fails. Not only can new small and medium-sized enterprises raise
finance without much difficulty through crowdfunding, but businesses can also
achieve more profit than with traditional sales modes.
Also, we further studied the maximum profit function of product crowdfunding
with advertising effects. Finally, we studied the pricing theory of the
third-party-pays business model in the 1P theory and applied it to the
crowdfunding business model to obtain the corresponding pricing strategies
aimed at maximum profit. The paper is expected to contribute to theoretical
research in the crowdfunding financing and pricing field, and to provide
references for enterprises and entrepreneurs making pricing decisions for
their products.
Due to the short history of Internet crowdfunding, the domestic and foreign
literature on its product pricing is correspondingly scarce. Crowdfunding
product pricing is an emerging research field with great value and potential.
Moreover, with the rapid iteration of technologies and business models, the
crowdfunding industry is changing incessantly, and pricing strategy research
should keep pace with this development. This paper gives a theoretical
expression of crowdfunding pricing using price discrimination, as well as the
maximum profit function considering advertising effects; more factors may need
to be taken into account in actual cases.
References
1. Bayus BL (1997) Speed-to-market and new product performance trade-offs. J Prod
Innov Manag 14(6):485–497
2. Hu M, Li X, Shi M (2014) Product and pricing decisions in crowdfunding. Soc Sci
Electron Publishing 34(3):331–345
3. Huang ZY (2012) Discussion on product pricing strategies. J Nanchang Coll Educ
27(11):12–18 (in Chinese)
4. Krishnan TV, Bass FM, Jain DC (1999) Optimal pricing strategy for new products.
Manag Sci 45(12):1650–1663
5. Lawton K, Marom D (2013) The crowdfunding revolution: how to raise venture
capital using social media (ebook). McGraw Hill, New York
6. Luo Y, Gao L, Huang J (2015) Price and inventory competition in oligopoly tv
white space markets. IEEE J Sel Areas Commun 33(5):1002–1013
7. Nocke V, Peitz M, Rosar F (2011) Advance-purchase discounts as a price discrim-
ination device. J Econ Theor 146(1):141–162
8. Pigou AC, Todd AJ (1934) The economics of welfare. Am J Sociol 19(1):23–30
9. Ping YY (2015) The pricing mechanism of commodity crowdfunding. Zhongnan
Univ Econ Law 40(3):23–30 (in Chinese)
10. Rochet JC, Thanassoulis JE (2016) Stochastic delivery and intertemporal price
discrimination with multiple products. Social Science Electronic Publishing
11. Sahm M, Belleflamme P, Lambert T et al (2014) Corrigendum to “crowdfunding:
tapping the right crowd”. J Bus Ventur 29(5):610–611
12. Wang JG (2015) An investigation into modeling marketing. J Peking Univ
52(4):95–110 (in Chinese)
13. Wang JG (2016) The classification, innovation and design of business model in
business ecosystem. Peking Univ 6(1):1–17 (in Chinese)
14. Zhang MX, Lei M, Zheng XN (2013) The impact of online word-of-mouth dissem-
ination on the oligarch seller’s pricing strategy. Mark Sci 9(2):71–89 (in Chinese)
The Support System of Innovation-Driven
Strategy in Private Enterprises: A Theoretical
Model
1 Introduction
Managers should put additional importance on innovation that plays a significant
role as a bridge between competitive strategies and firm performance in a
developing-economy environment [1]. As China steps into the "new normal", its
economic development is no longer investment-driven but innovation-driven,
which reflects the reconstruction of the economic engine and a switch of
development strategy [12]. Much research has shown that private enterprises
(PEs) are superior to state-owned enterprises (SOEs) in innovative ability
[8]. According to the statistics, private enterprises have more R&D projects,
R&D investment, and patents than state-owned enterprises. Therefore, private
enterprises with innovative potential play a leading role in the
innovation-driven strategy. However, the market mechanism, with its inherent
weaknesses, has blocked the development of innovation in China. A "perfect
market" also breeds opportunism,
which jeopardizes the innovative performance of PEs [4]. Although they have
huge innovative potential, PEs face systemic risk. In an immature market,
entrepreneurs improve transformational leadership with the expectation of a
favorable business environment created by the government.
826 T. Xie et al.
The dualistic Innovation System dominated by government agencies and
enterprises in parallel is extensively
implemented in China [16]. There are some divergences concerning the
relationship between the government and enterprises, rooted in the complexity
and instability of innovation. The theory of Innovation Systems cannot solve
the new problems of PEs.
How does a PE build an internal support system to sustain innovative
performance? How does the government improve the business ecosystem without
interfering with PEs? This article addresses these problems and develops a
theoretical model of the Innovation-Driven Support System, which can change the
traditional innovation paradigm and offer a framework for further empirical
studies. To do this, we build our discussion on extensive relevant literature
and enterprise practices. Using systems methods and theories, the Support
System of Innovation-Driven Strategy (SSIDS) is built.
In the following sections, we first review current research and state its
deficiencies. We then analyze the connotation, structure, and characteristics
of the SSIDS. Finally, we discuss the implications of our theoretical model for
managers and provide directions for further research.
2 Literature Review
The inflexion-point theory demonstrates that, according to the law of
diminishing marginal utility, the contribution of basic factors such as natural
resources and capital shows a descending trend. Therefore, the long-term
development of the economy depends on intellectual and technological factors.
The inflexion-point theory promotes innovation-driven theory, which holds that
the investment-driven strategy is replaced by the innovation-driven strategy.
The concept of innovation-driven development was initially used by Porter [13],
who argued that economic development could be divided into four stages: the
factor-driven stage, the investment-driven stage, the innovation-driven stage,
and the wealth-driven stage. In the innovation-driven stage, the effect of
basic factors and investment is led by innovation, and a system of
innovation-driven strategy with technological innovation at its core is built.
After innovation-driven development became a national strategy, Chinese
academics conducted quite a few examinations of the innovation-driven strategy.
Liu [11] contended that innovation-driven development means a change of
economic engine in China, and further demonstrated that economic growth has
transformed from depending on learning and imitation into relying on
self-design, R&D, and knowledge creation. In addition, Zhang [17] analyzed the
main features of innovation-driven development, i.e. people-oriented, acquiring
first-mover advantage, and entrepreneur-driven. Li and Wang [10] proposed
internal innovation-driven elements, including enterprise culture, leaders full
of entrepreneurship, and expected returns, and external innovation-driven
elements, including technical progress and the institutional environment.
Meanwhile, Wang and Qiu [16] emphasized that the innovative environment
dominated by entrepreneurs is immature and constructed innovation-driven strategy
There are many external elements of the enterprise that are closely related to
the innovation-driven strategy. The external support system consists of these
elements and their mutual relationships. It is necessary for implementing the
innovation-driven strategy and can be generalized as the environmental
subsystem of innovation. As shown in Fig. 5, this system commonly includes the
market environment, the policy environment, and the service environment.
kinds of innovative talents for private enterprises. Finally, it promotes the forma-
tion of a sound financial market so that private enterprises can raise innovation
funds from this important capital source.
Based on the above analysis of the internal and external support systems, we
build a new support system of innovation-driven strategy in private enterprises
with the purpose of improving the performance of this strategy (as illustrated
in Fig. 6). The support system of innovation-driven strategy differs
fundamentally from traditional innovation systems in how it defines the roles
of the government and private enterprises. The SSIDS is an open, dynamic, and
complex system, as it involves a wide range of elements.
4 Conclusion
This article analyzes the internal support system from innovative impetus,
behavior, and capability, and discusses the external support system from market
mechanisms, laws and regulations, and services. Based on the internal and
external support systems, a new theoretical model, the SSIDS, is built, which
is conducive to the innovative practices of private enterprises.
Innovation is unpredictable systems engineering. According to Schumpeter's
theory of innovation, entrepreneurs occupy the primary position. However, at
the initial stage of the market, Chinese private enterprises lack mature
entrepreneurs. Therefore, the traditional theory of Innovation Systems insists
that both the government and enterprises occupy the primary position, which is
harmful to the development of Chinese entrepreneurs. Contrary to the dualistic
main-body Innovation System, the SSIDS, with its multi-nested structure, is
instrumental and assistive: in the SSIDS, the government provides service and
supervision, while private enterprises carry out innovative practice. This
article builds a theoretical model and changes the innovation paradigm.
Nevertheless, many empirical studies are needed to validate and optimize the
SSIDS model. How the SSIDS model operates is a valuable question for further
research.
References
1. Bayraktar CA, Hancerliogullari G et al (2016) Competitive strategies, innovation,
and firm performance: an empirical study in a developing economy environment.
Technol Anal Strateg Manage
2. Carlsson B (1991) On the nature, function and composition of technological sys-
tems. J Evol Econ 1(2):93–118
3. Chakraborty S, Thompson JC, Yehoue EB (2016) The culture of entrepreneurship.
J Econ Theory 163(3):288–317
4. Chen B (2014) On the connotation and characteristics of innovation driven and the
condition of realizing innovation driven: from the perspective of realizing “China
dream”. J Theor Ref 7:123–134 (in Chinese)
5. Cooke PN, Heidenreich M, Braczyk HJ (2004) Regional innovation systems: The
role of governances in a globalized world. Eur Urban Reg Stud 6(2):187–188
6. Edquist C (1997) Systems of innovation: technologies, institutions and organiza-
tions. Soc Sci Electron Publ 41(1):135–146
7. He M, Li F (2009) Innovation supporting system of Zhongguancun science park. J
Beijing Inst Technol (Soc Sci Ed) 2:46–55 (in Chinese)
8. Jiang S, Gong L, Wei J (2011) The path to the catch-up of the innovative ability
of the late comers of enterprises in the transitional economy: comparing SOE with
POE. Manage World 12:43–72 (in Chinese)
9. Lancker JV, Mondelaers K et al (2016) The organizational innovation system: a
systemic framework for radical innovation at the organizational level. Technovation
52–53:40–50
10. Li Y, Wang J (2014) The research on the relationship between driving factors and
performance of innovation in the context of vertical integrated industry chain. Res
Finan Econ Issues 7:21–29 (in Chinese)
11. Liu Z (2011) From late-development advantage to first-mover advantage: theoretic
thoughts on implementing the innovation-driven strategy. Ind Econ Res 4:65–72
(in Chinese)
12. Liu Z (2015) Innovation-driven strategy in new high-level opening-up. J Nanjing
Univ (Philos Humanit Soc Sci) 2:37–47 (in Chinese)
13. Michael P (1990) The competitive advantage of nations. Ashgate, Brookfield
14. Mir M, Casadesüs M, Petnji LH (2016) The impact of standardized innovation
management systems on innovation capability and business performance: an empir-
ical study. J Eng Technol Manage 41:26–44
15. Trevor M (1989) Technology policy and economic performance: lessons from Japan.
R&D Manage 19(3):278–279
16. Wang T, Qiu G (2014) A research on “bidirectional-driven” effects of innovation-
driven strategy. Technoeconomics Manage Res 6:44–57 (in Chinese)
17. Zhang L (2013) Innovation-driven development. China Soft Sci 1:80–88 (in Chi-
nese)
The Emission Reduction and Recycling of Coal
Mine Water Under Emission Allowance
Allocation of Government
1 Introduction
Coal was still the most abundant fossil fuel in 2015, providing around 30.0%
of the global primary energy need, even though oil and natural gas reserves have
increased over time. As such an important fossil fuel contributing to worldwide
energy generation, coal will continue to take a huge share of the world energy
market, and coal mining remains a major industry for developing economies.
However, coal mining can pollute the water, soil and air, and these ecological
issues deepen the contradiction between the land and humans and constrain
economic and social development. Moreover, for safety, significant quantities
of groundwater are discharged in underground mining [17,19]. If
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 68
836 N. Ma and S. Zheng
untreated, mine water carrying heavy metal ions such as iron and mercury
would definitely pollute the groundwater [7] and waste precious water
resources [16,18]. Thus, it is important for coal mines to control the
mine drainage.
To mitigate the environmental damage, several policy instruments have been
developed to reduce pollution emissions, such as emission taxes,
command-and-control, and cap-and-trade [5,6,9]. The cap-and-trade
mechanism, also known as the emission trading scheme (ETS), is an application
of Coase [3]; it has shown its effectiveness in controlling emissions and
has been successfully applied in practice [2]. Therefore, this paper adopts this
mechanism to control the mine drainage. Under this mechanism, the initial
emission allowances are defined and allocated by the local government for free,
at auction, or through a combination of both [4,20], and coal mines decide their
coal production under the emission allowance. Based on this mechanism, there
are two kinds of decision makers in this problem. The government is concerned
only with the overall benefit, while coal mines consider only their individual
benefits, so there is a conflict between the government and the coal mines. To
resolve this conflict, this paper adopts bi-level programming to balance the
benefits between the whole and the individuals. Meanwhile, this paper also
considers the effect of mine water recycling on reducing waste water emission:
coal mines can produce more coal under a limited emission right by recycling
mine water to reduce their waste water emission.
Based on the above discussion, this paper proposes an optimization model
that integrates a cap-and-trade mechanism and waste water recycling to mitigate
mine water pollution. Section 2 states the problem: the government controls
pollution through emission allowance allocation, and the coal mines treat and
reuse mine water to guarantee production under a limited emission right. Then,
as an abstraction of the real problem, a bi-level multi-objective model is built
in Sect. 3 and a solution approach is presented in Sect. 4. In Sect. 5, a case
study is presented to demonstrate the significance of the proposed model and
solution method. Conclusions and future research directions are given in Sect. 6.
2 Problem Statement
In this paper, we consider a cap-and-trade mechanism to control the total waste
water emission. The government decides the number of permits to discharge
specific quantities of mine water per time period. Coal mines must decide
their production plans according to the permits allocated by the government,
because the quantity of mine water is related to the coal production. Meanwhile,
coal mines can reduce their actual waste water emission through mine water
recycling. In this problem, the government hopes to minimize the total waste
water emission from an ecological perspective, but coal production is an
important source of revenue for the government, and it is unreasonable to
consider only ecology or only economy. Therefore, the government has to strike
a balance between economic and ecological benefits, whereas coal mines, as
profit-driven organizations, consider only their economic benefits. The
relationship between the government and the coal mines is shown in Fig. 1.
Fig. 1. The logic diagram of emission right allocation and mine water recycling
3 Modelling
Based on the above analysis, the hierarchical interest equilibrium between the
government and the coal mines, and the conflict between economic and ecological
objectives, are the key points in solving the mine water emission reduction
problem. Therefore, we adopt bi-level programming to achieve the interest
equilibrium among decision makers in a hierarchical relationship and to solve
the emission right allocation, and multi-objective programming to balance the
economic and ecological benefits.
3.1 Notation
Index:
i : coal mine, i = 1, 2, \cdots, I;
j : mine water treatment method, j = 1, 2, \cdots, J.
Parameters:
B_{ij}^r : unit economic benefit of reusing the waste water treated by method j in coal mine i;
C_{ij}^r : unit economic cost of treating the waste water by method j in coal mine i;
\theta_i : coal drainage coefficient when coal mine i produces a unit of coal.
Decision variables:
Q_i : waste water emission allowance allocated by the government to coal mine i;
x_i : coal production amount of coal mine i;
y_{ij} : amount of waste water of coal mine i treated by method j.
For market-based coal mines, the pursuit of the highest profit is the absolute
priority when managers make decisions. Generally, a coal mine's profit comes
mainly from coal production and sales. In this paper, we also consider the
recycling and utilization of coal mine water, so the economic influence of
waste water reuse and of selling or buying emission rights should be included
in the economic objective.
\max f_i = (P^C - C^C)\,x_i + \sum_{j=1}^{J}(B_{ij}^r - C_{ij}^r)\,y_{ij} - C_i^d\Big(x_i\theta_i - \sum_{j=1}^{J} y_{ij}\Big). \quad (4)

x_i \le C_i^{\max}, \quad i = 1, 2, \cdots, I. \quad (5)

\sum_{j=1}^{J} y_{ij} \le x_i\theta_i, \quad i = 1, 2, \cdots, I. \quad (6)

x_i\theta_i - \sum_{j=1}^{J} y_{ij} \le Q_i, \quad i = 1, 2, \cdots, I. \quad (9)
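As a concrete illustration of the lower-level objective (4) and constraints (5) and (9), a single mine's profit and feasibility can be sketched in Python. This is a minimal sketch: the function and parameter names are ours, and the numbers are illustrative values loosely based on the case tables, not results reported in the paper.

```python
def mine_profit(x, y, p_c, c_c, c_d, benefit, cost, theta):
    """Profit of one coal mine under Eq. (4): coal sales revenue,
    plus net benefit of reusing treated water, minus the cost of
    discharging the untreated remainder."""
    reuse = sum((benefit[j] - cost[j]) * y[j] for j in range(len(y)))
    discharge = x * theta - sum(y)          # untreated mine water
    return (p_c - c_c) * x + reuse - c_d * discharge

def feasible(x, y, x_max, theta, Q):
    """Capacity constraint (5), treatment balance (6) and
    emission-allowance constraint (9)."""
    discharge = x * theta - sum(y)
    return x <= x_max and sum(y) <= x * theta and discharge <= Q

# Illustrative numbers loosely modeled on the i = 1 rows of the tables
x, y = 200.0, [5.0, 20.0, 120.0]
profit = mine_profit(x, y, 780, 219, 0.63,
                     [2.8, 2.8, 4.1], [2.0, 1.7, 1.5], 2.15)
```

Here `profit` evaluates to 112358.45, and `feasible(200.0, y, 230, 2.15, 300)` holds for any allowance `Q` at least as large as the untreated discharge of 285.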
\max F_1 = \sum_{i=1}^{I}(P^C - C^C)\,x_i\,\omega

\min F_2 = \sum_{i=1}^{I} Q_i

\text{s.t.}\begin{cases}
\sum_{i=1}^{I} Q_i \le H \\
Q_i \ge 0, \quad i = 1, 2, \cdots, I \\
\max f_i = (P^C - C^C)\,x_i + \sum_{j=1}^{J}(B_{ij}^r - C_{ij}^r)\,y_{ij} - C_i^d\big(x_i\theta_i - \sum_{j=1}^{J} y_{ij}\big) \\
\quad \text{s.t.}\begin{cases}
x_i \le C_i^{\max}, & i = 1, 2, \cdots, I \\
x_i\theta_i \le \sum_{j=1}^{J} T_{ij}^r + T^d, & i = 1, 2, \cdots, I \\
\sum_{j=1}^{J} y_{ij} \le x_i\theta_i, & i = 1, 2, \cdots, I \\
0 \le y_{ij} \le T_{ij}, & i = 1, 2, \cdots, I,\ j = 1, 2, \cdots, J \\
0 \le y_{ij} \le D_{ij}, & i = 2, \cdots, I,\ j = 1, 2, \cdots, J \\
x_i\theta_i - \sum_{j=1}^{J} y_{ij} \le Q_i, & i = 1, 2, \cdots, I.
\end{cases}
\end{cases} \quad (10)
4 Solution Approach
In this section, we adopt a KKT to solve the bi-level problem and fuzzy goals
programming to solve multi-objective problem.
\mu_\xi(f_\xi(q,x,y)) = \begin{cases}
1, & \text{if } f_\xi(q,x,y) \le f_\xi^B, \\
\dfrac{f_\xi(q,x,y) - f_\xi^W}{f_\xi^B - f_\xi^W}, & \text{if } f_\xi^B < f_\xi(q,x,y) \le f_\xi^W, \\
0, & \text{if } f_\xi(q,x,y) \ge f_\xi^W.
\end{cases} \quad (11)
Using Eq. (11), the multi-objective model can be transformed into a single
objective model by weighting and summing the negative deviations between the
actual satisfactory level and the aspiration satisfactory level for each objective,
as shown in model (12):
\max F = \sum_{\xi\in\Xi} \omega_\xi\,\mu_\xi(F_\xi(q,x,y)) \quad (12)
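The membership function (11) and the weighted aggregation (12) are straightforward to compute once the best and worst objective values are known. A minimal sketch follows (the function names are ours); it assumes each objective is oriented so that smaller values are better, i.e. f^B < f^W, with maximization objectives negated by the caller.

```python
def membership(f, f_best, f_worst):
    """Linear satisfaction degree of Eq. (11): 1 at the ideal value
    f_best, 0 at the anti-ideal value f_worst, linear in between."""
    if f <= f_best:
        return 1.0
    if f >= f_worst:
        return 0.0
    return (f - f_worst) / (f_best - f_worst)

def aggregate(values, ideals, anti_ideals, weights):
    """Weighted-sum scalarization of model (12) over all objectives."""
    return sum(w * membership(f, b, wst)
               for f, b, wst, w in zip(values, ideals, anti_ideals, weights))
```

For example, an objective value halfway between its ideal 0 and anti-ideal 10 gets satisfaction 0.5, and equal weights of 0.5 on two such objectives give an aggregate of 0.5.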
4.2 KKT
The Karush-Kuhn-Tucker (KKT) approach has proven to be a valuable analysis
tool with a wide range of successful applications in bi-level programming
[11]. The fundamental strategy of the KKT approach is to replace the
follower's problem with its KKT conditions and append the resultant system
to the leader's problem [8,12]. Based on previous work by Shi et al. [15], we
transformed the bi-level model into a single-level model. However, KKT alone
cannot guarantee equity among the lower-level decision makers when there are
multiple of them in a bi-level problem. In this paper, we have used fuzzy goal
programming to transform the multi-objective model into a single-objective one,
which guarantees the benefit balance among the lower-level decision makers
through the satisfactory deviation. Therefore, we adopt KKT only to seek the
optimal solution of the lower-level model under the upper-level decision maker's
strategy, and model (10) is finally transformed into model (13).
\max F = \omega_1\,\mu_1(F_1(q,x,y)) + \omega_2\,\mu_2(F_2(q,x,y))

\text{s.t.}\begin{cases}
F_1 = \sum_{i=1}^{I}(P^C - C^C)\,x_i\,\omega \\
F_2 = \sum_{i=1}^{I} Q_i \\
\mu_\xi(f_\xi(q,x,y)) = \begin{cases}
1, & \text{if } f_\xi(q,x,y) \le f_\xi^B, \\
\frac{f_\xi(q,x,y) - f_\xi^W}{f_\xi^B - f_\xi^W}, & \text{if } f_\xi^B < f_\xi(q,x,y) \le f_\xi^W, \\
0, & \text{if } f_\xi(q,x,y) \ge f_\xi^W,
\end{cases} \\
\bar\mu_i - \mu_i(f_i(q,x,y)) \le \hat\sigma \\
\sum_{i=1}^{I} Q_i \le H \\
Q_i \ge 0, \quad i = 1, 2, \cdots, I \\
x_i \le C_i^{\max}, \quad i = 1, 2, \cdots, I \\
x_i\theta_i \le \sum_{j=1}^{J} T_{ij}^r + T^d, \quad i = 1, 2, \cdots, I \\
\sum_{j=1}^{J} y_{ij} \le x_i\theta_i, \quad i = 1, 2, \cdots, I \\
0 \le y_{ij} \le T_{ij}, \quad i = 1, 2, \cdots, I,\ j = 1, 2, \cdots, J \\
0 \le y_{ij} \le D_{ij}, \quad i = 2, \cdots, I,\ j = 1, 2, \cdots, J \\
x_i\theta_i - \sum_{j=1}^{J} y_{ij} \le Q_i, \quad i = 1, 2, \cdots, I \\
(P^C - C^C) - C_i^d\theta_i + v_1 + (v_2 - v_3 + v_7)\,\theta_i = 0 \\
(B_{ij}^r - C_{ij}^r) + C_i^d + v_3 + v_4 - v_5 + v_6 - v_7 = 0 \\
v_1 g_1(Q_i, x_i, y_{ij}) + v_2 g_2(Q_i, x_i, y_{ij}) + v_3 g_3(Q_i, x_i, y_{ij}) \\
\quad + v_4 g_4(Q_i, x_i, y_{ij}) + v_5 g_5(Q_i, x_i, y_{ij}) + v_6 g_6(Q_i, x_i, y_{ij}) \\
\quad + v_7 g_7(Q_i, x_i, y_{ij}) = 0 \\
v_k \ge 0, \quad k = 1, 2, \cdots, 7 \\
g_1(Q_i, x_i, y_{ij}) = C_i^{\max} - x_i \ge 0, \quad i = 1, 2, \cdots, I \\
g_2(Q_i, x_i, y_{ij}) = \sum_{j=1}^{J} T_{ij}^r + T^d - x_i\theta_i \ge 0, \quad i = 1, 2, \cdots, I \\
g_3(Q_i, x_i, y_{ij}) = x_i\theta_i - \sum_{j=1}^{J} y_{ij} \ge 0 \\
g_4(Q_i, x_i, y_{ij}) = T_{ij} - y_{ij} \ge 0, \quad i = 1, 2, \cdots, I,\ j = 1, 2, \cdots, J \\
g_5(Q_i, x_i, y_{ij}) = y_{ij} \ge 0, \quad i = 1, 2, \cdots, I,\ j = 1, 2, \cdots, J \\
g_6(Q_i, x_i, y_{ij}) = D_{ij} - y_{ij} \ge 0, \quad i = 2, \cdots, I,\ j = 1, 2, \cdots, J \\
g_7(Q_i, x_i, y_{ij}) = Q_i - x_i\theta_i + \sum_{j=1}^{J} y_{ij} \ge 0.
\end{cases} \quad (13)
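The complementary-slackness block that the KKT transformation appends to the leader's problem can be checked numerically for a candidate solution. A minimal sketch (our own helper, not from the paper) verifies primal feasibility g_k >= 0, dual feasibility v_k >= 0 and the condition sum(v_k * g_k) = 0 within a tolerance:

```python
def kkt_complementarity_ok(g_values, v_values, tol=1e-8):
    """Check the KKT side conditions of model (13) for given
    constraint values g_k and multipliers v_k: all g_k >= 0,
    all v_k >= 0, and sum(v_k * g_k) = 0 (each product vanishes
    because both factors are nonnegative)."""
    if any(g < -tol for g in g_values):
        return False                       # primal infeasible
    if any(v < -tol for v in v_values):
        return False                       # dual infeasible
    return abs(sum(v * g for v, g in zip(v_values, g_values))) <= tol
```

For instance, an active constraint (g = 0) may carry a positive multiplier, while an inactive one (g > 0) must have a zero multiplier.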
5 Case Study
5.1 Case Presentation
The Lu'an coal field is located on the southeastern edge of the central-eastern
Qinshui coal field, in the center of Shanxi Province. Four major collieries
(i.e., I = 4) in this area are considered: the Shiyijie mine, the Changcun
mine, the Zhangcun mine and the Wangzhuang mine.
In the Lu'an coal field, the average annual rainfall is 584 mm and the average
annual evaporation is 1732 mm. With the capacity expansion of the coal mines,
the coal mining quantity has significantly increased, causing mine drainage to
increase simultaneously, and untreated mine water damages the surrounding water
environment. Meanwhile, because of the mine drainage, the groundwater recharge
is insufficient, which affects plant growth and adds to environmental
degradation. Therefore, this paper adopts the Lu'an coal field as a case to
show the practicality of the model.
Detailed data for the research region were obtained from the Statistical
Yearbook of the Chinese Coal Industry (National Bureau of Statistics of the
People's Republic of China, 2013), the Statistical Yearbook of Chinese Energy
(National Bureau of Statistics of the People's Republic of China, 2013) and
field research. The parameters taken from the statistical yearbooks and from
data published by the companies are shown in Tables 1, 2 and 3.
First, the upper and lower bounds of the government's and coal mines'
objective functions were calculated to identify the membership functions for
the fuzzy goals, as shown in Table 4.
Then, we can build the fuzzy goals through the optimal and worst solutions of
each objective of the government and the coal mines. We set ω1 = ω2 = 0.5 and
μ̄1 = μ̄2 = μ̄3 = μ̄4 = 1. Finally, by adjusting the maximum deviation σ̂, we
obtain several groups of optimal solutions under different government
attitudes, as shown in Table 5.
Government parameters: H = 1000 (10^4 t), ω = 0.17.

        B_{ij}^r (RMB/m^3)   C_{ij}^r (RMB/m^3)   T_{ij}^r (10^4 m^3)   D_{ij} (10^4 m^3)
        j=1  j=2  j=3        j=1   j=2   j=3      j=1  j=2  j=3         j=1   j=2   j=3
i = 1   2.8  2.8  4.1        2     1.7   1.5      5    20   120         26    38.5  359
i = 2   2.8  2.8  4.1        0     1.66  1.49     0    15   90          20    28.1  240
i = 3   2.8  2.8  4.1        0     1.72  1.52     0    10   75          17    30    305
i = 4   2.8  2.8  4.1        1.98  1.59  1.4      3    34   100         24.9  35.7  367
        C_i^{max} (10^4 t)  P_i^c (yuan/t)  C_i^c (yuan/t)  θ_i (m^3/t)  T_i^d (10^4 m^3)  C_i^d (RMB/m^3)
i = 1   230                 780             219             2.15         500               0.63
i = 2   170                 780             230             2.12         320               0.6
i = 3   136                 780             227             2.14         283               0.57
i = 4   250                 780             220             2.12         490               0.61
coal mines, their satisfactory level drops continuously. Because coal mines
care only about economic benefit, the ecological benefit decreases faster than
the economic benefit as σ̂ drops. As σ̂ falls from 1 to 0.6, the ecological
objective value drops to its worst value; as σ̂ falls from 0.6 to 0.38, the
overall economic benefit decreases while the benefit balance among the coal
mines improves. When σ̂ < 0.38, there is no feasible solution for model (13).
Comparing these results, we find that the government prefers to allocate
emission rights to the coal mines with better waste water recycling capacity,
because this guarantees the economic and ecological benefits simultaneously to
the greatest extent. Moreover, when the government considers the overall
satisfactory level, the upper-level benefit is damaged. Therefore, the
government should push the coal mines to accelerate their reform so as to
achieve economic and ecological development together.
6 Conclusion
This paper proposed a bi-level multi-objective programming model to deal with
economic and ecological conflicts in large-scale coal fields. An
environmental-protection-based mining quota competition mechanism was
established using the proposed model, which considers not only the relationship
between the government and the coal mines, but also the equilibrium between
economic development and environmental protection. To solve the complex
bi-level multi-objective programming model, an extended KKT approach and fuzzy
goal programming were combined as a solution method. The proposed method was
then applied to the Lu'an coal field, which includes four major coal mines. By
inputting the data into the model and computing it with the proposed solution
approach, the effectiveness of the model was demonstrated.
References
1. Baky IA (2010) Solving multi-level multi-objective linear programming problems
through fuzzy goal programming approach. Appl Math Model 34(9):2377–2387
2. Clo S (2009) The effectiveness of the EU emissions trading scheme. Clim Policy
9(3):227–241
3. Coase RH (1960) The problem of social cost. J Law Econ 3:1–44
4. Cong RG, Wei YM (2012) Experimental comparison of impact of auction format
on carbon allowance market. Renew Sustain Energ Rev 16(6):4148–4156
5. Gersbach H, Requate T (2004) Emission taxes and optimal refunding schemes. J
Public Econ 88(3):713–725
6. Goulder LH, Hafstead MA, Dworsky M (2010) Impacts of alternative emissions
allowance allocation methods under a federal cap-and-trade program. J Environ
Econ Manag 60(3):161–181
7. Haibin L, Zhenling L (2010) Recycling utilization patterns of coal mining waste in
China. Resour Conserv Recycl 54(12):1331–1340
8. Hanson MA (1981) On sufficiency of the Kuhn-Tucker conditions. J Math Anal
Appl 80(2):545–550
1 Introduction
The vehicle routing problem was first proposed by Dantzig and Ramser [5]. After
more than 50 years of research, it has attracted wide attention from scholars
in the fields of operations research and combinatorial optimization. The
vehicle routing problem with time windows (VRPTW) adds a time window constraint
to the capacity-constrained model. Domestic and foreign scholars have done a
great deal of research on the VRPTW: Desrochers et al. [6] solved it by
combining a branch-and-bound algorithm with column generation, and Bent and
Van Hentenryck [3] proposed a two-stage hybrid algorithm. Other algorithms for
the VRPTW include the genetic algorithm [15], particle swarm optimization [1]
and the ant colony system [17]. However, since customer service time
requirements are not completely rigid, Lin [12] considered the customer
satisfaction level and proposed a vehicle routing problem based on fuzzy time
windows. Ghannadpour et al. [9] presented a multi-objective dynamic vehicle
routing problem with fuzzy, customer-specific time windows and proposed a
genetic algorithm (GA) with three basic modules to solve it. While these
studies have contributed significantly to solving the vehicle routing problem
with a single time window, customers usually have more than one time period in
which to receive service, so these studies ignored the vehicle routing problem
with multiple time windows (VRPMTW).
The vehicle routing problem with multiple time windows (VRPMTW) is an extension
of the traditional vehicle routing problem in which each customer has a number
of non-overlapping time windows in which to receive service, and the delivery
vehicle must choose one of them. Compared with the single-time-window vehicle
routing problem, the VRPMTW is much closer to reality. Research on the VRPMTW
has mainly focused on two aspects: models and algorithms. On the modeling side,
Doerner et al. [7] presented a model of the vehicle routing problem with
multiple interdependent time windows and solved it by several variants of a
constructive heuristic as well as a branch-and-bound based algorithm. Ma et
al. [14] allowed deliveries to be split and established the split delivery
vehicle routing problem with multiple time windows (SDVRPMTW). Yan et al. [18]
applied a VRPMTW model to military oil transport path optimization and solved
it with particle swarm optimization. Algorithmic research has mainly focused on
the ant colony system [8], simulated annealing [13], hybrid intelligent
algorithms [4], a hybrid variable neighborhood-tabu search heuristic [2] and
the intelligent water drops algorithm [11], among others. However, in reality
customers usually have multiple fuzzy time windows and their service time
requirements are not completely rigid. This kind of problem is more complicated
and more realistic than the VRPMTW, so it is necessary to carry out the
relevant research.
In this paper, the VRPMTW in a fuzzy environment is considered and customer
satisfaction is quantified by the time at which service starts. We aim to
minimize the transportation cost, minimize the number of vehicles and maximize
customer satisfaction, and thus establish a model of the vehicle routing
problem with multiple fuzzy time windows. The particle swarm optimization
algorithm is used to solve the problem, and the experimental results are
analyzed and discussed.
The vehicle routing problem with multiple fuzzy time windows can be stated as
follows: a distribution center has m vehicles to serve n customers, and
customer i has Wi non-overlapping fuzzy time windows. Each customer's demand,
the maximum loading capacity of each vehicle and the distance between any two
customers are known. Vehicles start from the distribution center, select one
time window of each customer to serve, and return to the distribution center
on completion of the distribution.
Modeling and Solving the Vehicle Routing Problem 849
On the basis of satisfying customer demand, vehicle loading capacity and
customer time windows, the objective function is optimized through reasonable
path planning. The model considers the following constraints: (1) Vehicle
loading capacity: the actual load of each vehicle must not exceed its maximum
loading capacity. (2) Multiple fuzzy time windows: each customer has multiple
fuzzy time windows, but a vehicle can only choose one time window for service.
(3) Customer satisfaction: customer satisfaction must be greater than the value
set by the decision maker. (4) Visit uniqueness: each customer is served by
exactly one vehicle and is served exactly once. (5) Central constraint: each
vehicle starts from the distribution center and returns to it after serving its
customers. (6) Loop elimination: the path of each vehicle may contain only the
loop through the starting point; no other circuits are allowed.
2.2 Notations
The variables and parameters used in the model are:
L = {1, 2, \cdots, n} : customer set, where 0 and n + 1 denote the distribution center;
K = {1, 2, \cdots, m} : vehicle set;
Q_k : the loading capacity of vehicle k;
C : vehicle start-up cost;
q_i : the demand of customer i;
D_k : the longest running distance of vehicle k;
c_{ij} : the cost of traveling from node i to node j;
t_i : the service start time of customer i;
s_i : the service duration time of customer i;
t_{ij} : the travel time from customer i to customer j;
W_i : the number of time windows of customer i;
[E_i^\alpha, a_i^\alpha, b_i^\alpha, L_i^\alpha] : the fuzzy time window \alpha of customer i;
E_i^\alpha : the endurable earliness time of time window \alpha of customer i;
a_i^\alpha : the earliest start service time at which customer i expects to be served in time window \alpha;
b_i^\alpha : the latest start service time at which customer i expects to be served in time window \alpha;
L_i^\alpha : the endurable lateness time of time window \alpha of customer i;
x_{ijk} : a binary variable; x_{ijk} = 1 if vehicle k travels from customer i to customer j, otherwise x_{ijk} = 0;
y_{i\alpha} : a binary variable; y_{i\alpha} = 1 if the vehicle serves time window \alpha of customer i, otherwise y_{i\alpha} = 0.
When a customer has multiple trapezoidal fuzzy time windows in which it can
accept service, customer satisfaction can be defined by the membership function
of the service start time:

\mu_i(t_i) = \begin{cases}
0, & t_i < E_i^\alpha \\
(t_i - E_i^\alpha)/(a_i^\alpha - E_i^\alpha), & E_i^\alpha \le t_i < a_i^\alpha \\
1, & a_i^\alpha \le t_i \le b_i^\alpha \\
(L_i^\alpha - t_i)/(L_i^\alpha - b_i^\alpha), & b_i^\alpha < t_i \le L_i^\alpha \\
0, & t_i > L_i^\alpha
\end{cases} \quad (1)
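The trapezoidal membership function (1) translates directly into code. A minimal sketch follows (the function name is ours); the example evaluates it on the first time window of customer 1 from Table 1.

```python
def satisfaction(t, E, a, b, L):
    """Trapezoidal satisfaction of Eq. (1): a service start in [a, b]
    is fully satisfactory; [E, a) and (b, L] are tolerated with
    linearly decreasing satisfaction; outside [E, L] it is zero."""
    if t < E or t > L:
        return 0.0
    if t < a:
        return (t - E) / (a - E)
    if t <= b:
        return 1.0
    return (L - t) / (L - b)

# Customer 1, window 1 of Table 1: [E, a, b, L] = [6.5, 8, 9, 10.5]
satisfaction(8.5, 6.5, 8, 9, 10.5)   # -> 1.0 (inside the ideal interval)
```

Starting service at 7.25 or 9.75 would give satisfaction 0.5, and starting before 6.5 or after 10.5 gives 0.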
2.4 Modelling
The vehicle routing problem with multiple fuzzy time windows is formulated as
follows:

\min Z_1 = \sum_{i=0}^{n}\sum_{j=0}^{n+1}\sum_{k=1}^{m} c_{ij}\,x_{ijk} \quad (2)

\min Z_2 = C \sum_{k=1}^{m}\sum_{j=0}^{n} x_{0jk}, \quad (3)

\max Z_3 = \frac{1}{n}\sum_{i=1}^{n} \mu_i(t_i), \quad (4)

\text{s.t. } \mu_i(t_i) \ge \eta_i, \quad \forall i \in L, \quad (5)

\sum_{i=1}^{n}\Big(q_i \sum_{j=0}^{n} x_{ijk}\Big) \le Q_k, \quad \forall k \in K, \quad (6)

\sum_{i=0}^{n}\sum_{j=1}^{n+1} d_{ij}\,x_{ijk} \le D_k, \quad \forall k \in K, \quad (7)

\sum_{j=1}^{n+1} x_{0jk} = 1, \quad \forall k \in K, \quad (8)

\sum_{i=0}^{n} x_{i,n+1,k} = 1, \quad \forall k \in K, \quad (9)

\sum_{i=0}^{n}\sum_{k=1}^{m} x_{ijk} = \sum_{i=1}^{n+1}\sum_{k=1}^{m} x_{jik} = 1, \quad \forall j \in L, \quad (10)

\sum_{i,j \in S\times S} x_{ijk} \le |S| - 1, \quad S \subseteq L,\ \forall k \in K, \quad (11)

L_i^\alpha \le E_i^{\alpha+1}, \quad \forall i \in L,\ \alpha \in \{1, 2, \cdots, W_i - 1\}, \quad (12)

\max\Big\{\sum_{\alpha=1}^{W_j} y_{j\alpha} E_j^\alpha,\ (t_i + s_i + t_{ij})\,x_{ijk}\Big\} \le t_j, \quad \forall i, j \in L,\ \forall k \in K, \quad (13)

t_j \le \sum_{\alpha=1}^{W_j} y_{j\alpha} L_j^\alpha, \quad \forall j \in L, \quad (14)

\sum_{\alpha=1}^{W_i} y_{i\alpha} = 1, \quad \forall i \in L, \quad (15)

x_{ijk} \in \{0, 1\}, \quad \forall i, j, k, \quad (16)

y_{i\alpha} \in \{0, 1\}, \quad \forall i \in L,\ \alpha \in \{1, 2, \cdots, W_i\}. \quad (17)
Objective (2) minimizes the total travel cost of all vehicles, which is the
most important objective to the decision makers. Objective (3) minimizes the
number of vehicles. Objective (4) maximizes the average customer satisfaction.
Equation (5) ensures that each customer's satisfaction is higher than ηi,
which is given by the decision maker based on experience. Equation (6) ensures
that the load of each vehicle does not exceed its maximum loading capacity.
Equation (7) makes sure the distance traveled by each vehicle does not exceed
its longest running distance. Equations (8) to (10) ensure that each customer
is served by one vehicle and that the vehicle flow is balanced. Equation (11)
eliminates sub-loops. Equation (12) orders each customer's time windows
chronologically. Equations (13) and (14) make sure that each customer is served
within the chosen time window. Equation (15) ensures each customer has exactly
one time window in which to be served. Equations (16) and (17) define the
variable domains.
Constructing the solution representation so that each solution corresponds to a
particle is the key step of the PSO algorithm. Salman et al. [16] described
such a solution representation in detail, and we draw on their method. A
3L-dimensional matrix is used to represent a solution of the proposed model
with L customers; each customer corresponds to three dimensions: a vehicle
number, a vehicle ranking and a time window number. For clarity and convenient
calculation, the 3L-dimensional matrix of each particle is divided into three
L-dimensional vectors: Xv (vehicle number vector), Xr (vehicle ranking vector)
and Xw (time window number vector). The corresponding velocity vectors are Vv,
Vr and Vw.
For example, suppose three vehicles complete the distribution tasks for eight
customers and the position vector of a particle is:

Customer 1 2 3 4 5 6 7 8
Xv       1 1 1 2 3 3 3 3
Xr       1 3 2 1 4 1 3 2
Xw       1 2 1 1 1 1 2 1

Under the original assumption that each vehicle starts from the distribution
center and finally returns to it, the decoded routes are:

Vehicle 1: 0 → 1 → 3 → 2 → 0
Vehicle 2: 0 → 4 → 0
Vehicle 3: 0 → 6 → 8 → 7 → 5 → 0

In the coding, the 0 is omitted to simplify the calculation. The third line
gives the number of the time window in which each customer is served: customer
1 chooses its first time window, customer 2 its second, customer 3 its first,
and so on.
According to the objective functions (2), (3) and (4) and the constraint
conditions, the fitness function is designed as follows:
\begin{aligned}
\min Z = \; & Cm + \sum_{i=0}^{n}\sum_{j=0}^{n}\sum_{k=1}^{m} c_{ij}\,x_{ijk}
+ \beta_1 \sum_{k=1}^{m} \max\Big\{\Big[\sum_{i=1}^{n}\Big(d_i \sum_{j=0}^{n} x_{ijk}\Big)\Big] - Q_k,\ 0\Big\} \\
& + \beta_2 \sum_{i=1}^{n} \max(E_i^\alpha - t_i,\ 0)
+ \beta_3 \sum_{i=1}^{n} \max(t_i - L_i^\alpha,\ 0) \\
& + \beta_4 \sum_{i=1}^{n} \max[\eta_i - \mu_i(t_i),\ 0],
\end{aligned} \quad (18)
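The penalty structure of Eq. (18) can be sketched as below. This is a minimal sketch under our own naming: the caller is assumed to have pre-computed the per-vehicle overloads and the per-customer earliness, lateness and satisfaction violations; with no violations and the case figures (two vehicles, 232.35 km at 5 yuan/km, start-up cost 100 yuan) it reproduces the reported total cost.

```python
def fitness(n_vehicles, travel_cost, overloads, earliness, lateness,
            satisfaction_gaps, C=100, betas=(1.0, 1.0, 1.0, 1.0)):
    """Penalty fitness in the spirit of Eq. (18): start-up plus travel
    cost, then penalties for capacity overload, time-window earliness
    and lateness, and unmet satisfaction, each weighted by a beta."""
    b1, b2, b3, b4 = betas
    return (C * n_vehicles + travel_cost
            + b1 * sum(max(o, 0) for o in overloads)
            + b2 * sum(max(e, 0) for e in earliness)
            + b3 * sum(max(l, 0) for l in lateness)
            + b4 * sum(max(g, 0) for g in satisfaction_gaps))

total = fitness(2, 232.35 * 5, [], [], [], [])   # ≈ 1361.75 yuan
```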
4 A Case Study
In this section, we use an example to illustrate the model of the vehicle
routing problem with multiple fuzzy time windows. The example is as follows: a
distribution center has six vehicles to serve 15 customers, each customer has
two fuzzy time windows, the maximum loading capacity of each vehicle is 50 tons
and its maximum travel distance is 120 km, the average vehicle speed is
35 km/h, the driving cost is 5 yuan/km, the start-up cost of each vehicle is
100 yuan, and the required average customer satisfaction level is set to 0.7
(Table 1).
Customer  d_i  s_i  Coordinate  E_i^1  a_i^1  b_i^1  L_i^1  E_i^2  a_i^2  b_i^2  L_i^2
1 4 0.3 (1,8) 6.5 8 9 10.5 12.5 14 15.6 17.1
2 9 0.4 (9,−8) 7 8.5 9.5 11 11.5 13 15 16.5
3 6 0.4 (−16,−20) 7 8.5 9.8 11.3 12 13.5 14.6 16.1
4 2 0.2 (19,12) 7 8.5 9.5 11 11.5 13 14.8 16.3
5 9 0.3 (−21,−15) 7 8.5 10 11.5 12 13.5 14 15.5
6 3 0.5 (3,25) 7 8.5 10.4 11.4 11.5 13 14 15.5
7 5 0.3 (12,1) 6.5 8 9 10.5 11.5 13 14 15.5
8 7 0.2 (14,−19) 7 8.5 9.5 11 11.3 12.8 14.5 16
9 8 0.3 (5,16) 6.5 8 9.3 10.4 10.5 12 14 15.5
10 7 0.2 (−20,18) 7 8 10.2 10.9 11 12.2 13.4 14.9
11 5 0.3 (−25,9) 6 7.5 9 10.4 10.5 12 14 15.5
12 8 0.2 (20,−18) 7.5 9 10 11.5 12 13.5 15 16.5
13 8 0.4 (−11,−7) 7 8.5 9.5 11 11.5 13 14 15.5
14 5 0.2 (−3,−17) 6.5 8 9.4 10.5 11 12.5 14 15.5
15 10 0.5 (2,−30) 7.5 9 11 12.5 13 14.5 16 17.5
The particle swarm optimization parameters are set as follows: the population
size is 200, the number of iterations is gen = 3000, and c1 = c2 = 2. The
required average satisfaction level was achieved in eighteen of the twenty
experiments; the eligible experimental results are shown in Table 2.
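For reference, one canonical (continuous) PSO step with the acceleration coefficients c1 = c2 = 2 used in the experiments can be sketched as below. This is the textbook Kennedy-Eberhart update [10], not the paper's exact implementation; a discrete VRP encoding such as (Xv, Xr, Xw) would additionally need a rounding or repair step, which is omitted here.

```python
import random

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0):
    """One canonical PSO update: the velocity pulls each particle
    toward its personal best and the swarm best with random strengths,
    then the position moves by the new velocity."""
    new_v = [vi + c1 * random.random() * (pb - xi)
                + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

A particle already at both its personal best and the global best, with zero velocity, stays put; otherwise it is attracted toward the two best positions.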
On the basis of satisfying the satisfaction level, the optimal experimental
result is: two vehicles traveling 232.35 km with a total distribution cost of
1361.75 yuan and an average customer satisfaction of 0.85. The optimal
experimental results are shown in Table 3.
To compare the optimization results of the proposed model with those of the
VRPMTW model, the PSO algorithm was also used to solve the case under the
VRPMTW model, which ignores the customers' fuzzy time windows; the experimental
results are compared in Table 4. The comparison shows that, for the vehicle
routing problem with multiple fuzzy time windows, keeping a high average
customer satisfaction can still effectively reduce vehicle travel distances and
lower distribution costs. Moreover, the decision maker can set the satisfaction
parameters according to his own emphasis on distribution costs versus
satisfaction and thus change the distribution strategy. Compared to the
VRPMTW, this model is more flexible and realistic.
5 Conclusions
The vehicle routing problem with multiple fuzzy time windows arises widely in
practice, but studies of such problems are still relatively few. Based on the
multiple fuzzy time windows provided by customers, this paper adopts a membership
function of the start-of-service time to quantify the customer satisfaction degree,
establishes a general vehicle routing model with multiple fuzzy time windows, and
solves the model with a particle swarm optimization algorithm that minimizes the
total distribution cost while maximizing customer satisfaction. The numerical
example shows that, compared with VRPMTW, this model reduces distribution costs
more effectively. In addition, the flexible setting of the customer satisfaction
parameters and the widening of the time windows can further reduce the distribution
cost, which gives the model practical meaning and reference value for decision
makers.
However, practical applications involve more complex and uncertain factors, such
as vehicle travel times and customer demands. The exploration of such extended
problems will be the focus of our next research step.
References
1. Ai TJ, Kachitvichyanukul V (2009) A particle swarm optimisation for vehicle rout-
ing problem with time windows. Int J Oper Res 6(19):519–537
2. Belhaiza S, Hansen P, Laporte G (2014) A hybrid variable neighborhood tabu
search heuristic for the vehicle routing problem with multiple time windows. Com-
put Oper Res 52:269–281
3. Bent R, Van Hentenryck P (2004) A two-stage hybrid local search for the vehicle
routing problem with time windows. Transp Sci 38(4):515–530
4. Bitao P, Fei W (2010) Hybrid intelligent algorithm for vehicle routing problem
with multiple time windows. Int Forum Inf Technol Appl 1:181–184
5. Dantzig GB, Ramser JH (1959) The truck dispatching problem. Manage Sci
6(1):80–91
6. Desrochers M, Desrosiers J, Solomon M (1992) A new optimization algorithm for
the vehicle routing problem with time windows. Oper Res 40(40):342–354
7. Doerner KF, Gronalt M et al (2008) Exact and heuristic algorithms for the vehicle
routing problem with multiple interdependent time windows. Comput Oper Res
35(9):3034–3048
8. Favaretto D, Moretti E, Pellegrini P (2013) Ant colony system for a VRP with
multiple time windows and multiple visits. J Interdiscip Math 10(2):263–284
9. Ghannadpour SF, Noori S et al (2014) A multi-objective dynamic vehicle rout-
ing problem with fuzzy time windows: model, solution and application. Appl Soft
Comput 14(1):504–527
10. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: 1995 Proceedings
of the IEEE International Conference on Neural Networks, vol 4, pp 1942–1948
11. Li Z, Zhao F, Liu H (2014) Intelligent water drops algorithm for vehicle routing
problem with time windows. In: 11th International Conference on Service Systems
and Service Management, vol 24, pp 1–10
12. Lin JJ (2006) Multi-objective decision making for vehicle routing problem with
fuzzy due time. IEEE Int Conf Syst Man Cybern 4:2903–2908
13. Ma H, Zuo C, Yang S (2009) Modeling and solving for vehicle routing problem
with multiple time windows. J Syst Eng 24(5):607–613 (in Chinese)
14. Ma HW, Ye HR, Xia W (2012) Improved ant colony algorithm for solving split
delivery vehicle routing problem with multiple time windows. Chin J Manage Sci
1:43–47 (in Chinese)
15. Ombuki B, Ross BJ, Hanshar F (2006) Multi-objective genetic algorithms for vehi-
cle routing problem with time windows. Appl Intell 24(1):17–30
16. Salman A, Ahmad I, Al-Madani S (2002) Particle swarm optimization for task
assignment problem. Microprocess Microsyst 26(8):363–371
17. Toklu NE, Gambardella LM, Montemanni R (2014) A multiple ant colony system
for a vehicle routing problem with time windows and uncertain travel times. J
Traffic Logistics Eng 2(1):52–58
18. Yan H, Gao L et al (2015) Petrol-oil and lubricants support model based on mul-
tiple time windows. J Comput Appl 35(7):2096–2100
Optimization of Operating of the Systems
with Recurrent Service by Delays
1 Introduction
Consider a queuing system in which t1 , t2 , · · · , tn is the random sequence of
instants at which service starts, and x is a customer arrival instant. At each
instant ti , all customers who arrived during the interval [ti−1 , ti ) immediately
receive service (see Fig. 1). Such models are typical in applications and can be
used for traffic, communication systems, computer networks, and others. Denote by
t∗1 , t∗2 , · · · , t∗n the (random) sequence of service start instants in the
system with delayed beginnings of service (see Fig. 1).
The problem is: can a customer's average waiting time be reduced by introducing
delays? It may seem a paradoxical idea to delay the beginning of service in order
to reduce the average waiting time. The idea of introducing delays of the beginning
of service goes back to various authors [1–4]. Below we give some examples of
queueing systems with delayed beginnings of service.
Example 1. The main building of M.V. Lomonosov Moscow State University has several
lifts in the hall (see Fig. 2).
If at least two lifts arrive at the first floor almost simultaneously, then one of
them must be delayed! The mathematical model has the following form (see Fig. 3).
(Fig. 3: timelines of the initial system t1 , t2 , . . . , tn and of the delayed
sequence t∗1 , t∗2 , . . . , t∗n , plotted against distance.)
2 Communication System
For how long can communication be maintained?
(Figure: satellites I, II, III and the communication instants 0, t1 , t2 , t3 , . . . .)
3 Mathematical Models
Denote η1 = t1 η2 = t2 − t1 , · · · , ηn = tn − tn−1 ; η1 , η2 , · · · , ηn are independent
and identically distributed random variables with distribution function F (x),
Eη1 = μ, V arη1 = δ2.
Stationary flow of customers arrives to service. At the each instant ti all
customers, who arrived at the interval [ti−1 , ti ) immediately will get service.
Denote w is customer average waiting time before service (Fig. 5).
From the sequence t1 , t2 , · · · , tn we pass to a new sequence for which

ηi∗ = t∗i − t∗i−1 = ηi + g(ηi ), (ηi = ti − ti−1 ),

where g ∈ G, a class of measurable nonnegative functions. Denote by w(g) the
customer average waiting time before service in a system with control function
g(·), and define

MF (g) = w(g) − w,   c = Eη1 2 / (2Eη1 ).
min g∈G MF (g) = MF (g ∗ ),   c = Eη 2 / (2Eη).
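The constant c = Eη 2 /(2Eη) is the classical mean forward recurrence (inspection) time of a stationary renewal process, i.e. the average wait w of a customer arriving at a uniformly random instant. A quick Monte Carlo check, using exponential intervals purely as an illustrative choice of F:

```python
import bisect
import random

def mean_wait(sample_interval, horizon=200000.0, probes=20000, seed=1):
    """Estimate the average wait of a customer arriving at a uniformly
    random instant until the next service instant t_i (the forward
    recurrence time of the renewal process of service starts)."""
    rng = random.Random(seed)
    # Build renewal instants t_1 < t_2 < ... covering [0, horizon].
    t, instants = 0.0, []
    while t < horizon:
        t += sample_interval(rng)
        instants.append(t)
    total = 0.0
    for _ in range(probes):
        x = rng.uniform(0.0, 0.99 * horizon)  # stay clear of the boundary
        nxt = instants[bisect.bisect_right(instants, x)]
        total += nxt - x
    return total / probes

# Exponential(1) intervals: E[eta] = 1, E[eta^2] = 2, so c = 2/(2*1) = 1.
w = mean_wait(lambda rng: rng.expovariate(1.0))
print(round(w, 2))  # close to the theoretical value c = 1
```

The simulated average agrees with c, which is why reducing Eη 2 relative to Eη (the effect of the delay control g) reduces waiting.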
Definition 3. We call g̃(x) an optimal function if MF (g̃) = min g∈G MF (g).
Theorem 4. Under the conditions of Theorem 1, the optimal function has the
following form

g̃(x) = max {0, (c1 − x)} = (c1 − x)+ ,

where c1 is the unique solution of the equation

c1 2 = ∫ c1 ∞ (c1 − x)2 dF (x).
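For a concrete F this fixed-point equation can be solved numerically. As a sketch, take F (x) = 1 − e−x , for which the right-hand integral evaluates to 2e−c1 , and bisect:

```python
import math

def c1_exponential(tol=1e-10):
    """Solve c1^2 = integral_{c1}^inf (c1 - x)^2 dF(x) for F(x) = 1 - e^{-x}.
    For the exponential law the integral equals 2*exp(-c1), so we bisect
    h(c) = c^2 - 2*exp(-c) on [0, 2], where h changes sign."""
    lo, hi = 0.0, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * mid - 2.0 * math.exp(-mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(c1_exponential(), 3))  # about 0.901
```

The root c1 ≈ 0.901 is consistent with the value c1 = 0.9 annotated in the paper's figures.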
(Figure: delayed service instants for c1 = 0.9, d = 0.8.)
Consider two different service systems, and denote by w1 (g) and w2 (g) the
customer average waiting times with delay function g(x). The problem is: when does

w1 (0) / w1 (g1∗ ) > w2 (0) / w2 (g2∗ )

hold, i.e. in which system can the service be improved more strongly? Introduce
the variation coefficient k:
k = Eη 2 / (Eη)2 ;   k1 > k2 / (√(1 + k2 ) − 1)  ⇒  w1 (0) / w1 (g1∗ ) > w2 (0) / w2 (g2∗ ).
Model B:
Example 5. F (x) = 1 − e−x , x ≥ 0. Then w = 1, σ 2 = 1; g ∗ (x) = (1 − x)+ ,
w∗ = 0.5, σ∗ 2 = 0.
The gain in the customer average waiting time is 50%, and the variance equals zero
(Fig. 10).
Theorem 12. The optimal function has the form g̃(x) = max{0, min(a(x), c1 − x)},
where c1 is the unique solution of the equation c1 2 = ∫ c1 ∞ (x − c1 )2 dF (x).
(Figure: delayed service instants for μ = 1, d = 0.8.)
References
1. Carter GM, Chaiken JM, Ignall E (1972) Response areas for two emergency units.
Oper Res 20(3):571–594
2. Hajiyev AH, Jafarova H (2010) Mathematical models of complex queues and their
applications. Proc Inst Math Mech ANAS 33:57–66
3. Hajiyev AH, Djafarova HA, Mamedov TS (2010) Mathematical models of moving
particles without overtaking. Dokl Math 81(3):395–398
4. Newell GF (1974) Control of pairing of vehicles on a public transportation route,
two vehicles, one control point. Transp Sci 8(3):248–264
An Epidemic Spreading Model Based
on Dynamical Network
Yi Zhang(B)
1 Introduction
Human beings are still at risk from outbreaks of epidemic diseases, even after the
development of modern medicine and the appearance of antibiotics [1,2]. Epidemic
diseases have caused great disasters throughout history. For example, the SARS
outbreak in 2003 and the H1N1 outbreak in 2009 swept the globe in a short time,
caused great losses, and even changed the way people live. With the worsening
environment, epidemic diseases break out more frequently and affect humanity's
survival and development. Therefore, the laws of epidemics, their transmission
mechanisms, and strategies for preventing disease are major problems that need to
be solved, and they have attracted increasing attention from researchers.
The mathematical modeling of epidemic disease spreading has been extensively
studied for a long time [13]. The main mathematical approach is the so-called
compartmental model, composed of ordinary differential equations [6–8,11]. In this
approach, the entire population is divided into different compartments, each
corresponding to an epidemiological state that depends on the characteristics of
the particular disease being
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 71
modeled. The commonly used compartments are: the susceptible (S), individuals who
are not infected but may become infected; the exposed (E), individuals who are
already infected but not yet infectious; the infected (I), individuals who have
been infected and are infectious; and the removed (R), individuals who have
recovered or died. A classical compartmental model is the SIR model, constructed
by Kermack and McKendrick in 1927; they also proposed the SIS model in 1932. Based
on this research, many compartmental models have been proposed. According to the
different propagation processes of diseases, common epidemic models include SI,
SIS, SIR, SIRS, SEIR, SEIRS, SEI, and SEIS.
However, most research on epidemics on complex networks assumes that the network
structure remains unchanged, which does not fit reality. Some papers have
recognized the dynamic nature of epidemic spreading. A model was built to
investigate disease spreading dynamics and synchronization behavior on complex
networks [12], and an SIRS epidemic model with a feedback mechanism on adaptive
scale-free networks has been presented [9]. A real social network evolves through
dynamic interaction between nodes and links [5]. In fact, people cut back on going
out once an epidemic breaks out, and governments take measures to reduce people's
mobility. Because of such self-protection and government protection, the topology
of the network changes dynamically. That is, people's mobility changes the topology
of the network, and thus the topological features of the network and the way the
epidemic spreads. It is therefore important to consider, in an epidemic model, a
dynamic network affected by protective actions; doing so goes some way toward
providing an in-depth understanding of epidemic spreading and of how to control it
on dynamic social networks.
In this paper, we establish a function to quantify the dynamic change of the
network and propose an epidemic spreading model that considers both the change of
the network and the speed of that change. In this model, the function dynamically
measures the probability that a susceptible individual becomes infected. A variant
of the SIR model is then built, in which the traditional fixed parameter is
replaced with our probability function, and a stability analysis of the model is
conducted. We then investigate how different parameter values affect epidemic
spreading using MATLAB on ER random networks. Finally, we conclude the paper and
provide several avenues for further research.
2.1 Function
where c is the parameter reflecting the speed of taking protective measures. A
larger c indicates that individuals take protective measures in a shorter time.
Figure 1 shows the probability as a function of t for different values of c. As c
tends to zero, the speed of taking protective measures slows down. We have
carefully checked that p(t) decreases sharply when c ∈ (0.1, 1); it is therefore
suggested that the parameter satisfy c < 0.1.
In this case, the parameter c was taken in (0.1, 1). p(t) is a probability, lying
in (0, 1), and reflects the probability that an individual becomes infected from
susceptible. p(t) tends to zero as time passes.
2.2 Model
Denote by S(t), I(t), R(t) the densities of the susceptible, the infected, and the
removed at time t, with S(t) + I(t) + R(t) = 1. As shown in Fig. 2, the epidemic
spreading rules can be summarized as follows:

dS(t)/dt = −p(t)S(t)I(t). (1)
(2) At each time step, some infected individuals become removed because of death
or immunity. We suppose this happens with probability q; therefore, the rate of
decrease of the infected, dI(t)/dt, is proportional to the number of infected
individuals I(t). Additionally, a susceptible individual becomes infected at the
rate p(t), so we have

dI(t)/dt = p(t)S(t)I(t) − qI(t). (2)
(3) The rate of increase of the removed individuals, dR(t)/dt, is proportional to
the number of existing infected I(t) from (2), so we get

dR(t)/dt = qI(t). (3)
Based on the previous discussion, by integrating Eqs. (1)–(3), the following
global expected model can now be formulated for the epidemic spreading process:

dS(t)/dt = −p(t)S(t)I(t),
dI(t)/dt = p(t)S(t)I(t) − qI(t),        (4)
dR(t)/dt = qI(t),
S(0) = S0 , I(0) = 1 − S0 , R(0) = 0.
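System (4) can be integrated with a simple forward-Euler scheme. This excerpt does not reproduce the paper's definition of p(t), so the decaying form p(t) = p0·e^(−ct) below is an assumption chosen only to match its stated properties (monotone decrease to zero, faster for larger c):

```python
import math

def simulate(p0=0.8, c=0.03, q=0.05, S0=0.99, T=400.0, dt=0.1):
    """Forward-Euler integration of system (4) with the assumed contact
    probability p(t) = p0 * exp(-c * t). Returns the final densities and
    the peak density of the infected."""
    S, I, R = S0, 1.0 - S0, 0.0
    peak_I = I
    for k in range(int(T / dt)):
        p = p0 * math.exp(-c * (k * dt))
        dS = -p * S * I
        dI = p * S * I - q * I
        dR = q * I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
        peak_I = max(peak_I, I)
    return S, I, R, peak_I

S, I, R, peak_slow = simulate(c=0.03)   # protection taken slowly
assert abs(S + I + R - 1.0) < 1e-9      # densities always sum to 1
peak_fast = simulate(c=0.7)[3]          # protection taken quickly
print(peak_slow > peak_fast)            # True: larger c, smaller peak
```

The peak ordering reproduces qualitatively the behavior reported later for Fig. 3: a faster protective response (larger c) lowers the epidemic peak.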
From differential equation theory, systems (4) and (5) are homogeneous, which
means that analyzing the properties of system (5) is equivalent to analyzing those
of system (4). First, a required theorem is introduced.
Our model is a non-autonomous differential dynamic system, the general form of
which is

dx/dt = f (t, x),  f (t, 0) = 0,  x ∈ Rn . (6)
Proof. Setting the right-hand side of each differential equation in the system
equal to zero gives

−p(t)S(t)I(t) = 0, (7)
p(t)S(t)I(t) − qI(t) = 0. (8)

The feasible region of the equations is R2 , so we study them in the closed set
A = {(S, I) ∈ R2 | S + I ≤ 1, S, I ≥ 0}.
From Eq. (8) we get I(t) = 0 (or p(t)S(t) = q). Substituting I(t) = 0 into
Eq. (7), the system has the equilibrium P ∗ = (S, I) = (S ∗ , 0)
(0 ≤ S ∗ < 1).
In order to ascertain V (S, I) and V̇ (S, I), taking F (S) = 1 and G(I) = 1, we get
V (S, I) = S + I, as can be seen in the above analysis. Therefore, the probability
is a variable that changes over time in our model. In the following sections, we
examine the results with different values of the parameter c to demonstrate how
protective isolation affects epidemic propagation. Simulations were carried out
using MATLAB.
To compare the results, the proposed model with different values of c was tested
on artificial networks: ER random networks with network size N = 10000 and average
degree <k> = 16 [4]. Two different nodes are connected with probability p = 0.0016,
giving a random network with N nodes and pN (N − 1)/2 edges. The degree
distribution of an ER random network is approximately Poisson with
<k> = p(N − 1) ≈ pN , and it reaches its peak value at the average degree <k> [3].
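A G(N, p) random network of this kind can be generated directly from the definition; the sketch below uses a smaller N than the paper's 10000 only to keep the example fast:

```python
import itertools
import random

def er_graph(n, p, seed=42):
    """G(n, p) Erdos-Renyi random graph: each of the n*(n-1)/2 node pairs
    is joined independently with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u, v in itertools.combinations(range(n), 2)
            if rng.random() < p]

n, target_k = 500, 16
p = target_k / (n - 1)              # <k> = p*(n-1), about p*N for large N
edges = er_graph(n, p)
mean_degree = 2 * len(edges) / n    # every edge adds 1 to two degrees
print(round(mean_degree, 1))        # close to the target <k> = 16
```

The realized mean degree fluctuates around p(n − 1), in line with the Poisson approximation quoted above.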
Given that the other parameters are fixed, we compared the epidemic spreading
processes on the random networks for different values of c: we set c = 0.005,
c = 0.03, and c = 0.7, with q = 0.05. Figure 3 illustrates how the density of the
infected changes over time for the different values of c in a random network. From
a macroscopic perspective, we found that as the parameter c decreases, the peak
number of the infected increases, because a smaller c indicates a faster movement
speed. The green line indicates a scenario in which individuals move quickly, that
is, the network topology changes quickly; the blue line represents a scenario in
which individuals move relatively slowly; and the red line represents a scenario
in which individuals hardly move. It can be seen that the higher the parameter c,
the smaller the infected peak value. Figure 4 describes how the density of the
removed changes with the parameter c. We found that as c decreases, the final size
of the removed increases, again because a smaller c indicates a faster movement
speed. The final value of the removed density R(t), which indicates the number of
people affected by the epidemic, is greater when c is smaller. Clearly, with the
other parameters fixed, the smaller the value of c, the broader the epidemic's
influence. Conversely, a larger c indicates that individuals take protective
measures faster, fewer susceptible individuals end up as removed individuals, and
therefore the influence of the epidemic decreases. Generally speaking, epidemics
spread more broadly and last longer when c is smaller; protection measures thus
have a significant impact in random networks.
Given that the other parameters are fixed, we also compared the epidemic spreading
processes for different values of q, setting q = 0.05, q = 0.1, and q = 0.2.
Figures 5 and 6 illustrate how the densities of the infected and the removed
change over time for the different values of q. From Fig. 5, we can see that the
bigger the value of q, with the other parameters fixed, the smaller the peak value
of the infected and the sooner the epidemic terminates. From Fig. 6, it is easy to
see that the final value of the removed density R(t) is greater when q is smaller,
indicating that more people are infected. Clearly, q is the transformation
probability that infected individuals become removed individuals; in reality, as q
increases, more infected individuals become removed. As a result, the number of
the removed grows faster and the influence of the epidemic decreases.
4 Conclusions
In this paper, we proposed a variant of the SIR epidemic spreading model that
considers the dynamic change of networks caused by individuals' protective
behavior. In our model, we established a function p(t) to describe the dynamic
change of the network; p(t) is also the probability that a susceptible individual
becomes infected at time t. The parameter c of p(t) reflects the speed of taking
protective measures: if individuals take protective measures quickly, c is greater
and the epidemic's effect is small; on the contrary, if c is smaller, the
epidemic's effect is greater. Generally speaking, as time passes, because of the
protective behavior, the probability that the susceptible contact the infected
decreases gradually and finally goes to zero. This function was added as a
parameter to the dynamic differential equations, a variant of the SIR model was
built, and a dynamic analysis of the model was conducted.
We then simulated the epidemic spreading model with different values of c and q on
ER random networks. The simulations showed that epidemics spread faster and more
broadly when c is smaller, that is, when people take protective measures slowly.
At the same time, the simulation results suggested that the spread of epidemics in
a random network is influenced significantly by the parameter c. The parameter q
is the transformation probability that infected individuals become removed
individuals; as q increases, the infected peak value decreases and the epidemic
terminates sooner. These results indicate that an epidemic spreads faster and more
broadly in a network when the parameter c is smaller, and more narrowly when the
parameter q is larger.
Dynamic networks play an important role in epidemic spreading. With this in mind,
we plan in the future to study the features of dynamical networks. In addition,
further studies will be conducted on epidemic control strategies for different
network topologies.
References
1. Albert R, Barabási AL (2002) Statistical mechanics of complex networks. Rev Mod
Phys 74(1):47–97
Dan Zhang(B) , Yufeng Ma, Aixin Wang, Yue He, and Changzheng He
1 Introduction
CNNIC [10] has pointed out that, as of June 2016, the number of Chinese netizens
had reached 710 million, with 242 million microblog users. The huge user base of
microblog provides fertile soil for enterprises' marketing activities. Many
companies have realized the importance of microblog and have therefore launched
online marketing campaigns on it. However, how to assess the effectiveness of
microblog marketing has become a practical problem faced by enterprises.
As the Matthew effect of microblog becomes gradually more prominent, the number of
domestic and foreign scholars studying the effect of microblog marketing is on the
rise. Research on microblog marketing has mainly focused on three aspects:
marketing effect, the impact of microblog marketing, and marketing strategy. In
the study of the marketing effect of microblog, Leung [13] explored the marketing
effectiveness of two different social media sites (Facebook and Twitter) in the
hotel industry, integrating the attitude-toward-the-ad
Analysis of Enterprise Microblog Marketing 879
2 Related Work
2.1 Enterprise Microblog Marketing Effect Evaluation System
There are relatively abundant indicators for assessing the effect of microblog
marketing. Through numerous case studies and quantitative research, scholars have
found many factors that influence enterprise microblog marketing effectiveness.
Xue [18] found that sales promotions readily prompt consumers to share information
by forwarding to a friend or making comments. In addition, Chang [5] argued that
liking or sharing social media messages can increase the effects of popular
cohesion and message diffusion. Furthermore, Zhu [20] argued that social media
marketing efforts need to be congruent and aligned with the different needs of
social media users.
Bi [3] considered that comments and reposts can reflect microblog marketing
effects. Saulles [16] considered not only simple counts of Twitter followers and
volumes of tweets as indicators of effectiveness, but also social authority
scoring from digital marketing analysts. Based on the above findings, this paper
selects the number of fans, the number of microblogs, the frequency of releasing
microblogs, and the prize value as the input evaluation indexes. The selected
output evaluation indexes are the number of fan forwards, the number of fan
comments, the number of fan likes, and the non-negative sentiment index.
From the input index perspective, this paper puts forward active comments and
passive comments. Active comments are the blogger's unprompted replies to
followers, where the follower's comment does not express doubt or a desire for a
response from the blogger. Passive comments, by contrast, are the replies the
blogger makes to answer the doubts or questions that fans put forward. In fact,
such comments carry obvious marks: for example, they contain the @ symbol,
interrogatives, distinctive punctuation, or microblog emoticons.
From the output index perspective, the original-microblog forwards and the
original-microblog comments are proposed. The reason is that an official microblog
generally contains both original and non-original microblogs (i.e., forwards of
others' microblogs), and the total number of forwards mixes forwards of original
and non-original microblogs. Therefore, in order to measure the spread of the
original microblogs more clearly, this article puts forward these two indicators.
Based on the above analysis, the evaluation index system of enterprise microblog
marketing effectiveness is established, as shown in Fig. 1.
Later related DEA methods mostly build on Farrell's idea to develop mathematical
programming models. Besides comparing efficiency across DMUs within an
organization, DEA has also been used to compare efficiency across firms.
The DEA model has many forms. This paper selects the BCC model, which can evaluate
both scale and technical effectiveness. Sun [17] proposed that we can suppose
there are n decision-making units, where each unit DMUj (j = 1, 2, · · · , n) has
m inputs and s outputs, shown respectively by input X and output Y .
To evaluate DMUj , Cooper [11] suggested the linear programming model (P) and its
dual programming model (D), as shown below:
(P):  max μT y0 = VP
      s.t. ωT xj − μT yj ≥ 0, j = 1, 2, . . . , n,
           ωT x0 = 1,
           ω ≥ 0, μ ≥ 0.        (1)

(D):  min VD = θ
      s.t. Σ j=1..n xj λj + s− = θx0 ,
           Σ j=1..n yj λj − s+ = y0 ,
           λj ≥ 0 (1 ≤ j ≤ n),
           s+ ≥ 0, s− ≥ 0.      (2)
In the above formulas, s− and s+ are slack variables, λj are the weight
coefficients of the input and output indicators, and θ represents the ratio of
input reduction. If θ∗ = 1 and s−∗ = s+∗ = 0, the unit is considered efficient. If
θ∗ = 1 but s−∗ and s+∗ are not all 0, the unit is considered weakly efficient. If
θ∗ < 1, the unit is considered inefficient. When the model is used to measure
technical efficiency, technical efficiency can be decomposed into pure technical
efficiency and scale efficiency.
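Solving model (D) in general requires a linear programming solver. For intuition, in the special case of one input and one output the constant-returns (CCR) efficiency reduces to each DMU's output/input ratio normalized by the best ratio; the DMU data below are hypothetical:

```python
def ccr_efficiency(inputs, outputs):
    """CCR (constant returns to scale) efficiency for the one-input,
    one-output special case: theta_j = (y_j/x_j) / max_k (y_k/x_k).
    Efficient DMUs score 1; any other score is the input contraction
    factor theta."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical DMUs: (followers invested, forwards obtained).
x = [10.0, 20.0, 40.0]
y = [5.0, 20.0, 20.0]
print(ccr_efficiency(x, y))  # [0.5, 1.0, 0.5]
```

A score of 0.5 means the DMU could, in principle, produce the same output with half the input, which is exactly the reading of θ in model (D).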
3 Data Collection
3.1 Sample Selection
The research period was set to March 5–11, 2014. When choosing the brands, this
paper selected five different industries: Women's Clothing, Mobile Phones, Skin
Care, Home Furnishings, and Snacks. The microblogs of enterprises with Sina
official certification were then taken as the study sample. The final list of
chosen enterprises is shown in Table 1.
Category Women’s clothing Mobile phones Skin care Home furnishings Snacks
1 Ochirly Apple EsteeLauder Lin shi mu ye Three squirrel
2 HSTYLE Samsung Pechoin Bell Land Be & Cheery
3 Vero Moda MIUI Lancome Gudxon Haoxiangni
4 ELF SACK Huawei Laneige Coleshome FERRERO ROCHER
5 Artka VIVO L’Oreal QuanU ZHOUHEI YA
6 INMAN Coolpad Kiehl’s Flisa Lou lan mi yu
7 GIRDEAR Nokia MEIFUBAO Hegou BESTORE
8 ONLY HTC Marykay KUKa Xinnongge
9 LIEBO Sony AFu LUHU Xiyumeinong
10 PeaceBird Lenovo CHCEDO YIMILOVE Houstage
This paper collected 1363 microblogs and 128483 comments from the 50 Sina official
microblogs. Among all the comments, there are 123782 fan comments, 2584 active
comments, and 2117 passive comments. The main input and output indicators are
defined as follows:
(1) The input indicators
Followers and microblogs were both obtained with the Pameng crawler.
The frequency of releasing microblogs (FRW). This paper collected each
enterprise's microblogs over 7 days; the posting frequency = the number of
microblogs / 7.
The number of active comments (NAC). This paper invited two researchers to judge
the emotion of the comments. When judging a comment, the researcher, according to
the emotion and tone of the sentence, takes the perspective of a follower to
determine whether the comment shows a tendency to question.
The number of passive comments (NPC). Collected with a method similar to that for
active comments.
Prize value (PV). Through Sina advanced search, the microblogs containing prize
information were screened, and the number and price of the prizes were recorded.
This paper then calculated the total prize value according to market prices.
(2) The output indicators
The number of microblog forwards (NWF), the number of fan likes (NWL), the
original microblog forwards (OWF), and the original microblog comments (OWC) were
all obtained with the Pameng crawler.
The number of follower comments (NWC). Follower comments = microblog comments −
active comments − passive comments.
Non-negative sentiment rate (NNS). Non-negative sentiment rate = (positive emotion
+ neutral emotion) / all kinds of emotion. This represents the proportion of
non-negative sentiment in the total emotion, and conversely also reflects the
proportion of negative emotion. By collecting negative emotions, an enterprise can
effectively understand its users and then improve the service.
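The derived indicators above can be computed mechanically from the raw weekly counts; the field names and counts below are illustrative assumptions, not the paper's data:

```python
def marketing_indicators(raw):
    """Derive FRW, NWC and NNS from raw weekly counts. The dictionary
    keys are assumed names for illustration."""
    frw = raw["microblogs"] / 7                       # posts per day
    nwc = raw["comments"] - raw["active"] - raw["passive"]
    emotions = raw["positive"] + raw["neutral"] + raw["negative"]
    nns = (raw["positive"] + raw["neutral"]) / emotions
    return frw, nwc, nns

raw = {"microblogs": 42, "comments": 700, "active": 30, "passive": 20,
       "positive": 400, "neutral": 200, "negative": 50}
frw, nwc, nns = marketing_indicators(raw)
print(frw, nwc, round(nns, 3))  # 6.0 650 0.923
```

Such per-brand values, averaged over the 7-day window, form the input and output columns fed into the DEA model.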
To improve the accuracy of the emotion data, this paper used a manual method to
judge the emotion of the comments. When collecting the comments, spam comments
were treated as neutral to avoid their interference.
After collection, the comments' emotions were classified using the content
analysis method. The two researchers first classified the comments of one
enterprise chosen at random from the 50; these comments were used to train the
researchers repeatedly to ensure accuracy. The Kappa statistic from reliability
studies was then used to test the researchers; the resulting kappa value of 0.87
shows good consistency, so the two researchers could be employed in the emotion
classification phase. Finally, we obtained the numbers of positive, negative, and
neutral comments; the averages of these three comment counts were taken as each
brand's corresponding data, which we use in our study.
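The inter-rater agreement test mentioned above is Cohen's kappa, which compares observed agreement with the agreement expected by chance; the confusion-matrix counts below are hypothetical:

```python
def cohen_kappa(matrix):
    """Cohen's kappa from a square confusion matrix whose rows are rater
    A's labels and whose columns are rater B's labels."""
    n = sum(sum(row) for row in matrix)
    # Observed agreement: fraction of items labeled identically.
    po = sum(matrix[i][i] for i in range(len(matrix))) / n
    # Chance agreement: product of the raters' marginal label frequencies.
    pe = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
             for i in range(len(matrix))) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical counts for positive / neutral / negative comment labels.
m = [[45, 2, 1],
     [2, 33, 1],
     [1, 1, 18]]
print(round(cohen_kappa(m), 2))  # about 0.88
```

A value of this magnitude, like the paper's 0.87, is conventionally read as good agreement between the two coders.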
Then, each brand's microblog indexes over the 7 days (except the posting
frequency, the prize value, and the non-negative sentiment index) were also
averaged. Because of the numerous tables, this paper only presents the women's
clothing enterprises' data, as shown in Tables 2 and 3.
From the tables we can see that many values are 0 in the columns of active
comments, passive comments, and prize value. This is because different enterprises
have different marketing strategies, so some enterprises' input values are 0.
Since DEA input data cannot be 0, this paper replaces 0 with the infinitesimal
value 0.0001.
enterprise’s score, and then classified calculate the score of each category. The
higher score shows that the category’s marketing effect is poorer.
Experimental data shows that the top 30 enterprises’ marketing effects are
very ideal, because the comprehensive technical efficiency, pure technical effi-
ciency and scale efficiency value are 1. The comprehensive technical efficiency of
last 20 enterprises are different, as shown in Table 5.
The data show that, among the top 30 enterprises, mobile phone companies number 8,
accounting for 26.7%; home furnishing, 7, accounting for 23.3%; skin care, 6,
occupying 20%; snacks, 5, accounting for 16.7%; and women's clothing, 4, occupying
13.3%.
Among all 50 enterprises, those achieving satisfactory efficiency account for 16%
in the mobile phone industry, followed by home furnishing at 14%, skin care at
12%, snacks at 10%, and women's clothing at only 8%. These data show that, among
the five industries, the microblog marketing of mobile phone companies performs
relatively well, while that of women's clothing is relatively weak.
Then, summing the ranking scores of all brands by category, mobile phones have the
lowest score, 78 points; home furnishing gets 138 points; skin care, 155 points;
snacks, 223 points; and women's clothing gets the highest score, 246 points.
Therefore, from the input-output perspective, the input-output ratio of mobile
phones is the most ideal, which shows that this category has a good microblog
marketing effect. The home furnishing and skin care industries also do relatively
well. The inputs of snacks and women's clothing are redundant while their outputs
are insufficient; they should adjust their resources properly and maximize the
value of those resources to increase their microblog marketing effect.
In summary, this paper finds that the microblog marketing effects of different
types of enterprises on Sina microblog differ. The best effect is in the mobile
phone industry, followed by home furnishing, skin care, and snacks; the worst is
the women's clothing industry.
3.4 Results
(1) Microblog Marketing Input-Output Analysis
As can be seen from the previous section, there are three kinds of invalid marketing effects:
• technical efficiency, pure technical efficiency and scale efficiency are all not 1;
• technical efficiency and pure technical efficiency are not 1, while scale efficiency is 1;
• pure technical efficiency is 1, while technical efficiency and scale efficiency are not 1.
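The three kinds can be written as a small classification rule over the three efficiency scores; a minimal sketch (the numeric tolerance for "equals 1" is an assumption, not from the paper):

```python
def invalid_effect_kind(te, pte, se, tol=1e-9):
    """Classify a unit by its technical (te), pure technical (pte) and
    scale (se) efficiencies; returns 0 for a fully efficient unit."""
    eff = lambda v: abs(v - 1.0) < tol
    if eff(te) and eff(pte) and eff(se):
        return 0                      # efficient: all three equal 1
    if not eff(te) and not eff(pte) and not eff(se):
        return 1                      # kind 1: none of the three equals 1
    if not eff(te) and not eff(pte) and eff(se):
        return 2                      # kind 2: only scale efficiency is 1
    if eff(pte) and not eff(te) and not eff(se):
        return 3                      # kind 3: only pure technical efficiency is 1
    return -1                         # other combinations
```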
This paper selects the first kind as a representative case for analysis. Several enterprises satisfy this condition, and one company is randomly selected: Be & Cheery is chosen as the sample. Table 6 lists Be & Cheery's adjusted values.
Table 6. The adjustment values of Be & Cheery.
The comprehensive efficiency value of Be & Cheery is 0.142, the pure technical efficiency value is 0.143, and the scale efficiency value is 0.999. The very low comprehensive and pure technical efficiency values show that there is a big problem. From the table, we can see input redundancy in all 6 input indexes, especially in the number of followers: the original value is 80.650, but the target value is only 5.508. Be & Cheery spends too much energy on increasing its number of followers, and this part of the input is not actually converted into corresponding output, which causes a great waste of resources. Be & Cheery released 42 microblogs within 7 days, with high numbers of active and passive comments. Relatively speaking, the official microblog is active, but this activity does not bring good interaction: the average numbers of forwards and comments are only 5 each, and the number of likes is 2, so the output is significantly inadequate. Besides, the prizes of Be & Cheery are attractive, with prices reaching 1489 yuan, yet they do not attract corresponding fan participation. On the other hand, Be & Cheery is good at guiding its followers' emotions and controlling the scale of its marketing, so future improvement lies mainly in the allocation of resources. To sum up, the above adjustment values can provide an important basis for evaluating enterprise marketing effects.
888 D. Zhang et al.
comments may damage the enterprise's products to a certain extent. Therefore, enterprises must pay close attention to negative emotion and identify its origin; this can help the enterprise eliminate the negative emotion.
For positive emotions, the enterprise should take the initiative to interact with its followers and encourage users to maintain the positive emotion, thereby increasing the enterprise's reputation. For neutral emotions, the enterprise should show its advantages to consumers with objective data.
3 Both active and passive comments should be addressed
Most followers want a reply or some attention when they comment on a blogger. If bloggers can detect this kind of follower behavior in time and respond promptly, they can largely reduce negative emotions and increase positive interaction with followers.
4 Diversify the types of microblog
Zhang et al. [19] believed that the type of a microblog post can, to a certain extent, influence fans' browsing and forwarding. Generally, posts are divided into pictures, short links, video, text, or a mixture of these four types. To maintain the freshness of the microblog, the enterprise should alternate among multiple post types to give users a fresh experience.
4 Conclusions
Through literature research, this paper puts forward evaluation indicators for the microblog marketing effect of enterprises and sets up a comprehensive evaluation system.
Combining these indicators with the method of data envelopment analysis (DEA), this paper proposes an evaluation model: a quantitative input-output method for evaluating the microblog marketing effect across different enterprises. In the empirical part, 50 different enterprises are selected to verify the feasibility of the model. Based on objective data, this paper compares the marketing effects of the different enterprises and provides feasible improvement measures.
This paper does have certain limitations in the selection of indicators: the enterprises' actual sales are not used as an evaluation index. Besides, this paper studies the enterprises only over a short period; future research can extend the time span and carry out deeper research on microblog marketing effects.
References
1. Baird CH, Parasnis G (2011) From social media to social customer relationship
management. Strategy Leadersh 39(5):30–37
2. Barry AE, Bates AM et al (2016) Alcohol marketing on twitter and instagram:
evidence of directly advertising to youth/adolescents. Alcohol Alcohol 51(4):487–
492
3. Bi L (2013) Enterprise micro-blogging marketing effect evaluation model and
empirical research based on micro-blogging dissemination of information flow. J
Intell 7:67–71
4. Bressler BRG (2010) The measurement of productive efficiency. In: Western Farm
Economic Association, Proceedings, vol 3, pp 253–290
5. Chang YT, Yu H, Lu HP (2015) Persuasive messages, popularity cohesion, and
message diffusion in social media marketing. J Bus Res 68(4):777–782
6. Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision
making units. Eur J Oper Res 2(6):429–444
7. Chatterjee P (2001) Online reviews: do consumers use them? Adv Consum Res
28:133–139
8. Chen H (2015) College-aged young consumers interpretation of twitter and mar-
keting information on twitter. Young Consum 16(2):208–221
9. Clark EM, Jones CA et al (2016) Vaporous marketing: uncovering pervasive elec-
tronic cigarette advertisements on twitter. Plos One 11(7):e0157304
10. CNNIC (2016) The 38th statistical report on internet development in China
11. Cooper WW, Seiford LM, Tone K (2001) Data envelopment analysis: a comprehen-
sive text with models, applications, references and dea-solver software. Springer,
Heidelberg
12. Kafeza E, Makris C, Vikatos P (2016) Marketing campaigns in twitter using a
pattern based diffusion policy. In: 2016 IEEE International Congress on Big Data
(BigData Congress), pp 125–132
13. Leung XY (2015) The marketing effectiveness of social media in the hotel industry:
a comparison of facebook and twitter. J Hospitality Tourism Res 39(2):147–169
14. Liu G, Amini MH et al (2016) Best practices for online marketing in twitter:
an experimental study. In: IEEE international conference on electro information
technology, pp 504–509
15. Park SB, Ok CM, Chae BK (2016) Using twitter data for cruise tourism marketing
and research. Travel Tourism Mark 33(6):885–898
16. Saulles MD (2015) Push and pull approaches to using twitter as a marketing tool.
In: Proceedings of the European conference on social media, pp 105–111
17. Sun YL (2008) Evaluation on the capacity of agricultural sustainable development
in sichuan province based on dea method. Soft Sci 6:100–103 (in Chinese)
18. Xue JP, Yu WP, Niu YG (2013) Research on brand communication effect of e-commerce enterprises' microblogging - a case study of 51buy.com's microblogging. Soft Sci 12:67–71 (in Chinese)
19. Zhang L, Peng TQ et al (2014) Content or context: which matters more in infor-
mation processing on microblogging sites. Comput Hum Behav 31(1):242–249
20. Zhu YQ, Chen HG (2015) Social media and human need satisfaction: implications
for social media marketing. Bus Horiz 58(3):335–345
Accounting for Clustering and Non-ignorable
Missingness in Binomial Responses:
Application to the Canadian National Survey
on Child Safety Seat
1 Introduction
Subjects are likely to decline participation in a survey if the variables measured
are socially sensitive. This, raises the issue of non-ignorable missingness whereby,
the missignness mechanism depends on the value of the response itself. The
current research was motivated by data from a large Canadian survey in which
200 retail parking lots across Canada were random chosen and vehicles entering
to the parking lot were asked consent to participate in a survey which measured
the correct use of child safety seats in vehicles. Age, height, weight and type of
car safety seat used were recorded for up to three children in the participating
vehicles. By using these variables, a binary correct use variable was then defined.
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 73
892 S.E. Ahmed et al.
$$y_i \,|\, b \sim \text{Binomial}(m_i,\, p_i), \qquad b \sim MVN(0,\, D).$$
We use the logit link function to relate covariates (both fixed and random)
and the pi as follows
where the logit link function is again used to connect covariates and probability
of non-response πi = Pr(ri = 1 | xi , yi ) as follows
where $\hat{b}$ is the solution to
$$\left[\frac{\partial h(b)}{\partial b}\right]^{T}_{b=\hat{b}}\big(y-\mu(b)\big)\Big|_{b=\hat{b}}-D^{-1}\hat{b}=0.$$
$$\log f(y\,|\,x,\beta,\sigma^2)=\sum_{i=1}^{n}\left\{y_i\,\eta_i(\hat{b})-m_i\log\!\big[1+\exp\eta_i(\hat{b})\big]+\log\binom{m_i}{y_i}-\frac{\hat{b}_i^2}{2\hat{\sigma}^2}-\frac{1}{2}\log\!\Big[m_i\exp\eta_i(\hat{b})\big(1+\exp\eta_i(\hat{b})\big)^{-2}\hat{\sigma}^2+1\Big]\right\}. \tag{8}$$
and f (yi | xi , β, σ 2 ) is the marginal density of yi obtained from Eq. (8). Note that
if the original yi is observed, then wi (yi ) = 1. Now, the expected log-likelihood
for all n observations in the complete binomial sample is
$$L(\alpha,\beta,\sigma^2)=\sum_{i=1}^{n}\sum_{y_i=0}^{m_i}w_i(y_i)\log f(r_i\,|\,z_i,\alpha)+\sum_{i=1}^{n}\sum_{y_i=0}^{m_i}w_i(y_i)\log f(y_i\,|\,x_i,\beta,\sigma^2). \tag{10}$$
This last equality indicates that the model for the complete data and that of
the missingness indicator can be separated as in Ibrahim et al. [3], and therefore,
estimation of the regression parameters can be carried out separately.
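For illustration, one summand of the approximate complete-data log-likelihood in Eq. (8) can be evaluated directly; a minimal sketch, writing $p(1-p)$ at the linear predictor as $e^{\eta}/(1+e^{\eta})^{2}$:

```python
from math import comb, exp, log

def loglik_term(y, m, eta, b, sigma2):
    """One summand of the Laplace/PQL approximate marginal log-likelihood
    (Eq. (8)): binomial log-density at linear predictor eta, plus the
    random-effect penalty and the curvature correction."""
    p_var = exp(eta) / (1 + exp(eta)) ** 2          # p(1 - p) at eta
    return (y * eta - m * log(1 + exp(eta)) + log(comb(m, y))
            - b ** 2 / (2 * sigma2)
            - 0.5 * log(m * p_var * sigma2 + 1))
```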
(1) Set the initial weights to 1 and fit ordinary weighted GLMM to the aug-
mented (complete) data (y, x) by using any statistical software that fits
GLMM via penalized quasi-likelihood methods, and hence, obtain the start-
ing values of β, b, and σ 2 .
(2) Set the initial weights to 1 and fit weighted GLM to the missing indicator
data (r, z) and obtain the starting values of α.
(3) By using (9), update the weights wi (yi ) based on the current values of β, α,
b, and σ 2 .
(4) Update β, b, and σ 2 by fitting weighted GLMM to the augmented (complete)
data (y, x) with the current values of wi (yi ).
(5) Update α by fitting weighted GLM to (r, z) with the current values of wi (yi ).
(6) Update the weights wi (yi ) based on the new β, α, b, and σ 2 .
(7) Repeat steps 4–6 until β and α converge.
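Steps (2) and (5) fit a weighted GLM to the missingness indicator; a minimal sketch of such a fit via iteratively reweighted least squares for weighted logistic regression, on synthetic data (the data, and the all-ones weights corresponding to fully observed responses, are illustrative):

```python
import numpy as np

def weighted_logistic_irls(X, r, w, n_iter=25):
    """Fit logit Pr(r = 1 | X) by IRLS with per-observation EM weights w."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        v = np.clip(p * (1 - p), 1e-10, None)
        W = w * v                                  # IRLS working weights
        z = eta + (r - p) / v                      # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(5000), rng.normal(size=5000)])
r = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ np.array([-0.5, 1.0])))))
beta_hat = weighted_logistic_irls(X, r, np.ones(5000))  # w_i(y_i) = 1 when y_i observed
```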
where
$$H(\hat{\alpha},\hat{\beta},\hat{\sigma}^2)=\sum_{i=1}^{n}\sum_{y_i=0}^{m_i}\hat{w}_i(y_i)\,S_i(\hat{\alpha},\hat{\beta},\hat{\sigma}^2\,|\,r_i,y_i,x_i)\,S_i^{T}(\hat{\alpha},\hat{\beta},\hat{\sigma}^2\,|\,r_i,y_i,x_i)-\sum_{i=1}^{n}U_i(\hat{\alpha},\hat{\beta},\hat{\sigma}^2\,|\,r_i,y_i,x_i)\,U_i^{T}(\hat{\alpha},\hat{\beta},\hat{\sigma}^2\,|\,r_i,y_i,x_i),$$
with
$$S_i=\big((r_i-\hat{\pi}_i),\,(r_i-\hat{\pi}_i)z_{i1},\,\cdots,\,(r_i-\hat{\pi}_i)z_{ip},\,(r_i-\hat{\pi}_i)y_i,\,d_i,\,d_i x_{i1},\,\cdots,\,d_i x_{ip},\,\partial l_i/\partial\sigma^2\big)^{T},$$
where
$$d_i=y_i-\hat{\mu}_i(\hat{b})-\frac{m_i\hat{\sigma}^2\hat{p}_i(1-\hat{p}_i)(1-2\hat{p}_i)}{2\big(m_i\hat{\sigma}^2\hat{p}_i(1-\hat{p}_i)+1\big)},\qquad
\frac{\partial l_i}{\partial\sigma^2}=\frac{\hat{b}_i^2}{2(\hat{\sigma}^2)^2}-\frac{m_i\hat{p}_i(1-\hat{p}_i)}{2\big(m_i\hat{\sigma}^2\hat{p}_i(1-\hat{p}_i)+1\big)}.$$
The matrix
$$Q(\hat{\alpha},\hat{\beta},\hat{\sigma}^2)=\begin{pmatrix}Q_1(\hat{\alpha})&0\\0&Q_2(\hat{\beta},\hat{\sigma}^2)\end{pmatrix}$$
is such that the $(j,k)$th element of $Q_1(\hat{\alpha})$ is given by
$$Q_{1,jk}(\hat{\alpha})=-\left.\frac{\partial^2 L(\alpha,\beta,\sigma^2)}{\partial\alpha_j\,\partial\alpha_k}\right|_{\alpha=\hat{\alpha}},$$
3 Application
The data analyzed in this example are based on the Canadian child safety seat survey described in Snowdon et al. [5]. The main objective of the survey was to measure whether children traveling in vehicles on Canadian roads are correctly restrained in safety devices appropriate for their ages, weights and heights. Drivers of vehicles entering 200 randomly selected retail parking lots across Canada were asked to participate in the survey. For vehicles whose drivers agreed to participate, the heights, weights, ages and types of safety restraints used were recorded for up to three child occupants in the vehicle. For vehicles whose drivers refused to participate, the number of children in the vehicle was recorded. In general, there is no consensus on the definition of a correct seat for a child; there are several criteria of correct use based on one or a combination of age, weight and height. In this article, we employed the correct-use definition given in Table 1, based only on the child's age. We defined the number of children who are correctly restrained in a vehicle as $y_i$, assumed
Table 1. Definition of correct use of CSS based on the child’s age groups
Table 2. Sample proportions of correct use of CSS and marginal probabilities of correct use adjusted for missingness
Table 4. Odds ratios of correct use of CSS and their 95% confidence intervals for all
provinces as compared to Ontario
Table 6. Probabilities of non-response, π̂0 , π̂1 , π̂2 , π̂3 , respectively, when correct use
within vehicle is zero, one, two and three
4 Conclusion
In this manuscript, we proposed a method for handling non-ignorable missing responses in clustered binary data. We employed a GLMM along with the penalized quasi-likelihood estimation method for fitting the parameters of the complete data, and a GLM for fitting those of the missingness mechanism, combining them via the EM algorithm. Procedures for estimating the marginal probabilities of the response are derived via the delta method. The proposed methods are then applied to a Canadian national survey for estimating rates of correct use of child safety seats in vehicles. The proposed methods are attractive in that they only require existing statistical software with capabilities for fitting GLMMs and GLMs.
References
1. Breslow NE, Clayton DG (1993) Approximate inference in generalized linear mixed
models. J Am Stat Assoc 88(421):9–25
2. Huang R, Carriere KC (2006) Comparison of methods for incomplete repeated mea-
sures data analysis in small samples. J Stat Plann Infer 136(1):235–247
3. Liechty JC, Roberts GO (2001) Missing responses in generalised linear mixed models
when the missing data mechanism is nonignorable. Biometrika 88(2):551–564
4. Louis TA (1982) Finding the observed information matrix when using the EM algo-
rithm. J Roy Stat Soc 44(2):226–233
5. Snowdon AW, Hussein A et al (2009) Are we there yet? Canada’s progress towards
achieving road safety vision 2010 for children travelling in vehicles. Int J Inj Control
Saf Promot 16(4):231–237
6. Song XK (2007) Correlated data analysis: modeling, analytics, and applications.
Springer, New York
7. Tierney L, Kadane JB (1986) Accurate approximations for posterior moments and
marginal densities. J Am Stat Assoc 81(393):82–86
8. Ugarte MD (2009) Missing data methods in longitudinal studies: a review. Test
18(1):1–43
9. Zeger SL, Albert PS (1988) Models for longitudinal data: a generalized estimating
equation approach. Biometrics 44(4):1049–60
Equity and Sustainability Based Model
for Water Resources Planning
1 Introduction
In a water resources allocation system, there is interaction between the regional authority and the sub-areas of a river basin [11,12]. Efficiency and sustainability of water resources allocation often conflict with each other: if ecological water is sacrificed, destroying the sustainability of the allocation, the other water sectors that generate economic benefits can obtain more water, and the efficiency of the allocation improves [4,5,8]. In addition, guaranteeing both the efficiency and the sustainability of water use leads to contradiction between the two levels of decision makers (i.e., the regional authority and the sub-areas).
Rogers and Louis [9] pointed out that there are a variety of activities and objectives in a water system, as well as complicated supply-demand contradictions, which put pressure on the regional authority and sub-area managers, because these contradictions and ineffective management limit economic development and environmental protection. Therefore, an equitable and efficient water resources allocation is an important measure for dealing with water crises and improving water management, especially when water is very scarce.
Based on the above, this paper establishes a user-friendly water resources allocation model with a two-level structure, in which the regional authority and the sub-areas are the upper-level and lower-level decision makers respectively; as there are multiple decision makers in the lower level, it is a decentralized bi-level model. In the model, the regional authority, as the leader, aims to maximize equity (i.e., minimize the Gini coefficient), while the sub-areas, as the followers, expect to maximize their economic benefits. In addition, the practical water resources allocation process faces complex uncertain environments: technological progress, equipment updates and climatic change bring uncertainty to the parameters of the water transport process, and it is hard to describe this uncertainty using a simple random variable or fuzzy variable; therefore, fuzzy random variables are considered. Based on the discussion above, a decentralized multi-objective bi-level model with fuzzy random coefficients is established in this paper.
The remainder of this paper is structured as follows. In Sect. 2, the problem statement is given, including the bi-level structure of the model and the motivation for considering fuzzy random variables. An expected decentralized bi-level model is then established in Sect. 3. In Sect. 4, a case study is conducted to verify the applicability of the model. Finally, concluding remarks are given in Sect. 5.
2 Problem Statement
For a regional water resources allocation system, in the case of insufficient water supply, the limited water should first be allocated optimally to each sub-area, and then allocated by each sub-area to its different water sectors (i.e., ecological, municipal, industrial and agricultural water) [10]. A fair allocation of water can balance the contradiction between water supply and demand. Therefore, the regional authority, the leader in the upper level, expects to maximize the equity of the water allocation, while in the lower level the sub-areas hope to maximize their benefits through water allocation based on different allocation principles (i.e., the efficiency, stress and priority principles) derived from analysis of the relationship between supply and demand. The bi-level structure of the water allocation model is shown in Fig. 1.
The need to address uncertainty in water allocation systems is widely recognized, and there is strong motivation for considering fuzziness and randomness in water resources planning problems [1,3,6]. For example, the loss ratio of water transfer is not fixed because of many uncertain elements, such as lack of historical data, technological progress and equipment updates, and dynamics of
904 Y. Tu et al.
[Fig. 1. The bi-level structure of the water allocation model: the upper level allocates water to the sub-areas with feedback from below; in the lower level each sub-area allocates water among its sectors, ecological water is met first, and the objective is benefit maximization.]
Fig. 2. The flowchart of the loss ratio of water transfer as a fuzzy random variable
3 Modelling
3.1 Assumptions
(2) The leader (i.e., the regional authority) and the followers (i.e., the sub-areas) make rational decisions to avoid unsatisfactory solutions.
(3) Loss ratio of water transfer is considered as a fuzzy random variable, in which
the parameters are determined using data analysis based on historical data
and experience.
3.2 Notations
Indices
i: index of sub-areas;
j: index of water sectors;
k: index of the special water sectors whose requirements must be met;
Parameters
Q: available water of the basin;
e_{i,j}: economic benefit of water use by sector j in sub-area i;
e_i: economic benefit of water use in sub-area i;
o_{i,j}: amount of the pollutant BOD per unit of wastewater discharged by sector j in sub-area i;
w_{i,j}: wastewater discharge coefficient of sector j in sub-area i;
v_i: economic benefit of sub-area i;
q_i: water source of sub-area i;
z_i^{min}: minimum capacity of the physical connection between the water source and sub-area i;
z_i^{max}: maximum capacity of the physical connection between the water source and sub-area i;
s_i^{max}: maximum capacity of sub-area i;
d_{i,j}^{min}: minimum water requirement of sector j in sub-area i;
d_{i,j}^{max}: maximum water requirement of sector j in sub-area i;
ed_i^{min}: minimum ecological water requirement of sub-area i;
Based on the problem statement and the above preparatory work, the decen-
tralized bi-level model for water resources allocation with analysis of supply and
demand is established as follows.
Equity and Sustainability Water Resources Planning 907
$$\min_{x}\; G=\frac{1}{2m\sum_{k=1}^{m}x_k/e_k}\sum_{k=1}^{m}\sum_{l=1}^{m}\left|\frac{x_k}{e_k}-\frac{x_l}{e_l}\right| \tag{1}$$
$$\min_{x}\; P=\sum_{i=1}^{m}\sum_{j=1}^{n}0.01\,o_{i,j}\,w_{i,j}\,y_{i,j} \tag{2}$$
$$\text{s.t.}\quad\sum_{i=1}^{m}x_i\le Q \tag{3}$$
$$\big(1-E[\tilde{\bar{\alpha}}_{i}^{loss}]\big)x_i+q_i\ge\sum_{k=1}^{t}d_{i,k}^{\min},\quad\forall i \tag{4}$$
$$z_i^{\min}\le x_i\le z_i^{\max},\quad\forall i \tag{5}$$
$$\big(1-E[\tilde{\bar{\alpha}}_{i}^{loss}]\big)x_i+q_i\le s_i^{\max},\quad\forall i \tag{6}$$
For given $x$, $y$ solves
$$\max_{y}\; v_i=\sum_{j=1}^{n}e_{i,j}\,y_{i,j},\quad\forall i \tag{7}$$
Equation (1) represents the first objective of the leader (i.e., the regional authority), which maximizes the equity of the water allocation by minimizing the Gini coefficient [13]. Equation (2) represents the leader's second objective, which minimizes environmental pollution so as to achieve the greatest possible sustainability of water use. The constraint in Eq. (3) requires that the water allocated to all sub-areas be no more than the available water in the river basin. The constraints in Eq. (4) require that the water allocated to a sub-area meet the water use of the special water sectors in that sub-area; $E[\cdot]$ is the expected value operator proposed in Liu [7]. The constraints in Eq. (5) describe the range of water allocated to each sub-area. The constraints in Eq. (6) restrict the total water a sub-area can obtain to be no more than its maximum capacity. The equations in Eq. (7) represent the objectives of the followers (i.e., the sub-areas), which maximize the economic benefit of each sub-area. The constraints in Eq. (8) define the range of water allocated to each water sector of a sub-area, and the constraints in Eq. (9) require that the ecological water requirement be met in each sub-area.
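The Gini coefficient in the leader's equity objective (Eq. (1)) can be sketched numerically; a minimal sketch over allocations x and per-unit benefits e (the numbers below are toy values, not from the case study):

```python
import numpy as np

def gini(x, e):
    """Gini coefficient of benefit-normalized allocations y_k = x_k / e_k,
    as in the leader's equity objective (Eq. (1))."""
    y = np.asarray(x, dtype=float) / np.asarray(e, dtype=float)
    m = len(y)
    pairwise = np.abs(y[:, None] - y[None, :]).sum()  # sum_k sum_l |y_k - y_l|
    return pairwise / (2 * m * y.sum())
```

An allocation proportional to benefits gives G = 0 (perfect equity); more unequal allocations push G towards 1.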
4 Model Analysis
From the decentralized bi-level model, the following results can be obtained.
(1) When $(1-E[\tilde{\bar{\alpha}}_i^{loss}])x_i+q_i-ed_i^{\min}<\sum_{j=1}^{3}d_{i,j}^{\min}$, the allocation principles are: the equity principle in the upper level and the priority principle in the lower level.
(2) When $(1-E[\tilde{\bar{\alpha}}_i^{loss}])x_i+q_i-ed_i^{\min}\ge\sum_{j=1}^{3}d_{i,j}^{\min}$, $x_i\ge\big(\sum_{j=1}^{3}d_{i,j}^{\min}+ed_i^{\min}-q_i\big)/\big(1-E[\tilde{\bar{\alpha}}_i^{loss}]\big)$, and $e_{i1}=e_{i2}=e_{i3}$, the allocation principles are: the equity principle in the upper level and the stress principle in the lower level.
(3) When $(1-E[\tilde{\bar{\alpha}}_i^{loss}])x_i+q_i-ed_i^{\min}\ge\sum_{j=1}^{3}d_{i,j}^{\min}$, $x_i\ge\big(\sum_{j=1}^{3}d_{i,j}^{\min}+ed_i^{\min}-q_i\big)/\big(1-E[\tilde{\bar{\alpha}}_i^{loss}]\big)$, and $e_{i1}\ne e_{i2}\ne e_{i3}$, the allocation principles are: the equity principle in the upper level and the efficiency principle in the lower level.
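The case distinction above can be sketched as a small selection rule; a minimal sketch, assuming the remaining case corresponds to the efficiency principle (the one principle from Sect. 2 not used by the other cases) and omitting, for brevity, the lower bound on x_i shared by the last two cases; `alpha_exp` stands for the expected loss ratio of sub-area i:

```python
def allocation_principle(alpha_exp, x_i, q_i, ed_min, d_min, e):
    """Select the lower-level allocation principle for sub-area i from the
    supply/demand conditions (the upper level always uses the equity principle)."""
    net_supply = (1 - alpha_exp) * x_i + q_i - ed_min
    if net_supply < sum(d_min):
        return "priority"        # supply below total minimum sector demand
    # supply sufficient: split on equal vs unequal sector benefits
    return "stress" if len(set(e)) == 1 else "efficiency"
```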
5 Case Study
The Qujiang basin is taken as a case example to illustrate the proposed model. The Qujiang river is the largest tributary of the Jialing river in the upper reaches of the Yangtze; it crosses three provinces in southwest China, and in this paper the part of the Qujiang basin in Sichuan Province is studied (see Fig. 3). As shown in Fig. 3, the available water is supplied to five sub-areas: Bazhong (BZ), Nanchong (NC), Guang'an (GA), Guangyuan (GY) and Dazhou (DZ). The data for the related parameters can be seen in Tables 1, 2 and 3.
[Fig. 3. The studied part of the Qujiang basin in Sichuan Province, China.]
Sub-area | ᾱ̃_i^loss | q_i (10^4 m^3) | s_i^max (10^4 m^3) | z_i^max (10^4 m^3) | z_i^min (10^4 m^3) | e_{i,j}: IND, AGR, MUN
BZ | (0.42, α_1^loss, 0.47), α_1^loss ∼ N(0.45, 0.01) | 42764 | 50000 | 53157 | 47905 | 60.61, 42.55, 43.86
NC | (0.36, α_2^loss, 0.40), α_2^loss ∼ N(0.38, 0.04) | 23870 | 40000 | 45320 | 39514 | 54.35, 34.48, 32.26
GA | (0.28, α_3^loss, 0.32), α_3^loss ∼ N(0.30, 0.01) | 33997 | 40000 | 46889 | 18622 | 67.57, 47.62, 47.17
GY | (0.45, α_4^loss, 0.55), α_4^loss ∼ N(0.50, 0.09) | 2874 | 30000 | 38543 | 12448 | 74.07, 31.25, 37.88
DZ | (0.29, α_5^loss, 0.35), α_5^loss ∼ N(0.32, 0.01) | 115382 | 60000 | 67832 | 26308 | 86.96, 43.11, 45.45
Note: IND—Industrial water; AGR—Agricultural water; MUN—Municipal water.
Before solving the proposed model, the expected values of the loss ratios of water transfer should be given. The loss ratios of water transfer are triangular fuzzy random variables in which the random parameters follow normal distributions. The expected value operator for triangular fuzzy random variables proposed in Xu et al. [12] can be applied to deal with the fuzzy random loss ratios. Thus, the model can subsequently be processed with the expected values of the loss ratios of water transfer.
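Assuming the operator applied is Liu's expected value, a triangular fuzzy variable $(a, m, b)$ has expected value $(a+2m+b)/4$; with the mode random, $m\sim N(\mu,\sigma^2)$, linearity of expectation gives $(a+2\mu+b)/4$, so the variance drops out. A minimal sketch:

```python
def exp_tri_fuzzy_random(a, mu, b):
    """Expected value of a triangular fuzzy random variable (a, alpha, b),
    alpha ~ N(mu, sigma^2), assuming Liu's operator E[(a, m, b)] = (a + 2m + b)/4;
    by linearity the normal variance does not enter."""
    return (a + 2 * mu + b) / 4.0
```

For the BZ loss ratio (0.42, α_1^loss, 0.47) with α_1^loss ∼ N(0.45, 0.01), this gives 0.4475.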
To deal with the multiple objectives in the upper level, the weighted sum method based on satisfactory degree proposed by Gang et al. [2] is applied, with one weight set for the first objective (i.e., the equity objective) and one for the second objective (i.e., the pollution control objective). Based on the data in Tables 1, 2 and 3, the water allocation planning for this case is shown in Table 4.
From the results in Table 4, we know that the total available water in the Qujiang basin (i.e., 4089 × 10^6 m^3) is less than the total minimum demand of all sub-areas but more than the total minimum demand of the agricultural and municipal sectors, from which it can be seen that the water shortage in the basin is currently serious. In this paper, the regional authority makes decisions on allocating water based mainly on equity, together with pollution control. Comparing the results with the practical allocation, it can be found that, although water is scarce, the minimum demands of the agricultural and municipal water sectors of all sub-areas can be satisfied using the proposed model. In practice, however, more water is allocated to the industrial sectors for greater economic benefits, and the minimum demands for agricultural and municipal use cannot be satisfied, which is not reasonable for the stability and development of the river basin. Moreover, the decision making of the proposed model can effectively control pollution under the premise of guaranteeing equity.
6 Conclusion
References
1. Ajami NK, Hornberger GM, Sunding DL (2008) Sustainable water resource man-
agement under hydrological uncertainty. Water Resourc Res 44(11):2276–2283
2. Gang J, Tu Y et al (2014) A multi-objective bi-level location planning problem for
stone industrial parks. Comput Oper Res 56:8–21
3. Guo P, Huang GH et al (2010) A two-stage programming approach for water
resources management under randomness and fuzziness. Environ Model Softw
25:1573–1581
4. Hu Z, Chen Y et al (2016) Optimal allocation of regional water resources: from a
perspective of equity–efficiency tradeoff. Resour Conserv Recycl 109:102–113
5. Lartigue C (2015) Efficient use of water resources for sustainability. Springer,
Switzerland
6. Li YP, Liu J, Huang GH (2014) A hybrid fuzzy-stochastic programming method
for water trading within an agricultural system. Agric Syst 123(2):71–83
7. Liu B (2004) Uncertainty theory: an introduction to its axiomatic foundations.
Springer, Berlin
8. Ni J, Xu J, Zhang M (2016) Constructed wetland planning-based bi-level opti-
mization to balance the watershed ecosystem and economic development: a case
study at the Chaohu lake watershed, China. Ecol Eng 97:106–121
9. Rogers JW, Louis GE (2007) A financial resource allocation model for regional
water systems. Int Trans Oper Res 14(1):25–37
10. Song WZ, Yuan Y et al (2016) Rule-based water resource allocation in the Central
Guizhou Province, China. Ecol Eng 87:194–202
11. Tu Y, Zhou X et al (2015) Administrative and market-based allocation mechanism
for regional water resources planning. Resour Conserv Recycl 95:156–173
12. Xu J, Tu Y, Zeng Z (2012) Bilevel optimization of regional water resources alloca-
tion problem under fuzzy random environment. J Water Resour Planning Manage
139(3):246–264
13. Yitzhaki S (1979) Relative deprivation and the Gini coefficient: reply. Q J Econ
93(2):321–324
SCADA and Artificial Neural Networks
for Maintenance Management
Ingenium Research Group, Universidad Castilla-La Mancha, 13071 Ciudad Real, Spain
alberto.pliego@uclm.es
1 Introduction
Wind energy is currently the most important renewable energy: the installed capacity is currently more than 420 GW, and it is estimated to exceed 1000 GW in 2030 [3]. The maintenance and operation costs of conventional wind turbines are 12% of the total costs, but wind energy is evolving towards offshore locations, which causes an important increase in these costs, to around 23% of the total costs for offshore wind farms [12].
SCADA systems are widely introduced in wind turbines (WTs) because their effectiveness for detection and diagnosis of failures has been proved in other industries [9, 11, 16]. They are presented as an inexpensive and optimal solution [20] for control feedback in health monitoring while reducing operation and management costs [19]. Nevertheless, they also present some minor disadvantages due to operational or reliability conditions [14, 21]. These systems consider a large number of measurements, such as temperatures or wind and energy conversion parameters [18]. The data have raised considerable interest in different areas, e.g. wind power forecasting [17], production assessment [22] and fault detection [4, 6, 8, 10].
SCADA and Artificial Neural Networks for Maintenance Management 913
In the case of WTs, the introduction of SCADA systems makes it possible to verify the efficiency of the system when its components deteriorate. This degradation can indicate problems of different natures, such as misalignments in the drive-train or friction caused by bearing or gear faults. The basic elements of performance monitoring consist of a first collection of raw values by the sensors; after the application of the appropriate filters, anomalies are detected, and finally a diagnosis is provided. Anomaly detection includes a series of techniques that range from simple threshold checks to statistical analyses [5, 7, 15].
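A simple statistical threshold check of the kind mentioned above can be sketched as flagging samples far from the signal mean (the z-score rule and the 3-sigma default are illustrative assumptions, not the paper's method):

```python
import statistics

def threshold_anomalies(values, k=3.0):
    """Flag sample indices more than k standard deviations from the mean
    (a simple statistical threshold check on a SCADA signal)."""
    mu = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > k * sd]
```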
It has been demonstrated that the WT suffers a gradual loss of production. Figure 1 shows that the power decreases year by year (this study was developed by the University of Sheffield in the OPTIMUS project [2]). SCADA data over 5 years were considered, and the wind speed-power curve was estimated.
[Fig. 1. Estimated wind speed-power curves of WT A103 for the years 2010-2014.]
This paper presents a signal processing approach that considers minor changes in the behavior of the WT. Some alarms will be affected by the decrease in power shown in Fig. 1. The following alarms will be activated with the power as a cause:
• Activation of the ice safe mode: one of the statements considered for this alarm is that the power is low for the measured wind.
• High aerodynamic deterioration: it is considered for low power with high wind.
If the power presented a minor reduction, some false alarms would be activated, because the SCADA system was prepared for a higher power. However, if the control laws are dynamic, this problem will be solved.
2 Proposed Method
The proposed method aims to improve the control of the system by following a supervised iterative process. The objective is to determine whether the SCADA control is coherent
914 A.P. Marugán and F.P.G. Márquez
with the historical data and, therefore, whether the system has changed. The main capability of the method is to find differences in the behavior of the system over time. Figure 2 shows a flowchart of the proposed method.
The iterative process starts with a database of the historical SCADA data. In this
paper, two different types of data are considered: the value of the measured parameters
and an alarm report. These data are employed to build an artificial neural network (NN) with supervised training: the values of the parameters are employed as the inputs, and the alarms represent the outputs of the NN.
NNs are used in problems that cannot be solved by an exact method or an analytical solution. An NN learns from the data itself and, by simulating biological neurons, provides a good solution in a reasonable time. An artificial NN consists of neurons, which are simple processing units, and weighted connections between those neurons [1].
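As a sketch of this structure, the forward pass of a single-hidden-layer network of the size used later in the paper (34 inputs, 13 hidden neurons, 5 alarm outputs) can be written in a few lines. The weights and the input sample are random placeholders, since the trained values are not given:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron outputs the sigmoid of a weighted sum of its inputs plus a bias.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

m, h, n = 34, 13, 5  # parameter inputs, hidden neurons, alarm outputs
w1 = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(h)]
b1 = [0.0] * h
w2 = [[random.uniform(-1, 1) for _ in range(h)] for _ in range(n)]
b2 = [0.0] * n

x = [random.random() for _ in range(m)]  # one hypothetical SCADA sample
hidden = layer(x, w1, b1)
alarms = layer(hidden, w2, b2)           # one activation score per alarm class
```

Supervised training would then fit `w1`, `w2`, `b1` and `b2` so that the alarm activations match the alarms recorded by the SCADA system.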
Once the NN that defines the logic of the system has been generated, it can be used to improve the control of the WT. The SCADA system provides online data on the condition of the WT. These data are processed following two different evaluations: the SCADA system processing and the NN processing. Both outcomes are then compared; if they are equal, the data are added to the database, and therefore the NN processing becomes an adaptive process.
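The comparison step described above can be sketched as a small loop; the record format and the stand-in alarm functions below are illustrative placeholders, not the authors' implementation:

```python
def update_database(database, samples, scada_alarm, nn_alarm):
    """Append to `database` only the samples for which the NN prediction
    matches the SCADA alarm, building up the "healthy" dataset that is
    later used to retrain the NN and adapt the control."""
    for params in samples:
        expected = scada_alarm(params)    # alarm class from the SCADA rules
        if nn_alarm(params) == expected:  # NN agrees with the SCADA system
            database.append((params, expected))
    return database

# Toy check with stand-in alarm functions (placeholders, not real SCADA logic):
healthy = update_database([], [[1], [2], [3]],
                          scada_alarm=lambda p: p[0] % 2,
                          nn_alarm=lambda p: 1)
# Only the samples where both evaluations agree are kept.
```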
3 Case Study
The European OPTIMUS project [2] has provided the SCADA data used in this work. Many parameters are measured but, after a filtering process, only 34 parameters (see Table 1) have been considered for the analysis presented here. The values of these parameters are taken as the inputs of the NN.
Basically, the NN receives a dataset and performs a training process to recognize several patterns, fitting the different weights so as to provide the output. If the output is known, the training is called supervised; otherwise, it is called unsupervised. The condition of the WT corresponds to the desired outputs of the NN. The data used to design the NN are divided into training, validation and testing sets.
SCADA and Artificial Neural Networks for Maintenance Management 915
Table 1. SCADA parameters considered as inputs for the NN.

No  Signal                                  No  Signal
 1  General accumulator blade 1 pressure    18  Environmental temperature
 2  General accumulator blade 2 pressure    19  Drive end side generator bearing temperature
 3  General accumulator blade 3 pressure    20  Non-drive end side generator bearing temperature
 4  Phi cosine                              21  Generator winding temperature
 5  Turbulence level                        22  Nacelle temperature
 6  Oscillation level                       23  Lower gearbox radiator
 7  Vibration level                         24  Upper gearbox radiator
 8  Pitch 1 angle                           25  Gearbox bearing temperature
 9  Pitch 2 angle                           26  Transformer 1 temperature
10  Pitch 3 angle                           27  Transformer 2 temperature
11  Active power                            28  Transformer 3 temperature
12  General accumulator pressure            29  Grid voltage
13  Brake pressure                          30  Total reactive power
14  Hydraulic group pressure                31  Generator speed
15  SP pitch angle                          32  Rotor speed
16  Hydraulic group oil temperature         33  Wind speed
17  Gearbox oil temperature                 34  Yaw
The number of hidden neurons h is estimated with the geometric-mean rule h = √(m · n), where m is the number of elements of each input and n is the number of possible outputs; with m = 34 and n = 5, this gives h = √170 ≈ 13.03, rounded to h = 13.
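A quick check of this hidden-layer sizing (assuming, as the reported value of 13.03 suggests, the geometric-mean rule h = √(m · n)):

```python
import math

m, n = 34, 5           # NN inputs and alarm outputs used in the paper
h = math.sqrt(m * n)   # geometric-mean sizing rule for the hidden layer
# h is approximately 13.04, reported as 13.03 and rounded to 13 neurons
```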
Figure 4 shows the statistics of the NN generated by using the SCADA data. This
is a global confusion matrix and corresponds to the sum of the training, validation and
testing sets. The rows of the matrix show the outcomes of the NN for the dataset used.
[Fig. 3. Structure of the NN: m input parameters, a hidden layer with h = 13 neurons, and n alarm outputs (Alarm 1, Alarm 2, ...).]
[Fig. 4. Global confusion matrix of the NN (rows: NN output class; columns: target class; in the original figure each cell also shows its percentage of the whole dataset).]

Output\Target     1      2      3      4      5    Accuracy
     1           28      0      0      0      1     96.6%
     2            0      9      1      0      1     81.8%
     3            0      0      9      0      1     90.0%
     4            0      0      0      9      0    100.0%
     5            0      2      1      0     36     92.3%
  Accuracy      100%  81.8%  81.8%   100%  92.3%     92.9%
The columns show the real output, i.e. the output established by the alarms. The diagonal of this matrix contains the desired solutions, while the sixth row and the sixth column summarize the percentages of correct and incorrect classifications.
The outcomes of the NN agree in more than 90% of cases with the alarms that the SCADA system generates. The proposed method establishes that the SCADA control needs to be updated when the alarms cannot be predicted by the NN with enough accuracy. If the SCADA control and the NN give the same response, the processed data are added to the database. Therefore, a “healthy” dataset is created and the control is adapted to the real conditions over time.
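The percentages quoted above can be recovered from the raw counts of the confusion matrix in Fig. 4; this sketch recomputes the per-class and overall agreement:

```python
# Confusion matrix counts from Fig. 4 (rows: NN output, columns: target).
cm = [
    [28, 0, 0, 0, 1],
    [0,  9, 1, 0, 1],
    [0,  0, 9, 0, 1],
    [0,  0, 0, 9, 0],
    [0,  2, 1, 0, 36],
]

total = sum(sum(row) for row in cm)                        # all samples
correct = sum(cm[i][i] for i in range(len(cm)))            # diagonal hits
overall = 100 * correct / total                            # agreement with SCADA
row_acc = [100 * r[i] / sum(r) for i, r in enumerate(cm)]  # per-output accuracy
# overall rounds to 92.9, matching the summary cell of the matrix
```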
4 Alarm Prediction
The method proposed here also allows a possible alarm to be predicted. In the previous Sect. 3, an NN was generated by inputting the data that the SCADA system employs for generating the alarms. In this section, the input is the data obtained before the alarm is activated, i.e. the objective is to predict an alarm before the SCADA system generates it. The NN is built using the same techniques as in the previous section; in this case, the NN is designed to distinguish whether an alarm will be activated or not. Figure 5 shows the scheme used for the prediction: the inputs employed for training the NN are the datasets collected before the alarms are activated.
Following the structure of Fig. 5, an NN has been trained, obtaining the results shown in Fig. 6. This confusion matrix shows that the NN can predict 20.1% of the alarms. Although this is a low percentage, when the NN does suggest an alarm it is correct 62.1% of the time.
This method can be used to determine the predictability of some alarms, and it can be a useful tool to identify possible alarms before the WT is damaged.
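The two figures quoted above correspond to recall (the fraction of actual alarms the NN predicts in advance) and precision (the fraction of NN alarm suggestions that are correct). The raw counts behind Fig. 6 are not given in the text, so the counts below are hypothetical, for illustration only:

```python
def recall(tp, fn):
    # Fraction of real alarms the NN managed to predict in advance.
    return tp / (tp + fn)

def precision(tp, fp):
    # Fraction of NN alarm suggestions that turned out to be real alarms.
    return tp / (tp + fp)

# Hypothetical counts (invented, not from the paper's Fig. 6):
tp, fn, fp = 20, 80, 12
r = round(100 * recall(tp, fn), 1)     # 20.0
p = round(100 * precision(tp, fp), 1)  # 62.5
```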
5 Conclusions
In this work, a new methodology is presented for extracting information from SCADA datasets. The methodology is based on the generation of neural networks from the quantitative (parameter values) and qualitative (alarms) data of the SCADA system, and it has two purposes. First, the adaptation of the SCADA control rules to the variable condition of the wind turbine: a neural network has been generated to determine which data should be added to the database, and the resulting “healthy” database allows the SCADA control rules to be adapted to the real condition of the wind turbine over time. Secondly, an additional neural network has been created to predict the activation of alarms, which can be used to identify an abnormal state of the wind turbine earlier than the SCADA system does.
Acknowledgement. The work reported herein has been financially supported by the Spanish Ministerio de Economía y Competitividad, under Research Grant Ref.: DPI2015-67264-P.
References
1. Arbib MA (2003) The handbook of brain theory and neural networks. MIT Press
2. Camacho E, Requena V et al (2014) Demonstration of methods and tools for the optimisation
of operational reliability of large-scale industrial wind turbines. In: International conference
on renewable energies offshore renew
3. Council GWE (2015) Global wind report annual market update 2014. Technical report, http://www.gwec.net/wp-content/uploads/2015/03/GWEC_Global_Wind_2014_Report_LR.pdf
4. Gómez Muñoz CQ, Márquez FPG (2016) A new fault location approach for acoustic emis-
sion techniques in wind turbines. Energies 9(1):40
5. González-Carrato RRDLH, Dimlaye V, Ruiz-Hernández D (2014) Pattern recognition by
wavelet transforms using macro fibre composites transducers. Mech Syst Signal Process
48(1):339–350
6. González-Carrato RRDLH, Márquez FPG, Dimlaye V (2015) Maintenance management of wind turbine structures via MFCs and wavelet transforms. Renew Sustain Energy Rev 48:472–482
7. Kim K (2011) Use of SCADA data for failure detection in wind turbines. In: ASME 2011 5th international conference on energy sustainability, pp 2071–2079
8. Kusiak A, Li W (2011) The prediction and diagnosis of wind turbine faults. Renew Energy
36(1):16–23
9. Márquez FPG, García-Pardo IP (2010) Principal component analysis applied to filtered signals for maintenance management. Qual Reliab Eng Int 26(6):523–527
10. Márquez FPG, Muñoz JMC (2012) A pattern recognition and data analysis method for main-
tenance management. Int J Syst Sci 43(6):1014–1028
11. Márquez FPG, Pérez JMP et al (2016) Identification of critical components of wind turbines using FTA over the time. Renew Energy 87
12. Marugán AP, Márquez FPG, Pérez JMP (2016) Optimal maintenance management of off-
shore wind farms. Energies 9(1):46
13. Masters T (1993) Practical neural network recipes in C++. Academic Press
14. Muñoz CQG, Márquez FPG, Tomás JMS (2016) Ice detection using thermal infrared radiom-
etry on wind turbine blades. Measurement 93:157–163
15. Munoz CQG, Arenas JRT, Marquez FPG (2014) A novel approach to fault detection and
diagnosis on wind turbines. Global Int J 16(6):1029–1037
16. Mylaraswamy D, Olson L, Nwadiogbu E (2007) Engine performance trending. In: AIAC12-
HUMS conference
17. Song Z, Jiang Y, Zhang Z (2014) Short-term wind speed forecasting with Markov-switching model. Appl Energy 130(5):103–112
18. Sun P, Li J et al (2016) A generalized model for wind turbine anomaly identification based on SCADA data. Appl Energy 168:550–567
19. Wymore ML, Dam JEV et al (2015) A survey of health monitoring systems for wind turbines.
Renew Sustain Energy Rev 52:976–990
20. Yang W, Tavner PJ et al (2010) Cost-effective condition monitoring for wind turbines. IEEE
Trans Industr Electron 57(1):263–271
21. Yang W, Court R, Jiang J (2013) Wind turbine condition monitoring by the approach of SCADA data analysis. Renew Energy 53(9):365–376
22. Zang X, Sun H, Trivedi KS (1998) A BDD-based algorithm for reliability graph analysis. Department of Electrical Engineering
Advances in Engineering Management of the Eleventh ICMSEM

Advances in Green Supply Chain, Resource Optimization Management, Risk Control and Integrated Project Management Based on the Eleventh ICMSEM Proceedings

Jiuping Xu
1 Introduction
2 Literature Review
To better analyze the pertinent research fields and possible research directions, we
reviewed the most popular research areas in the most recent EM research. What
emerged was that the green supply chain, resource optimization management, risk con-
trol, and integrated project management have been the most widely studied in recent
years. In this section, we review the related literature to analyze the developmental
tracks in these four areas.
In resource optimization management (ROM), low resource efficiency and high energy consumption can result in a serious waste of resources, putting significant pressure on resources and environmental governance.
Vadenbo presented a general multi-objective mixed-integer linear programming (MILP)
optimization model aimed at providing decision support for waste and resource man-
agement in industrial networks [15]. Further, energy resource shortage problems associ-
ated with rapid social and economic development have been of critical concern to both
national and local governments worldwide for many decades [9]. Water, as one of the most important resources on earth, has therefore received a great deal of research attention, with water optimization management being a major focus in the past few years
[1, 12, 18]. ROM involves all aspects of social life, so research strives to further develop
this area using modern computer technology.
Risk control (RC) is the coordinated and economic application of resources to minimize, monitor and control the probability or impact of unfortunate events, or to maximize the realization of opportunities, after the identification, assessment and prioritization of risks. Muriana and Vizzini presented a deterministic technique for assessing and preventing project risks by determining the risk of the Work Progress Status [10]. Valtonen
et al. studied public risk management related to the use of public land development
by analyzing case studies in Finland and the Netherlands, both of which have strong
public land development traditions [16]. By identifying risks, specific state support and
special project management measures have been developed to limit the negative influ-
ence of the possible project risks [17]. Therefore, developing a general framework to
analyze corporate risk management policies and ensure risk control is vital [6]. In brief,
RC includes transferring the risk to another party, avoiding the risk, reducing the negative effects of the risk, and accepting some or all of the consequences of a particular risk.
Integrated project management (IPM) is a philosophy that recognizes the different elements involved in projects, applying strong team leadership and encouraging a collaborative ethos with clear purposes and strategies to ensure success. Planning, organizing,
securing, and managing resources to successfully complete specific project goals and
objectives are the main IPM research areas, the applications for which have been used
in manufacturing, construction, design engineering, industrial engineering, technology,
production and many other areas. Atkinson provided some thoughts about the success
criteria for project management in which cost, time, and quality have become inextri-
cably linked to project management success over the last 60 years [2]. An integrated
methodology was developed for planning construction projects under uncertainty that
relied on a computer supported risk management system to identify the risk factors in
the integrated project [11]. IPM has also been applied to integrated waste management
systems to identify the optimal breakdown between materials and energy recovery from
municipal solid waste [3]. In all, integrated project management is a complex subject
and needs to be examined from several perspectives.
926 J. Xu
With EM as the key component, the open source software tool NodeXL was used to
facilitate the learning of the concepts and methods. To begin with, some key words were
processed to unify expressions and keywords with the same meaning. This preliminary
process reduced the number of keywords, making it possible to develop an efficient
network. At the same time, the vertices' shapes were set according to their betweenness and closeness centrality: when a vertex's betweenness and closeness centrality exceeded five, the vertex was drawn as a red diamond in the resulting figure. The aim of this analysis was to determine the key EM concepts that connect with other research topics through the primary nodes. After calculating, clustering and filtering, the results in Fig. 2 illustrate the important research relationships.
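The centrality values themselves were computed inside NodeXL; as an illustration of what closeness centrality measures, here is a breadth-first-search version on a toy keyword co-occurrence graph (the graph below is invented, not the proceedings data):

```python
from collections import deque

def closeness(graph, node):
    """Closeness centrality: (n - 1) divided by the sum of shortest-path
    distances from `node`, computed with a breadth-first search."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(graph) - 1) / total if total else 0.0

# Toy keyword co-occurrence graph (invented for illustration):
g = {
    "green supply chain": ["sustainability", "carbon emissions", "risk management"],
    "sustainability": ["green supply chain", "carbon emissions"],
    "carbon emissions": ["green supply chain", "sustainability"],
    "risk management": ["green supply chain"],
}
scores = {k: closeness(g, k) for k in g}
# "green supply chain" reaches every other keyword in one hop, so its
# closeness is maximal (1.0); peripheral keywords score lower.
```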
The above analysis highlighted the key areas covered in proceedings Volume II: green supply chain management, resource optimization management, risk control and integrated project management problems related to green and pro-environment
concepts. Tao focuses on a simplified manufacturing and distribution supply chain plan-
ning network focused on carbon emission constraints, for which a bilevel programming
model was mathematically formulated. Through the study of related emissions poli-
cies, Lu finds that under different carbon policies, revenue sharing contracts are able to
coordinate the supply chain with the manufacturer’s revenue sharing proportion under
Advancement of GSC, ROM, RC and IPM Based on the 11th ICMSEM Proceedings 927
a cap-and-trade being always less than a situation in which there is no carbon emis-
sions constraint. Machado and Duarte present an overview of Industry 4.0, a new devel-
oping concept in the field of industrial engineering, and lean and green supply chain
management, for which a conceptual model is developed. Wang et al. concludes that
when big data suppliers are part of the supply chain competition and one party gains a
dominant supply chain position, collaborative decision-making is the key to enhancing
overall supply chain profitability. These studies highlight the many new characteristics
involved in green supply chains, such as big data and carbon emissions reductions.
Resource optimization management primarily covers resource processing, exploita-
tion, production, and consumption. Arifjanov and Zakhidov consider an issue for the
development of methods for the estimation of electrical loads for various electricity cus-
tomers in residential and public buildings, and address issues related to the integration
of distributed generators to the UES to ensure the optimal management of technologi-
cal modes. Gómez Munoz et al. develops an optimal maintenance strategy that includes
the NDT system to reduce costs and increase the competitiveness of renewable energy
sources. For water resource optimization, Tu et al. proposes an equity and sustainability
based model that considers equity, economic benefit, and ecological environment and
Liu et al. uses a DEA method to establish an evaluation model to empirically study the
urban agglomeration of Chengdu city from 2008–2014. Because of the wide use and
serious shortage of some resources around the world, scientific research into resource
optimization management has increased to assist governments and enterprises around
the world.
Risk control is important in all organizations and enterprises. In this section, Zhou
et al. presents a model to mitigate the risk of budget overruns during raw materials pro-
curement, and Chen et al. provides a risk allocation model using a fuzzy comprehen-
sive evaluation method aimed at optimizing risk allocations, reducing transaction costs,
and enhancing comprehensive benefits from the perspective of state-owned enterprises.
Besides these studies, to better control risks occurring in financial markets, Shibli et
al. investigates the link between the foreign exchange markets and the stock market in
Bangladesh by examining the volatility spillover between the markets, volatility persis-
tence, and the asymmetric effect of information on the volatility of these two financial
markets. In addition, Zhang et al. studies a projection pursuit risk assessment model
using a combined method to model PPP risk in the context of Big Data. Effective risk control can reduce potential risk factors and help managers gain increased benefits.
The last section in Volume II focuses on the developments in integrated project
management. Elchan defines the scientific problems associated with management at the
beginning stages of an innovation project in the departments of a technology park at a
higher education school in Azerbaijan. Based on principal-agent theory and game the-
ory, a theoretical framework for a knowledge network conflict coordination mechanism
is constructed by Wei, which divides the conflict coordination mechanism into three
levels: a contract mechanism, a self-implementation mechanism, and a third-party conflict
coordination mechanism. Sheng et al. proposes suggestions and countermeasures for
the equalization of public services in urban and rural areas from three integrated project
aspects and Tu et al. develops two staged fuzzy DEA models with undesirable outputs to
evaluate the banking system. The overall process of leading, organizing, staffing, plan-
ning, and controlling activities requires managers to use systems viewpoints, methods,
and theories to optimize the work involved when seeking effective integrated project
management under limited resource constraints.
The statistics from the 2963 recorded articles were saved and imported into CiteSpace, which transformed the data into a format the software could identify to allow for parameter selection. In this operation, the time span was from 2000 to 2017 with the time slice set at one year, and the theme selection was based on the titles, abstract subject words, identifiers, and keywords to allow for node selection. Then, the 30 highest keyword records in each zone were clustered and analyzed, from which a minimum spanning tree map was drawn. As shown in Fig. 3, by setting the
“Threshold = 30”, a total of 348 nodes were obtained, with the overall network density
being 0.0098. Based on frequency, the system identified green supply chain management, environmental management, project management, risk management, and models and systems as the highest ranked areas. This not only displays the most popular research fields in engineering management but also implies future EM development trends.
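The reported density follows the usual definition for an undirected graph, density = 2E / (n(n − 1)). The edge count is not stated in the text, but with n = 348 nodes an edge count of about 592 (inferred here, for illustration) reproduces the 0.0098 figure:

```python
def density(nodes, edges):
    # Undirected graph density: fraction of all possible edges that exist.
    return 2 * edges / (nodes * (nodes - 1))

d = round(density(348, 592), 4)  # 592 edges is an inferred, illustrative count
# d == 0.0098, matching the reported overall network density
```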
Ranking reference frequency from high to low, the top thirty of the 348 keywords were analyzed. As shown in Table 1, keywords such as green supply chain management, risk management, environmental performance and sustainability had a relatively high centrality.
Clustering the keywords with label-title clustering, 40 categories were identified; however, only the top 10 categories are shown in Fig. 4. The other topics were relatively dispersed and so are not displayed, which indicates that scientific engineering management research is relatively loosely connected. In particular, the new keywords for 2015 and 2016 have not yet attracted significant research attention.
The high-frequency research areas and timezone view diagram were identified, as shown in Fig. 5.
Most of these high frequency words appeared in the early days, indicating that engineer-
ing management research has been focused around these topics for quite a long time.
The analysis of the top 30 most frequent words (Table 2) found that engineering, business and economics, computer science, environmental science and operations research are currently the most popular research areas.
From Tables 1 and 2, the papers presented in this year’s ICMSEM proceedings
volume II closely reflect the most pertinent engineering management research areas: green supply chain (systems management, industry, green and environmental protection), resource optimization management (environmental sciences, water resources,
engineering education), integrated project management (project management, indus-
try, integrated project) and risk control (risk management, risk, design, information).
In addition, the computer science and environmental science research highlights how
high-tech can spur social progress and environmental protection.
We believe that EM should focus on the study of specific EM problems as well
as popularizing MS knowledge. Excellent academic research can effect developments
across the world, but a further focus on regional areas is also needed. To ensure a bright
EM future, there is a need for inspiration, practical theories, effective methods, and
extensive applications in future developments. To achieve this, EM knowledge needs to
be popularized, which is the duty of all MSEM academics. In the future, more focus on low carbon emissions, environmental protection, big data, energy utilization, and other popular EM issues is needed.
5 Conclusion
Engineering management is a complex area that involves all engineering aspects. The
open source software tool NodeXL identified the four areas covered in the eleventh
ICMSEM proceedings Volume II, from which we summarized this year's key research. To analyze the EM and ICMSEM development trends, we identified the main
search terms and keywords using CiteSpace. Our key objective is to continue to improve
the quality of papers in the proceedings and to ensure the ICMSEM organization is
dynamic and appealing to EM researchers worldwide. EM research is continuously
developing and new trends are appearing every year; however, more research is neces-
sary so as to popularize EM developments and provide a more active research forum.
Acknowledgements. The author gratefully acknowledges Jingqi Dai and Lin Zhong’s efforts on
the paper collection and classification, Zongmin Li and Lurong Fan’s efforts on data collation
and analysis, and Ning Ma and Yan Wang’s efforts on the chart drawing.
References
1. Afshar A, Massoumi F et al (2015) State of the art review of ant colony optimization appli-
cations in water resource management. Water Resour Manage 29(11):3891–3904
2. Atkinson R (1999) Project management: cost, time and quality, two best guesses and a phe-
nomenon, its time to accept other success criteria. Int J Project Manage 17(6):337–342
3. Consonni S, Giugliano M et al (2011) Material and energy recovery in integrated waste
management systems: project overview and main results. Waste Manage 31(9–10):2057–
2065
4. Fahimnia B, Sarkis J, Davarzani H (2015) Green supply chain management: a review and
bibliometric analysis. Int J Prod Econ 162:101–114
5. Farr JV, Buede DM (2003) Systems engineering and engineering management: keys to the
efficient development of products and services. Eng Manage J 15(3):3–9
6. Froot KA, Scharfstein DS, Stein JC (1993) Risk management: coordinating corporate invest-
ment and financing policies. J Financ 48(5):1629–1658
7. Govindan K, Sarkis J et al (2014) Eco-efficiency based green supply chain management:
current status and opportunities. Eur J Oper Res 233(2):293–298
8. Lambert DM, Cooper MC (2000) Issues in supply chain management. Ind Mark Manage
29(1):65–83
9. Ming Z, Song X et al (2013) New energy bases and sustainable development in China: a
review. Renew Sustain Energ Rev 20(4):169–185
10. Muriana C, Vizzini G (2017) Project risk management: a deterministic quantitative technique
for assessment and mitigation. Int J Project Manage 35:320–340
11. Schatteman D, Herroelen W et al (2008) Methodology for integrated risk management and
proactive scheduling of construction projects. J Constr Eng Manage 134(11):885–893
12. Singh A (2014) Simulation and optimization modeling for the management of groundwater
resources. I: distinct applications. J Irrig Drainage Eng 140(4):04013021
13. Singh A, Trivedi A (2016) Sustainable green supply chain management: trends and current
practices. Competitiveness Rev 26(3):265–288
14. Srivastava SK (2007) Green supply-chain management: a state-of-the-art literature review.
Int J Manage Rev 9(1):53–80
15. Vadenbo C, Guillén-Gosálbez G et al (2014) Multi-objective optimization of waste and resource management in industrial networks - part II: model application to the treatment of sewage sludge. Resour Conserv Recycl 89(4):52–63
16. Valtonen E, Falkenbach H, Krabben EVD (2017) Risk management in public land development projects: comparative case study in Finland and the Netherlands. Land Use Policy 62:246–257
17. Wang HB (2007) Application of risk management to wind power project. Sci Technol Man-
age 43:48–51
18. Wang R, Li Y, Tan Q (2015) A review of inexact optimization modeling and its application
to integrated water resources management. Front Earth Sci 9(1):51–64
19. Zhu Q, Sarkis J, Lai KH (2008) Confirmation of a measurement model for green supply
chain management practices implementation. Int J Prod Econ 111(2):261–273
Green Supply Chain
Modelling a Supply Chain Network of Processed
Seafood to Meet Diverse Demands
by Multi-branch Production System
1 Introduction
Fish and seafood have recently become important materials for meeting diversified customer needs resulting from the food crisis caused by the population explosion, and from the demand for seafood, which is essentially healthy (i.e. low in calories and rich in nutrients) and which provides convenient meals for two-income families, child-rearing families, and the aging society in advanced countries. To match
the resultant explosive demand, the globalization of the seafood supply chain
has increasingly progressed with technological innovation. For example, the cold
chain has expanded to maintain the freshness of materials through improvement
of freezing technologies for transport worldwide [11]. A traceability system has
been developed to obtain the trust of customers by informing them of the safety
of materials by a physical sensor system and information technologies [6].
In the Japanese context, to aim for the creation of a tourism nation and for
the next summer Olympic Games in 2020, an enrichment of the food supply
chain is required. The Japanese dietary culture called ‘Washoku’, registered as
a 2013 Intangible Cultural Heritage by UNESCO, must be a powerful weapon
for executing the strategy. Processed seafood, one core ingredient of ‘Washoku’,
is manufactured in small and medium enterprises (SMEs) mostly located in
the area that experienced the Great East Japan earthquake (Higashi Nihon
Daishinsai) on 11 March 2011. Furthermore, the occurrence of climate change
including global warming and abnormal weather and over-fishing by neighbour-
ing countries causes a decrease of marine resources in the Pacific Ocean. These
SMEs operate on a small scale and sell processed seafood to identified customers
according to their requirements. For the future, they must overcome difficul-
ties and reinvent themselves to obtain new markets and customers worldwide
through business innovation.
Many studies related to the seafood supply chain have presented new per-
spectives, including open innovation of seafood value chain [12], an interna-
tional distribution system [1,2], firm structure [4], quality assurance with labelled
seafood products [7], a sustainable system [3], marketing and economic innova-
tion [5,9,14], seafood supply chain management [10], and an inventory system
[8]. However, there are not enough studies related to innovation of the business model that give an overall perspective for changing the present business structure.
Based on the recognition of current practical and academic conditions, the
present study aims to explore the business model of the processed seafood indus-
try with the concept level as the starting point of the study. Specifically, the
network between a food processing company and its customers is focused on, with the aims of understanding the existing business style of the food processing company and grasping the possibility of cultivating routes to new customers.
2 Methodology
This study tries to draw the network structure between one food processing company and its candidate customers, because a clarification of the relations among the participants is needed first in order to identify investment points for business expansion. The description is based on three years of observation of, and interviews with, several seafood processing companies involved in the rebuilding project after the Great East Japan earthquake. The authors of the present study have gradually come to understand these companies' business structures through investigation and trial and error; this study presents the results. The method used to build the described model is business modelling, which reveals the following pursuant to the aim of the study [15]:
• The business model is emerging as a new unit of analysis.
• Business models seek to explain how value is created, not just how it is captured.
The journey to develop a processed seafood business model in this study starts
with the understanding of daily eating habits as follows. The main processes to
eat seafood have not changed from ages past, as shown in Fig. 2: fishing, cleaning,
processing, seasoning, and eating. The middle three processes exist mainly to add
value to fresh fish. These can be omitted and exchanged in the cooking process.
Many techniques and skills are included to make the materials taste even better.
The detailed explanation of each type is as follows:
(1) Fishing process
This process is considered the preparation part of processed seafood production: obtaining the materials. The quality level of the materials depends on size, weight, appearance, freshness, variety, and so forth. Over-fishing and climate change seriously affect the haul of fish. Fishing quotas agreed among multiple nations and enclosed aquaculture with the newest biotechnologies are countermeasures for sustaining marine resources.
(2) Cleaning process
The complicated body of fish and seafood causes a decline in the productivity
of seafood processing. It is basic and important to protect customers’ safety from
dangerous parts which they cannot eat (i.e. hard fish, thick bone, and the internal
organs of a fish including poisonous substances). The variety of processes includes
carving, boning, scaling, and filleting. Accurate filleting of large fish like salmon requires immense skill to balance yield rate and waste disposal.
(3) Processing process
This process is the main portion of seafood processing. Representative meth-
ods of the process are grilling, reducing, frying, steaming, and drying. The selec-
tion of each method relates to a core business of a seafood processing company
with a large investment in production facilities. The design of the process pro-
file of production by the selected method is a company secret relating to the
organoleptic feel of processed seafood because many companies adopt the com-
mon categories of methods noted above.
(4) Seasoning process
This process is the portion that adds tastiness and flavour-value to a
processed item. In Japan, there are five traditional popular seasonings: sugar (Sato), salt (Shio), vinegar (Su), soy sauce (Shoyu), and soybean paste (Miso). They characterize the taste of Japanese seafood called 'Washoku', which was registered as a UNESCO Intangible Cultural Heritage in 2013. Chemical seasoning widens the possibility of new ways to enjoy processed seafood.
(5) Eating process
This is the final process, in which consumers synthetically evaluate the quality of the four preceding processes. At this stage, not only the taste of the processed seafood but also the dishing, including the selected plate and side dishes, influences the total impression. Family styles in advanced countries, such as two-income families, child-rearing families, and the aging society, also require the convenience of microwave ovens and reheat pouches.
Modelling a Supply Chain Network of Processed Seafood 941
In the present age, the division of the five processes between fisheries and consumers depends on consumers' requirements. This suggests the following five models, proposed in Fig. 3.
Customer candidates for (II) the processed seafood industry are rich in vari-
ety. The main candidates are four types of seafood service companies: other
food processing companies, food service companies, fast-moving consumer goods
(FMCG) companies, and consumers. The explanations of each type are as
follows:
(1) Other food processing companies
These customers use the product supplied from seafood processing companies
as one material of their final products. For example, raw materials are supplied
from a company which manages a large-capacity frozen warehouse to store them
cheaply and in huge quantities. Cleaned materials (i.e. seafood paste and fish cut
into small cubes) also have high value for customers who do not have technical
know-how.
Fig. 4. Relationship between a seafood processing company and seafood service companies. [The figure links the processes of defrosting, cleaning, processing, and seasoning to four customer types: other food processing companies, food service companies, FMCG companies, and consumers.]
4 Concluding Remarks
This study aims to reveal the supply chain model of processed seafood to multiple customers via a multi-branch production system. The findings consist of four paths: (1) DTP, (2) CTS, (3) PTS, and (4) STR. These clarify the supply chain network in the industry and highlight business opportunities. A future study could provide a mathematical formulation of the network model to quantitatively simulate and evaluate the impact of each path.
References
1. Abrahamsen MH, Håkansson H, Naudé P (2007) Perceptions on change in business networks: Norwegian salmon exporters and Japanese importers. In: 23rd Annual IMP Conference, Manchester
2. Abrahamsen MH, Naudé P, Henneberg SC (2011) Network change as a battle of ideas? Analysing the interplay between idea structures and activated structures. IMP J 5(2):122–139
3. Alden R (2011) Building a sustainable seafood system for Maine. Maine Policy Rev 20(1):87–95
4. Brydon K, Dana LP (2011) Globalisation and firm structure: comparing a family business and a corporate block holder in the New Zealand seafood industry. Int J Globalisation Small Bus 4(2):206–220
5. Chang KS, Waters EC (2006) The role of the Alaska seafood industry: a social accounting matrix (SAM) model approach to economic base analysis. Ann Reg Sci 40(2):335–350
946 K. Murata et al.
Abstract. This paper proposes a new hybrid global supply chain strategy for a global blood sugar strip manufacturing company. In this approach, products are analyzed based on product families, and product families can share cells across the globe if needed. The results are compared with the host market strategy, where three manufacturing plants located in three regions meet world demand. The manufacturing plants are designed using a layered cellular design approach under stochastic demand. This approach allows three cell types: (1) dedicated cells, each used by only one family; (2) shared cells, each shared by two product families; and (3) remainder cells, each usable by three or more families. The main focus of this paper is to compare both alternatives with respect to the number of manufacturing cells and the number of machines needed. In this case, the number of machines remained the same for both approaches; however, the proposed new approach led to more dedicated cells.
1 Introduction
This research aims to design a manufacturing system for a global blood sugar strip manufacturer. Three manufacturing facilities located on different continents are assumed to meet the demand of their regions. Using layered cellular manufacturing concepts, the number and type of manufacturing cells are determined for each manufacturing facility considering stochastic demand data and production rates. This supply chain strategy is then compared with the one proposed in this paper, where some cells are shared globally. A probabilistic method is used to design the cells.
The type of a manufacturing system mostly depends on the layout of the
manufacturing system. As mentioned in Süer, Huang and Maddisetty [19], man-
ufacturing systems can be classified into four categories based on the layout:
process layout, fixed layout, cellular layout and product layout. Fixed Layout
is used when products are heavy and large and therefore are mostly stationary.
Therefore, products typically stay in the same position and workers, machines
and equipment are brought to the products to perform the tasks [19]. Product
Layout is appropriate when production volume is high and product variety is
low. Product Layout is usually very efficient since it can be configured to perfection for a single product, but this makes it an inflexible system. Process Layout is the best system for low production volumes with high product variety [19]. These systems are very flexible since they can handle many different products, but they are not very efficient since they require extensive material handling and thus lead to long leadtimes. On the other hand, Cellular Layout is more flexible than Product Layout and more efficient than Process Layout. It suits high product variety with low to moderate demand well [19]. Figure 1 shows the classification of the four layout types. A manufacturing system can, of course, consist of several layout types, such as both a Cellular Layout and a Process Layout.
Cellular Manufacturing is based on the grouping of similar products with
respect to similar processes into one or more similar cells as required. In the real
manufacturing world, many uncertainties exist in the system such as demand
uncertainty, supply uncertainty and processing uncertainty. These uncertain-
ties have been discussed extensively in many research works. Süer, Huang and
Maddisetty [19] proposed to consider the uncertainties of product demand and
processing times. By probabilistic market demand calculation, the part-family
assignment is achieved [19]. Then, low utilized cells are grouped to increase the
utilization of the system i.e., to reduce the number of machines and thus invest-
ment, even though some of these cells may no longer be pure cells. These different
cell types, namely, dedicated (DC), shared (SC) and remainder (RC) cells are
also shown in Fig. 1.
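The three cell types can be read directly off a family-to-cell assignment matrix. A minimal sketch (function and variable names are ours, not the paper's; the example matrix follows the layered design shown later in the problem definition):

```python
# Sketch: classify cells as dedicated (DC), shared (SC), or remainder (RC)
# from a family-to-cell incidence matrix, following the definitions above:
# DC serves one family, SC two families, RC three or more.

def classify_cells(incidence):
    """incidence[f][c] is 1 if family f uses cell c, else 0."""
    n_cells = len(incidence[0])
    labels = []
    for c in range(n_cells):
        n_families = sum(row[c] for row in incidence)
        if n_families == 1:
            labels.append("DC")
        elif n_families == 2:
            labels.append("SC")
        else:
            labels.append("RC")
    return labels

# Family-cell assignment from the layered design example:
incidence = [
    [1, 0, 1, 1],  # F1
    [0, 1, 1, 1],  # F2
    [0, 0, 0, 1],  # F3
]
print(classify_cells(incidence))  # ['DC', 'DC', 'SC', 'RC']
```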
A supply chain is the network connecting suppliers, manufacturers, distribution centers and customers [16]. Dicken [7] discussed many supply chain models; among them, the host market production model is illustrated in Fig. 2. In this strategy, each geographic region covers its own demand. In this study, this strategy will be compared with the newly proposed hybrid global supply chain strategy that considers product family definitions and a layered cellular approach.
2 Literature Review
Technology in cellular manufacturing. Rajamani, Singh and Aneja [17] also men-
tioned that GT played an important role in cellular manufacturing. Singh and
Mohanty [18] observed that not many works in the literature used fuzzy concepts to deal with a multi-objective framework in process planning. Chen and Cheng [5] proposed a supplementary procedure to address a limitation of Adaptive Resonance Theory (ART); they mentioned that the performance of ART depended on the initial matrix of the bottleneck process, and the proposed supplementary procedure could improve the reliability of results. Moreover, a new mathematical model based on cell utilization was developed by Mahdavi, Javadi, Fallah-Alipour and Slomp [15]. A comparison of the part-machine grouping from this method with the mathematical model from Chen and Cheng [5] showed that the model from Mahdavi et al. [15] tended to produce better results. A mixed integer non-linear model was analyzed by Bulgak and Bektas [3] for CMS; the proposed model was an integrated approach combining production planning and system reconfiguration. This new CMS model includes operation sequences, duplicate machines, machine capacities and lot splitting.
The literature discussed so far addressed the deterministic CMS problem. However, cellular manufacturing systems are difficult to design in the real world due to the uncertainty of the manufacturing process. In order to deal with the uncertainty of product demand along with processing time, further research was conducted by Süer, Huang and Maddisetty [19]. A heuristic methodology was developed to distinguish the cell types in the CMS: Dedicated Cell (DC), Shared Cell (SC) and Remainder Cell (RC). The product family configuration and cell allocation are accomplished by mathematical analysis. The designed manufacturing system turned out to handle the uncertainty of product demand and processing time successfully, as verified through simulation. The methodology of Süer, Huang and Maddisetty [19] is implemented in the current research for the purpose of designing the manufacturing system given the market demand, part-family formations, and the operations required to process the products. Erenay, Süer and Jing [8] also developed a mathematical model for designing an alternative layered cellular system and compared both approaches.
3 Problem Definition
In this research, a blood glucose test strip manufacturer is studied and alternative global supply chain strategies are compared: (1) independent facilities per region (the host market strategy) and (2) a newly proposed hybrid approach that allows family-based cell sharing across multiple facilities. Customers from three continents are considered in this study: Europe, Asia and North America. Three manufacturing facilities are
assumed to produce these products and they are located in Ireland, China and
Puerto Rico. The production data and manufacturing processes are obtained
from Lobo [12]. All of the data are converted into common units by considering
market share, revenue, and product price from Ates [1].
Hybrid Global Supply Chain Design 951
Product-machine incidence matrix:

Product  M1  M2  M3  M4  M5  M6
P1        1   1   1   -   -   -
P2        1   -   1   -   -   -
P3        1   1   1   -   -   -
P4        -   -   -   1   1   -
P5        -   -   -   1   1   -
P6        1   -   1   -   -   1
P7        1   -   1   -   -   1

Family-cell assignment with independent cells:

Family  C1  C2  C3  C4  C5  C6  C7
F1       1   1   -   -   -   -   -
F2       -   -   1   1   -   -   -
F3       -   -   -   -   1   1   1

Family-cell assignment with the layered cellular design:

Family  C1    C2    C3    C4
F1       1     -     1     1
F2       -     1     1     1
F3       -     -     -     1
Type    (DC)  (DC)  (SC)  (RC)
In this section, two alternative supply chain design strategies are discussed. Strat-
egy 1 is the host market strategy where each facility in a continent covers the
demand of the same continent. Strategy 2 discusses the proposed hybrid global
supply chain strategy where cell sharing opportunities are explored across vari-
ous facilities with product families in mind.
In this strategy, each region produces many types of products to meet its own demand. Products are produced independently in different facilities, which leads to no transportation and no information sharing between regions. Figure 3 shows that the blood sugar strips are produced in three manufacturing facilities: China, Ireland and Puerto Rico [11].
In this strategy, each region produces some of its own products. For example, products in Family 1 and Family 2 are produced in the facilities on their own continents. However, products in Family 3 are produced in a single location and distributed to the remaining regions. The manufacturing cell(s) producing products in F3 are thus shared by various regions, although this requires transportation of some products (Fig. 4).
Historical demand values of four companies (Roche, LifeScan, Bayer and Abbott) from 2002 to 2010 are used to forecast the 2011 demand [1]. In this research, demand values for families are assumed to follow a normal distribution. Standard deviation (σ) values for each product family are generated as a percentage of the corresponding mean demand (20%–25%). From [1], the mean demand by family in all markets is shown in Table 5 along with the standard deviations (σ).
954 J. Jiang and G.A. Süer
[Flowchart: calculate capacity requirements → determine probability of demand coverage → compute expected cell utilization values → compare results.]
Once the mean demand and standard deviation values are known, the mean capacity requirements by product family are calculated using Eq. (1) in [14]. The Bottleneck Processing Time (BPT) is defined as the longest processing time among the operations in the cell.

MCR_FamilyNo = MeanDemand × BPT / 60  (hr).   (1)
For example, the mean capacity requirement for Product Family 2 in the manufacturing system of the China region is determined by the mean demand of Product Family 2 in the China region, which is 7,098,188. The BPT (Bottleneck Processing Time) is 1/80 = 0.0125 min in the China region. The resulting mean capacity requirements and standard deviations for the different regions are shown in Table 6.

MCR_F2 = 7,098,188 × 0.0125 / 60 = 1479 h,
StandardDeviation_Capacity,F2 = √(1,703,565² × 0.0125² / 3600) = 355 h.
The demand coverage probability is calculated for various cell levels. It shows the probability that a given number of cells will meet the demand. Capacity requirements are assumed to follow the normal distribution, and each cell is available 2000 h per year. The Demand Coverage Probability (DCP) for a family and cell combination is calculated by Eq. (2).

DCP_Family,Cell = Normsdist((2000 × CellNo − MCR_FamilyNo) / STDEV_Capacity).   (2)
The Demand Coverage Probabilities for all cells and families are calculated from the Mean Capacity Requirements and Standard Deviations. The Mean Capacity Requirement for Product Family 2 in the China region is 1479 and the Standard Deviation is 355. Based on these values, the Demand Coverage Probability of the first cell for Family 2 is 93%; in other words, one cell covers demand at a very high level.

DCP_F2,C1 = Normsdist((2000 × 1 − 1479) / 355) = 0.93.
All of the results of Demand Coverage Probability for different regions are
shown in Table 7. For family 2 in China facility, by adding a second cell, the
Demand Coverage Probability jumps to 99.99%.
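Eq. (2) can be sketched with the standard normal CDF standing in for Excel's NORMSDIST; the figures again follow the Family 2 / China example, and the names are illustrative:

```python
# Sketch of Eq. (2): demand coverage probability for a family/cell
# combination, with each cell available 2000 h per year.
from statistics import NormalDist

def demand_coverage_probability(n_cells, mcr_hours, sigma_hours,
                                hours_per_cell=2000):
    z = (hours_per_cell * n_cells - mcr_hours) / sigma_hours
    return NormalDist().cdf(z)  # standard normal CDF (NORMSDIST)

dcp_one_cell = demand_coverage_probability(1, 1479, 355)
dcp_two_cells = demand_coverage_probability(2, 1479, 355)
print(round(dcp_one_cell, 2))   # 0.93: one cell already covers most demand
print(round(dcp_two_cells, 4))  # a second cell pushes coverage past 99.99%
```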
For example, the Expected Cell Utilization of Product Family 1 for the China region is determined by the probability that the number of cells required (CR) is larger than 1, the percentage utilization of the 1st cell when CR > 1, the probability that CR is between 0 and 1, and the percentage utilization of the 1st cell when CR is between 0 and 1.
All of the results of Expected Cell Utilization calculations for different regions
are given in Table 8.
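The ECU equations themselves are garbled in this excerpt, so the following is only a plausible numerical sketch of the verbal description above, under the assumption that the cell requirement CR is normally distributed with mean MCR/2000 and the matching scaled standard deviation; all names are ours, not the paper's:

```python
# Hedged sketch: expected utilization of the first cell, taken as 100% when
# CR > 1 and as CR itself when 0 < CR <= 1, with CR ~ Normal(MCR/2000, s/2000).
from statistics import NormalDist

def expected_first_cell_utilization(mcr_hours, sigma_hours,
                                    hours_per_cell=2000):
    cr = NormalDist(mcr_hours / hours_per_cell, sigma_hours / hours_per_cell)
    p_full = 1 - cr.cdf(1.0)       # P(CR > 1): the first cell runs at 100%
    total = 0.0                    # E[CR * 1{0 < CR <= 1}], midpoint rule
    steps = 10_000
    for i in range(steps):
        x = (i + 0.5) / steps
        total += x * cr.pdf(x)
    partial = total / steps
    return p_full + partial

# Using the Family 2 / China figures from the previous section:
print(round(expected_first_cell_utilization(1479, 355), 2))  # 0.73
```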
Similarity coefficients between product families:

Family    1     2     3     4     5
1         -    1.00  0.89  0.78  0.70
2        1.00   -    0.89  0.78  0.70
3        0.89  0.89   -    0.70  0.80
4        0.78  0.78  0.70   -    0.89
5        0.70  0.70  0.80  0.89   -
[Flowchart excerpt (heuristic procedure, cf. Fig. 6): consider a coverage segment with the highest ECU.]
Cell allocation and expected utilizations for the China facility:

Family  Cell 1  Cell 2  Cell 3  Cell 4  Cell 5 (C+PR)  Cell 6 (C+I+PR)
1       -       -       -       0.15    -              -
2       -       0.01    -       0.73    -              -
3       -       -       0.87    0.06    -              -
4       1.00    0.96    -       0.06    0.52 + 0.42    -
5       -       -       -       -       -              0.33 + 0.33 + 0.31
Type    DC      SC      DC      RC      DC             DC
Table 12. Cell types for Puerto Rico and Ireland based on the new approach in this paper.

For example, when sorting the ECUs for the Ireland region, the highest ECU is 100%, for Family 4 in Cell 1, so Product Family 4 is allocated to Cell 1. When the second cell, with 92% utilization, is considered, it is allocated to a new cell, Cell 2. Table 10 is used to search for families similar to Product Family 4. From Table 9, Families 1, 2 and 5 can be considered to share a cell with Product Family 4 (similarity coefficient > 0.77). However, merging this cell with Family 1 or Family 5 would exceed 100% utilization. The only option is to merge Family 2's cell (1% utilization) with Family 4's Cell 2. In the Ireland case, there are two Dedicated Cells, two Shared Cells and one Remainder Cell. The results are summarized in Table 10. The heuristic procedure is given in Fig. 6.
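The merging test described above can be sketched as follows; the similarity threshold (0.77) follows the Ireland example, while the utilization figure in the second call is illustrative, not taken from the paper's tables:

```python
# Sketch: a low-utilization cell may merge into an existing cell when the two
# families are similar enough and the combined utilization stays within 100%.

SIMILARITY = {  # symmetric similarity coefficients from the matrix above
    (1, 2): 1.00, (1, 3): 0.89, (1, 4): 0.78, (1, 5): 0.70,
    (2, 3): 0.89, (2, 4): 0.78, (2, 5): 0.70,
    (3, 4): 0.70, (3, 5): 0.80, (4, 5): 0.89,
}

def similarity(f1, f2):
    return SIMILARITY.get((min(f1, f2), max(f1, f2)), 0.0)

def can_merge(family_a, util_a, family_b, util_b,
              threshold=0.77, capacity=1.0):
    return (similarity(family_a, family_b) > threshold
            and util_a + util_b <= capacity)

# Ireland example: Family 4's second cell (92%) and Family 2's cell (1%):
print(can_merge(4, 0.92, 2, 0.01))  # True: similar and 0.93 <= 1.0
# With an illustrative 15% cell for Family 1, capacity would be exceeded:
print(can_merge(4, 0.92, 1, 0.15))  # False: 0.92 + 0.15 > 1.0
```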
The final allocations are given in Table 12. In short, the China facility becomes a global supplier for Product Family 5 and a partial global supplier for Product Family 4. All three facilities supply their own markets with respect to Product Families 1, 2 and 3 (Table 11).
10 Conclusion
The comparison between the two designs is presented in Table 13. It shows that the total number of machines remains the same for both designs. However, the number of workers is reduced in the new design, due to balancing issues in the cells: each cell is equipped with all the machines needed to run all of its families (the maximum number of machines), and some of them remain idle while running certain families. Another benefit is that there are more dedicated cells under the new design. Furthermore, the number of cells with a utilization value greater than 0.90 is higher. Flowtimes are expected to be shorter in dedicated cells, which in turn reduces the work-in-process inventory. For a more complete analysis, transportation costs and labor costs as well as investment costs have to be included; these results could not be included due to space concerns.
References
1. Ates O (2013) Global supply chain and competitive business strategies: a case study of blood sugar monitoring industry. https://etd.ohiolink.edu/
2. Brush TH, Marutan CA, Karnani A (1999) The plant location decision in multi-
national manufacturing firms: an empirical analysis of international business and
manufacturing strategy perspectives. Prod Oper Manag 8:109–132
3. Bulgak AA, Bektas T (2009) Integrated cellular manufacturing systems design
with production planning and dynamic system reconfiguration. Eur J Oper Res
192:414–428
4. Canel C, Das SR (2002) Modeling global facility location decisions: integrating
marketing and manufacturing decisions. Ind Manag Data Syst 102:110–118
5. Chen SJ, Cheng CS (1995) A neural network-based cell formation algorithm in
cellular manufacturing. Int J Prod Res 33:20–542
6. Chopra S, Meindl P (2013) Supply chain management: strategy, planning & oper-
ations, 2nd edn. Prentice Hall, Upper Saddle River
7. Dicken P (1992) Global shift: the internationalization of economic activity, 2nd
edn. Paul Chapman Publishing, London
8. Erenay B, Suer GA et al (2015) Comparison of layered cellular manufacturing
system design approaches. Comput Ind Eng 85:346–358
9. Haug P (1992) An international location and production transfer model for high
technology multinational enterprises. Int J Prod Res 30:559–572
10. Hyer NL (1992) Education: MRP/GT: a framework for production planning and control of cellular manufacturing. Decis Sci 13:681–701
11. Jiang J, Suer GA (2016) Alternative global supply chain design strategies for a
blood sugar strip manufacturer considering layered cellular design. In: Proceedings
of the global joint conference on industrial engineering and its application areas
2016, Istanbul, Turkey, 14–15 July, pp 14–15
12. Lobo R (2006) Comparison of connected vs disconnected cellular systems using
simulation. Ph.D. thesis, Ohio University
13. Lowe TJ, Wendell RE, Hu G (2002) Screening location strategies to reduce
exchange rate risk. Eur J Oper Res 136(3):573–590
14. Maddisetty S (2005) Design of shared cells in a probabilistic demand environment.
Ph.D. thesis, Ohio University
15. Mahdavi I, Javadi B et al (2007) Designing a new mathematical model for cellular
manufacturing system based on cell utilization. Appl Math Comput 190:662–670
16. New SJ, Payne P (1995) Research frameworks in logistics: three models, seven
dinners and a survey. Int J Phys Distrib Logistics Manag 25:60–77
17. Rajamani D, Singh N, Aneja YP (1990) Integrated design of cellular manufacturing
systems in the presence of alternative process plans. Int J Prod Res 28:1541–1554
18. Singh N, Mohanty BK (1991) A fuzzy approach to multi-objective routing problem
with applications to process planning in manufacturing systems. Int J Prod Res
29:1161–1170
19. Süer GA, Huang J, Maddisetty S (2010) Design of dedicated, shared and remainder
cells in a probabilistic demand environment. Int J Prod Res 48:92–114
Design of Closed-Loop Supply Chain Model
with Efficient Operation Strategy
1 Introduction
In general, designing a closed-loop supply chain (CLSC) model aims to maximize the utilization of resources. In particular, return management, such as the reuse and resale of recovered parts and products, has become a focus for many companies. For example, by 1997, approximately 73,000 U.S. companies had earned $53 billion through resale activities of remanufactured products [1]. This means that return management of used products is becoming more and more important.
Return management of used products usually consists of three aspects: the first is the operational management of the returned product, the second is that of the remanufacturing activity, and the third is that of the secondary market [6]. Ozkir and Basligil considered the operational management of the returned product [8]; they found that regulating the rate of returned products can increase the total profit and revenue. Subramoniam found that remanufacturing activities can increase profit, save resources and energy, and create new markets [9]. Van Daniel et al. suggested that considering both the new market with new-product activity and the secondary market with resale activity yields more profit than the new market alone [7].
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 79
Design of Closed-Loop Supply Chain Model 963
2 Proposed CLSC-OS
The proposed CLSC-OS is an integrated model combining forward logistics (FL) and reverse logistics (RL). It consists of suppliers in areas 1, 2, 3, and 4, a product manufacturer, a part inventory distribution center, a product distribution center and a retailer in FL, and a customer, a collection center, a recovery center, a secondary market and a waste disposal center in RL. For effective use of the parts recovered at the recovery center, the part inventory distribution center is also taken into consideration. The conceptual network structure of the proposed CLSC-OS is shown in Fig. 1.
In Fig. 1, the production and recovery flows are as follows. New part types
1, 2, 3, and 4 (NP1, NP2, NP3, and NP4) are produced at the suppliers of
areas 1, 2, 3, and 4, respectively. NP1, NP2, NP3, and NP4 are then sent to
part inventory distribution center. Recovered parts (RP1, RP2, RP3, and RP4)
from recovery center are also sent to part inventory distribution center. Product
manufacturer produces product using NP1, NP2, NP3, NP4, RP1, RP2, RP3,
and RP4 from part inventory distribution center. The product is sent to retailer
through product distribution center and sold to customer. The returned product
from customer is collected at collection center and then sent to recovery center.
At the recovery center, the returned product is checked and classified into recoverable and unrecoverable products. The recoverable products (a1 % of returns) are recovered at the recovery center and then resold at the secondary market. The unrecoverable products are disassembled into recoverable and unrecoverable parts. The recoverable parts (a2 %) are recovered at the recovery center and then sent to the part inventory distribution center, while the unrecoverable parts (a3 %) are sent to the waste disposal center to be burned or landfilled.
In particular, in the proposed CLSC-OS, the part inventory distribution center can regulate the transportation amounts of NP1, NP2, NP3, NP4, RP1, RP2,
964 X. Chen et al.
RP3, and RP4. For instance, if the product manufacturer wants to produce 100 products and a2 is 10% (10 recovered parts = 100 returned products × 10%), then the part inventory distribution center sends 90 NP1, 90 NP2, 90 NP3, and 90 NP4 as new parts and 10 RP1, 10 RP2, 10 RP3, and 10 RP4 as recovered parts to the product manufacturer.
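The regulation rule above can be sketched as follows (the function and part names are illustrative, not the paper's):

```python
# Sketch: for a production order, the share of recovered parts is a2 and the
# remainder comes from new parts, mirroring the 100-product, a2 = 10% example.

def split_parts(order_qty, a2, part_types=("P1", "P2", "P3", "P4")):
    recovered = round(order_qty * a2)   # recovered parts per part type
    new = order_qty - recovered         # new parts per part type
    return {p: {"new": new, "recovered": recovered} for p in part_types}

plan = split_parts(100, 0.10)
print(plan["P1"])  # {'new': 90, 'recovered': 10}
```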
3 Mathematical Formulation
Before suggesting the mathematical formulation for the CLSC-OS, the following assumptions should be considered.
• Unit transportation costs of all facilities at each stage are different and known.
• The handling capacity of the facilities at a stage is the same as or greater than that of the previous stage.
• The return rate from customers is fixed.
• The discount rates of the recovered product and part are calculated according to the quality of the returned product and part.
The index sets, parameters, and decision variables for the mathematical for-
mulation are as follows.
Index set
a : index of area at supplier; a ∈ A;
b : index of new product; b ∈ B;
h : index of supplier; h ∈ H;
i : index of part inventory distribution center; i ∈ I;
j : index of product manufacturer; j ∈ J;
k : index of product distribution center; k ∈ K;
l : index of retailer/customer; l ∈ L;
m: index of collection center; m ∈ M ;
n : index of recovery center; n ∈ N ;
e : index of secondary market; e ∈ E;
p : index of waste disposal center; p ∈ P
Parameter
FDSha : fixed cost at supplier h of area a;
FDIi : fixed cost at part inventory distribution center i;
FDOj : fixed cost at product manufacturer j;
FDDk : fixed cost at distribution center k;
FDCm : fixed cost at collection center m;
FDRn : fixed cost at recovery center n;
UHSha : unit handling cost at supplier h of area a;
UHOj : unit handling cost at product manufacturer j;
UHDk : unit handling cost at product distribution center k;
UHCm : unit handling cost at collection center m;
UHRn : unit handling cost at recovery center n;
UHEe : unit handling cost at secondary market e;
UHWp : unit handling cost at waste disposal center p;
UTShai : unit transportation cost from supplier h of area a to part inventory
distribution center i;
UTIij : unit transportation cost from part inventory center i to product
manufacturer j;
UTOjk : unit transportation cost from product manufacturer j to product
distribution center k;
UTDkl : unit transportation cost from product distribution center k to
retailer/customer l;
s.t.
Σ_h ys_ha = 1, ∀a ∈ A   (7)
Σ_i yi_i = 1   (8)
Σ_j yo_j = 1   (9)
Σ_k yd_k = 1   (10)
Σ_m yc_m = 1   (11)
Σ_n yr_n = 1   (12)
Σ_h (UHS_ha × ys_ha) − Σ_i (TCI_i × yi_i) = 0, ∀a ∈ A   (13)
Σ_i (TCI_i × yi_i) − Σ_j (UHO_j × yo_j) = 0   (14)
Σ_j (UHO_j × yo_j) − Σ_k (UHD_k × yd_k) = 0   (15)
Σ_k (UHD_k × yd_k) − Σ_l TCU_l = 0   (16)
Σ_l TCU_l − Σ_m (UHC_m × yc_m) = 0   (17)
Σ_i (TCI_i × yi_i) − α2 Σ_n (UHR_n × yr_n) ≥ 0   (18)
Σ_n (HCrc_n × xrc_n) − α3 Σ_p (UHW_p × yw_p) ≥ 0   (19)
Σ_n (UHR_n × yr_n) − Σ_m (UHC_m × yc_m) = 0   (20)
Σ_n (UHR_n × yr_n) − α1 Σ_e UHE_e ≥ 0   (21)
ys_ha ∈ {0, 1}, ∀h ∈ H, a ∈ A   (22)
yi_i ∈ {0, 1}, ∀i ∈ I   (23)
yo_j ∈ {0, 1}, ∀j ∈ J   (24)
yd_k ∈ {0, 1}, ∀k ∈ K   (25)
yc_m ∈ {0, 1}, ∀m ∈ M   (26)
yr_n ∈ {0, 1}, ∀n ∈ N   (27)
UHS_ha, UHO_j, UHD_k, UHC_m, UHR_n, UHE_e, UHW_p ≥ 0,
∀h ∈ H, a ∈ A, ∀i ∈ I, ∀j ∈ J, ∀k ∈ K, ∀m ∈ M, ∀n ∈ N, ∀e ∈ E, ∀p ∈ P   (28)
The objective function is divided into two parts: the first maximizes total revenue and the second minimizes total cost. Maximizing the total revenue is represented in Eq. (2) and minimizing the total cost in Eq. (4). Each objective should be optimized while satisfying all of the constraints in Eqs. (7)–(28). Equations (7)–(12) mean that only one facility at each stage should be opened. Equations (13)–(21) show that the capacity of each opened facility is the same as or greater than that of the previous one. Equations (22)–(27) mean that each decision variable can take the value 0 or 1, and Eq. (28) refers to non-negativity.
5 Numerical Experiments
In numerical experiment, various scenarios are considered for evaluating the rate
of the total revenue in Eq. (1) and the total cost in Eq. (4). First, a scale for
970 X. Chen et al.
Case a1 a2 a3
1 0.6 0.3 0.1
2 0.6 0.2 0.2
3 0.6 0.1 0.3
4 0.7 0.2 0.1
5 0.7 0.1 0.2
6 0.8 0.1 0.1
Table 3 shows the various scenarios. Each scenario has four factors: the remanufacturing case, return rate, discount rate and profit premium rate. For each scenario, various relationships between total revenue and total cost are compared. For instance, Scenario 1 considers the cases in Table 2 and various profit premium rates (Pm = 0.4, 0.3, 0.2, 0.1, and 0.0) under fixed return and discount rates when solving the two mathematical formulations in Eqs. (1) and (4). The computation results are used to compare the relationships between total revenue and total cost.
Each scenario in Tables 2 and 3, under the scale of the proposed CLSC-OS, is implemented using the GA approach. The GA parameters are a population size of 20, a crossover rate of 0.5, and a mutation rate of 0.3. The GA approach is executed in the following computation environment: Matlab R2015 on an IBM-compatible PC with a 3.40 GHz processor (Intel Core i7-3770 CPU), 8 GB RAM and Windows 7.
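A GA skeleton with the stated parameters might look as follows; the chromosome is the one-facility-per-stage encoding implied by Eqs. (7)–(12), and the fitness is a stand-in random cost model, not the paper's revenue/cost formulation:

```python
# Illustrative GA skeleton: population 20, crossover rate 0.5, mutation
# rate 0.3, selecting one facility per stage. COST is synthetic data.
import random

random.seed(1)
STAGE_SIZES = [3, 2, 2, 3, 2, 2]           # facilities available per stage
COST = [[random.uniform(1, 10) for _ in range(n)] for n in STAGE_SIZES]

def fitness(ch):                           # lower total cost = better
    return -sum(COST[s][f] for s, f in enumerate(ch))

def crossover(a, b):                       # uniform crossover
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(ch, rate=0.3):                  # resample a stage's facility
    return [random.randrange(STAGE_SIZES[s]) if random.random() < rate else f
            for s, f in enumerate(ch)]

population = [[random.randrange(n) for n in STAGE_SIZES] for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]                # keep the better half
    children = []
    while len(children) < 10:
        a, b = random.sample(elite, 2)
        child = crossover(a, b) if random.random() < 0.5 else a[:]
        children.append(mutate(child))
    population = elite + children

best = max(population, key=fitness)
print(best, round(-fitness(best), 2))      # best chromosome and its cost
```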
Figure 3 shows the computation results for the total revenue-cost rates in Scenario 1. Each colored curve shows the changes of the total revenue-cost rate under changes of the profit premium rate. For instance, the top blue-colored curve shows a fixed value of 1.4 for the total revenue-cost rates of all cases under a fixed profit premium rate of 0.4, a 100% return rate and a 0.5 discount rate. Similar patterns appear in the other colored curves. Therefore, the results shown in Fig. 3 indicate that changes of the remanufacturing rate have no significant influence on the increase of total profit.
Figure 4 shows the changes of the total revenue-cost rates under various profit premium rates, when the return and discount rates are fixed at 100% and 0.5, respectively. As the profit premium rate changes from 0.4 to 0.0, the total revenue-cost rate continuously decreases. Therefore, the results shown in Fig. 4 indicate that the profit premium rate has a significant influence on the increase of total profit.
972 X. Chen et al.
Figure 5 shows the changes of the total revenue-cost rates for each profit premium rate under various discount rates, when the return rate is fixed at 100%. For instance, the top blue curve shows that the total revenue-cost rate increases as the discount rate changes from 0.5 to 0.2. Similar patterns appear in the other curves. Therefore, the results shown in Fig. 5 indicate that the change of discount rates has a significant influence on the increase of total profit.
6 Conclusion
The first objective of this study is to design a CLSC-OS with various facilities at
each stage and the second one is to find a good alternative for increasing total
profit under the changes of remanufacturing rate, discount rate, profit premium
rate, and return rate.
To achieve the two objectives, the CLSC-OS has been proposed. The proposed CLSC-OS has suppliers in areas 1, 2, 3, and 4, a product manufacturer, a product distribution center, and a retailer in the forward logistics (FL), and a customer, a collection center, a recovery center, a secondary market, and a waste disposal center in the reverse logistics (RL). It is represented as a mathematical formulation with two objectives, the maximization of total revenue and the minimization of total cost, under various constraints. The mathematical formulation is implemented using a GA approach.
In the numerical experiments, four types of scenarios with changes in the remanufacturing rate, discount rate, profit premium rate, and return rate have been considered and executed using the GA approach. The experimental results have shown that changes in the remanufacturing rate have no significant influence on the increase of total profit, while changes in the profit premium rate and the discount rate have a significant influence. Therefore, adjusting the profit premium rate and the discount rate can be a good alternative for increasing the total profit (= total revenue − total cost) in the proposed CLSC-OS.
However, since only four scenarios have been considered in this paper, more diverse scenarios with larger-scale CLSC-OS instances should be considered to compare the changes of remanufacturing rate, discount rate, profit premium rate, and return rate. This is left to future study.
References
1. Atasu A, Sarvary M, Wassenhove LNV (2008) Remanufacturing as a marketing
strategy. Manage Sci 54(10):1731–1746
2. Gen M, Cheng R (1997) Genetic algorithms and engineering design. Wiley, New
York
3. Gen M, Cheng R (2000) Genetic algorithms and engineering optimization. Wiley,
New York
4. Gen M, Zhang W et al (2016) Recent advances in hybrid evolutionary algorithms
for multiobjective manufacturing scheduling. Comput Ind Eng doi:10.1016/j.cie.
2016.12.045
5. Goldberg DE (1989) Genetic algorithms in search, optimization and machine learning, vol 13. Addison-Wesley, Reading
6. Guide VDR, Wassenhove LNV (2009) The evolution of closed-loop supply chain
research. Oper Res 57(1):10–18
7. Li J (2010) The potential for cannibalization of new products sales by remanufac-
tured products. Decis Sci 41(3):547–572
8. Ozkır V, Başlıgıl H (2012) Modelling product-recovery processes in closed-loop
supply-chain network design. Int J Prod Res 50(8):2218–2233
9. Subramoniam R, Huisingh D, Chinnam RB (2009) Remanufacturing for the auto-
motive after market-strategic factors: literature review and future research needs.
J Cleaner Prod 17(13):1163–1174
10. Yun YS, Chung HS, Moon C (2013) Hybrid genetic algorithm approach for
precedence-constrained sequencing problem. Comput Ind Eng 65(1):137–147
Optimization of Supply Chain
Based on Internet Plus
Wen Hua(B)
Abstract. “Internet plus supply chain”, one of the Internet Plus strategic actions, builds on information technology and platforms to promote the deep fusion of supply chain elements with modern science and technology and to upgrade supply chain efficiency. The basic factors of information technology support and information sharing, which keep the supply chain operating efficiently, ensure that the “three flows” (material flow, information flow, and fund flow) move efficiently and in an orderly manner along the supply chain. This paper analyses the current situation of and problems with the “three flows”, and discusses how to improve the efficiency of the supply chain system, from customers to manufacturing, logistics, and financial services, with the help of Internet technology.
1 Introduction
In 2015, Chinese Premier Li Keqiang put forward the “Internet plus” strategic action plan in the government work report, aiming to integrate the Internet with traditional industries by means of Internet platforms and information technology, to encourage industrial networks and Internet banking, and to guide Internet-based companies to increase their presence in the international market.
The Internet provides information and communication technology and platforms for enterprises, promotes effective communication and coordination among enterprises, business partners, customers, and units within the enterprise, and makes it possible to implement supply chain management effectively. The enterprises in a supply chain can respond synergistically and quickly, reduce supply chain costs effectively, and improve enterprise efficiency. Although the ratio of logistics cost to GDP has kept declining, from 18.4% in 2007 to 16.0% in 2015, 0.6 percentage points lower than a year earlier [8], it is still high: logistics costs in China are about 5 percentage points higher than in developed countries, an obvious gap. The Global Logistics Performance Index (LPI), issued by the World Bank every two years, is an international index that demonstrates the domestic logistics capability of a country or
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 80
one region to participate in global supply chains. The 2016 assessment shows that China ranks 27th [9], in the second echelon. On the whole, the weak soft power of the circulation field, reflected in high costs, low efficiency, a low level of intensification, insufficient industrial support, and issues of trust, standards, talent, safety, and environmental protection, constrains the efficient operation of the supply chain.
The supply chain is a sequential and interconnected organizational structure covering design, production, and sales, and a value chain involving suppliers, manufacturers, transportation, and all other components. Meanwhile, the supply chain links business activities to achieve accurate forecasting, lower inventory, and real-time fulfillment of needs based on information systems. This paper starts from the current situation of and problems with the “three flows”, and discusses how to improve the efficiency of the supply chain system, from customers to manufacturing, logistics, and financial services, with the help of Internet technology.
In the Internet plus era, the supply chain is trending toward a flat model to ensure that material flow, information flow, and fund flow perform effectively and in an orderly manner in the system. As globalization lengthens and broadens the supply chain, competition grows more and more intense, and competition between enterprises becomes competition between supply chains.
Modern logistics links every supply chain activity from raw material procurement to retail, achieving external integration and reflecting the advantage of integration. With the speeding up of globalization, raw material procurement and product sales by China's manufacturing enterprises have expanded in scale, and procurement and sales are given equal importance. In terms of transportation, owing to China's road construction (especially expressways), the completion and extension of highway networks, and the transformation of consumption patterns, the scope of road transportation has widened, and highway logistics accounted for 75.52% of the total freight volume in 2015, as shown in Table 1.
The data show [3] that in China there are over 7 million logistics companies with more than 16 million vehicles, an average of about 2 vehicles per company. The empty-running rate of trucks is 50%, and cargo is transferred 5 to 6 times among transport drivers. As individual drivers own more than 90% of road transport capacity, road logistics has a low degree of scale and intensification, and road transport is in a small, scattered, chaotic, and disorderly state. All in all, long-distance road transportation has no advantage in economy or environmental protection.
The revenue composition of logistics enterprises is as follows: transport and warehousing business make up 60%, freight forwarding 13.9%, and distribution and distribution processing 5% [2], indicating that logistics enterprises focus on transport and warehousing, base businesses with low profit margins. As for profit margins, integrated logistics enterprises achieve 8.9%, warehousing enterprises 6.9%, and transportation enterprises 3.3%.
In general, the majority of small and medium-sized logistics enterprises lack funds, labor, capacity, and the application of information technology; their labor-intensive operations rely on large numbers of workers; and they have insufficient capacity to provide value-added logistics services, since their business focus is on basic logistics services such as transportation and warehousing. What is more, their generally low profitability can be attributed to two reasons. One is that demand for logistics services falls as China's economy slows down. At the same time, logistics enterprises have a weak ability to provide quality service and low efficiency, and cannot meet current market demand, or cannot meet it in the way the market needs.
19.35% [3]. The application of logistics information technology and software does not mean that the supply chain has achieved information sharing, collaborative response, and quick response. Only by breaking the blocked business models and interest chains established by existing systems and institutions can we cross the boundaries of organizations, regions, and industries.
On the whole, financing difficulties are the bottleneck of development for many SMEs. As the concept of supply chain management has strengthened, the overall foundation of the supply chain has become increasingly apparent. This development has gone through three stages. In the first stage, logistics factors combine with financial activities. In the second stage, supply chain finance moves from individual factors to the integrated management of logistics operations, business operations, and financial management. In the third stage, internet + supply chain finance relieves information asymmetry, reduces the costs of information access and processing, and optimizes industry supply chains through financial resources [10]. Supply chain finance takes the core enterprise as the starting point and systematically arranges financing for all member enterprises in the supply chain. It channels funds to the small and medium-sized upstream and downstream enterprises in the supply chain, providing financial support for the whole chain. The application of the Internet achieves financial support for supply and demand; that is, third-party payment platforms reduce financing costs for enterprises in the supply chain, and consequently the funding bottleneck is alleviated.
The essence of “Internet plus” is to move traditional industry online and digitalize it. When commodity display and transaction behaviors shift to the Internet, information and data can flow, be exchanged, mined, transferred, and shared, and supply and demand can be connected at the lowest cost. However, the enormous volume of information and data is scattered, of low value density, and weakly connected. In this case, big data, cloud storage, data mining, and other technologies are adopted to provide valuable information for decision-making through data integration, processing, and analysis.
The integration of the Internet and industry generally moves from the downstream to the upstream of the supply chain, and changes consumers first. According to the Internet Statistical Report of 2015, the Internet penetration rate in China is 50.3%; 413 million netizens, 60.03% of all netizens [4] and 30.20% of the total population, conduct online shopping and transactions (26.68% in 2014). Reports [1] show that China's online retail market reached a volume of 3.8 trillion yuan, accounting for 12.7% of total retail sales (10.6% in 2014), which represents the proportion of retail conducted over the Internet. The application rates of e-commerce along the supply chain, from consumer → retail → production → transport services, were respectively 26.68%, 10.6%, 8.6%, and 4.0% in 2014, as shown in Fig. 1.
Fig. 1. Application rate of E-commerce in different links of supply chain (Year 2014)
user experience by giving a sense of involvement and control. Many outstanding enterprises are now deepening their involvement in supply chain management. They not only collect customer information, but also guide consumers to participate collaboratively in product development, product customization, and experience processes. Enterprises are expected to improve their ability to satisfy customer needs based on the Internet and to enhance the quality of their supply capacity.
Supply and demand coexist. Supply-side reform means that we should not focus only on stimulating demand; it emphasizes that economic growth cannot simply depend on demand stimulation. In the long run, the main supporting factor lies in effective supply's response to, and guidance of, high-quality demand (Jia Kang, 2015) [5]. Meanwhile, supply-side reform should utilize production factors, including labor, land, resources, capital, and technology, to achieve an intensive production model characterized by economic innovation and efficiency, and to upgrade from low-level supply to high-level supply through knowledge, technology, product, and institutional innovation.
(1) Data Analysis Transformation-Big Data Analysis
With the speeding up of convenient Internet access and the wide application of new technologies, the consumption patterns of end users have changed, and demand expectations are becoming diversified and personalized. In this case, manufacturing enterprises should respond quickly to external changes and effectively mine
the data potential in order to solve supply chain management issues such as
demand forecasting, replenishment forecasting, inventory optimization, foundry
collaboration, production arrangements and supplier management. Besides, the
manufacturing enterprises should monitor and manage the process and results
including process tracking, exception reminder, order tracking, transaction his-
tory data and so on. The solution is the big data analysis which is characterized
by dynamic real-time, multi-dimension, public sharing and low cost. Based on
the connection of data, big data can help all the node enterprises in the supply
chain to watch and control every activity including product design, purchas-
ing, manufacturing, order processing, logistics, financial services and consumers.
Besides, it can display inventory, production, orders, circulation, distribution
and other activities in real time, and assist the enterprises to optimize, forecast
and adjust the balance of supply and demand.
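As a purely illustrative example (not from the paper), the demand forecasting mentioned above could start from something as simple as exponential smoothing over a recent order history; the figures and names below are hypothetical:

```python
def exp_smoothing_forecast(demand, alpha=0.3):
    """Next-period forecast by simple exponential smoothing of a demand history."""
    level = demand[0]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level  # blend each new observation into the level
    return level


weekly_orders = [120, 135, 128, 140, 150]  # hypothetical weekly order counts
forecast = exp_smoothing_forecast(weekly_orders)
```

In a big data setting the same idea is scaled up: the history comes from the shared data platform, and the smoothing constant (or a richer model) is fitted per product rather than fixed.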
The production system can cope with the uncertainty brought by a rapidly changing environment. Through the application of information technology and the Internet, the production system can: (1) grasp the true needs of the market, processing fragmented information through big data, cloud computing, and other information technologies so that demand trends assist decision-making; (2) improve production flexibility, promoting production management that responds quickly to multi-product, small-batch demand and transforming the “rigid” mode of large-scale production into an adaptable, “flexible” mode; and (3) integrate external advantage resources in a timely manner, as the enterprises in the supply chain can share information at lower cost through the information platform.
(2) Manufacturing and production transformation-Quick response
The O2O model, characterized by small batches, multiple varieties, and fast response, has become a popular market demand. The “Internet plus” mode accelerates this consumption pattern and further raises consumer expectations. The changes are so fast that manufacturing enterprises are confronted with the following situations: (1) due to the poor versatility of personalized products, inventory risk is greatly increased; (2) as technology and product updates speed up, the marketable value of the inventory and its performance are reduced, bringing large economic losses. Therefore, it is necessary to adjust the established production management, system arrangement, equipment selection, and logistics and warehousing business patterns targeted at large-scale production, and instead to transform to flexible manufacturing through full application of Internet technology.
From the perspective of production management, it is necessary to adjust production arrangements in a timely, quick, and frequent manner and to adopt a flexible, diversified combination of production factors. Enterprises should also make full use of internal and external resource advantages to prevent excessive production while meeting market demand. From the perspective of production equipment, various artificial intelligence tools, manufacturing equipment, and computing methods are widely used in the manufacturing process, and manufacturing intelligence is fully utilized in the scheduling, design, processing, control, planning, and other aspects of production. Manufacturing enterprises must comprehensively utilize manufacturing technology, management science, computer science, and information technology; conduct comprehensive processing of internal and external information; and reasonably schedule procurement, logistics, and production. A flexible production system based on information technology can respond to changing production and sales, reduce response time to changes, and allow production costs to scale with output fluctuations.
(3) Structural transformation of supply side-Provide quality supply
On the one hand, overcapacity is prominent as China shifts from a shortage economy toward a surplus economy. On the other hand, as Chinese per capita income continues to grow, basic needs upgrade to development demands, and demands become diversified and personalized. In the Internet era, consumers have more choices of products, services, consumption channels, and manners. The popularity of overseas purchasing demonstrates that people welcome products of consistently assured quality and fair, just, and honest service. When consumers are dissatisfied with products and services, they turn to lower-cost channels as alternatives. Therefore, the effective way to resolve overcapacity and overstocked products is to change the export-oriented supply system to improve effective domestic demand and to provide a favorable environment for domestic high-quality supply enterprises, ensuring that supply keeps up with the pace of demand.
Expanding domestic demand has always been an important means of China’s
economic development. In practice, domestic demand growth comes mainly from
investment as the contribution of terminal consumers is limited. Supply side
4 Conclusion
The wide coverage and availability of the Internet have transformed the supply chain from a single, linear, discontinuous information flow into a complex information network, which increases the possibility of choosing excellent supply chain partners and expands consumers' access to quality products and services. The impact of “Internet plus” on the supply chain is reflected, on the one hand, in the application of information technology and big data, through which companies achieve real-time information and data collection and analysis, so that business decision-making is no longer based simply on structured data; on the other hand, it is reflected in changes in production modes and circulation channels. Internet plus promotes the development of multi-industry convergence and the transformation and upgrading of traditional industries, and fundamentally reduces the contradictions between high inventory and low consumption satisfaction, and between overcapacity and sluggish manufacturing.
The transformation from traditional supply chain management to the “Internet plus supply chain” is not the simple application of information technology to supply chain management, but the conversion and innovation of information technology from a subordinate role to a dominant one. Amid the continuous development of the Internet, big data, and cloud computing, this promotes the capture of customers' true demands, supply chain visualization, agile manufacturing, real-time data acquisition and sharing, and partner collaboration. A renewed way of thinking about the market, customers, production, enterprise value, and even the business ecosystem is revealed.
References
1. CEBRC (2016) Online retail market data monitoring report in China in 2014
[DB/OL]. China E-Business Research Centre. http://www.100ec.cn
2. CFLP (2016) statistical investigation report on logistics of key enterprises in China
in 2015 [DB/OL]. China federation of logistics & purchasing, China logistics infor-
mation center. http://www.clic.org.cn/wlxxwlyx/262043.jhtml
3. CFLP (2015) China logistics development report (2014–2015). China Federation of
Logistics & Purchasing. China Logistics Association. China Fortune Press, Beijing
4. CINIC (2016) Statistics on the development of the 37th China internet network
[DB/OL]. China Internet Network Information Centre. http://www.clic.org.cn/
wlxxwlyx/262043.jhtml
5. Jia K, Shu J (2015) Economic ‘new framework’ and ‘new supply’: the pursuit of important link and epitomize in innovation. Sci Technol Prog Policy 11:8–14
6. Li J, Liu C, Zhang N (2016) Analysis of design process and operational mode of different crowd-sourcing supply chains with internet plus. Sci Technol Prog Policy 11:24–31
7. Lin CY (2008) Determinants of the adoption of technological innovations by logis-
tics service providers in China. Int J Technol Manage Sustain Dev 7(1):19–38
8. NDRC (2016) The national logistics operation status briefing in 2015 [DB/OL].
National development and reform commission. http://www.sdpc.gov.cn/jjxsfx/
201605/t20160531 806056.html
9. NDRC (2016) The world bank logistics performance index in 2016 [DB/OL]. China
Logistics Information Center. http://www.clic.org.cn/kjwl/270223.jhtml
10. Song H, Chen SJ (2016) Development of supply chain finance and internet supply
chain finance: a theoretical framework. J Renmin Univ China 35:10–16
Assessing the Recycling Efficiency of Resource
in E-Waste Based on MFA of Reverse Logistics
System
1 Introduction
WEEE is waste electrical and electronic equipment. It covers a wide spectrum, ranging from consumer goods such as discarded refrigerators, air conditioners, washing machines, televisions, computers, and mobile phones, to capital goods such as unqualified products, parts, and scraps from production and repair processes [6].
Because it contains precious metals (such as gold, copper, aluminum, and silver) and plastics, the reutilization of WEEE has great recovery value and
involved the material inputs, outputs, and inventories [8]. According to the law of conservation of mass, MFA can reach an ultimate conclusion that indicates the status of material input, stock, and output at each stage of the life cycle. It is based on material balance theory (Total material input = Total material output + Stock); by means of material flow analysis, we can estimate and evaluate the development, utilization, and waste management of natural resources and various substances. The basic framework of the material flow analysis method is shown in Fig. 2.
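The balance identity above can be checked mechanically; a minimal helper (the function name and figures are ours, for illustration only):

```python
def is_balanced(total_input, total_output, stock, tol=1e-6):
    # Material balance theory: Total material input = Total material output + Stock.
    return abs(total_input - (total_output + stock)) <= tol


# Hypothetical life-cycle stage (tons): 500 flowing in, 420 flowing out, 80 added to stock.
stage_ok = is_balanced(500.0, 420.0, 80.0)
```

Applying such a check to every node of a flow table is a quick way to catch data-entry errors before computing efficiency indicators.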
Fig. 3. The material flow analysis frame of the reverse logistics system
Category         Comment
Main source      The daily life of residents;
                 electronic product and household appliance manufacturers and sellers;
                 illegal imports of e-waste from abroad
Recovery channel Door-to-door recovery;
                 trading in an old product for a new one in shops;
                 flea markets for electronic products and household appliances;
                 garbage recycling by the government / scrap collectors;
                 after-service stations
(3) Reprocessing
In the reprocessing stage, value can be recovered from e-waste through reuse, remanufacturing, or recycling of the recovered products or parts. Reuse applies only to recovered products that can be used directly after cleaning or simple maintenance, such as toner cartridges, which can be used again after simple work. Remanufacturing applies to e-waste that can re-enter the manufacturing stage after dismantling, replacement, or repair. Recycling applies to the parts, glass, and plastic in the recovered e-waste.
(4) Redistribution
After the inspection and reprocessing stages, the recovered e-waste can re-enter the consumer market as commodities for reuse, or be donated directly to consumers in poor areas. In this stage, distribution is the most important work, as it makes the entire WEEE recovery process operate efficiently.
(5) Waste Treatment
For e-waste that has no circular economic value or would bring large harm to the environment, the available materials in it can be reused as recycled resources through reasonable treatment such as dismantling, melting, refining, and electrolysis. E-waste that cannot be reused is disposed of in one of two ways: it is either stored in landfills or sent to an incinerator.
Table 2. The duration of use and the waste rate of the television
Duration of use (year) < 8 8–9 9–10 10–11 11–12 12–13 13–14 14–15 15–16 > 16
Waste rate 6% 6% 10% 13% 15% 15% 13% 10% 6% 6%
The sales data of K television manufacturing company over the years are shown in Table 3.
The formula of the market supply method A is as follows:
Q = Σi Si × Pi , (1)
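As a numerical sketch of Eq. (1), Q can be computed by pairing past sales Si with the waste rates Pi of Table 2; the sales figures below are hypothetical stand-ins, not the Table 3 data:

```python
# Waste rates P_i by duration-of-use bracket, from Table 2 (note they sum to 100%).
waste_rates = [0.06, 0.06, 0.10, 0.13, 0.15, 0.15, 0.13, 0.10, 0.06, 0.06]

# Hypothetical annual sales S_i (units) for the corresponding age brackets, oldest first.
sales = [900, 950, 1000, 1000, 1100, 1150, 1200, 1200, 1250, 1300]

# Eq. (1): Q = sum of S_i * P_i over all brackets.
Q = sum(s * p for s, p in zip(sales, waste_rates))
```

The same weighting applied to the actual Table 3 sales history yields the estimated quantity of discarded televisions entering the reverse logistics system.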
992 M. Wang et al.
Fig. 5. The specific data in reverse logistics system of discarded television. All units
measures in tons
P1 P2 P3 P4 P5 P6 P7 P8 I0
P1 0 0 0 0 0 0 0 0 3413.2
P2 3413.2 0 0 0 0 0 0 0 308.4
P3 0 2639.8 0 0 0 0 0 0 0
P4 0 0 1028.1 0 0 0 0 0 823.5
P5 0 0 923.9 1036.9 411.8 274.2 0 0 754.5
P6 0 0 0 0 2302.3 0 203.2 0 0
P7 0 432.3 0 0 0 2231.3 0 0 0
P8 0 649.5 687.8 814.7 687.2 0 0 0 0
O0 0 0 0 0 0 0 2460.4 2839.2 0
According to Eqs. (2) and (3), the total flow of each node is shown in Table 5. In order to assess the overall recycling efficiency of the system, we give the mathematical expressions in Eqs. (4)–(6).
n_i = (I_i + Σ_{j=1}^{n} f_ij) / (o_i + Σ_{j=1}^{n} f_ji), (4)
R_i = (n_i − 1) / n_i, (5)
where n_i is the ratio of the input to the final output of a single node. n_i = 1 indicates that the recycled metal resources pass through node i but do not return to node i for recycling; n_i > 1 indicates that the recycled metal resources pass through node i and return to it, directly or indirectly, for recycling.
R_i is the recycling efficiency of a single node. R_i = 0 indicates that the flow of recycled metal resources through node i is unidirectional; R_i > 0 indicates that the recycled metal resources passing through node i can return to it for circulation and reuse.
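As a quick check of Eq. (5), computing R_i from the reported n_i values of the nodes with return flows reproduces the listed recycling efficiencies (a short sketch; the variable names are ours):

```python
def recycling_efficiency(n_i):
    # Eq. (5): R_i = (n_i - 1) / n_i.
    return (n_i - 1.0) / n_i


# n_i values reported for the nodes with return flows in the television case.
n_values = {5: 1.14, 6: 1.09, 7: 1.19}
R_values = {i: recycling_efficiency(n) for i, n in n_values.items()}
```

This gives approximately 12.3%, 8.3%, and 16.0% for nodes 5, 6, and 7, matching the tabulated values; nodes with n_i = 1 give R_i = 0.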
Table 6. The n_i and R_i values of each node
Node i  1  2  3  4  5       6      7     8
n_i     1  1  1  1  1.14    1.09   1.19  1
R_i     0  0  0  0  12.30%  8.30%  16%   0
4 Conclusions
According to the analysis and discussion above, we assessed the recycling efficiency of metal resources in the waste televisions of K television manufacturing company. Since the company is representative of the television industry, the resulting recycling efficiency has a certain reference value for the home appliance manufacturing industry in China.
There is no sound assessment standard for evaluating recycling efficiency in China, so this paper takes the Japanese experience as a reference. In Japan, the household appliance recycling and reuse regulation in effect since 2001 specifies that the recycling rate of useful resources (mostly metals) recovered from discarded televisions by television manufacturers must be above 55% [17]. Obviously, the recycling efficiency of the television company can hardly reach this criterion at the present level of resource recycling, so it is feasible to optimize the reverse logistics recycling system for e-waste in China and to improve the circulation efficiency of the recycling system. We therefore give the following suggestions:
(1) From the above calculation process, the overall recycling efficiency level mainly depends on P5, P6, and P7 (Fig. 5). Thus, we can increase the values of P5, P6, and P7 (Table 6) to increase the overall recycling efficiency; that is, improving the flows through manufacturers, retailers, and consumers is the key to improving the recycling efficiency of the system.
(2) Improve the secondary utilization ratio of materials in the recycling system. For example, by improving the technological level of decomposing and dismantling e-waste in the recycling process, the recycled material recovered from e-waste can be increased and the input of raw material reduced, thereby minimizing the wastage of resources drawn from the external environment.
Analysis of Supply Chain Collaboration with Big
Data Suppliers Participating in Competition
Abstract. The combination of big data theory and supply chain man-
agement theory has brought disruptive change to enterprise cooperation
modes. This paper examines the overall profit of the supply chain under
cooperative and non-cooperative decision-making when a big data supplier
participates in supply chain competition. First, the profit model of each
member enterprise is established from demand function theory, and the
overall profit model of the supply chain is obtained under decentralized
decision-making; then the profit model under joint decision-making, based
on maximizing the whole supply chain, is analyzed. Comparing the two
profit models leads to the conclusion that, when a big data supplier joins
supply chain competition and one party gains the dominant position in the
supply chain, collaborative decision-making is the key to enhancing the
overall profitability of the supply chain.
1 Introduction
Supply chain management based on mutual cooperation between enterprises is
one of the most effective business strategies adopted in fierce market
competition, and the rise of the big data industry has injected new blood into
supply chain management. Data analysis methods are applied to all aspects of
supply chain management to effectively curb procurement shortages, logistics
lags, and production bottlenecks. There have been some significant achievements
in big data supply chain management. First, supply chain management has become
intelligent: a new generation of radio frequency identification (RFID)
technology can provide more reliable data for supply chains. These data flow
among supply chain nodes, so enterprises can exchange information in a timely
manner, the operating mechanism of the system can be optimized, and
decision-making becomes automated. In addition, big data has also brought
disruptive changes to supply chain logistics. At present, enterprises can already
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 82
achieve real-time monitoring of logistics and route optimization through big
data technologies, and transport capacity and efficiency have greatly improved.
A focus of future research is predicting supply chain behavior from big data:
accurate forecasting is one of the most effective ways to help enterprises save
costs, and the accuracy of big data forecasts is not matched by other methods.
Current research on competition and cooperation models of the supply chain is
mature. Yang and Bai built on the Hotelling model to analyze profit-sharing
contracts between the supplier and the core firm, and studied the impact of
upward decentralization on the core firm and the whole supply chain under the
assumption that the supplier has no advantage in production costs [12]. Wu and
Han studied optimal production decisions in closed-loop supply chains
consisting of two competing manufacturers and one retailer under
remanufacturing cost disruptions [9]. Xu et al. [11] gave a literature review
of supply chain contract selection and coordination under competing
multi-manufacturers. Shi studied optimal advertising strategies in a
master-slave supply chain under a non-cooperative static game and a Stackelberg
game, and proved the relevance of the manufacturer's and the retailer's own
advertising levels and costs to the overall supply chain advertising level and
costs [3]. A strategic partner selection model for the supply chain was
constructed by Xin and others based on game theory, obtaining optimal partner
selection strategies under incomplete information [10]. Zhang considered
knowledge sharing between the whole supply chain and its associated member
enterprises to be very important, and accordingly built a supply chain profit
distribution model based on knowledge-sharing costs [14]. Chen et al. [1] posed
the question of how big data analytics affects value creation in the supply
chain, and answered it by treating dynamic capability theory as a unique
information-processing capability.
Turning to supply chain research based on big data theory: Wu [8] constructed,
using dynamic differential game theory, a dynamic cooperation model of a
three-echelon supply chain with a big data service provider participating,
under a retailer-paid contract and a jointly-paid contract, and discussed the
impact of the contract parameters on supply chain profit. Wang and Cong [7]
studied the coordination problem of a two-stage supply chain with a cloud
computing service provider involved, and introduced a revenue-sharing contract
and a punishment contract into the supply chain competition model to verify
supply chain coordination and decision-making. Waller [5] established a matrix
model for deciding when data analysis should be applied or avoided, and
theorized on how predictive analytics can help lower total logistics costs.
Tan [4] analyzed the obstacles to data acquisition, set up a corresponding data
infrastructure framework in which internal data are deeply mined to build a
data network, and helped provide optimal supply chain decisions. A systematic
SAM (Roadmap, Strategic Consolidation, and Assessment) roadmap was proposed by
Sanders [2] to help companies avoid getting caught in a mess of data.
1000 S. Liu and H. Wang
The concept of big-data-driven supply chain analytics (SCA) was proposed, and a
functional, process-based, collaborative and agile SCA Maturity Model was
established, by Wang and Gunasekaran [6].
This paper analyzes the overall profitability of the supply chain under dif-
ferent decision-making modes when a big data supplier participates in supply
chain competition. The supplier's and the manufacturer's demand curves are
taken as input to construct their respective profit models in the case where
the manufacturer bears the big data cost alone. Through analysis of these
models, the profit models of the supplier and the manufacturer, and hence the
overall profitability of the supply chain, are obtained under each
decision-making mode. Comparing the two profit models yields the corresponding
conclusions.
The rest of the paper is organized as follows. In Sect. 2, the problem is
described and some assumptions are proposed. In Sect. 3, the models of
decentralized decision-making profit and joint decision-making profit are
built. In Sect. 4, the two models are compared and analyzed. A numerical
analysis is given in Sect. 5, and conclusions are drawn in Sect. 6.
The paper considers a supply chain system with a supply-demand relationship
between one supplier and one manufacturer. The manufacturer produces a product
with a fixed life cycle and mature market demand; the supplier provides raw
materials. The input-output ratio of raw material to final product is 1:1. To
solve the problem of lagging information transmission among member firms and to
improve the accuracy of market demand forecasting, a third-party big data
enterprise is employed to help the supply chain build a big data decision
platform, and the cost is borne entirely by the manufacturer. Because of the
existence of the supply chain big data platform, information in the supply
chain is completely symmetric. It is assumed that the profit-sharing problem in
cooperative decision-making has already been settled by a corresponding
contract between supply and demand, so the paper does not consider it further.
Suppose that the manufacturer's market demand function is D(PA) = a − bPA +
bK(t); solving for price gives the manufacturer's price curve PA = a/b − (1/b)D
+ K(t). For convenience of calculation, let a/b = m and 1/b = n, so that PA = m
− nD + K(t). It is worth noting that m is the limit price that the market can
afford [13]; that is, at this price the purchase rate of the commodity is 0.
Suppose that the market demand function of the supplier is D(PB) = α − βPB,
which gives PB = α/β − (1/β)D; let α/β = s and 1/β = t, so that PB = s − tD.
Similarly, s is the limit price that the supplier's market can afford. The
notation used below is:
D : Market demand;
PA : The price of the product produced by the manufacturer;
PB : The price of the raw materials supplied by the supplier;
GB : The prices of the raw materials supplied by the sub-suppliers;
K(t) : Big data level curve;
r : Big data cost coefficient;
ΦA : The profits of the manufacturers;
ΦB : The profits of the suppliers;
ΦAB : The total profit of the supply chain when making decisions together
Among them, the big data cost is F = (rt/2)D; that is, the manufacturer's
output is positively correlated with the big data cost: the higher the output,
the greater the big data cost.
ΦB = (PB − GB) × D. (2)

It can be known from the supplier's demand function that D = (s − PB)/t.
Therefore, the supplier's profit is:

ΦB = (1/t)[s(PB − GB) − PB^2 + PB·GB].

Taking the partial derivative with respect to PB in the supplier profit model
and setting it equal to zero gives the raw material price offered by the
supplier as a function of cost:

∂ΦB/∂PB = (1/t)(s − 2PB + GB) = 0,

PB = (s + GB)/2. (3)
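As a quick sanity check, the supplier's first-order condition can be reproduced symbolically (a sketch using the sympy library; the symbols follow the paper's notation, and the variable names are ours):

```python
import sympy as sp

# Paper's notation: P_B supplier price, G_B sub-supplier price,
# demand implied by the supplier's price curve P_B = s - t*D.
PB, GB, s, t = sp.symbols('P_B G_B s t', positive=True)
D = (s - PB) / t                  # demand as a function of the supplier's price
Phi_B = (PB - GB) * D             # supplier profit, Eq. (2)
PB_star = sp.solve(sp.Eq(sp.diff(Phi_B, PB), 0), PB)[0]
print(sp.simplify(PB_star))       # equals (s + G_B)/2, i.e. Eq. (3)
```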
The manufacturer’s profit formula can be known from formulas (3) and (1):
s + GB rt
ΦA = [(m − nD + k(t) − )] × D − D.
2 2
1002 S. Liu and H. Wang
s + GB m + k(t) − s +2GB − rt
2 rt m + k(t) − s +2GB − rt
2
ΦA = [(m − nD + k(t) − )] × − × .
2 2n 2 2n
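The manufacturer's first-order condition can likewise be checked symbolically (a sympy sketch; K stands for K(t), treated as fixed at the decision moment, and the variable names are ours):

```python
import sympy as sp

# m, n: manufacturer demand-curve parameters; r, t: big data cost terms;
# K stands for K(t), the big data level term, held fixed here.
D, m, n, s, GB, r, t, K = sp.symbols('D m n s G_B r t K', positive=True)
Phi_A = (m - n*D + K - (s + GB)/2) * D - (r*t/2) * D   # manufacturer profit
D_star = sp.solve(sp.Eq(sp.diff(Phi_A, D), 0), D)[0]
print(sp.simplify(D_star))   # equals (m + K - (s + G_B)/2 - r*t/2) / (2n)
```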
4 Comparative Analysis
Through the above analysis, we obtain the profit models of the supplier and
the manufacturer under decentralized and collaborative decision-making when
the manufacturer bears the full big data cost. Under decentralized
decision-making, the supplier and the manufacturer do not take maximization of
the overall interests of the supply chain into account, but make decisions
entirely from the perspective of their own interests. Because of the big data
platform, although the supplier and the manufacturer decide separately, the
transmission of the information flow is transparent. The manufacturer decides
first, and the supplier makes its own production decision according to the
manufacturer's production plan.
ΔΦ = (e − GB)^2/(4n) − [e − (s + GB)/2][e − (3s − GB)/2]/(4n)
   = (e − GB)^2/(4n) − (2e − s − GB)(2e − 3s + GB)/(16n)
   = {(s − GB)^2 + 4[(e − GB)^2 − (e − s)^2]}/(16n).
From the above, s is the limit price that the market can afford, so there must
be GB < s; hence (e − GB)^2 − (e − s)^2 > 0, and it is easy to prove that
ΔΦ > 0. That is to say, the profit under cooperative decision-making is higher
than that under decentralized decision-making.
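The algebraic identity behind the last step of ΔΦ can be verified mechanically (a sympy sketch; e, s, GB, n are the symbols appearing in the expression above):

```python
import sympy as sp

e, s, GB, n = sp.symbols('e s G_B n', positive=True)
# Difference between the cooperative-profit term and the decentralized term.
delta = (e - GB)**2 / (4*n) - (2*e - s - GB)*(2*e - 3*s + GB) / (16*n)
# Closed form claimed in the text.
target = ((s - GB)**2 + 4*((e - GB)**2 - (e - s)**2)) / (16*n)
print(sp.simplify(delta - target))   # 0, so the two expressions agree
```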
Analyzing and contrasting the two models shows that the manufacturer dominates
the entire supply chain and completely controls the supply chain's big data
platform, but this does not mean that the manufacturer can control the
information flow entirely. Under decentralized decision-making, the
manufacturer does not take the supplier's conditions into account, so the
supplier's interests are damaged, which can lead to supply chain breaks. Under
cooperative decision-making, the manufacturer and the supplier, while ensuring
their own interests, determine the optimal production quantity according to the
conditions of both parties. After the corresponding contract is reached, the
cooperation tends to be stable, with maximization of the overall supply chain
profit as its prerequisite.
The above analysis also proves that only cooperation enables the supply chain
system to achieve maximum profit; the joining of a big data supplier does not
change the competition pattern of the supply chain. Although the manufacturer
dominates the supply chain, choosing to cooperate with the supplier achieves a
win-win situation.
5 Numerical Analysis
In this two-echelon supply chain, the overall profit varies with the raw
material price and market demand. Suppose that K(t) = t^2/2 and the big data
cost coefficient r = 0.5. Consider the profit of the supply chain when t = 10
as the raw material price and the market limit price change; the concrete
numerical analysis is shown in Table 1.
Table 1. Supply chain profit as the raw material price and market limit price change
It can be seen from the numerical analysis that when the raw material price is
held constant and the supplier's market limit price is unchanged, an increase
in the manufacturer's market limit price increases the overall profit of both
supply chains; when the manufacturer's limit price does not change, an increase
in the supplier's market limit price has no effect on the overall profit of the
supply chain. In addition, the table clearly shows that the profit of the
jointly-decided big data supply chain exceeds that of the traditional supply
chain by at least 50.5%.
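As an illustrative check that ΔΦ > 0, the two profit terms from Sect. 4 can be evaluated numerically (a sketch: the parameter values below are hypothetical and not taken from Table 1; e denotes the composite term appearing in the ΔΦ expression):

```python
# Hypothetical parameters; the only requirement from the text is G_B < s.
n, GB, s, e = 0.5, 10.0, 20.0, 60.0
coop  = (e - GB)**2 / (4*n)                          # cooperative term
decen = (2*e - s - GB)*(2*e - 3*s + GB) / (16*n)     # decentralized term
delta = coop - decen
print(coop, decen, delta)   # 1250.0 787.5 462.5 -> cooperation wins
```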
6 Conclusion
Big data theory has spawned many new service and industry models, and it is
challenging for traditional industries to use big data to change their supply
chain operations. How big data can be used to implement change, and what should
be targeted to gain competitive advantage, are the first issues an enterprise
should consider.
The model constructed in this paper focuses on the competition problem brought
by a big data supplier joining the supply chain, and examines the supply chain
cooperation mode from the profit perspective. In the model, the big data cost
is not shared by the manufacturer and the supplier, which is a common
phenomenon in practice: to gain a dominant position in the supply chain, one
enterprise chooses to bear the big data cost alone so as to control the big
data platform. But this does not mean that the enterprise will achieve an
increase in profits, partly because of the cost burden; moreover, the stability
of cooperation could be impacted after the enterprise obtains a dominant
position, and the overall profit of the supply chain may be reduced in that
case. Therefore, when a third-party big data provider is introduced, the key to
improving the overall profit of the supply chain is for the member companies to
cooperate with each other and make decisions collaboratively. The analysis of
this article verifies precisely this conclusion.
References
1. Chen DQ, Preston DS, Swink M (2015) How the use of big data analytics affects
value creation in supply chain management. J Manage Inf Syst 32(4):4–39
2. Sanders NR (2016) How to use big data to drive your supply chain. Calif Manage
Rev 58(3):26–48
3. Shi KR, He P, Xiao TJ (2011) A game approach to cooperative advertising in a
two-stage supply chain. Ind Eng J 12:6–9
4. Tan KH, Zhan YZ et al (2015) Harvesting big data to enhance supply chain inno-
vation capabilities: an analytic infrastructure based on deduction graph. Int J Prod
Econ 165:223–233
5. Waller MA, Fawcett SE (2013) Click here for a data scientist: big data, predictive
analytics, and theory development in the era of a maker movement supply chain.
J Bus Logistics 34(4):249–252
6. Wang G, Gunasekaran A et al (2016) Big data analytics in logistics and supply
chain management: certain investigations for research and applications. Int J Prod
Econ 176:98–110
7. Wang HC, Cong JJ (2016) Study on coordination of cloud computation service
supply chain under joint contracts. Logistics Technol 5:157–160
8. Wu C, Zhao DZ, Pan XY (2016) Comparison on dynamic cooperation strategies
of a three-echelon supply chain involving big data service provider. Control Decis
7:1169–1171
9. Wu H, Han XH (2016) Production decisions in manufactures competing closed-loop
supply chains under remanufacturing costs disruptions. Comput Integr Manuf Syst
4:1130–1140
10. Xin L, Jia Y (2011) Supply chain strategic partner selection based on game theory.
Syst Eng 4:123–126
11. Xu L, Xu Q, Liu X (2015) A literature review on supply chain contracts under
competing multi-manufacturers. Technoeconomics Manage Res 7:3–7
12. Yang DJ, Bai Y (2015) Supply chain core firms competition and decentralization
based on Hotelling model. Proc Natl Acad Sci 102(4):1157–1162
13. Zhang A (2012) A comparative analysis of several demand function. J Jilin Province
Inst Educ 10:153–154
14. Zhang TF, Zhang YM (2011) Analysis of knowledge sharing in supply chain node
based on game theory. Stat Decis 17:184–186
Fuzzy GM(1,1) Model Based Per Capita
Income Prediction of Farmers in the World
Natural and Cultural Heritage Area: Taking
Leshan City as an Example
1 Introduction
In the past few decades, China has experienced rapid economic growth and a
significant reduction in poverty [3]. At the same time, however, many
contradictions and problems have been exposed. In recent years, the slow growth
of farmers' income has become a focus of all walks of society. The difficulties
currently facing farmers' income growth have become a big obstacle to the
virtuous cycle of the whole national economy. Therefore, the study of farmers'
income growth is of far-reaching significance.
Since the reform and opening up, China has carried out a series of significant
reforms in the countryside. The market-oriented rural reform of 1978 was a
historic turning point in China's agricultural development: it not only broke
the bondage of the traditional system but also greatly promoted the growth of
farmers' income. From 1978 to 2008, the per capita net income of farmers
increased from 134 yuan to 4,761 yuan [1]. In particular, over the past 30
years, the changes
1008 Z. Meng et al.
in the growth of farmers' income in China are as follows. With inflation
deducted, income grew at 15.2% per year from 1978 to 1985, at 2.7% per year
from 1986 to 1991, and at 5.6% per year from 1992 to 1996. After 1980, 1996 was
the fastest-growing year for farmers' income, at 9% [7]. After 1997, farmers'
income entered a stage of slow growth: the growth rate was 4.6%, half that of
1996. In 1998 income growth continued to fall, to only 4.3%. The years 1999 and
2000 were similar: the growth rate was 3.8% in 1999, and farmers' income
reached a trough at 2.1% in 2000. In 2001 there was a recovery to 4.8% growth.
However, it fell back to 4.3% in 2003. From 2004 to 2008 income achieved steady
increases, reaching 8% in 2008 [6].
In conclusion, the growth rate of per capita farm income varies over a wide
range, and this exerts a negative influence on the development of the rural
economy and even the whole national economy.
As a result, predicting per capita farm income has become a focus of public
concern. The key to long-term agricultural development is solving the problems
of agriculture, rural areas and farmers (the three agriculture-related issues)
reasonably and effectively; the per capita income of farmers is not only the
core of these issues but also the key to solving them. From the history of
China's economic development, it is clear that the per capita income of farmers
shows an obvious trend. To better address the three agriculture-related issues,
it is necessary to forecast the per capita income of farmers and grasp its
trend as accurately as possible. However, uncertain factors in the statistical
process are the chief obstacle to prediction. For per capita income of farmers,
there are many kinds of uncertain factors: quantitative and non-quantitative,
known and unknown.
Neglecting the non-quantified and unknown factors, the uncertain factors
affecting the per capita income of farmers can be divided into two categories:
(1) the part of production grown and sold by farmers themselves; (2) the
production cost of agriculture.
There are many methods for forecasting the uncertain factors. In this paper,
the prediction is based on the fuzzy gray prediction model.
The grey forecasting method has been applied successfully to the modeling of
dynamic systems in fields such as agriculture, ecology, economics, statistics,
industry and the environment. In the absence of long-term historical data, it
can use a system model to predict from incomplete or uncertain information.
According to the statistical method for per capita farm income in the
statistical yearbook, the factors affecting per capita farm income (P) mainly
include the total income of farmers (TI) and the total agricultural population
(IP). The relationship between them can be expressed by the formula P = TI/IP.
Leshan city of Sichuan province, as an area with World double-heritages,
will be used as a case to study. The problem faced by the area with World
2 Problem Statement
At present, per capita farm income is an important index reflecting the
development of the rural economy [5]. Therefore, it is very significant to
predict and accurately grasp the trend of per capita farm income. This part
introduces the per capita farm income of Leshan city and uses it as an example
to forecast per capita farm income with the fuzzy grey prediction model.
Owing to the continuous application of agricultural science and technology and
the continuous improvement of government policies supporting agriculture, the
total agricultural income of Leshan city has risen from 677.67 to 1,992.15
million yuan over the past ten years, as shown in Fig. 1.
Fig. 1. The total agricultural output value of central district of Leshan city
From 2000 to 2009, the total agricultural output value of the central district
of Leshan city increased continuously. Apart from 2006 and 2007, the data
increased from 107.71 million yuan to 1.7 billion yuan. The overall increase in
agricultural output value was of small range and low growth rate. To achieve
sustained and rapid growth of the agricultural economy, agricultural production
should use advanced agricultural science and technology to increase the output
and quality of agricultural products; at the same time, government policies
also need to be further improved.
Fig. 2. The agricultural population of the central district of Leshan city from 2000 to 2009
From 2000 to 2009, although there was a slight rebound from 2005 to 2007, the
agricultural population of the central district of Leshan city showed a
gradually declining trend. The main reasons are as follows. On the one hand,
affected by the Asian financial turmoil, crop prices were generally low from
2000 to 2005 and farmers' enthusiasm was severely dampened; since 2006, with
the effect of the financial crisis declining, the prices of agricultural
products rebounded and farmers' enthusiasm was encouraged again. On the other
hand, along with the rapid development of the Chinese economy, urbanization has
been carried out across the vast rural areas, and its acceleration makes the
local farming population decrease continuously. Thus the agricultural
population in the central district of Leshan city also showed a decreasing
trend. In summary, the total value of agricultural production of the central
district has followed an increasing trend while the agricultural population has
continued to decrease. Affected by these two factors, the per capita income of
farmers should also show an upward trend, as shown in Fig. 3.
From 2000 to 2009, the per capita income of farmers increased continuously, but
slowly and at a low growth rate. The reasons are as follows: first, with the
progress of science and technology, many kinds of advanced techniques have been
widely applied to agricultural production; second, the Chinese government has
introduced various policies to support agricultural development, which reduces
the burden on farmers. For sustained, rapid and steady growth of the per capita
income of farmers, it is necessary to constantly improve the conversion rate of
agricultural science and technology and to perfect the various policies
supporting agriculture.
Fig. 3. The per capita income of farmers in the central district of Leshan city between 2000
and 2009
3 Modeling
Grey system theory was first proposed by Deng [2], who used a differential
equation as the forecasting model and the least squares method to obtain the
coefficients of the equation; the method has a wide range of applications. In
the grey forecasting system, the original data are treated as accurate. In
fact, however, the original data are not accurate, so the classical model
cannot deal effectively with fuzzy phenomena in reality. In this paper, the
original data of per capita income of farmers are defined as triangular fuzzy
numbers, and the fuzzy numbers are forecast with the grey forecasting system.
Thus the model consists of two parts: the fuzzy part and the grey forecasting
part.
3.1 Fuzzification
As mentioned earlier, the per capita income of farmers is affected by two
factors: the total value of agricultural production (TI) and the agricultural
population (IP). The relationship between them is P = TI/IP.
When calculating the per capita income of farmers, this method has some
shortcomings. Because some indicators cannot be measured with sufficient
accuracy, they can only be estimated from experience, which reduces the
accuracy and scientific rigor of the final result.
In the statistical process for the total income of farmers, some indicators
cannot be calculated accurately, such as the part of production that farmers
grow and sell themselves and the agricultural production cost of farmers. The
part grown and consumed by farmers themselves does not enter the market, so its
value cannot be measured accurately. The agricultural production cost includes
labor costs, but farmers' own labor cannot always be priced, so it cannot be
measured accurately either. In these two parts of the statistical process, some
indexes are often judged from experts' experience, which lacks precision.
Thus the total income of the farmers in a region is only an estimate rather
than an exact value. In this situation, it can be fuzzified by a triangular
fuzzy number to obtain a triangular fuzzy number for the per capita income of
farmers.
To handle such problems, Zadeh proposed fuzzy theory in 1965, representing a
fuzzy set by a membership function in the same way that an ordinary set is
represented by its characteristic function [8].
Definition 2. Let A be a fuzzy set in the domain X. If for all α ∈ [0, 1] the
α-cut sets of A are convex, then A is called a convex fuzzy set.
Definition 3. If the fuzzy set M is a normal convex fuzzy set defined on the
real field R satisfying: (1) there is a unique point x0 ∈ R with μM(x0) = 1
(x0 is called the mean value of M); (2) μM(x) is continuous from the left and
the right, then M is called a fuzzy number. The fuzzy number M means
"approximately the real number x0".
From the definition of a fuzzy number, the α-cut set Mα is a closed interval in
the real field R: Mα = {x ∈ R | μM(x) ≥ α} = [mα^l, mα^r], where mα^l and mα^r
are the left and right end points of the closed interval Mα.
The general expression of the fuzzy number M is μM(x) = L(x) for l ≤ x ≤ m and
μM(x) = R(x) for m ≤ x ≤ r, where L(x) is a right-continuous increasing
function, R(x) is a left-continuous decreasing function, and 0 ≤ L(x), R(x) ≤ 1.
If both L(x) and R(x) are linear, M is called a triangular fuzzy number, often
denoted M(l, m, r). The total income of farmers is converted to a triangular
fuzzy number and expressed as M(TI−, TI, TI+). The membership function is shown
in Fig. 4.
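A triangular membership function of the form just described can be sketched in a few lines (Python; the function name is ours, not the paper's):

```python
def tri_membership(x, l, m, r):
    """Membership degree of x in the triangular fuzzy number M(l, m, r)."""
    if l <= x <= m:
        return 1.0 if m == l else (x - l) / (m - l)   # left branch L(x)
    if m < x <= r:
        return (r - x) / (r - m)                      # right branch R(x)
    return 0.0

print(tri_membership(2306, 2206, 2306, 2406))   # 1.0 at the mean value
print(tri_membership(2256, 2206, 2306, 2406))   # 0.5 halfway up the left branch
```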
The relationship between the per capita income of farmers and the total income
of farmers is:

P = TI/IP.

We thus obtain the triangular fuzzy number of the historical per capita income
of farmers: M(P−, P, P+).
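Because IP is a crisp number, dividing the fuzzy total income componentwise gives the fuzzy per capita income (a sketch; the figures below are hypothetical, not the paper's data):

```python
# Hypothetical fuzzy total income (yuan) and crisp agricultural population.
TI_fuzzy = (950_000, 1_000_000, 1_050_000)   # (TI-, TI, TI+)
IP = 400
P_fuzzy = tuple(v / IP for v in TI_fuzzy)    # (P-, P, P+)
print(P_fuzzy)   # (2375.0, 2500.0, 2625.0)
```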
The GM(1,1) forecasting model is described below. Regard the original data of
the per capita income of farmers as the original data series p(0):

p(0) = {(p(0)−(1), p(0)(1), p(0)+(1)), ..., (p(0)−(n), p(0)(n), p(0)+(n))},

and

p(1) = {(p(1)−(1), p(1)(1), p(1)+(1)), ..., (p(1)−(n), p(1)(n), p(1)+(n))},

where p(1) is obtained from p(0) by the accumulated generating operation (AGO),
applied to each of the three components.
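A minimal GM(1,1) routine, applied to each of the three component series separately, can be sketched as follows (Python with numpy; this follows the standard GM(1,1) construction rather than the paper's exact implementation, the names are ours, and the sample series is the crisp middle row of Table 1):

```python
import numpy as np

def gm11_forecast(x0):
    """Fit a standard GM(1,1) grey model to x0; return fitted values for k = 2..n."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # least-squares a, b
    k = np.arange(1, len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response of the AGO series
    return np.diff(np.concatenate([[x0[0]], x1_hat]))  # inverse AGO -> fitted x0

# Crisp middle series P from Table 1; P- and P+ are forecast the same way.
p = [2306, 2408, 2573, 2748, 3154, 3753, 3829, 4497, 5192, 5608]
print(gm11_forecast(p).round(0))
```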
4 Application
In order to obtain the triangular fuzzy numbers of per capita income of farmers
in the central district of Leshan city, it is necessary to fuzzify the
historical data of this area over the past ten years. The historical data are
shown in Table 1:
Table 1. The original data of per capita income of farmers in the central district of Leshan
city between 2000 and 2009
Year 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009
Value (Yuan) 2306 2408 2573 2748 3154 3753 3829 4497 5192 5608
We suppose that:

P(0)(k) = (Pk−, Pk, Pk+),
Pk − Pk− = α,
Pk+ − Pk = β.
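With the symmetric spreads α = β = 100 yuan read off from Tables 1 and 2, the fuzzification step can be sketched as:

```python
# Crisp series from Table 1, fuzzified with alpha = beta = 100 (as in Table 2).
p = [2306, 2408, 2573, 2748, 3154, 3753, 3829, 4497, 5192, 5608]
alpha = beta = 100
fuzzy = [(pk - alpha, pk, pk + beta) for pk in p]
print(fuzzy[0])   # (2206, 2306, 2406) -> first column of Table 2
```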
Table 2. The triangular fuzzy numbers of the original data of per capita income of farmers
in the central district of Leshan city between 2000 and 2009
Year 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009
P− 2206 2308 2473 2648 3054 3653 3729 4397 5092 5508
P 2306 2408 2573 2748 3154 3753 3829 4497 5192 5608
P+ 2406 2508 2673 2848 3254 3853 3929 4597 5292 5708
Now, inputting the triangular fuzzy numbers of the historical data of per
capita income of farmers in the central district of Leshan city into the grey
forecasting model, the predicted values are as shown in Tables 3 and 4:
Table 3. The predicted values of per capita income of farmers in the central district
of Leshan city between 2001 and 2009
Year 2001 2002 2003 2004 2005 2006 2007 2008 2009
P− 2260 2532 2838 3180 3563 3993 4474 5014 5619
P 2166 2435 2737 3077 3459 3888 4371 4914 5524
P+ 2354 2630 2938 3283 3667 4097 4578 5114 5714
Table 4. The predicted values of per capita income of farmers in the central district
of Leshan city between 2001 and 2009
Fig. 5. The actual values and predicted values of Pk−
Ensuring sustained and rapid growth in the per capita income of farmers is
important to the development of the entire national economy and the construction
of a harmonious socialist society. According to the above forecast results, the per
capita income of farmers in the central district of Leshan city shows an overall
upward trend. In addition, the high degree of fit between the predicted values and
the original data indicates the validity of the fuzzy-gray forecasting model and
supports the soundness of the predicted values (as shown in Figs. 5, 6 and 7).
In the past decade, the continuous progress of agricultural science and
technology and the continuous improvement of agricultural supporting policies
have, to a certain extent, improved agricultural productivity and reduced the
burden on farmers. However, the per capita income of farmers is growing slowly;
although there is a rising tendency, it fluctuates because of related economic
factors. In this view, the suggestions are as follows:
First of all, establish a working approach for achieving stable increases in the
income of farmers. The overall approach: the aim is to improve the income
of farmers; the orientation is the market; the support is resources; the basis is
structural adjustment; the driving force is science and technology; the way is
Fig. 7. The actual values and predicted values of Pk+
Third, the Government should implement policies and measures to ensure
that farmers' income increases. (1) It is of great significance for agricultural
development and farmers' incomes to increase financial investment reasonably
and carry out the agricultural subsidy policy. The funds for supporting agri-
culture should focus on poverty alleviation, agricultural foundational facilities,
the research and application of agricultural technology, and green ecological
agriculture. (2) Deepen the reform of rural finance. In the innovation of the
rural financial system, developing small and medium-sized banks and rural finan-
cial guaranty companies will fundamentally solve the problems of rural finance
and provide strong financial support for work on the three rural issues. It also
plays an important regulatory and protective role in improving production and
income as the rural economy develops.
Finally, continue to promote the strategic adjustment of the agricultural struc-
ture and comprehensively improve the quality and efficiency of agriculture. At
this stage, improving the efficiency of agriculture should be market-oriented,
focusing on high quality and the diversification of agricultural varieties. This
means a change from being yield-oriented to being quality- and efficiency-oriented.
6 Conclusion
Through the analysis of the per capita income of farmers during the 30 years
of reform and opening up, it is found that problems such as a low growth rate and
large fluctuations remain in the growth process. Steady growth of farmers'
income provides strong support for China's rapid economic growth and the
smooth progress of reform and opening up.
In this paper, a forecasting model of the per capita income of farmers is
established using fuzzy theory and the gray forecasting system. The central
district of Leshan city provides the relevant data for our case study. First, the
correlation theory of triangular fuzzy numbers is used to deal with the uncertainty;
then the gray forecasting model is used to predict the triangular fuzzy numbers,
which ensures the accuracy of the forecast results. At the same time, this provides
a scientific basis for the formulation and implementation of relevant policies.
Acknowledgements. This work was supported by the Humanities and Social Sci-
ences Foundation of the Ministry of Education (Grant No. 16YJC630089), and the
Sichuan Center for System Science and Enterprise Development Research (Grant No.
Xq16C04).
Yuyan Luo, Zhong Wang(B) , Yao Chen, Yahong Wang, and Jiming Xie
1 Introduction
In recent years, with the transformation of the concept of ecological civilization
construction, realizing low-carbon and circular development has become an
important goal of international economic development and transformation. The
development direction of enterprises has shifted from relying solely on economic
benefit to being oriented toward eco-cycle performance. The evaluation of eco-
cycle operating performance has therefore become an important research topic.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_84
1020 Y. Luo et al.
The evaluation index system should accurately reflect the actual situation of
an enterprise's production and operation management. The establishment of the
index system is a key part of the whole evaluation of eco-cycle operating perfor-
mance, and selecting the right indicators is essential to guarantee the credibility
of the results. Therefore, this paper follows construction principles including
scientific soundness and accuracy, effectiveness and comparability, stability and
dynamism, operability, and the combination of qualitative and quantitative
indicators; drawing on a large number of scholars' research results and the actual
eco-cycle transformation work in enterprises, it uses the analytic hierarchy
process and the expert-consulting method to select indicators and builds an eco-
cycle operational performance evaluation index system covering economic
operation, resource impact control, technology research and development, and
eco-cycle innovation. However, in collecting concrete data on Baotou steel
(annual reports, social responsibility reports and so on) for the evaluation of its
ecological recycling performance, we found that data were missing for several
indicators at the technology research and development layer and the resource
impact control layer. Considering the actual operation of Baotou steel, we
therefore delete the indicators of R & D investment as a proportion of the total
amount and the personnel proportion of R & D at the technology research and
development layer, and the pollution control investment growth rate and the
resource compensation fee increase rate at the resource impact control layer.
The final ecological recycling performance indicators are shown in Fig. 1.
3 Entropy-Topsis Model
Fig. 1. The eco-cycle operational performance evaluation index system (target layer,
criterion layer and index layer; e.g. indicator B2, resource comprehensive recycling
level, under criterion B, resource impact control)
Normalization:
Qab = yab / Σ(a=1..n) yab (1 ≤ a ≤ n, 1 ≤ b ≤ m). (3)
Da+ = sqrt( Σ(b=1..m) (wab − wb+)² ) (a = 1, 2, · · ·, n), (9)
Da− = sqrt( Σ(b=1..m) (wab − wb−)² ) (a = 1, 2, · · ·, n). (10)
where Ha measures how close evaluation object a is to the ideal solution; the
value of Ha lies between 0 and 1, and the larger the value of Ha , the smaller the
distance between the evaluation object and the positive ideal solution, the greater
its distance from the negative ideal solution, and the closer it is to the optimal level.
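The whole entropy-TOPSIS pipeline, Eqs. (3) and (9)–(10) above together with the standard entropy-weight and closeness formulas, can be sketched in pure Python. This is a generic implementation under those standard definitions, not the authors' code:

```python
import math

def entropy_topsis(Y):
    """Entropy-weighted TOPSIS for an n x m matrix Y of standardized,
    non-negative data (rows = evaluation objects, columns = indicators).
    Returns the closeness H of each object to the positive ideal solution."""
    n, m = len(Y), len(Y[0])
    col_sums = [sum(Y[a][b] for a in range(n)) for b in range(m)]
    Q = [[Y[a][b] / col_sums[b] for b in range(m)] for a in range(n)]   # Eq. (3)
    # entropy of each indicator column, then entropy weights
    ent = [-sum(Q[a][b] * math.log(Q[a][b]) for a in range(n) if Q[a][b] > 0)
           / math.log(n) for b in range(m)]
    w_total = sum(1 - e for e in ent)
    w = [(1 - e) / w_total for e in ent]
    W = [[Q[a][b] * w[b] for b in range(m)] for a in range(n)]  # weighted matrix
    pos = [max(W[a][b] for a in range(n)) for b in range(m)]    # positive ideal
    neg = [min(W[a][b] for a in range(n)) for b in range(m)]    # negative ideal
    H = []
    for row in W:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pos)))  # Eq. (9)
        d_neg = math.sqrt(sum((x - q) ** 2 for x, q in zip(row, neg)))  # Eq. (10)
        H.append(d_neg / (d_pos + d_neg))  # standard TOPSIS closeness
    return H
```

On a toy matrix such as [[1, 1], [2, 2], [3, 3]], the dominated first row gets H = 0 and the dominating last row gets H = 1, matching the interpretation of Ha above.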
4 Case Study
Baotou steel group (hereinafter referred to as Baotou steel) is one of the earliest
iron and steel industrial bases constructed after the founding of the People's
Republic of China; construction began in 1954 and it was put into production
in 1957. Turned into a corporate enterprise in 1998, it is the most important iron
and steel industrial base and the largest rare earth industrial base in China, and
one of the biggest rare earth scientific research and production bases as well.
Always committed to diversified development, besides the major businesses of
steel and rare earth, it also owns mining operations. During the "Twelfth
Five-Year" period, "big steel" will be built, committed to becoming the world's
largest production base of rare earth steel and the most competitive rare earth
production and scientific research unit, with annual sales revenue reaching
more than 100 billion yuan. By 2015, the group's total assets had amounted
to 144.9 billion yuan, 33,459 people were employed and 22.501 billion yuan
of sales income had been achieved. At present, Baotou steel has entered the
ranks of China's iron and steel enterprises producing tens of millions of tons, and
has an important influence in the world. In the recent ten years, Baotou steel has
been committed to innovation-led development, promoting ecological cycle
transformation and upgrading, keeping to the environmental protection concept
and building an image of green development. Therefore, based on the practical
situation of Baotou steel, evaluating its performance helps future development
selection and strategic decision-making.
(1) Data selection
According to the availability of data, this paper selects Baotou steel annual
reports during the ten years from 2006 to 2015, part of the data are obtained
by calculations, and others are obtained by the expert scoring method.
(2) Empirical results
In the process of analyzing the index data, in order to eliminate the
influence of dimension on the data, we use formulas (1) and (2) to get
the standardized data shown in Table 1, then use formula (3) to
normalize the data and finally obtain the weight P and the
entropy ϕ.
As is shown in Table 2, the weights of the indicators for economic opera-
tion range between 0.02 and 0.033, the weights of the indicators for resource
impact control range between 0.06 and 0.12, and the weights of the indicators
for ecological circulation innovation range between 0.04 and 0.132. Therefore,
the indicators of resource impact control have a great effect on the ecological
cycle performance of Baotou steel.
Most of the indicators are at a low level: in seven of the ten years the
comprehensive index H stayed below the 0.1 level. But there was a great
reversal in 2010, when the comprehensive value of H reached 0.4607,
the value of H for economic operation even reached about 0.7, and the values
of H for resource impact control and ecological circulation innovation were
also at their highest stage of the ten years (Table 3).
Eco-Cycle Comprehensive Operation Performance Evaluation 1025
Table 1. The standardized data
Year A1 A2 A3 A4 B1 B2
2006 0.1107 0 0.0397 0.0911 0.1048 0.1347
2007 0.1159 0.1197 0.0007 0.0902 0.2048 0.1347
2008 0.1071 0.1112 0.0552 0.089 0.1476 0.1796
2009 0.1117 0.1118 0.0322 0.0866 0.1667 0.1856
2010 0 0.0913 0.7332 0.121 0.119 0.2156
2011 0.1103 0.1126 0 0.1401 0.1571 0.0749
2012 0.1109 0.119 0.0383 0.0949 0 0.0449
2013 0.1108 0.1041 0.0389 0.0905 0.0524 0.015
2014 0.1116 0.1147 0.0351 0.1967 0.0476 0.015
2015 0.111 0.1156 0.0268 0 0 0
Year B3 C1 C2 C3 C4 C5
2006 0.0435 0.1019 0 0.1327 0.142 0.1393
2007 0 0.1019 0.1135 0.1467 0.1514 0.1333
2008 0 0 0.0054 0.1234 0.142 0.1274
2009 0.3913 0.051 0.0054 0.1292 0.0978 0.1298
2010 0.087 0.1911 0.1135 0.1467 0.142 0.1262
2011 0.0435 0.2229 0.2054 0.1257 0.1293 0.119
2012 0.0435 0.0127 0.1946 0.1118 0.142 0.1298
2013 0.1304 0.1274 0.1135 0.0012 0.041 0.0012
2014 0.1304 0.0637 0.1243 0 0.0126 0
2015 0.1304 0.1274 0.1243 0.0827 0 0.094
Table 2. The weight P and entropy ϕ of each indicator
Item Indicator P ϕ
Economical operation A1 0.9542 0.0231
A2 0.953 0.0236
A3 0.4763 0.2634
A4 0.9355 0.0325
Resource impact control B1 0.8624 0.0692
B2 0.8474 0.0768
B3 0.7755 0.1129
Eco-cycle innovation C1 0.765 0.1182
C2 0.7384 0.1316
C3 0.9004 0.0501
C4 0.9 0.0503
C5 0.9036 0.0485
1026 Y. Luo et al.
Table 3. The closeness of the evaluation objects and the positive ideal solution
Fig. 2. The trend of closeness of each evaluation object and positive ideal solution
In order to reflect the current operating state of Baotou steel more intuitively
and clearly, we make a brief line chart, as shown in Fig. 2.
The chart reflects the development status of the last few years clearly
and objectively, and combining it with the specific content of Baotou steel's
annual reports, the following conclusions can be reached.
Baotou steel's comprehensive performance shows huge ups and downs. From
2006 to 2009, the overall ecological cycle operation of Baotou steel lagged
slightly behind, but still grew steadily. In 2010, however, the business of Baotou
steel showed a great-leap-forward trend: the value of H rose from about 0.1 in
2009 to nearly 0.5. For Baotou steel, the year 2010 was not only glorious, but
also a sharp peak of development. In the stage classification, a good ecological
cycle corresponds to the advanced stage (0.6 ≤ N < 0.8) and the highly
advanced stage (0.8 ≤ N < 1.0). Since the highest H value is only 0.46, the
eco-cycle development of Baotou steel is in the middle and primary stage.
Therefore, we can draw the conclusion that before 2010 the performance of
Baotou steel's ecological cycling operation increased slowly; in 2010 the
ecological cycle achieved a good level; but after 2010 industry innovation
gradually slowed, Baotou group's operating performance gradually tumbled,
and the eco-cycle development of Baotou steel remains in the middle and
primary stage.
5 Conclusion
In view of the above empirical results, we conclude that the ecological perfor-
mance of Baotou steel rose to a peak in 2010 and then decreased gradually,
and that the eco-cycle development of Baotou steel is in the middle and primary
stage. The supporting data analysis confirms the reliability of this conclusion.
Based on the conclusions for Baotou Steel Company, some suggestions are
proposed as follows:
(1) Increase resource recycling and sustainable development. From the above
analysis we can clearly observe that the resource impact control indicator is
the cornerstone of the enterprise's ecological cycle operation. Benign circulation
of resources under the guidance of national policy is conducive to the
development of enterprises. From the indexes we studied, we also know that
controlling pollutant emissions, building up chain impact prevention, and
achieving effective material use and effective cycling are measures that promote
the development of enterprise resources. Of course, the level of corporate
resource impact control can also be improved through other means, such as
optimizing the leading industry and constructing new production lines.
(2) Increase investment in green technology research and development. From
the above analysis, eco-cycle innovation plays an important role in the
comprehensive eco-cycle development of enterprises, so ecological cycle
innovation, as a part of leading enterprise development, is greatly important
and cannot be ignored; therefore, increasing investment in green technology
research and development is very important. National policy now encourages
the development of green industry and environmental protection enterprises,
and only by meeting this mainstream trend can real development be achieved.
As a high-energy-consumption, high-pollution steel enterprise, designing and
creating an industry chain with environmental protection as its theme has been
Baotou steel's technological priority among priorities in recent years.
(3) Improve the energy structure. Increasing the recycling of resources and
investment in green technology research and development are currently the
most urgent tasks and measures. For the development of the next ten years,
Baotou Steel Company should take a long-run view, improve the energy
structure of the enterprise, deepen the reform and build
Optimization of Closed-Loop Supply Chain
Model with Part and Module Inventory Centers
1 Introduction
There are many conventional studies considering the above two aspects in
the CLSC model [1–4,7,10,12].
For constructing various facilities, Fleischmann et al. [4] proposed the CLSC
model with three activities of product production, resale and waste disposal in
customer, reused market and disposer market, respectively. Amin and Zhang
[1] proposed the CLSC model with reuse activity. For the reuse activity, they
considered the new part from supplier as well as the reusable part from refur-
bishing center so that all parts are used for producing product at manufacturer
in FL. Wang and Hsu [12] suggested the CLSC model with reuse activity, that is,
recycler in RL classifies the returned product into reusable and unusable mate-
rials, respectively. The reusable materials are then reused at manufacturer in
FL and unusable materials are disposed in landfill area. Similar to Wang and
Hsu [12], Chen et al. [3] also suggested the CLSC model with various reuse
activities. In this CLSC model, recycling center collects the returned product
from customer and then classifies them into reusable and unusable products.
The reusable product is reused at retailer in FL and the unusable product is dis-
assembled into reusable and unusable materials. The reusable material is reused
at manufacturer and the unusable material is treated at waste disposal plant
in RL.
For optimizing the CLSC model, Amin and Zhang [1] and Wang and Hsu
[12] minimized the total costs resulting from each stage of FL and RL. However,
Chen et al. [3] maximized the total profit consisting of total revenue and total
cost.
Although the conventional studies mentioned above considered various activities
of each facility in the FL and RL, they did not explain exactly how to use
the reusable part (or material) at the facilities in the FL. Therefore, in this paper,
we propose a new CLSC model with two inventory centers (part and module
inventory centers) so that the reusable part (or material) is exactly and effec-
tively used for the facilities in FL. In Sects. 2 and 3, the proposed CLSC model
is represented by a mathematical formulation, which is to minimize the sum
of various costs resulting from each stage of FL and RL under satisfying vari-
ous constraints. The mathematical formulation is implemented by an adaptive
hybrid genetic algorithm (a-HGA) approach in Sect. 4. In Sect. 5 for numerical
experiment, various scales of the proposed CLSC model are presented and the
performance of the a-HGA approach is compared with those of several con-
ventional approaches. Finally, in Sect. 6, as a conclusion, the efficiencies of the
proposed CLSC model and the a-HGA approach are demonstrated.
The difference between the proposed CLSC model and the conventional
CLSC models [1,3,12] is that the former considers two inventory centers and
the latter do not take them into account. The detailed logistics are as fol-
lows. Each part supplier at areas 1, 2, 3, and 4 respectively produces new part
types 1, 2, 3, and 4 and then sends them to the part inventory center. Also, each
module assembler assembles new module types 1 and 2 and then sends them to
the module inventory center. The recovery center checks the returned products
from the collection center and then classifies them into recovered modules
(recovered module types 1 and 2) and recovered parts (recovered part types 1,
2, 3, and 4). The recovered parts and modules are sent to the part and module
inventory centers, respectively. Each inventory center has the function that new
parts and modules in the FL and recovered parts and modules in the RL can be
used, respectively, for assembling modules at the module assembler and
producing products at the product manufacturer in the FL. The recovered
products and unrecovered modules at the recovery center are sent to the
secondary market and the waste disposal center, so that they are resold and
landfilled, respectively.
3 Mathematical Formulation
First, some assumptions are presented.
• Only single product is produced.
• The numbers of facility at each stage are already known.
• Among all facilities at each stage, only one facility should be opened.
• Fixed costs for operating each facility are different and already known.
Optimization of Closed-Loop Supply Chain Model 1033
• Unit handling cost at the same stage is the same and already known.
• Unit transportation costs between facilities are different and already known.
• All products from customer are returned to collection center.
• The qualities of recovered part and module at recovery center are identical
with those of new part and module.
The objective function of Eq. (1) is to minimize the total sum of fixed costs,
handling costs and transportation costs. Equations (2)–(9) mean that only one
facility is opened at each stage. Equation (10) implies that the sum of the han-
dling capacities of the suppliers in areas 1, 2, 3 and 4 is the same as or greater
than that of the part inventory center. The same meaning is considered in Eqs.
(11)–(17). Equation (18) implies that the sum of the recovered products at each
recovery center is the same as or greater than that of the recoverable products
with α1% at each collection center. Equation (19) restricts the sum of the han-
dling capacities at all part inventory centers to be the same as or greater than
that of the recoverable products with α2% at all recovery centers. Equations
(20)–(21) indicate the same meanings as Eqs. (18) and (19). Equations (22)–(29)
restrict the variables to the integers 0 and 1. Equation (30) means non-negativity.
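The "exactly one facility open per stage" structure can be made concrete with a deliberately tiny brute-force sketch in Python. All facility counts and costs below are invented for illustration (three stages only, far smaller than the real model), not the paper's data:

```python
from itertools import product

# Hypothetical three-stage instance: choose exactly one facility per stage to
# minimize fixed + handling + transportation cost, mirroring the role of
# Eqs. (1)-(9) on a toy scale.
fixed = {"manufacturer": [120, 95, 110], "distribution": [40, 55], "retailer": [20, 25, 30]}
trans_md = [[8, 12], [9, 7], [11, 10]]   # manufacturer -> distribution center
trans_dr = [[3, 5, 4], [6, 2, 3]]        # distribution center -> retailer
demand, unit_handling = 1500, 0.01       # same unit handling cost at each stage

best = None
for m, d, r in product(range(3), range(2), range(3)):
    cost = fixed["manufacturer"][m] + fixed["distribution"][d] + fixed["retailer"][r]
    cost += demand * unit_handling * 3            # handling at the three stages
    cost += trans_md[m][d] + trans_dr[d][r]       # route through opened facilities
    if best is None or cost < best[0]:
        best = (cost, (m, d, r))
```

Brute force is only viable at this toy size; the real model's many stages and 0-1 variables are why the paper turns to the a-HGA in the next section.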
4 A-HGA Approach
The mathematical formulation is implemented using the a-HGA approach. The
a-HGA approach is a hybrid approach with an adaptive scheme. For the hybrid
approach, a conventional GA and Cuckoo search (CS) are used, and for the
adaptive scheme, Srinivas and Patnaik's approach [11] is used. Using the hybrid
approach can achieve a better improvement of the solution than using a single
approach does. By using the adaptive scheme, the rates of the crossover and
mutation operators used in the GA are automatically regulated. The detailed
implementation procedure [5,6,8] is as follows.
Step 1. GA approach
Step 1.1. Representation: a 0–1 bit representation scheme is used for
effectively representing the opening/closing decision of all facilities
at each stage.
Step 1.2. Selection: an elitist selection strategy in an enlarged sampling
space is used.
Step 1.3. Crossover: the two-point crossover operator (2X) is used.
Step 1.4. Mutation: the random mutation operator is used.
Step 1.5. Reproduce offspring.
Step 2. CS approach: apply the Levy flight scheme [8] to the offspring of the GA
and produce new solutions.
Step 3. Adaptive scheme: apply the adaptive scheme of Srinivas and Patnaik [11]
to regulate the crossover and mutation rates.
Step 4. Termination condition: if the pre-determined stop condition is satisfied,
stop all steps; otherwise go to Step 1.2.
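The steps above can be sketched as a minimal Python loop. This is an assumed structure based on the procedure described, not the authors' implementation; the heavy-tailed draw is a simplified Mantegna-style stand-in for the Lévy-flight CS step, and the rate schedule is a Srinivas-Patnaik-style adaptation for minimization:

```python
import random

def a_hga(cost, n_bits, pop_size=30, generations=60, seed=1):
    """Sketch of the a-HGA loop: elitist GA with two-point crossover, random
    bit-flip mutation, a Levy-flight-style perturbation (CS step), and
    adaptive crossover/mutation rates.  `cost` is minimized over 0-1 lists."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        costs = [cost(c) for c in pop]
        c_min, c_avg = min(costs), sum(costs) / len(costs)

        def rate(k, c_val):
            # better-than-average chromosomes get lower rates (protection)
            if c_val <= c_avg:
                return k * (c_val - c_min) / (c_avg - c_min + 1e-12)
            return k

        children = []
        while len(children) < pop_size:
            i, j = rng.sample(range(pop_size), 2)
            a, b = pop[i][:], pop[j][:]
            if rng.random() < rate(0.9, min(costs[i], costs[j])):
                lo, hi = sorted(rng.sample(range(n_bits), 2))  # two-point crossover
                a[lo:hi], b[lo:hi] = b[lo:hi], a[lo:hi]
            for c in (a, b):
                pm = rate(0.1, cost(c))
                for k in range(n_bits):                        # random mutation
                    if rng.random() < pm:
                        c[k] ^= 1
                # CS step: flip a heavy-tailed number of random bits
                u, v = rng.gauss(0, 1), abs(rng.gauss(0, 1)) + 1e-9
                for _ in range(min(n_bits, int(abs(u) / v ** (2.0 / 3.0)))):
                    c[rng.randrange(n_bits)] ^= 1
                children.append(c)
        # elitist selection in the enlarged sampling space (parents + children)
        pop = sorted(pop + children, key=cost)[:pop_size]
    return min(pop, key=cost)

# toy usage: minimize the number of opened facilities (all-zeros is optimal)
best = a_hga(lambda c: sum(c), n_bits=20)
```

In the real model the cost function would evaluate Eq. (1) subject to the constraints, with one group of bits per stage encoding which facility is opened.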
5 Numerical Experiments
In the numerical experiment, three scales of the proposed CLSC model are presented.
Each scale has various sizes of part suppliers in areas 1, 2, 3 and 4, part inventory
center, module assemblers in areas 1 and 2, module inventory center, product
manufacturer, distribution center and retailer in the FL, and customer, collection
center, recovery center, secondary market, and waste disposal center in the RL.
The detailed sizes of each scale are shown in Table 1. For each scale, 1,500
products are produced in the FL and handled in the RL. The rates at the recovery
center for handling the returned products from the collection center are as follows: α1 = 60%,
1038 Y. Yun et al.
Table 1. The sizes of each scale
Scale | Part supplier (areas 1/2/3/4) | Part inventory center | Module assembler (areas 1/2) | Module inventory center | Product manufacturer | Distribution center | Retailer/customer | Collection center | Recovery center | Secondary market | Waste disposal center
1 | 4 4 4 4 | 1 | 3 3 | 1 | 4 | 3 | 10 | 4 | 3 | 10 | 1
2 | 12 12 12 12 | 2 | 8 8 | 2 | 12 | 8 | 15 | 12 | 8 | 15 | 2
3 | 23 23 23 23 | 3 | 20 20 | 3 | 23 | 20 | 25 | 20 | 20 | 25 | 3
Approach Description
GA Conventional GA [5]
HGA Conventional HGA by Kanagaraj et al. [6]
a-HGA Proposed approach in this paper
Lingo Conventional optimization solver by Lindo Systems [9]
Measure Description
Best solution Best value of the objective functions under satisfying all
constraints
Average solution Averaged values of the objective functions under satisfying all
constraints
Average time Average value of the CPU time (Sec.) used for running each
approach
Percentage difference The difference of the best solutions of GA, HGA and a-HGA
when compared with that of Lingo
the search processes of the GA, HGA and a-HGA. Table 4 shows the computation
results by GA, HGA, a-HGA and Lingo.
In scale 1 of Table 4, the a-HGA as well as the GA and HGA give the same
result, and their performances are greater than that of Lingo in terms of the
best solution and percentage difference. In terms of the average solution, the a-
HGA shows the best performance compared with the GA and HGA. However, in
terms of the average time, the a-HGA shows the worst performance and the GA
is the best performer. In scale 2, the performance of the a-HGA is more efficient
than the GA and HGA in terms of the best solution and average solution. In
terms of the percentage difference, the a-HGA is 0.05% and 0.06% more
advantageous than the HGA and GA, respectively. However, in terms
of the average time, the a-HGA is the slowest and the GA is the quickest.
Similar results are also shown in scale 3; that is, the a-HGA shows the
best performance in terms of the best solution, average solution and percentage
difference when compared with the GA, HGA and Lingo. However, the search
speed of the a-HGA is about thirty times slower than those of the GA and HGA.
Figure 2 shows the convergence behaviors of the GA, HGA and a-HGA until the
generation number reaches 200.
In Fig. 2, all approaches show rapid and varied convergence behaviors during
the initial generations. However, after about 50 generations, no approach shows
any further convergence, and the a-HGA shows more efficient behavior than the
GA and HGA.
The result of the detailed material flows in the a-HGA for scale 3 is shown
in Fig. 3. The opened facilities at each stage are displayed as white-coloured
boxes.
Fig. 3. Detailed material flows and facility numbers opened at each stage
The 975 new parts in each area are produced and then sent to the part
inventory center. The recovery center recovers the quality of the returned
products and then sends 225 recovered parts to the part inventory center. The
part inventory center stores 975 new parts and 225 recovered parts, and a total
of 1,200 parts (= 975 + 225) are shipped to the module assembler. The module
assembler assembles the 1,200 parts and produces 1,200 new modules. The
recovery center also sends 300 recovered modules to the module inventory
center, which stores 1,200 new modules and 300 recovered modules. A total of
1,500 modules are sent to the product manufacturer for producing 1,500
products, which are sent to each retailer via the distribution center. In the RL,
the 1,500 products from all customers are returned to the recovery center
through the collection center. 900 recovered products (= 60% × 1,500) of all
returned products are resold at secondary markets, and 75 unrecovered parts
(= 5% × 1,500) are sent to the waste disposal center.
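The material-flow figures above can be checked with simple arithmetic (a consistency check only; the numbers are taken from the text):

```python
# Flow balance for the scale-3 solution in Fig. 3 (per part/module type)
new_parts, recovered_parts = 975, 225
new_modules, recovered_modules = 1200, 300
returned = 1500  # products returned through the collection center

assert new_parts + recovered_parts == 1200           # parts to module assembler
assert new_modules + recovered_modules == returned   # modules to manufacturer
assert round(0.60 * returned) == 900                 # recovered products resold
assert round(0.05 * returned) == 75                  # unrecovered parts landfilled
```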
Based on the results of Table 4, Figs. 2 and 3, we can reach the following
conclusions.
• The proposed CLSC model can represent the detailed material flows at each
stage and effectively handle the recovered parts and modules by using the two
inventory centers, when compared with the conventional models of Amin and
Zhang [1], Wang and Hsu [12], and Chen et al. [3].
• The a-HGA approach is more efficient in many measures of performance
than the GA, HGA and Lingo, which implies that the former can explore the
whole search space better than the latter do.
• The search speed of the a-HGA is significantly slower than those of the GA
and HGA, since the former has an adaptive scheme to regulate the crossover
and mutation operators and its search structure requires more time.
6 Conclusion
In this paper, we have proposed a new type of CLSC model. The proposed
CLSC model has part suppliers at areas 1, 2, 3, and 4, a module assembler,
product manufacturer, distribution center and retailer for the FL, and a customer,
collection center, recovery center, secondary market and waste disposal center
for the RL. In particular, for effectively handling recovered parts and modules,
two inventory centers (part and module inventory centers) have been used.
The proposed CLSC model has been represented by a mathematical formula-
tion, which minimizes the sum of handling cost, fixed cost and transportation
cost resulting from each stage of the FL and RL under various constraints.
The mathematical formulation has been implemented by the a-HGA approach.
The a-HGA approach is a hybrid algorithm combining the GA and CS
approaches; using an adaptive scheme, the rates of the crossover and mutation
operators are automatically regulated. The a-HGA approach has been applied
to various scales of the proposed CLSC model to compare its performance with
those of the GA, HGA and Lingo. The experimental results have shown that
the a-HGA approach is more efficient in terms of various measures of
performance than the GA, HGA and Lingo. However, since the search speed
of the a-HGA approach is significantly slower than those of the others, room
for improvement remains in the a-HGA approach.
References
1. Amin SH, Zhang G (2012) An integrated model for closed-loop supply chain
configuration and supplier selection: multi-objective approach. Expert Syst Appl
39(8):6782–6791
2. Amin SH, Zhang G (2013) A multi-objective facility location model for closed-
loop supply chain network under uncertain demand and return. Appl Math Model
37(6):4165–4176
3. Chen YT, Chan FTS, Chung SH (2014) An integrated closed-loop supply chain
model with location allocation problem and product recycling decisions. Int J Prod
Res 53(10):3120–3140
4. Fleischmann M, Krikke HR et al (2000) A characterisation of logistics networks
for product recovery. Omega 28(6):653–666
5. Gen M, Cheng R (1997) Genetic algorithms and engineering design. Wiley, New
York
6. Gen M, Cheng R (2000) Genetic algorithms and engineering optimization. Wiley,
New York
7. Georgiadis P, Besiou M (2008) Sustainability in electrical and electronic equip-
ment closed-loop supply chains: a system dynamics approach. J Cleaner Prod
16(15):1665–1678
8. Kanagaraj G, Ponnambalam SG, Jawahar N (2013) A hybrid cuckoo search and
genetic algorithm for reliability–redundancy allocation problems. Comput Ind Eng
66(4):1115–1124
9. Lingo (2015) Lindo Systems. www.lindo.com
10. Savaskan RC, Bhattacharya S, Van Wassenhove LN (2004) Closed-loop supply
chain models with product remanufacturing. Manage Sci 50(2):239–252
11. Srinivas M, Patnaik LM (1994) Adaptive probabilities of crossover and mutation
in genetic algorithms. IEEE Trans Syst Man Cybern 24(4):656–667
12. Wang HF, Hsu HW (2010) A closed-loop logistic model with a spanning-tree based
genetic algorithm. Comput Oper Res 37(2):376–389
The Innovation Research of College Students’
Academic Early-Warning Mechanism Under
the Background of Big Data
Yu Li and Ye Zhang(B)
1 Introduction
After the reform and opening up, China's higher education changed from elite education to mass education. Since the 1980s, the admission rate of the college entrance examination has risen from 7% to 80%, but the quality of students has been declining and academic problems have been increasing year by year. For instance, the number of students who are demoted or drop out of school has increased markedly. Therefore, it is necessary to establish an academic early-warning mechanism to solve these problems.
There is no uniform definition of big data at present. Citing the McKinsey Global Institute's report Big Data: The Next Frontier for Innovation, Competition, and Productivity, big data refers to datasets whose scale exceeds the ability of traditional software tools to capture, store, manage and analyze [8]. Although the definition of big data is not uniform, its characteristics are clear and widely recognized. (1) Huge data volume: a variety of terminal devices and sensors produce a great deal of data, and PB-scale data sets can be regarded as normal. (2) Variety of data types: in the era of big data there is more and more unstructured data, including network logs, audio, video, pictures and geographical information, and these different types of data place higher requirements on data processing. (3) High processing speed: this is the most significant feature distinguishing big data technology from traditional data technology; in the face of massive amounts of complex data, big data can handle real-time data faster. (4) Low value density: the value density is inversely proportional to the size of the data; in an hour of continuous monitoring video, the useful data may amount to only one or two seconds.
The first characteristic of big data is its large scale. With the development of information technology, a variety of information systems, databases, cloud storage, the Internet, the Internet of Things and mobile intelligent terminals have, especially in recent years, increased rapidly in social systems. Data sharing has become very easy, so the data scale is expanding constantly. At the same time, big data also has complex and diverse data structures; owing to the large, complex and volatile data, it is becoming more and more difficult to obtain hidden, useful knowledge, and traditional data warehouses and data-mining-oriented processing modes can no longer meet the requirements. Big data has stronger decision-making power, insight and process optimization capabilities, which bring new changes to data processing, storage and conversion.
The latent value of big data lies in the correlations between the data; big-data thinking is a shift from traditional causal analysis to correlation analysis. More and more countries, governments, industries, enterprises and other institutions have realized that big data is becoming an organization's most important asset, and the capability of data analysis is becoming an organization's core competitiveness [11]. At present, governments have put big data into application to improve people's daily lives, and the Internet has penetrated into various industries. Big data has had an enormous effect on education: it has been integrated with education and is promoting the reform of the educational model. Besides, it lays the groundwork for applications in college students' academic early-warning mechanisms.
Big data prediction is based on objective data that has already been obtained rather than on subjective inference. This analysis method rests on huge amounts of data, which distinguishes it clearly from the sampling methods used in the past: it can correct the errors of the sampling method and improve the accuracy of the analysis results. Because big data analysis can obtain precise results, it more easily gains users' trust and thus has wide utilization. Applying big data technology to the college students' academic early-warning mechanism will make early warnings more timely and accurate.
With the coming of big data, the basis of college student affairs work is to master the huge volume of data, which makes it possible to truly understand the behavioral characteristics of college students and to work out effective education, management and service countermeasures. First, colleges and universities should conduct top-level design, that is, an overall vision of an integrated data platform within their own campus. In the construction of the smart campus at many universities, each department considers only its own needs, so data sharing and integration are difficult to achieve. The school should therefore set up a coordination department and an academic early-warning department to build a data center that gathers the information of the administrative office, academic affairs, student affairs, the library and other related departments, forming an information platform. Secondly, an online data-collection platform should be built to form a large database of students across the school, ensuring the timely and complete collection of all student data. At the same time, colleges and universities should, from an overall point of view, do a good job of data classification, hierarchical collection and planning to ensure the diversity of data sources and data types. Thirdly, colleges and universities should take the initiative to share social databases, contributing to the establishment of a big database of education. In this paper, educational data are divided into three categories: explicit data, behavioral data and system data (Fig. 1). Explicit data are the data input or output by the end user. Behavioral data, also called control data, are designed by developers to record the data of users' operation processes and are generally visible only to the administrator. System data are generated automatically by the system; behavioral data and system data are both hidden data [9].
The work of this system is to dynamically monitor the behavior of college students inside and outside the classroom, using big data mining technology to analyze the collected information and to discover abnormal phenomena or behaviors of college students. In the big data environment, outlier mining algorithms are used to find data that deviate from regular objects and to analyze these outliers. Outlier data deviate from normal data; they may be produced by errors in the data-formation process, and in early data analysis they were often eliminated directly. However, since the data come from reality, much outlier data does not arise from errors: the corresponding data source itself may contain a special behavior, and the outlier may carry very important information indicating the emergence of a new situation. These cases need to be distinguished, so the mining and analysis of outlier data is of great significance. The general idea of outlier mining is: given a data set with n data objects and a desired number of outliers k (k ≤ n), dig out the k objects that differ most significantly from the rest of the data set [1]. There are a number of outlier mining algorithms, including statistics-based, distance-based and deviation-based detection algorithms, as well as outlier detection using conventional data mining algorithms. Through scientific monitoring and analysis, adverse factors can be identified in time and academic early warnings issued to avoid academic deterioration.
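The distance-based variant mentioned above can be sketched very simply: score each record by its distance to its m-th nearest neighbour and report the k highest-scoring records as outliers. This is a minimal illustration of the general idea, not the authors' implementation; the example records (lateness count, internet hours) are invented.

```python
import math

def top_k_outliers(points, k, m=2):
    """Distance-based outlier mining: score each object by the distance
    to its m-th nearest neighbour and return the indices of the k
    highest-scoring objects (k <= n, as in the text)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    scores = []
    for i, p in enumerate(points):
        d = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append((d[m - 1], i))   # distance to m-th nearest neighbour
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

# e.g. weekly records per student: (times late to class, internet hours)
records = [(0, 2), (1, 3), (0, 3), (1, 2), (9, 40)]
print(top_k_outliers(records, k=1))   # the isolated record is flagged
```

Statistics-based and deviation-based detectors differ only in the scoring function; the "rank and report the top k" structure is the same.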
(1) The early-warning program of the red signal: for red-alert students, we first need to analyze the reasons for their learning difficulties one by one and arrange mutual help among students. The counselor or class teacher should chat with each student as much as possible to help them find the causes of their learning decline and then work out appropriate learning methods to improve their academic grades. Secondly, for red-warning students, we can appropriately reduce their extracurricular activities and increase their learning time and learning content. At the same time, we should not encourage them to take too many non-professional training courses.
(2) The early-warning program of the yellow signal: it is better to give encouragement to yellow-alert students; we should not only affirm their existing achievements but also put forward higher requirements. Moreover, yellow-warning students can be divided into two types to be treated differently. Some students study hard but do not achieve good results; their poor academic performance is not caused by laziness. They spend a lot of time learning but do not learn much, so we can arrange for high-performing students to give them one-on-one help and offer guidance on learning methods. Other students lack initiative in learning; for students with poor autonomy, it is better to apply strict management measures, such as centralized management, supervised evening self-study and a roll-call system.
(3) The early-warning program of the green signal: the green-signal students are excellent. We should guard against their complacency and put forward higher requirements, so that they maintain their existing achievements and continue to improve their comprehensive abilities. Besides, we should ask them to learn their professional courses well while also doing well in minor courses, and inspire them to reach high standards in computer skills and English ability.
(1) Personalized analysis of the abnormal elements of an outlier object. Let kas_i(p), i = 1, 2, ..., q, be the key attribute subspace of the same outlier object p in the i-th test; if p is not an outlier in the i-th test, then kas_i(p) is an empty set. Suppose p is an outlier q_o times, and let c_j be the number of occurrences of attribute a_j over all kas_i(p). Obviously, the bigger c_j is, the more often attribute a_j contributes to p being an outlier. The influence of attribute a_j on p's abnormal study is appraised by c_j/q_o, from which a personalized chart of the influence of abnormal elements can be drawn. For example, suppose one student is judged abnormal 7 times in the 10 tests of one week, which is more than 10/2, so he gets an orange warning; his personal influencing elements and their degrees of influence are: job performance 5/7, times of being late in class 2/7, internet surfing duration 4/7, download traffic 3/7, community activity duration 3/7, and staying-in-dormitory duration 1/7. The composition of the elements affecting this student's abnormal study is shown in Fig. 2.
(2) Analysis of the total degree of influence of element a_j. Let the degrees of influence of a_j on abnormality in the successive tests be c_j/k, i = 1, 2, ..., q. Draw a chart of the elements' influence over time to show the overall situation of the elements affecting abnormal study.
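The per-student computation in (1) can be made concrete. In this sketch, the per-test key attribute sets are invented so as to reproduce the numbers of the example above; we count how often each attribute appears among the outlier tests and divide by the number of outlier tests q_o.

```python
from collections import Counter

def influence_degrees(kas_per_test):
    """kas_per_test: one key-attribute set per test; an empty set means
    the object was not an outlier in that test. Returns ({a_j: c_j/q_o},
    q_o), where q_o is the number of tests in which the object was an
    outlier and c_j counts occurrences of attribute a_j."""
    outlier_tests = [kas for kas in kas_per_test if kas]
    q_o = len(outlier_tests)
    counts = Counter(a for kas in outlier_tests for a in kas)
    return {a: c / q_o for a, c in counts.items()}, q_o

# 10 tests in one week; 7 non-empty sets -> abnormal 7 times (7 > 10/2)
tests = [
    {"job", "late", "net"}, {"job", "net", "down"}, set(),
    {"job", "net", "community"}, {"job", "down", "community"}, set(),
    {"job", "late", "net"}, {"down", "community"}, set(), {"dorm"},
]
deg, q_o = influence_degrees(tests)
print(q_o, deg["job"])   # 7 outlier tests; job performance appears in 5 of 7
```

With these invented sets, the degrees come out as job performance 5/7, lateness 2/7, internet surfing 4/7, download traffic 3/7, community activity 3/7 and dormitory duration 1/7, matching the example.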
(1) The information collection system of academic early-warning: using the university's internal information-collection database and the external educational database, it collects the internal and external information of colleges and universities, and uses big data technology to initialize and classify the data.
6 Conclusion
Big data can collect data that was impossible or costly to obtain in the past, allows teachers to apply their teaching and management abilities more easily in teaching analysis to enhance students' academic performance, and will also set off a revolutionary influence in the field of education. However, the goal of education is the growth of each person: driving people to recognize things with rational thinking and gradually form a relatively complete self-conscious way of thinking, which is so complex that a perfect forecast is impossible. Therefore, on the one hand, big data services for education in the academic early-warning system require cautious implementation in practical application; on the other hand, the field remains attractive and challenging.
References
1. Annika W, Zdenek Z et al (2014) Improving retention: predicting at-risk students
by analyzing clicking behavior in a virtual learning environment
2. Arnold KE, Pistilli MD (2012) Course signals at Purdue: using learning analytics
to increase student success. http://www.itap.purdue.edu/learning/does/research/
ArnolejPistilli-Purdue-University-Course-Signals-2012.pdf. Accessed 31 Dec 2015
3. Cao W (2014) Analysis of the mechanism of the early-warning and aid for the stu-
dents with learning disabilities. Higher Eng Educ Res 2014(5):93–96 (in Chinese)
4. Chen Q (2012) To explore the early warning mechanism of college students under
the credit system. J Guangxi Teachers Educ Univ 2012(28):60–65 (in Chinese)
5. Ding F (2015) An empirical study on the performance of academic early-warning
of college students. Frontiers Pract 2015(2):72–73 (in Chinese)
6. Ge D, Zhang S (2015) Educational data mining methods and applications. Educational Science Press, Beijing
7. Hua J (2016) Learning early-warning system in Taiwan University. Educ Sci
2016(3):41–44 (in Chinese)
8. Manyika J, Chui M, et al (2011) Big data: the next frontier for innovation, com-
petition, and productivity. Analytics pp 3–5
9. U.S. Department of Education Office of Educational Technology (2010) Trans-
forming American education: learning powered by technology. http://www.ed.gov/
sites/default/files/netp2010.pdf
10. Wei S (2016) Digging the value of education data in the era of big data. Mod Educ
Technol 23(2):5–11 (in Chinese)
11. Witten IH, Frank E (2000) Data mining: practical machine learning tools and techniques with Java implementations. Morgan Kaufmann, San Francisco
Bottleneck Management of Multi-stage
Sorting-Packing Operations with Large-Scale
Warehouse System
1 Introduction
In recent years, responding to the diversifying needs of customers and markets by optimizing distribution operations and saving resources has become a trend in the logistics industry. However, a conventional supply chain with plural local DCs (distribution centers) must handle various types of products in small lots, and the efficiency of each operation decreases. Thus, the integration of local DCs into a large-scale DC with an AWS (automatic warehouse system) is considered an effective strategy for improving the product flow in the entire supply chain. As numerous items must be handled in such an integrated DC, adequate design of components such as the warehouse, the sorting-packing lines and their operation system is a critical issue for maintaining competitive advantage.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_87
To optimize the product flow in the entire process of a large-scale integrated DC, this paper discusses the utility of dispatching policies based on TCM (target chasing method) [2,4–7]. TOC (theory of constraints) [1], which identifies and improves the bottleneck processes in multi-stage operations, is an effective way to improve productivity. In this study, various PCVs (pseudo control variables) [3,8] are examined and utilized; these represent the state of the system and are easy to observe but only indirectly controllable. Performance is analyzed by simulation experiments, and the obtained results suggest that the proposed dispatching policies realize a smooth flow of items at each stage of the sorting-packing line. In particular, the dispatching policy for just-in-time delivery to customers, which intends to adapt delivery times to customers, reveals the weak portions of the entire system.
2 Experimental Method
2.1 Target Chasing Method (TCM)
Symbols:
k: Order of job entry;
i: Lot number;
j: Resource type;
I: Set of the products;
R: Number of total resources;
Q: Total number of all products;
N_j: Usage of resource j for manufacturing all products;
X_jki: Usage of resource j for manufacturing the 1st to kth products when product i is entered kth;
D_k: The gap between the ideal and actual resource usage after the kth job entry.
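With these symbols, the classic goal-chasing step of TCM (in the line of Kotani [6] and Monden [7]) can be sketched: at each position k, the product is selected that minimizes the gap D_k between the ideal proportional usage k·N_j/Q and the cumulative usage X_jk that would result from the selection. The data below are invented purely for illustration.

```python
import math

def goal_chasing_pick(candidates, X, k, N, Q, req):
    """One TCM step: among the candidate products, choose the one whose
    entry at position k minimizes
        D_k = sqrt(sum_j (k * N_j / Q - X_jk)^2),
    where X_jk is the cumulative usage of resource j after the pick.
    X[j]: cumulative usage so far; req[i][j]: per-unit requirement of
    resource j by product i; the caller updates X after each pick."""
    def gap(i):
        return math.sqrt(sum((k * N[j] / Q - (X[j] + req[i][j])) ** 2
                             for j in N))
    return min(candidates, key=gap)

# two resources, three candidate products, Q = 3 products to sequence
req = {"A": {1: 1, 2: 0}, "B": {1: 0, 2: 1}, "C": {1: 1, 2: 1}}
N = {1: 2, 2: 2}                 # total usage of each resource
print(goal_chasing_pick(["A", "B", "C"], {1: 0, 2: 0}, k=1, N=N, Q=3, req=req))
```

Product "C", which consumes both resources evenly, keeps the usage closest to the ideal line and is picked first; this is exactly the leveling (Heijunka) behaviour the dispatching policies below aim for.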
1056 T. Sato and H. Katayama
This paper focuses on the utility of PCVs [3,8], which represent the state of the system and are easy to observe but only indirectly controllable (e.g., the number of items and waiting time in queue, and the work load rate of each sorting-packing line), for the TCM-based dispatching policy. Figure 2 shows a schematic diagram of the controlled behavior of TCM using PCVs. The vertical and horizontal axes of Fig. 2 are the PCV values of each resource. In this case, there are two facilities in the same operational stage, and the target of the PCVs (the diagonal line in Fig. 2) is defined as equal values of PCV1 and PCV2. TCM therefore steers the current PCV status toward the target line for Heijunka (leveled) production. For example, at point P in Fig. 2 the value of PCV2 is higher than that of PCV1, so TCM tries to decrease PCV2 and increase PCV1 to reduce the gap between the actual and target PCVs.
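Operationally, this control behaviour amounts to dispatching each new job to the facility whose PCV currently lies below the target line. The toy sketch below assumes the queue length as the PCV, one of the PCVs named above.

```python
def dispatch(pcv):
    """Send the next job to the facility whose PCV value (e.g. number
    of items in queue) is smallest, steering the system toward the
    PCV1 = PCV2 target line of Fig. 2."""
    chosen = min(pcv, key=pcv.get)
    pcv[chosen] += 1          # the job joins that facility's queue
    return chosen

pcv = {"PCV1": 3, "PCV2": 7}  # point P of Fig. 2: PCV2 > PCV1
print(dispatch(pcv))          # jobs go to facility 1 until balanced
```

Repeated dispatching drives the two PCVs together, which is the movement from P toward P' along the time-evolution arrow in Fig. 2.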
Fig. 2. Schematic diagram of target chasing method using pseudo control variables (axes: PCV1 and PCV2; the diagonal line PCV1 = PCV2 is the target of the PCVs, and the state evolves from point P toward P' under control)
[Figure: integration of local DCs into a large-scale DC, showing suppliers, arrival & warehousing, the stock yard, the 1st sorting stage with Automatic Warehouses 1 and 2, packing stations in Area 1, the 2nd sorting stage, the 1st packing stage and packing Area 2, the chute yard, re-warehousing, and shipment to assemblers.]
Table 1. Details and facilities of the operations in each stage of the objective distribution center
[Table content: the stock yard feeds jobs via buffer conveyors into Automatic Warehouse 1 (Aisles 1 and 2, each with shuttle racks and a shuttle) and Packing Area 1; Automatic Warehouse 2 (Aisle 3, shuttle racks and shuttle) connects Buffer Stations 1 and 2 via buffer conveyors to Packing Area 2 (Buffer Stations 4 and 5) and onward to the shipment yard.]
Table 3. Part of the generated lists of job entry and shipping order

(a) The list of job entry (Input):
Time of job entry | Product code | Station of packing (Area 1) | Station of packing (Area 2)
1 | 1 | 3 | 4
2 | 2 | 3 | 4
3 | 1 | 3 | 4
4 | 2 | 3 | 4
5 | 1 | 3 | 4
6 | 3 | 3 | 5
7 | 2 | 3 | 4
8 | 3 | 3 | 5
9 | 3 | 3 | 5
10 | 3 | 3 | 4
11 | 3 | 2 | 4
12 | 2 | 2 | 5
13 | 3 | 2 | 5
14 | 2 | 2 | 5
15 | 1 | 2 | 4
··· | | |
2000 | 1 | 2 | 5

(b) The list of shipping order (Output):
Shipping order | Product code | Lot size | Station of packing (Area 1) | Station of packing (Area 2)
1 | 3 | 4 | 2 | 5
2 | 1 | 4 | 1 | 5
3 | 1 | 6 | 2 | 4
4 | 2 | 4 | 2 | 5
5 | 2 | 4 | 2 | 5
6 | 3 | 6 | 2 | 4
7 | 2 | 6 | 1 | 4
8 | 2 | 4 | 1 | 5
9 | 2 | 6 | 3 | 4
10 | 3 | 6 | 2 | 4
11 | 3 | 6 | 1 | 4
12 | 3 | 6 | 3 | 4
13 | 2 | 6 | 3 | 4
14 | 1 | 4 | 1 | 5
15 | 3 | 6 | 2 | 4
··· | | | |
410 | 2 | 6 | 2 | 4
The lists of job entry and shipping order of packaged products have been generated randomly in advance and are shown in Table 3. Note that all products
Symbols:
x_j: The jth facility x;
OT_i^{x_j}: Dispatched time of product i from facility x_j;
QT_i^{x_j}: Waiting time of product i in facility x_j;
IT_i^{x_j}: Job entry time of product i into facility x_j;
WR^{x_j}: Work load rate of facility x_j;
T: Total processing time of the entire operations;
N^{x_j}: The number of products whose operation is finished in facility x_j.
The proposed TCM-based dispatching policies for each stage of the sorting operation using the AWS are shown in Table 4. The multi-stage operations are improved by a combination of policies for the 1st and 2nd sorting stages. In a policy code S-Xa or S-Xb, the components S, X, and a or b represent the stage of the sorting operation, the index of the policy, and the PCV used (a: number of items in queue; b: waiting time in queue), respectively. For example, the combination of policy codes 1-1a and 2-0 represents adopting the corresponding dispatching rules described in Table 4 for the 1st and 2nd sorting stages, respectively.
Table 4. Proposed TCM-based dispatching policies for each stage of the sorting operations
Fig. 5. The time series of the number of items in queue at each packing station in case
of policies 1-0 and 2-0
Table 5. Averages and standard deviations of the criteria values in case of policies 1-0
and 2-0
(a) Number of items in queue [units]:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   0.661   38.681           44.011   21.503
SD    0.813   52.552           35.066   18.402
(b) Waiting time in queue [unit time]:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   1.000   57.524           30.803   31.534
SD    0.000   55.398           30.315   18.612
(c) Work load rate:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   0.33%   50.01%           22.01%   83.44%
SD    0.46%   49.94%           7.80%    37.17%
Fig. 6. The time series of the number of items in queue at each packing station in case
of policies 1-1a and 2-0
Table 6. Averages and standard deviations of the criteria values in case of policies
1-1a and 2-0
(a) Number of items in queue [units]:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   68.191  2.967            45.355   17.671
SD    72.045  3.912            36.089   15.591
(b) Waiting time in queue [unit time]:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   58.185  3.527            32.874   25.967
SD    75.736  3.209            34.045   17.020
(c) Work load rate:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   34.10%  49.58%           22.68%   82.73%
SD    21.77%  49.94%           7.73%    37.80%
Fig. 7. The time series of the number of items in queue at each packing station in case
of policies 1-1a and 2-1a
Table 7. Averages and standard deviations of the criteria values in case of policies
1-1a and 2-1a
(a) Number of items in queue [units]:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   68.191  2.967            54.340   8.358
SD    72.045  3.912            44.170   7.497
(b) Waiting time in queue [unit time]:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   58.285  3.527            39.689   11.755
SD    75.736  3.209            47.651   7.435
(c) Work load rate:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   34.10%  49.58%           27.17%   82.73%
SD    21.77%  49.94%           10.19%   37.80%
Fig. 8. The time series of the number of items in queue at each packing station in case of policies 1-2a and 2-2a: (a) number of items in queue at the 1st packing stage (Stations 1-3); (b) number of items in queue at the 2nd packing stage (Stations 4 and 5); horizontal axes show processing time [unit time]
Table 8 shows the criteria values for each stage in the case of policies 1-2a and 2-2a. In addition, the dispatching-policy-dependent characteristics of the number of items in queue, the waiting time in queue and the work load rate at each stage are summarized in Fig. 9, in which the bar and line charts are the averages and standard deviations of the criteria values, respectively. From Table 8, the work load rate of AWS 1 is decreased relative to the result of the last section shown in Table 7(c); moreover, the number of items and the waiting time in queue of AWS 1 are greatly decreased from Table 7(a) and (b). This is
Table 8. Averages and standard deviations of criteria values in case of policies 1-2a
and 2-2a
(a) Number of items in queue [units]:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   4.807   3.554            50.920   10.444
SD    11.048  4.554            46.495   8.753
(b) Waiting time in queue [unit time]:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   5.154   4.180            36.578   14.222
SD    20.815  3.276            74.151   7.768
(c) Work load rate:
      AWS1    Packing Area 1   AWS2     Packing Area 2
Ave   2.40%   51.92%           25.46%   86.62%
SD    7.18%   49.90%           12.27%   34.04%
4 Concluding Remarks
This paper examined the performance and effectiveness of the proposed TCM-based dispatching policies using PCVs for the bottleneck management of multi-stage sorting-packing operations in a large-scale AWS-based DC. The results suggest that Heijunka production in each packing stage can be realized by the proposed dispatching policies. Moreover, this paper analyzed the case of a given scheduled shipping order; in this case, the dynamic reassignment of transactions to the operational stations of the 1st and 2nd packing stages facilitates effective optimization of the dispatching operation. Therefore, the proposed TCM-based dispatching policies with dynamic reassignment can be utilized for effective bottleneck management to realize Heijunka operation of the entire system.
The aim of this paper has been the performance analysis of dispatching policies at each sorting stage in the AWS for its appropriate installation. Heijunka of the work load rate of each aisle in the multi-stage AWS and of the operations at each stage is required for further improvement of throughput, which is a possible subject for future analysis.
References
1. Goldratt EM, Cox J (1984) The goal: a process of ongoing improvement. The North
River Press, Great Barrington
2. Hwang R, Katayama H (2009) Integrated procedure of balancing and sequencing
for mixed-model assembly lines: a multi-objective evolutionary approach. Int J Prod
Res 40(21):6417–6441
3. Ishikawa S, Murata K, et al. (2012) Bottleneck management on multi-process
sorting-packing system in distribution centre TCM implementation and its applica-
bility reinforcement. In: Proceedings of The 15th J.S.L.S. national conference. The
Japan society of logistics systems, 2nd–3rd June, pp 143–148 (in Japanese)
4. Katayama H (2006) Lean technology and its extensions-a case of target chasing
method. In: Proceedings of the 11th Cambridge symposium on international manu-
facturing (CSIM 2006), Institute for Manufacturing, University of Cambridge, UK,
27th–28th September, 15 pages in CD-ROM
5. Katayama H, Tanaka M (1996) Target chasing method for work load stabilisation
of production line-some advances of lean management technology. In: Proceedings
of the 20th international conference on computers & industrial engineering (20th
ICC & IE). Korean Institute of Industrial Engineers, Kyongju, 6th–9th October, pp
425–428
6. Kotani S (1983) Sequencing algorithms for the mixed-model assembly line. Toyota
Technol 33(1):31–38 (in Japanese)
7. Monden Y (1983) Toyota production system: an integrated approach to just-in-time,
2nd edn. Industrial Engineering & Management Press, Norcross
8. Sato T, Ishikawa S, Katayama H (2016) Bottleneck management on multi-process
sorting-packing system in large-scale distribution centre. In: Proceedings of the 2016
annual autumn meeting of japan industrial management association. Japan Indus-
trial Management Association, Tokyo, pp 188–189 (in Japanese)
A Carbon-Constrained Supply Chain Planning Model
1 Introduction
Growing evidence shows that human activity leads to global climate change via carbon emissions. Curbing carbon emissions and controlling climate change have become a great challenge that all humanity has to face. As the main carbon emitters, firms must take their carbon emissions seriously.
A conventional way to curb carbon emissions is to improve energy efficiency and install emission-control devices. However, such initiatives require capital investment. Meanwhile, efficient business practices and operational policies are another way to reduce carbon emissions: different manufacturing or ordering frequencies may result in different emissions. Obviously, this way is more cost-effective than investing in novel equipment. Some researchers have focused on strategies to reduce carbon emissions through supply chains. Benjaafar et al. [5] integrated carbon emission concerns into operational decision-making with regard to procurement, production and inventory management, and provided a series of operational models under different regulatory policies. Lee [15] used an empirical case study of HMC (Hyundai Motor Company) and its key supplier's front-bumper product to illustrate the issue of the carbon footprint in supply chain management. Absi et al. [1] modeled the integration of carbon emission constraints into lot-sizing problems. Helmrich et al. [11] showed that lot-sizing with an emission capacity constraint is NP-hard and proposed several solution methods. Hua et al. [13] investigated how firms manage carbon footprints in inventory management under the carbon emission trading mechanism. Chen et al. [9] provided analytical support for the notion that it may be possible, via
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_88
1068 Z. Tao and J. Xu
2 Problem Statement
A simple supply chain normally consists of several manufacturing plants and one warehouse or distribution centre (DC), handling several products (see Fig. 1).
[Fig. 1: supply chain network linking plants 1, ..., I to DCs 1, ..., J and customers 1, ..., K]
Given that there is a common decision maker who manages the whole supply chain, the supply chain planning problem can be formulated as a single-level decision-making model. However, an alternative, more realistic way to pose this problem is by recognizing: (i) the natural hierarchy that exists between the production and the distribution parts of the supply chain, and (ii) the fact that full information about the distribution part may not be available at the production part, and vice versa.
Without carbon emission considerations, the plants and the DCs determine the production
amount and the inventory amount, respectively, each with the objective of minimizing its
own cost. In the presence of carbon emission considerations, the plants and the DCs must
also account for the emissions associated with decisions regarding ordering, production,
and inventory holding. In order to develop the mathematical model, we make the following
assumptions:
(1) The emissions are linearly increasing in the associated decision variables;
(2) All the plants belong to one owner and all the DCs belong to another owner; the
DCs' owner has decision priority;
(3) Customers' demands are stochastic;
(4) There is only one period in the planning horizon.
3 Model Formulation
In this section, we develop a mathematical model for the problem described above.
3.1 Notations
Before formulating the mathematical model, the needed notations are listed as follows.
Indices:
i: Plant (1, · · · , I);
j: DC (1, · · · , J);
k: Customer (1, · · · , K);
l: Product (1, · · · , L).
Parameters:
αil : Capacity coefficient of product l at plant i;
βil : Resource coefficient of product l at plant i;
γ jl : Resource coefficient of product l at DC j;
ail : Production cost coefficient for product l at plant i;
bi jl : Transportation cost coefficient for product l from plant i to DC j;
h jkl : Inventory holding cost coefficient for product l at DC j for customer k;
tr jkl : Transportation cost coefficient for product l from DC j to customer k;
ãil : Emission coefficient due to production for product l at plant i;
b̃i jl : Emission coefficient due to transportation for product l from plant i to DC j;
h̃ jkl : Emission coefficient due to inventory for product l at DC j for customer k;
t̃r jkl : Emission coefficient due to transportation for product l from DC j to customer k;
M̃kl : Demand of product l at customer k (stochastic);
Pi : Production capacity of plant i;
Q: Resources available to all the plants;
R j : Inventory capacity of DC j;
CP , CD : Carbon emission caps on the production part and the distribution part, respectively.
Decision variables:
Yi jl : Amount of product l produced at plant i and transported to DC j;
X jkl : Amount of product l held at DC j for customer k.
3.2 Modelling
For the owner of the plants, his (or her) objective is to minimize the costs, which
consist of the manufacturing cost and the distribution cost between plants and DCs.
Hence the objective of the production part ZPC can be written as
$$\min_{Y_{ijl}} Z_{PC} = \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{l=1}^{L} a_{il}Y_{ijl} + \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{l=1}^{L} b_{ijl}Y_{ijl}. \quad (1)$$
We consider the strict emission cap regulation: the cap on carbon emissions of the
production part cannot be exceeded, i.e.,
$$\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{l=1}^{L} \tilde{a}_{il}Y_{ijl} + \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{l=1}^{L} \tilde{b}_{ijl}Y_{ijl} \le C_P. \quad (2)$$
Since production amounts from the plants should meet the levels required at the DC,
we have
$$\sum_{i=1}^{I} Y_{ijl} \ge \sum_{k=1}^{K} X_{jkl}, \quad \forall j, l. \quad (3)$$
The plant capacities are formulated as
$$\sum_{l=1}^{L}\sum_{j=1}^{J} \alpha_{il}Y_{ijl} \le P_i, \quad \forall i, \quad (4)$$
and the resources shared by all the plants are limited:
$$\sum_{l=1}^{L}\sum_{i=1}^{I}\sum_{j=1}^{J} \beta_{il}Y_{ijl} \le Q. \quad (5)$$
For the distribution part, the decision maker's objective is also to minimize the costs:
$$\min_{X_{jkl}} Z_{DC} = \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{l=1}^{L} h_{jkl}X_{jkl} + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{l=1}^{L} tr_{jkl}X_{jkl}, \quad (6)$$
in which the first term denotes the inventory holding cost (including material handling
cost) at the DCs and the second the transportation cost from warehouses to customers.
The carbon emissions from the distribution part must also satisfy an emission cap, so
$$\sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{l=1}^{L} \tilde{h}_{jkl}X_{jkl} + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{l=1}^{L} \tilde{tr}_{jkl}X_{jkl} \le C_D. \quad (7)$$
The demand constraints
$$\sum_{j=1}^{J} X_{jkl} \ge \tilde{M}_{kl}, \quad \forall k, l, \quad (8)$$
are stochastic: we cannot ensure that they are satisfied before the exact values of
$\tilde{M}_{kl}$ are known. Several techniques have been developed to handle such stochastic
constraints. Here, the chance-constrained technique proposed by Charnes and Cooper [8]
is applied. Let $\theta_{kl}$ denote prescribed confidence levels. Applying the
chance-constrained technique to (8), the corresponding constraints become
$$\Pr\left\{\sum_{j=1}^{J} X_{jkl} \ge \tilde{M}_{kl}\right\} \ge \theta_{kl}, \quad \forall k, l, \quad (9)$$
where Pr means probability.
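For normally distributed demand, chance constraint (9) admits a standard deterministic equivalent: the total shipment of product l to customer k must be at least $\mu_{kl} + \Phi^{-1}(\theta_{kl})\sigma_{kl}$. A minimal sketch (the demand parameters below are illustrative, not taken from the paper):

```python
from statistics import NormalDist

def deterministic_demand_bound(mu, sigma, theta):
    """Deterministic equivalent of Pr{sum_j X_jkl >= M_kl} >= theta when
    M_kl ~ N(mu, sigma^2): total shipments must be at least
    mu + Phi^{-1}(theta) * sigma."""
    return mu + NormalDist().inv_cdf(theta) * sigma

# Example: demand N(100, 15^2) served at confidence level theta = 0.8;
# Phi^{-1}(0.8) ≈ 0.8416, so the bound is about 112.6.
bound = deterministic_demand_bound(mu=100.0, sigma=15.0, theta=0.8)
```

With this substitution each chance constraint becomes an ordinary linear constraint on the X jkl, so both levels of the model remain linear programs.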
Inventory levels are limited by individual DC capacities, so
$$\sum_{k=1}^{K}\sum_{l=1}^{L} \gamma_{kl} X_{jkl} \le R_j, \quad \forall j. \quad (10)$$
In many cases, the distribution part has decision priority, and decisions of the
production part are affected by parameters that are decided by the distribution part.
For instance, production levels are decided from given information regarding the
inventory conditions. Thus the supply chain planning model can be posed as the
following bilevel decision-making
problem:
$$
\begin{cases}
\min\limits_{X_{jkl}} Z_{DC} = \sum\limits_{j=1}^{J}\sum\limits_{k=1}^{K}\sum\limits_{l=1}^{L} h_{jkl}X_{jkl} + \sum\limits_{j=1}^{J}\sum\limits_{k=1}^{K}\sum\limits_{l=1}^{L} tr_{jkl}X_{jkl} \\
\text{s.t.}
\begin{cases}
\sum\limits_{j=1}^{J}\sum\limits_{k=1}^{K}\sum\limits_{l=1}^{L} \tilde{h}_{jkl}X_{jkl} + \sum\limits_{j=1}^{J}\sum\limits_{k=1}^{K}\sum\limits_{l=1}^{L} \tilde{tr}_{jkl}X_{jkl} \le C_D \\
\Pr\left\{\sum\limits_{j=1}^{J} X_{jkl} \ge \tilde{M}_{kl}\right\} \ge \theta_{kl}, \ \forall k, l \\
\sum\limits_{k=1}^{K}\sum\limits_{l=1}^{L} \gamma_{kl} X_{jkl} \le R_j, \ \forall j \\
X_{jkl} \ge 0, \ \forall j, k, l \\
\text{where } Y_{ijl} \text{ solves} \\
\begin{cases}
\min\limits_{Y_{ijl}} Z_{PC} = \sum\limits_{i=1}^{I}\sum\limits_{j=1}^{J}\sum\limits_{l=1}^{L} a_{il}Y_{ijl} + \sum\limits_{i=1}^{I}\sum\limits_{j=1}^{J}\sum\limits_{l=1}^{L} b_{ijl}Y_{ijl} \\
\text{s.t.}
\begin{cases}
\sum\limits_{i=1}^{I}\sum\limits_{j=1}^{J}\sum\limits_{l=1}^{L} \tilde{a}_{il}Y_{ijl} + \sum\limits_{i=1}^{I}\sum\limits_{j=1}^{J}\sum\limits_{l=1}^{L} \tilde{b}_{ijl}Y_{ijl} \le C_P \\
\sum\limits_{i=1}^{I} Y_{ijl} \ge \sum\limits_{k=1}^{K} X_{jkl}, \ \forall j, l \\
\sum\limits_{l=1}^{L}\sum\limits_{j=1}^{J} \alpha_{il}Y_{ijl} \le P_i, \ \forall i \\
\sum\limits_{l=1}^{L}\sum\limits_{i=1}^{I}\sum\limits_{j=1}^{J} \beta_{il}Y_{ijl} \le Q \\
Y_{ijl} \ge 0, \ \forall i, j, l.
\end{cases}
\end{cases}
\end{cases}
\end{cases} \quad (11)
$$
4 Algorithm
Algorithms are the bridge between theoretical models and practical applications. The
NP-hardness of bilevel programming problems makes it challenging to design efficient
algorithms. Researchers have proposed many approaches to solving bilevel programming
problems [2, 4, 6, 14, 16, 18]. Among these, interactive fuzzy programming methods have
become popular in recent years [3, 20, 24].
Model (11) can be reformulated in a general bilevel programming framework as
follows:
$$
\begin{cases}
\max\limits_{x \in \mathbb{R}^{n_1}} F(x, y) \\
\text{s.t.}
\begin{cases}
G(x, y) \le 0 \\
\text{where } y \text{ solves:} \\
\begin{cases}
\max\limits_{y \in \mathbb{R}^{n_2}} f(x, y) \\
\text{s.t. } g(x, y) \le 0,
\end{cases}
\end{cases}
\end{cases} \quad (12)
$$
where x ∈ ℝ^{n_1} is the decision vector of the upper-level decision maker (the leader)
and y ∈ ℝ^{n_2} is the decision vector of the lower-level decision maker (the follower);
F(x, y) is the objective function of the upper-level model and f (x, y) is that of the
lower-level model; G(x, y) denotes the constraints of the upper-level problem and
g(x, y) those of the lower-level problem.
The key idea is that, for initialization, the decision makers at both levels consider
their individual objectives separately. Let D = {(x, y) | G(x, y) ≤ 0, g(x, y) ≤ 0}.
For each objective function, assume that the decision makers have fuzzy goals such as
"the objective should be substantially less than or equal to some value". The individual
best values
$$F^{\min} = \min_{(x,y) \in D} F(x, y), \qquad f^{\min} = \min_{(x,y) \in D} f(x, y)$$
are referred to when the decision makers elicit membership functions prescribing the
fuzzy goals for the objective functions. The decision makers determine membership
functions μ1(F(x, y)) and μ2( f (x, y)) that are strictly monotonically decreasing in the
objective functions, consulting the variation of the degree of satisfaction over the
interval between the individual best and worst values. The domains of the membership
functions of the leader and the follower are the intervals [F^min, F^max] and
[ f^min, f^max], respectively. For simplicity, in this paper we adopt a linear membership
function to characterize the fuzzy goal of the decision maker at each level. The
corresponding linear membership functions μ1(F(x, y)) and μ2( f (x, y)) are defined as:
$$\mu_1(F(x, y)) = \begin{cases} 0 & \text{if } F(x, y) > F^{\max} \\ \dfrac{F^{\max} - F(x, y)}{F^{\max} - F^{\min}} & \text{if } F^{\min} \le F(x, y) \le F^{\max} \\ 1 & \text{if } F(x, y) < F^{\min}, \end{cases} \quad (13)$$
and
$$\mu_2(f(x, y)) = \begin{cases} 0 & \text{if } f(x, y) > f^{\max} \\ \dfrac{f^{\max} - f(x, y)}{f^{\max} - f^{\min}} & \text{if } f^{\min} \le f(x, y) \le f^{\max} \\ 1 & \text{if } f(x, y) < f^{\min}. \end{cases} \quad (14)$$
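The piecewise-linear memberships (13)–(14) can be written as one clamped linear function; a small sketch (the bound values below are illustrative):

```python
def linear_membership(z, z_min, z_max):
    """Linear membership of Eqs. (13)-(14) for a minimized objective z:
    1 at or below z_min (fully satisfied), 0 above z_max, linear between."""
    if z > z_max:
        return 0.0
    if z < z_min:
        return 1.0
    return (z_max - z) / (z_max - z_min)

# For an objective whose best value is 100 and worst value is 200:
assert linear_membership(100, 100, 200) == 1.0   # best value: full satisfaction
assert linear_membership(150, 100, 200) == 0.5   # halfway between bounds
assert linear_membership(250, 100, 200) == 0.0   # beyond the worst value
```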
After eliciting the membership functions, the leader subjectively specifies a minimal
satisfactory level δ̂ ∈ [0, 1] for his or her membership function μ1(F(x, y)). Then the
follower maximizes his or her own membership function subject to the condition that the
leader's membership function μ1(F(x, y)) is at least δ̂ under the given constraints;
that is, the follower solves the following problem:
$$
\begin{cases}
\max\limits_{(x, y)} \mu_2(f(x, y)) \\
\text{s.t.}
\begin{cases}
\mu_1(F(x, y)) \ge \hat{\delta} \\
(x, y) \in D.
\end{cases}
\end{cases} \quad (15)
$$
If an optimal solution to problem (15) exists, the leader obtains a satisfactory
solution with a satisfactory degree at least equal to the minimal satisfactory level he
or she specified. However, the larger the minimal satisfactory level, the smaller the
follower's satisfactory degree becomes. Consequently, the relative difference between
the satisfactory degrees of the leader and the follower grows, and the overall
satisfactory balance between the two levels may not be maintained.
To take the overall satisfactory balance between the two levels into account, the
leader needs to compromise with the follower on the minimal satisfactory level. To do
so, a satisfactory degree of both decision makers is defined, but an obtained solution
(x∗, y∗) does not always satisfy the corresponding condition. Then the ratio of the
satisfactory degrees of the two levels,
$$\Delta = \frac{\mu_2(f(x^*, y^*))}{\mu_1(F(x^*, y^*))}, \quad (18)$$
is adopted.
If Δ > 1, i.e., μ2( f (x∗, y∗)) > μ1(F(x∗, y∗)), the leader updates the minimal
satisfactory level δ̂ by increasing it. Receiving the updated level δ̂, the follower
solves problem (15) again; the leader then obtains a larger satisfactory degree and the
follower accepts a smaller one. Conversely, if Δ < 1, i.e., μ2( f (x∗, y∗)) <
μ1(F(x∗, y∗)), the leader updates δ̂ by decreasing it, obtaining a smaller satisfactory
degree while the follower accepts a larger one. The interactive process terminates when
two conditions hold [21].
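The interactive scheme can be sketched as a simple update loop on δ̂. Here `solve_follower` is a hypothetical placeholder for solving problem (15), and the fixed-step update rule is illustrative rather than the authors' exact procedure, which terminates on the conditions of [21] with Δ kept inside prescribed bounds [lo, hi]:

```python
def interactive_balance(solve_follower, delta=0.9, lo=0.6, hi=1.0,
                        step=0.05, max_iter=100):
    """Sketch of the interactive loop: the leader adjusts the minimal
    satisfactory level delta until the ratio Delta = mu2/mu1 of Eq. (18)
    lies within [lo, hi]."""
    for _ in range(max_iter):
        mu1, mu2 = solve_follower(delta)   # stands in for problem (15)
        ratio = mu2 / mu1                  # Delta of Eq. (18)
        if lo <= ratio <= hi:
            return delta, ratio            # satisfactory balance reached
        if ratio > hi:                     # follower relatively too satisfied
            delta = min(delta + step, 1.0)
        else:                              # leader relatively too satisfied
            delta = max(delta - step, 0.0)
    return delta, ratio

# Toy follower model, purely for demonstration: the leader's membership
# equals delta exactly and the follower obtains the remainder.
delta, ratio = interactive_balance(lambda d: (d, 1.0 - d), delta=0.9)
```

Starting from δ̂ = 1.0 as in the numerical example below, the same loop walks δ̂ down until Δ enters the interval [0.6, 1.0].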
5 Numerical Example
Consider a supply chain consisting of three plants, two DCs, and three customers, with
two products. The parameter values are listed in Table 1. The demands follow normal
distributions.
The resources available to all the plants are 10000. The production capacities of the
plants are 1800, 1500 and 1200, respectively. Assume that all θkl = 0.8.
Let CP = CD = 6000. Set the initial minimal satisfactory level to δ̂ = 1.0, and the
lower and upper bounds of Δ to [0.6, 1.0]. Using the algorithm described in
Sect. 4, the decisions are obtained after three iterations:
X^*_{111} = 52.5, X^*_{112} = 22.1, X^*_{121} = 12.8, X^*_{122} = 26.8, X^*_{131} = 72.3, X^*_{132} = 44.2,
X^*_{211} = 12.6, X^*_{212} = 62.5, X^*_{221} = 45.2, X^*_{222} = 92.5, X^*_{231} = 12.5, X^*_{232} = 14.5,
Y^*_{111} = 82.5, Y^*_{112} = 72.5, Y^*_{121} = 62.5, Y^*_{122} = 20.2, Y^*_{211} = 32.0, Y^*_{212} = 26.5,
Y^*_{221} = 65.3, Y^*_{222} = 34.5, Y^*_{311} = 17.5, Y^*_{312} = 42.5, Y^*_{321} = 54.3, Y^*_{322} = 62.5.
The corresponding costs of the leader and the follower are Z^*_{DC} = 60512.8 and
Z^*_{PC} = 32746.8, respectively; their emission amounts are E^*_{DC} = 26128.2 and
E^*_{PC} = 11321.4, respectively.
In order to test the effect of the emission caps on costs, we use different emission
caps to calculate the costs and actual emissions. For simplicity, let CD = CP. The
results are shown in Fig. 2.
Figure 2 shows the impact of varying the emission cap on the total cost and total
emissions for the examples considered. The leader and the follower exhibit similar
trends. As expected, reducing the emission cap increases the total cost and reduces
total emissions. Perhaps surprisingly, however, the emission cap can be reduced
substantially without significantly affecting the total cost: in the example shown,
reducing the emission cap from 6000 to 3000 reduces the average total amount of
emissions by 50% but increases the average total cost by only 10%.
(1) More quantitative analysis. The results in this paper are based on a numerical
example. More quantitative analysis could reveal the mechanisms behind the observed
phenomena, which would be conducive to applying the model.
(2) Scenario analysis under different regulatory emission control policies. In this
paper, only the strict emission cap policy is considered. However, carbon tax, carbon
offset and cap-and-trade policies are also widely used. It is necessary to compare
them in depth.
Acknowledgements. This research was supported by the Programs of NSFC (Grant No.
71401114) and the Fundamental Research Funds for the Central Universities (Grant No.
skqy201524).
References
1. Absi N, Dauzère-Pérès S et al (2013) Lot sizing with carbon emission constraints. Eur J Oper
Res 227(1):55–61
2. Angelo JS, Barbosa HJ (2015) A study on the use of heuristics to solve a bilevel program-
ming problem. Int Trans Oper Res 22(5):861–882
3. Baky IA (2014) Interactive topsis algorithms for solving multi-level non-linear multi-
objective decision-making problems. Appl Math Model 38(4):1417–1433
4. Beheshti B, Özaltın OY et al (2015) Exact solution approach for a class of nonlinear bilevel
knapsack problems. J Glob Optim 61(2):291–310
5. Benjaafar S, Li Y, Daskin M (2013) Carbon footprint and the management of supply chains:
insights from simple models. IEEE Trans Autom Sci Eng 10(1):99–116
6. Bialas WF, Karwan MH (1984) Two-level linear programming. Manage Sci 30(8):1004–
1020
Supply Chain Coordination by Revenue Sharing Contract

Li Lu(B)
1 Introduction
It is common knowledge that large amounts of carbon emissions directly lead to global
warming, which has brought serious challenges to human survival and development. The
best way to solve this problem is to change modes of production and life, so realizing
the sustainable development of a low-carbon economy is becoming a focus of global
attention. Nowadays, governments, including China's, have introduced carbon emission
regulation policies according to their national conditions. Typical carbon policies are
the mandatory carbon emissions cap (Cap) and Cap-and-trade. Both policies impose carbon
constraints under government regulation, and the government assigns different initial
carbon emission quotas to different enterprises. The Cap policy is a mandatory
constraint: enterprises cannot sell or buy carbon emission quotas in a carbon trading
market when their initial quota is excessive or insufficient. The Cap-and-trade policy,
in contrast, allows enterprises to sell or buy carbon emission quotas in the carbon
trading market when the initial quota is excessive or insufficient, in order to meet
production requirements and
Supply Chain Coordination by Revenue Sharing Contract 1079
improve yields [13]. The earliest implementation of the Cap and Cap-and-trade policies
is the European Union Emissions Trading System (EU ETS), which has since become the
world's largest carbon emissions trading market [2].
The implementation of carbon policies in operations management has brought new
challenges to enterprises. It makes management more complex in terms of performance
targets (the dual goals of increasing profits and reducing carbon emissions), decision
variables (production, ordering, pricing and other traditional variables, plus carbon
emission and carbon trading variables) and the decision environment (production
capacity, capital and other traditional limitations, plus carbon constraints). Carbon
emissions arise throughout the entire supply chain; however, existing research on
carbon policies in practice is mainly based on the perspective of the individual
enterprise. Focusing only on individual enterprises cannot effectively solve the
coordination problem between upstream and downstream enterprises, and thus cannot
fundamentally promote the implementation of carbon policies or achieve the ultimate
goal of reducing carbon emissions [15].
The revenue sharing contract is a common supply chain coordination mechanism: the
downstream enterprise buys goods from the upstream enterprise at a relatively low
wholesale price, but shares its sales revenue with the upstream enterprise [3]. In this
paper, we analyze the supply chain coordination problem based on the revenue sharing
contract, and seek to answer the following questions:
(1) Can the revenue sharing contract coordinate the supply chain under different
carbon policies?
(2) How do the different carbon policies affect the revenue sharing ratio?
(3) How do the different carbon policies affect the manufacturer's expected profit?
The rest of this paper is organized as follows: Sect. 2 reviews the relevant
literature; Sect. 3 describes the problem and assumptions; Sect. 4 presents the model
and analysis; a discussion is presented in Sect. 5; finally, Sect. 6 concludes the
paper.
2 Literature Review
In this paper, we review literature spanning two streams. The first addresses supply
chain coordination by revenue sharing contracts; the second addresses supply chain
coordination under different carbon policies.
From the perspective of economics, Mortimer [16] carried out empirical research on the
application of revenue sharing contracts in the DVD rental industry. Giannoccaro and
Pontrandolfo [10] systematically studied the problem of supply chain coordination by
revenue sharing contracts.
1080 L. Lu
Ghosh and Shah [8] took the garment industry as an example, studied the influence of
the green level on prices and profits, and put forward a two-part tariff contract to
coordinate the green supply chain. Swami and Shah [17] considered consumers'
environmental consciousness, argued that the green operation efforts of retailers and
manufacturers in the supply chain affect consumer demand, and designed a cost sharing
contract to realize supply chain coordination. Jaber et al. [12] considered a two-stage
supply chain consisting of a manufacturer and a retailer, and studied the supply chain
coordination mechanism under the Cap-and-trade policy when the manufacturer bears the
cost of carbon emissions. Based on the newsvendor model, Choi [6] analyzed the impact
of wholesale price and price subsidy contracts under the Cap-and-trade policy on
retailers' procurement source selection. Du et al. [7] considered a two-stage supply
chain consisting of a carbon-emission-dependent enterprise and a carbon emission
supplier under the Cap-and-trade policy, and designed a supply chain coordination
mechanism using non-cooperative game theory. Ghosh and Shah [9] explored supply chain
coordination issues arising out of green supply chain initiatives and the impact of a
cost sharing contract on the key decisions of supply chain players undertaking green
initiatives. Xu et al. [18] analyzed the decision behavior and coordination mechanisms
of a two-echelon sustainable supply chain under cap-and-trade regulation. Cao et al.
[5] investigated the government's role in allocating appropriate emission quotas to
maximize social members' utilities, and analyzed how the emission-dependent enterprise
improves the revenues of both itself and the whole system through supply chain
collaboration.
From the literature review, we find that the revenue sharing contract is a widely used
supply chain coordination mechanism, but most studies do not consider the impact of
different carbon policies on the coordination mechanism. To fill this gap, we analyze
the supply chain coordination problem based on the revenue sharing contract under
different carbon policies.
Under the revenue sharing contract, the manufacturer's optimal order quantity is
$Q^*_{\varphi_1} = F^{-1}\!\left(\frac{\varphi_1 p + g - w - c_m}{\varphi_1(p - v) + g}\right)$.
To coordinate the supply chain, we must have $Q^*_1 = Q^*_{\varphi_1}$, that is,
$$F^{-1}\!\left(\frac{p + g - c_s - c_m}{p - v + g}\right) = F^{-1}\!\left(\frac{\varphi_1 p + g - w - c_m}{\varphi_1(p - v) + g}\right),$$
so $\frac{p + g - c_s - c_m}{p - v + g} = \frac{\varphi_1 p + g - w - c_m}{\varphi_1(p - v) + g}$, which gives
$$\varphi_1 = \frac{(p - v)(c_m + w) + (w - c_s)g + vg}{(p - v)(c_m + c_s) + vg}.$$
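The coordinating share can be verified numerically: with φ1 computed from the formula above, the critical fractile under the contract coincides with the centralized one, so the order quantities agree for any demand distribution F. A quick check with illustrative parameter values (not from the paper):

```python
# Illustrative parameters: retail price p, salvage value v, goodwill penalty g,
# wholesale price w, supplier unit cost c_s, manufacturer unit cost c_m.
p, v, g, w, c_s, c_m = 10.0, 2.0, 3.0, 2.5, 3.0, 2.0

# Coordinating revenue share from the derivation above.
phi1 = ((p - v) * (c_m + w) + (w - c_s) * g + v * g) / \
       ((p - v) * (c_m + c_s) + v * g)

centralized_fractile = (p + g - c_s - c_m) / (p - v + g)
contract_fractile = (phi1 * p + g - w - c_m) / (phi1 * (p - v) + g)
# The two fractiles coincide, hence the same order quantity under any F.
assert abs(centralized_fractile - contract_fractile) < 1e-9
```

With these numbers φ1 ≈ 0.88, i.e. the manufacturer keeps about 88% of the revenue, which is consistent with the low wholesale price w assumed.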
In order to guarantee that both sides accept this contract, we must ensure that their
respective profits under the revenue sharing contract are greater than before, i.e.,
that a Pareto improvement is realized. We should have
$$\frac{E[\pi_{s1}(Q^*_1)]}{E[\pi_{s1}(Q^*_{\varphi_1})]} > 1, \quad (3)$$
$$\frac{E[\pi_{m1}(Q^*_1)]}{E[\pi_{m1}(Q^*_{\varphi_1})]} > 1. \quad (4)$$
Proof. As
$$\frac{dE[\pi_{m2}(Q_{\varphi_2})]}{dQ_{\varphi_2}} = [\varphi_2(p - v) + g][1 - F(Q_{\varphi_2})] - (w + c_m - \varphi_2 v),$$
we have
$$\frac{d^2 E[\pi_{m2}(Q_{\varphi_2})]}{dQ_{\varphi_2}^2} = -[\varphi_2(p - v) + g] f(Q_{\varphi_2}) < 0,$$
so there exists a unique optimal order quantity $Q^*_{\varphi_2}$ maximizing
$E[\pi_{m2}(Q_{\varphi_2})]$. Setting $\frac{dE[\pi_{m2}(Q_{\varphi_2})]}{dQ_{\varphi_2}} = 0$, we get the optimal order quantity
$$Q^*_{\varphi_2} = F^{-1}\!\left(\frac{\varphi_2 p + g - w - c_m}{\varphi_2(p - v) + g}\right).$$
When $Q^*_{\varphi_2} \le E/e$, the Cap policy does not constrain the supply chain.

Proposition 2. When $Q^*_2 \le E/e$ and $Q^*_{\varphi_2} \le E/e$, the supply chain can
be coordinated by the revenue sharing contract under the Cap policy, where
$$\varphi_2 = \frac{(p - v)(c_m + w) + (w - c_s)g + vg}{(p - v)(c_m + c_s) + vg},$$
and when
$$\varphi_2 \in \left(\frac{E[\pi_{m2}(Q^*_{\varphi_2})]}{E[\pi_{T2}(Q^*_2)]},\; 1 - \frac{E[\pi_{s2}(Q^*_{\varphi_2})]}{E[\pi_{T2}(Q^*_2)]}\right),$$
a Pareto improvement is realized for both the supplier and the manufacturer. When
$Q^*_{\varphi_2} \le E/e < Q^*_2$, the supply chain can be coordinated where
$$\varphi_2 = \frac{g[1 - F(E/e)] - w - c_m}{F(E/e)(p - v) - p};$$
when $Q^*_2 > E/e$ and $Q^*_{\varphi_2} > E/e$, the supply chain can be coordinated
where $\varphi_2 = 1$.
Proof. (1) When $Q^*_2 \le E/e$ and $Q^*_{\varphi_2} \le E/e$, the Cap policy does not
play a role. To achieve supply chain coordination, we should make
$Q^*_2 = Q^*_{\varphi_2}$, that is,
$$F^{-1}\!\left(\frac{p + g - c_s - c_m}{p + g - v}\right) = F^{-1}\!\left(\frac{\varphi_2 p + g - w - c_m}{\varphi_2(p - v) + g}\right),$$
so $\frac{p + g - c_s - c_m}{p - v + g} = \frac{\varphi_2 p + g - w - c_m}{\varphi_2(p - v) + g}$. Simplifying, we obtain
$$\varphi_2 = \frac{(p - v)(c_m + w) + (w - c_s)g + vg}{(p - v)(c_m + c_s) + vg}.$$
In order to ensure that both sides realize a Pareto improvement, we should have
$\frac{E[\pi_{s2}(Q^*_2)]}{E[\pi_{s2}(Q^*_{\varphi_2})]} > 1$ and
$\frac{E[\pi_{m2}(Q^*_2)]}{E[\pi_{m2}(Q^*_{\varphi_2})]} > 1$. As
$E[\pi_{s2}(Q^*_2)] + E[\pi_{m2}(Q^*_2)] = E[\pi_{T2}(Q^*_2)]$,
$E[\pi_{s2}(Q^*_2)] = (1 - \varphi_2)E[\pi_{T2}(Q^*_2)]$ and
$E[\pi_{m2}(Q^*_2)] = \varphi_2 E[\pi_{T2}(Q^*_2)]$, substituting into the conditions
gives $\frac{\varphi_2 E[\pi_{T2}(Q^*_2)]}{E[\pi_{m2}(Q^*_{\varphi_2})]} > 1$.
Simplifying, we get
$$\frac{E[\pi_{m2}(Q^*_{\varphi_2})]}{E[\pi_{T2}(Q^*_2)]} < \varphi_2 < 1 - \frac{E[\pi_{s2}(Q^*_{\varphi_2})]}{E[\pi_{T2}(Q^*_2)]}.$$
(2) When $Q^*_{\varphi_2} \le E/e < Q^*_2$, the Cap policy plays a role. Let
$Q^*_{\varphi_2} = E/e$, that is,
$$F^{-1}\!\left(\frac{\varphi_2 p + g - w - c_m}{\varphi_2(p - v) + g}\right) = \frac{E}{e}.$$
Simplifying, we obtain $F\!\left(\frac{E}{e}\right) = \frac{\varphi_2 p + g - w - c_m}{\varphi_2(p - v) + g}$, and then
$$\varphi_2 = \frac{g\left[1 - F\!\left(\frac{E}{e}\right)\right] - w - c_m}{F\!\left(\frac{E}{e}\right)(p - v) - p}.$$
(3) When $Q^*_2 > E/e$ and $Q^*_{\varphi_2} > E/e$, both order quantities are forced to
$E/e$, so $Q^*_2 = Q^*_{\varphi_2} = E/e$: the supply chain does not need to be
coordinated, the manufacturer keeps all the revenue, and $\varphi_2 = 1$.
This completes the proof.
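Case (2) of the proof can be checked numerically: choosing φ2 by the formula above forces the manufacturer's optimal order exactly to E/e. A sketch with illustrative numbers and normally distributed demand (both are assumptions, not from the paper):

```python
from statistics import NormalDist

# Illustrative parameters; E/e is the emission quota divided by the emissions
# per unit, i.e. the largest quantity that can be produced under the cap.
p, v, g, w, c_m = 10.0, 2.0, 3.0, 2.5, 2.0
demand = NormalDist(mu=100.0, sigma=15.0)
cap_quantity = 95.0  # E/e

F_cap = demand.cdf(cap_quantity)
# phi2 from case (2) of the proof: chosen so that Q*_{phi2} = E/e.
phi2 = (g * (1.0 - F_cap) - w - c_m) / (F_cap * (p - v) - p)

fractile = (phi2 * p + g - w - c_m) / (phi2 * (p - v) + g)
q_star = demand.inv_cdf(fractile)  # manufacturer's optimal order quantity
assert abs(q_star - cap_quantity) < 1e-6
```

Algebraically the fractile under this φ2 equals F(E/e) exactly, so the inverse-CDF step recovers the cap quantity for any continuous demand distribution.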
This proposition suggests that, when the carbon cap does not play a role, the supply
chain can be coordinated by the revenue sharing contract under the Cap policy with a
Pareto improvement for both sides; when the carbon cap does play a role, although a
condition achieving supply chain coordination can be found, a Pareto improvement for
both sides cannot be realized. Because the Cap policy damages the profit of the whole
supply chain, coordination is difficult to achieve in practice.
This proposition shows that, whether the carbon emission quota is sufficient or not,
the supply chain can be coordinated by the revenue sharing contract under the
Cap-and-trade policy. However, the sharing proportion must satisfy a certain
relationship to yield a Pareto improvement for both sides; otherwise the contract is
invalid.
5 Discussions
Through the analysis above, we observe that the revenue sharing contract can coordinate
the supply chain with no carbon emission constraint and under the Cap and Cap-and-trade
policies. Next, we discuss the effect of the carbon policies on the revenue sharing
ratio and the manufacturer's expected profit.
This proposition shows that, under the Cap policy, when the carbon cap does not play a
role (i.e., there is effectively no carbon emission constraint), the manufacturer's
revenue share is greater than that under the Cap-and-trade policy. Obviously, without a
carbon emission constraint the manufacturer's profit is larger, so he can retain a
higher proportion of the revenue.
When the initial carbon quota assigned by the government is sufficient under the Cap
policy, the problem reduces to one without a carbon emission constraint. Taking the
manufacturer's expected profit without a carbon emission constraint as a benchmark, we
study the effect of the carbon policies on the manufacturer's expected profit. In
addition, as the supply chain is coordinated by the revenue sharing contract, we use
the whole supply chain's profit to represent the manufacturer's expected profit.
This proposition indicates that, under certain conditions, the manufacturer's expected
profit under the Cap-and-trade policy is greater than that without a carbon emission
constraint. This is because the Cap-and-trade policy is a mixed strategy combining
market operation and government regulation: it not only controls the total emission
amount, but also offers enterprises an opportunity to buy or sell carbon emission
quotas, enabling them to guarantee production or gain more profit.
6 Conclusions
In this paper, taking a two-echelon supply chain (one supplier and one manufacturer)
dominated by the supplier as the research object, we analyzed the supply chain
coordination problem based on the revenue sharing contract under the Cap and
Cap-and-trade policies. The analysis provides several interesting observations.
(1) The revenue sharing contract can coordinate the supply chain under both the Cap
and Cap-and-trade policies, and can realize a Pareto improvement for both sides
under certain conditions.
(2) When the carbon policy plays a role, the manufacturer's revenue share under the
Cap-and-trade policy is always less than that without a carbon emission constraint.
Limited by his quota, the manufacturer must buy additional carbon emission quota;
this incurs a cost and reduces profit, so he distributes less profit to the
supplier.
(3) Under certain conditions, the manufacturer's expected profit under the
Cap-and-trade policy is greater than that without a carbon emission constraint,
because the Cap-and-trade policy not only controls the total emission amount but
also offers enterprises the opportunity to buy or sell carbon emission quotas and
thereby gain more profit. The Cap policy has no such function. Theoretically
speaking, therefore, the Cap-and-trade policy is better than the Cap policy.
References
1. Arani HV, Rabbani M, Rafiei H (2016) A revenue-sharing option contract toward
coordination of supply chains. Int J Prod Econ 178:42–56
2. Böhringer C (2014) Two decades of european climate policy: a critical appraisal.
Rev Environ Econ Policy 8(1):1–17
3. Cachon GP (2003) Supply chain coordination with contracts. Handbooks Oper Res
Manag Sci 11:227–339
4. Cachon GP, Lariviere MA (2005) Supply chain coordination with revenue-sharing
contracts: strengths and limitations. Manag Sci 51(1):30–44
5. Cao J, Zhang X, Zhou G (2016) Supply chain coordination with revenue-sharing
contracts considering carbon emissions and governmental policy making. Environ
Prog Sustain Energy 2(35):479–488
6. Choi TM (2013) The international journal of advanced manufacturing technology.
Int J Prod Res 1(6851):835–847
7. Du SF, Zhu LL, Liang L (2013) Emission-dependent supply chain and environment-
policy-making in the cap-and-trade system. Energy Policy 57:61–67
8. Ghosh D, Shah J (2012) A comparative analysis of greening policies across supply
chain structures. Int J Prod Econ 2(135):568–583
9. Ghosh D, Shah J (2014) Supply chain analysis under green sensitive consumer
demand and cost sharing contract. Int J Prod Econ 164:319–329
10. Giannoccaro I, Pontrandolfo P (2004) Supply chain coordination by revenue shar-
ing contracts. Int J Prod Econ 89(2):131–139
11. Govindan K, Popiuc MN (2014) Reverse supply chain coordination by revenue
sharing contract: a case for the personal computers industry. Eur J Oper Res
2(223):326–336
12. Jaber MY, Glock CH, Saadany AME (2013) Supply chain coordination with emis-
sions reduction incentives. Int J Prod Res 1(51):956–968
13. Keohane NO (2009) Cap and trade, rehabilitated: using tradable permits to control
us greenhouse gases. Rev Environ Econ Policy 3(1):42–62
14. Kong G, Rajagopalan S, Zhang H (2013) Revenue sharing and information leakage
in a supply chain. Manag Sci 59(3):556–572
15. Matthews HD, Gillett N et al (2009) The proportionality of global warming to
cumulative carbon emissions. Nature 459:829–832
16. Mortimer JH (2002) The effects of revenue-sharing contracts on welfare in vertically
separated markets: evidence from the video rental industry. University of California,
Los Angeles
17. Swami S, Shah J (2013) Channel coordination in green supply chain management.
J Oper Res Soc 3(64):336–351
18. Xu J, Chen Y, Bai Q (2016) A two-echelon sustainable supply chain coordination
under cap-and-trade regulation. J Cleaner Prod 135:42–56
Research on Integration of Livestock Products
Supply Chain Based on the Optimal Match
Between Supply and Demand
Liang Zhao, Yong Huang(B) , Zhusheng Liu, Mingcong Wu, and Lili Jiang
1 Introduction
China is a major livestock-producing country, and the output value of Chinese livestock
products increases yearly. However, the construction of the livestock products supply
chain (LPSC) is inefficient. At present, large Chinese enterprises in the livestock
products market have only just started to integrate their supply chains, and the market
is still dominated by the production mode of self-employed farmers. Farmers neither
share information nor plan production, which leads to an imbalance between production
and market demand. Once a product sells well, farmers flock to produce it blindly; the
price then falls, and farmers switch to other products that sell well, creating a
vicious circle. In addition, the production of livestock products has many
shortcomings, such as low levels of automation and decentralized farming. If we want to
integrate the supply chain, the issue of matching supply with demand must be addressed.
When farmers make production plans, they face the situation that demand is stochastic
and production plans must be decided before the real demand is known [1].
1090 L. Zhao et al.
2 Literature Review
Research has examined agricultural products at the end of the supply chain,
supermarkets, and information technology under traceability systems and e-commerce
environments, arguing that establishing a new integrated production and marketing
supply chain and circulation system, to achieve networked management of agricultural
product circulation and information, could help Chinese agricultural products enter
further into the global supply chain.
Compared with China, the United States, the Netherlands, the European Union, Japan and
other developed economies have well-developed agricultural supply chains, so there is
much research on the LPSC, especially on beef and pork supply chains. Birthal and
Pratap [10] proposed that the revolutionary progress in livestock production was
demand-driven: the income elasticity of demand was higher than for most other food
commodities, and livestock would have a larger effect on poverty reduction than the
crop sector. Facchioli [3] used a computational simulation tool to coordinate flows of
matter and information. Piewthongngam [8] developed a system dynamics model as a tool
for managers to visualise the movement of the entire production chain. William, Norina
and Cassavant [13] regarded SCRA as a "production adjustment & consumer-driven" system.
Van Roekel, Kopicki and Broekmans [11] gave four basic steps in the development of
agricultural supply chains. Simchi and Kaminsky [9] pointed out that information
sharing and task planning are the key to integrating a supply chain. Min and Zhou [5]
summarized the methods and applications of integrated supply chain modeling, providing
a reference for establishing the ILPSC. Pan and Jean [7] analyzed the links between the
pork supply chains in China and the United States, and proposed that a mechanism
linking enterprises and farmers, a sound logistics operation system and an information
network platform should be established to improve the efficiency of the Chinese pork
processing supply chain. Robert and Jon [6] used a visual modelling environment to
overcome the problems involved in implementing simulation models.
3 Model Description
3.1 Integrated Livestock Products Supply Chain Model
The ILPSC model refers to the integration of all members in the LPSC, based
on the common goals, to achieve true sharing of information and integration of
supply, production and sale. Figure 1 shows the whole ILPSC model, and there
are mainly four parts. The first part is the process of producers, which contains capital goods suppliers and flows. Capital goods suppliers produce pups weekly according to a production plan designed from demand. A collection of several suppliers then makes up a flow, and the combined production of a flow is kept together as it moves through the model. The second part is the process of manufacturers, covering a period of breeding activities.
of processors where livestock products are processed and packaged. The fourth
part is the process of sending products to market.
1092 L. Zhao et al.
In the whole model, the information flow center keeps information flowing smoothly through the whole supply chain and shares it with every part of the model. At the same time, reasonable decisions are made on all links of the supply chain through the integrated information, such as production plans and transportation routes. Early in supply chain construction, the weekly market demand plan can be used to determine whether the number of assets meets market demand. The monitoring center mainly covers flows, manufacturers and processors. Among them, the flows and manufacturers are mainly concerned with the health condition of livestock: whether the livestock are sick, whether the disease is contagious and whether the sick livestock have died. In addition, the weekly sales of processors should be monitored, because each processor needs to sell products to the market weekly, and a batch of products is sent to market over several weeks.
The ILPSC model is a two-way driving model. On the one hand, the model needs the production plan of the most upstream companies to push the entire supply chain; on the other hand, it needs the final market demand to pull it. Therefore, it is necessary to forecast market demand to pull production, guaranteeing that supply and demand information match, and then to push production through the production plan. Each event in the model spans one week.
The goal of the model is to strike a balance between the supply of livestock products to the market and the demand of the market. The supply-demand ratio can be used to reflect the balance of supply and demand. Therefore, whether the target is achieved can be determined by analyzing the mean and variance of all processors' corresponding supply-demand ratios: if the mean is close to 1 and the variance is close to 0, the target is achieved. A sub-goal may also exist, such as cost. A transportation rule can be applied when the transporter works; the distance between different parts of the model is affected by the rule, and the cost is affected by the distance. There are some uncertain factors in the model that are concerned
Research on Integration of Livestock Products Supply Chain 1093
with the goal of the model. The first is the uncertainty associated with the health condition of livestock: whether the livestock are diseased, which type of health episode is caught, whether the animal dies after the illness and in which week after the illness it dies. Secondly, the week in which the livestock products are sent to the market is also an uncertain factor. All of these affect the number transported to the market through the uncertain production losses they impose on the entire system. In this paper, these uncertainties are handled by a simulation model.
The ILPSC model is suitable for livestock products, mainly livestock and poultry meat; livestock includes pigs, cattle, sheep and so on, while poultry includes chickens, ducks, geese and other poultry as well as wild birds. The characteristics of livestock products are that the production quantity can be counted, the production cycle is long, sickness can result in uncertain production losses, production is easily restricted by farm size, animals are transported between different farms, and processing and packaging are needed.
There are many processes in the model; the health condition process and the selection of flows to manufacturers are the most important and are analyzed here.
(1) Health Condition Process
Livestock may get sick during the breeding process, and the type of diseases
may be different. The infectivity of different types of diseases is also different.
If the disease is infectious, the diseased livestock need to be isolated from other
healthy livestock. The flow chart of the process is shown in Fig. 2.
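The branching just described can be sketched as a sampling routine. This is a minimal illustration with placeholder probabilities; the study itself draws disease probabilities and prevalence from its own data.

```python
import random

def health_episode(p_sick=0.08, p_contagious=0.5, p_death=0.3,
                   max_death_week=4, rng=random):
    """Sample one animal's health outcome for the breeding period.

    All probabilities are illustrative placeholders, not values from
    the study. Returns a dict describing the sampled episode.
    """
    if rng.random() >= p_sick:
        return {"sick": False}
    episode = {
        "sick": True,
        # A contagious disease means the animal must be isolated.
        "isolate": rng.random() < p_contagious,
        "dead": rng.random() < p_death,
    }
    if episode["dead"]:
        # The week of death after the illness is itself uncertain.
        episode["death_week"] = rng.randint(1, max_death_week)
    return episode

print(health_episode())  # one sampled episode
```

Running this many times inside the simulation yields the uncertain production losses the model propagates downstream.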
(2) Selection of Flows to Manufacturers Process (based on the shortest distance
rule)
When livestock are transported from flows to manufacturers, a transportation rule such as the shortest-distance rule can be applied: it is preferable to transport the animals to the available manufacturer nearest to the flow. At the same time, the number of livestock transported is constrained by the capacity of the transporters and the manufacturer's assets. The flow chart of the process is shown in Fig. 3.
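The selection logic above can be sketched as follows. This is a minimal sketch under assumed data structures; the farm names, capacities and distances are hypothetical, and the 2,600-head truck capacity is the value from the case study later in the paper.

```python
def select_manufacturer(flow, manufacturers, distance, truck_capacity=2600):
    """Pick the nearest manufacturer with spare capacity for a flow.

    flow          : dict with 'id' and 'count' (animals to ship)
    manufacturers : dict id -> remaining capacity (free assets)
    distance      : dict (flow_id, manufacturer_id) -> miles
    Returns (manufacturer_id, number_shipped), or None if no capacity.
    """
    # Only manufacturers that can still accept animals, nearest first.
    candidates = sorted(
        (m for m, cap in manufacturers.items() if cap > 0),
        key=lambda m: distance[(flow["id"], m)],
    )
    if not candidates:
        return None
    nearest = candidates[0]
    # Shipment is limited by truck capacity and the farm's free assets.
    shipped = min(flow["count"], truck_capacity, manufacturers[nearest])
    manufacturers[nearest] -= shipped
    return nearest, shipped

# Hypothetical example: two growth farms, one flow of 3,000 animals.
farms = {"GF1": 2000, "GF2": 5000}
miles = {("F1", "GF1"): 12.0, ("F1", "GF2"): 30.0}
print(select_manufacturer({"id": "F1", "count": 3000}, farms, miles))
# The nearest farm GF1 receives min(3000, 2600, 2000) = 2000 animals.
```

In the simulation this rule runs repeatedly until the flow's animals are all placed or no capacity remains.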
To achieve the goal, the model needs to be optimized. Many parts of the supply chain have an impact on the goal. In this paper, three factors are mainly considered: the asset numbers of manufacturers, the production plans and the transportation routes.
(1) The asset numbers of manufacturers
This refers to the number of assets in the supply chain where manufacturers breed livestock. Manufacturers breed the pups provided by capital goods suppliers, and they are the key to connecting the upstream and downstream of the supply chain. Therefore, when the production plans of capital goods suppliers change with market demand, the asset number of manufacturers should change accordingly. Whether the asset number of manufacturers is sufficient must therefore be determined first when establishing the model; only when the number of manufacturers' assets is sufficient can the model be optimized further.
(2) The production plans
A production plan is a schedule of the number of pups produced by the capital goods suppliers. It directly determines the approximate quantity of livestock products ultimately sent to the market. Therefore, optimizing the production plan brings the total of livestock products transported to market close to the total market demand.
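As a minimal illustration of this idea, the sketch below scales a weekly hatching plan so that expected output matches total forecast demand; the survival rate and all quantities are invented placeholders, not values from the study.

```python
def scale_plan(weekly_hatchings, weekly_demand, survival_rate=0.95):
    """Scale a hatching plan so expected output matches total demand."""
    expected_output = sum(weekly_hatchings) * survival_rate
    factor = sum(weekly_demand) / expected_output
    return [round(h * factor) for h in weekly_hatchings]

# Hypothetical three-week plan and demand forecast.
plan = [1000, 1200, 1100]
demand = [950, 1150, 1050]
print(scale_plan(plan, demand))  # [1005, 1206, 1105]
```

The study instead searches over candidate plans by simulation, since losses are stochastic rather than a fixed rate.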
4 Empirical Analysis
4.1 Model Overview
The empirical data in this article come from the 2016 Arena Simulation Student Competition: Rockwell Duck Farm Supply Chain Optimization. Two schedules drive the simulation model of the farm system: a market schedule and a hatching farms schedule.
There are currently about 75 hatching farms in the system, which produce a total of approximately 100,000 ducks each week. Hatching farms breed ducks per the hatching farms schedule, and a collection of several hatching farms makes up a flow that moves together. Transport is carried out in descending order of flows, each flow finding the nearest available growth/finish farm to receive its ducks under the shortest-distance rule. Production is affected by the health condition of the ducks; the four duck diseases are shown in Table 2. The probability of getting sick in the four quarters for every hatching farm and the prevalence of the four diseases are based on the data provided in the competition.
There are currently about 230 growth/finish farms in the system. Each farm receives ducks from the hatching farms until it is full, and ducks are bred for 22 weeks in the growth/finish farms. Because ducks with disease 3 and disease 4 need to be separated from other ducks, growth/finish farms 27, 199, 1564, 65 and 179, which are relatively close to each station, are selected to receive the ducks suffering from these two diseases.
There are 11 packers/plants in the system. After being processed and packaged, ducks are sold over 6 weeks, in the proportions shown in Table 3.
Ducks are transported by truck, and each truck is loaded with 2,600 ducks. A distance matrix defines the miles between the central point of a flow's location and a particular growth/finish farm. The simulation time is two years, or 104 weeks. Figure 4 shows the supply chain model of the duck farms established with visual simulation software: the lower left part of the facility interface holds the 75 hatching farms, the top part the 230 growth/finish farms and the lower right part the 11 packers/plants.
Fig. 4. The ILPSC model of duck farms established with visual simulation software
the results of operations. The problem is mainly solved by adjusting the packers/plants corresponding to the three new groups toward the packers/plants whose supply is obviously insufficient. To find the optimal production plan, an experiment is set up. The controlled variable is the number of hatchings per week for the 75 hatching farms, and the target is the SD Rate# of the 11 packers/plants. The SD Rate# in 40 scenarios is shown in Fig. 5.
Since the ultimate goal is to match supply and demand, it can be achieved by finding a scenario where the supply-demand ratio is close to 1 and the SD Rate# is close to 0. Therefore, by comparing the mean and coefficient of variation (CV) of the absolute value of the SD Rate#, the scenario can be found where both the mean and the CV are closest to 0, showing that supply and demand are closest. A mean-CV line chart is shown in Fig. 6.
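The comparison step can be sketched as below. The SD Rate# values are made-up placeholders, and ranking scenarios by the sum of mean and CV is one simple way, assumed here, to formalize "both closest to 0".

```python
from statistics import mean, stdev

def score(sd_rates):
    """Mean and coefficient of variation of the absolute SD Rate# values."""
    abs_rates = [abs(r) for r in sd_rates]
    m = mean(abs_rates)
    cv = stdev(abs_rates) / m if m > 0 else 0.0
    return m, cv

def best_scenario(scenarios):
    """Pick the scenario whose mean + CV of |SD Rate#| is smallest."""
    return min(scenarios, key=lambda name: sum(score(scenarios[name])))

# Hypothetical SD Rate# values for three scenarios over four packers/plants.
scenarios = {
    "s1": [0.30, -0.25, 0.40, -0.10],
    "s2": [0.05, -0.04, 0.06, -0.05],
    "s3": [0.20, 0.18, -0.22, 0.25],
}
print(best_scenario(scenarios))  # s2: deviations closest to 0 overall
```

With the real data this comparison is read off the mean-CV line chart rather than computed by a single combined score.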
The means and CVs of scenarios 2, 4, 16 and 36 are all suitable. Finally, the production plan of scenario 4 is selected. The model is then run again, and the transportation routes from the growth/finish farms to the packers/plants are adjusted based on the results. The adjustment scheme is shown in Table 4 (G/F farm represents growth/finish farm). Since the supplies of packer/plant 5 and packer/plant 9 are sufficient, they are not shown in Table 4.
The line graphs between the supply and demand of the 11 packaging plants
in the original and the optimized model are shown in Fig. 7.
From Fig. 7, it can be seen that the fluctuation range of the optimized model's supply is decreased compared with the original model, and the supply of every packer/plant now fluctuates closely around demand. The supply-demand ratios of the original model and the optimized model are compared in Table 5, and the corresponding line chart is shown in Fig. 8.
According to Fig. 8, the overall supply-demand ratio of the 11 packers/plants in the optimized model is close to 1, and the fluctuation is obviously reduced. Three changes can be seen from Table 5: the mean of the supply-demand ratio of the 11 packers/plants is reduced by 0.0836, the standard deviation is reduced by 0.6979 and the coefficient of variation is reduced by 0.6478. Although the mean of the original model is also close to 1, its standard deviation of 0.8142 means it is very volatile. Since the standard deviation and coefficient of variation after optimization are closer to 0, the optimized model matches supply and demand better.
Fig. 7. The line graphs between the supply and demand of the 11 packaging plants in
the original and the optimized model during 28-104th weeks
Table 5. The supply-demand ratio of the original model and the optimized model
Packer number                  1      2      3      4      5      6      7      8      9      10     11     Mean   SD     CV
Supply-demand ratio            2.6161 1.0385 0.6777 1.1512 0      2.1518 0.6294 0.4204 1.8445 0.7463 0.4056 1.062  0.8142 0.7667
Optimized supply-demand ratio  0.994  1.0853 0.832  0.9447 1.0613 0.8967 0.7984 1.0332 1.1317 1.1111 0.8735 0.9784 0.1163 0.1189
Fig. 8. The line chart for the supply-demand ratio of the original model and the opti-
mized model
5 Conclusion
This paper establishes an ILPSC to integrate the entire supply chain and ultimately match the supply and demand of livestock products. The whole process of the ILPSC, from the upstream capital goods suppliers to the downstream market, is introduced, including the information flow center and the monitoring center that manage information flows and uncertain factors. Then, an integrated supply chain model for a duck farm is constructed with simulation software. In the example, the number of assets in the growth farms is increased, and the production plan of the hatching farms and the transportation routes between the growth farms and the packing plants are optimized. Finally, the mean of the supply-demand ratio of the 11 packing plants is reduced by 0.0836, the standard deviation is reduced by 0.6979 and the coefficient of variation is reduced by 0.6478. The construction of the ILPSC is a quite complex problem. Although the ILPSC can represent most of the integration process for livestock products, the work can also be extended in other directions, for example, to reducing transportation costs or increasing the utilization of growth farms.
References
1. Cachon G, Terwiesch C (2009) Matching supply with demand, 2nd edn. McGraw-
Hill, Singapore (in Chinese)
2. Chen C, Luo Y (2003) Construction on supply chain model of livestock products
in China. J Nanjing Agric Univ 26:89–92 (in Chinese)
3. Facchioli P, Severino G et al (2015) Use of a simulation system tool at the logistic
of a sugarcane company. Rev Metropolitana Sustentabilidade 5:112–127
4. Liu Z, Sun S, Wang J (2009) The trend of agricultural supply chain management
in China. Commercial Res 3:161–164 (in Chinese)
5. Min H, Zhou G (2002) Supply chain modeling: past, present and future. Comput
Ind Eng 43:231–249
1 Introduction
Since the start of the 21st century, with global procurement, non-core business outsourcing and the development of supply chain management models such as lean management, the spatial distance along the supply chain has become longer and longer while the time distance has become shorter. This change in the temporal and spatial structure of the supply chain increases the likelihood of a disruption occurring. Every firm's supply chain is susceptible to a diverse set of risks, such as natural disasters, terrorism, war, financial crises, supplier bankruptcy and transportation delays. Strategies used by firms to mitigate disruption risks include emergency purchases, multi-sourcing, inventory reserves and improving the reliability of the supply process [6]. Awareness of the importance of supply chain risk management (SCRM) has grown in recent years. Supply chains are becoming increasingly competitive and complex in order to meet customer demands effectively. Supply chain disruptions often lead to declining sales, cost increases and service failures for the company. There are several modes of supply chain disruption, and different scholars classify them differently, as can be seen in Table 1.
1104 K. Kang et al.
2 Literature Review
There is a great deal of literature on supply chains. In recent years, research on supply chain disruption has become a hot topic. Disruption is characterized by a low probability of occurrence and a big influence on part of the supply chain, or even the whole supply chain. Many scholars have classified the types of supply chain disruption and provided operational strategies for managing disruption, but few papers mention insurance as a way to handle these risks.
Our study is related to studies focusing on transportation disruption in the supply chain. Zhen et al. [9] investigated four strategies for distribution centers' daily risk management: a basic strategy, a BI (business interruption) insurance strategy, a backup transportation strategy and a mixed strategy. They used a mathematical model to compare the BI insurance strategy with the backup transportation strategy, and found that the choice between them depends on the transportation market, the insurance market and the distribution center's operational environment. Hishamuddin et al. [3] built a recovery model for a two-echelon serial supply chain under transportation disruption; the model determines the optimal ordering and production quantities within the recovery window while ensuring minimum total relevant costs, and they developed an efficient heuristic to solve the problem. In 2015, they developed a simulation model of a three-echelon supply chain system with multiple suppliers subject to supply and transportation disruptions [4], with the objective of examining the effects of disruption on the system's total recovery costs. Chávez et al. [1] provided a novel simulation-based multi-objective model for supply chains with transportation disruptions, aiming to minimize the stochastic transportation time and the deterministic freight rate.
In this paper, we apply a novel multi-objective programming model to a supply chain with transportation disruption. We draw on the profit-calculation method of Lin's paper [5] to study the manufacturer's and retailer's profits when a transportation disruption occurs, and we introduce an insurance contract into the model.
In this study, we consider a single supply chain with one manufacturer and one retailer. We assume that the information between them is symmetric. The manufacturer handles production, while the retailer holds inventory. Before the selling season, the manufacturer and the retailer agree on an insurance contract, under which the retailer pays part of the fees to the manufacturer. And the premium
is decided by the insurance company. In our model, we assume that the transportation disruption occurs in the delivery from the manufacturer to the retailer, which interrupts the timely delivery of goods. The disruption may be caused by an accident or a natural disaster, such as an earthquake or a flood, and the goods in transit may or may not be damaged during the disruption. When a transportation disruption occurs, the insurance contract comes into effect.
In this paper, we use a multi-objective programming model to solve the problem. The notations are explained as follows:
Notations
Parameters
q: an order quantity made by the retailer and the manufacturer based on their forecast of the market demand D.
When a transportation disruption occurs, the manufacturer's and the retailer's expected profits will decrease; with the insurance contract, this situation can be alleviated.
A Novel Multi-Objective Programming Model Based on Transportation 1107
The manufacturer's expected profit is

max π_m = (w − c)q − qc_t − (1 − α)[∫_0^q (p − s)(q − x)f(x)dx + ∫_q^∞ h(x − q)f(x)dx] − PR. (1)

Objective function (1) maximizes the manufacturer's expected profit, where (w − c)q is the manufacturer's earnings on the product with order quantity q from the retailer, qc_t is the cost of transportation, ∫_0^q (p − s)(q − x)f(x)dx + ∫_q^∞ h(x − q)f(x)dx is the expected loss generated by the deviation of the retailer's order quantity from the market demand, and PR is the cost the manufacturer pays to the insurer.
max π_r = (p − w)q − α[∫_0^q (p − s)(q − x)f(x)dx + ∫_q^∞ h(x − q)f(x)dx] + βP. (2)
Objective function (2) maximizes the retailer's expected profit, where (p − w)q is the retailer's earnings on the product under the assumption that the whole product can be sold out. When a transportation disruption happens, the retailer receives compensation βP. The insurance contract belongs to third-party liability insurance, and its rate is usually a fixed value.
In order to guarantee the integrity of the model, there are constraints as
follows:
0 < s < c < w < p. (3)
If the product has not been sold off, the salvage value s of the unsold product is less than its cost; this is equivalent to a kind of punishment for excess quantity. And to ensure the manufacturer's and the retailer's profits, the relationship of the unit cost c, the wholesale price w and the retail price p needs to be c < w < p.
0 < h < w. (4)
The shortage cost h is less than the wholesale price w.
P ≥ qc. (5)
The purpose of purchasing the insurance contract is to protect the manufacturer's and the retailer's profits, so the pure premium P is not less than the cost of the product.
α ∈ [0, 1], (6)
β ∈ [0, 1], (7)
where α is the proportion of the expected losses, and β is the proportion of the
pure premium.
1108 K. Kang et al.
Theorem 1. The optimal order quantity for the supply chain system achieves the maximum only if the following condition is satisfied:

α* = (p − w)/(p − c − c_t). (9)
Proof. With the insurance contract, the system's expected profit is

max π = max π_m + max π_r = (p − c − c_t)q − [∫_0^q (p − s)(q − x)f(x)dx + ∫_q^∞ h(x − q)f(x)dx] − PR + βP, (10)

∂π/∂q = p − c − c_t − [(p − s + h)F(q) − h] = 0. (11)
Through the functions max π_m and max π_r, we can acquire the optimal order quantities q_m* and q_r* by solving the following equations:

∂π_m/∂q = w − c − c_t − (1 − α)[(p − s + h)F(q) − h] = 0, (12)

∂π_r/∂q = p − w − α[(p − s + h)F(q) − h] = 0. (13)

Then the optimal order quantity is

q* = F^{-1}((p − c − c_t + h)/(p − s + h)). (14)
The optimal order quantities of the manufacturer and the retailer are given by

q_m* = F^{-1}(((1 − α)h + w − c − c_t)/((1 − α)(p − s + h))), (15)

q_r* = F^{-1}((αh + p − w)/(α(p − s + h))). (16)

Substituting Eq. (9) into Eqs. (15) and (16) yields q_m* = q_r* = q*. So only when α* = (p − w)/(p − c − c_t) holds does the order quantity reach its optimal value.
4 Numerical Analysis
In this section, we examine the model by numerical analysis. The data are taken from Lin's paper [5], which assumed that the market demand follows the uniform distribution D ∼ U[400, 500]. The other parameters are as follows: p = 18, w = 15, c = 12, s = 8, h = 3, β = 0.6, R = 15%, and c_t = 1.
Substituting the data into the formulations and drawing the trend chart in Fig. 1 shows that the manufacturer's optimal order quantity increases as α increases, while the retailer's optimal order quantity decreases as α increases.
Let q_m* = q_r*; then we get α* = (p − w)/(p − c − c_t) = 0.6. We can read α* = 0.6 from Fig. 1, which verifies the model. From Fig. 1, the crossing point is the optimal order quantity q* in Table 2, which is equal to 461. This is the best quantity for both parties. So the pure premium is not less than 5532, and the payment to the insurer needs to be 830.
Fig. 1. The optimal order quantities of the manufacturer and the retailer as α varies

Table 2. The optimal order quantities under different values of α

α     0      0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1
q_m*  438.46 440.17 442.31 445.05 448.72 453.85 461.54 474.36 500    576.92 ∞
q_r*  ∞      653.85 538.46 500    480.77 469.23 461.54 456.04 451.92 448.72 446.15
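The closed-form results can be checked numerically with the parameter values above. One assumption made here: matching the table's extreme-α entries, the inverse CDF of the uniform demand is extended linearly outside [0, 1].

```python
# Parameters from Lin's data set [5]; market demand D ~ Uniform[400, 500].
p, w, c, s, h, ct = 18, 15, 12, 8, 3, 1
lo_d, hi_d = 400, 500

def f_inv(y):
    # Inverse CDF of Uniform[400, 500], extended linearly beyond [0, 1]
    # so that the extreme-alpha entries of Table 2 are reproduced.
    return lo_d + (hi_d - lo_d) * y

def q_m(alpha):  # manufacturer's optimal order quantity, Eq. (15)
    return f_inv(((1 - alpha) * h + w - c - ct) / ((1 - alpha) * (p - s + h)))

def q_r(alpha):  # retailer's optimal order quantity, Eq. (16)
    return f_inv((alpha * h + p - w) / (alpha * (p - s + h)))

alpha_star = (p - w) / (p - c - ct)              # Eq. (9): 3/5 = 0.6
q_star = f_inv((p - c - ct + h) / (p - s + h))   # Eq. (14): about 461.54
print(alpha_star, round(q_star, 2))
print(round(q_m(alpha_star), 2), round(q_r(alpha_star), 2))  # both equal q*
```

At α* = 0.6 the two curves cross at q* ≈ 461.54, matching the table and the quoted premium bound P ≥ qc = 5532.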
5 Conclusion
In this paper, we describe transportation disruption in supply chains. We build a mathematical model that introduces an insurance contract to deal with transportation disruption on the way from the manufacturer to the retailer, and use a case study to investigate the applicability of the model. Although the insurance contract is effective in coordinating the supply chain, it also has limitations; the most critical is that the supplier incurs an administrative cost in monitoring the retailer's sales. The objective of the study is to determine the optimal order quantity under uncertain market demand. The insurance contract transfers risk from the manufacturer and the retailer to the insurance company, which protects the manufacturer's and the retailer's profits and improves the efficiency of the supply chain. In particular, how much insurance to purchase is discussed in this paper. The model is useful for decision makers in determining the product quantity.
There are several directions in which this study can continue. The model can be extended to a complex supply chain with multiple manufacturers or retailers, and different strategies can be applied to deal with transportation disruption.
References
1. Chávez H, Castillo-Villar KK et al (2016) Simulation-based multi-objective model
for supply chains with disruptions in transportation. Robot Comput Integr Manu-
fact 43:39–49
2. Heckmann I, Comes T, Nickel S (2015) A critical review on supply chain risk—
definition, measure and modeling. Omega 52:119–132
3. Hishamuddin H, Sarker RA, Essam D (2013) A recovery model for a two-echelon
serial supply chain with consideration of transportation disruption. Comput Ind
Eng 64(2):552–561
4. Hishamuddin H, Sarker R, Essam D (2015) A simulation model of a three echelon
supply chain system with multiple suppliers subject to supply and transportation
disruptions. IFAC Papersonline 48(3):2036–2040
5. Lin Z, Cai C, Xu B (2010) Supply chain coordination with insurance contract. Eur
J Oper Res 205(2):339–345
6. Lynch GS (2012) Supply chain risk management 192:33–40. London: Springer
7. Nooraie SV, Parast MM (2015) Mitigating supply chain disruptions through the
assessment of trade-offs among risks, costs and investments in capabilities. Int J
Prod Econ 171:8–21
8. Shu T, Gao X et al (2016) Weighing efficiency-robustness in supply chain disruption
by multi-objective firefly algorithm. Sustainability 8(3):250
9. Zhen X, Li Y et al (2016) Transportation disruption risk management: business
interruption insurance and backup transportation. Transp Res Part E 90:51–68
An Interval-Parameter Based Two-Stage
Stochastic Programming for Regional Electric
Power Allocation
1 Introduction
The second industrial revolution marked humanity's entry into the age of electricity. In recent years, the world electricity industry has experienced great development, and it plays a vital role in human life. According to Key World Energy Statistics by the International Energy Agency (IEA), world electricity generation increased from 6131 TWh in 1973 to 23816 TWh in 2014, a growth rate as high as 288% [8]. However, electrical power systems face various problems, including economic and environmental issues. Electric power distribution is also a complex giant system that has long puzzled electricity operators, and researchers have been committed to studying it for a long time.
In China, a lasting market-oriented reform of the electric power industry has been carried out over the last decades. Improving the operational efficiency of the electric power industry is a primary goal of the government of China, and 2016
1112 J. Dai and X. Li
marks the beginning of the second round of such reforms and of the 13th Five-Year Plan of China [3,4]. Besides, “Further strengthening the institutional reform of the electric power industry” was promulgated by the State Council of China, whose attachment file No. 4 concerns opening the electricity plan orderly to promote the optimal allocation of electric power resources [3]. Therefore, from the perspective of the national grid, how to guarantee the power supply and obtain more benefit at the same time are important questions to face.
In past years, many scholars have studied the electric power allocation planning problem from different perspectives, and many innovative optimization techniques have been developed for allocating and managing electric power more efficiently [5,7,9,10,13]. Among these methods, linear programming is the basic and primary way to solve this problem. Recently, F. Chen et al. [2] used a fuzzy chance-constrained programming model to study the electric power generation systems planning problem, which can directly reflect uncertain interactions among random variables; for power generation systems planning, this approach has a wider application scope than existing optimization models. Huang and his group proposed the interval-parameter two-stage stochastic program to solve resource management problems such as water resource management and solid waste management [11,12,14]. This inspired us to apply this kind of method to other resource management problems, for example, electric power allocation.
Birge and Louveaux introduced various kinds of two-stage and multi-stage stochastic programming problems [1]. Among them, two-stage stochastic programming is effective for analysing medium-to-long-term planning problems where the system data are characterized by uncertainty [12]. Decision makers make an initial decision based on future events; this is called the first-stage decision. Then, after the uncertainties are resolved, a second-stage decision is made to correct the results of the first-stage decision. Besides, introducing interval parameters into the two-stage stochastic model can further address the uncertainties of the problem.
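The two-stage logic just described (a first-stage allocation target plus a second-stage shortage correction) can be sketched as a small expected-value computation. All numbers here (benefits, penalties, scenarios) are invented for illustration and are not from the paper.

```python
def expected_net_benefit(targets, scenarios, benefit, penalty):
    """Evaluate a first-stage allocation plan under supply scenarios.

    targets   : first-stage allocation promised to each user
    scenarios : list of (probability, available_supply) pairs
    benefit   : net benefit per unit allocated, per user
    penalty   : loss per unit of shortage, per user (penalty > benefit)

    The second stage is the recourse: if realized supply falls short of
    the total target, the deficit becomes shortage, spread over users in
    list order (a deliberately simple recourse rule for illustration).
    """
    total = 0.0
    for prob, supply in scenarios:
        deficit = max(sum(targets) - supply, 0.0)
        value = 0.0
        for x, b, pen in zip(targets, benefit, penalty):
            short = min(x, deficit)   # this user's second-stage shortage
            deficit -= short
            value += b * x - pen * short
        total += prob * value
    return total

# Hypothetical data: two users, three supply scenarios (low/medium/high).
plan = [50.0, 30.0]
scen = [(0.2, 60.0), (0.5, 80.0), (0.3, 90.0)]
print(expected_net_benefit(plan, scen, benefit=[3.0, 2.5], penalty=[4.5, 4.0]))
```

In the paper's model the shortages are themselves decision variables and the parameters are intervals; this sketch only shows how a first-stage plan is scored against probabilistic scenarios.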
The goal of this paper is to apply an interval-parameter two-stage stochastic program to the electric power allocation problem while simultaneously taking system uncertainties into account. It will be demonstrated that this kind of method can help decision makers design more efficient plans and obtain more benefits. The paper is structured as follows: the next section states the key problem; Sect. 3 presents the modelling process using the introduced method; a simple case study demonstrates the effectiveness of the method in Sect. 4; finally, Sect. 5 summarizes the main contents and contributions of the paper.
2 Key Problem
In China, generally speaking, the State Grid Corporation and the government jointly hold the right of electric power allocation. They make allocation decisions by comprehensively considering historical data, the requirements of local governments and the production capacity of electric power. The authority has an obligation to make a reasonable and scientific electric power allocation plan that achieves the best economic benefits while saving resources, because the process of electric power production consumes a lot of resources.
Electric power allocation involves a region's residential life, industrial production and other unpredictable situations, so it is a complex system. All groups need to know how much electric power they can expect, because if the guaranteed amount cannot be provided, they have to change their initial plans, which may bring them large losses: for example, factories may have to halt production, and residents may have to stop work that requires electric power. At the same time, it is also hard for decision makers to make the allocation plan. In practice, many stochastic factors influence the power supply, which directly contributes to the complexity of the problem. In this situation, this paper uses the two-stage stochastic method to solve the problem, in which the authority makes two decisions: the first decision makes an allocation plan to guarantee basic demands according to historical data, and the second decision compensates for the outcome of the first decision. At the same time, when the State Grid Corporation makes decisions, uncertain factors such as weather and machinery breakdowns can lead to a gap between demand and supply. Thus, treating uncertain numbers as interval parameters is a reasonable way to deal with the problem's uncertainty.
Therefore, the problem can be solved by building an interval-parameter two-stage stochastic model to obtain greater benefits. On the one hand, an insufficient power supply under the first decision can be made up by the second decision. On the other hand, by adjusting the two-phase decision making, the State Grid Corporation can acquire maximum benefits, which ensures the harmonious development of the power industry to a certain extent. The mathematical form of this problem is given in the next section.
3 Modelling
3.1 Assumptions
3.2 Notations
To facilitate the problem description, the notations are introduced firstly.
Indices
Decision variables
Xi : the first-stage decision variable, i.e. the electric power allocation target promised to user i;
YiS : the second-stage decision variable, i.e. the shortage of electric power to user i when the actual allocation amount is S.
Uncertain parameters
Bi : net benefit to user i per billion kilowatt hours;
Ci : penalty coefficient, i.e. the loss to user i per billion kilowatt hours of shortage;
Di min : minimal demand of user i to ensure the normal operation of society;
Sih : the amount by which the electric power allocation target is not met for user i when the actual supply level is h, occurring with probability ph ;
S: random variable for the actual total amount of electric power generation; S takes the values Sh with probabilities ph .
Certain parameters
Si max : maximum allowable allocation amount for user i;
ph : the probability of occurrence of shortage level h, where h = 1, 2, · · · , H and Σ_{h=1}^{H} ph = 1;
h = 1: the actual power supply is very close to the demand and the shortage is smallest;
h = 2: the actual power supply is relatively close to the demand and the shortage is medium;
h = H: the actual power supply is far from the demand and the shortage is highest.
Symbols
+: represents the upper limit of a parameter;
−: represents the lower limit of a parameter.
(2) Limitations
Owing to resource constraints and other factors, there are four kinds of constraints, which are introduced in detail as follows:
1 Minimum power demand constraints
The local government has policies that guarantee the minimum power demand of the area, based on historical electricity-consumption data, so as to ensure normal social life without causing unrest. Based on the above description, we obtain the following constraint:
Xi ≥ Di min .
2 Allowance electric power allocation constraints
For a region, the capacity for electric power production is limited. Therefore, the allowable allocation amount is also limited, so that the grid company cannot distribute more than the actual electric power generation volume among all users. This kind of constraint can be expressed as follows:
YiS ≤ Xi ≤ Si max .
3 Available power constraints
In the first-stage decision-making process, Xi must be determined before the actual total electric power supply S is known, while the shortages YiS are determined during the second stage, when the electric power supplies are known but the allocation amounts have already been fixed. This kind of constraint means that the first-stage decision variable minus the second-stage decision variable should not exceed the actual total supply amount. This relationship can be expressed as:
Σ_{i=1}^{n} (Xi − YiS ) ≤ Sh .
4 Non-negative constraints
YiS ≥ 0.
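Taken together, the four constraint families above can be checked for a candidate plan. A small sketch, with hypothetical numbers rather than the paper's case data:

```python
# Feasibility check of the four constraint families for a candidate plan.
# All numbers are hypothetical (billion kWh), not the paper's case data.
def feasible(X, Y, D_min, S_max, S_h):
    """X: first-stage targets; Y: second-stage shortages (one supply
    scenario with total supply S_h); lists are indexed by user i."""
    n = len(X)
    ok_demand   = all(X[i] >= D_min[i] for i in range(n))          # constraint 1
    ok_capacity = all(Y[i] <= X[i] <= S_max[i] for i in range(n))  # constraint 2
    ok_supply   = sum(X[i] - Y[i] for i in range(n)) <= S_h        # constraint 3
    ok_nonneg   = all(Y[i] >= 0 for i in range(n))                 # constraint 4
    return ok_demand and ok_capacity and ok_supply and ok_nonneg

print(feasible(X=[50.0, 30.0, 10.0], Y=[5.0, 2.0, 0.0],
               D_min=[40.0, 25.0, 8.0], S_max=[60.0, 40.0, 15.0], S_h=85.0))
```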
1116 J. Dai and X. Li
max f± = Σ_{i=1}^{n} Bi± Xi± − Σ_{i=1}^{n} Σ_{h=1}^{H} ph Ci± Sih±

s.t. Xi± ≥ Di min ,
     Sih± ≤ Xi± ≤ Si max ,
     Σ_{i=1}^{n} (Xi± − Sih± ) ≤ Sh± ,
     Sih± ≥ 0,
     ∀i = 1, 2, · · · , n; ∀h = 1, 2, · · · , H.
According to the research of Wang and Huang [15], this model can be solved by converting the interval-parameter two-stage stochastic model into two submodels, which correspond to the upper and lower bounds of the objective-function value. It is difficult for decision makers to determine whether the upper bound X+ or the lower bound X− of the uncertain variable X± corresponds to the upper bound of the total net benefit. Thus, the maximized total benefit can be obtained
TSSP Regional Electric Power Allocation 1117
from an optimized target value. Let Xi± = Xi− + ΔXi zi , where ΔXi = Xi+ − Xi− is a fixed value and zi ranges from 0 to 1. Here, we introduce the new decision variable zi to identify the optimized target value. Based on the above, we obtain the following transformed model:
max f± = Σ_{i=1}^{n} Bi± (Xi− + ΔXi zi ) − Σ_{i=1}^{n} Σ_{h=1}^{H} ph Ci± Sih±
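Once the interval bounds are fixed, each submodel is a linear program in zi and Sih and can be handed to an off-the-shelf LP solver. The following sketch uses scipy's linprog on hypothetical data (not the paper's case-study figures); the Di min and Si max constraints are left out because the chosen lower bounds Xlo already satisfy them in this instance.

```python
# Sketch of solving one submodel of the transformed model as an LP.
# All numbers are hypothetical, not the paper's case-study figures.
import numpy as np
from scipy.optimize import linprog

Xlo = np.array([40.0, 20.0])   # X_i^-  (assumed to already satisfy D_i_min)
dX  = np.array([20.0, 10.0])   # delta X_i = X_i^+ - X_i^-
B   = np.array([5.0, 4.0])     # one bound of the net benefit B_i^±
C   = np.array([8.0, 6.0])     # one bound of the penalty C_i^±
p   = np.array([0.6, 0.4])     # scenario probabilities p_h
S_h = np.array([70.0, 55.0])   # total supply in each scenario

n, H = len(Xlo), len(p)
# Variable vector: [z_1..z_n, S_11..S_n1, ..., S_1H..S_nH]; minimize -f.
c = np.concatenate([-B * dX, np.concatenate([p[h] * C for h in range(H)])])

A_ub, b_ub = [], []
for h in range(H):
    # sum_i (X_i - S_ih) <= S_h, with X_i = Xlo_i + dX_i * z_i
    row = np.zeros(n + n * H)
    row[:n] = dX
    row[n + h * n: n + (h + 1) * n] = -1.0
    A_ub.append(row); b_ub.append(S_h[h] - Xlo.sum())
    for i in range(n):
        # S_ih <= X_i  ->  S_ih - dX_i * z_i <= Xlo_i
        row = np.zeros(n + n * H)
        row[i] = -dX[i]; row[n + h * n + i] = 1.0
        A_ub.append(row); b_ub.append(Xlo[i])

bounds = [(0, 1)] * n + [(0, None)] * (n * H)   # z_i in [0,1], S_ih >= 0
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
f_opt = (B * Xlo).sum() - res.fun   # add back the constant term B_i * X_i^-
print(res.x[:n], round(f_opt, 2))
```

In this instance the solver pushes the first user's target up only as far as the tighter supply scenario allows, leaving the remaining shortfall to the user with the cheaper expected penalty.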
4 Case Study
A real case study is presented in this section to demonstrate the effectiveness of the method.
electric power. Therefore, the decision about the electric power allocation target plays a vital role in the whole allocation problem. The optimized targets for the three users can be obtained by letting Xi opt± = Xi− + ΔXi zi opt ; they are 2679.42 billion kWh, 945.92 billion kWh and 43.2 billion kWh for Sichuan province, Chongqing and Tibet, respectively. At the same time, each shortage and the total benefit can easily be read from the results table. From the results, when a scarcity occurs the electric power is allocated first to Sichuan province, next to Chongqing and last to Tibet, because Sichuan province brings the State Grid Corporation the highest benefit.
In the real world, suitable policy is important for decision makers planning an area's sustainable electricity use. From the analysis above, solutions can be obtained by setting different allocation targets under various policies. Different policy orientations change the aspects decision makers consider when thinking about the problem. Some policy implications that may support better decisions are given as follows. Firstly, establish a relatively stable long-term trading mechanism in which suppliers and demanders can trade on their own in a competitive market. Secondly, improve the mechanism for inter-district electric power deals across provinces, permitting power generation companies, power users and electricity-selling bodies to make occasional trades when electricity is in short supply during a day. Thirdly, environmental protection is also a topic that policy makers need to be concerned with. Thus, forming
5 Conclusions
This paper applied an interval-parameter two-stage stochastic model to the problem of electric power allocation. Using the proposed model, we considered not only how to make a more stable consumption plan, but also how to let decision makers benefit more from it. In the modelling process, we handled the uncertainties of the problem by introducing interval parameters into the model, which at the same time reduced the complexity of the solving process. The initial model was transformed into two submodels to obtain solutions, avoiding the system-failure risk of interval solutions. The approach was demonstrated to be effective using data from the Southwest branch of the State Grid Corporation of China as a case study. Through analysing the results of the case study, some policy implications were proposed which may help regulators of electric power allocation make decisions.
Compared with previous studies, this kind of method is an innovative application to electric power allocation with good practical value. However, the method was built on the assumption that the upper and lower bounds are known. Moreover, the probabilities of the degrees of electric power shortage are also given directly. Future research may focus on these two points to make the study more rigorous and better handle the uncertainty. In addition, suitable policy simulations can be put forward to support better decision making.
References
1. Birge JR, Louveaux F (1997) Introduction to Stochastic Programming. Springer,
New York
2. Chen F, Huang GH, Fan YR, Chen JP (2017) A copula-based fuzzy chance-
constrained programming model and its application to electric power generation
systems planning. Appl Energy 187:291–309
3. Central Committee of the Chinese Communist Party, State Council (2016) Further strengthening the institutional reform of the electric power industry. In Chinese
4. National Development and Reform Commission, National Energy Administration (2016) Six subsidiary documents for the new electric reforms were released. In Chinese
5. Fukuta N, Ito T (2012) A preliminary experimental analysis on combinato-
rial auction-based electric power allocation for manufacturing industries. In:
IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent
Agent Technology, pp 394–398
6. Huang GH (1998) A hybrid inexact-stochastic water management model. Eur J
Oper Res 107(1):137–158
7. Iba K, Suzuki H et al (1988) Practical reactive power allocation/operation planning
using successive linear programming. IEEE Trans Power Syst 3(2):558–566
8. International Energy Agency (2016) Key world energy statistics 2016. Int Energy Agency 1:37
9. Jordehi AR (2016) Allocation of distributed generation units in electric power
systems: a review. Renew Sustain Energy Rev 56:893–905
10. Kayal P, Chanda CK (2016) Strategic approach for reinforcement of intermittent
renewable energy sources and capacitor bank for sustainable electric power distri-
bution system. Int J Electr Power Energy Syst 83:335–351
11. Li YP (2014) Municipal solid waste management under uncertainty: an interval-
fuzzy two-stage stochastic programming approach. J Environ Inform 12(2):96–104
12. Maqsood I, Huang GH, Yeomans JS (2005) An interval-parameter fuzzy two-stage
stochastic program for water resources management under uncertainty. Eur J Oper
Res 167(1):208–225
13. Shang JC, Zhang LQ (2007) Research and application of technologies in energy-
saving emission-reducing and optimal resource allocation of electric power system.
Power Syst Technol 31(22):58–63 In Chinese
14. Wang S, Huang GH (2013) An interval-parameter two-stage stochastic fuzzy pro-
gram with type-2 membership functions: an application to water resources man-
agement. Stochast Environ Res Risk Assess 27(6):1493–1506
15. Wang S, Huang GH (2015) A multi-level taguchi-factorial two-stage stochastic
programming approach for characterization of parameter uncertainties and their
interactions: An application to water resources management. Eur J Oper Res
240(2):572–581
Pricing Strategies of Closed Loop Supply Chain
with Uncertain Demand Based
on Ecological Cognition
1 Introduction
In the process of reverse logistics, numerous factors affect the recycling of used products. Dutta et al. [6] researched the possibility of three-way recycling options. Ilgin et al. [13] analyzed four major categories of remanufactured products. Fleischmann et al. [8] studied quantitative models for reverse logistics, and [5,11] researched the sustainable development of supply chain systems. In addition, Thierry et al. [24] researched strategic issues in product recovery and management. Spare-parts research is also very important: Teunter and Flapper [23] considered the uncertain quality of used products, and Chari et al. [2] addressed a problem with repairable spare parts to find the cost-optimal production strategy. To further study the acquisition price and the quantity of remanufactured products, Pokharel and Liang [21] studied the impact of the consolidation center and collection centers. Based on that study, Gönsch [10] supposed that the quantities of used products and replacement parts are unrestricted. Jung et al. [14] devoted themselves to developing optimal production plans for different production strategies.
The production demand for remanufactured products is extremely uncertain, so Matsumoto and Komatsu [17] predicted product demand by time-series analysis. Shi et al. [22] developed an option pricing model to evaluate the acquisition price of used products under uncertain demand and returns. Considering that the actual remanufacturing process can be interrupted, Giri and Sharma [9] studied complex production planning under uncertain demand when the product supply is disrupted. Because a variety of products is always produced at the same time, Shi et al. [22] considered uncertain demand for many kinds of products. To address the problem systematically, Wu [26] studied the influence of new product design on the pricing strategy. To solve the pricing strategy, a model was established using the Brown model [16]. Chen and Chang [4] used dynamic programming to obtain a dynamic pricing strategy. Competition has also been studied: Mitra [18] researched the competition strategy in a duopoly, and Örsdemir and Parlaktürk [20] researched the competition between an original equipment manufacturer and an independent remanufacturer. In the closed-loop supply chain, Chen and Chang [3] considered remanufacturing models under cooperation and competition.
Due to the emergence of remanufactured products, many consumers treat remanufactured products and new products differently. Ferrer and Swaminathan [7] chose differentiated prices for new products and remanufactured products. Hazen et al. [12] considered the role of ambiguity tolerance in consumer perception of remanufactured products, and [1,19] considered the impact of government subsidies on remanufacturing activities. Li [15] divided product demand into two categories by the ecological cognition of consumers and researched government subsidy policy. In this paper, based on the above, we discuss the problem of remanufacturing in a closed-loop supply chain with ecological cognition and government subsidies, then formulate cooperative and competitive game models to investigate the manufacturer's and retailer's pricing strategies.
1124 D. Yu and C. Guo
2 Model Description
In this paper, we consider three stakeholders: the manufacturer, the retailer and consumers. The manufacturer produces remanufactured products and new products and sells them together to the retailer. The retailer sells remanufactured products and new products to consumers, collects used products from the consumers, and then provides them to the manufacturer. The reverse logistics is commissioned by the manufacturer, who offers an acquisition price to stimulate the retailer to collect used products. The resulting closed-loop supply chain model is shown in Fig. 1.
The model structure studies the case of a single manufacturer and a single retailer, and the manufacturer is the leader in the closed-loop supply chain. In this paper, we only consider the case where the government adopts a policy mechanism to stimulate the manufacturer. The manufacturer derives the market demand functions of remanufactured products and new products based on the cognitive degree of consumers, then makes the production plan and determines the wholesale price of the products. On the other hand, to ensure the orderly conduct of the manufacturing process, the acquisition price is determined by the manufacturer and the corresponding used-product plan is established. The retailer is the follower, who determines the retail prices of finished products and the acquisition price of used products according to the manufacturer's wholesale price and acquisition price, and optimizes its own profit. Finally, we explore the effects of the cooperative and competitive models on pricing strategies.
2.1 Notations
2.2 Assumptions
Assumption 1. The decision of a consumer to purchase a product is based on the utility of the product. We suppose the consumers' valuation of new products is v, uniformly distributed with probability density function:

f (x) = 1 for x ∈ [0, 1], and f (x) = 0 for x ∉ [0, 1].

The market size is normalized to 1, and each consumer buys only one product.
Assumption 2. We define θ as the degree to which a consumer, based on environmental awareness, is willing to buy remanufactured products, with 0 ≤ θ ≤ 1. When pn > v, customers will not buy new products; when pr > θv, customers will not buy remanufactured products. If pn ≤ v and v − pn ≥ θv − pr , consumers choose the new product; if pr ≤ θv and θv − pr ≥ v − pn , consumers choose the remanufactured product.
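Under Assumptions 1 and 2, the demand for each product is the fraction of valuations v for which that product yields the highest non-negative utility. A small sketch (the prices are hypothetical; the threshold logic follows the standard vertical-differentiation reading of Assumption 2):

```python
# Demand fractions implied by Assumptions 1 and 2, computed over a fine
# grid of valuations v ~ U[0, 1]. theta, p_n, p_r are the paper's symbols;
# the numeric values are hypothetical.
def demands(theta, p_n, p_r, grid=100000):
    q_n = q_r = 0
    for k in range(grid):
        v = (k + 0.5) / grid              # midpoint of each valuation cell
        u_new, u_rem = v - p_n, theta * v - p_r
        if u_new >= max(u_rem, 0.0):      # new product gives highest utility
            q_n += 1
        elif u_rem > 0.0:                 # remanufactured product preferred
            q_r += 1
    return q_n / grid, q_r / grid

qn, qr = demands(theta=0.8, p_n=0.5, p_r=0.38)
print(round(qn, 3), round(qr, 3))  # -> 0.4 0.125
```

High-valuation consumers buy the new product, an intermediate band buys the remanufactured one, and low-valuation consumers buy nothing.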
Assumption 3. To keep production continuous, we assume that p1 ≤ p2 and Cr ≤ Cn .
Assumption 4. Manufacturers and retailers operate under complete information symmetry; each fully knows the other's costs, pricing, strategy and other relevant information, so that the following Stackelberg game model works well.
Assumption 5. In the remanufacturing process, the yield rate is 100%, i.e. no waste products arise during manufacturing, and the collected used products can meet the market demand.
3 Model Development
Based on the above, we obtain the profit functions of the manufacturer and the retailer. The profit function of the manufacturer:
The first term on the right side of the equation is the manufacturer's profit from selling new products; the second term is the manufacturer's profit from selling remanufactured products; the third term is the government subsidy the manufacturer receives for producing remanufactured products.
The profit function of the retailer:
The first term on the right side represents the retailer's profit from selling new products, and the second represents the profit from selling remanufactured products. The total profit of the closed-loop supply chain is therefore:
In this paper, we consider the following two cases in the closed-loop supply chain.
In the competitive pricing strategy, the manufacturer and the retailer each take their own benefit maximization as the decision-making goal. In this decision model, the manufacturer is in the dominant position and the retailer is the follower, so the paper uses the backward induction method to solve the model.
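The backward-induction method itself can be illustrated on a stripped-down leader-follower example; the linear demand q = 1 − p below is purely illustrative and is not the paper's demand system:

```python
# Generic backward-induction (Stackelberg) sketch with linear demand
# q = 1 - p. This illustrates only the solution method, not the paper's
# actual demand functions.
def retailer_best_price(w):
    # Retailer maximizes (p - w) * (1 - p); the first-order condition gives:
    return (1 + w) / 2

def manufacturer_profit(w, cost=0.2):
    p = retailer_best_price(w)          # leader anticipates the follower
    return (w - cost) * (1 - p)

# The leader searches its wholesale price, given the follower's response.
ws = [k / 1000 for k in range(200, 1001)]
w_star = max(ws, key=manufacturer_profit)
print(round(w_star, 3), round(retailer_best_price(w_star), 3))  # -> 0.6 0.8
```

Solving the follower's problem first and substituting its best response into the leader's objective is exactly the two-step structure used for the competitive model here.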
Differentiating the retailer's profit function, we obtain the second-order conditions:
∂²πr /∂pn ² = 2/(θ − 1),  ∂²πr /∂pn ∂pr = −2/(θ − 1),
∂²πr /∂pr ∂pn = −2/(θ − 1),  ∂²πr /∂pr ² = 2/(θ(θ − 1)).
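These second-order conditions can be checked for negative definiteness (and hence concavity of πr) over 0 < θ < 1 with Sylvester's criterion; a small sketch:

```python
# Check concavity of the retailer's profit: for 0 < theta < 1 the Hessian
# of pi_r should be negative definite (leading entry < 0, determinant > 0).
def hessian(theta):
    a = 2 / (theta - 1)                  # d2(pi_r)/dp_n^2
    b = -2 / (theta - 1)                 # mixed partials
    d = 2 / (theta * (theta - 1))        # d2(pi_r)/dp_r^2
    return a, b, d

def negative_definite(theta):
    a, b, d = hessian(theta)
    return a < 0 and a * d - b * b > 0   # Sylvester's criterion for 2x2

print(all(negative_definite(k / 100) for k in range(1, 100)))  # -> True
```

Concavity guarantees that the first-order conditions used below characterize the retailer's unique optimum.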
Combining Eqs. (6), (7), (8) and (9), we can get that:
In the same way, we can show that an optimal solution exists; solving it, we get:
4 Numerical Example
[Fig. 2: Volume of business (qr and qn ) versus cognitive level of remanufactured products θ, for θ from 0.6 to 1.]
[Fig. 3: Profits (πr and πn ) versus cognitive level of remanufactured products θ, for θ from 0.6 to 1.]
products is equal. In this case, when θ > 0.8781, consumers tend to buy remanufactured products. The manufacturer can only produce the remanufactured products when they are sufficiently profitable. When θ = 0.9300, the manufacturer no longer produces new products, because at this point the demand for new products is zero. Why does this happen? Perhaps consumers no longer have a bias between new and remanufactured products, and the price of the remanufactured product is lower than that of the new product. From Fig. 3 we can see that when θ = 0.9735 the manufacturer's and the retailer's profits are equal, and in this mode the manufacturer's profits from producing remanufactured products are negative
[Fig. 4: Profits in the market (πr and πn ) versus unit subsidy for remanufactured products k1 , with θ = 0.6.]
[Fig. 5: Profits in the market (πr and πn ) versus unit subsidy for remanufactured products k2 , with θ = 0.5.]
[Fig. 6: Profits in the market versus unit subsidy for remanufactured products k3 , with θ = 0.4.]
continue to decline. Only when the government subsidy is greater than 0.05 does the retailer's profit begin to grow. We find that the selling profits mainly come from new products. The price pr decreases as k1 increases, which makes consumers tend to buy remanufactured products; as a result, the demand for new products begins to decline. When k = 0.13, the government subsidy makes up for the loss in remanufacturing, so the manufacturer's profit function begins to grow and the retailer's selling price also increases significantly. From Figs. 4, 5 and 6 we can conclude that, with other variables constant, the smaller the value of θ, the more serious the decline in the profit function. It should be noted that as θ decreases, the lowest points of the red and blue lines move to the right. We conclude that the higher the cognition of remanufactured products, the smaller the government subsidy needs to be. The model also reveals that manufacturers and retailers will not undertake remanufacturing activity if the government does not adopt policies and provide subsidies.
[Figure: retail price pr versus unit subsidy for remanufactured products k, for θ = 0.4, 0.6 and 0.8.]
[Figure: market demand (qn and qr ) versus unit subsidy for remanufactured products k, for θ = 0.4, 0.6 and 0.8.]
5 Comparison of Models
In this paper, we study cooperative and competitive game models in the closed-loop supply chain. In the competitive model, because the manufacturer and the retailer make decisions separately, there is competition between them. Producing remanufactured products can adjust the relationship of interests
between the manufacturer and the retailer. This benefits the government's promotion of remanufacturing, and the retailer is more willing to use remanufactured products to compete for interests with the manufacturer. However, from Tables 1 and 2 we can clearly see that remanufacturing hurts the interests of the manufacturer, the retailer and the consumer, so they will choose the cooperative pricing model. From Fig. 7, it is not difficult to see that the total profit under cooperation is always higher than under competition. Producing remanufactured products does not generate profit, so profits always decrease as the degree of cognition increases. When θ goes from 0.9 to 0.95, there is a sharp decline in the total profit of the two models; the reason is a rapid decrease in the number of new products and a rapid increase in the number of remanufactured products. This proves again that producing remanufactured products does not bring profit to manufacturers. The main reason remanufacturing cannot make a profit is that remanufacturing technology has not kept pace with society. It also reveals that if the government takes no action, nothing will happen in remanufacturing.
6 Conclusion
In this paper, we consider a closed-loop supply chain consisting of a manufacturer, a retailer and consumers, and discuss cooperative and competitive game models. We use game theory to analyse the models, and the conclusions are as follows.
In both the cooperative and the competitive game model, if the government provides no support or subsidy, remanufacturing cannot be carried on. In the cooperative game model, the government's regulation should be strengthened, because cooperation is not conducive to producing remanufactured products. As the quantity of new products and the profits decline, manufacturers have a reason to lower the price of new products to sell more. However, the optimal wholesale price in this paper has nothing to do with θ; this may be because we solve the model by backward induction, where the retailer first determines the market price and then the manufacturer sets the wholesale price. What is more, producing remanufactured products can adjust the profit distribution between the manufacturer and the retailer, and under cooperation it can bring higher profit to the supply chain. But this does not accord with the interests of the government: to save economic costs, the government is more willing to see them make decisions separately. As θ rises, it is not good for manufacturers and retailers, but it is good for the government to take action. The discussion of these two models is important for our future research and indicates directions for the government to advocate green environmental protection. In future work, we could consider complex closed-loop supply chain systems with multiple manufacturers and multiple retailers, where used products are pre-sold to the manufacturers. Moreover, used products could be divided into different grades by a consolidation center, which would be a new participant in the future model.
References
1. Aksen D, Aras N, Karaarslan AG (2009) Design and analysis of government
subsidized collection systems for incentive-dependent returns. Int J Prod Econ
119(2):308–327
2. Chari N, Diallo C et al (2016) Production planning in the presence of remanufac-
tured spare components: an application in the airline industry. Int J Adv Manuf
Technol 87(1):957–968
3. Chen JM, Chang CI (2012) The co-opetitive strategy of a closed-loop supply chain
with remanufacturing. Transp Res Part E Logist Transp Rev 48(2):387–400
4. Chen JM, Chang CI (2013) Dynamic pricing for new and remanufactured products
in a closed-loop supply chain. Int J Prod Econ 146(1):153–160
5. Choi TM, Li Y, Xu L (2013) Channel leadership, performance and coordination in
closed loop supply chains. Int J Prod Econ 146(1):371–380
6. Dutta P, Das D et al (2016) Design and planning of a closed-loop supply chain
with three way recovery and buy-back offer. J Clean Prod 135:604–619
7. Ferrer G, Swaminathan JM (2010) Managing new and differentiated remanufac-
tured products. Eur J Oper Res 203(2):370–379
8. Fleischmann M (2004) Quantitative models for reverse logistics. Springer, Heidel-
berg
9. Giri BC, Sharma S (2015) Optimal production policy for a closed-loop hybrid
system with uncertain demand and return under supply disruption. J Clean Prod
112:2015–2028
10. Gönsch J (2015) A note on a model to evaluate acquisition price and quantity of
used products for remanufacturing. Int J Prod Econ 169:277–284
11. Guide VDR, Wassenhove LNV (2009) The evolution of closed-loop supply chain
research. Oper Res 57(1):10–18
12. Hazen BT, Overstreet RE et al (2012) The role of ambiguity tolerance in consumer
perception of remanufactured products. Int J Prod Econ 135(2):781–790
13. Ilgin MA, Gupta SM (2010) Environmentally conscious manufacturing and product
recovery (ecmpro): a review of the state of the art. J Environ Manage 91(3):563–591
14. Jung KS, Dawande M et al (2016) Supply planning models for a remanufacturer
under just-in-time manufacturing environment with reverse logistics. Ann Oper
Res 240(2):1–49
15. Li XQ (2015) Research on the policy of remanufacturing subsidy based on the
ecological cognition of consumers. Prod Res 2015(9):12–19 in Chinese
16. Liang Y, Pokharel S, Lim GH (2009) Pricing used products for remanufacturing.
Eur J Oper Res 193(2):390–395
17. Matsumoto M, Komatsu S (2015) Demand forecasting for production planning in
remanufacturing. Int J Adv Manufact Technol 79(1):161–175
18. Mitra S (2015) Models to explore remanufacturing as a competitive strategy under
duopoly. Omega 20:215–227
19. Mitra S, Webster S (2008) Competition in remanufacturing and the effects of gov-
ernment subsidies. Int J Prod Econ 111(2):287–298
20. Örsdemir A, Parlaktürk AK (2014) Competitive quality choice and remanufactur-
ing. Soc Sci Electron Publ 23(1):48–64
21. Pokharel S, Liang Y (2012) A model to evaluate acquisition price and quantity of
used products for remanufacturing. Int J Prod Econ 138(1):170–176
22. Shi J, Zhang G, Sha J (2011) Optimal production planning for a multi-product
closed loop system with uncertain demand and return. Comput Oper Res
38(3):641–650
23. Teunter RH, Flapper SDP (2011) Optimal core acquisition and remanufacturing
policies under uncertain core quality fractions. Eur J Oper Res 210(2):241–248
24. Thierry M, Salomon M, Nunen JV, Wassenhove LV (1995) Strategic issues in product recovery management. Calif Manag Rev 37(2):114–135
25. Wei S, Cheng D et al (2015) Motives and barriers of the remanufacturing industry
in China. J Clean Prod 94:340–351
26. Wu CH (2012) Product-design and pricing strategies with remanufacturing. Eur J
Oper Res 222(2):204–215
Beds Number Prediction Under Centralized
Management Mode of Day Surgery
1 Introduction
There are three types of day surgery management modes adopted overseas: a day surgery center inside a hospital, a free-standing day surgery center and an operating room at a clinic [7]. The mode mainly adopted in China is the day surgery center inside a hospital, under three different management modes: the centralized management mode, the decentralized management mode and a mode combining the two. The centralized management mode is an integrated mode in which the day surgery center, as the centralized management platform, assembles all patients together for centralized admission, scheduling and follow-up visits, while in the decentralized management mode day surgery is managed by individual departments.
In this paper, we study hospitals under the mode where centralized and decentralized management coexist. Under the centralized management mode, day surgery patients are admitted to the day surgery center, where their admission, operation and discharge are arranged. The day surgery center is self-contained, with wards
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 94
and operating rooms. The beds and operating rooms for day surgery patients are dedicated, not shared with elective patients. Under the decentralized mode, the wards and operating rooms for day surgery and elective surgery are managed by departments; beds and operating rooms have no boundary between day surgery patients and elective surgery patients, i.e. to some extent they are shared by both.
In this paper we conduct a simulation study of the centralized and decentralized management modes. Through a large number of simulation experiments, we reach the conclusion that the centralized management mode is more advantageous to both elective surgery patients and day surgery patients, while the decentralized management mode is more equitable. While the development of day surgery is encouraged, the centralized management mode is the better choice. But if the beds allocation is far from equitable under the centralized management mode, it will be exceedingly advantageous to one of the two types of patients. Thus, in this paper, we seek the number of beds to allocate to day surgery patients under the Centralized management mode that is advantageous to day surgery patients and has Equity Closest to the same situation under the Decentralized mode (CECD). Based on the large amount of simulation data, we screen out characteristic variables and use regression and a neural network to predict the number of beds allocated to day surgery patients under CECD, and then verify the effectiveness of the regression equation and the neural network model.
Technological and theoretical advances in computer science and mathematics offer new options to complement traditional statistical analysis [7]. A major focus of machine learning research is to automatically learn to recognize complex patterns and make intelligent decisions based on data. In the past decade, machine learning algorithms have revealed previously undetected trends in historical data [9,13]. In particular, artificial neural networks and traditional linear regression are effective methods, and several studies have used them to predict an objective function value. For example, Menke et al. [7] designed an artificial neural network to predict emergency department volume. Shi et al. [11] performed hierarchical linear regression and propensity score matching to test hospital/surgeon volume for associations with breast cancer surgery costs. Tsai [12] developed artificial neural network models to predict length of stay for inpatients with one of three primary diagnoses (coronary atherosclerosis, heart failure, and acute myocardial infarction) in a cardiovascular unit of a Christian hospital in Taipei, Taiwan. Li et al. [15] proposed an artificial neural network model to predict the severity of menopausal symptoms. Launay et al. [5] used artificial neural networks to predict prolonged length of hospital stay in older patients hospitalized in acute care wards after an emergency department visit. Gholipour et al. [4] used a neural network to predict survival and length of stay of trauma patients in the ward and the intensive care unit, and to assess the predictive power of the method. Wise et al. [14] used an artificial neural network model to provide vascular surgeons a discriminant adjunct for assessing the likelihood of in-hospital mortality on a pending ruptured abdominal aortic aneurysm admission. Some studies use linear regression and
1138 J. Yang et al.
B : total number of beds for day surgery patients and elective surgery patients;
BD : number of beds for day surgery patients;
BE : number of beds for elective surgery patients, BD + BE = B;
λD : arrival rate of day surgery patients (unit: person per day);
λE : arrival rate of elective surgery patients (unit: person per day);
OTD : operation time of day surgery patients (unit: hour);
OTE : operation time of elective surgery patients (unit: hour);
RTD : length of hospital stay after a day surgery operation (unit: hour);
RTE : length of hospital stay after an elective surgery operation (unit: hour);
WT1 : average waiting time before admission (unit: hour); in this paper we define WT1 = time of admission − time of arrival;
AD : number of day surgery patients arrived (unit: person);
AE : number of elective surgery patients arrived (unit: person);
SD : number of day surgery patients served (unit: person);
SE : number of elective surgery patients served (unit: person);
S : total number of surgery patients served (unit: person), S = SD + SE.
In this paper, we define the regular opening hours of each operation room as 8 h. At the end of each workday, operations still in progress are finished with extra work, newly opened operations are not allowed, and operation rooms are closed on weekends. In the simulation, OTD, RTD, OTE, and RTE are random numbers generated according to the mean values and standard deviations in Table 1.
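The operating-room rules above (new operations may start only within the regular 8 h day; work already in progress is finished with overtime) can be sketched in a simple Monte Carlo routine. The mean and standard deviation used below are placeholder values, not the Table 1 values:

```python
import random

def sample_duration(mean, sd, rng):
    """Draw a positive duration (hours) from a normal distribution,
    re-drawing until the sample is positive."""
    while True:
        d = rng.gauss(mean, sd)
        if d > 0:
            return d

def simulate_day(n_patients, ot_mean, ot_sd, rng, day_hours=8.0):
    """Run one operating room for one workday.

    New operations may start only within the regular 8-hour day; an
    operation already in progress at closing time is finished with
    overtime.  Returns (patients_served, overtime_hours).
    """
    clock = 0.0
    served = 0
    for _ in range(n_patients):
        if clock >= day_hours:  # no newly opened operations after closing
            break
        clock += sample_duration(ot_mean, ot_sd, rng)
        served += 1
    return served, max(0.0, clock - day_hours)

rng = random.Random(42)
served, overtime = simulate_day(10, ot_mean=1.5, ot_sd=0.5, rng=rng)
```

A full model would run many such days per bed configuration and accumulate AD, AE, SD, SE, and WT1 as defined above.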
The simulation period of both Model I and Model II is one year, and the conclusions we reach through a large number of simulation experiments are consistent. In this paper we only elaborate on the simulation result when D = 8, E = 30. The number of day surgery patients arrived (AD), number of elective surgery patients arrived (AE), number of day surgery patients served (SD), number of elective surgery patients served (SE), total number of surgery
patients served (S), and average waiting time before admission (WT1) are shown in Table 2.
According to Table 2, under the condition where BD = 4, Model I has 30 fewer day surgery patients served than Model II and 18 more elective surgery patients served. In total, Model I has 12 fewer patients served than Model II, and a 15 h longer average waiting time before admission.
Under the condition where BD = 5, Model I has 256 more day surgery patients served than Model II and 59 fewer elective surgery patients served. In total, Model I has 197 more patients served than Model II, and a 74 h shorter average waiting time before admission.
The results in Table 2 indicate that, under the centralized management mode, the bed and operation room resources of day surgery and elective surgery are independent of each other, and the mode is advantageous either to elective surgery patients (where BD ≤ 4 in Table 2) or to day surgery patients (where BD ≥ 5 in Table 2). Under the decentralized mode, bed and operation room resources are shared between day surgery patients and elective surgery patients, so it is more equitable. Under the centralized mode, when the allocation favors elective surgery patients (i.e., fewer beds are allocated to day surgery patients, and more elective surgery patients are served at the cost of fewer day surgery patients served than in the decentralized mode), fewer patients are served within an equal time period and the average waiting time before admission is much longer than in the decentralized mode. When the allocation favors day surgery patients (i.e., more beds are allocated to day surgery patients, and more day surgery patients are served at the cost of fewer elective surgery patients served than in the decentralized mode), many more patients are served within an equal time period and the average waiting time before admission is much shorter than in the decentralized mode. While the development of day surgery is encouraged, the centralized mode is the better choice. However, if the bed allocation is far from reasonable under the centralized mode, it becomes exceedingly advantageous to one of the two patient types, as in Table 2 when BD < 4 or BD > 5. For the convenience of applying the predictions in hospital management, the remainder of this paper uses simple calculations to explore the number of beds allocated to day surgery patients under CECD, i.e., the situation in Table 2 where BD = 5, which is advantageous to day surgery patients and has equity closest to the decentralized mode.
Keeping all the inputs and outputs in Table 3 unchanged, we enter them into the regression equation and look for redundant variables among the input parameters. Through stepwise regression, we identify MVOTD, SDOTD, MVOTE, SDOTE, SDRTD, and SDRTE as redundant variables. In hospital management practice, the operation time is usually much shorter than the length of hospital stay after surgery, which explains why MVOTD, SDOTD, MVOTE, and SDOTE are identified as redundant.
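The stepwise selection can be sketched as follows. This is a simplified backward elimination driven by adjusted R², not necessarily the exact criterion the authors used, and the variable names in the example are illustrative:

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an ordinary-least-squares fit with intercept."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def backward_eliminate(X, y, names):
    """Repeatedly drop the predictor whose removal most improves
    adjusted R^2; stop when no removal helps."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        base = adjusted_r2(X[:, keep], y)
        best, drop = max(
            (adjusted_r2(X[:, [j for j in keep if j != i]], y), i)
            for i in keep)
        if best <= base:
            break
        keep.remove(drop)
    return [names[i] for i in keep]
```

Predictors that genuinely explain the response survive elimination, while noise variables tend to be dropped.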
We let C0 to C5 represent the constant term and the coefficients of λD, λE, MVRTD, MVRTE, and B, respectively. The results of the linear regression are shown in Tables 4 and 5.
Table 4. Regression coefficients

C0 = 0.5248, C1 = 0.2386, C2 = −0.4378, C3 = 0.0036, C4 = −0.0008, C5 = 0.2726
As can be seen from Table 5, the regression equation is valid, so BCECD, the number of beds allocated to day surgery patients under CECD, can be represented by formula (1):

BCECD = 0.5248 + 0.2386 λD − 0.4378 λE + 0.0036 MVRTD − 0.0008 MVRTE + 0.2726 B.   (1)

The actual number of beds allocated to day surgery patients under CECD is an integer; thus, we rewrite Eq. (1) by rounding to the nearest integer (adding 0.5 and taking the floor), and denote the result B*CECD in formula (2):

B*CECD = ⌊0.5248 + 0.2386 λD − 0.4378 λE + 0.0036 MVRTD − 0.0008 MVRTE + 0.2726 B + 0.5⌋.   (2)
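Using the coefficients reported in Table 4, the prediction can be evaluated directly. The input values in the example below are illustrative, not taken from the paper's data:

```python
import math

def predict_beds_cecd(lam_d, lam_e, mv_rtd, mv_rte, total_beds):
    """Number of beds to allocate to day-surgery patients under CECD,
    using the Table 4 coefficients and the rounding rule of Eq. (2)."""
    b = (0.5248 + 0.2386 * lam_d - 0.4378 * lam_e
         + 0.0036 * mv_rtd - 0.0008 * mv_rte + 0.2726 * total_beds)
    return math.floor(b + 0.5)  # round to the nearest integer
```

For example, with the illustrative values λD = 8, λE = 30, MVRTD = 4 h, MVRTE = 120 h, and B = 58, the prediction is 5 beds.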
The actual and predicted numbers of beds allocated to day surgery patients under CECD, calculated according to formula (2), are shown in Figs. 1 and 2. The pentagrams in Fig. 2 denote entirely accurate predictions. From Figs. 1 and 2, we can see that not many predictions are entirely accurate, but the maximum error is 3. The key parameters reflecting the regression fit are R² = 0.9162, F = 205.5406, and p = 0.0000. R² = 0.9162 indicates that the fit of the model is satisfactory, and p < α indicates that the selection of every variable in the regression equation is significant. The results in Table 5 indicate that Eq. (1) has a satisfactory fit. The regression equation can therefore provide decision support for bed allocation under the centralized management mode.
In the next section, we predict B*CECD with a neural network method.
(Tables: connection weights and thresholds of the trained neural network.)
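A minimal sketch of the kind of one-hidden-layer network used for this prediction, assuming tanh hidden units and a linear output trained by plain full-batch gradient descent; the paper does not specify its architecture or training algorithm, so every choice below is illustrative:

```python
import numpy as np

def train_mlp(X, y, hidden=5, lr=0.05, epochs=3000, seed=0):
    """Fit a one-hidden-layer regression network (tanh hidden units,
    linear output) by full-batch gradient descent on the mean squared
    error; returns a prediction function."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        err = (H @ W2 + b2) - y             # prediction error
        gW2 = H.T @ err / n
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1.0 - H ** 2)  # back-prop through tanh
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2
```

In practice the inputs (λD, λE, MVRTD, MVRTE, B) would be normalized before training, and the network output rounded to an integer bed count as in Eq. (2).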
5 Conclusion
In this paper, we first conducted a large number of simulation experiments using Matlab and showed that the centralized management mode is more advantageous to either elective surgery patients or day surgery patients, depending on the bed allocation.
The regression equation achieves a satisfactory fit: the maximum error is 3, only a few recorded errors equal 3, and most errors are 1 or 2 besides the entirely accurate predictions. Moreover, the bed prediction equation in this paper is simple and convenient to calculate. We then used an artificial neural network for the same prediction, and its results are similar to those of the linear regression: the maximum error is 3, only a few recorded errors equal 3, and most are 1 or 2 besides the accurate predictions. Both the linear regression equation and the neural network can provide decision support for bed allocation under the centralized management mode.
A limitation of this paper is that we only consider the number of beds without considering the limitation of operation room resources. In future work we will take operation room resources into account to enrich our study, and we will explore other methods to improve the accuracy of the prediction.
References
1. Abdullah L (2014) Modeling of health related quality of life using an integrated fuzzy inference system and linear regression. Procedia Comput Sci 42:99–105
2. Aguiar F, Torres R et al (2016) Development of two artificial neural network models to support the diagnosis of pulmonary tuberculosis in hospitalized patients in Rio de Janeiro, Brazil. Med Biol Eng Comput 54:1–9
3. Bonellie S (2012) Use of multiple linear regression and logistic regression models to investigate changes in birth weight for term singleton infants in Scotland. J Clin Nurs 21:92–114
4. Gholipour C, Rahim F et al (2015) Using an artificial neural networks (ANNs) model for prediction of intensive care unit (ICU) outcome and length of stay at hospital in traumatic patients. J Clin Diagn Res 9:1096–1105
5. Launay C, Rivière H et al (2015) Predicting prolonged length of hospital stay in older emergency department users: use of a novel analysis method, the artificial neural network. Eur J Intern Med 26:478–482
6. Luo L, Luo Y et al (2014) Difference analysis of day surgery's and elective surgery's duration associated with surgery. Stat Inf Forum 29:104–107 (in Chinese)
7. Menke N, Caputo N et al (2014) A retrospective analysis of the utility of an artificial neural network to predict ED volume. Am J Emerg Med 32:614–617
1 Introduction
Nowadays, the consequences of the Great Recession are still quite noticeable, a downturn that started in 2008 when Lehman Brothers Holdings Inc., an investment bank, declared bankruptcy. Several other banking institutions followed the same path after the beginning of the crisis. After that period, companies focused on a clear objective: using innovation to become more competitive.
The automotive industry has also suffered financial distress, leading manufacturing organizations to think of new production methods and a better application of the tools used on the shop floor. This allows companies to pursue operational excellence by improving their performance,
1148 A. Simas and V. Cruz-Machado
providing high quality products earlier in order to reduce costs [2]. Worldwide, companies have started to encourage a Lean culture among their employees due to the competitive nature of this industry, driven by emerging markets that produce similar products at lower prices. To face those markets, the automotive industry must be constantly updated, developing new products for customers.
Lean Thinking is the basis of implementation within companies, providing a guide on how they can do more with less: less effort, equipment, time, and space. It can be divided into a set of five principles: value, value stream, flow, pull, and perfection [10].
Implementing Lean can be a difficult process, but when well applied it becomes an essential characteristic of an organization, playing a crucial role in reaching the intended results. Some authors consider this philosophy determinant for the survival and constant innovation of organizations. One tool from the Lean culture is Visual Management, an important shop-floor tool, defined as signs and other forms of visual information used to simplify the workplace and make it easy to recognize abnormalities in processes [4].
2 Visual Management
Visual Management is a Lean tool whose objective is to give visual information or display requirements to set directions [3]. This tool is frequently used in manufacturing industries but has expanded to other business sectors [1]. The concept was created to highlight abnormalities directly in the workplace, thus helping operations and processes as soon as a problem occurs [8]. Giving the right information to employees at the right moment is a vital characteristic for improving the performance of a company [3]. That kind of information can be provided by signs, labels, or a color code. The use of this type of information eliminates guessing, searching, and the accumulation of information or material [6].
Many authors, such as Wilson [9], do not use the term "visual management" but rather transparency, because this tool allows the observation of processes in real time. An operator can see what is happening in a process and can change or adjust it if an abnormality occurs.
Within Visual Management there are tools that support operators' tasks and highlight any abnormalities in processes. There are two different sets of tools [3]:
3.1 Methodologies
A methodology characterizes all the stages to follow in a process. It is a detailed explanation of all the actions of the work until the expected result is reached. For better support, a Standardization Methodology divided into two sections was developed:
• Current Situation Analysis Methodology - refers to all stages that precede the analysis of the instruments of some plants;
• Future Design Methodology - refers to the creation of improvements.
All stages must be strictly followed for the success of a project; if the implementer does not respect the stages, the results cannot be fully trusted. This Methodology was designed for organizations that have multiple plants worldwide, where tools are often unequal, but it can be used in any organization with few adjustments.
(1) Current Situation Analysis Methodology
This Methodology is the initial phase of standardization, representing the analysis of the information that the implementer collects from an organization (Fig. 1: research of organization → data collection → data analysis).
1 Research of organization
This phase must be the most detailed one, covering the available resources, documents, and procedures used by the organization in the composition of the workstation, which incorporates the visual controls. The person who wants to implement this methodology should find out whether the organization has a Lean policy that can be matched with it.
2 Visualize tools at the shop floor
The objective of this stage is to become familiar with the tools being used at the shop floor by seeing them in operation. This knowledge is the transition from the theory studied in the previous phase to practice. Seeing the tools at the shop floor can clarify misunderstandings of the theoretical information from documents and can be a way to identify abnormalities in tool applications.
3 Identification of possible tools to improve
After observing the tools in operation, the implementer should use critical sense to identify possible tools for improvement. It is important to keep in mind the limitations of the organization; starting with a small number of improvements
A Standardization Methodology 1151
is better than starting with a huge change in tools. Employees tend to accept changes better if they are implemented slowly.
4 Data Collection
This is the most critical phase of the whole methodology, because at this stage the implementer gathers the information from the plants related to the tools for improvement. This data collection combines interviews, by email or telephone, with photos to support what was said, and visits to other plants of the organization to check how they apply the tools. The characteristics requested from the plants concern how the tool is used, its appearance, for whom it is intended, and where it is used on the shop floor.
5 Data analysis
After the data collection is concluded, the information from all plants should first be placed on paper (as a draft). When the draft is complete, the implementer can transfer it to a computer and print it; printing in large formats is recommended for easier comprehension. This arrangement of the information provides a good view of how the tools are being used and whether they are similar to those of the other plants. When a plant has a good practice for some tool, it should be used, in whole or in part, for the improvement of that tool.
(2) Future Design Methodology
This part of the Methodology is the development phase, which includes the conception of improvements and their implementation at the shop floor (Fig. 2: develop improvements → improvement implementation).
that were subject to improvements at the shop floor. This is the last step of information collection and analysis regarding plants from different regions.
This subchapter gives the reader a better comprehension of this methodology applied to an organization from the automotive industry. Two topics to which the methodology can be applied will be approached: Stopping Points, a subject from routing, and the color code of the plants.
Every plant has routes, and these routes need stopping points: specific locations near the workplace where the tugger drivers collect or replace materials during the route. After getting the information from the plants, through photos of the application of the tools, it was verified that the tool's presentation differs across all plants of the organization. Figures 3, 4, 5, and 6 show the differences in stopping points between some plants.
The existence of several ways of presenting the stop signs shows the need to standardize this tool, so that every plant can use the same tool.
Another concept in which many plants differ greatly is the color code, normally used in floor marking. This tool is important for a well-organized workplace, as it can prevent accidents and injuries. Many organizations use these markings with the intention of enhancing the visual management of the organization. To obtain the data from the plants, interviews by phone were held, with photos to support what was told in the interviews. The photos in Figs. 7 and 8 show that there are enormous differences between the color codes of the plants.
Fig. 7. Orange color for dangerous material [plant 1] (at left) and rework [plant 2] (at right)
When all the information was collected, it was observed that there was no common color code for all plants; every plant has its own code.
3.3 Results
The improvements for the tools mentioned above, Stop Points and Color Code, will now be described.
Stop Points are a very useful visual tool in routing: they let the tugger driver know which route a stop belongs to and the number of the stop (stops are usually numbered starting at 1 for the stop nearest the warehouse, and so on). The decision was to design an easily visible tool with all the necessary information, so that the driver has no issues performing the route. The new tool is shown in Fig. 9.
In this case, the tool shows the name of the route it belongs to, a frame with the color of the route, and the number of the stop; the other aspects are purely aesthetic. For the design of this tool, the criteria established by the methodology were considered. To compare this tool with the ones already implemented, the methodology requires a comparison table:
Characteristics New tool Tool plant 1 Tool plant 2 Tool plant 3 Tool plant 4
Fast interpretation ♥ ♥ ♣ ♣ ♣
Visible ♥ ♥ ♣ ♥ ♥
Low cost ♥ ♣ ♥ ♣ ♥
This tool can be quite financially viable: it is made of paper (A4 format), with the colors of the respective routes, allowing a fast interpretation by the tugger's driver (Table 2).
As mentioned before, the color code is essential to mark workplaces and pedestrian access areas. Those markings should be visible and noticeable by the operators, employees, and visitors who walk on the shop floor. If a workplace has a tidy
4 Conclusions
Standardization of visual management within an organization with many plants is a complex and long process. It requires transforming an entire organization, from the tools being standardized to the mindset of the employees who will work with the new tools. Improvements should be requested whenever a tool that can be improved is identified.
The benefits of having all visual management standardized are huge: a reduction of the time wasted on confusing tools; increased efficiency of operators, since the tools are designed for them; a simpler work environment; and support for the pursuit of excellence. Normally the output (the improved tools) manages to convey the basic principles of visual management, through "eye management", with simplicity, transparency, and clarity always prevailing.
References
1. Bateman N, Philp L, Warrender H (2016) Visual management and shop floor
teams - development, implementation and use. Int J Prod Res 7543:1–14
2. Belekoukias I, Garza-Reyes JA, Kumar V (2014) The impact of lean methods and tools on the operational performance of manufacturing organisations. Int J Prod Res 52(18):5346–5366
Carlos Quiterio Gómez Muñoz(B) , Alfredo Peinado Gonzalo, Isaac Segovia Ramirez,
and Fausto Pedro Garcı́a Márquez
1 Introduction
All the energy produced on Earth has been obtained, directly or indirectly, from the Sun; even fossil fuels come from the decomposition of plants that previously needed the sun to perform photosynthesis. These fossil fuels are oil, coal, and natural gas, and their reserves are limited.
Since the Industrial Revolution, fossil fuel consumption has increased without considering the environmental effects, such as global warming and climate change. As fossil fuels are finite resources, alternatives must be sought to obtain energy and to avoid the pollution they produce. These measures are being carried out globally, led by the Intergovernmental Panel on Climate Change (IPCC).
Renewable energies are being developed and employed to achieve clean production; e.g., in Spain they produce 28.6% of the energy consumed [9,16]. Renewable energies are those based on the use of virtually endless resources, e.g., the sun (Fig. 1), wind, or ocean currents, or on resources that are renewed periodically if used sustainably, such as biomass or biofuels. In general, their production is more expensive than
1162 C.Q.G. Muñoz et al.
Fig. 1. China-Pakistan project for the Quaid Azam solar plant installed in Pakistan (http://www.qasolar.com/)
fossil fuels, but it does not generate polluting emissions during production. Renewables include wind energy [14,22], tidal, solar (Fig. 1), biomass and biofuels, geothermal, and hydraulic energy.
2 Infrared Thermography
2.1 Radiation
In physical terms there are only three methods of heat transmission: conduction, convection, and radiation. In the case of radiation, a transmission medium is not necessary;
Online Fault Detection in Solar Plants 1163
the energy is transmitted due to the temperature of the emitting body. The radiation emitted changes depending on the temperature, and the Stefan-Boltzmann law, Eq. (1), is used to quantify it. For real bodies it must be adjusted by adding the emissivity parameter to the equation:
EN(T) = σ · T⁴,   (1)
where EN is the emittance of the body, σ is the Stefan-Boltzmann constant (5.67 × 10⁻⁸ W/(m² K⁴)), and T the body temperature. This emittance is that of a black body, which
must be corrected for real bodies through the emissivity. The Planck law is given by
Eq. (2).
ENλ(λ, T) = C1 / [λ⁵ (e^(C2/(λT)) − 1)],   (2)
where ENλ is the spectral emittance, C1 and C2 are the radiation constants, λ the wavelength, and T the body temperature. Eq. (2) shows that most of the radiation emitted by bodies at low temperatures lies in the infrared region [15].
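Equations (1) and (2) can be evaluated numerically as follows; the radiation constants C1 and C2 take their standard values, which the text does not list explicitly:

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
C1 = 3.7418e-16   # first radiation constant, W m^2
C2 = 1.4388e-2    # second radiation constant, m K

def total_emittance(T, emissivity=1.0):
    """Eq. (1): total emittance of a (grey) body at temperature T (K),
    corrected by the emissivity factor."""
    return emissivity * SIGMA * T ** 4

def spectral_emittance(wavelength, T):
    """Eq. (2): black-body spectral emittance at a given wavelength (m)
    and temperature T (K)."""
    return C1 / (wavelength ** 5 * (math.exp(C2 / (wavelength * T)) - 1.0))
```

At 300 K a black body emits about 459 W/m², and its spectral emittance at 10 µm far exceeds that at 1 µm, which is why infrared sensing is appropriate at these temperatures.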
Emissivity is the proportion of energy that a body can emit relative to the total: for a black body (the maximum emitter) it equals one, while for any other body this magnitude is smaller and depends on the temperature. In addition, the characteristic wavelength of a body also depends on its temperature, generating the electromagnetic radiation spectrum.
The thermal behavior of a real body differs from that of a black body. While a black body absorbs all the energy that reaches its surface and emits equally throughout the radiation region, a real body does not absorb all the energy and does not emit over the entire electromagnetic spectrum. Part of the incident radiation passes through a real body or is reflected at its surface. This behavior also depends on the wavelength being studied, and on this the concept of emissivity depends, given by Eq. (3): emissivity is the ratio of the energy that a body emits to the total that it could emit at a certain temperature and wavelength [15].
ε_θ,ϕ,λ(θ, ϕ, λ, T) = E_θ,ϕ,λ(θ, ϕ, λ, T) / EN(θ, λ, T) = I_θ,ϕ,λ(θ, ϕ, λ, T) / IE,N(λ, T),   (3)
where ε is the emissivity, θ and ϕ the emission angles, E the emittance, and I the radiation intensity of the body. For this research work, infrared radiation is the most relevant, since it is the dominant thermal radiation in this temperature range; therefore, the camera and the sensor are designed to collect this type of radiation.
2.2 Applications
In this paper, the main function of infrared thermography is the visualization of elements whose surface temperature variations reveal their state [2,3,12]. Its applications include the following:
(1) Electrical and mechanical systems: It is possible to observe the hot spots of the sys-
tems and areas subjected to higher thermal stresses by comparison on the surface
of the system [13].
(2) Structures: Any irregularity in the temperature of the facade can correspond to
different conditions in buildings, such as poor insulation, the emittance of an area
with a greater amount of bricks than concrete, or even an insect nest within the
facade.
(3) Welds: Temperature variations in a pipe are usually due to areas where welds have been made. If these variations show a very irregular appearance, a bad union is the probable cause.
(4) Surveillance: Control of emissions, traffic, fire prevention, etc.
(5) Medicine: Monitoring of diseases and anomalies in the human body.
In this work, infrared thermography is performed with a non-radiometric camera to check the results obtained with the sensors.
3 Experimental Platform
3.1 Radiometer
Radiometers are used to check the temperature and the radiance of the measured targets. Both sensors are used to cross-check the data obtained with each one in the different case studies.
(1) Campbell SI-111 Sensor
This sensor is connected to an Arduino board, which processes the voltage data to obtain the temperature in degrees Celsius. The system is capable of making accurate measurements of the bodies within its field of view. The SI-111 precision infrared radiometer features a thermistor to measure its internal temperature and a thermopile, behind a germanium lens, for the target temperature.
Using the Stefan-Boltzmann law of radiation and the voltages of the thermopile and the thermistor, the target temperature is calculated [23].
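A sketch of this target-temperature calculation, assuming the net thermopile signal is proportional to T_target⁴ − T_body⁴ (a Stefan-Boltzmann energy balance). The calibration coefficients m and b below are hypothetical: the real ones are sensor-specific and, for the SI-111, depend on the detector temperature:

```python
def target_temperature_K(t_body_K, v_thermopile_mV, m, b):
    """Infer the target temperature (K) from the radiometer body
    temperature and the thermopile voltage, assuming the net signal is
    proportional to T_target^4 - T_body^4."""
    return (t_body_K ** 4 + m * v_thermopile_mV + b) ** 0.25
```

With zero thermopile voltage (and b = 0) the target is at the sensor body temperature; a positive voltage indicates a warmer target.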
The temperature measured by the sensor is the average of the temperatures in its field of view, receiving between 95% and 98% of the infrared radiation from the field of view and between 2% and 5% from outside it. Figure 2 shows the field of view of the SI-111 sensor, with a half-angle of 22.5°. The field of view depends on the distance to the area and the inclination with respect to it (see Fig. 2).
This sensor was chosen because of its high accuracy, its reaction rate to changes in temperature, and the large range of temperatures at which it can operate. Its size and weight allow great manageability, and its installation on the unmanned aerial vehicle allows the global system to perform more efficient and flexible inspections.
Fig. 3. Arduino Uno R3 (a) and Wi-Fi Shield expansion card (b)
The Arduino Uno R3 (Fig. 3a) is a motherboard that uses the ATmega328 microcontroller and is powered either by a USB cable connected to a computer or by external power from a battery. The Wi-Fi Shield is an Arduino expansion card that allows the transmission of data over a wireless Wi-Fi connection (Fig. 3b).
The data obtained by the sensor are processed and sent via Wi-Fi to the online platform ThingSpeak to monitor the results (Table 1).
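The upload step can be sketched as the construction of a ThingSpeak channel-update request; the assignment of the temperature to field1 is an assumption for illustration, and the API key shown is a placeholder:

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(api_key, temperature_C):
    """Build the ThingSpeak channel-update request URL; field1 is
    assumed to carry the radiometer temperature reading."""
    query = urlencode({"api_key": api_key, "field1": round(temperature_C, 2)})
    return f"{THINGSPEAK_UPDATE}?{query}"

url = build_update_url("DEMO", 23.456)
```

On the Arduino side, the Wi-Fi Shield would issue an equivalent HTTP GET against this URL after each measurement.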
The camera is non-radiometric and is used to check the data obtained with the sensors and to see the temperature differences of the observed pieces. The camera assigns a determined colour to each level of radiation that it receives; these levels are calculated for each pixel of the camera's resolution, creating a thermographic picture (Figs. 4 and 5).
These panels were used in the experiments to check their temperature and the voltage at their terminals. Their technical characteristics are shown in Table 2.
Fig. 6. Representation of the air vehicle taking a measurement with the wireless sensor mounted
on the Gimbal mount
This system allows solar plants to be inspected in a totally automatic and autonomous way. It can cover large areas, and a very large park can be inspected in sections. The system sends all the information about the state of the panels (dirt and defects) online, generating alarms and reports for each module.
4 Case Study
The objective of the experiments was to detect dirt on the solar panels [4]. The experiments were carried out in three different scenarios: in the first, the solar panel was clean; in the second, mud was added to half of the panel; in the third, the panel was totally covered by mud. In each experiment, the temperature values collected by the radiometer and the voltage generated by the solar panel were recorded.
The incidence of the sun is crucial for the emissivity values collected by the radiometer. For this reason, three case studies were carried out at different hours to study the influence of the sun on the results. The first case study was performed at 10:00 am, the second at 12:00 noon, and the last in the absence of direct light because of clouds (Figs. 7, 8 and 9).
Experiments were performed at 10:00 am to check the temperature reached by the panels when, due to the incidence of light, the panel does not reach its maximum power generation. Table 3 shows the obtained temperature and generated voltage for each panel.
An evident temperature difference is observed between the clean panel and the panel with dirt on half of its surface. There are also minor differences between the half-dirty panel and the panel totally covered by mud. Figure 10 shows the temperatures obtained in the series of experiments, represented by columns; the ambient temperature series has been added as a line to contrast with the temperatures acquired by the panels in each case.
These relationships are proportional to the voltages generated by the solar panel. Figure 11 shows the influence of the mud on the voltage at 10:00 am [5].
These experiments were performed to observe the differences in temperature and energy production when the panel reaches its maximum production, i.e. when the light strikes the panels perpendicularly (Table 4).
The temperature variations are larger than in the first case study, as is the voltage variation. This is because the intensity of the light is much greater, which makes the cases where the panel is covered with dirt more evident. Greater temperature differences, as well as higher temperatures, are observed in Fig. 12.
A greater voltage difference is shown in Fig. 13, due to a greater inequality of incident sunlight between the clean panel and the completely covered one.
Fig. 10. Comparison between the different panel temperatures in case study 1
Fig. 11. Comparison between different panel temperatures and voltages in case study 1
Fig. 13. Comparison between different panel temperatures and voltages in case study 2
The temperature differences are lower, with a large margin of error that does not allow the condition of the panel to be determined exactly. Figure 14 shows the small temperature variations between the different experiments.
The voltages are similar in all case studies. Under these conditions, it is concluded that the system is not effective for detecting dirt on the solar panels.
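The decision logic of the three case studies can be condensed into a small classifier: under direct sunlight, a dirty panel runs hotter than ambient by a margin, while under cloud cover the method is unreliable. The threshold value is an assumption for illustration, not a figure reported in the paper.

```python
# Illustrative classifier for the case-study logic above. The 6 degC
# threshold is an assumed value, not one measured in the experiments.

def classify_panel(panel_temp_c: float, ambient_temp_c: float,
                   direct_sunlight: bool, dirty_delta_c: float = 6.0) -> str:
    """Flag a panel as 'dirty' when its excess over ambient passes a threshold.

    Returns 'unreliable' without direct sunlight, matching the observation
    that the method is not effective under cloud cover.
    """
    if not direct_sunlight:
        return "unreliable"
    excess = panel_temp_c - ambient_temp_c
    return "dirty" if excess >= dirty_delta_c else "clean"

classify_panel(48.0, 35.0, True)   # large excess over ambient -> 'dirty'
classify_panel(38.0, 35.0, True)   # small excess -> 'clean'
classify_panel(30.0, 28.0, False)  # no direct light -> 'unreliable'
```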
Fig. 14. Comparison between the different panel temperatures in case study 3
Fig. 15. Comparison between different panel temperatures and voltages in case study 3
5 Conclusions
Dirt and dust on solar panels are a common problem in photovoltaic plants, which are usually located in desert areas with abundant sand and dust. A novel non-destructive test system is proposed in this paper, based on a radiometer, an Arduino, a Wi-Fi Shield, and a UAV for inspecting the state of the photovoltaic panels.
The low weight and cost of the system can reduce the operation and maintenance costs of the whole solar plant. The system can be automated, reducing the inspection time and costs (Fig. 15).
Three different conditions of the solar panel were analysed: clean, half covered by mud, and totally covered. The infrared radiometry and the voltage of the panels were measured. The experiments were performed under three different solar conditions to increase the accuracy of the experiments: at 10:00 am, at 12:00 noon, and under clouds without direct sunlight. The absence of direct light does not allow the three cases to be distinguished.
Acknowledgements. The work reported herewith has been financially supported by the Spanish Ministerio de Economía y Competitividad, under Research Grant Ref.: RTC-2016-5694-3.
References
1. Acciani G, Simione G, Vergura S (2010) Thermographic analysis of photovoltaic panels. In:
International conference on renewable energies and power quality (ICREPQ10). Granada,
Spain, March, pp 23–25
2. Ancuta F, Cepisca C (2011) Fault analysis possibilities for pv panels. In: International youth
conference on energetics, pp 1–5
3. Bazilian MD, Kamalanathan H, Prasad DK (2002) Thermographic analysis of a building
integrated photovoltaic system. Renew Energ 26(3):449–461
4. Dorobantu L, Popescu M et al (2011) The effect of surface impurities on photovoltaic panels.
In: International conference on renewable energy and power quality
5. Fares Z, Becherif M et al (2013) Infrared thermography study of the temperature effect on
the performance of photovoltaic cells and panels. In: Sustainability in energy and buildings,
Springer, pp 875–886
6. García Márquez FP, Muñoz G et al (2014) Structural health monitoring for concentrated solar plants. In: 11th International conference on condition monitoring and machinery failure prevention technologies. Manchester, UK
7. Gómez CQ, Villegas MA et al (2015) Big data and web intelligence for condition monitoring:
a case study on wind turbines. In: Handbook of research on trends and future directions in
big data and web intelligence; information science reference. Hershey
8. Gómez Muñoz CQ, Garcı́a Márquez FP (2016) A new fault location approach for acoustic
emission techniques in wind turbines. Energies 9(1):40
9. Heras-Saizarbitoria I, Cilleruelo E, Zamanillo I (2011) Public acceptance of renewables
and the media: an analysis of the spanish pv solar experience. Renew Sustain Energ Rev
15(9):4685–4696
10. Ruiz de la Hermosa González-Carrato R, Garcı́a Márquez FP et al (2015) Acoustic emission
and signal processing for fault detection and location in composite materials. In: Global
cleaner production & sustainable consumption conference. Elsevier
11. Jiménez AA, Muñoz CQG et al (2017) Artificial intelligence for concentrated solar plant
maintenance management. Springer, Singapore
12. Maldague XP (2002) Thermographic inspection of cracked solar cells. In: Aerosense, p 185
13. Maldague XPV (2002) Introduction to ndt by active infrared thermography. Mater Eval 60(9)
14. Márquez FPG, Tobias AM et al (2012) Condition monitoring of wind turbines: techniques
and methods. Renew Energ 46(5):169–178
15. McAdams WH (1958) Heat transmission. McGraw-Hill
16. Montoya FG, Aguilera MJ, Manzano-Agugliaro F (2014) Renewable energy production in
spain: a review. Renew Sustain Energ Rev 33(2):509–531
17. Munoz CQG, Arenas JRT, Marquez FPG (2014) A novel approach to fault detection and
diagnosis on wind turbines. Glob Int J 16(6):1029–1037
18. Muñoz CQG, Marquez FPG et al (2015) A new condition monitoring approach for mainte-
nance management in concentrate solar plants. Springer, Heidelberg
19. Muñoz CQG, Márquez FPG, Tomás JMS (2016) Ice detection using thermal infrared radiom-
etry on wind turbine blades. Measurement 93:157–163
20. Muñoz G, Quiterio C et al (2015) Energy environment maintenance management. In: GFEECC 2015 International conference for energy, environment and commercial civilization. Chengdu, China
21. Papaelias M, Cheng L et al (2016) Inspection and structural health monitoring techniques for concentrated solar power plants. Renew Energ 85:1178–1191
22. Pliego Marugán A, Garcı́a Márquez FP, Pinar Pérez JM (2016) Optimal maintenance man-
agement of offshore wind farms. Energies 9(1):46
23. Quinn TJ, Martin JE (1985) A radiometric determination of the stefan-boltzmann constant and thermodynamic temperatures between −40 °C and +100 °C. Philos Trans R Soc A 316(1536):85–189
24. Ramirez IS, Muñoz CQG, Marquez FPG (2017) A condition monitoring system for blades
of wind turbine maintenance management. Springer
Demand Response Mechanism of a Hybrid
Energy Trading Market for Residential
Consumers with Distributed Generators
1 Introduction
The smart grid (SG), a complex advanced electricity system, is capable of meeting the growing energy demand in a reliable, sustainable, and economical manner [3]. The advanced two-way communication infrastructure of smart grids and efficient demand response (DR) mechanisms, through which energy consumers and energy sellers can schedule their energy consumption and energy supply, respectively, allow the SG to achieve better performance than the conventional grid. In particular, bidirectional energy trading is enabled by the expanded use of advanced smart metering systems in the future SG and by the deployment of distributed energy sources. Accordingly, a precise design of a control mechanism for both economic optimization and energy scheduling for energy consumers and sellers is needed.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_97
1176 N. Naseri et al.
tial users and a two-level game was proposed to model the interaction between
these two levels. The competition among the utility companies was formulated
as a non-cooperative game, while the interaction among the residential users
was formulated as an evolutionary game. Then, they proved that the proposed
strategies were able to make both games converge to their own equilibrium. Wu
et al. [10] focused on a hybrid energy trading market consisting of an external
utility company and a local trading market managed by a local trading center.
First, they quantified the respective benefits of the energy consumers and the
sellers from the local trading and then investigated how they can optimize their
benefits by controlling their energy scheduling in response to the LTC’s pricing.
The main contributions of this paper are summarized as follows. The paper first models a hybrid trading market comprising multiple generator companies, multiple energy consumers, an external power grid, and a local trading center. On the consumers' side, some distributed generators are considered. The consumers' appliances are categorized into three groups according to their features. A battery is considered as a storage facility for consumers so as to reduce the peak-to-average load ratio.
The paper is organized as follows. Section 2 presents an overview of the system and the mathematical model. Section 3 presents a numerical example and its results. Finally, the paper is concluded in Sect. 4.
2 Problem Definition
In this paper, a real-time scheme is considered for a hybrid trading market in order to reduce the peak-to-average load ratio and maximize each user's objectives. The hybrid trading market consists of a number of energy consumers, energy sellers or generator companies (GenCos), a local trading center (LTC), and the conventional power grid. Figure 1 shows a simplified illustration of the hybrid energy system.
Every energy consumer and energy seller is connected to both the local trading center and the conventional power grid. Energy consumers are residential consumers with different types of electrical appliances, which can be categorized into three main groups. The first category, A, named non-flexible appliances, includes background appliances that need to be used at specific times and cannot be shifted; refrigerators and lighting appliances are examples of this group.
The second category, B, named semi-flexible appliances, consists of appliances whose time of use can be shifted within a specific period of time. Figure 2 shows an example of this type of appliance, where α and β are the start and end of the preferred time interval, respectively, for an appliance of type B.
The third category, C, called flexible appliances, includes appliances whose time of use can be shifted within a specific period of time and which can also be split into a definite number of sub-activities. Figure 3 shows an example of an appliance of type C.
It is assumed that a number of energy consumers have distributed generators (DGs). These DGs consist of a photovoltaic (PV) system with a definite capacity and load availability, and a battery with a specific capacity. The battery is used to store energy surpluses during off-peak time; therefore, it can be charged via the PV system, the LTC, and the grid, and it is discharged only to supply power to the load during peak demand hours.
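The battery bookkeeping just described (later formalized as constraint (12)) can be sketched as a one-step state-of-charge update: the next-hour level is the current level net of self-discharge, plus charged energy scaled by the charging efficiency, minus delivered energy divided by the discharging efficiency. The default parameter values are illustrative, not taken from the paper.

```python
# Sketch of the battery balance: symbols mirror the paper's parameters
# (self-discharge rate eps, charge/discharge efficiencies eta).

def next_soc(e_b: float, charged_kwh: float, delivered_kwh: float,
             eps: float = 0.01, eta_charge: float = 0.95,
             eta_discharge: float = 0.95) -> float:
    """E^B_{h+1} = (1 - eps) * E^B_h + eta_charge * in - out / eta_discharge."""
    return (1.0 - eps) * e_b + eta_charge * charged_kwh - delivered_kwh / eta_discharge

# Start at 2 kWh, charge 1 kWh, deliver 0.5 kWh over one hour:
soc = next_soc(e_b=2.0, charged_kwh=1.0, delivered_kwh=0.5)
```

In the full model this update would be coupled with the capacity bound and the minimum-residue coefficient, so the resulting level stays within the battery's rated limits.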
In the GenCos’ side, it is assumed that in every time slot h, there is a certain
and constant amount of load available.
The objective of this problem is to minimize the energy cost for consumers and maximize the income for GenCos. To implement the model, a day is divided into 24 time slots. To model the problem, the following notation is defined:
Indices:
Parameters:
$[\alpha_{a_B}, \beta_{a_B}]$ : Start and finish time of appliance $a_B$;
$[\lambda_{a_C,k}, \gamma_{a_C,k}]$ : Start and finish time of sub-activity $k$ of appliance $a_C$;
$l_{a_B}$ : Activity time of appliance $a_B$;
$l_{a_C,k}$ : Activity time of sub-activity $k$ of appliance $a_C$;
$\overline{TL}_{a_C,k}$ : Upper limit of the gap between two sub-activities of appliance $a_C$;
$\underline{TL}_{a_C,k}$ : Lower limit of the gap between two sub-activities of appliance $a_C$;
$d_{i,h}$ : Total energy demand of consumer $i$ at time $h$;
$PV_{i,h}$ : Available load from the PV system of customer $i$ at time $h$;
$E_{a_A}$ : Total energy needed for appliances of type A;
$\varepsilon$ : Self-discharge rate of the battery;
$\eta^{discharge}$ : Discharging efficiency of the battery;
$\eta^{charge}$ : Charging efficiency of the battery;
$E^{B}_{capacity}$ : Rated capacity of the battery (kWh);
$\mu$ : Minimum residue coefficient of the battery;
$B_{a_B,h}$ : Energy consumption of appliance $a_B$ at time $h$;
$M$ : A large number;
$d_{j,h}$ : Available load from GenCo $j$ at time $h$;
$C_{a_C,k,h}$ : Activity $k$ of appliance $a_C$ at time $h$;
$p_h$ : Energy sell-out price of the grid at time $h$ (\$/kW);
$W_{i,h}(\cdot), W_{j,h}(\cdot)$ : Linear functions for energy transmission loss;
$q_h$ : Buy-back price of the grid at time $h$ (\$/kW);
$\tilde{p}_h$ : Energy sell-out price of the LTC at time $h$ (\$/kW);
$\tilde{q}_h$ : Buy-back price of the LTC at time $h$ (\$/kW);
$u_h$ : Buy-back price of the grid from consumers at time $h$ (\$/kW);
$\tilde{u}_h$ : Buy-back price of the LTC from consumers at time $h$ (\$/kW).
Decision variables:
$ST_{a_B,h}$ = 1 if appliance $a_B$ starts at time $h$, 0 otherwise;
$ET_{a_B,h}$ = 1 if appliance $a_B$ finishes at time $h$, 0 otherwise;
$ST_{a_C,k,h}$ = 1 if sub-activity $k$ of appliance $a_C$ starts at time $h$, 0 otherwise;
$ET_{a_C,k,h}$ = 1 if sub-activity $k$ of appliance $a_C$ finishes at time $h$, 0 otherwise;
$S_{a_B,h}$ = 1 if the status of appliance $a_B$ at time $h$ is on, 0 otherwise;
$S_{a_C,k,h}$ = 1 if the status of sub-activity $k$ of appliance $a_C$ at time $h$ is on, 0 otherwise;
$E^{grid}_{i,h}$ : Total power purchased from the grid by customer $i$ at time $h$ (kW);
$E^{grid}_{i,h,sto}$ : Power purchased from the grid for battery storage of customer $i$ at time $h$ (kW);
$E^{grid}_{i,h,self}$ : Power purchased from the grid for self-use of customer $i$ at time $h$ (kW);
$E^{LTC}_{i,h}$ : Total power bought from the LTC by customer $i$ at time $h$ (kW);
$E^{LTC}_{i,h,self}$ : Power purchased from the LTC for self-use of customer $i$ at time $h$ (kW);
$E^{LTC}_{i,h,sto}$ : Power purchased from the LTC for battery storage of customer $i$ at time $h$ (kW);
$E^{PV}_{i,h}$ : Total power of the PV unit of customer $i$ at time $h$ (kW);
$E^{PV}_{i,h,self}$ : Power out of the PV unit of customer $i$ at time $h$ for self-use (kW);
$E^{PV}_{i,h,sto}$ : Power out of the PV unit of customer $i$ at time $h$ for battery charging (kW);
$E^{PV}_{i,h,salegrid}$ : Surplus electricity from the PV unit of customer $i$ at time $h$ sold to the grid (kW);
$E^{PV}_{i,h,saleLTC}$ : Surplus electricity from the PV unit of customer $i$ at time $h$ sold to the LTC (kW);
$E^{B}_{i,h}$ : Available power stored in the battery of customer $i$ at time $h$ (kW);
$E^{B}_{i,h,self}$ : Power discharged from the battery of customer $i$ at time $h$ for self-use (kW);
$E^{B}_{i,h,outLTC}$ : Power discharged from the battery of customer $i$ at time $h$ and sold to the LTC (kW);
$E^{B}_{i,h,outgrid}$ : Power discharged from the battery of customer $i$ at time $h$ to the grid (kW);
$f^{grid}_{i,h}$ : Binary variable indicating energy purchase from the grid by customer $i$;
$f^{B}_{i,h}$ : Binary variable indicating the charging state of the battery of customer $i$;
$f^{LTC}_{i,h}$ : Binary variable indicating energy purchase from the LTC by customer $i$;
$f^{grid,sto}_{i,h}$ : Binary variable indicating energy storage in the battery from the grid for customer $i$;
$f^{LTC,sto}_{i,h}$ : Binary variable indicating energy storage in the battery from the LTC for customer $i$;
$f^{PV,sto}_{i,h}$ : Binary variable indicating energy storage in the battery from the PV for customer $i$;
$X_{j,h}$ : Total energy sold to the grid by GenCo $j$ at time $h$ (kW);
$Y_{j,h}$ : Total energy sold to the LTC by GenCo $j$ at time $h$ (kW).
The mathematical model for the aforementioned problem over the coming 24 h is proposed as follows. The objective function minimizes the cost of buying energy for customers and maximizes the income for energy sellers, respectively. These two objectives are formulated in the following expression.
$$
\begin{aligned}
\min Z ={}& \sum_{h}\sum_{i} E^{grid}_{i,h}\, p_h + \sum_{h}\sum_{i} \bigl(E^{LTC}_{i,h} - W_{i,h}(E^{LTC}_{i,h})\bigr)\, \tilde{p}_h \\
&- \Bigl[\sum_{h}\sum_{i} E^{PV}_{i,h,salegrid}\, u_h + \sum_{h}\sum_{i} E^{PV}_{i,h,saleLTC}\, \tilde{u}_h\Bigr] \\
&- \Bigl[\sum_{h}\sum_{j} X_{j,h}\, q_h + \sum_{h}\sum_{j} \bigl(Y_{j,h} + W_{j,h}(Y_{j,h})\bigr)\, \tilde{q}_h\Bigr]
\end{aligned} \tag{1}
$$

$$\text{s.t.}\quad \sum_{h=\alpha_{a_B}}^{\beta_{a_B}-l_{a_B}} ST_{a_B,h} = 1; \quad \forall a_B \tag{2}$$
$$\sum_{h=\alpha_{a_B}+l_{a_B}}^{\beta_{a_B}} ET_{a_B,h} = 1; \quad \forall a_B \tag{3}$$
$$\sum_{h=\lambda_{a_C,k}}^{\gamma_{a_C,k}-l_{a_C,k}} ST_{a_C,k,h} = 1; \quad \forall a_C, k \tag{4}$$
$$\sum_{h=\lambda_{a_C,k}+l_{a_C,k}}^{\gamma_{a_C,k}} ET_{a_C,k,h} = 1; \quad \forall a_C, k \tag{5}$$
$$E^{LTC}_{i,h,sto} \le M f^{LTC,sto}_{i}; \quad \forall i \in \Omega_C, \forall h \in H \tag{17}$$
$$E^{PV}_{i,h,sto} \le M f^{PV,sto}_{i}; \quad \forall i \in \Omega_C, \forall h \in H \tag{18}$$
$$f^{grid,sto}_{i} + f^{LTC,sto}_{i} + f^{PV,sto}_{i} = 1; \quad \forall i \in \Omega_C \tag{19}$$
$$E^{grid}_{i,h,self} + E^{grid}_{i,h,sto} \le M f^{grid}_{i}; \quad \forall i \in \Omega_C, \forall h \in H \tag{20}$$
$$E^{PV}_{i,h,salegrid} + E^{B}_{i,h,outgrid} \le M \bigl(1 - f^{grid}_{i}\bigr); \quad \forall i \in \Omega_C, \forall h \in H \tag{21}$$
$$E^{LTC}_{i,h,self} + E^{LTC}_{i,h,sto} \le M f^{LTC}_{i}; \quad \forall i \in \Omega_C, \forall h \in H \tag{22}$$
$$E^{PV}_{i,h,saleLTC} + E^{B}_{i,h,outLTC} \le M \bigl(1 - f^{LTC}_{i}\bigr); \quad \forall i \in \Omega_C, \forall h \in H \tag{23}$$
$$\sum_{a_C}\sum_{k} C_{a_C,k,h} S_{a_C,k,h} + \sum_{a_B} B_{a_B,h} S_{a_B,h} + E_{a_A} + E^{grid}_{i,h,sto} + E^{LTC}_{i,h,sto} + E^{PV}_{i,h,sto} = d_{i,h}; \quad \forall i \in \Omega_C, \forall h \in H \tag{24}$$
$$E^{grid}_{i,h,self} + E^{PV}_{i,h,self} + E^{B}_{i,h,self} + \bigl(E^{LTC}_{i,h,self} - W_{i,h}(E^{LTC}_{i,h,self})\bigr) = d_{i,h}; \quad \forall i \in \Omega_C, \forall h \in H \tag{25}$$
$$X_{j,h} + \bigl(Y_{j,h} + W_{j,h}(Y_{j,h})\bigr) \le d_{j,h}; \quad \forall j \in \Omega_G, \forall h \in H \tag{26}$$
$$ST_{a_B,h}, ET_{a_B,h}, ST_{a_C,k,h}, ET_{a_C,k,h}, S_{a_B,h}, S_{a_C,k,h}, f^{B}_{i,h}, f^{grid}_{i,h}, f^{LTC}_{i,h}, f^{PV,sto}_{i,h}, f^{grid,sto}_{i,h}, f^{LTC,sto}_{i,h} \in \{0,1\} \tag{27}$$
$$E^{grid}_{i,h}, E^{grid}_{i,h,sto}, E^{grid}_{i,h,self}, E^{LTC}_{i,h}, E^{LTC}_{i,h,sto}, E^{LTC}_{i,h,self}, E^{PV}_{i,h}, E^{PV}_{i,h,sto}, E^{PV}_{i,h,self}, E^{PV}_{i,h,salegrid}, E^{PV}_{i,h,saleLTC}, E^{B}_{i,h}, E^{B}_{i,h,self}, E^{B}_{i,h,outgrid}, E^{B}_{i,h,outLTC}, X_{j,h}, Y_{j,h} \ge 0. \tag{28}$$
Constraints (2)–(5) determine the start and finish times of each appliance. Continuity of the appliances' activities is ensured by constraints (6) and (7). The status of each activity (on or off) is expressed in Eqs. (8) and (9). Constraint (10) limits the gap between successive sub-activities of appliances of type C; that is, the finish time of sub-activity k − 1 and the start time of sub-activity k must lie within a definite limit. Constraint (11) states that the total load available from the PV system is used for self-consumption, storage in the battery, and sales to the grid and the LTC. Constraint (12) gives the load stored in the battery at time h + 1 during the charge and discharge process: the stored load at time h + 1 equals the load at time h, corrected for the self-discharge percentage, plus the incoming load from the PV system, the grid, and the LTC, minus the load needed for the customer's demand. As full discharge of the battery reduces its lifetime, constraint (13) bounds the state of charge and discharge of the battery. Constraints (14) and (15) prevent the battery from charging and discharging simultaneously. Constraints (16)–(19) prevent the battery from being charged from the three sources (grid, LTC, and PV) simultaneously. Constraints (20)–(23) rule out simultaneous buying and selling of electricity with both the grid and the LTC. Equation (24) states that the total energy demand of an energy consumer comprises the total energy needed for appliances of types A, B, and C plus the energy needed for charging the battery. Equation (25) expresses that the energy demand can be met by energy purchased from the grid or the LTC, or by energy supplied by the PV system or the battery, taking into account the energy transmission loss from the LTC. The last expression, (26), represents the constraint on the generators' side: the total load available from every GenCo at time h can be sold to the grid or the LTC, with transmission loss considered in the process of selling electricity to the LTC.
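The effect of the type-B scheduling constraints (2)–(3) can be illustrated with a toy example: an appliance must run for l consecutive hours inside its preferred window [α, β], and with hourly prices attached, the cheapest feasible start hour can be found by enumeration. The price vector below is invented for illustration; a real instance would use the grid or LTC tariffs.

```python
# Toy counterpart of constraints (2)-(3): feasible starts h satisfy
# alpha <= h and h + l - 1 <= beta, i.e. ST_{aB,h} sums to 1 over
# h in [alpha, beta - l]. Enumeration stands in for the MILP solver.

def cheapest_start(prices, alpha, beta, l):
    """Return (start_hour, cost) minimizing the summed hourly price."""
    best = None
    for h in range(alpha, beta - l + 2):  # beta - l + 1 is the last feasible start
        cost = sum(prices[h:h + l])
        if best is None or cost < best[1]:
            best = (h, cost)
    return best

hourly_prices = [0.30, 0.28, 0.25, 0.20, 0.18, 0.22, 0.35, 0.40]
start, cost = cheapest_start(hourly_prices, alpha=1, beta=6, l=3)
# The cheapest 3-hour placement inside hours 1..6 starts at hour 3.
```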
3 Numerical Example
In order to show the performance of the hybrid energy system consisting of the
conventional grid, LTC, energy consumers with different types of appliances and
GenCos, a numerical example is presented. Based on the structure of the hybrid
energy system illustrated in Fig. 1, the numerical example is utilized to verify
the optimization model. Figure 4 shows the energy demand for appliances of
type B and the cumulative energy demand. It has been considered there are four
appliances of type B.
Figure 5 shows the Energy sell-out and buy-back prices for both grid and
LTC. In this example, it is assumed that the energy sell-out price for LTC is
less than the energy sell-out price for the grid. Besides, the energy buy-back
price for LTC is greater than energy buy-back price for the grid. The numerical
example is carried out by using IBM ILOG CPLEX Optimizer v12.3 on the PC
with Intel(R) Core(TM) i7-4770 CPU@3.4 GHz and 8 GB RAM. Tables 1 and
2 represent the results. The consumer uses the PV energy and provides more
energy with buying energy from LTC in order to meet energy demand in peak
time.
Fig. 5. Energy sell-out and buy-back prices for grid and LTC
Time slot             1    2    3    4    5    6    7    8    9    10   11   12
$E^{LTC}_{i,h,self}$  0.58 0.58 0.58 0.55 0.00 0.00 0.00 0.25 2.11 3.34 4.34 3.34
$E^{PV}_{i,h,self}$   0.00 0.00 0.00 0.99 2.72 3.66 4.46 4.41 2.16 0.00 0.00 0.00
Sum                   0.58 0.58 0.58 1.54 2.72 3.66 4.46 4.66 4.26 3.34 4.34 3.34

Time slot             1    2    3    4    5    6    7    8    9    10   11   12
$Y_{j,h}$             0.13 0.13 0.13 0.51 0.82 0.99 0.79 0.95 0.88 0.54 0.99 0.89
4 Conclusion
Based on MILP theory, this paper formulated an optimization model to investigate the optimal operation strategy of a residential hybrid energy trading system based on PV, battery, the conventional grid, and the LTC. Besides satisfying the residential electricity demands, total costs were minimized; moreover, the optimization scheme maximized the profit of the GenCos. The LTC provides new opportunities for energy consumers and GenCos to perform local energy trading in a cooperative manner, so that all of them can benefit. In this case, the LTC is considered a non-profit entity that aims at benefiting the energy consumers and energy sellers.
Numerical results are presented in order to validate the benefits of the considered hybrid energy market and the related DR mechanism.
References
1. Chai B, Chen J et al (2014) Demand response management with multiple utility
companies: a two-level game approach. IEEE Trans Smart Grid 5(2):722–731
2. Chen S, Shroff NB, Sinha P (2013) Heterogeneous delay tolerant task scheduling
and energy management in the smart grid with renewable energy. IEEE J Sel Areas
Commun 31(7):1258–1267
3. Keshav S, Rosenberg C (2010) How internet concepts and technologies can help green and smarten the electrical grid. In: ACM SIGCOMM workshop on green networking 2010. New Delhi, India, pp 35–40
4. Kim BG, Ren S et al (2013) Bidirectional energy trading and residential load scheduling with electric vehicles in the smart grid. IEEE J Sel Areas Commun 31(7):1219–1234
5. Mohsenian-Rad AH, Leon-Garcia A (2010) Optimal residential load control with
price prediction in real-time electricity pricing environments. IEEE Trans Smart
Grid 1(2):120–133
6. Qian LP, Zhang YJA et al (2013) Demand response management via real-time
electricity price control in smart grids. IEEE J Sel Areas Commun 31(7):1268–
1280
7. Ren H, Wu Q et al (2016) Optimal operation of a grid-connected hybrid pv/fuel
cell/battery energy system for residential applications. Energy 113:702–712
8. Torres D, Crichigno J et al (2014) Scheduling coupled photovoltaic, battery and
conventional energy sources to maximize profit using linear programming. Renew
Energ 72(4):284–290
9. Vaziri SM, Rezaee B, Monirian MA (2017) Bi-Objective integer programming of
hospitals under dynamic electricity price. Springer, Singapore
10. Wu Y, Tan X et al (2015) Optimal pricing and energy scheduling for hybrid energy
trading market in future smart grid. IEEE Trans Ind Inf 11(6):1585–1596
A Fuzzy Multi-Criteria Evaluation Method
of Water Resource Security Based
on Pressure-Status-Response Structure
1 Introduction
prominent and has attracted worldwide attention and emphasis [3]. The evaluation and assurance of water security are core issues of sustainable water resources management, and there are increasing studies on water resource security evaluation. For example, Jiang [6] studied a water resource safety strategy for China in the 21st century. Bitterman et al. [2] proposed a conceptual framework and candidate indicators for water security and rainwater harvesting. Chen [4] discussed the concept of water resources security. Xia and Zhang [21] worked on water security in north China and countermeasures to climate change and human activity. Hall and Borgomeo [9] worked on risk-based principles for defining and managing water security. Norman et al. [14] worked on water security assessment, integrating governance and freshwater indicators. Qian and Xia [5] worked on risk assessment of water security in the Haihe River Basin during drought periods based on D-S evidence theory. However, a comprehensive evaluation of water resources security is a complex, vague, and multi-level evaluation process, and multi-criteria evaluation methods for water security still merit in-depth research. In addition, owing to increasingly severe climate change, water scarcity, and pollution, the problem of water resources security has become more and more important. Therefore, this paper presents a fuzzy multi-criteria evaluation method, which includes an indicator system under the Pressure-Status-Response framework, uncertainty rating analysis, and an aggregation method based on TOPSIS with fuzzy judgments.
Fig. 1. Adapted pressure, state and response framework for evaluating security on the
water resource management unit level
expands on the well-known PSR structure (Fig. 1), comprising three boxes for the Pressure, State, and Response objects, with the boxes practically connected. Human-initiated Pressures exert impacts on State limitations by creating utilitarian and monetary advantages while also influencing the (societal) Response factors. Conversely, Pressures use and deplete resources both qualitatively and quantitatively (State) and receive information, decisions, and actions from the Responses that control them. State and Response are connected by information and material exchange on the one side and by directly state-changing processes on the other. Inside the three boxes, the structure is modified: associations and impacts between variables are brought into the evaluation by creating subsystems inside every box (Fig. 1), and each indicator is assigned to each of the boxes while carrying a different meaning for the evaluation. In fact, the patterns of meaning change among the groups, since the indicators cover dissimilar PSR viewpoints. This adjustment has been chosen because a strict assignment of indicators to boxes would mean a loss of information about the interconnections among indicators and their relevance with respect to pressures, states, and responses [18]. The interconnections between indicators are indicated by arrows in Fig. 1.
Based on the PSR structure, we consider five criteria for water resource
security, which are: water resources pressure (C1 ), social-economic pressure
(C2 ), water resources state (C3 ), social-economic state (C4 ) and social-economic
response (C5 ).
3 Fuzzy Statement
Table 1. Linguistic variables and triangular IFNs for rating under the subjective eval-
uation criteria
Table 2. Linguistic variables and triangular IFNs for rating the importance
their assessments; however, they can utilize linguistic variables according to their professional knowledge and experience. Hence, the concept of fuzzy numbers can be integrated into the multi-criteria evaluation of water resources security.
Evaluators first make their own judgments of water resource security based on the subjective evaluation criteria C1–C5. Ratings under the subjective evaluation criteria are considered as linguistic variables. A linguistic variable is a variable whose value is a natural language phrase; it is very useful in dealing with situations too ill-defined to be described properly by conventional quantitative expressions. Water resource security performance under each subjective evaluation criterion can be expressed on a 7-point rating scale: "very good", "good", "medium good", "fair", "medium poor", "poor", and "very poor". Such linguistic variables are converted into triangular intuitionistic fuzzy numbers (IFNs) [1], as shown in Table 1. The linguistic variables and triangular IFNs for rating the importance are shown in Table 2, and the corresponding membership functions are shown in Figs. 2 and 3. IFNs are commonly used for solving decision-making problems where the available information is imprecise. There are different shapes or forms of IFNs; among these, trapezoidal IFNs and triangular IFNs are the most commonly used. For example, Shaw and Roy [17] used trapezoidal IFNs for analysing fuzzy system reliability, while Vahdani et al. [22] applied triangular IFNs to fuzzy
1190 T. Qadeer and Z. Li
Fig. 2. Figure for membership functions of linguistic variables of rating under the
subjective evaluation criteria
Fig. 3. Figure for membership functions of linguistic variables for rating under the
importance
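The conversion from the 7-point linguistic scale to fuzzy numbers is a simple lookup. The numeric triples below are illustrative stand-ins (the values of the paper's Table 1 are not reproduced here), but the mechanics, mapping each phrase to an (l, m, u) triple, are the same.

```python
# Hypothetical linguistic-to-fuzzy-number mapping: the triples are
# placeholder triangular numbers on [0, 1], not the paper's Table 1 values.

RATING_TFN = {
    "very poor":   (0.0, 0.0, 0.1),
    "poor":        (0.0, 0.1, 0.3),
    "medium poor": (0.1, 0.3, 0.5),
    "fair":        (0.3, 0.5, 0.7),
    "medium good": (0.5, 0.7, 0.9),
    "good":        (0.7, 0.9, 1.0),
    "very good":   (0.9, 1.0, 1.0),
}

def to_tfn(label: str):
    """Convert one linguistic judgment to its triangular fuzzy number."""
    return RATING_TFN[label.lower()]

l, m, u = to_tfn("medium good")  # -> (0.5, 0.7, 0.9)
```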
4 Aggregation Method
Step 1. Normalize the evaluation index: compute the normalized fuzzy decision matrix $V = [v_{ij}]$, where the normalized value $v_{ij}$ is calculated as

$$v_{ij} = \frac{a_{ij}}{\sqrt{\sum_{i} a_{ij}^2}}. \tag{1}$$
$$d_j = 1 - E_j. \tag{4}$$
$$v_j^{+} = \{v_1^{+}, \cdots, v_n^{+}\} = \bigl[(\max_i v_{ij} \mid i \in I), (\min_i v_{ij} \mid i \in I)\bigr], \tag{7}$$
$$v_j^{-} = \{v_1^{-}, \cdots, v_n^{-}\} = \bigl[(\min_i v_{ij} \mid i \in I), (\max_i v_{ij} \mid i \in I)\bigr]. \tag{8}$$
Step 8. Calculate the relative closeness to the ideal solution. The relative close-
ness of alternative ai with respect to A∗ is defined as:

CLi = di− / (di− + di+), i = 1, 2, · · · , m. (11)
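Steps 7 and 8 together can be sketched as follows, assuming the standard TOPSIS construction of positive and negative ideal solutions and Euclidean distances (the helper name and sample data are illustrative):

```python
import numpy as np

def topsis_closeness(V, w, benefit):
    """Sketch of TOPSIS steps 7-8 under standard assumptions:
    V is the normalized decision matrix (alternatives x criteria),
    w the criterion weights, benefit[j] True for benefit criteria.
    Returns CL_i = d_i^- / (d_i^- + d_i^+) for each alternative."""
    U = V * w                                              # weighted matrix
    pos = np.where(benefit, U.max(axis=0), U.min(axis=0))  # ideal A+
    neg = np.where(benefit, U.min(axis=0), U.max(axis=0))  # anti-ideal A-
    d_pos = np.sqrt(((U - pos) ** 2).sum(axis=1))
    d_neg = np.sqrt(((U - neg) ** 2).sum(axis=1))
    return d_neg / (d_neg + d_pos)

# Toy example: alternative 0 dominates alternative 1 on both criteria.
V = np.array([[0.8, 0.6],
              [0.2, 0.3]])
w = np.array([0.5, 0.5])
cl = topsis_closeness(V, w, np.array([True, True]))
```

Alternatives are then ranked by descending CL, exactly as done for the nine provinces in the case study below.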
5 Case Study
This study presents a case study of nine provinces in the Yellow River basin:
Qinghai, Sichuan, Gansu, Ningxia, Inner Mongolia, Shaanxi, Shanxi, Henan
and Shandong. The original data are from Liu et al. [13]. Based on the original
data, experts were asked to make fuzzy judgements for all the criteria (C1–C5);
the resulting decision table is Table 3.
Experts also gave their ratings of the importance of all criteria, as shown in
Table 4.
Based on Eqs. (1) and (2), we obtained the results shown in Tables 5 and 7.
By using Eq. (3), the entropies are E1 , E2 , E3 , E4 , E5 = (0.9, 0.9, 0.6), (0.8, 0.9, 0.9),
(0.8, 0.9, 0.9), (0.6, 0.8, 0.9), (0.8, 0.8, 0.9), respectively (Table 6).
By using Eq. (4), the divergences are d1 , d2 , d3 , d4 , d5 = (0.0, 0.0, 0.4), (0.3, 0.1, 0.1),
(0.2, 0.1, 0.1), (0.4, 0.2, 0.1), (0.3, 0.2, 0.1), respectively.
According to TOPSIS, we obtained the weights: W = [(0.005, 0.005, 0.154), (0.099,
0.038, 0.020), (0.084, 0.050, 0.024), (0.184, 0.095, 0.038), (0.104, 0.068, 0.033)].
By using Eq. (5), the normalized weights are W1 , W2 , W3 , W4 , W5 = (0.0, 0.0, 0.2),
(0.1, 0.0, 0.0), (0.1, 0.1, 0.0), (0.2, 0.1, 0.0), (0.1, 0.1, 0.0), respectively.
Calculating the relative closeness to the ideal solution using Eq. (11), Qinghai's
result is:

CLQinghai = 2.112 / (2.112 + 2.756) = 0.434.
By the TOPSIS method, we obtain the ranking of the nine provinces shown in
Table 8, which gives the water resource security degrees of the nine provinces.
6 Conclusions
This paper presented a fuzzy multi-criteria TOPSIS method for water resource
security evaluation. Nine provinces within the Yellow River basin were ranked
by water resource security degree with the proposed method. The structure of
“pressure-state-response” was embedded in developing the methodology. The
fuzzy multi-criteria method was proposed not only to perform the evaluation
based on the established indicators but also to handle the inherent uncertain-
ties. Based on the case study results, Qinghai, Ningxia, Henan, Inner Mongolia
and Sichuan rank in the top 5 of water security degree, with TOPSIS indices
greater than 0.4. Future research will focus on uncertainty analysis, innovative
evaluation index systems and aggregation methods for water resource security
evaluation.
Acknowledgements. We are thankful for financial support from the Research Center
for Systems Science & Enterprise Development (Grant No. Xq15C01), the National
Natural Science Foundations (Grant No. 71671118, Grant No. 71601134) and Project
funded by China Postdoctoral Science Foundation.
References
1. Atanassov KT (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20:87–96
2. Bitterman P, Tate E et al (2016) Water security and rainwater harvesting: a con-
ceptual framework and candidate indicators. Appl Geogr 76:75–84
3. Cook C, Bakker K (2012) Water security: debating an emerging paradigm. Glob
Environ Change 22:94–102
4. Chen SJ (2004) Water resources security concept and its discussion. China Water
Res 17:13–15
5. Dong QJ, Liu X (2014) Risk assessment of water security in Haihe River Basin
during drought periods based on D-S evidence theory. Water Sci 7:119–132
6. Jiang WL (2001) Study on water resource safety strategy for China in the 21st
century. Adv Water Sci 1:66
7. Jia SF, Zhang JY, Zhang SF (2002) Regional water resources stress and water
resources security appraisement indicators. Progr Geogr 21:538–545
8. Koehler A (2008) Water use in LCA: managing the planet’s freshwater resources.
Int J Life Cycle Assess 13:451–455
9. Hall J, Borgomeo E (2013) Risk-based principles for defining and managing water
security. Philos Trans R Soc London A Math Phys Eng Sci 371:20120407
10. Hwang CL, Yoon K (1981) Multiple attribute decision making: methods and appli-
cations. Springer, Heidelberg
11. Li Z, Liechty M et al (2014) A fuzzy multi-criteria group decision making method
for individual research output evaluation with maximum consensus. Knowl Based
Syst 56:253–263
12. Linser S (2001) Critical analysis of the basics for the assessment of sustainable
development by indicators. Schriftenreihe Freiburger Forstliche Forschung, Bd. 17,
Freiburg
13. Liu KK, Li CH et al (2014) Comprehensive evaluation of water resources security in
the Yellow River basin based on a fuzzy multi-attribute decision analysis approach.
Hydrol Earth Syst Sci 18:1605–1623
14. Norma ES, Dunn G et al (2013) Water security assessment: integrating governance
and freshwater indicators. Water Res Manag 27:535–551
15. OECD (1993) Core set of indicators for environmental performance reviews: a syn-
thesis report by the group on the state of the environment. Environment mono-
graphs, vol. 83. Organization for Economic Co-operation and Development, Paris
16. Shirouyehzad H, Dabestani R (2011) Evaluating projects based on safety criteria:
using TOPSIS. In: 2011 2nd International conference on construction and project
management IPEDR, vol. 15. Singapore
17. Van Raan AFJ (1996) Advanced bibliometric methods as quantitative core of peer
review based evaluation and foresight exercises. Scientometrics 36:397–420
18. Vacik H, Wolfslehner B et al (2007) Integrating the DPSIR-approach and the
analytic network process for the assessment of forest management strategies. In:
Reynolds K, Rennolls K et al (eds) Sustainable forestry: from monitoring and
modelling to knowledge management and policy science. CABI Publishing
19. Voeroesmarty CJ, McIntyre PB et al (2010) Global threats to human water security
and river biodiversity. Nature 467:555–561
20. Wolf A (2007) Shared waters: conflict and cooperation. Annu Rev Environ Res
32:269–279
21. Xia J, Zhang YY (2007) Water security in north China and countermeasure to
climate change and human activity. Phys Chem Earth Parts A/B/C 33:359–363
22. Xu JP, Li ZM et al (2013) Multi-attribute comprehensive evaluation of individual
research output based on published research papers. Knowl Based Syst 43:135–142
Management of Technological Modes of System
Distributed Generation Electric Energy
on the Basis of Daily Schedules
of Electric Loadings
1 Introduction
The future power supply system should combine large power stations without
which are problematic electro supply of large consumers and maintenance of
growth of power consumption. Large power stations allow raising voltage with
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_99
Technological Modes of System Distributed Generation Electric Energy 1199
Daily electric load schedules make it possible to properly assess the operating
mode of the electrical equipment and of the enterprise as a whole, to identify
bottlenecks and reserves, and to set the optimal operating mode.
The most fundamental changes in electrical load are associated with communal
needs. Fig. 2 represents the daily schedule, from which we see that the
electrical load is larger in winter than in summer and is sharply reduced in the
night hours [6]. Its smallest value is called the minimum load. In the afternoon
and evening hours the load increases, with a more considerable change in winter;
there are two load maxima, morning and evening. The schedule of electrical
loads must be provided (“covered”) on a mandatory basis. Therefore, utilities
aspire to carry out all necessary renovations during the summer so that virtually
all power plant equipment can be used to cover the winter peak. This maximum
is called the peak load.
Calculation of the urban network load includes determining the load of indi-
vidual consumers (residential buildings, public buildings, municipal services,
etc.) and of the electrical elements of the system (distribution lines, transformer
substations, distribution centers, power centers, etc.) [3].
Since part of the population now has the possibility of using a wide range of
modern household appliances and equipment at home, loads have become
random and depend on a number of factors: the way of life of different families,
the number of consumers and their power, and others. Therefore, the determi-
nation of load is based on a probabilistic-statistical approach that treats the
load as a non-stationary random variable. Hence, the estimated electrical load
of a consumer or network element is taken to be the probable maximum load
value over an interval of 30 min.
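Under this definition, the estimated load is the largest 30-minute moving average of the measured load. A minimal sketch (the function name, sample series and window length are illustrative):

```python
def max_window_average(load, window):
    """Return the maximum average of `load` over any `window` consecutive
    samples, e.g. 30 one-minute readings for the 30-min probable maximum."""
    if window > len(load):
        raise ValueError("window longer than the series")
    return max(sum(load[i:i + window]) / window
               for i in range(len(load) - window + 1))

# Toy series of load readings (kW) with a midday peak; window of 3 samples.
readings = [10, 20, 30, 40, 30, 20]
estimated_load = max_window_average(readings, 3)
```

The averaging window smooths out momentary spikes, so the estimated load reflects sustained demand rather than instantaneous extremes.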
To obtain reliable data when designing standard charts and determining their
numerical characteristics, it is necessary to correctly process the experimental
results, based on the propositions of mathematical statistics and probability
theory. It should be borne in mind that, according to the law of large numbers,
results determined as aggregate averages are valid only when the number of
tests, in other words the number of surveyed members of the aggregate, is quite
large. On the other hand, as the number of aggregate members increases, their
examination becomes an enormous amount of work, and we face the problem
of determining a number of members in the aggregate sufficient to obtain mean
values with the required accuracy.
Therefore, for the assessment of this loading, generalized indicators are very
often used: coefficients, specific loads and specific consumptions of electric
power.
The existing algorithms for load calculation of industrial power systems, deter-
mination of maximum loads and selection of electrical equipment do not con-
sider the dynamics of growth and the character of household loading or the laws
of its functioning, and do not allow the actual load calculation to account for
the dependence of load changes on the day of the week and the time of day.
Despite the large number of works on the subject, daily-schedule load models of
residential and public buildings and their practical implementation are not well
developed. There is no program that allows the load calculation data of residen-
tial and public buildings to be refined according to the current state of the loads.
The methods widely used for determining the maximum electrical load of elec-
trical networks are based on measuring the average load of electrical consumers
over a given time period t (t = 8 h or 0.5 h) with a variable initial measuring
point. The total number of electrical consumers on which measurements are
made should be no less than 20% of the total number of electrical consumers
connected to the electric network (and not less than 15). Measurements should
be carried out repeatedly and over a long time [2,7]. All these methods are
characterized by long measurement times and low accuracy; by a significant
difference from the actual loads, especially the total load; and by significant
calculation errors (they do not account for the probabilistic nature of electrical
loads in the urban network). These methods also do not take into account the
time factor of the maximum electric load of each customer. In addition, these
techniques are designed for installations with a regular operating mode, require
large amounts of additional measurements, and do not make it possible to deter-
mine the desired value for a predetermined time period including at least three
units.
There is a way of determining the maximum load of electric consumers [6]
according to which individual loads are measured once in various technological
operating modes, the total time of each technological mode within the base time
is measured, and the group load is calculated with a probability of exceedance
no more than that required by the conditions of the problem.
However, this method does not provide sufficient accuracy when determining
the estimated load for a group of different types of electric consumers, in par-
ticular when individual electrical loads are not constant even within the same
technological mode.
On each interval [xi−1 , xi ] with hi = xi − xi−1 , the spline can be written in the
standard second-derivative form and expanded into a cubic polynomial:

S3 (x) = Mi−1 (xi − x)³/(6hi ) + Mi (x − xi−1 )³/(6hi )
         + (fi−1 − Mi−1 · h²i /6)(xi − x)/hi + (fi − Mi · h²i /6)(x − xi−1 )/hi
       = ai0 · x³ + ai1 · x² + ai2 · x + ai3 ,
Fig. 3. The characteristic (typical) daily electric load schedule of a manufactured-goods
shop: 1 - constructed from experimental data, 2 - built by a Lagrange polynomial
where

a0 = (Mi − Mi−1 )/(6hi ),
a1 = (Mi−1 · xi − Mi · xi−1 )/(2hi ),
a2 = (3Mi · x²i−1 − 3Mi−1 · x²i + (Mi−1 − Mi ) · h²i − 6fi−1 + 6fi )/(6hi ),
a3 = (Mi−1 · x³i − Mi · x³i−1 + 6fi−1 · xi − 6fi · xi−1
      + Mi · h²i · xi−1 − Mi−1 · h²i · xi )/(6hi ).
For each electrical consumer or appliance, its own curve equation and its own
formula for the maximum electrical load are constructed. As a result, the calcu-
lation obtained is more precise, more authentic and closer to the actual load,
taking into account the peculiarities of load-condition changes and the time fac-
tor for each appliance and consumer included in the city's power system.
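The coefficient formulas above can be checked numerically: for any knot values fi and second derivatives Mi (chosen freely here, not solved from a full spline system), the resulting cubic must reproduce fi−1 and fi at the knots and Mi−1, Mi as second derivatives there. A sketch (function names and sample numbers are illustrative):

```python
def spline_coeffs(x0, x1, f0, f1, M0, M1):
    """Coefficients (a0, a1, a2, a3) of S3(x) = a0*x**3 + a1*x**2 + a2*x + a3
    on [x0, x1], where M0, M1 are the second derivatives at the knots,
    f0, f1 the function values, and h = x1 - x0."""
    h = x1 - x0
    a0 = (M1 - M0) / (6 * h)
    a1 = (M0 * x1 - M1 * x0) / (2 * h)
    a2 = (3 * M1 * x0**2 - 3 * M0 * x1**2
          + (M0 - M1) * h**2 - 6 * f0 + 6 * f1) / (6 * h)
    a3 = (M0 * x1**3 - M1 * x0**3 + 6 * f0 * x1 - 6 * f1 * x0
          + M1 * h**2 * x0 - M0 * h**2 * x1) / (6 * h)
    return a0, a1, a2, a3

def s3(x, c):
    """Evaluate the cubic with coefficient tuple c at x."""
    a0, a1, a2, a3 = c
    return a0 * x**3 + a1 * x**2 + a2 * x + a3

# Arbitrary interval, knot values and curvatures for the check.
c = spline_coeffs(1.0, 3.0, f0=2.0, f1=5.0, M0=0.5, M1=-0.25)
```

Evaluating `s3` at the knots recovers f0 and f1, and the second derivative 6·a0·x + 2·a1 recovers M0 and M1, confirming the expansion term by term.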
5 Conclusions
References
1. Arifjanov AS, Ayupov AS (2015) Forecasting and rationalization of energy con-
sumption for megacities taking into account the factors caused by scientific and
technical progress, with application of informational - analytical technologies. In:
A report on the global forum for energy, environment and commercial civilization
(GFEECC)
2. Arifjanov AS, Zakhidov RA (2017) An approach to the creation of the adaptive
control system for integration of nonsteady power sources into a common electric
power grid. In: Proceedings of the tenth international conference on management
science and engineering management. Springer, pp 563–574
1208 A. Arifjanov and R. Zakhidov
Abstract. The purpose of this paper is to study the effects of air pollu-
tion, especially haze, on the stock returns of steel mills. The study collects
air quality index data, variables representing the characteristics of eleven
steel mills in China, stock return ratios from the stock market, etc. SPSS
19.0 is used to conduct a descriptive analysis of the correlation between
the principal air pollutants (including PM2.5 and PM10 ), the character-
istic variables (monetary funds, net assets, liabilities, operating margin,
financial leverage, and total asset growth) of the eleven steel mills, and
the stock return ratios. Through research on the air pollution index,
analysis of the listed companies' earnings and stock prices, and linear
regression analysis, the research shows that serious air pollution has a
negative impact on the profitability of iron and steel enterprises through
the emotions and expectations of investors. It is imperative to tackle air
pollution urgently.
Keywords: Haze related air pollution · Stock return rate · Listed steel
companies
1 Introduction
With rapid economic development and fast expansion of productive enter-
prises, China increasingly faces large-scale severe air pollution. Air pollution
with PM2.5 and PM10 as the main pollutants troubles people's lives and threat-
ens their health. According to a report released by the WHO in May 2016,
seven of the world's 10 cities with the most severe air pollution are in China
(China Environmental Report 2016). PM2.5 has an essential impact on air
quality and visibility, and haze may last longer depending on geographical and
meteorological conditions, impairing people's health considerably. The report
further shows that less than 1% of China's 500 largest cities
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_100
1210 K. Liu et al.
meet the air quality standards recommended by the WHO (China Environment
Report 2016).
The severe air pollution is caused principally by the rapid growth of produc-
tive enterprises that use coal as the main raw material and feature high energy
consumption. To maximize profits, these steel companies have long ignored envi-
ronmental protection, and even sacrificed the environment, causing serious envi-
ronmental problems and threatening the future health of the Chinese people
and the sustainable development of the ecosystem. As a resource-intensive
industry, the iron and steel industry, using coal as its main source of energy,
processes a variety of powdery and massive ferrous-metal and non-metallic min-
erals through large-scale production and complex processes. The emissions of
the iron and steel industry can be divided into three categories: the first is the
exhaust gas caused in the production process, such as the smoke, sulfur diox-
ide and other harmful gases generated through the sintering, smelting, and steel
rolling processes. The second type is the dust and sulfur dioxide produced by
burning fuels, for instance coal and coal gas, in the furnace. The third category
is the dust generated during the transporting, loading, discharging and process-
ing of raw materials and fuels.
The smog not only severely impairs people’s health but limits their outdoor
activities as well. It affects individual mental activities and emotional states,
resulting in more negative emotions. This paper seeks to study whether air pol-
lution, especially the smog, affects the steel plants’ stock returns. Through exam-
ining the air pollution index, the paper analyzes the earnings and stock prices
of listed steel companies, and uses the linear regression analysis to investigate
whether the air pollution has relevance to the stock returns of these enterprises.
The structure of this paper is as follows: the introduction comes first, followed
by the literature review in the second section. The third section presents the
research model, data and method. The paper then presents the empirical results
and the corresponding analyses and discussions, which constitute the fourth
section. Finally, it draws conclusions and provides suggestions.
2 Literature Review
Studies have shown that air pollution directly or indirectly affects people's psy-
chological condition and emotions. Lepori [8], using different trading techniques
in Italy and samples from major international stock exchanges, demonstrated
the relationship between air pollution, people's emotions, and stock returns.
Evans [5] found that exposure to polluted air increases levels of depression,
anxiety, helplessness, and anger. Emotion affects people's assessment of both
future prospects and risks [13]. Some studies reveal that air pollution is posi-
tively correlated with people's negative emotions. Levy and Yagil [9] examined
the relationship between air pollution and stock returns; using air quality indices
and stock return data from four US stock exchanges, they argue that air pollu-
tion is negatively correlated with stock returns. Mehra and Sah [11] observed
Haze-Related Air Pollution and Impact on the Stock Returns 1211
that small emotional fluctuations have significant impacts on the fluctuation of
capital prices. Lucey and Dowling [10] held that emotions that are not caused
by investors' future decisions may still affect decision makers.
To address the deteriorating air quality and ease increasing public con-
cern, relevant government departments may reinforce stringent emission stan-
dards and formulate strict environmental policies to control high-pollution and
high-emission behavior. Regulatory policies strengthening environmental pro-
tection will affect the stock returns of iron and steel companies with such
characteristics as high input, high pollution, and high energy consumption
[12]. Taking China's steel mills as examples, given serious environmental pro-
tection attitudes and stringent environmental standards, steel mills will face
greater pressure from environmental protection requirements and environmen-
tal costs, and the stock returns of the listed steel companies will be affected
accordingly.
Meanwhile, air pollution may easily trigger industrial policy adjustments, which,
in turn, lead to the re-allocation of resources. This may impact the listed com-
panies’ operating performances and growth prospects, resulting in stock price
fluctuation [4]. In addition, other government policies, for instance, security reg-
ulations, energy prices, credit rationing, and international cooperation, which
are closely related to air quality, also have a significant impact on the stock
prices of the listed companies [6].
It is worth noting that air quality is regional and it largely affects the sen-
timent of local investors, especially individual investors. Likewise, air pollution
may impact the stock market through influencing the stock traders’ emotion.
Currently, the two major stock exchanges and three futures exchanges in China
adopt the command-driven system rather than the quotation-driven system, and
all trading quotations should be input to the matching system by the investors
with the help of the agent brokers (exchange members). As for the command
trading system operators, when their emotions are affected by local air pollution,
their rational judgment and selection ability may be reduced, resulting in irra-
tional trading behavior and causing stock price fluctuation. In the information
age, individual investors are still the largest fund providers and the main traders
in the current stock market [3], among whom there is a “herd effect” [1,2,7]. In
this context, when outside investors make trade decisions (of the listed compa-
nies) based on the air quality of the major cities (e.g. Beijing, Shanghai), the
“herd effect” may further amplify the impact of air quality on the stock market
of the steel industry.
3.1 Data
This paper studies 11 Chinese steel plants, which are Shanghai Baosteel
(600019), Shougang Group (000959), Ansteel Group Corporation (000898),
Wuhan Iron and Steel (Group) Company (600005), Jiangsu Shagang Group Co.,
Ltd. (0020), Taiyuan Iron & Steel (Group) Co., Ltd. (000825), Jinan Iron and
Steel Co., Ltd. (600022), Panzhihua Iron and Steel (Group) Co. (000629), Hes-
teel Group Tangsteel Company (000709), Benxi Iron and Steel (000761), and
Nanjing Iron and Steel Group Co., Ltd. (600282). The paper gathers the stock
returns and feature variables (monetary funds, net assets, liabilities, operating
margin, financial leverage, and total asset growth) from the CSMAR database.
Moreover, some data of financial leverage and total asset growth are collected
from the WIND database and China Securities Depository and Clearing Corpo-
ration Limited (CSDC).
This paper uses six variables (monetary funds, net assets, liabilities, operating
margin, financial leverage, and total asset growth), which are borrowed from
Berger and Tuttle, to examine the stock information of the steel companies from
different dimensions. The annual financial statements are released with a time
lag; that is, the announcements are usually released in March of the following
year. To ensure the relative effectiveness of the financial information, the paper
matches the financial information of year t − 1 with the stock transaction data
from July of year t to June of year t + 1.
The urban air quality data used in this study are gathered at the sites of
the steel plants. Part of the data are from the PM2.5 hourly data monitored
by the five US embassy and consulate stations in China. The 2013–2015 air
quality index data (such as SO2 , NO2 , CO, PM2.5 , PM10 , and AQI) are obtained
from the Ministry of Environmental Protection of the People's Republic of China.
To investigate the impact of air pollution on the earnings of the steel mills, the
paper uses an ordinary least squares (OLS) regression model in which Returns,
the dependent variable, is the monthly rate of return of the steel plant.
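The OLS model can be sketched as follows; the `ols` helper and the simulated PM series are hypothetical stand-ins, since the paper's actual regression equation and firm-level controls are not reproduced here:

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares: coefficients b minimizing ||y - [1 X] b||,
    with an intercept column prepended to the regressor matrix X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return b

# Hypothetical monthly data: returns regressed on PM2.5 and PM10 levels
# (the paper's full model would also include the six firm characteristics).
rng = np.random.default_rng(0)
pm25 = rng.uniform(17, 182, 60)
pm10 = rng.uniform(40, 259, 60)
returns = 0.05 - 0.0004 * pm25 - 0.0001 * pm10 + rng.normal(0, 0.01, 60)
b = ols(returns, np.column_stack([pm25, pm10]))   # [intercept, b_pm25, b_pm10]

# Sanity check on exactly linear data: y = 1 + 2x should be recovered.
x = np.arange(5.0)
b_exact = ols(1.0 + 2.0 * x, x.reshape(-1, 1))
```

The sign and significance of the PM coefficients then indicate whether pollution is associated with lower returns, which is the hypothesis the paper tests.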
Variable | N | Min | Max | Mean | Std. deviation | Variance | Skewness | Std. error of skewness | Kurtosis | Std. error of kurtosis
Returns | 263 | −0.3729 | 1.0142 | 0.0392 | 0.1611 | 0.026 | 1.006 | 0.15 | 5.056 | 0.299
PM2.5 | 275 | 17 | 182 | 67.95 | 30.103 | 906.212 | 1.072 | 0.147 | 1.311 | 0.293
PM10 | 275 | 40 | 259 | 113.84 | 42.381 | 1796.152 | 0.729 | 0.147 | 0.241 | 0.293
Valid N (listwise) | 263
The population standard deviation of net profit is very large, indicating sig-
nificant fluctuations of the net profit. The variances of PM2.5 and PM10 are
relatively small, but their standard deviations both exceed 30, reflecting obvi-
ous fluctuations in air quality. The skewness of PM2.5 and PM10 is greater than
0, indicating right deviation, and each series has a maximum outlier that should
be removed. The skewness of the net profit is less than 0, indicating left devia-
tion, and it has a minimum outlier that should be removed (Table 5).
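The skewness diagnostics used above follow the usual moment formula; a minimal sketch (the two sample series are illustrative, not the paper's data):

```python
import numpy as np

def skewness(x):
    """Sample skewness g1 = m3 / m2**1.5; positive values indicate a
    right-skewed distribution (as reported here for PM2.5 and PM10),
    negative values a left-skewed one (as reported for net profit)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float((d ** 3).mean() / (d ** 2).mean() ** 1.5)

right_skewed = [1, 1, 2, 2, 3, 10]       # long right tail, g1 > 0
left_skewed = [-10, -3, -2, -2, -1, -1]  # long left tail, g1 < 0
```

A large positive skewness driven by a single extreme maximum is exactly the situation in which removing that outlier, as the text suggests, pulls the statistic back toward zero.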
No | Variable | N | Min | Max | Mean | Std. deviation | Variance | Skewness | Std. error of skewness | Kurtosis | Std. error of kurtosis
1 Returns 25 −0.2231 0.3908 0.0193 0.1249 0.016 0.958 0.464 2.783 0.902
PM2.5 25 33 124 55.56 19.812 392.507 1.865 0.464 4.881 0.902
PM10 25 49 ‘ 75.64 22.381 500.907 1.143 0.464 1.176 0.902
Valid N 25
(listwise) 25
2 Returns 21 −0.3728 0.2349 0.0420 0.1466 0.022 −0.832 0.501 1.706 0.972
PM2.5 25 44 154 80.84 28.868 833.39 1.19 0.464 1.043 0.902
PM10 25 58 173 108.28 30.893 954.377 0.275 0.464 −0.774 0.902
Valid N 21
(listwise) 21
3 Returns 25 −0.2137 0.4106 0.0230 0.1227 0.015 1.075 0.464 3.175 0.902
PM2.5 25 40 144 64.64 24.605 605.407 1.609 0.464 3.206 0.902
PM10 25 66 188 109.72 29.389 863.71 0.736 0.464 0.576 0.902
Valid N 25
(listwise) 25
4 Returns 25 −0.2859 0.3299 0.0257 0.1435 0.021 0.323 0.464 0.131 0.902
PM2.5 25 35 182 78.44 38.448 1478.257 1.523 0.464 2.355 0.902
PM10 25 67 218 113.08 37.627 1415.827 1.214 0.464 1.502 0.902
Valid N 25
(listwise) 25
5 Returns 19 −0.1911 1.0142 0.1490 0.2862 0.082 1.622 0.524 3.538 1.014
PM2.5 25 33 111 63.12 22.769 518.443 0.728 0.464 −0.191 0.902
PM10 25 66 179 113.04 27.902 778.54 0.658 0.464 0.022 0.902
Valid N 19
(listwise) 19
6 Returns 24 −0.3657 0.3861 0.0370 0.1712 0.029 −0.192 0.472 1.102 0.918
PM2.5 25 38 105 64.72 21.084 444.543 0.562 0.464 −1.091 0.902
PM10 25 82 176 122.12 27.039 731.11 0.123 0.464 −0.863 0.902
Valid N 24
(listwise) 24
7 Returns 25 −0.3012 0.3698 0.0337 0.1503 0.023 0.447 0.464 0.851 0.902
PM2.5 25 57 159 91.68 26.009 676.477 1.326 0.464 1.194 0.902
PM10 25 98 259 173.4 38.249 1463 0.414 0.464 0.157 0.902
Valid N 25
(listwise) 25
8 Returns 25 −0.2836 0.3081 0.0278 0.1369 0.019 0.121 0.464 0.439 0.902
PM2.5 25 17 73 35.72 13.296 176.793 1.233 0.464 1.615 0.902
PM10 25 40 144 74.56 26.013 676.673 1.182 0.464 1.261 0.902
Valid N 25
(listwise) 25
9 Returns 24 −0.3657 0.3861 0.0370 0.1712 0.029 −0.192 0.472 1.102 0.918
PM2.5 25 39 147 92.04 29.464 868.123 0.211 0.464 −0.82 0.902
PM10 25 80 204 153.68 37.201 1383.893 −0.393 0.464 −0.962 0.902
Valid N 24
(listwise) 24
10 Returns 25 −0.2762 0.3809 0.0324 0.1365 0.019 0.357 0.464 1.303 0.902
PM2.5 25 26 111 53.32 22.527 507.477 0.791 0.464 −0.108 0.902
PM10 25 55 146 94.08 27.412 751.41 0.326 0.464 −0.935 0.902
Valid N 25
(listwise) 25
11 Returns 25 −0.3672 0.4124 0.0312 0.1607 0.026 0.067 0.464 0.942 0.902
PM2.5 25 30 155 67.32 29.22 853.81 1.387 0.464 2.432 0.902
PM10 25 59 243 114.6 44.507 1980.917 1.188 0.464 1.735 0.902
This correlation varies from region to region. For example, Shanghai and
Jiangsu show a positive correlation; that is, the net profit increases as PM10
rises. In contrast, Nanjing shows a strong negative correlation; the net profit
falls as PM increases. Overall, the majority of the cities show a negative corre-
lation: when PM10 increases, the net profit decreases.
According to the correlation analysis of PM10 and total asset growth, in con-
trast to the net growth rate, most regions show strong positive correlations;
that is, the total asset ratio rises with the increase in PM10 . For example,
Jinan, Panzhihua, Tangshan, and Benxi show strong positive correlations, while
Jiangsu shows a strong negative correlation.
The correlation coefficients between PM2.5 and the returns are less than 0.2,
reflecting a weak correlation. Although some correlation coefficients for PM10
exceed 0.2, they are still below 0.5, indicating that its correlation with the
returns is also weak. PM2.5 and PM10 thus have only a weak impact on the
returns of the steel companies.
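The coefficients discussed here are ordinary Pearson correlations; a minimal sketch (the series are hypothetical and merely illustrate a coefficient falling in the "weak" 0.2–0.5 band):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical monthly PM10 levels and stock returns.
pm10 = [40, 80, 120, 160, 200]
ret = [0.04, 0.01, 0.05, 0.00, 0.02]
r = pearson(pm10, ret)   # negative but well short of -0.5
```

A coefficient whose absolute value sits between 0.2 and 0.5, as here, is read in the text as a weak (though non-negligible) association.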
Table 8. Correlations
References
1. Banerjee AV (1992) A simple model of herd behavior. Q J Econ 107(3):797–817
2. Cao M, Wei J (2005) Stock market returns: a note on temperature anomaly. J
Bank Finance 29(6):1559–1573
3. Chen YA, Tan S (2005) Estimation of the steady inflation rate of economic growth.
Econ Res J 4:002
4. Esposito P, Patriarca F et al (2013) Economic convergence with divergence in envi-
ronmental quality? Desertification risk and the economic structure of a mediter-
ranean country (1960–2010). Mpra Paper 102(4):715–721
5. Evans GW, Jacobs SV et al (1987) The interaction of stressful life events and
chronic strains on community mental health. Am J Community Psychol
15(1):23–34
6. Fong WM, Toh B (2014) Investor sentiment and the max effect. J Bank Finance
46(3):190–201
7. Kamstra MJ, Kramer LA, Levi MD (2003) Winter blues: a sad stock market cycle.
Am Econ Rev 93(1):324–343
8. Lepori GM (2015) Air pollution and stock returns: Evidence from a natural exper-
iment. J Empir Finance 35:25–42
9. Levy T, Yagil J (2011) Air pollution and stock returns in the US. J Econ Psychol
32(3):374–383
10. Lucey BM, Dowling M (2005) The role of feelings in investor decision-making. J
Econ Surv 19(2):211–237
11. Mehra R, Sah R (2002) Mood fluctuations, projection bias, and volatility of equity
prices. J Econ Dyn Control 26(5):869–887
12. Oberndorfer U (2006) Environmentally oriented energy policy and stock returns:
an empirical analysis. ZEW-Centre for European Economic Research Discussion
(06–079)
13. Slovic P, Finucane ML et al (2007) The affect heuristic. Eur J Oper Res
177(3):1333–1352
Low Carbon-Oriented Coupling Mechanism
and Coordination Model for Energy Industry
Investment and Financing of China
1 Introduction
In recent years, affected by the combined effects of resource endowment con-
straints and economic growth, China's energy production and consumption
have been dominated by coal, and greenhouse gas emissions have risen rapidly;
China has become the world's largest emitter of carbon dioxide and sulfur diox-
ide [5]. China's atmospheric pollutant emissions continue to increase, composite
air pollution has become increasingly prominent, and sulfur dioxide, nitrogen
oxides and volatile organic compounds have intensified secondary pollution,
including fine particulate matter, ozone and acid rain. The causes of air pollu-
tion are complex, and energy production and processing conversion is one of
the important sources. It is therefore imperative to realize energy saving and emission
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_101
Low Carbon-Oriented Coupling Mechanism and Coordination Model 1221
coupling theory. Norgaard [4] put forward the theory of coordinated develop-
ment, arguing that coordination and common development between the social
system and the ecological system can be achieved through feedback loops.
Guneralp [2] constructed coupling-effect models of critical infrastructure,
national security, the ecological environment and the economy. Domestic schol-
ars, based on measurement methods for coupling coordination, have researched
the coordination of the ecological environment with economic development.
Combining the characteristics of energy industry investment and financing,
we believe that a coupling interaction and coordinated operation mechanism
may exist between them, and that coupling coordination theory and methods
also apply to research on coordination mechanisms in energy industry invest-
ment and financing. From the perspective of sustainable development, if the
interaction between energy industry investment and energy industry financing
reaches a higher level of coupling coordination, it will help improve the efficiency
of the compound system and ultimately achieve low-carbon and intensive devel-
opment of the energy economy.
through policy intervention and price signals to attract financing support, thereby achieving the goal of coupled and coordinated development of energy industry investment and financing.
In the energy industry investment subsystem, the long-standing model of government approval, corporate self-finance and bank loans lacks flexible, effective and clean investment and financing mechanisms. The main investors are state-owned energy enterprises, yet national investment in such enterprises lacks supervision methods and risk-control means: there is no supervision and evaluation mechanism for enterprise investment decisions, and project approval lacks benefit and risk evaluation. Especially in the oil and gas, electric power and other industries, administrative intervention is extensive, the functions of government are unclearly defined, and macro-control oversteps its role, so energy companies often invest blindly. In the energy industry financing subsystem, financing channels have diversified in recent years, but internal self-financing and bank credit remain dominant. Commercial banks face a difficult choice between the monopoly profits of energy companies and national climate and environmental protection policies, leading to sharp fluctuations in the scale of financing. As shown in Fig. 1, the gap between energy industry investment and financing has widened in recent years. At the same time, stimulated by energy production and consumption, the growth rate of energy industry investment has increased year by year. Overall, the interaction between the two has weakened, and their coordinated development faces challenges.
Fig. 1. Energy industry investment and financing, energy production and the evolution
of energy consumption
The investments in oil and gas extraction, petrochemicals, coking and nuclear fuel processing are unstable and lack financing support. Overall, the coupling degree of investment and financing within the energy industry is not high.
1224 Y. Deng and L.N. Xu Hou
where $x_{ij}$ is the contribution of variable $x_{ij}$ to the system's function, reflecting the degree to which the control parameter satisfies the goal; its value ranges over [0, 1], with 0 the least satisfactory and 1 the most satisfactory.
If $m$ order parameters ($j = 1, 2, \cdots, m$) are extracted from the $H$ control parameters of a subsystem $U_k$ ($k = 1, 2$), the order parameter matrix $(F_{ij})_{n \times m}$ with $n$ samples ($i = 1, 2, \cdots, n$) is formed. Because energy industry investment and energy industry financing are two different but interacting subsystems within the composite system, the order degree of each subsystem is obtained by the integrated method:

$$u_k = \sum_{j=1}^{m} \theta_j \times F_{ij}, \qquad \sum_{j=1}^{m} \theta_j = 1, \qquad (2)$$
where $\theta_j$ are the order parameter weights. Drawing on the concept of capacity coupling and its coefficient model, the coupling degree function of energy industry investment and financing is obtained:

$$C = \left\{ (U_1 \times U_2) \,/\, \left[ (U_1 + U_2) \times (U_1 + U_2) \right] \right\}^{1/2}, \qquad (3)$$
where $C$ is the coupling degree of system interaction, $C \in (0, 1)$; $U_1$ and $U_2$ represent the total contributions of the energy industry investment subsystem and the energy industry financing subsystem to the composite system, namely the comprehensive order parameters of energy industry investment and of energy industry financing.
For the energy investment and financing system, the significance of the coupling degree model is that it quantitatively describes the interaction between the two subsystems and reflects the relationship and adjustment process of the order parameters of each subsystem over a given period and region, thereby providing a basis for evaluating the evolution trend of the interactive coupling of the composite system.
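As a numerical illustration (not the authors' code), the order degree of Eq. (2) and the coupling degree of Eq. (3) can be sketched as follows; the order parameter values and weights below are hypothetical:

```python
import numpy as np

def order_degree(F, theta):
    """Order degree u_k of one subsystem, Eq. (2): weighted sum of the
    normalized order parameters F_ij (each in [0, 1]), weights sum to 1."""
    assert abs(sum(theta) - 1.0) < 1e-9
    return float(np.dot(F, theta))

def coupling_degree(u1, u2):
    """Coupling degree, Eq. (3): C = {(U1*U2) / [(U1+U2)*(U1+U2)]}^(1/2).
    With two subsystems this form peaks at 0.5 when U1 = U2."""
    return float(np.sqrt((u1 * u2) / ((u1 + u2) ** 2)))

# Hypothetical normalized order parameters for one year
invest = order_degree(np.array([0.8, 0.6, 0.7]), [0.4, 0.3, 0.3])
finance = order_degree(np.array([0.5, 0.4, 0.6]), [0.5, 0.25, 0.25])
C = coupling_degree(invest, finance)
print(round(C, 3))
```

Higher `C` indicates stronger interactive coupling between the investment and financing subsystems in that period.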
Table 1. The coupling coordination system and criteria of energy industry investment
and financing
Table 2. Energy industry investment and financing coupling coordination system index
system
4 Summary
Based on system theory and synergetics, this paper establishes a coupling coordination model oriented toward low-carbon sustainable development and greenhouse gas emission reduction to analyze the interactions between energy industry investment and financing. The main conclusions are as follows: (1) The investment and financing of the energy industry exert a stress effect on the climate environment, while the climate environment exerts a binding effect on energy industry investment and financing. The stress effect arises mainly because investment and financing expand energy production capacity and thereby increase emissions of air pollutants; the binding effect is exerted mainly through strict climate and environmental protection policies. (2) Under greenhouse gas emission reduction policies, the interaction between energy investment and financing has weakened year by year. Taking greenhouse gas reduction into account, regions with high energy industry greenhouse gas emissions show relatively low investment-financing coupling and relatively poor coordination; energy development and construction may not be an important cause of regional compound atmospheric pollution; and climate and environmental protection policies constrain energy investment only weakly but have an important impact on energy financing.
References
1. Du J, Bo Y, Yao X (2012) Selection of leading industries for coal resource cities
based on coupling coordination of industry’s technological innovation. Int J Min Sci
Technol 22(3):317–321
2. Güneralp B, Seto KC (2008) Environmental impacts of urban growth from an integrated dynamic perspective: a case study of Shenzhen, South China. Global Environ Change 18(4):720–735
3. Li Y, Li Y et al (2012) Investigation of a coupling model of coordination between
urbanization and the environment. J Environ Manage 98(1):127
4. Norgaard RB (1990) Economic indicators of resource scarcity: a critical essay. J
Environ Econ Manage 19(1):19–25
5. Smale R, Hartley M et al (2006) The impact of CO2 emissions trading on firm
profits and market prices. Climate Policy 6(1):31–48
6. Tang Z (2015) An integrated approach to evaluating the coupling coordination
between tourism and the environment. Tourism Manage 46:11–19
Abstract. This paper uses the DEA method to establish a water resources efficiency evaluation model and makes an empirical study of the Chengdu urban agglomeration over 2008–2014. The results show that efficiency increased year by year, and that both the economic efficiency and the environmental efficiency of water resources utilization improved. Meanwhile, controlling for technology, education and transportation levels, an analysis of efficiency from the side of economic factors yields the following conclusions: (1) urban water use efficiency shows a significant linear correlation with the level of economic growth; (2) technology shows no obvious linear relationship with water use efficiency, traffic enhances the effect of openness on water efficiency, and education is negatively correlated with efficiency. This indicates that the urban agglomeration should strengthen traffic infrastructure on the basis of its regional economic advantages and optimize its industrial structure.
1 Introduction
With economic development and population growth, water shortage has become a severe problem. According to a National Bureau of Statistics report, Sichuan's water consumption per unit of industrial added value dropped from 135 m³ per million yuan in 2008 to 46 m³ per million yuan in 2014, a decline of 66%, meeting the requirements of the State Council ahead of schedule. However, compared with Beijing, Suzhou and other "national water-saving cities", which in recent years have maintained values of 16–19 m³ per million yuan, the efficiency of water use still needs improvement.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_102
Space-Time Analysis for Water Utilization Efficiency 1231
The shortage of water resources is a global problem, and making efficient use of water to ease the tension between resource supply and demand is urgent. The coordinated development of economy and environment is the foundation of building a resource-saving society. On the one hand, the main branches of the economy have an impact on the efficiency of water use [2]; on the other hand, improving water use efficiency is conducive to local food exports [7].
Quantitative analysis of economic development together with water use is therefore particularly important for sustainable development. Previous studies focused on evaluating the efficiency of water use in agriculture, industry or mixed settings, or on refining the models themselves. Existing work has studied water use from the perspective of economic output while ignoring ecological impact; this paper aims to evaluate water utilization in the Chengdu urban agglomeration through both economic and environmental efficiency. Few existing studies attach a quantitative analysis of the factors affecting water use efficiency; based on empirical analysis, this paper studies water use efficiency and its economic factors, supporting the sustainable construction of a water-saving society.
2 Research Objectives
There are many ways to analyze the utilization of water resources, but the degree of consensus on water potential is not high, and the range of fluctuation is difficult to analyze quantitatively with decoupling theory. This article studies water use in the cities of Sichuan province from the perspective of water resources utilization efficiency, and unfolds as follows. First, to explore the relationship between water use efficiency and economic development, an input-output model measuring the efficiency of water resources utilization is set up on the basis of the Cobb-Douglas production function. Second, according to differences in water use across time and space, the comprehensive utilization efficiency of water resources is divided into economic efficiency and environmental efficiency. Third, the impact of economic development on water use efficiency is analyzed to uncover the relationship between economic development and water consumption in the urban agglomeration. Lastly, to reflect the economic indicators more accurately, further analysis of the relationship between economic level and water use efficiency is carried out while controlling for each city's science and technology, education and transportation levels. This is of great importance for a deep analysis of economic development and resource allocation. Figure 1 shows the framework.
This paper chooses the BCC model to analyze relative efficiency under variable returns to scale. The BCC model adds a convexity assumption to the CCR model, which is helpful for analyzing environmental and economic outputs [4,8,9]. We add a curve measure to the BCC model. The curve measure is a nonlinear environmental efficiency evaluation method: it treats desirable outputs radially and measures pollutants by their reciprocals, so that efficiency is evaluated in terms of increasing output while reducing pollution.
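The BCC evaluation described above can be sketched as an input-oriented linear program. This is a minimal illustration under standard DEA (VRS) assumptions, not the DEAP 2.0 implementation the paper actually uses, and the city data below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, o):
    """Input-oriented BCC (variable returns to scale) efficiency of DMU o.
    X: (n, m) input matrix, Y: (n, s) output matrix for n DMUs.
    Minimizes theta s.t. sum(lam*x) <= theta*x_o, sum(lam*y) >= y_o, sum(lam)=1."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # decision vars: [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):                           # sum(lam * x_i) - theta * x_oi <= 0
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                           # -sum(lam * y_r) <= -y_or
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    A_eq = [np.r_[0.0, np.ones(n)]]              # convexity constraint (the BCC addition)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Hypothetical data: 4 cities, 2 inputs (capital, water), 1 output (GDP)
X = np.array([[2., 3.], [4., 2.], [3., 5.], [5., 6.]])
Y = np.array([[4.], [5.], [4.], [5.]])
print([round(bcc_efficiency(X, Y, o), 3) for o in range(4)])
```

An undesirable output such as sewage would, following the paper's reciprocal treatment, be inserted as an extra output column `1.0 / sewage` before solving.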
1232 Y. Huang and Y. Liu
[Fig. 1: research framework; recoverable labels: opening degree, economic structure, consumption, capital, population, urban water resource]
To fully reflect the input and output of water resources in cities, the paper constructs, on the basis of the Cobb-Douglas production function, an input-output analysis framework including water use and water pollutant emissions. The exact input-output index system is given in Table 1. Fixed-asset investment of the whole society reflects the scale of society-wide capital investment in a period. The local population, compared with the employed population, better expresses the contribution to increases in water consumption and economic output. We use annual water supply as the water consumption of each city. All data come from the Sichuan statistical yearbook. This paper adds urban sewage emissions as undesirable outputs. Since pollution and other undesirable products are produced, and following previous treatments of pollutants [3,4], this paper inserts the reciprocal evaluation, which has a comparative advantage, into the DEA model to capture environmental impact.
The paper uses DEAP 2.0 to run the model and lists the results in Table 3. It is obvious from Table 3 that the water use efficiency of the Chengdu urban agglomeration increased gradually from 2008 to 2014. The water utilization efficiency of Ziyang reached the optimal level five times between 2008 and 2012, indicating that capital investment, labor and water resources allocation in Ziyang are in good condition and rank best among the Chengdu urban agglomeration. We use ArcGIS to demonstrate the variation of comprehensive efficiency.
Figure 2 directly shows the trend of water utilization efficiency. From the perspective of the time series, the financial crisis of 2008 affected the economic development and water use of the Chengdu urban agglomeration, which led to a decline in water utilization efficiency in 2009. In 2011, with economic recovery and urban industrial construction, water utilization efficiency and its indirect influences gradually recovered to the level of 2008.
From the perspective of the spatial distribution of industry, the large enterprises of the agglomeration's pillar industries (electronic information, water and electricity, pharmaceutical and chemical industry, machinery, metallurgy, food and beverage, etc.) were mainly distributed in Chengdu, Deyang, Mianyang and Ziyang. However, the development of industry in Meishan and Leshan lagged behind, lacking the support of large enterprises and projects.
Fig. 2. Comprehensive water use efficiency of the Chengdu urban agglomeration, 2008–2014
stable efficiency at a high level in 2011. Deyang's pillar industry was machinery, which accounted for nearly half of its output. Although the machinery industry was still at an extensive stage of development, in the climate of 2011–2014 leading estates contributed greatly to local economic output. Ziyang, by location, was the unique regional center city connecting the Chengdu-Chongqing "dual core". Like Deyang, it was known for its automobile industry, which drove rapid economic development through increasing industrial output and thereby indirectly improved economic efficiency. Mianyang, although also manifestly industrialized, showed an obvious gap by comparison: with electronics as its pillar industry, it played a relatively low eco-efficient role among the Chengdu urban agglomeration, and its lagging pace of development prevented its economic efficiency from reaching a higher level. Although Mianyang is regarded as the second biggest city in Sichuan, its lowest water environmental effect dragged it to the bottom and pulled down its comprehensive water utilization efficiency. Chengdu, which specialized in electronics, medicine and so on, being the agglomeration's center city and the capital of Sichuan Province, enjoyed economic development much higher than other prefecture-level cities, which raised its water use efficiency. In terms of the environmental efficiency of water use, the Chengdu urban agglomeration was effective in environmental efficiency but relatively weak in eco-efficiency, because it did not make the best use of factors. For illustration, Ya'an and Meishan did not pull their weight, while many cities, such as Chengdu, Deyang, Leshan, Ziyang and Mianyang, had already reached a large scale. In general, the input-output ratio was reasonable, and the combination of factors achieved a certain economy of scale. Therefore, the agglomeration is close to optimal efficiency in urban scale but insufficient in eco-efficiency.
Fig. 3. Economic and environmental efficiency of water use in the Chengdu urban agglomeration, 2008–2014
4 Economic Effects
4.1 Hypothesis
Based on the above empirical results, it is visible that economic output and the level of city development are closely related to water utilization. This paper takes economic structure, opening degree and consumption level as explanatory variables, and selects technology, transportation and education level as control variables.
Hypothesis 1: economic structure has a negative impact on water use efficiency.
The economic structure is the composition of the national economy. Its influence on water resources utilization efficiency can be analyzed from two aspects. On the one hand, industrial production has positive effects on a region's economy, and an industry-heavy economic structure tends to have higher economic output, which shows up as high water utilization efficiency. On the other hand, increased industrial investment amplifies impacts on the ecological environment, resulting in a larger proportion of environmental output and thereby reducing the city's water use efficiency. Whether the negative ecological effect of the Chengdu urban agglomeration's development outweighs its economic effect is at present unknown.
Hypothesis 2: the degree of opening to the outside world has a positive impact on the efficiency of water resources utilization.
The degree of opening affects the import of resources and the introduction of new technologies. Chengdu is located in China's western region, whose energy technology is relatively backward, so changes in the degree of dependence on foreign trade have great influence on the west. Therefore, studying the degree of opening up is of great significance for the utilization efficiency of water resources in the Chengdu urban agglomeration.
Hypothesis 3: consumption level has a positive impact on water use efficiency.
To some extent, the consumption level reflects the scale of production and indirectly affects the efficiency of resource utilization. On the one hand, consumption is an important booster of economic growth; on the other hand, the garbage generated in the process of consumption degrades environmental quality [1]. Does the ecological impact of a rising consumption level exceed its economic effect? This paper assumes that the consumption level has a positive impact on water use efficiency.
The selection of economic factors and the setting of variables are explained in three parts. As the explained variable, utilization represents the regional comprehensive efficiency of water use; its values are calculated by the DEA-BCC model, and all other data come from the 2009–2015 Sichuan statistical yearbooks. For the explanatory variables, structure refers to the economic structure; this paper uses the ratio of the sum of the primary and secondary industries to the service industries to express it. This ratio is chosen because the primary and secondary industries account for most water consumption, while the service industries (including commodity trade, catering and accommodation, transportation and other services) account for only 3% (Sichuan water resources bulletin 2014). In terms of gross product structure, the primary industry in the Chengdu urban agglomeration occupies a small share compared with the secondary and service industries. Therefore, the sum of primary and secondary industrial added value divided by the added value of the tertiary industry represents the economic structure and the direction of investment allocation. Consumption indicates the level of consumption; this paper selects total retail sales of consumer goods to express the transactions of various organs, enterprises and institutions, so as to reflect the city's domestic consumption level and demand. Export indicates the degree of opening up; the ratio of total imports and exports to GDP is used. A city's total imports and exports of goods better embody its foreign trade level, but because joint ventures report in units of millions while other indicators use billions, directly using the totals in the model would introduce deviation; moreover, the floating exchange rate makes it difficult to convert US-dollar import and export amounts into RMB. This paper therefore selects the ratio of import and export volume to current GDP to reflect the level of international trade.
For the control variables, this paper uses proportions of public finance expenditure to reflect the respective levels. A city's technology investment has a positive effect on improving water resources utilization efficiency: technological progress can create more water-saving facilities and increase water recycling, thereby increasing utilization efficiency. The educational level of a city reflects its progress and degree of civilization. The level of urban transportation affects the degree of opening up; at the same time, improvements in transport also play a role in alleviating the
The econometric models for panel data include the fixed effect model, the random effect model and least squares regression. The equation is as follows:

$$utilization_{it} = b_1 + b_2 \, structure_{it} + b_3 \, export_{it} + b_4 \, consumption_{it} + \varepsilon_{it},$$

where $b_1$ stands for the intercept, $b_2$, $b_3$, $b_4$ represent the coefficients of the economic structure, the degree of opening up and social consumption, and $\varepsilon$ is the stochastic error. Because the regional differences within the urban agglomeration are relatively small, differences in water resources utilization efficiency across cross-sections can be ignored; the time span is also short, so variation of the parameters can be ignored as well. In this paper, WLS regression is used to fit and analyze the influencing factors and weights, and the control variables are introduced step by step.
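The WLS fit described above can be sketched as follows. This is an illustration only: the data are simulated, the coefficient values are hypothetical, and uniform weights are used so the sketch reduces to OLS (the paper's actual weights are not specified here):

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: beta = (X'WX)^{-1} X'Wy."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
n = 50
structure = rng.uniform(1, 3, n)     # (primary+secondary)/tertiary value added
export = rng.uniform(0.05, 0.4, n)   # (imports+exports)/GDP
consume = rng.uniform(0.5, 2.0, n)   # retail sales proxy
# Simulated panel: true coefficients chosen arbitrarily for the demo
y = 0.2 + 0.5 * structure + 1.2 * export + 0.3 * consume + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), structure, export, consume])
w = np.ones(n)                       # uniform weights: WLS collapses to OLS
b1, b2, b3, b4 = wls(X, y, w)
print(np.round([b1, b2, b3, b4], 2))
```

Stepwise introduction of the controls (technology, transportation, education) simply appends further columns to `X` and refits.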
result of the three models. A high proportion of the primary and secondary industries improved water utilization efficiency. In addition, further openness would introduce new energy sources, advanced technology and talent, which bears greatly on economic output and the ecological environment. The regression coefficient of the level of consumption is significantly positive. We use stepwise least squares regression, discarding the constant, to analyze the standardized coefficients.
          Before introducing control variables              After introducing control variables
          Pooled OLS  Fixed effect  Random effects   WLS      +Technology  +Transportation  +Education
b2        0.217a      0.230a        0.2259a          1.041a   0.404        0.379            0.700a
b3        4.162b      4.421         4.3012b          0.532b   0.671a       0.939a           1.253a
b4        0.001a      0.001b        0.001a           0.637b   0.981b       0.607a           0.542a
b5        −           −             −                −        −0.1         −                −
b6        −           −             −                −        −            0.418a           0.471a
b7        −           −             −                −        −            −                −0.338a
R2        0.412       0.411         0.411            0.407    0.354        0.503            0.577
F         55.31       6.61          −                10.759   6.995        12.895           13.649
a Correlation is significant at the 0.01 level; b correlation is significant at the 0.05 level.
From Table 5, it is obvious that the economic structure occupies the greatest place among the economic indicators in explaining the trend of water use efficiency. The reason is that the Chengdu urban agglomeration has a high degree of industrialization. Domestic consumption promotes the economic diversification of city functions through resource sharing among cities, which indirectly improves water utilization. Meanwhile, international trade brings inflows of technology and talent and promotes economic development.
In this paper, the coefficient of technology as a control is negative and does not pass the significance test. This does not mean there is no linear relationship between technology and water use efficiency: the variable is based on technology's proportion of public expenditure, while much of a city's scientific development depends not only on public finance expenditure but also on technological innovation in enterprises and large companies. The traffic level shows that increasing its proportion of public finance expenditure has a positive effect on water use efficiency. Under the influence of traffic, the effect of the economic structure weakens and the effect of opening up rises. The coefficient of educational level is negative.
5 Conclusion
The water use efficiency of the Chengdu urban agglomeration differs across time and space. Along the time axis, since the financial crisis gave way to a period of economic recovery, the efficiency of water use has been gradually increasing. From the perspective of spatial differences, cities with geographic advantages and with electric, automotive and other pillar industries showed better water use efficiency. Some cities, although their marginal returns on investment were higher, suffered serious environmental pollution and relatively weak economic growth and therefore showed low water utilization efficiency. The data show that the water use of the Chengdu urban agglomeration still has room for improvement, and government departments should make efforts to increase sewage treatment.
Analyzing water utilization efficiency from the perspective of economic level, we find that water resource utilization and economic development are linearly related, with the economic structure exerting the most profound effect. Increasing the utilization rate of resources depends largely on green industry output; the opening degree and the city's consumption affect the inflow of technology and talent and thereby indirectly affect water use efficiency, while also directly affecting water use through the flow of resources within the agglomeration.
Further analysis of urban development finds that the level of urban transportation has a positive effect on water resources utilization efficiency and increases the effective proportion of the degree of opening up. Raising the level of education is, in the short term, not conducive to water use: owing to the long payback period of educational investment, a higher level of openness is required to compensate for the effect of extra education spending on water resources utilization efficiency.
References
1. Kox HLM (2013) Export decisions of services firms between agglomeration effects
and market-entry costs. Service industries and regions. Springer, Heidelberg, pp
177–201
2. Kulmatov R (2014) Problems of sustainable use and management of water and
land resources in Uzbekistan. J Water Resour Prot 6(1):35–42
3. Liao H (2011) Utilization efficiency of water resources in 12 western provinces of China based on the DEA and Malmquist TFP index. Resour Sci 33(2):273–279
4. Mai Y, Sun F et al (2014) Evaluation of China's industrial water efficiency based on DEA model. J Arid Land Resour Environ 11:008
5. Malaeb L (2004) Constituent transport in water distribution networks
6. Melo PC, Graham DJ, Noland RB (2009) A meta-analysis of estimates of urban
agglomeration economies. Reg Sci Urban Econ 39(3):332–342
7. Taheripour F, Hertel TW et al (2016) Economic and land use impacts of improving
water use efficiency in irrigation in South Asia. J Environ Prot 7(11):1571–1591
8. Thanassoulis E (2000) DEA and its use in the regulation of water companies. Eur J Oper Res 127(1):1–13
9. Veettil PC, Ashok A et al (2011) Sub-vector efficiency analysis in chance-constrained stochastic DEA: an application to irrigation water use in the Krishna River Basin, India. In: The 122nd EAAE Seminar "Evidence-Based Agricultural and Rural Policy Making: Methodological and Empirical Challenges of Policy Evaluation"
10. Wu F, Liu Z (2008) Research on the mechanism of how city group drive eco-
nomic growth-empirical evidences from 16 cities of Yangtze River delta. Econ Res
J 11:126–136
Exploring Linkages Between Lean and Green
Supply Chain and the Industry 4.0
1 Introduction
According to [34], Industry 4.0 describes a production-oriented CPS that integrates production facilities, warehousing and logistics systems, and even social requirements, in order to establish global value chain networks. In addition, Germany Trade & Invest (GTAI) [15] mentions that the combination of the industrial value chain, product life cycles and business information technology must integrate the processes from product design to production, supply chain management, aftermarket service and training [15]. An intelligent factory is in development and has been coined the smart factory. Alongside this concept, others appear that are important for the implementation of Industry 4.0, for example smart products, smart manufacturing and smart data.
The authors of [27] mention that this implementation is still in progress. That is why it is important to understand the role of lean and green supply chain management in Industry 4.0. For example, lean waste could be recognized through smart factory implementation [28]. Resource efficiency, which is a lean and green concept, is a focus of the design of smart factories [27].
This study intends to understand whether Industry 4.0 makes lean and green supply chain concepts more important, that is, whether it more easily enables the deployment of lean and green characteristics. A number of characteristics are presented in the model, namely: (i) manufacturing, (ii) logistics and supply, (iii) product and process design, (iv) product, (v) customer, (vi) supplier, (vii) employee, (viii) information sharing and (ix) energy.
The remainder of this paper is organized as follows: in Sect. 2, a theoretical background on Industry 4.0 and the lean and green supply chain is presented; in Sect. 3, a combination of the lean and green supply chain and Industry 4.0 is developed; finally, some concluding remarks are drawn.
Industry 4.0 is considered the paradigm of the fourth stage of industrialization and describes a vision of future production [21,30]. The core idea of Industry 4.0 is the integration and application of information and communication technologies to implement the Internet of Things and Services, so that business processes and engineering processes are deeply integrated, making the environment intelligent [28,34]. The concept of Industry 4.0, which represents the integration of the virtual and physical worlds in a way that together creates a truly networked environment in which intelligent objects communicate and interact with each other, is founded on cyber-physical systems [15]. According to [19], Industry 4.0 “will involve the technical integration of CPS into manufacturing and logistics and the use of the Internet of Things and Services in industrial processes. This will have implications for value creation, business models, downstream services and work organization.”
Industry 4.0 is characterized by three features [19,30,34]: (i) horizontal integration across entire value networks; (ii) vertical integration and networked manufacturing systems; and (iii) end-to-end digital integration of engineering across the entire value chain or product life cycle.
1244 S. Duarte and V. Cruz-Machado
Horizontal integration across the entire value network refers to the integration of the various systems used in the different stages of the manufacturing and business planning processes that involve an exchange of materials, energy and information, both within a company (e.g., logistics, production and marketing) and between several different companies. The idea is that information, material and money can flow easily among different companies, creating new value networks as well as business models. This can result in an efficient ecosystem [19,30,34].
Vertical integration refers to the integration of the various information and physical systems at different hierarchical levels, for example production management, manufacturing and execution, and corporate planning. This integration takes place inside a factory to create a flexible and reconfigurable manufacturing system [19,30,34].
The goal of horizontal and vertical integration is to deliver an end-to-end solution. The end-to-end solution refers to the digital integration of engineering across the entire value chain to support product customization: from raw material acquisition to product manufacturing, product use and end of life [19,30,34].
Through these features, Industry 4.0 is expected to create an environment that is more flexible, efficient and sustainable. The idea is to individualize customer requirements, such as a customized product through mass customization, improving productivity and achieving higher levels of quality while manufacturing profitably [6,19]. Indeed, by applying advanced information and communication technologies and systems in manufacturing and supply chain operations, Industry 4.0 addresses the smart factory [28]. The smart factory is designed according to sustainable and business practices, insisting upon flexibility, adaptability and self-adaptability, learning characteristics, fault tolerance, and risk management [15]. Therefore, standards are essential to ensure the exchange of data between machines, systems and software, and to guarantee that the product moves within a networked value chain [6].
That is, high levels of automation come as standard [15]. Automation systems, manufacturing and product management are integrated and form the base of the smart factory [6]. Manufacturers can now add sensors and microchips to tools, materials, machines, vehicles and buildings so that they communicate with each other in real time to make smart products [15].
According to [36], “products know their histories and their routes, and thereby not only greatly simplify the logistic chain but also form the basis for product life cycle data memories”. Moreover, products can be manufactured because the smart factory is supplied with energy from smart grids [30].
Smart factory and smart product are not the only concepts defined in this new industrialized era; other related concepts are considered in the literature. In their work, Kolberg and Zühlke [21] considered four different smart concepts to define the smart factory, namely the smart planner, smart product, smart machine and smart operator. The authors of [30] mention the smart grid, smart logistics and smart data. Sanders et al. [28] mention other concepts such as smart systems, smart environment, smart machines and smart devices, and smart tasks. Table 1 compiles several concepts of Industry 4.0.

Table 1. Concepts of Industry 4.0 and their descriptions

Smart factory: The smart factory represents the key characteristic of Industry 4.0 [15]. It will be more flexible, dynamic and intelligent [27], a place where people, systems and objects communicate with each other [15]. The Internet of Things and Services is the main enabling technology for the smart factory [15,29].

Smart manufacturing: Manufacturing will be equipped with sensors and autonomous systems that allow operations to be optimized with minimal employee intervention [27,29]. It produces small lots of different product types more efficiently [34].

Smart product: A smart product carries sensors and microchips that allow it to communicate, via the Internet of Things, with other products and with employees [27]. It holds the information about its requirements for the manufacturing processes and manufacturing machines [21,30].

Smart logistics: One of the sustainable mobility strategies [15]. Smart logistics will use CPS to carry the material flow within the factory and along the supply chain (between factories, customers and other stakeholders) [30]. Transport equipment, as part of smart logistics, should be able to react to the unexpected and to drive autonomously between starting point and destination [30]. Distribution and procurement will be increasingly individualized [27].

Smart engineering: Includes product design and development, production planning and engineering, production, and after-sales service [29].

Smart data: Structured information derived from data that can be used for decision-making [30].

Smart machine: Machines and equipment will have the ability to improve processes through intelligent decision-making instead of being directly instructed [27,34]. Smart machines should have additional autonomy and sociality capabilities to adapt and reconfigure for different product types [34].

Smart planner: The smart planner optimizes processes in real time [21] through decentralized self-organization [27].

Smart operator: A smart operator is an employee who, supported by ICT, controls and supervises ongoing activities [21]. Employees can be quickly directed to the right tool [6].

Smart customer: Customers' needs and behaviors are analyzed in real time so as to provide them with new and more sustainable products and services [29].

Smart supplier: Based on factory needs, it is possible to select the best supplier (allowing higher flexibility) and to strengthen sustainable relations with suppliers (by increasing information sharing in real time) [29].

Smart grid: Responsible for supplying energy to the factory [30]; energy management [15].

Smart energy: Monitors and provides feedback on energy production and use [23].
Through the integration of these concepts and technologies, it should be possible to provide a customized or individualized product or service and at the same time be highly adaptive to demand changes [15]. These changes must be made at all stages of the product life cycle: the design phase, raw material acquisition phase, manufacturing phase, logistics and supply phase, and the use and end-of-life phases [15,30]. Therefore, the requirements for the design and operation of our factories become crucial for success [36].
Nowadays, lean and green supply chains form an integrated approach; the paradigms have different objectives and principles, but they complement each other [7,9,10,12–14,31]. The lean supply chain is about increasing value for customers by adding product or service features while eliminating waste and non-value steps along the value chain [11]. The green supply chain is about reducing environmental impacts and risks while improving the ecological efficiency of organizations and their partners, and pursuing corporate profit and market-share objectives [35].
These two paradigms are often seen as compatible because of their joint focus on waste reduction [5]. The lean paradigm concerns the elimination of waste in every area of design, manufacturing, supplier networks and factory management [13]. The basic forms of waste to be reduced or eliminated are [17]: overproduction, waiting, transportation, unnecessary inventory, inappropriate processing, defects and unnecessary motion. One more waste, unused employee creativity, is pointed out by [31]. Green considers ways to eliminate waste from the environment's perspective [11]. Waste generation takes the form of [16]: greenhouse gases, eutrophication, excessive resource usage, excessive water usage, excessive power usage, pollution, rubbish, and poor health and safety.
In their research, the authors of [12] mention that the two paradigms share the same types of waste: (i) inventory; (ii) transportation; and (iii) the production of by-product or non-product output. According to [5], the removal of non-value-adding activities suggested by the lean paradigm can provide substantial energy savings, which integrates the principles of the green paradigm.
The combination of lean and green supply chain practices yields better results than the sum of implementing each separately [12]. The two paradigms have similar characteristics. According to [4], the practices of both paradigms contribute to: (i) an increase in information frequency; (ii) an increase in the level of integration in the supply chain; (iii) a decrease in production and transportation lead times; (iv) a reduction in supply chain capacity buffers; and (v) a decrease in inventory levels. Another practice that contributes to the better employment and use of all tools is the involvement of employees [12]. Both paradigms look into how to integrate product and process redesign in order to prolong product use, to ease the recycling or re-use of products, and to make processes less wasteful [12]. In the supply chain, both paradigms call for close collaboration with partners [22]. In addition, waste reduction, lead-time reduction, and the use of techniques and approaches to manage people, organizations and supply chain relations are synergies mentioned by [13].
Commitments must be made within the factory, the supplier network and the customer base for the better deployment of lean and green practices, so as to achieve the best supply chain efficiency. In the authors' previous study [11], a table was presented comparing the different characteristics of lean and green. Other important studies [3,4,12,18,22] inspired the development of a comparison between the lean and green paradigms. Several lean and green supply chain characteristics are considered in Table 2.
Industry 4.0 not only can be integrated with lean manufacturing but can go beyond that and improve lean manufacturing. Sanders et al. [28] likewise considered that Industry 4.0 and lean manufacturing can be integrated to achieve successful production management; the two are not mutually exclusive [28].
Indeed, several studies mention the benefits of integrating lean and green at different stages of the company or the supply chain [1,9,12,13,20]. Kainuma and Tawara [20] studied the lean and green supply chain incorporating recycling and re-use during the life cycle of products and services. Their model represents the different phases of a product life cycle, consisting of [20]: (i) raw material acquisition, (ii) manufacturing, (iii) distribution, (iv) retail, (v) use, (vi) collection, (vii) transportation, (viii) dismantling and (ix) decomposition.
Stock and Seliger [30] presented opportunities for realizing sustainable manufacturing in Industry 4.0. For them, the life cycle (in the end-to-end solution) consists of different phases [30]: (i) raw material acquisition, (ii) manufacturing, (iii) transport (between all phases), (iv) the use and service phase, and (v) the end-of-life phase (encompassing reuse, remanufacturing, recycling, recovery and disposal). In addition, the environmental/green dimension of sustainability is better addressed because the allocation of resources such as products, materials, energy and water can be realized more efficiently [30]. The adoption of smart energy systems facilitates energy use [23].
In fact, energy models would assist the analysis of green factory designs, especially for evaluating alternatives during the early design stages [25]. The design of the lean and green supply chain, especially in the early design stages of products and processes, is a very important issue for the elimination of waste. In a lean and green environment, [7] mention that “eliminating the use of toxics through product or process re-design could mean reduced worker health and safety risks, reduced risks to consumers and lower risk of product safety recalls and reducing process wastes in manufacturing often find more opportunities to reduce waste throughout the life cycle of the product, thereby having a possible domino effect on the entire supply chain”. Industry 4.0 is in line with these ideas. According to [27], Industry 4.0 processes will change entire supply chains, from suppliers to logistics and to the life cycle management of a product, helping to streamline processes with more transparency and flexibility.
The lean and green supply chain requires manufacturing technologies that make processes and products more environmentally responsible [22]. In addition, it calls for a flexible information system [17]. Technology is a driver of Industry 4.0 [15,34]. Smart technologies, which include the use of electronics and information technologies [27], will help the implementation of a more efficient lean and green supply chain.
Collaboration with suppliers, a lean and green characteristic, is also considered by Industry 4.0. Better communication mechanisms, high compatibility of hardware and software (which requires standardized interfaces), and the synchronisation of data allow lean and green suppliers to achieve better synchronisation with manufacturers [28,32].
The author of [14] concludes that lean and green “is an effective tool to improve processes and reduce costs, by not only reducing non-value-added activities but also physical waste created by systems”. Industry 4.0 is in line with this statement, as the paradigm does all of this in a better way: more sustainably, faster and more efficiently. According to [21], lean allows organizations to be more standardized and transparent and to retain only the essential work, which results in a less complex organization and supports the installation of Industry 4.0 processes and solutions. Green also supports the implementation of Industry 4.0 because it reduces negative environmental impacts.
The customer is a concern in the lean and green supply chain. Of course, the aim of lean and green is to satisfy customer needs, but this satisfaction differs by paradigm: in the lean paradigm it is based on cost and lead-time reduction [9], and in the green paradigm it is based on helping customers to be more environmentally friendly [13]. Industry 4.0 will bring improvements on this subject. It allows a better understanding of customer needs and the immediate sharing of demand data throughout complex supply chains [15]. According to [27], full automation and digitalization systems allow an individual customer-oriented adaptation of products that will increase the value added for organizations and customers. Instead of choosing from a fixed product spectrum set by the manufacturer, customers will be able to individually combine single functions and components and define their own product [15].
Another characteristic of the lean and green supply chain is employee involvement and empowerment [12]. According to [8], employee commitment and motivation, and employee empowerment and participation, are elements of a lean and green organization. [24] also mention that connections between lean and green practices are shown through: (i) employee involvement, (ii) learning by doing, (iii) continuous improvement, and (iv) problem-solving tools. [36] mentions that lean means reducing complexity, avoiding waste and strictly supporting the employees in their daily work. The reduction of environmental impacts also improves the health and safety of employees [31]. These aspects are in line with the role of the employee in the fourth industrial revolution. Indeed, employees may find greater autonomy and more interesting or less arduous work [6]. Industry 4.0 needs employees not only with creativity and decision-making skills, encountered as a lean and green supply chain characteristic, but also with technical and ICT expertise [6].
There are some studies in the literature that try to bridge the lean and green paradigms with Industry 4.0. [28] used ten lean concepts in their research and validated their attainability through the Industry 4.0 paradigm. Kolberg and Zühlke [21] described lean automation and Industry 4.0 and gave an overview of the links between them. [30] present an overview of sustainable manufacturing under the future requirements of Industry 4.0. Figure 1 illustrates an attempt to link lean and green supply chain characteristics to Industry 4.0 concepts.
Fig. 1. Linking the lean and green supply chain characteristics to the Industry 4.0
concepts
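The kind of linkage Fig. 1 expresses can be sketched as a simple lookup structure. The sketch below is illustrative only: the pairings between the nine characteristics and the Industry 4.0 smart concepts of Table 1 are assumptions made for demonstration, not the authors' validated model.

```python
# Illustrative sketch (assumed pairings, not the paper's model): the nine
# lean and green supply chain characteristics, each linked to candidate
# Industry 4.0 "smart" concepts drawn from Table 1.
LEAN_GREEN_TO_I40 = {
    "manufacturing": ["smart factory", "smart manufacturing", "smart machine"],
    "logistics and supply": ["smart logistics"],
    "product and process design": ["smart engineering"],
    "product": ["smart product"],
    "customer": ["smart customer"],
    "supplier": ["smart supplier"],
    "employee": ["smart operator"],
    "information sharing": ["smart data"],
    "energy": ["smart grid", "smart energy"],
}

def concepts_for(characteristic: str) -> list[str]:
    """Return the Industry 4.0 concepts linked to a lean/green characteristic."""
    return LEAN_GREEN_TO_I40.get(characteristic, [])
```

For example, `concepts_for("energy")` returns `["smart grid", "smart energy"]`, while an unknown characteristic yields an empty list.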
5 Conclusion
Today, the term Industry 4.0 describes a vision of the future of supply chains. There is a strong conviction that the lean and green supply chain will not disappear; it will evolve and adapt to the new trends that the new industrial era will require. The lean and green supply chain is focused on the organization and on the flow of information, material and money between partners; that is, it is directed more toward physical processes and less toward virtual processes and technology. Even so, there are some examples in the literature that try to bridge the lean paradigm or the green paradigm with Industry 4.0. This paper bridges the gap between the well-known lean and green supply chain management and the new era of industrial revolution.
A conceptual model was developed linking the lean and green supply chain characteristics to the Industry 4.0 concepts. Several characteristics were presented in the model, namely: (i) manufacturing, (ii) logistics and supply, (iii) product and process design, (iv) product, (v) customer, (vi) supplier, (vii) employee, (viii) information sharing and (ix) energy. Those who understand the relationships between these two topics will have a greater chance of turning their supply chains into a source of competitive advantage and of better supporting the deployment of the Industry 4.0 paradigm.
Future research is needed to understand which lean and green characteristics are most important for the development of Industry 4.0. It would also be beneficial to understand the priority among characteristics in the implementation of this new paradigm and across different entities in the supply chain. Industry 4.0 will be a step forward for the effectiveness and competitiveness of lean and green supply chains.
References
1. Azevedo SG, Carvalho H et al (2012) Influence of green and lean upstream supply
chain management practices on business sustainability. IEEE Trans Eng Manage
59(4):753–765
2. Bortolotti T, Boscari S, Danese P (2015) Successful lean implementation: organi-
zational culture and soft lean practices. Int J Prod Econ 160(12):182–201
3. Carvalho H, Azevedo SG, Cruzmachado V (2010) Supply chain performance man-
agement: lean and green paradigms. Int J Bus Perform Supply Chain Model
2(3):304–333
4. Carvalho H, Duarte S, Machado VC (2011) Lean, agile, resilient and green: diver-
gencies and synergies. Int J Lean Six Sigma 2(2):151–179
5. Carvalho H, Azevedo S, Cruz-Machado V (2014) Trade-offs among lean, agile,
resilient and green paradigms in supply chain management: a case study approach.
Lect Notes Electr Eng 242:953–968
6. Davis R (2015) Industry 4.0, digitalisation for productivity and growth. European
Parliamentary Research Service (EPRS), Members’ Research Service, European
Union
7. Dhingra R, Kress R, Upreti G (2014) Does lean mean green? J Cleaner Prod 85:1–7
8. Duarte S, Cruz-Machado V (2013) Lean and green: a business model framework.
Lect Notes Electr Eng 185:751–759
9. Duarte S, Cruz-Machado V (2013) Modelling lean and green: a review from busi-
ness models. Int J Lean Six Sigma 4(3):228–250
10. Duarte S, Cruz-Machado V (2015) Investigating lean and green supply chain link-
ages through a balanced scorecard framework. Int J Manage Sci Eng Manage
10(1):20–29
11. Duarte S, Machado VC (2011) Manufacturing paradigms in supply chain manage-
ment. Int J Manage Sci Eng Manage 6(5):328–342
12. Dües CM, Tan KH, Ming L (2013) Green as the new lean: how to use lean practices
as a catalyst to greening your supply chain. J Cleaner Prod 40(2):93–100
13. Garzareyes J (2014) Lean and green-synergies, differences, limitations, and the
need for six sigma. IFIP Adv Inf Commun Technol 439:71–81
14. Garzareyes JA (2015) Green lean and the need for six sigma. Int J Lean Six Sigma
6(3):226–248
15. Germany Trade & Invest (GTAI) (2014) Industry 4.0-Smart Manufacturing for the
future. Germany Trade and Invest
16. Hines P (2009) Lean and Green, 3rd edn. Sapartner, New York
17. Jasti NVK, Kodali R (2015) Lean production: literature review and trends. Int J
Prod Res 53(3):867–885
18. Johansson G, Winroth M (2009) Lean vs. green manufacturing: Similarities and
differences. In: International euroma conference: implementation - realizing oper-
ations management knowledge
19. Kagermann H, Helbig J et al (2013) Recommendations for implementing the strate-
gic initiative german industrie 4.0. final report of the industrie 4.0 working group.
Technical report, Forschungsunion
20. Kainuma Y, Tawara N (2006) A multiple attribute utility theory approach to lean
and green supply chain management. Int J Prod Econ 101(1):99–108
21. Kolberg D, Zühlke D (2015) Lean automation enabled by industry 4.0 technologies.
IFAC Papers Online 48(3):1870–1875
22. Mollenkopf D, Stolze H et al (2010) Green, lean, and global supply chains. Int J
Phys Distrib Log Manage 40(1/2):14–41
23. Noppers EH, Keizer K et al (2016) The importance of instrumental, symbolic, and
environmental attributes for the adoption of smart energy systems. Energy Policy
98:12–18
24. Pampanelli AB, Found P, Bernardes AM (2014) A lean & green model for a pro-
duction cell. J Cleaner Prod 85:19–30
25. Prabhu VV, Jeon HW, Taisch M (2012) Modeling green factory physics - an ana-
lytical approach. In: IEEE international conference on automation science and
engineering, pp 46–51
26. Qin J, Liu Y, Grosvenor R (2016) A categorical framework of manufacturing for
industry 4.0 and beyond. Procedia Cirp 52:173–178
27. Roblek V, Meško M, Krapež A (2016) A complex view of industry 4.0. Sage Open 6
28. Sanders A, Elangeswaran C, Wulfsberg J (2016) Industry 4.0 implies lean manu-
facturing: research activities in industry 4.0 function as enablers for lean manufac-
turing. J Ind Eng Manage 9(3):811
29. Shrouf F, Ordieres J, Miragliotta G (2014) Smart factories in industry 4.0: A
review of the concept and of energy management approached in production based
on the internet of things paradigm. In: IEEE international conference on industrial
engineering and engineering management, pp 697–701
30. Stock T, Seliger G (2016) Opportunities of sustainable manufacturing in industry
4.0. Procedia Cirp 40:536–541
31. Verrier B, Rose B, Caillaud E (2015) Lean and green strategy: the lean and green
house and maturity deployment model. J Cleaner Prod 116:150–156
32. Veza I, Mladineo M, Gjeldum N (2016) Selection of the basic lean tools for
development of croatian model of innovative smart enterprise. Tehnicki Vjesnik
23(5):1317–1324
33. Vonderembse MA, Uppal M et al (2006) Designing supply chains: towards theory
development. Int J Prod Econ 100(100):223–238
34. Wang S, Wan J et al (2016) Implementing smart factory of industrie 4.0: an outlook. Int J Distrib Sensor Netw 4:1–10
35. Zhu Q, Sarkis J, Lai KH (2008) Confirmation of a measurement model for green
supply chain management practices implementation. Int J Prod Econ 111(2):
261–273
36. Zuehlke D (2010) Smartfactory-towards a factory-of-things. Annu Rev Control
34(1):129–138
A Model of Maker Education
in Chinese Universities:
The Perspective of Innovation Ecosystem
1 Introduction
2 Literature Review
Maker education in universities arises from the maker movement and its learning-by-doing culture. The maker movement has provided a perspective on learning that differs from the traditional learning practices taking place in schools and universities [14]. Barrett, Pizzico, Levy and Nagel [1] discussed the benefits of university makerspaces, which are primarily focused on building physical models and on the informal learning environments and community inherent in them. Physical models can increase the effectiveness and quality of the final design for the development of undergraduates by linking the material covered in the classroom to the real world. Informal learning environments are more open, allowing them to be used more freely and interwoven into the class structure of multiple classes without typical classroom scheduling constraints.
The best practices of university makerspaces have been investigated by several researchers. Wilczynski [23] explored academic makerspaces in universities such as Arizona State University, Georgia Institute of Technology, Massachusetts Institute of Technology, Northwestern University, Rice University, Stanford University, and Yale University. He recommended the following best practices: the mission of the academic makerspace must be clearly defined; the facility must be properly staffed; open environments promote collaboration; access times should align with student work schedules; providing user training is essential; and attention must be devoted to establishing a maker community on campus. Katona, Tello, et al. [10] put forward four best practices: flexible models for interdisciplinary faculty hiring and engagement, development of student entrepreneurs, integration of cross-campus curricula, and the development of cross-campus collaborations. Myers [12] encouraged interdisciplinary research and ideas, with the key elements being student-led engagement, access to the latest technology, and key partnerships. Bieraugel [2] found that an on-campus makerspace located outside the university library encouraged the most innovative behaviors and exploration of new ideas, and that within the library, collaboration rooms were the best spaces for encouraging creativity.
As for Chinese researchers, their focuses are: relationships between maker education and innovation education [8,25]; the essentials and functions of maker education [27–29]; the double helix model [24] and design-based learning [9,30]; augmented
3 Theoretical Roots
The innovation ecosystem is a newer and now mainstream concept discussed widely by scholars and practitioners. However, the innovation ecosystem is often described in different ways, and thus understanding it remains a challenge. Luoma-aho and Halonen [11] defined the innovation ecosystem as a permanent or temporary system of interaction and exchange among an ecology of various actors that enables the cross-pollination of ideas and facilitates innovation. Nordfors [13] noted that innovation ecosystems embody the technology and information flow among those needed to turn ideas into processes, products or services. Bulc [4] pointed out that an innovation ecosystem is a system made for innovation creation in an open, natural manner, which enables a holistic understanding of needs, solutions, and consequences related to innovation processes and innovation itself, and is an interaction between people, enterprises, and institutions. Ritala [15] held the view that an innovation ecosystem refers to clusters (physical or virtual) of innovation activities around specific themes (e.g., biotechnology, electronics and software). A complex innovation ecosystem is one where networks of innovations and communities of people and organizations interact to produce and use the innovations [22]. Taken together, an innovation ecosystem is an interactive work environment which enables actors from different disciplines to co-create new ideas, and then to construct prototypes and commercialize them in a process of ongoing innovation.
An innovation ecosystem has several important elements. The basic components of the ecosystem must be of high quality (universities, funding possibilities, specialized services, talent pool and regional dynamics) [7]. Bulc [4] held that the key elements can be put into four major groups: participants, tools, content, and principles. Haines [6] identified the following key ingredients of an innovation ecosystem: culture, champion(s), network, stakeholder engagement, process, physical space and events. Obviously, the innovation ecosystem includes funds, material resources (equipment or facilities), and innovative actors such as students, teachers, researchers, industry specialists, and venture capitalists. They participate in the ecosystem to enable technology development and all kinds of innovation.
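The element lists above can be restated as a small data structure. This sketch merely recompiles the cited lists (Bulc [4], Haines [6], and the actors named in this section) in code form; the grouping keys and the helper function are illustrative assumptions, not part of the cited works.

```python
# Illustrative sketch: innovation-ecosystem elements compiled from the text.
# Grouping keys and the helper below are assumptions for demonstration.
INNOVATION_ECOSYSTEM_ELEMENTS = {
    "major_groups": ["participants", "tools", "content", "principles"],  # Bulc [4]
    "key_ingredients": [  # Haines [6]
        "culture", "champion(s)", "network", "stakeholder engagement",
        "process", "physical space", "events",
    ],
    "actors": [
        "students", "teachers", "researchers",
        "industry specialists", "venture capitalists",
    ],
}

def missing_ingredients(present: set[str]) -> set[str]:
    """Return which of Haines's key ingredients [6] are absent from an ecosystem."""
    return set(INNOVATION_ECOSYSTEM_ELEMENTS["key_ingredients"]) - present
```

An ecosystem that has only a culture and a network, for instance, would still lack champions, stakeholder engagement, process, physical space and events.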
Creating an innovation ecosystem requires thinking about some basic principles: getting out of the box, core value, competences, dynamic structure, and constant change [4]. To produce innovation, a suitable innovation ecosystem must meet different conditions, such as natural, structural, organizational and cultural factors. Those factors can be grouped along the following dimensions: resources, governance, strategy and leadership, organizational culture, human
1256 Q. Zhan and M. Yang
(2) Innovators
In IEOM education, innovators with similar interests gather together to try unique combinations of multi-domain, multi-entity, multi-method and cross-linking innovations. They blur boundaries, continuously redefine, and constantly remodel their works. Innovators connect IEOM education to nature and life in order to design new works from a unique perspective, such as technology + nature + art, digitization + virtualization + audiovisual performances, and other new explorations.
(3) Researchers
Facing an unknown future, researchers use systematic thinking and scientific methods to probe the essence of things and the main contradictions. They capture new knowledge, generate new ideas, utilize new technologies, and explore new approaches. Rather than an unchanging model of reality, researchers study the possibilities of the future to help the development of IEOM education. It is important for researchers to migrate and remix technologies, which is an evolutionary path for new technologies. Similarly, the study of innovative activities can also stimulate researchers' creative thinking and technological innovation.
(4) Experts
Experts emphasize the professionalism of their subject knowledge and their
sensitivity to complex matters. Rather than solving problems by following fixed
steps, they focus on asking questions, mining data, clarifying relationships and
finding solutions. In IEOM education, experts transcend over-fitting
(self-perceived vertexes) through cross-border connectivity and cyclic learning.
At the same time, on the basis of strong information filtering, experts show
innovative wisdom in research on product updating, platform upgrading, and
high-value reuse of entities.
(1) Interest
Interest is the engine of makerspace development. Innovators gather in the
makerspace and generate creative ideas and innovative products in a personalized
and independent way, based on their aspirations, visions and interests. In IEOM
education, interest-based innovation activities do not pass a clearly bounded
body of knowledge or experience to the creators; rather, a creative enthusiasm
for innovation, social responsibility and other elements motivates makers.
Innovation activities emphasize cross-border connectivity, collaboration and
sharing to enhance the cohesion of makers' innovations. They highlight the
motives of exploration and innovation to encourage self-learning,
decision-making and creativity. Innovative activities show the powerful forces
of innovative self-development and co-creation through de-centralization and
peer organization. They offer high-yield innovations with substantial added
value.
A Model of Maker Education in Chinese Universities 1261
(2) Research
Research is the foundation of the development of IEOM education. Through the
exploration of unknown, undetermined, complex, chaotic and other advanced
technologies, and the transfer of key technologies, research achieves change and
technological breakthroughs. As for entities, products, and services, a
researcher can design flexibly, customize massively and develop agilely. The
aspects of design, construction, and testing may be updated or changed
iteratively.
(3) Practice
Makerspaces are central resource spaces offering various making practices in
different contexts. Practice is an accelerating power of innovation and the only
way to produce knowledge creation and innovation. Creative activity in practice
means that experts absorb, interact with, reflect on, criticize and recreate
knowledge and products through context-based collection, correlation and
integration. It stresses the unity of theory and practice, and shows the great
wisdom of innovation and creation. In IEOM education, cross-domain experts carry
innovative ideas and actions into the practice of product development. This
practice forms a circle of knowledge production and application transformation.
(4) Smart creation
In industry, smart creation means quickly and adaptively fabricating products
using automatic, networked and intelligent information technologies such as the
Internet of Things. In IEOM education, smart creation can concern structural
change, production optimization, efficiency enhancement and value adding. It
depends on the core skills, intellectual capital and other top factors of the
practitioners. Smart creation is the dexterous behavior by which individuals and
collectives in the creative network culture focus on tiny creativity, agile
product development, customized individual service, and the reshaping of
creative behavior. Smart creation demonstrates the interaction and collaboration
between makers by enhancing their dexterity and creative practice. With IEOM
education, it increases the possibility of creative exploration, scientific
attempts and innovative practice for makers.
7 Conclusion
IEOM education drives not only the development of the world but also the
progress of humanity. It transforms the external world while at the same time
shaping the inner world of mankind in a way that conforms to the characteristics
of the new era. IEOM education gathers individual intelligence into a powerful
force and creates a new situation of humanization, open sources, and
co-creation. It breaks the limitations of traditional innovation and closed
research, changes people's ideas and thoughts formed in the traditional
innovation environment, and promotes the development of the makerspace.
References
1. Barrett T, Pizzico M et al (2015) A review of university maker spaces. Georgia Institute of Technology
24. Yang G (2016) The construction of the double helix model of maker education. Mod Distance Educ Res 139:62–68 (in Chinese)
25. Yang G (2016) Maker education: the new path to the development of creative education in China. China Educ Technol 3:8–13 (in Chinese)
26. Yang L, Zhang L, Wang G (2016) Study on the system framework of maker education. Mod Distance Educ 3:28–33 (in Chinese)
27. Zhan Q, Yang M (2015) Research on maker education 2.0 and smart learning activities from the perspective of "internet plus". J Distance Educ 6:24–31 (in Chinese)
28. Zhang M, Liu X, Zhang C (2016) Connotation, function and reflection of maker education. Mod Educ Technol 26(2):14–19 (in Chinese)
29. Zhong B (2016) Discussion on background, essence, form, and support system of maker education. Mod Educ Technol 26(6):13–19 (in Chinese)
30. Zhu L, Hu X (2016) Research on maker education oriented design based learning: model and case. China Educ Technol 358:23–29 (in Chinese)
Pricing and Greening Policy for Manufacturer
with Low Carbon Strategic Customers
Wen Jiang(B)
1 Introduction
Global climate change is a serious threat to human survival and development.
Global warming is mainly caused by excessive emissions of greenhouse gases,
especially carbon dioxide [9], which makes carbon reduction a hot issue for
businesses and consumers [5,10]. In response to governments' carbon reduction
requirements and customers' low carbon preferences, manufacturers are
considering whether to invest in green technologies to reduce the carbon
emissions of their products. With the implementation of carbon footprinting, it
is becoming easier for customers to obtain information on carbon emissions and
to take this information into account in purchasing decisions. Therefore, it is
very meaningful to study the optimal decisions of a manufacturer facing
carbon-emissions-sensitive demand. Another phenomenon worthy
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 105
of study is the presence of strategic customer behavior [7]. With the
development of e-commerce and the application of dynamic pricing strategies,
customers are increasingly able to anticipate future price cuts and choose to
delay their purchases. Such "smart" customers are called strategic customers,
and customers who do not anticipate discounting are called myopic customers.
Strategic and myopic customers co-exist in real life. Firms that ignore the
presence of strategic customers can lose about 20% of their total profit [1].
Two research streams are closely related to this article and are reviewed here
to highlight our contributions. In the literature on models with
carbon-emissions-dependent demand or green technology investment, Bansal and
Gangopadhyay [2] found that consumers were willing to pay a higher price for
low-carbon products. Klingelhöfer [11] examined the effect of carbon emissions
trading on green technology investment, finding that carbon emissions trading
affected the investment but that this impact did not always provide positive
incentives. Zhao et al. [17] investigated how a manufacturer in a perfectly
competitive market chooses a technology when a green technology is available.
Yalabik and Fairchild [16] examined a manufacturer's optimal green technology
investment in environment-friendly products when consumer choice is taken into
consideration. Xia et al. [15] analyzed the interrelations among carbon emission
reduction, green technology investment choices and enterprise performance. These
studies provide a reference for this paper, but they all assumed that customers
are myopic, ignoring the fact that myopic and strategic customers co-exist in
real life. In addition, Su [14] examined the pricing model of a monopolist
selling finite inventory to heterogeneous strategic customers over a finite
horizon. Jerath et al. [6] investigated the impact of two sales approaches
(through an opaque intermediary versus last-minute sales directly to consumers)
on a service provider's profit when strategic customer behavior is considered.
Lai et al. [12] investigated the effect of a posterior price-matching policy on
customers' purchasing behavior and the seller's pricing decisions, showing that
strategic consumers' incentive to wait can be eliminated by this policy. Du et
al. [4] examined single-period joint inventory and pricing strategies with
strategic customers and risk preference. Prasad et al. [13] considered a mix of
myopic and strategic consumers and studied the choice between mixed bundling
pricing and reserved product pricing for a firm selling two products. Jiang and
Chen [8] studied the optimal decisions for a low carbon supply chain consisting
of one supplier constrained by a cap-and-trade policy and one retailer facing
homogeneous strategic customers. The literature above either does not consider
carbon emissions or assumes that customers are homogeneous.
To fill the gap identified above, this paper examines the manufacturer's pricing
and green technology investment policy when customers are carbon-emissions
sensitive and are a mix of myopic and strategic types. This paper aims to
address the following three questions:
• What is the optimal pricing policy for the manufacturer without green technology
investment? How does the presence of strategic customers affect the optimal pricing?
• What is the optimal pricing and greening policy for the manufacturer with green
technology investment? How does the presence of strategic customers affect the
optimal pricing and greening policies?
• What effect does green technology investment have on the optimal policies?
A strategic customer purchases in the first stage when
$$v - p - \eta e_0 > 0, \qquad v - p - \eta e_0 > \delta \left( v - s - \eta e_0 \right).$$
Then
$$0 < \frac{p - \delta s + (1-\delta)\eta e_0}{1-\delta} < v < v^f.$$
So the expected number of strategic customers who purchase in the first stage is
$$\theta N \int_{\frac{p-\delta s+(1-\delta)\eta e_0}{1-\delta}}^{v^f} f(v)\,dv
= \theta N \, \frac{v^f - \frac{p-\delta s}{1-\delta} - \eta e_0}{v^f}.$$
Similarly, the expected number of myopic customers who purchase in the first stage is
$$(1-\theta) N \int_{p+\eta e_0}^{v^f} f(v)\,dv
= (1-\theta) N \, \frac{v^f - p - \eta e_0}{v^f}.$$
(2) The second stage
At the second stage, the sales price falls to s. The remaining customers of each
type then respond to this price according to their own rules.
For myopic customers, the expected number who purchase in the second stage is
$$(1-\theta) N \int_{s+\eta e_0}^{p+\eta e_0} f(v)\,dv
= (1-\theta) N \, \frac{p-s}{v^f}.$$
For strategic customers, the product exits the market after the second stage, so
waiting no longer makes sense. As long as the valuation exceeds $s + \eta e_0$,
a strategic customer buys the product. Therefore, the expected number of
strategic customers who purchase in the second stage is
$$\theta N \int_{s+\eta e_0}^{\frac{p-\delta s+(1-\delta)\eta e_0}{1-\delta}} f(v)\,dv
= \theta N \, \frac{(p-s)/(1-\delta)}{v^f}.$$
Based on the above analysis, the number of customers who purchase products can
be obtained and listed in Table 1.
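As a numerical illustration of the two-stage demand split above, the sketch below evaluates the four expected purchase counts for valuations distributed uniformly on $[0, v^f]$. All parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch (illustrative parameters): expected purchase counts in the two-stage
# model with valuations v ~ Uniform(0, vf), so f(v) = 1/vf.

N, theta = 1000, 0.4             # market size, share of strategic customers
vf, p, s = 100.0, 60.0, 20.0     # max valuation, regular price, salvage price
eta, e0, delta = 0.5, 10.0, 0.3  # carbon sensitivity, unit emissions, patience

# Purchase thresholds on the valuation axis
t_myopic1 = p + eta * e0                                            # myopic, stage 1
t_strat1 = (p - delta * s + (1 - delta) * eta * e0) / (1 - delta)   # strategic, stage 1
t_stage2 = s + eta * e0                                             # both types, stage 2

# Closed-form expected counts, as in the text
myopic1 = (1 - theta) * N * (vf - t_myopic1) / vf
strat1 = theta * N * (vf - (p - delta * s) / (1 - delta) - eta * e0) / vf
myopic2 = (1 - theta) * N * (p - s) / vf
strat2 = theta * N * ((p - s) / (1 - delta)) / vf

# Cross-check: each count is N * (type share) * Prob(v falls in the interval)
assert abs(strat1 - theta * N * (vf - t_strat1) / vf) < 1e-9
assert abs(strat2 - theta * N * (t_strat1 - t_stage2) / vf) < 1e-9
print(round(myopic1, 1), round(strat1, 1), round(myopic2, 1), round(strat2, 1))
```

Note that the two strategic-customer intervals together cover exactly $[s+\eta e_0, v^f]$, so the two strategic counts sum to the total mass of strategic customers with valuation above $s+\eta e_0$.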
Proposition 1 shows that the manufacturer's optimal pricing policy without green
technology investment exists and is unique.
Proposition 2. $p_n$ is decreasing in $\theta$.

Proof. $\frac{dp_n}{d\theta} = \frac{dp_n}{dA}\frac{dA}{d\theta}
= -\frac{v^f - \eta e_0 - s}{2A^2}\cdot\frac{\delta}{1-\delta}$.
I have $v^f - \eta e_0 - s > 0$ and $\frac{\delta}{1-\delta} > 0$. Then I can
obtain $\frac{dp_n}{d\theta} < 0$, i.e. $p_n$ is decreasing in $\theta$. This completes the proof.
This conclusion is intuitive. θ represents the ratio of strategic customers.
When the number of strategic customers increases, the manufacturer reduces the
regular price in the first stage to induce more strategic customers to buy at
the regular price.
Proposition 3. $p_n$ is decreasing in $\delta$.

Proof. $\frac{dp_n}{d\delta} = \frac{dp_n}{dA}\frac{dA}{d\delta}
= -\frac{v^f - \eta e_0 - s}{2A^2}\cdot\frac{\theta}{(1-\delta)^2}$.
I have $v^f - \eta e_0 - s > 0$. Then I can obtain $\frac{dp_n}{d\delta} < 0$,
i.e. $p_n$ is decreasing in $\delta$. This completes the proof.
Proposition 3 shows that the manufacturer's optimal price decreases in consumer
patience (i.e. δ). More patient strategic customers lose less consumption value
by waiting and are more willing to buy the product in the second stage. To
induce strategic customers to buy early, the manufacturer has to reduce the
price.
Proposition 4. $p_n$ is decreasing in $\eta$.

Proof. $\frac{dp_n}{d\eta} = -\frac{e_0}{2A} < 0$, i.e. $p_n$ is decreasing in
$\eta$. This completes the proof.
Proposition 4 indicates that the manufacturer's optimal price decreases in the
carbon sensitivity degree (i.e. η). Without green technology investment, the
unit carbon emissions are fixed, and a higher η means customers are less willing
to buy the products. To stimulate more customers to buy, the manufacturer
reduces the optimal price in the first stage.
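The monotonicity claims of Propositions 2–4 can be spot-checked numerically. The closed form of $p_n$ is not reproduced in this excerpt; the form below, with $A = 1 + \theta\delta/(1-\delta)$, is an assumption consistent with the derivatives stated in the proofs. All parameter values are illustrative.

```python
# Spot-check of Propositions 2-4. Assumed (not from the paper's code):
#   A = 1 + theta*delta/(1-delta)
#   pn = (vf - eta*e0 + (2A - 1)*s) / (2A)
# These forms reproduce the derivatives dpn/dtheta, dpn/ddelta, dpn/deta
# stated in the proofs above. Parameter values are illustrative.

def A(theta, delta):
    return 1 + theta * delta / (1 - delta)

def pn(theta, delta, eta, vf=100.0, s=20.0, e0=10.0):
    a = A(theta, delta)
    return (vf - eta * e0 + (2 * a - 1) * s) / (2 * a)

base = dict(theta=0.4, delta=0.3, eta=0.5)
for key in ("theta", "delta", "eta"):           # bump each parameter upward
    bumped = dict(base, **{key: base[key] + 0.1})
    assert pn(**bumped) < pn(**base), key       # pn decreases in each parameter
print("pn is decreasing in theta, delta and eta at these parameters")
```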
$$\frac{\partial \pi_g(p,e)}{\partial p}
= \frac{N}{v^f}\left[ v^f - \frac{2(1-\delta+\theta\delta)}{1-\delta}\,p
- \eta e + \frac{1-\delta+2\theta\delta}{1-\delta}\,s \right], \qquad (4)$$
$$\frac{\partial^2 \pi_g(p,e)}{\partial p^2}
= -\frac{2(1-\delta+\theta\delta)}{1-\delta} < 0,$$
$$\frac{\partial \pi_g(p,e)}{\partial e}
= -\frac{N}{v^f}(p-s) + 2t(e_0-e), \qquad (5)$$
$$\frac{\partial^2 \pi_g(p,e)}{\partial e^2} = -2t < 0,$$
$$\frac{\partial^2 \pi_g(p,e)}{\partial p\,\partial e}
= \frac{\partial^2 \pi_g(p,e)}{\partial e\,\partial p} = -\eta.$$
I have $4tA - \eta^2 > 0$; then the determinant of the Hessian matrix satisfies
$$\begin{vmatrix}
\frac{\partial^2 \pi_g(p,e)}{\partial p^2} & \frac{\partial^2 \pi_g(p,e)}{\partial p\,\partial e} \\
\frac{\partial^2 \pi_g(p,e)}{\partial e\,\partial p} & \frac{\partial^2 \pi_g(p,e)}{\partial e^2}
\end{vmatrix} = 4tA - \eta^2 > 0.$$
Proposition 5. The optimal pricing and greening policy (denoted by $p_g$ and
$e_g$) of the manufacturer is
$$p_g = \frac{2tv^f\left(v^f - e_0\eta + (2A-1)s\right) - N\eta s}{4Atv^f - N\eta},
\qquad
e_g = \frac{4Ae_0tv^f - N\left(v^f - s\right)}{4Atv^f - N\eta}.$$

Proof. Let $\frac{\partial \pi_g(p,e)}{\partial p} = 0$ and
$\frac{\partial \pi_g(p,e)}{\partial e} = 0$. From equations (4) and (5), I
obtain the expressions for $p_g$ and $e_g$ above. This completes the proof.

Proposition 5 shows that the optimal pricing and greening policy for the
manufacturer with green technology investment exists and is unique.
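Under the assumption that the two first-order conditions (4) and (5) reduce to the linear forms $v^f - 2Ap - \eta e + (2A-1)s = 0$ and $-(N/v^f)(p-s) + 2t(e_0-e) = 0$ with $A = 1 + \theta\delta/(1-\delta)$ (an inference consistent with the stated second-order conditions, not taken verbatim from the paper), the closed forms of Proposition 5 can be verified numerically. Parameter values are illustrative.

```python
# Spot-check of Proposition 5 (illustrative parameters). Assumed FOC forms:
#   foc_p: vf - 2*A*pg - eta*eg + (2*A - 1)*s = 0
#   foc_e: -(N/vf)*(pg - s) + 2*t*(e0 - eg) = 0
# with A = 1 + theta*delta/(1-delta).

N, vf, s, e0 = 1000.0, 100.0, 20.0, 10.0
theta, delta, eta, t = 0.4, 0.3, 0.5, 50.0
A = 1 + theta * delta / (1 - delta)
assert 4 * t * A - eta**2 > 0                    # second-order condition holds

D = 4 * A * t * vf - N * eta                     # common denominator
pg = (2 * t * vf * (vf - e0 * eta + (2 * A - 1) * s) - N * eta * s) / D
eg = (4 * A * e0 * t * vf - N * (vf - s)) / D

foc_p = vf - 2 * A * pg - eta * eg + (2 * A - 1) * s
foc_e = -(N / vf) * (pg - s) + 2 * t * (e0 - eg)
assert abs(foc_p) < 1e-9 and abs(foc_e) < 1e-9   # closed forms satisfy the FOCs
print(round(pg, 3), round(eg, 3))
```

With these parameters the resulting unit emissions satisfy $0 < e_g < e_0$, i.e. the investment reduces emissions below the no-investment level.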
Proposition 6. $p_g$ is decreasing in $\theta$; $e_g$ is increasing in $\theta$.

Proof. From Proposition 5, I have the values of $p_g$ and $e_g$; then I can show
$$\frac{dp_g}{d\theta} = \frac{dp_g}{dA}\frac{dA}{d\theta}
= -\frac{8t^2 \left(v^f\right)^2 \left(v^f - \eta e_0 - s\right)}{\left(4Atv^f - N\eta\right)^2}
\cdot\frac{\delta}{1-\delta},$$
$$\frac{de_g}{d\theta} = \frac{de_g}{dA}\frac{dA}{d\theta}
= \frac{4Ntv^f \left(v^f - \eta e_0 - s\right)}{\left(4Atv^f - N\eta\right)^2}
\cdot\frac{\delta}{1-\delta}.$$
I have $v^f - \eta e_0 - s > 0$. Then I can obtain $\frac{dp_g}{d\theta} < 0$
and $\frac{de_g}{d\theta} > 0$. Therefore, $p_g$ is decreasing in $\theta$ and
$e_g$ is increasing in $\theta$. This completes the proof.
Proposition 6 shows that the higher the ratio of strategic customers, the lower the
manufacturer’s optimal price. This conclusion is in line with the scenario without green
technology investment.
Proposition 7. $p_g$ is decreasing in $\delta$; $e_g$ is increasing in $\delta$.

Proof. I can obtain
$$\frac{dp_g}{d\delta} = \frac{dp_g}{dA}\frac{dA}{d\delta}
= -\frac{8t^2 \left(v^f\right)^2 \left(v^f - \eta e_0 - s\right)}{\left(4Atv^f - N\eta\right)^2}
\cdot\frac{\theta}{(1-\delta)^2},$$
$$\frac{de_g}{d\delta} = \frac{de_g}{dA}\frac{dA}{d\delta}
= \frac{4Ntv^f \left(v^f - \eta e_0 - s\right)}{\left(4Atv^f - N\eta\right)^2}
\cdot\frac{\theta}{(1-\delta)^2}.$$
I have $v^f - \eta e_0 - s > 0$. Then I can obtain $\frac{dp_g}{d\delta} < 0$
and $\frac{de_g}{d\delta} > 0$. Therefore, $p_g$ is decreasing in $\delta$ and
$e_g$ is increasing in $\delta$. This completes the proof.
Proposition 7 indicates that the more patient the strategic customers are, the
lower the optimal price and the higher the optimal unit carbon emissions are
under green technology investment. Regarding the impact of δ on the optimal unit
carbon emissions, I find that more patient (more "strategic") customers lead the
manufacturer to invest less in carbon reduction.
Proposition 8. If $N > \frac{4Atv^f}{\eta}$, then $p_g$ is increasing in $\eta$
and $e_g$ is decreasing in $\eta$.

Proof. $N > \frac{4Atv^f}{\eta}$ implies $4Atv^f < N\eta$. Recall
$v^f - \eta e_0 - s > 0$, so
$$\frac{dp_g}{d\eta}
= \frac{2Ntv^f\left(v^f - s\right) - 8Ae_0t^2\left(v^f\right)^2}{\left(4Atv^f - N\eta\right)^2}
> \frac{2Ntv^f\left(v^f - \eta e_0 - s\right)}{\left(4Atv^f - N\eta\right)^2} > 0,$$
$$\frac{de_g}{d\eta}
= -\frac{N^2\left(v^f - s\right) - 4NAe_0tv^f}{\left(4Atv^f - N\eta\right)^2}
< -\frac{N^2\left(v^f - \eta e_0 - s\right)}{\left(4Atv^f - N\eta\right)^2} < 0.$$
Proposition 8 shows that, as the carbon sensitivity degree (i.e. η) increases,
the manufacturer will raise the regular sale price and invest more in green
technology to reduce carbon emissions when the number of customers exceeds a
certain threshold.
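The comparative statics of Propositions 6–8 can be checked numerically on the closed forms of Proposition 5, again assuming $A = 1 + \theta\delta/(1-\delta)$; all parameter values are illustrative.

```python
# Finite-difference check of Propositions 6-8 on Proposition 5's closed forms.
# A = 1 + theta*delta/(1-delta) is assumed; parameters are illustrative.

def policy(theta, delta, eta, N=1000.0, vf=100.0, s=20.0, e0=10.0, t=50.0):
    A = 1 + theta * delta / (1 - delta)
    D = 4 * A * t * vf - N * eta
    pg = (2 * t * vf * (vf - e0 * eta + (2 * A - 1) * s) - N * eta * s) / D
    eg = (4 * A * e0 * t * vf - N * (vf - s)) / D
    return pg, eg

r_base = policy(0.4, 0.3, 0.5)
r_theta = policy(0.5, 0.3, 0.5)   # higher share of strategic customers
assert r_theta[0] < r_base[0] and r_theta[1] > r_base[1]   # Proposition 6
r_delta = policy(0.4, 0.4, 0.5)   # more patient strategic customers
assert r_delta[0] < r_base[0] and r_delta[1] > r_base[1]   # Proposition 7

# Proposition 8 requires N > 4*A*t*vf/eta, so use a much larger market size.
# (The interior solution may leave the economically admissible range here;
# the check is purely on the signs of the derivatives.)
q0 = policy(0.4, 0.3, 0.5, N=5e6)
q1 = policy(0.4, 0.3, 0.6, N=5e6)
assert q1[0] > q0[0] and q1[1] < q0[1]   # pg increasing, eg decreasing in eta
print("Propositions 6-8 hold at these parameters")
```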
(1) There is a unique optimal pricing policy for the manufacturer without green
technology investment, and there is also a unique optimal pricing and greening
policy for the manufacturer with green technology investment. Therefore, by
deriving the optimal policies in the different scenarios, the manufacturer can
act appropriately on these findings to maximize its expected profit.
(2) I find that the optimal price without green technology investment is
decreasing in the ratio of strategic customers, the patience of strategic
customers and the carbon sensitivity degree. This means that the presence of
strategic customers is detrimental to the manufacturer, and the negative impact
grows as the ratio and patience of strategic customers increase.
(3) I find that the optimal price with green technology investment is also
decreasing in the ratio and patience of strategic customers and in the carbon
sensitivity degree; the optimal unit carbon emissions are increasing in the
ratio and patience of strategic customers but decreasing in the carbon
sensitivity degree.
(4) The optimal price with green technology investment is lower than that
without green technology investment when the number of customers exceeds a
certain threshold. This conclusion is interesting: the cost of the product
increases when the manufacturer invests in green technology, so experience
suggests the optimal price should rise. However, the manufacturer's optimal
price does not increase but falls instead. This conclusion can guide firms in
making decisions scientifically.
This paper formulates the model with only one manufacturer, but supply chains
are more common in real life, and relaxing this assumption will have a knock-on
effect on supply chain decisions. One key research direction is therefore to
consider a supply chain scenario in which the manufacturer decides the wholesale
price and green technology investment and the retailer decides the selling price
and order quantity. Second, this paper assumes that the manufacturer produces
only one product. While this simplified setting provides some interesting
insights, a manufacturer usually produces two or more products, which may
substitute for or complement each other. Assuming that the manufacturer produces
two or more products with different unit carbon emissions in production is thus
another important extension of this work.
References
1. Aviv Y, Pazgal A (2008) Optimal pricing of seasonal products in the presence of forward-looking consumers. Manufacturing Serv Oper Manag 3(10):339–359
2. Bansal S, Gangopadhyay S (2003) Tax/subsidy policies in the presence of environmentally aware consumers. J Environ Econ Manag 45(2):333–355
3. Cachon GP, Swinney R (2011) The value of fast fashion: quick response, enhanced design, and strategic consumer behavior. Manag Sci 4(57):778–795
4. Du J, Zhang J, Hua G (2015) Pricing and inventory management in the presence of strategic customers with risk preference and decreasing value. Int J Prod Econ 164:160–166
5. IPCC (2007) Climate change 2007: synthesis report. IPCC, pp 1–13
6. Jerath K, Netessine S, Veeraraghavan SK (2010) Revenue management with strategic customers: last-minute selling and opaque selling. Manage Sci 3(56):430–448
7. Jiang W, Chen X (2012) Manufacturers' production and pricing policy with carbon constraint and strategic customer behavior. Adv Inf Sci Serv Sci 23(4):231–238
8. Jiang W, Chen X (2015) Supply chain decisions and coordination with strategic customer behavior under cap-and-trade policy. Control Decis 31(3):477–485
9. Jiang W, Chen X (2016) Optimal strategies for low carbon supply chain with strategic customer behavior and green technology investment. Discrete Dyn Nat Soc, 1–13
10. Jiang W, Chen X (2016) Optimal strategies for manufacturer with strategic customer behavior under carbon emissions-sensitive random demand. Ind Manag Data Syst 4(116):759–776
11. Klingelhöfer HE (2009) Investments in EOP-technologies and emissions trading - results from a linear programming approach and sensitivity analysis. Eur J Oper Res 1(196):370–383
12. Lai G, Debo LG, Sycara K (2010) Buy now and match later: impact of posterior price matching on profit with strategic consumers. Manufacturing Serv Oper Manag 1(1):33–55
13. Prasad A, Venkatesh R, Mahajan V (2015) Product bundling or reserved product pricing? Price discrimination with myopic and strategic consumers. Int J Res Mark 32(1):1–8
14. Su X (2007) Intertemporal pricing with strategic customer behavior. Manage Sci 5(53):726–741
15. Xia D, Chen B, Zheng Z (2015) Relationships among circumstance pressure, green technology selection and firm performance. J Clean Prod 106:487–496
16. Yalabik B, Fairchild RJ (2011) Customer, regulatory, and competitive pressure as drivers of environmental innovation. Int J Prod Econ 2(131):519–527
17. Zhao J, Hobbs BF, Pang JS (2010) Long-run equilibrium modeling of emissions allowance allocation systems in electric power markets. Oper Res 3(58):529–548
Evolutionary Game Analysis of the Reservoir
Immigrants’ Joint Venture
Abstract. This paper combines previous results with the actual
situation of the immigrant area and proposes the main factors that
influence the joint venture, such as the technology level, the degree of
familiarity between the cooperating parties, and the costs and benefits
of cooperation and coordination. We establish an evolutionary game model
of the immigrants' joint venture and, using the payoff matrix and the
Jacobian matrix, analyze how these factors, along with the "free-rider"
punishment coefficient, the investment allocation coefficient and the
income distribution coefficient, affect the direction of the immigrants'
joint venture.
1 Introduction
As the time that reservoir immigrants have lived in their settlements increases,
so does the possibility of entrepreneurship. In Yongjing county, Linxia Hui
Autonomous Prefecture, Gansu province, many immigrants have started businesses
on the basis of their technical skills, for instance in tourism, fisheries and
greenhouse cultivation. Social employment problems still exist, and the
government vigorously encourages entrepreneurship. Many immigrants in Yongjing
county have started businesses, and choosing partners for a joint venture can
reduce the risk of entrepreneurship. Therefore, it is very necessary to study
the joint venture.
Yang [6] used a binary logistic regression method to study the factors
influencing immigrant entrepreneurs. Yang [7] re-examined the "bounded
rationality" of venture capitalists. Su [3] argued that the government's policy
of encouraging cooperation has not worked, suggesting that the government should
intensify policy support and increase investment. Chou [2] suggested that, when
cooperating, both firms need to contribute sufficient and complementary efforts
when choosing a cooperation partner. When entrepreneurs choose a partner, the
relational network is a key variable affecting the entrepreneurial decision [1].
Szolnoki [4] thought that a carefully chosen threshold for establishing a joint
venture could efficiently improve the level of cooperation. Wu et al. [5]
thought that the harder a joint venture is, the higher the level of cooperation
will be.
(1) This study involves two entrepreneurs: two individuals A and B in the same
industry, each with their own worker sources and experience. In their
cooperative game each has two choices: cooperation and noncooperation.
(2) When the cooperative game begins, both sides take their degree of
familiarity with each other into account; a person's credit affects whether
others choose to believe him. Both sides have some prior information about each
other, which represents their mutual understanding, but prior information has
limited credibility and some of it may be wrong. So we introduce a
prior-information reliability coefficient βi. That is, A has prior information
about B, denoted QB, and the credibility of this information is βA.
(3) Both players possess a certain technology, and the technical level is one of
the factors each side considers when choosing cooperation. Assume the technical
level is Ti. There also exists a certain degree of resource utilization while
working together; assume the resource utilization coefficient is αi.
(4) Assume the investment apportionment coefficients are a and 1 − a, and the
corresponding income distribution coefficients are b and 1 − b.
(5) When the "free-rider" phenomenon occurs, a punishment is imposed with
punishment coefficient d, established on the basis of the technical level.
According to the hypotheses and model variables, we can get the payoff matrix of
A and B, as shown in Table 1.
To facilitate the discussion, assume that the values of x and y are given.
Calculating the expected payoffs of A and B under cooperation and noncooperation
respectively, we obtain the replicator dynamic equations of A and B:
1278 X. Liu et al.
Table 1. Payoff matrix of A and B

                           B: Cooperation (y)                      B: Noncooperation (1 − y)
A: Cooperation (x)         bR + βA QB + αA TB − aC,                bR − aC − TA,
                           (1−b)R + βB QA + αB TA − (1−a)C         (1−b)R + αB TA − (1−a)C − dTA
A: Noncooperation (1−x)    bR + αA TB − aC − dTB,                  0, 0
                           (1−b)R − (1−a)C − TB
$$A:\ F(x) = \frac{dx}{dt} = x\left(U_{A1} - U_A\right)
= x(1-x)\left[ y\left(\beta_A Q_B + dT_B - bR + aC + T_A\right)
+ \left(bR - aC - T_A\right)\right],$$
$$B:\ F(y) = \frac{dy}{dt}
= y(1-y)\left\{ x\left[\beta_B Q_A + (1-a)C + dT_A + T_B - (1-b)R\right]
+ (1-b)R - (1-a)C - T_B\right\}.$$
To obtain the evolutionarily stable strategy of the game, we first need the
stable points of the replicator dynamic equations, which must satisfy two
conditions: (1) $F(x) = 0$ and $F(y) = 0$; (2) an evolutionarily stable strategy
must have some resistance to perturbations. From $F(x) = 0$ and $F(y) = 0$, we
can get five stable points: $O(0,0)$, $A(0,1)$, $B(1,0)$, $C(1,1)$ and
$D(X_D, Y_D)$.
The elements of the Jacobian matrix are
$$X_{11} = (1-2x)\left[y(M-W) + W\right], \qquad X_{12} = x(1-x)(M-W),$$
$$X_{21} = y(1-y)(E-F), \qquad X_{22} = (1-2y)\left[x(E-F) + F\right].$$
We use tr J and det J to denote the trace and determinant of the Jacobian
matrix. A stable point is an evolutionarily stable strategy (ESS) of the system
when it satisfies tr J < 0 and det J > 0, where
$$\mathrm{tr}\,J = X_{11} + X_{22}, \qquad
\det J = X_{11}X_{22} - X_{12}X_{21}.$$
Based on the contents mentioned above, we can get the local stability of the
equilibria under different parameters:
(1) When W < 0, F < 0, M < 0, E < 0, the evolutionary stable strategy of the
system is O(0, 0).
(2) When M < 0, E < 0, W > 0, F > 0, the evolutionary stable strategy of the
system is A(0, 1), B(1, 0).
(3) When W > 0, E > 0, M > 0, F > 0, the evolutionary stable strategy of the
system is C(1, 1).
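The prediction in case (3) can be illustrated by integrating the replicator dynamics directly: with all of W, E, M, F positive, trajectories should converge to the all-cooperate point C(1,1). The sketch below uses simple Euler integration with illustrative aggregate payoff values (W, M, E, F bundle the payoff-matrix terms as in the expressions for F(x) and F(y)).

```python
# Sketch: Euler integration of the replicator dynamics
#   F(x) = x(1-x) * [y*(M-W) + W],   F(y) = y(1-y) * [x*(E-F) + F],
# with W, M, E, F all positive (case (3) above). Values are illustrative.

def simulate(x, y, M, W, E, F, dt=0.01, steps=20000):
    for _ in range(steps):
        dx = x * (1 - x) * (y * (M - W) + W)
        dy = y * (1 - y) * (x * (E - F) + F)
        x = min(max(x + dt * dx, 0.0), 1.0)   # keep shares in [0, 1]
        y = min(max(y + dt * dy, 0.0), 1.0)
    return x, y

x, y = simulate(0.2, 0.3, M=2.0, W=0.5, E=1.5, F=0.4)
assert x > 0.999 and y > 0.999   # trajectory converges to C(1, 1)
print(round(x, 4), round(y, 4))
```

Changing the signs of W and F (so that noncooperation pays when the other side defects) moves the attractor away from C(1,1), matching cases (1) and (2).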
(1) Taking the first and second partial derivatives of $S_{ADBC}$ with respect
to $a$, we can get
$$\frac{\partial^2 S_{ADBC}}{\partial a^2}
= \frac{(E-F)CE^2}{(E-F)^4} + \frac{(M-W)CM^2}{(M-W)^4} > 0.$$
• If E > M, then as the investment apportionment coefficient gets greater, both
sides of the game tend toward noncooperation.
• If E < M, then as the investment apportionment coefficient gets greater, both
sides of the game tend toward cooperation.
(2) Taking the first and second partial derivatives of $S_{ADBC}$ with respect
to $b$, we can get
$$\frac{\partial^2 S_{ADBC}}{\partial b^2}
= \frac{(E-F)RE^2}{(E-F)^4} + \frac{(M-W)MR^2}{(M-W)^4}.$$
• If E > M, then as the income distribution coefficient gets greater, both sides
of the game tend toward cooperation.
• If E < M, then as the income distribution coefficient gets greater, both sides
of the game tend toward noncooperation.
(3) Taking the first partial derivative of $S_{ADBC}$ with respect to $C$, we
can get
$$\frac{\partial S_{ADBC}}{\partial C}
= \frac{1}{2}\left[\frac{(a-1)E}{(E-F)^2} + \frac{(1-a)M}{(M-W)^2}\right] > 0,$$
that is, when the coordination cost gets greater, the evolutionary game tends
toward noncooperation.
(4) Taking the first partial derivative of $S_{ADBC}$ with respect to $d$: when
the penalty coefficient gets greater, the evolutionary game tends toward
cooperation.
(5) Taking the first partial derivatives of $S_{ADBC}$ with respect to $\beta_A$
and $\beta_B$, we can conclude that the higher the degree of familiarity between
A and B, the greater their tendency to cooperate.
(6) Taking the first partial derivative of $S_{ADBC}$ with respect to
$\alpha_i$, the result is 0, so the ability to absorb and utilize resources in
the cooperation is not a main factor.
Table 3. Payoff matrix for the numerical example

                           B: Cooperation (y)     B: Noncooperation (1 − y)
A: Cooperation (x)         2254.2, 2253.7         2245, 2252
A: Noncooperation (1−x)    2251.8, 2244           0, 0
It is the best choice for A and B to cooperate with each other. When TA = 1 and
TB = 9, with the other values unchanged, the payoff matrix is as follows:
Table 4. Payoff matrix with TA = 1, TB = 9

                           B: Cooperation (y)     B: Noncooperation (1 − y)
A: Cooperation (x)         2256, 2250.9           2249, 2250.4
A: Noncooperation (1−x)    2252.7, 2241           0, 0
Comparing Table 3 with Table 4, it is not difficult to find that in Table 4 the
benefits for B of choosing cooperation or not are similar; thus, the smaller the
technology difference between the two sides of the game, the greater the
tendency to cooperate. The other results of the model analysis can easily be
verified with this method.
4 Conclusion
Based on the example of Yongjing county in Linxia Hui Autonomous Prefecture,
Gansu province, we identified the main factors that influence the joint venture
in this area, such as the technology level, the degree of familiarity between
the cooperating parties, and the costs and benefits of cooperation and
coordination. We also determined the direction in which each factor, together
with the "free-rider" punishment coefficient, the investment allocation
coefficient and the income distribution coefficient, influences the joint
venture.
(1) When the two sides of the game differ greatly in technical level, the side
with the higher technical level will not choose the joint venture.
(2) Most residents in the settlements came from different areas, and some of
them experienced two or more migrations. When they are familiar with each other,
they have a sense of security, and the higher the credibility of the
information, the more likely the choice of the joint venture.
(3) The higher the costs of cooperation, the greater the risk, and the
probability of a joint venture declines.
(4) The greater the benefits of the joint venture, the greater the probability
of cooperation.
(5) The bigger the "free-rider" punishment coefficient, the bigger the
probability of cooperation.
(6) When both the investment allocation coefficient and the income distribution
coefficient are 0.5, the probability of cooperation is largest.
References
1. Ardichvili A, Cardozo R, Ray S (2003) A theory of entrepreneurial opportunity identification and development. J Bus Ventur 18(1):105–123
2. Chou PB, Bandera C, Thomas E (2017) A behavioural game theory perspective on the collaboration between innovative and entrepreneurial firms. Int J Work Innovation 2(1):6–31
3. Su L, Yang Z et al (2013) The influence factors of green technology cooperation between enterprises, based on the perspective of supply chain. China Population, Resources and Environment, pp 149–154 (in Chinese)
4. Szolnoki A, Chen X (2016) Cooperation driven by success-driven group formation. Phys Rev E 94(4):042311
5. Wu T, Fu F, Zhang Y et al (2013) The increased risk of joint venture promotes social cooperation. PLoS One 8(6):1–10
6. Yang X, Wang C, Xiong Y (2015) The influence factors study of the Three Gorges reservoir immigrant entrepreneurship decision. The Rural Economy, pp 120–124 (in Chinese)
7. Yang Y (2016) Based on evolutionary game of joint venture formation mechanism research. J Technol Econ Manag Res 12:20–24 (in Chinese)
Solid Waste Management
in the Republic of Moldova
1 Introduction
Transforming waste management and recycling practices in Republic of Moldova
is one of the most important challenges we face of the next decade. Inad-
equate waste management affects our communities, threatens our environ-
ment, and contributes to the global emissions of greenhouse gases. Thus, solid
waste management and recycling is a local, national and international priority
[1,3,5,6,9,10,12]. Government will establish the legal and institutional frame-
work necessary to support the gradual alignment of our waste management prac-
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 107
1284 G. Duca and A. Mereuţa
[Figure: composition of household waste by category — food waste, paper & carton, plastic, textiles, leather, yard wastes, wood, other waste — for the years 1986–2016.]
[Figure: annual waste quantities by year, 2010–2025.]
A regional approach to waste management is essential for attracting the necessary investment, as well as for sustaining high maintenance costs by making implementation economically viable. It is unacceptable, and at the same time economically unjustified, to construct waste recovery or disposal facilities in every city without taking into account the rural areas of the district, including the specific waste generated there. The experience of neighboring countries shows that a waste recovery or disposal facility is financially sustainable only when it serves a territorial coverage of at least 200–300 thousand people. In this context, it is proposed to regionalize waste management by dividing the country's territory into 8 waste management regions. The basic criteria for regional planning are geographic position, economic development, the existence of access roads, soil and hydrogeological conditions, population size, etc. (Fig. 3).
The local public authorities are expected to establish waste management associations at the regional level, as recommended by the Ministry of Environment in its guidance on regional waste management planning. The role of the associations is to establish and approve the terms of reference for selecting the company that will manage waste in the region, as well as the tariffs for waste collection and disposal, etc.
These options will be subject to public debate in order to select the optimal alternative from the perspective of environmental protection and cost–benefit analysis, to encourage the creation of partnerships at the international, national and local levels, and to attract the investment needed to allow the sustainable development of the sector in accordance with priority needs and at a pace affordable for society.
For the Republic of Moldova, oenology is one of the key sectors of the national economy, so this sector requires the continuous implementation of new advances in science and technology. The country's wine industry has a grape processing capacity of up to 1 million tons, distributed over more than 100 companies. In recent years, the Moldovan wine industry has attracted increasing attention. As a result of the implementation of the State Program, the areas planted with vines have increased significantly, and the quantity of grapes processed has risen as well. However, it is known that about 20–25% of the processed grapes turn into secondary wine products. Calculating the amount of waste that results from processing 300 thousand tons of grapes, we get 60–75 thousand tons of products: the cake (marc), yeast, vinasse, wastewater, etc. All these products are a source of environmental pollution and create major environmental problems. Therefore, it is important to use the raw material (grapes) comprehensively, implementing technological processes with minimum waste, as well as processing secondary wine products in order to obtain a wide range of valuable products.
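The 60–75 thousand ton estimate follows directly from the stated shares; a quick check (illustrative Python only):

```python
# Reproducing the back-of-envelope figure from the text: processing
# 300 thousand tons of grapes, of which 20-25% become secondary wine
# products (marc, yeast, vinasse, wastewater, etc.).
grapes_t = 300_000            # tons of grapes processed
low, high = 0.20, 0.25        # share turning into secondary products

waste_low = grapes_t * low    # 60,000 t
waste_high = grapes_t * high  # 75,000 t
print(f"{waste_low:,.0f}-{waste_high:,.0f} tons of secondary products")
```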
Products of vinification are an important source of special natural products with specific properties that cannot be obtained synthetically. Products such as tartaric acid, seed oil, polyphenols, colorings, tannins, etc. have a wide field of use in various industries.
During the years 1980–1990, Moldova was the country that supplied raw tartar material (calcium tartrate, tirighia, calcareous sediments) to countries that produced tartaric acid directly (Ukraine (Odessa), Yerevan, Tbilisi, Italy, etc.). The prime cost of the product is quite high: 100 g of tartaric acid (99.9%) used in the pharmaceutical industry costs 80–100 $, while 1 kg of natural tartaric acid used in food costs 30–40 $.
Tartaric acid can be used in the food, wine, bakery, pharmaceutical, photochemical, chemical, textile, construction and electro-technical industries, yet its production in the country is missing, although the country holds large stocks of raw materials and could produce from 100 up to 350 tons of tartaric acid annually.
An effective stabilizer used in the wine industry is metatartaric acid, which inhibits the deposition of wine stone in wine and juice, does not change the taste or qualities of the wine or juice, and is not toxic because it is a modified tartaric acid. Compared with the cold-treatment method, stabilization of wines and juices with metatartaric acid has a much better economic effect. The local market is not supplied with this product, although it is in greater demand on the domestic market than tartaric acid.
Grape seeds represent a valuable waste product for our national economy. The seed content in one ton of grapes is 7% by weight. The process of obtaining and conditioning the seeds is technologically complicated. In the years 1982–1986, Moldova obtained up to 10,000 tons of seeds annually, which were transferred for processing to the city of Bender (oil mill), Odessa, Armavir (Krasnodar region), Tbilisi (Georgia) and Kokand (Uzbekistan). Some of the seeds were exported to France at a price of $400 per ton.
Currently, dried grape seeds and seed oil are in high demand nationally and internationally, and the country also has factories producing food-grade seed oil. Obtaining antimicrobial chemotherapeutic remedies, particularly from natural raw materials, is a task of current and lasting prospective importance. Of major importance is also obtaining substances with antifungal and antioxidative properties from plant raw materials. Research results on the activity of water-soluble enotannin allow the production of preparations with pronounced biological activity, useful in medicine and in veterinary practice for combating bacterial diseases, fungal infections, etc.
Currently, about 59 thousand tons of pomace are produced after grape processing. The seed content of the marc makes up 40–65%, so the country's grape processing plants can yield 20–25 thousand tons of seeds annually. Given that the enotannin content of grape seeds averages 8%, about 2–2.5 thousand tons of enotannin could be obtained each year. Prior research showed that the enotannin leaching yield is 65–70%. Thus, about 1.2–1.3 thousand tons of active ingredient may be obtained and processed into medicinal, veterinary, agricultural and other preparations.
Among the wastes formed in the vinification process is the marc that results from distilling alcohol from wine products. Worldwide, the most promising method of treating this concentrated waste is considered to be anaerobic digestion, which not only solves environmental challenges but also converts organic pollutants into biogas, which can in turn be used as an alternative source of heat and electricity. The data presented above demonstrate that Moldova must build a complex factory for processing secondary wine products. A sufficiently convincing argument for building such an enterprise is provided by the scientific data presented in the monograph [4].
Plastic materials represent about 10–12% of the household waste generated in the country. Not all categories of plastic materials can be recycled. Most plastic materials, such as PVC, degrade over several centuries, between 100 and 1000 years; we can therefore appreciate how great the pollution they produce is, being so persistent in time. The basic problem of plastics recycling lies in the variety of resins, which must be recycled separately; mixing them makes recovery impossible. The option most often considered optimal is the recovery of PET receptacles, separated from other plastic materials by the producers and consumers of soft drinks, which prevents mixing with other resins.
Beneficiaries dealing with plastics processing decide which categories of waste are accepted for collection through specialized points.
For these reasons, it is necessary to organize differentiated collection of plastics and to train potential generators of waste plastic materials, including the general public. As the amount of organic plastic materials in use increases, the problem of recycling becomes more and more topical. Given the particular composition of waste polymers, the most promising approach is processing plastics at low temperatures, which allows us to obtain polymer raw material (for example, for construction materials, ramps, pillars, etc.).
Plastic deformation of solid bodies usually leads not only to a change in shape but also to defects that change the physicochemical properties, including reactivity. The accumulation of defects is used in chemistry to accelerate reactions involving solid substances, to reduce process temperatures, and for other manipulations that increase chemical interaction in the solid phase. The absorption of mechanical energy initiates the destruction of polymers, polymorphic transformations, and other reactions. In order to impart mechanical energy to plastic materials, they are processed in extrusion reactors using crushers and conveyors. During such treatment, in addition to the shredding of the polymers, structural changes of the substance take place: many defects are formed and the material becomes reactive. When several types of polymers are processed simultaneously, chemical interactions and reactions occur between them. However, as in the case of thermally activated solid-phase reactions, the initiation of mechanochemical reactions requires transmitting a sufficient quantity of mechanical energy to the powder. This energy can be imparted through the crusher and the polymer screw reactor, whose screws act through impulses on the shredded polymer. The strength and the result (the degree of transformation after the chemical reaction) of such actions depend on the acceleration of the screw's movement within the reactor. Previously, the use of mechanical activation of substances in the production of new materials was limited by the lack of safely operating reactors. Currently, such reactors exist and can be used in secondary plastic processing, a process harmless to the environment [7]. Through a technology transfer project, a complex process for recycling waste plastics was implemented in the Republic of Moldova [8].
Plastic waste separated from the polyolefins (PE, PP) is introduced unwashed into the hammer mill, where it is comminuted to 50 mm; in parallel, the polyolefins are crushed separately, down to a fraction of 0.5 mm up to 10 mm (Figs. 4 and 5).
The comminuted waste, mixed in the proportion of 1/3 polyethylene and polypropylene to 3/4 plastics waste of unidentified composition, is loaded into the bunker through the screw charger at a speed of 0.2 m/s.
ardous waste from medical activity is 0.05 kg/patient/day for patients treated at home and, for hospitalized cases, an average of 0.44 kg/bed/day. Thus, the total production of hazardous waste was estimated at between 10 and 11 tons of waste per day, or 4000 tons per year.
Currently, the total amount of waste reported by medical institutions has
increased, accounting for about 8.7 million tons per year, including infectious
waste (syringes, infusion catheters, etc.).
In Chisinau, about 3 tons of waste from medical activity accumulate daily, or about 1095 tons annually. At the same time, 120 million syringes and 6 million infusion systems are used annually throughout the country.
According to the World Health Organization, the estimated operational cost of treating 1 kg of infectious waste varies between 0.13 and 2.2 US $, depending on the method applied. The lowest treatment cost is achieved by autoclaving (0.13–0.36 US $/kg).
In 2015, the Academy of Sciences of Moldova, together with the Ministry of Health and the Ministry of Environment, implemented a pilot project for the treatment of infectious medical waste, and an authorized treatment facility for infectious medical waste was developed (Figs. 8 and 9).
3 Conclusion
• Moldova has the resources and scientific potential to capitalize on secondary wine products.
• The creation of a pilot plant for the complex treatment of waste plastics served as an impetus for the development of the waste recovery field in Moldova.
• The collaboration of the Academy of Sciences with the private sector has led to an authorized point for treating infectious medical waste, which has partially resolved the pressing problem of neutralizing medical waste in Moldova.
References
1. Braunegg G, Bona R et al (2004) Solid waste management and plastic recycling in
Austria and Europe. Polymer-plastics technology and engineering 43(6):1755–1767
2. Statistica Chisinau (2010) Anuarul statistic al Republicii Moldova (in Romanian)
3. Damani N, Koolivand A et al (2013) Hospital waste generation and management
in some provinces of Iran. Toxicological Environ Chem 95(6):962–969
4. Duca G (2011) Produse vinicole secundare (in Romanian)
5. Ikeda Y (2016) Current home medical care waste collection status by nurse in
Japan. J Air Waste Manag Association
6. Kumar S, Mukherjee S, Chakrabarti T, Devotta S (2007) Hazardous waste manage-
ment system in india: an overview. Critical Rev Environ Sci Technol 38(1):43–71
7. Macaev F, Bujor S, Mereuţa A (2011) Reciclarea deşeurilor din mase plastice prin procedee mecanochimice (in Romanian)
8. Macaev F, Mereuţa A et al (2016) Procedeu de reciclare a deşeurilor de mase plastice. Brevet de invenţie MD 949 Z (in Romanian)
9. Sivapullaiah PV, Naveen BP, Sitharam TG (2016) Municipal solid waste landfills
construction and management—a few concerns. Int J Waste Resources 6(214):
10. So WMW, Cheng NYI, et al (2015) Learning about the types of plastic wastes:
effectiveness of inquiry learning strategies 44
11. Tugui T, Duca G, et al. (2013) Municipal solid waste composition study
12. Vyas P (2011) Municipal solid waste in India. J Ind Pollut Control 27(1):79–81
How to Popularize Green Residential Buildings
in China: A Survey Study from Sichuan
1 Introduction
China has been the world's largest carbon emitter and largest energy consumer since 2011. Building is one of the main contributors to energy consumption and pollution emissions; China's building energy consumption has surpassed that of industry and transportation, making buildings the largest source of energy consumption. To reduce energy consumption, China proposed the concept of green buildings in 2004 [7], which aim to save resources (including energy, land, water and materials) and reduce environmental impact. In the past five years, China's green building sector has entered a period of rapid development. Green building was first stated explicitly in China's Five-Year Plan for 2011–2015 [3]. In 2013, China's Green Building Action Plan was launched [5]. And in 2015, the new version of the "Assessment standard for green building" was formally
implemented [2]. All of this has brought positive results for the promotion of green building. More and more scholars have also paid attention to green building in China, addressing not only technology [4] but also cost [1] and management [8].
Although more and more people are paying attention to green building, most of them are architects, engineers, scholars, and others related to the construction industry. Green buildings are not yet popularized, since most ordinary people still do not understand their concept and value. In China, none of the evaluation standards for green buildings contain economic indexes. The new version of the "Assessment standard for green building" explicitly proposed that green building evaluation should take economic analysis into consideration, but it did not put economic analysis into the comprehensive evaluation system. Research on economic benefit analysis is also seldom found [6]. As a result, most Chinese people believe that green buildings (especially residential buildings) always cost more, so they do not want to purchase them. That is an important reason why green residential buildings cannot be popularized in China.
In order to fully understand ordinary people's attitudes toward green residential buildings, we carried out an investigation of 34 green residential buildings and 935 people in Sichuan in January 2016. The following sections present the investigation data and some statistical analysis in detail.
2 Survey Statement
This investigation mainly covers three aspects: (1) the public's familiarity with green residential buildings; (2) the incremental cost and incremental price of green residential buildings; and (3) people's acceptance of the incremental price of green residential buildings. The first part mainly surveys local residents' cognitive level and purchase intention regarding Sichuan's green buildings; the second part investigates the incremental cost and incremental price of green residential buildings in Sichuan; the third part studies public acceptance of the price of green residential buildings, and analyzes people's needs and purchase intentions in different demographic groups.
A questionnaire was administered to randomly chosen ordinary people. In this sampling survey, 1000 questionnaires were issued, of which 935 were recovered and 901 were valid; the effective recovery rate was 90.1%. The basic information of the respondents is shown in Table 1.
In addition, we also investigated all 34 green residential buildings in Sichuan, whose information is shown in Table 2.
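The reported 90.1% effective recovery rate corresponds to valid questionnaires divided by those issued, as this small check (illustrative Python) shows:

```python
# Questionnaire survey figures as reported in the text:
# 1000 issued, 935 recovered, 901 valid.
issued, recovered, valid = 1000, 935, 901

# "Effective recovery rate" = valid responses per questionnaire issued.
effective_rate = valid / issued * 100
print(f"effective recovery rate: {effective_rate:.1f}%")  # 90.1%
```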
3 Popularization Degree
3.1 Knowledge Level About Green Residential Buildings
In order to find out the popularized degree of green residential buildings in
Sichuan, three questions are designed: (1) Do you understand the conception of
1298 J. Gang et al.
Table 1. Basic information of the respondents (sampling; percentage)

  Sex:             Male 599 (66.48%); Female 302 (33.52%)
  Education:       High school or below 104 (11.55%); College graduate or bachelor 632 (70.14%); Bachelor or above 165 (18.31%)
  Profession:      Self-employed 101 (11.21%); White-collar 119 (13.43%); Ordinary worker 338 (37.51%); Student 273 (30.30%); Other 68 (7.55%)
  Monthly income:  <7000 RMB 417 (46.28%); 7000–14000 303 (33.63%); 14000–21000 99 (10.99%); >21000 82 (9.10%)
green residential buildings? (2) Do you understand the functions of green residential buildings? (3) Do you know the policies on green buildings? The statistical results of the three questions are shown in Figs. 1, 2 and 3:
[Fig. 1. Understanding of the concept of green residential buildings: understand completely 1%; understand 14%; understand a little 117 (57%); don't understand 58 (28%).]
[Fig. 2. Understanding of the functions of green residential buildings: understand completely 0.4%; understand 24%; understand a little 122 (60%); don't understand 32 (16%).]
[Fig. 3. Knowledge of green building policies: know a little 94 (46%); unknown 97 (48%).]
The survey results show that only about 1% of people know green buildings very clearly, and only 14% somewhat clearly; regarding the concept of green buildings, most people remain at the level of understanding "a little" or "a little more". Similar results are found for the functions and policies: 76% of people do not understand, or only understand a little about, the functions of green residential buildings, while 94% do not clearly understand the related policies. Moreover, during the investigation we found that some people believe "green buildings" means energy-saving buildings, while others think it means buildings with many greenbelts. On the whole, the popularization of green residential buildings is seriously insufficient, despite good policy support.
[Figure: perceived necessity of green residential buildings (response options: very unnecessary, unnecessary, necessary, very necessary; shares of 58.33% and 41.67% for the two main responses).]
[Figure: willingness to buy green residential buildings: willing 93.14%; unwilling 6.86%.]
of design and construction cost. However, cost is not the only factor affecting the selection of green buildings. Existing literature shows that the main factors in the popularization of green residential buildings include government funding, comfort levels, crowding effects, cost factors and so on. Specifically, the survey found that most people give priority to benefit and comfort (36.82%), followed by relevant government funding (29.1%); 22.89% of respondents are attracted by the benefit of diminishing costs in the operation phase. In addition, 10.2% of respondents would buy green buildings if their relatives and friends had bought them. In summary, the public has a high recognition of green residential buildings: they think it is necessary to vigorously promote and build them, and almost all are willing to buy this kind of building. This shows that green buildings have a stable audience and a huge potential market to be tapped. But how can the vitality of this market be activated and people's actual purchase desire be stimulated? The most important thing is to balance the relationship between construction cost and final benefit, that is, not only to let people realize the benefits brought by green residential buildings, but also not to raise the price too much.
[Figure: main factors in choosing green buildings: benefit and comfort 36.82%; government funding 29.10%; cost reduction in the use phase 22.89%; oriented by people around 10.20%.]
At the same time, we also compiled statistics on the price of one-star green residential buildings in Sichuan; the results are shown in Table 4. The average price of one-star green residential buildings in Sichuan Province is 7235 CNY/m², and the average incremental price is 284 CNY/m². Among them, the average price and average incremental price in the city of Chengdu are the highest, reaching 9345 CNY/m² and 385 CNY/m², respectively. Comparing the data in Tables 3 and 4, it can be found that the incremental price of green residential buildings in Sichuan is much higher than the incremental cost. The main reasons are as follows: (1) there is no unified statistical standard for the incremental cost of green buildings in Sichuan, so the reported incremental cost of some projects is seriously distorted; (2) the current statistics on the incremental costs of green buildings only contain one-time investment cost, without considering costs over the whole life cycle, such as those in the project planning, consulting, and operation stages. For example, although consulting fees must be incurred for all green building labeling programs, they are not counted in the reported incremental cost; (3) within the same real estate development, green buildings often occupy better natural resources than other buildings (such as location, orientation, greening, ventilation, etc.), and thus their prices are increased; and (4) housing prices usually retain a certain profit margin above housing costs, further widening the direct gap between price and cost.
[Fig. 7. Proportions of people willing to pay each incremental price range (Unit: CNY/m²): [0, 70) 29%; [70, 140) 44%; [140, 220) 18%; [220, 500) 7%; [500, +∞) 2%.]
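The acceptance shares in Fig. 7 can be aggregated to reproduce the figures quoted in the text; a small check (illustrative Python, using the percentages recovered from the chart):

```python
# Shares of respondents accepting each incremental price range (Fig. 7),
# interval labels in CNY/m^2, values in percent.
shares = {"[0,70)": 29, "[70,140)": 44, "[140,220)": 18, "[220,500)": 7, "[500,+)": 2}

below_140 = shares["[0,70)"] + shares["[70,140)"]          # willing to pay < 140
above_220 = shares["[220,500)"] + shares["[500,+)"]        # accepting > 220
print(below_140, above_220)  # 73 9 (the text reports 8.8% above 220 before rounding)
```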
In order to clarify the cost and efficiency of green residential buildings, a survey of the public's acceptance of the incremental price was carried out, using a question with five incremental price levels. The survey results are shown in Figs. 6–8. The results show that: (1) 73% of people are only willing to buy green residential buildings if the price increase is less than 140 CNY/m²; and (2) only 8.8% of people can accept a price increase of more than 220 CNY/m². Thus, except for Mianyang, the incremental prices of green residential buildings in Sichuan are slightly high. In Chengdu especially, the incremental prices of green buildings are much higher than the price acceptable to ordinary people. Therefore, along with enhancing the propaganda for green buildings, the price of green residential buildings should also be reduced appropriately.
[Figure: acceptable incremental price (ranges 0–70, 70–140, 140–220, 220–500, 500–1000 CNY/m²) broken down by age group (10–20, 20–30, 30–50, 50–70).]
5 Policy Proposals
In order to popularize green residential buildings, the following policy recommendations are put forward, based on the survey results and statistical analysis above.
From the above statistical analysis, it can be found that one of the main reasons hindering the development of green residential buildings in Sichuan is that ordinary people know too little about green buildings. The survey found that the vast majority of people are unclear about core issues such as "what is a green building", "what can it bring", and "whether it increases the cost or not", which leads to very low public acceptance of green buildings. At present, in China, only a small circle of people work on green buildings, and there are no specialized agencies or platforms to popularize and publicize them.
[Figure: acceptable incremental price (<70, 70–140, 140–220 CNY/m²) by monthly income group (<7000, 7000–14000, 14000–21000, >21000 RMB).]
By the end of 2015, 3036 projects had obtained green building certification in China, 56 of them in Sichuan. However, this information is only posted on a few government websites, including the official website of the Ministry of Housing and Urban-Rural Development of China and some local official websites. Moreover, these data are often published as public announcements, which are frequently too brief to be informative; the detailed project information is always put in an attachment, and there is no open database to store it. As a result, ordinary people have no channels to learn about and identify green buildings, which badly hinders the further development of green building. It is therefore suggested that the government establish an official database on green buildings and open it to the public.
some even have no statistics at all, which leads to unreasonable incremental cost figures. This makes it seriously difficult to correctly estimate the costs and benefits of green buildings, and hampers management and service as well. In this situation, it is recommended that a unified standard for the statistics of green buildings' incremental costs be drawn up. In addition, local governments should require all green building projects to publish their incremental cost data. Although this will increase costs slightly, it will play an important role in popularizing green buildings.
6 Conclusion
Although green residential buildings are the trend of future development, many people still do not know exactly what a green building is. A green building does not only mean vertical greening or a roof garden; it refers to maximizing energy saving, land saving, water saving, material saving, environmental protection and pollution reduction over the whole life cycle of the construction. It is also called sustainable construction, ecological construction, or energy-saving and environmental protection construction. Looking around the property market in Sichuan, green buildings are still very rare. Three main reasons have been found through the survey study in this paper. First, ordinary people know too little about green residential buildings. Second, the incremental prices of green residential buildings have exceeded buyers' psychological expectations. Third, the environmental and economic benefits for buyers are not widely accepted. Therefore, to popularize green residential buildings, it is necessary not only to improve comfort but also to strengthen scientific publicity and to control the incremental price.
Acknowledgement. This research was supported by the Special Funds for Building Energy Efficiency of Sichuan and by the System Science & Enterprise Development Research Center of Sichuan (No. Xq15C14).
References
1. Dwaikat LN, Ali KN (2015) Green buildings cost premium: a review of empirical
evidence. Energy Buildings, 110:396–403
2. GB/T (2014) Assessment standard for green building. China Building Industry Press, Beijing
3. (2011) The twelfth five-year plan for national economic and social development of
the People’s Republic of China. Chin Nurs Res 25(3):207–214
4. Olubunmi OA, Xia PB, Skitmore M (2016) Green building incentives: a review.
Renew Sustainable Energy Rev 59:1611–1621
5. The Ministry of Housing and Urban-Rural Development of the People’s Republic of
China (2013) Green building development plan. http://www.gov.cn/zwgk/2013-01/
06/content 305793.htm
6. Yang Y (2015) Study on economic cost benefit evaluation of green building technol-
ogy. Master’s thesis, Southwest Jiao Tong University, Chengdu (in Chinese)
7. Ye L, Cheng Z et al (2013) Overview on green building label in China. Renew Energy
53(9):220–229
8. Ye L, Cheng Z et al (2015) Developments of green building standards in China.
Renew Energy 73:115–122 (in Chinese)
Prognostics of Lithium-Ion Batteries Under
Uncertainty Using Multiple Capacity
Degradation Information
1 Introduction
With the advantages of high energy density, high galvanic potential, good low-temperature performance, and long lifetime, lithium-ion batteries play an increasingly important role in the energy supply of electronic systems, and have therefore been widely used in communications, aerospace avionics, portable devices and other industrial areas [5,8]. However, battery deterioration and failure commonly occur, and can lead to reduced system performance, increased costs and catastrophic failure [14]. Therefore, for electronic devices, prognostics and health management (PHM) has received increased attention in battery management systems, to determine the advent of system failure and to mitigate system
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 109
1308 F. Li and Y. Wang
risk through the evaluation of system reliability in terms of the current life-cycle
conditions [13,16].
In order to evaluate the current health situation, indicators such as the state-of-charge (SOC), the state-of-health (SOH) and the state-of-life (SOL) are commonly used to quantify the degradation of lithium-ion batteries [17]. To provide useful prognostic information for reliability monitoring in battery health management, current research has focused on SOC and SOH estimation for batteries. The main measurements required to effectively assess battery SOH are impedance and capacity [7]. In this study, the battery capacity percentage relative to the initial capacity is adopted to measure the SOH.
In recent years, there has been much valuable research on battery degradation modeling and prognostics for SOH estimation [9,15]. For example, neural networks and other artificial intelligence methods have been widely used to estimate battery SOH and predict the remaining useful life (RUL) [11]. Stochastic filtering approaches such as Kalman filtering [1], extended Kalman filtering [4,10], unscented filtering and Bayesian filtering [2] are widely adopted for battery SOH estimation, in which empirical degradation models are often used to build the dynamical system equation. Usually, stochastic filtering methods based on a parameter process or degradation model show good performance only when the degradation model represents the actual system behavior effectively. However, under practical battery conditions, such as varying operating conditions, environmental conditions and other complicated inherent system dynamics, it is difficult to obtain an accurate state process model or parameter description once uncertainties are considered. To address this issue, many approaches have been proposed, such as a prognostic algorithm based on a Bayesian Monte Carlo method and Dempster-Shafer theory for battery SOH estimation and RUL prediction [3]. Liu et al. [7] used Gaussian process regression (GPR) to perform SOH prediction for lithium-ion batteries, where the degradation trends are learned from battery data sets using a combination of Gaussian process functions. It should be noted, however, that uncertainty representation of the degradation model has not yet been fully investigated.
As mentioned above, it is urgent to develop battery prognostics under uncertainty. In this work, a novel approach to lithium-ion battery SOH estimation is presented through the integration of mixture of Gaussian processes (MGP) model learning and particle filtering. The proposed method consists of two phases: in the first, the MGP is used to learn the statistical properties of the degradation model parameters by combining training data sets from uncertain battery conditions; in the second, based on the parameter distribution information for the degradation process, particle filtering is exploited to obtain the battery SOH estimate. In the training phase, GPR is exploited to initialize the distribution parameters for each component. Then, the MGP learning and the particle filter (PF) updating are recursively implemented. Finally, a case example based on the NASA battery data sets is provided to show the performance of the new prognostics method. The contributions of this study can be summarized in two points, the first being that a fusion prognostics framework for lithium-ion battery SOH estimation is developed
2 Related Work
2.1 Gaussian Process Regression
The degradation model parameter vectors need to be treated as a dynamic process which captures the time-varying situations over the battery degradation cycles. To represent the system degradation behavior, the Gaussian process is considered. A stochastic process {g(x) : x ∈ X}, indexed by elements from some set X, is a Gaussian process with mean function m(x) and covariance function k(x, x′) if, for any finite set of elements x1, ..., xm ∈ X, the associated random variables g(x1), ..., g(xm) have a multivariate Gaussian distribution, i.e.,
$$
\begin{bmatrix} g(x_1) \\ \vdots \\ g(x_m) \end{bmatrix}
\sim \mathcal{N}\left(
\begin{bmatrix} m(x_1) \\ \vdots \\ m(x_m) \end{bmatrix},
\begin{bmatrix}
k(x_1, x_1) & \cdots & k(x_1, x_m) \\
\vdots & \ddots & \vdots \\
k(x_m, x_1) & \cdots & k(x_m, x_m)
\end{bmatrix}
\right).
$$
This is denoted by g(x) ∼ GP(m(x), k(x, x′)). The mean function and covariance function are defined as
m(x) = E[g(x)],
k(x, x′) = E[(g(x) − m(x))(g(x′) − m(x′))],
for any x, x′ ∈ X.
Gaussian processes represent distributions over functions, and they provide a method for modeling probability distributions under multiple corruptions in complicated or uncertain situations. When an accurate description of the dynamical parameter process is hard to obtain in advance, Gaussian process regression (GPR) can be exploited to approximate the distribution of the parameter process by learning from the available training data [6].
Consider a set of training data S = {(xi, yi)}, i = 1, ..., N; the relationship between input xi and output yi can be modeled by yi = g(xi) + εi, where εi is zero-mean Gaussian white noise with variance σn². In GPR, if the prior distribution over g(x) is assumed to be a Gaussian process, the posterior distribution over outputs conditioned on the sample set S and a test input x∗ is also Gaussian, with mean and variance given by

$$\bar{g}_* = \mathbb{E}[g_* \mid x_*, S] = k_*^{T} K^{-1} y, \qquad (1)$$
$$\operatorname{Cov}(g_*) = k(x_*, x_*) - k_*^{T} K^{-1} k_*, \qquad (2)$$

where K is the covariance (Gram) matrix over the training inputs, including the noise term, and k∗ is the vector of covariances between x∗ and the training inputs.
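As an illustration, Eqs. (1)–(2) can be computed directly once a covariance function is chosen. The sketch below uses a squared-exponential kernel as a stand-in, since the exponential covariance of Eqs. (3)–(4) falls outside this excerpt, and the data are synthetic:

```python
import numpy as np

def sq_exp_kernel(a, b, sigma_g=1.0, length=1.0):
    """Squared-exponential covariance k(x, x') (illustrative choice of kernel)."""
    d = a[:, None] - b[None, :]
    return sigma_g**2 * np.exp(-0.5 * (d / length) ** 2)

def gpr_posterior(x_train, y_train, x_star, sigma_n=0.1):
    """Posterior mean (Eq. 1) and variance (Eq. 2) at the test inputs x_star."""
    K = sq_exp_kernel(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))
    k_star = sq_exp_kernel(x_train, x_star)                # columns are k_*
    mean = k_star.T @ np.linalg.solve(K, y_train)          # k_*^T K^{-1} y
    cov = sq_exp_kernel(x_star, x_star) - k_star.T @ np.linalg.solve(K, k_star)
    return mean, np.diag(cov)

# Noisy samples of a smooth degradation-like trend (synthetic data)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 20)
y = np.exp(0.2 * x) + 0.01 * rng.standard_normal(20)
mu, var = gpr_posterior(x, y, np.array([2.5]))
```

Solving against K rather than forming K⁻¹ explicitly keeps the computation numerically stable.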
$$p(z(x) = k \mid x) = \frac{\pi_k\, \mathcal{N}(x \mid \mu_k, \Sigma_k)}{\sum_{i=1}^{K} \pi_i\, \mathcal{N}(x \mid \mu_i, \Sigma_i)}. \qquad (5)$$
Let p(y∗ | Sk, x∗) be the predictive pdf for the output variable y∗ under the condition that the new sample x∗ is obtained from the k-th component. Thus, the predictive pdf from the MGP model can be given as:

$$p(y_* \mid S, x_*) = \sum_{k=1}^{K} p(z(x_*) = k \mid x_*)\, p(y_* \mid S_k, x_*). \qquad (6)$$
$$\pi_k^{(t+1)} = \pi_k^{(t)} + \gamma\left(p(k \mid x^{t+1}) - \pi_k^{(t)}\right). \qquad (8)$$

Step 3. Update the distribution parameters:

$$\mu_k^{(t+1)} = \mu_k^{(t)} + \gamma\, \frac{p(k \mid x^{t+1})}{\pi_k^{(t)}} \left(x^{t+1} - \mu_k^{(t)}\right), \qquad (9)$$
$$\Sigma_k^{(t+1)} = \Sigma_k^{(t)} + \gamma\, \frac{p(k \mid x^{t+1})}{\pi_k^{(t)}} \left(\left(x^{t+1} - \mu_k^{(t)}\right)\left(x^{t+1} - \mu_k^{(t)}\right)^{T} - \Sigma_k^{(t)}\right), \qquad (10)$$
where γ is a control parameter, which can be set to 1/N, with N the number of acquired samples.
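Concretely, one recursive update of the mixture parameters (Eqs. (8)–(10), driven by the responsibilities of Eq. (5)) can be sketched as follows; the two-component initialization and the stream of samples are illustrative only:

```python
import numpy as np

def gaussian_pdf(x, mu, cov):
    """Multivariate normal density N(x | mu, cov)."""
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(cov, diff)
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))

def recursive_update(x, pi, mu, sigma, gamma):
    """One step of recursive mixture learning, Eqs. (5) and (8)-(10)."""
    K = len(pi)
    num = np.array([pi[k] * gaussian_pdf(x, mu[k], sigma[k]) for k in range(K)])
    resp = num / num.sum()                      # Eq. (5): p(k | x)
    for k in range(K):
        ratio = gamma * resp[k] / pi[k]         # uses pi^(t), as in Eqs. (9)-(10)
        diff = x - mu[k]
        pi[k] = pi[k] + gamma * (resp[k] - pi[k])                        # Eq. (8)
        mu[k] = mu[k] + ratio * diff                                     # Eq. (9)
        sigma[k] = sigma[k] + ratio * (np.outer(diff, diff) - sigma[k])  # Eq. (10)
    return pi, mu, sigma

# Two 1-D components; feed samples concentrated near the second component
pi = np.array([0.5, 0.5])
mu = [np.array([0.0]), np.array([5.0])]
sigma = [np.eye(1), np.eye(1)]
for x in np.full(20, 5.0):
    pi, mu, sigma = recursive_update(np.array([x]), pi, mu, sigma, gamma=0.05)
```

Because the responsibilities sum to one, the update of Eq. (8) keeps the mixing weights summing to one at every step.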
the next time instant can be expressed by adding the corresponding output to the current state, as the output associated with each input xi is set as the change of the parameters. Then, for each component, the mean ḡ_{Nk} and covariance cov(g_{Nk}) are determined through the GPR Eqs. (1)–(2) and denoted μk^(0) = ḡ_{Nk} and Σk^(0) = cov(g_{Nk}), which means that the initial mean and covariance are given by the Gaussian process components. Secondly, once the initial distribution parameters of the MGP are given, the degradation model parameter pdf can be represented by using the recursive algorithm for the MGP with the current training data. For the battery capacity data available from the current battery condition, denote the current training data D_T = {Δx^(t)}, t = 1, ..., T, as the parameter transitions, where Δx^(t) = x^(t+1) − x^(t), and let Θk^(t) = {p(k | Δx^(t)), πk^(t), μk^(t), Σk^(t)} be the current distribution parameters for each component; the pdf of the MGP can then be computed from the MGP parameter updates using Eqs. (7)–(10). After that, the importance sampling and resampling procedures are implemented following the standard particle filter algorithm. With the particles and associated weights, the battery capacity prediction SOH_{l+p}^[i] at p steps after the current cycle l can be computed by exploiting the degradation parameter samples x̃_l^[i] for the i-th trajectory. Meanwhile, the SOH prediction at cycle l can be estimated by combining the Ns samples of capacity measurements with their corresponding weights. The steps of the proposed method are summarized as follows.
Step 1. Initialization:
Given the trained data set S_L = {(xi, zi)}, i = 1, ..., L, and the trained parameter sets for the different components D_k = {(xi, yi)}, i = 1, ..., Nk, (k = 1, ..., K).
pdf with the mean GP_μ(x_l^[i], S_L) and the covariance GP_Σ(x_l^[i], S_L); then the importance sampling and resampling can be implemented:
Step 4.1 FOR i = 1, ..., Ns
    sample x_l^[i] ∼ x_{l−1}^[i] + MGP(Δx_{l−1}^[i]; Θ^(T)),
    ω_l^[i] ∝ ω_{l−1}^[i] N(z_l; GP_μ(x_l^[i], S_L), GP_Σ(x_l^[i], S_L)).
END FOR
Step 4.2 Calculate the normalized weights ω̂_l^[i] and the effective sample size N_eff.
Step 4.3 IF N_eff < N_th
    {x̃_l^[i], ω̃_l^[i]} (i = 1, ..., Ns) = resampling[{x_l^[i], ω̂_l^[i]} (i = 1, ..., Ns)].
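The loop of Steps 4.1–4.3 can be sketched as below. The mixture increment sampler and the Gaussian observation likelihood are simple stand-ins for MGP(·; Θ) and the GP-based likelihood, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mgp_increment(n, pi, mus, sds):
    """Stand-in for MGP(dx; Theta): draw increments from a 1-D Gaussian mixture."""
    comps = rng.choice(len(pi), size=n, p=pi)
    return rng.normal(np.take(mus, comps), np.take(sds, comps))

def pf_step(particles, weights, z, obs_sd, pi, mus, sds):
    """One cycle of Steps 4.1-4.3: propagate, reweight, normalize, resample."""
    n = len(particles)
    particles = particles + sample_mgp_increment(n, pi, mus, sds)  # Step 4.1: sample
    lik = np.exp(-0.5 * ((z - particles) / obs_sd) ** 2)           # Gaussian likelihood stand-in
    weights = weights * lik
    weights = weights / weights.sum()                              # Step 4.2: normalize
    n_eff = 1.0 / np.sum(weights ** 2)                             # effective sample size
    if n_eff < n / 2:                                              # Step 4.3: resample
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Toy usage: track a slowly fading capacity-like state over three cycles
particles = rng.normal(1.0, 0.05, 200)
weights = np.full(200, 1.0 / 200)
for z in [0.98, 0.96, 0.94]:
    particles, weights = pf_step(particles, weights, z, obs_sd=0.02,
                                 pi=[0.7, 0.3], mus=[-0.02, 0.0], sds=[0.01, 0.01])
estimate = np.sum(weights * particles)   # weighted SOH-like estimate
```

Resampling only when the effective sample size falls below a threshold limits particle impoverishment while keeping the weight variance under control.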
4 Case Example
4.1 Battery Data Set
In this section, a case study is conducted to validate the proposed approach. The battery capacity data used in this paper were collected by the NASA Ames Prognostics Center of Excellence (PCoE) [12]. The lithium-ion batteries were run through different operational profiles, such as charge, discharge and impedance measurement, at room temperature. The battery capacity at each cycle was adopted to measure the SOH, and the end-of-life criterion in the repeated charge and discharge cycles for the accelerated aging of the batteries was a 30% fade in rated capacity. Four batteries (No. 5, No. 6, No. 7 and No. 18) were discharged at a 2 A constant current level at an ambient temperature of 24 °C until the battery voltage fell to 2.7 V, 2.5 V, 2.2 V and 2.5 V, respectively.
To represent the empirical capacity degradation, the following conditional three-parameter capacity degradation model was considered, where the constant κ was set at 2:

$$\mathrm{SOH} = \tau \cdot \exp(\iota \cdot l) + \rho \cdot l^{\kappa}. \qquad (11)$$

Using the Matlab curve fitting toolbox, the real degradation and the fitted curve for battery No. 6 with the degradation model (11) are shown in Fig. 1. The empirical degradation model parameters are (1.0263, 0.0037, 0), estimated by curve fitting with the capacity samples from cycle 1 to cycle 100. It can be seen from Fig. 1 that the real SOH at the prediction cycles is higher than the capacity degradation estimate. The root mean squared error is 2.8919 and the adjusted R² is 0.947, which are the goodness-of-fit statistics of the empirical model.
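The curve-fitting step can be reproduced outside Matlab as well. The sketch below fits Eq. (11) with κ = 2 to synthetic capacity samples; the NASA measurements themselves are not reproduced here, so the recovered parameters are those of the synthetic curve, not the paper's (1.0263, 0.0037, 0):

```python
import numpy as np
from scipy.optimize import curve_fit

def capacity_model(l, tau, iota, rho):
    """Empirical degradation model of Eq. (11) with kappa fixed at 2."""
    return tau * np.exp(iota * l) + rho * l**2

# Synthetic SOH samples for cycles 1-100, standing in for a battery record
cycles = np.arange(1, 101, dtype=float)
rng = np.random.default_rng(0)
soh = capacity_model(cycles, 1.02, -0.004, 0.0) + rng.normal(0, 0.005, cycles.size)

params, _ = curve_fit(capacity_model, cycles, soh, p0=[1.0, -0.001, 0.0])
tau_hat, iota_hat, rho_hat = params
```

With a reasonable initial guess `p0`, nonlinear least squares recovers the generating parameters closely despite the measurement noise.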
For the different operating profiles, the data from battery No. 6 were taken as the capacity samples under the current condition, and the data from the other batteries, which represent different operating conditions, were treated as the historical training data. The training phase has two aspects. Firstly, with the historical training data sets for the different batteries, GPR was exploited to learn the initial distribution parameters, where the mean function was chosen as the linear function m(x) = ax + b and the covariance function was chosen as the exponential function expressed by Eqs. (3)–(4). The hyper-parameters Θ = [a, b, σn, σg] were therefore optimized by maximizing the log-likelihood function. Secondly, the mixture density of the MGP model was learned by exploiting the current battery training set with the distribution parameters obtained from the different components. After that, the obtained distribution can be treated as the importance density, and the particles and associated weights were updated using the importance sampling and resampling procedures of the standard particle filter algorithm.
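The hyper-parameter step can be illustrated by directly minimizing the negative log marginal likelihood. This sketch uses a squared-exponential covariance with a fixed unit length-scale as a stand-in for the exponential covariance of Eqs. (3)–(4), and synthetic data in place of the battery sets:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marglik(theta, x, y):
    """-log p(y | x, Theta) for a GP with linear mean a*x + b and SE covariance."""
    a, b, log_sn, log_sg = theta
    resid = y - (a * x + b)                       # subtract the linear mean
    d = x[:, None] - x[None, :]
    K = np.exp(2 * log_sg) * np.exp(-0.5 * d**2) \
        + (np.exp(2 * log_sn) + 1e-8) * np.eye(len(x))   # jitter for stability
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, resid))
    return 0.5 * resid @ alpha + np.log(np.diag(L)).sum() \
        + 0.5 * len(x) * np.log(2 * np.pi)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = 0.3 * x + 1.0 + 0.05 * rng.standard_normal(40)

theta0 = np.array([0.0, 0.0, np.log(0.1), np.log(1.0)])
res = minimize(neg_log_marglik, theta0, args=(x, y), method="Nelder-Mead")
```

Working with log σn and log σg keeps both scale parameters positive without explicit constraints.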
In this study, the number of particles was set at 200. For the current battery No. 6, the prediction began at cycle 100, which means the data from cycle 1 to cycle 50 were used as the training set for the MGP learning and the SOH measurements from cycle 51 to cycle 100 were used for the particle updating; the prediction results are shown in Fig. 2.
As shown in Fig. 2, the prediction combining three different training components can capture the degradation trend of SOH over most cycles. Therefore, from these results, the proposed method provides effective prediction by fusing the multiple training data sets which represent the uncertain capacity degradation conditions.
Fig. 2. Prediction results for battery No. 6 at cycle 100 with different training sets
from batteries No. 5, No. 7 and No. 18
Fig. 3. Prediction results based on GPR-PF and MGP-PF for battery No. 6
For battery No. 6, it can be seen from Fig. 3 that the SOH prediction based on MGP and PF performed better than the methods which only took the current capacity data into account. This indicates that combining the prior information on the capacity degradation process can improve the estimation performance in the current experiments. The reason may be that a distribution approximation which considers only the current battery measurements ignores useful fading information from the different degradation conditions. Therefore, the method which fuses knowledge from multiple capacity degradation conditions is more suitable for lithium-ion battery SOH estimation under uncertain battery degradation situations.
5 Conclusion
In this paper, a new model for battery SOH estimation is presented based on the MGP model, which is exploited to learn the distribution parameters from multiple training sets under different degradation conditions. To represent the density of the degradation model parameters under uncertain conditions, the initialized distribution parameters are updated recursively through MGP learning from the current battery capacity measurements. Based on the degradation model parameter distribution information, importance sampling and resampling were implemented within the particle filtering framework. Our method is based on distribution learning from training data and does not assume any particular state model for the degradation parameters, which is usually hard to obtain in advance. Therefore, the proposed approach can also be applied to other estimation problems with unknown state-space models under uncertain conditions.
Lithium-ion battery degradation under complicated conditions or working environments occurs in practice, and different battery conditions can cause differences in the degradation. Prognostics for battery SOH estimation under uncertain conditions face many challenges, as the different capacity degradation models with various degradation features cannot be obtained in advance. The uncertainties in the PHM of batteries are far more complex than the investigation in this paper, so further research on the implementation of fusion models under other uncertain conditions will be the focus of our future work.
References
1. Burgess WL (2009) Valve regulated lead acid battery float service life estimation using a Kalman filter. J Power Sources 191(1):16–21
2. Cui H, Miao Q et al (2012) Application of unscented particle filter in remaining useful life prediction of lithium-ion batteries 2012:1–6
3. He W, Williard N et al (2011) Prognostics of lithium-ion batteries based on Dempster-Shafer theory and the Bayesian Monte Carlo method. J Power Sources 196(23):10314–10321
4. Hu C, Youn BD, Chung J (2012) A multiscale framework with extended Kalman filter for lithium-ion battery SOC and capacity estimation. Appl Energy 92(4):694–704
5. Kim IS (2010) A technique for estimating the state of health of lithium batteries through a dual-sliding-mode observer. IEEE Trans Power Electron 25(4):1013–1022
6. Ko J, Fox D (2009) GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Auton Robots 27(1):75–90
7. Liu D, Pang J et al (2013) Prognostics for state of health estimation of lithium-ion batteries based on combination Gaussian process functional regression. Microelectron Reliab 53(6):832–839
8. Nishi Y (2001) Lithium ion secondary batteries; past 10 years and the future. J Power Sources 100(1–2):101–106
9. Piller S, Perrin M, Jossen A (2001) Methods for state-of-charge determination and their applications. J Power Sources 96(1):113–120
10. Plett GL (2004) Extended Kalman filtering for battery management systems of LiPB-based HEV battery packs: Part 3. State and parameter estimation. J Power Sources 134(2):277–292
11. Qian K, Zhou C et al (2010) Temperature effect on electric vehicle battery cycle life in vehicle-to-grid applications. In: CICED 2010 Proceedings, pp 1–6
12. Saha B, Goebel K (2007) Battery data set. NASA Ames prognostics data repository
13. Vichare NM, Pecht MG (2006) Prognostics and health management of electronics. IEEE Trans Compon Packag Technol 29(1):222–229
14. Wakihara M (2001) Recent developments in lithium ion batteries. Mater Sci Eng R Rep 33(4):109–134
15. Williard N, He W, Pecht M (2012) Model based battery management system for condition based maintenance. In: Proceedings of the MFPT
16. Xu L, Xu J (2014) Integrated system health management-based progressive diagnosis for space avionics. IEEE Trans Aerosp Electron Syst 50(2):1390–1402
17. Zhang J, Lee J (2011) A review on prognostics and health monitoring of Li-ion battery. J Power Sources 196(15):6007–6014
18. Zivkovic Z, van der Heijden F (2004) Recursive unsupervised learning of finite mixture models. IEEE Trans Pattern Anal Mach Intell 26(5):651–656
Analysis of the Effect of Low-Carbon Traffic
Behavior on Public Bicycles
1 Introduction
Although there are more and more convenient bus routes in daily life today, some back streets and new residential areas are not served by public transport, and residents there often need more than 10 minutes to reach a bus station. To solve this public "last mile" problem, community buses running around community entrances came into being in Chengdu in 2012. But because of the limitations of community buses, such as fixed lines and few stops, residents were not satisfied with them, so this traffic pattern could not solve the problem completely. As a replacement for community buses, public bicycles are more flexible and more popular.
More generally, the problems of traffic congestion and heavy CO2 emissions have become more and more serious, and people in China are increasingly worried about PM2.5 emissions and smog. Therefore, low-carbon traffic
has been becoming increasingly popular in many cities. The local authorities have been paying more and more attention to public transport, and cycling in daily mobility is considered an efficient means of reducing air pollution, traffic jams and carbon emissions. Public bicycle systems (PBS) have turned out to be effective in promoting riding in many cities, particularly when connected with public transportation.
Chengdu seems to be one of the most suitable cities for the development of shared bicycles in China due to its flat terrain, mild weather, limited strong sunshine and wind, as well as wide non-motor-vehicle lanes. In addition, it is densely populated, with about 238834 persons per square kilometer. Therefore, public bicycles effectively solve Chengdu's "last kilometer" problem.
Although public bicycle sharing systems exist in many cities in China, cycling has still not become a habit for most residents, and research on the influencing factors of PBS remains limited. One study summed up the impact of spatio-temporal interactions on bicycle sharing system demand [10]. Another found, using multinomial logistic regression models, that bicycle sharing stations and greenness were motivators for bicycle commuting, while public transport stations and elevation were deterrents [6]. Research on the public bicycle system in Beijing found that the bike-share choice is most sensitive to comfort, whereas user demographics do not strongly affect the bike-share choice, indicating the mode will draw users from across the social spectrum [4]. Firstly, this paper investigates the demographic factors related to the usage of public bicycles. Then, based on the theory of planned behavior and a binary logistic model, it explores the factors that may affect residents' public bicycle selection from the perspectives of attitude towards the behavior, subjective norm and perceived behavioral control.
2 Literature Review
Attitudes and the perception of an individual towards a particular mode influence the behavior of that individual [8]. Commonly, cycling requires physical effort from the cyclist. Public bicycles are more and more popular today, and norms are also important for their usage. So it is necessary to understand which factors make a positive impact on the usage of public bicycles.
As the history of public bicycle systems is not long, research on the theory and practice of PBS is still immature. Most of the research on residents' public bicycle selection has concentrated on the characteristics of public bicycle travel, the methods or theory of travel choice, and the factors affecting public bicycle selection.
Regarding the characteristics of public bicycle travel, a survey of American travelers found that the average travel distance of bicycle users is about 3.7 km; for male users it was 3.8 km and for female users 3.6 km [1]. Newer findings demonstrate that the mean journey length made by private bicycle is 700–800 m (0.44–0.5 miles) greater than that made by public bicycle [5].
1320 J. Ma et al.
Another survey found that the average distance traveled by bicycle for fitness was 14 km [9]. What's more, through the development of public bicycle planning in Hangzhou, Shi [16] pointed out that public bicycle facility construction, usage and operational mechanisms should be researched from the perspective of public bicycle facilities. In addition, a paper examining the tensions around a globally competitive city explained what roles public bicycles might play and how a public bicycle system business model might operate if it were to serve the transport disadvantaged [13].
Regarding the methods and theory of travel choice, based on information theory, a selection procedure has been proposed to select important variables in the logistic regression model, and the strong consistency of these procedures was proved [2]. Yang [17] built a structural equation model of residents' travel choice in the process of urbanization based on the trip chain. Sakari [12] put forward that a public bicycle system should be viewed as part of public transport rather than as a separate cycling scheme. Furthermore, one study developed a methodology for categorizing bicycling environments defined by the bicyclist's perceived level of safety and comfort [14]. In addition, another study developed an estimation method using a sinusoidal model to fit the typical pattern of seasonal bicycle demand [11].
Regarding the factors affecting public bicycle selection, it turned out that lower education and the absence of docking stations within walking distance were associated with a lower likelihood of awareness of public bicycle system programs [3]. One study explored the impact of specific weather conditions and calendar events on the spatio-temporal dynamics of a public bicycle sharing program employing novel spatial analytical techniques [7]. Besides, one study found that residents' travel choices are mainly driven by environmental conditions and personal travel habits [4]. Survey outcomes in India, analyzed with exploratory factor analysis, revealed that perceived benefits, physical barriers, safety hazards, social barriers and road conditions may be the major factor classes influencing bicycle mode choice [15].
The current literature review reveals a dearth of studies that comprehensively elicit the influence of demographics, attitude towards the behavior, subjective norm and perceived behavioral control on the usage of public bicycles. Accordingly, the current paper aims to identify the effects of those aspects. Moreover, the study also points out the determinants and correlates of public bicycle commuting.
3 Methodology
The factors identified in the literature as influential in public bicycle systems are numerous and diverse [15]. This study focuses on commuting trips in order to analyze the factors that influence people's use of public bicycles, based on the theory of planned behavior (TPB). In this paper, the factors have been divided according to whether they belong to "attitude towards the behavior", "subjective norm" or "perceived behavioral control", the three main variables that determine behavioral intentions. The more positive the attitude, the greater the support of important others, and the stronger the perceived behavioral control, the stronger the behavioral intention to use public bicycles, and vice versa. Personal, social and cultural factors (such as personality, intelligence, experience, age, gender, cultural background, etc.) indirectly affect attitude towards the behavior, subjective norms and perceived behavioral control by influencing behavioral beliefs, and ultimately affect behavioral intentions and behavior. Attitude towards the behavior, subjective norms and perceived behavioral control are conceptually distinct, but they may sometimes share a common basis of beliefs, so they are both independent and related.
We first investigated the demographic factors through a simple distribution analysis for bicycle-sharing and non-bicycle-sharing commuters. Relevance analysis (Pearson's chi-square test) was applied to find the relationships between demographic factors and whether or not people commute by public bicycle.
After that, we investigated the determinants of public bicycle commuting with a binary logistic regression model from the perspectives of attitude towards the behavior, subjective norms and perceived behavioral control.
4 Data Collection
The questionnaire includes four sections. The first section focuses on the demographic variables: the respondents were asked to report their gender, age, occupation, educational qualification and income. The second, third and fourth sections contain the questions about attitude towards the behavior, subjective norms and perceived behavioral control. Table 1 shows the characteristics of the questions.
The data for the study were collected through a face-to-face questionnaire survey on a sample drawn by random sampling. The survey was carried out at subway stations, universities, tourist attractions and business districts.
A total of 633 people took part in the survey, and 79% of respondents had used the public bicycles. Most of the respondents, from various income classes, use the public bicycle for transportation to school and to work. Before using the public bicycles, 73% of respondents used the subway or bus as their main mode, 60% of them walked, 26% rode a bicycle, and 34% used a car or taxi. In addition, most respondents used the public bicycles at schools and communities as well as at subway and bus stations.
5 Statistical Analysis
All data analyses were performed using SPSS. For the demographic data, we tested whether public bicycle usage is independent of each demographic characteristic using Pearson's chi-square test (significance level α = 0.05).
Attitudes towards the behavior that may affect the propensity to use public bicycles were evaluated using a binary logistic regression model (95% confidence). Participant characteristics were dichotomized before testing as potential confounders, including whether participants think it is cool, flexible, time-saving, environmentally friendly, entertaining, good for health, or not consistent with their identity. In addition, the potential confounders for subjective norm include the understanding of PBS, the registration fee, the unit price, the distance, the find-time, outward appearance, cleanliness, and whether the participants own a private bicycle or e-bicycle. The potential confounders for perceived behavioral control include whether cycling is laborious, and whether there are bicycle lanes, signals for cyclists, heavy traffic, good visibility and suitable weather. Final model fitness was checked with a (post-estimation) generalized Hosmer-Lemeshow goodness-of-fit test.
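For the independence checks, Pearson's chi-square test runs on the observed contingency table; the counts below are illustrative, not the survey's:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: rows = uses public bicycles (yes/no),
# columns = income band (<1500, 1500-3000, >3000); counts are illustrative only
table = np.array([[120, 150, 80],
                  [60, 70, 100]])

chi2, p, dof, expected = chi2_contingency(table)
related = p < 0.05   # reject independence at the alpha = 0.05 level
```

The test compares observed counts with the counts expected under independence; a p-value below α indicates that usage and the demographic variable are related.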
it explains that the use of public bicycles is related to occupation at the 99.5% confidence level.
For those whose income is less than 3000, the proportion of users is slightly higher than that of non-users, and for those whose income is more than 3000, the proportion of users is also higher than that of non-users. Among those who use the public bicycles, the groups earning less than 1500 and between 1500 and 3000 are significantly larger than the other income groups; these groups mainly prefer riding to driving in order to reduce costs. What's more, the Asymp. Sig. (2-sided) is equal to 0.005, which explains that the use of public bicycles is related to income at the 99.5% confidence level.
For each level of educational qualification, the proportions of users and non-users are roughly equal. The Asymp. Sig. (2-sided) is equal to 0.345, so there is only a slight connection between educational qualification and the use of public bicycles.
$$p = \frac{\exp(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}{1 + \exp(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)} = \frac{1}{1 + \exp(-\beta_0 - \beta_1 x_1 - \cdots - \beta_k x_k)}.$$
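As a sketch of how such a model is fitted, the logistic probability above can be estimated by maximum likelihood via gradient ascent; the predictors and responses here are synthetic stand-ins for the dichotomized survey items:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Fit beta for p = 1/(1 + exp(-(b0 + b1*x1 + ... + bk*xk))) by gradient ascent."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (y - p) / len(y)    # log-likelihood gradient step
    return beta

# Illustrative dichotomized predictors (e.g. "flexible", "environmentally friendly")
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 2)).astype(float)
logit = -0.6 + 0.8 * X[:, 0] + 0.7 * X[:, 1]
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

beta = fit_logistic(X, y)   # beta[0] = intercept, beta[1:] = slopes
```

The exponentials of the fitted slopes correspond to the odds ratios, which SPSS reports as Exp(B).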
Table 4 shows the factors contributing significantly to the choice of public bicycles with respect to attitude towards the behavior. Among the attitudinal characteristics, "environmentally friendly" and "flexible" turned out to be the prominent variables, with Exp(B) values of 2.626 and 2.190 respectively, compared with the others. In terms of significance, the significance values of "environmentally friendly" and "flexible" are lower than 0.05 and the significance of the constant is 0.008, so there are significant relationships between using public bicycles and being environmentally friendly as well as flexible. Therefore, the binary selection model can be written as follows.
Table 4. Binary selection model regression results summary (attitude towards the
behavior)
$$p = \frac{1}{1 + \exp(-0.619 - 0.816 x_1 - 0.737 x_2)}.$$

$$p = \frac{1}{1 + \exp(-1.019 - 0.816 x_1 - 0.574 x_2)}.$$
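Plugging values into these fitted equations gives the estimated probability of choosing public bicycles, for instance for a respondent scoring 1 on both dichotomized items (the label for the second model's predictors lies outside this excerpt):

```python
import math

def p_attitude(x1, x2):
    """First fitted model (attitude): p = 1/(1 + exp(-0.619 - 0.816*x1 - 0.737*x2))."""
    return 1.0 / (1.0 + math.exp(-0.619 - 0.816 * x1 - 0.737 * x2))

def p_second(x1, x2):
    """Second fitted model: p = 1/(1 + exp(-1.019 - 0.816*x1 - 0.574*x2))."""
    return 1.0 / (1.0 + math.exp(-1.019 - 0.816 * x1 - 0.574 * x2))

prob_a = p_attitude(1, 1)   # ≈ 0.898
prob_b = p_second(1, 1)     # ≈ 0.918
```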
7 Conclusion
This study investigated the demographic factors related to the usage of public bicycles, and also explored the factors that affect the choice of using public bicycles from the perspectives of attitude towards the behavior, subjective norm and perceived behavioral control according to the TPB. The following conclusions were obtained from the study.
For the demographic factors, the variables of gender, age, occupation and income are closely associated with the usage of public bicycles, while educational qualification has little effect on it.
As for attitude towards the behavior, people are more concerned about convenience and low carbon; accordingly, there are significant relationships between public bicycle usage and being environmentally friendly or flexible. From the perspective of subjective norm, people may care more about time-saving and comfort than about the appearance of the public bicycles, so they are more particular about find-time and cleanliness. What's more, for perceived behavioral control, people are more concerned about safety, and it is important for people to feel competent to cycle in a particular environment, so people have a more positive perception of cycling when there are bicycle lanes, signals for cyclists, or suitable weather.
An Empirical Study on the Impact of Niche
Overlap of Tourism Enterprise on Tourist
Satisfaction
1 Introduction
The tourism industry is still in an initial phase of development at present, and the overall efficiency with which tourism enterprises utilize resources is relatively low, which leads to serious product homogeneity and vicious competition that affect the tourist experience. Because the tourism industry is a complex business ecosystem integrating many industries with blurred boundaries, the problem of tourism enterprise competition cannot be explained if it is confined to traditional intra-industry competition theory; thus many scholars have begun to use related theories from natural ecosystems, such as niche theory, to research tourism phenomena [8].
Niche theory, as a basic theory of ecology, has been widely applied in research fields such as species diversity and interspecies relations. Niche refers
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_111
The innovation of this paper is to apply niche theory to tourism enterprises, to construct a measuring index system for the niche overlap degree of tourism enterprises, and to analyze the influence of the niche overlap degree on tourist satisfaction using SEM. This article is divided into the following parts: the first part introduces the research background and research progress; the second part puts forward the theoretical model and research hypotheses; the third part constructs the measuring index systems for tourism enterprise niche overlap degree and tourist satisfaction; the fourth part covers data collection and analysis based on the questionnaires and interview survey; the last part presents the research conclusions and prospects.
Scholars have identified niche factors from different perspectives. Hannan [6] and Baum [1] elaborated the identification of the niche from the aspects of enterprises' market resource demand and their ability to provide products. They regard the enterprise niche, as the intersection of enterprise resource demand and enterprise production ability, as depending on the position where the enterprise is located and what it does. Yan and Da proposed that an enterprise's niche is formed by the active interaction between the enterprise and other enterprises in specific circumstances [21]. Shan et al. indicated that the enterprise niche can be divided into three dimensions: human resources, tangible resources, and intangible resources [13].
The above identifications of the niche can be summarized in two dimensions: market resources and human resources. However, tourism enterprises are service organizations that satisfy tourists' experience and demand by providing products in certain time and space, and the elements of tourism time and space are essentially resources of tourism enterprises. On this basis, this paper proposes the following research hypothesis:
Hypothesis 1: The niche overlap of tourism enterprises consists of three dimensions: space-time resources, market resources, and productive resources.
Furthermore, in a business ecosystem, competition within populations not only affects their internal development but also impacts the health of other populations. As one of the core populations in the scenic business ecosystem, tourists' health conditions are certainly also affected by the relationships among tourism enterprises. The manifestation of tourist population health in the whole scenic business ecosystem is in fact the tourists' experience, which we measure with a tourist satisfaction index. Accordingly, this paper proposes the following research hypothesis:
Hypothesis 2: The niche overlap degree of tourism enterprises has a significantly negative influence on tourist satisfaction.
According to the above analysis, this paper proposes the theoretical study model and study hypotheses, as shown in Fig. 1.
3 Variable Measurement
3.1 Measurement of Enterprise Niche Overlap
(1) Space-time Resource Overlapping Degree
Baum argued that the enterprise niche, as the intersection of enterprise resource demand and enterprise production ability, depends on the position where the enterprise is located and what it does [1]. Yan and Da [21] pointed out that development opportunities and ecological strategies should be sought in temporal and social factors. Thus this paper designs related measurement items for niche overlap, as shown in Table 1.
Enterprises should avoid, to the greatest extent, overlap in the resources, abilities, and markets that populations need in order to exist, for niche overlap leads to competition among populations. Therefore, this paper also adopts the market resource dimension to measure enterprise niche overlap (Table 2).
The research adopts the mature SERVQUAL service quality scale in place of a simple tourist satisfaction evaluation. Proposed in 1990, the SERVQUAL model includes five dimensions, tangibles, reliability, responsiveness, assurance, and empathy, and measures the service quality gap through a questionnaire [2]. At present, the model is considered one of the most classic methods for appropriately measuring all kinds of service quality and is used in relevant studies across service industries [20]. In view of the particularities of different industries, the dimensions and indices of the SERVQUAL model must be amended before it is applied to the tourism industry [11]. Based on previous studies, this paper has fine-tuned the model and formed a tourist satisfaction evaluation system, as shown in Table 4.
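As background, a SERVQUAL-style evaluation scores each dimension by the gap between perceived and expected service. The following is a minimal sketch with hypothetical 7-point item scores; the actual amended items are those in Table 4:

```python
def servqual_gaps(expectations, perceptions):
    """Per-dimension SERVQUAL gap: mean perception minus mean expectation.
    Negative gaps indicate service falling short of expectations."""
    return {dim: sum(perceptions[dim]) / len(perceptions[dim])
                 - sum(expectations[dim]) / len(expectations[dim])
            for dim in expectations}

# Hypothetical Likert scores for two of the five dimensions.
expect = {"tangibles": [6, 6, 5], "reliability": [7, 6, 6]}
perceive = {"tangibles": [5, 6, 5], "reliability": [6, 6, 5]}
gaps = servqual_gaps(expect, perceive)
```

The more negative the gap, the further perceived service quality falls below expectation on that dimension.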
Based on the theoretical model and research hypotheses, the research adopts a one-to-one matched questionnaire method, chooses and designs measurement scales for each variable, and prepares and issues a preliminary survey questionnaire. Owing to indistinct tourism boundaries, it is difficult to define tourism enterprises; thus, the selection of research objects mainly focuses on enterprises around scenic spots whose main purpose is tourism service. The research issued 450 questionnaires. After removing 48 questionnaires with incomplete or obviously erroneous data, 402 valid questionnaires were retained, a valid response rate of 89.33%.
(Figure: confirmatory factor analysis of the niche overlap measurement model, showing standardized loadings of items NO11–NO14, NO21–NO25, and NO31–NO35 on the space-time, market, and productive overlap dimensions.)
(Figure: structural equation model estimates; the paths from the three niche overlap dimensions to tourist satisfaction are negative (−0.37, −0.07, and −0.11), with satisfaction measured by the indicators ST, SRT, SRS, SA, and SE.)
References
1. Baum JA, Mezias SJ (1992) Localized competition and organizational failure in
the Manhattan hotel industry, 1898–1990. Adm Sci Q 37(4):580–604
2. Carman JM (1990) Consumer perceptions of service quality: an assessment of the
SERVQUAL dimensions. J Retail 66:33–55
3. Chen CA (2016) How can Taiwan create a niche in Asia's cruise tourism industry?
Tour Manag 55:173–183
4. Colwell RK, Futuyma DJ (1971) On the measurement of niche breadth and overlap.
Ecology 52:567–576
5. Dobrev SD, Hannan MT (2001) Dynamics of niche width and resource partitioning.
Am J Sociol 106:1299–1337
6. Hannan MT, Freeman J (1993) Organizational ecology. Harvard University Press
7. Hou J, Lu Q, Shi YJ (2011) Study on the variation and survival factors in the
business evolution process based on organizational ecology. Ind Eng Eng Manag
(IEEM) 5:1922–1926
8. Huang F (2001) Eco-principles’ application in the optimum of tourist system. Ecol
Econ 11:19–20 (in Chinese)
9. Hutchinson GE (1957) Concluding remarks: population studies animal ecology and
demography. Cold Spring Harb Symp Quant Biol 22:415–427
10. Lew AA, Duval DT (2008) Long tail tourism: new geographies for marketing niche
tourism products. J Travel Tour Mark 25:409–419
11. Lin S, Ren P et al (2014) Research on evaluation of travel service quality for
Jiuzhaigou Valley on the basis of SERVQUAL. Wit Trans Inf Commun Technol 46:2157–
2162
12. Pitts BG (1999) Sports tourism and niche markets: identification and analysis of
the growing lesbian and gay sports tourism industry. J Vacat Mark 5:31–50
13. Shan M, Li G, Chen D (2006) Research on enterprises’ competitive strategies based
on ecology niche theory. Sci Sci Manag S & T 3:159–163 (in Chinese)
14. Stuart TE, Podolny JM (2007) Local search and the evolution of technological
capabilities. Strateg Manag J 17:21–38
15. Vanessa OM (2014) Favela tourism: a new niche to be developed by Brazil. In:
EuroCHRIE Conference "Hospitality and Tourism Futures", Dubai, 6–9 October
16. Wang G, Dong GZ, Zhao JL (2008) Research on competitive pattern of theme
parks based on ecological niche: take pearl river delta as an example. Tour Trib
12:45–51
17. Wang QR, Yu G (2008) A study on the measurement of the niche of regional
tourism cities and cooperation-competition model-a case study on the Pearl River
Delta region. Tour Trib 3:50–56 (in Chinese)
18. Wang ZF (2009) Research on the niche strategy of tourism industry cluster. Hum
Geogr:12–15
19. Xiang YP (2009) A study on the evaluation of the tourism niche-case of Zhangjiajie
and Tianmenshan national forest park. Issues For Econ 2:185–188 (in Chinese)
20. Xu M, Yu J (2001) Application of the SERVQUAL scale to measure service quality. Ind
Eng Manag 6:6–9 (in Chinese)
21. Yan A, Da Q (2005) Study of enterprise’ niche and its motile selection. J Southeast
Univ Philos Soc Sci Ed 7:62–67 (in Chinese)
Long Term Scheduling for Optimal Sizing
of Renewable Energy Sources for Hospitals
1 Introduction
Energy use has increased in recent decades due to population growth and rising per capita consumption. With the depletion of fossil fuel sources, the growing importance of environmental issues, and the need to decrease carbon emissions, the energy crisis is intensifying. In response, most countries intend to generate energy from alternative sources and change their energy portfolios. Renewable energy sources (RESs) can reduce greenhouse gas emissions and environmental pollution; they are therefore suitable alternatives to current generation methods, and several studies have focused on the optimal sizing of RESs under different constraints. Moradi et al. [5] considered renewable energy sources as a reliable source of electricity; in their paper, they used Net Present Value (NPV) as an economic factor to determine the type and capacity of sources. Sharafi and ElMekkawy [8] presented a multi-objective model for determining the optimal size of renewable energy sources. They used an ε-constraint
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_112
method to minimize total cost, fuel emissions, and unmet load. Atia and Yamada [2] presented a model for determining the size of wind turbines, batteries, and solar panels; the goal of their paper was to minimize costs over a three-year time horizon, considering energy balance constraints, battery constraints, an inverter model, and load model constraints. Abd-el-Motaleb and Bekdach [1] proposed a stochastic model for optimizing the size of wind turbines and energy storage units, considering energy balance, storage, and reliability constraints. Theo et al. [9] presented a linear optimization model for a hybrid power system that minimizes the net present value of costs, choosing wind turbines, solar panels, and batteries in an eco-industrial park as a case study.
Health centers and hospitals are among the major consumers of electricity, where energy must be generated continuously and without blackouts. Health care represents a significant cost for governments; however, thanks to energy development, health care costs can be reduced [12]. Health care systems are evolving rapidly, and the number of patients is increasing due to the emergence of new diseases and the aging of society. Consequently, the use of modern medical equipment and the growth in patient numbers cause a rise in energy use. The number of health centers is increasing, and they have evolved into a business industry as competition among them increases. In a competitive market, patients have to pay part of the costs, so health centers intend to decrease their costs as far as possible. Since hospitals account for 31% of health care costs, hospitals are considered in this paper. One way to reduce costs in hospitals is the installation of RESs for energy generation [11]. Many countries allocate incentives to motivate investors to use RESs; therefore, hospitals can use government subsidies to support renewable energy sources. In this paper, hospitals are considered as energy consumers. Although an efficient energy supply is essential in hospitals, there is little research on energy use in hospitals. Some scholars have researched heating and cooling systems in hospitals [3,6]. Mavrotas et al. [4] developed a bi-objective model for minimizing costs and maximizing demand satisfaction; the technologies considered in their model were a combined heat and power unit for providing power and heat, and an absorption unit and/or a compression unit for providing the cooling load. Other papers focusing on proper equipment selection in hospitals include [7,13].
The goal of this article is to determine the optimal sizing of RESs in order to minimize costs while conserving reliability. The paper is organized as follows. Section 2 proposes the problem definition and mathematical model. Section 3 presents the numerical example, analyzes the results, and discusses the sensitivity analysis. Finally, a conclusion is presented in Sect. 4.
2 Problem Definition
• The planning horizon is ten years, divided into 120 months.
• According to its geographic location, the hospital can invest in wind and solar energy.
• The hospital is connected to the main grid. The price of energy purchased from the grid varies across time slots.
• Extra energy generated by RESs is sold to the grid at a fixed price.
• Simultaneously buying energy from the grid and selling energy to the grid is not allowed.
• The investment cost of RESs varies across time slots, increasing or decreasing with the inflation rate.
• The interest rate and inflation rate are known and stable.
• A specific amount of money is considered as the budget for each time slot. Unused budget does not transfer to the next time slot.
• Since the hospital's goal is to focus on medical problems, the area allocated for installing RESs is limited.
• It is assumed that the renewable energy sources utilize a different fraction of their capacity in each time slot (βs, βw).
• In each time slot, a minimum amount of energy has to be purchased from the grid for use in emergency situations.
• To motivate hospitals to invest in renewable energy sources, the government pays a% of the sources' investment cost.
Parameters:
cap : Capacity of 1 kW of solar panels and wind turbines in a month;
βst : Utilization percentage of 1 kW of solar panels at time t;
βwt : Utilization percentage of 1 kW of wind turbines at time t;
f s : Required area for installing 1 kW solar panel (Square meters);
f w : Required area for installing 1 kW wind turbine (Square meters);
Fs : Total area for solar panels (Square meters);
Fw : Total area for wind turbines (Square meters);
dt : Energy demand of hospital at time t;
rt : Reliability at time t;
smax : Maximum amount of allowable energy which can be bought from the grid;
zmax : Maximum amount of allowable energy which can be sold to the grid;
ps : Investment cost of 1 kW solar panel;
pms : Maintenance cost of 1 kW solar panel;
pw : Investment cost of 1 kW wind turbine;
pmw : Maintenance cost of 1 kW wind turbine;
ct : Cost of buying energy from the grid at time t (cents);
w : Price of selling energy to the grid (cents);
It : Budget at time t;
ss : Energy bought for emergency situations;
μ : Interest rate in a month;
α : Inflation rate in a month;
a : Government subsidy.
Decision Variables:
xt : Installed capacity of solar panels at time t (kW);
yt : Installed capacity of wind turbines at time t (kW);
zt : Amount of energy bought from the grid at time t (kW);
st : Amount of energy sold to the grid at time t (kW);
ut : Binary variable (1 if energy is sold to the grid at time t).
The scheduling model determines the optimal sizing of RESs over the 10-year time horizon, as follows:
$$\min \sum_{t=1}^{T}\left[\frac{(p^{s}+p^{ms})(1-a)(1+\alpha)^{t}x_{t}}{(1+\mu)^{t}}+\frac{(p^{w}+p^{mw})(1-a)(1+\alpha)^{t}y_{t}}{(1+\mu)^{t}}+\frac{c_{t}(z_{t}+ss)}{(1+\mu)^{t}}-\frac{w\,s_{t}}{(1+\mu)^{t}}\right] \quad (1)$$
Subject to
$$\beta s_{t}\times cap\times\sum_{j=1}^{t}x_{j}+\beta w_{t}\times cap\times\sum_{j=1}^{t}y_{j}+z_{t}-s_{t}=d_{t},\quad\forall t \quad (2)$$

$$(p^{s}+p^{ms})(1+\alpha)^{t}x_{t}+(p^{w}+p^{mw})(1+\alpha)^{t}y_{t}+c_{t}(z_{t}+ss)-w\,s_{t}\le I_{t},\quad\forall t \quad (3)$$
1346 S.M. Vaziri et al.
$$\sum_{t=1}^{T}f^{s}x_{t}\le F^{s} \quad (4)$$

$$\sum_{t=1}^{T}f^{w}y_{t}\le F^{w} \quad (5)$$

$$s_{t}\le s^{max}u_{t},\quad\forall t \quad (6)$$

$$z_{t}\le z^{max}(1-u_{t}),\quad\forall t \quad (7)$$

$$\frac{d_{t}-\beta s_{t}\times cap\times\sum_{j=1}^{t}x_{j}-\beta w_{t}\times cap\times\sum_{j=1}^{t}y_{j}+s_{t}}{d_{t}}\le 1-r_{t},\quad\forall t \quad (8)$$

$$x_{t},y_{t}\ge 0,\ \text{integer},\quad\forall t \quad (9)$$

$$z_{t},s_{t}\ge 0,\quad\forall t \quad (10)$$

$$u_{t}\in\{0,1\},\quad\forall t. \quad (11)$$
The objective function minimizes the cost of supplying energy from renewable sources and the grid. The first two expressions calculate the investment and maintenance costs of the solar and wind energy sources, considering the interest rate and the inflation rate. The third expression is the cost of buying energy from the grid, and the last one expresses the hospital's income from selling energy to the grid.
Since supplying reliable energy is essential in hospitals, all demand must be met, as enforced by constraint (2). Constraint (3) represents the limitation of the hospital's budget in each time slot. The area limitations for installing solar panels and wind turbines are enforced by constraints (4) and (5), respectively. In each time slot, it is not possible to buy energy from the grid and sell energy to the grid simultaneously, which is guaranteed by constraints (6) and (7). Constraint (8) represents the renewable energy sources' reliability: the left-hand side is the fraction of demand purchased from the grid, which must not exceed 1 − rt. Constraints (9)–(11) indicate the variable types.
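The structure of the model can be sketched on a toy instance. The snippet below builds a two-slot, solar-only version (ss = 0, wind omitted) with illustrative parameter values, not the hospital's real data, and solves it with SciPy's open-source MILP interface as a stand-in for the CPLEX solver used later in the paper:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

T = 2                       # toy horizon: 2 monthly slots
gen = [30.0, 35.0]          # kWh per installed kW in slot t (beta_s,t * cap)
d = [300.0, 320.0]          # hospital demand (kWh)
cbuy = [0.2, 0.2]           # grid purchase price c_t
w = 0.1                     # selling price
P = 10.0                    # p_s + p_ms: investment plus maintenance per kW
a = 0.3                     # government subsidy share
alpha, mu = 0.02, 0.01      # monthly inflation and interest rates
I = [5000.0, 1000.0]        # budget I_t
fs, Fs = 5.0, 60.0          # area per kW and total available area
big = 1000.0                # s_max = z_max

n = 4 * T                   # variable vector: [x_t | z_t | s_t | u_t]
infl = [(1 + alpha) ** (t + 1) for t in range(T)]
disc = [(1 + mu) ** (t + 1) for t in range(T)]

# Objective (1): discounted subsidized investment + purchases - sales revenue.
c = np.zeros(n)
for t in range(T):
    c[t] = P * (1 - a) * infl[t] / disc[t]
    c[T + t] = cbuy[t] / disc[t]
    c[2 * T + t] = -w / disc[t]

A, lo, hi = [], [], []
for t in range(T):
    r = np.zeros(n); r[: t + 1] = gen[t]; r[T + t] = 1.0; r[2 * T + t] = -1.0
    A.append(r); lo.append(d[t]); hi.append(d[t])          # (2) energy balance
    r = np.zeros(n); r[t] = P * infl[t]; r[T + t] = cbuy[t]; r[2 * T + t] = -w
    A.append(r); lo.append(-np.inf); hi.append(I[t])       # (3) budget
    r = np.zeros(n); r[2 * T + t] = 1.0; r[3 * T + t] = -big
    A.append(r); lo.append(-np.inf); hi.append(0.0)        # (6) s_t <= s_max u_t
    r = np.zeros(n); r[T + t] = 1.0; r[3 * T + t] = big
    A.append(r); lo.append(-np.inf); hi.append(big)        # (7) z_t <= z_max(1-u_t)
r = np.zeros(n); r[:T] = fs
A.append(r); lo.append(-np.inf); hi.append(Fs)             # (4) area limit

integrality = np.zeros(n)
integrality[:T] = 1                                        # x_t integer (9)
integrality[3 * T :] = 1                                   # u_t binary (11)
bounds = Bounds(np.zeros(n), np.array([np.inf] * 3 * T + [1.0] * T))

res = milp(c, constraints=LinearConstraint(np.array(A), lo, hi),
           integrality=integrality, bounds=bounds)
installed_kw = res.x[:T].sum()                             # total solar capacity
```

In this toy instance the optimum installs 10 kW in the first slot: generation exactly covers demand in slot 1, the surplus in slot 2 is sold, and constraints (6)–(7) keep buying and selling mutually exclusive within a slot.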
3 Numerical Example
To show the performance of the model, it is solved for a hospital in Iran. The numerical example is carried out using IBM ILOG CPLEX Optimizer v12.3. As mentioned, the goal of this model is to find the optimal size of the renewable energy sources (RESs) installed at the hospital over the ten-year time horizon. To reduce forecasting errors, the years are divided into monthly time slots (120 time slots). The hospital's monthly demand and budget were estimated during meetings with its experts. Some of the data used in the model are shown in Table 1.
The results are shown in Fig. 1. It is evident that investment is higher in the first time slots, which is reasonable given the positive inflation rate. Moreover, investments in solar panels exceed those in wind turbines because solar panels are more productive at the aforementioned hospital and require lower investment costs.
The total solar panel capacity installed over the ten years is 56 kW, and the installed wind turbine capacity is 9 kW.
Sensitivity analysis is performed for the parameters that affect the model, such as the inflation rate, interest rate, demand, and reliability; the most important results are elaborated in this paper.
In the aforementioned hospital, a large portion of the budget is allocated to the first time slot because the investment decision is made in this time slot. It is then assumed that the budget is instead allocated to the hospital in the sixth time slot, or at the beginning of the second year (13th time slot), with its accrued interest. According to the results shown in Table 2, if the interest rate is between 15 and 20%, it is better to allocate the budget in the first time slot; if the interest rate is between 20 and 24%, it is better to allocate the budget in the 6th time slot.
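The trade-off behind this comparison can be illustrated with a toy calculation: a budget received later has accrued interest but faces inflated investment costs. The sketch below, with illustrative rates rather than the paper's data, compares the purchasing power of the same nominal budget allocated at different slots:

```python
def effective_budget(budget, mu, alpha, slot):
    """Purchasing power of a budget deferred to a given time slot: the money
    compounds at the monthly interest rate mu, while investment costs inflate
    at the monthly rate alpha."""
    months = slot - 1
    return budget * ((1 + mu) / (1 + alpha)) ** months

# If interest outpaces inflation, deferring the budget to slot 6 gains value;
# if inflation outpaces interest, allocating in the first slot is preferable.
defer_wins = effective_budget(1000.0, mu=0.018, alpha=0.012, slot=6)
defer_loses = effective_budget(1000.0, mu=0.010, alpha=0.012, slot=6)
```

This is only the core intuition; the paper's Table 2 results come from re-solving the full model under each budget-allocation scenario.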
4 Conclusion
In this paper, a long-term scheduling model was presented for determining the optimal sizing of renewable energy sources in hospitals. The model aims to minimize costs while maintaining reliability, subject to hospital-specific constraints such as area limitations and the budget constraint. Solving the model with real data from a hospital in Iran shows that it can reduce costs by 10% compared with purchasing all energy from the grid. In the presented model, the planning horizon was divided into monthly time slots, which introduces forecasting errors; further research could use daily or weekly data. Also, the price of the land required for installing wind turbines and solar panels was not considered in this model; future work can investigate the effect of land price on the model outputs.
References
1. Abd-El-Motaleb AM, Bekdach SK (2016) Optimal sizing of distributed generation
considering uncertainties in a hybrid power system. Int J Electr Power Energy Syst
82:179–188
2. Atia R, Yamada N (2016) Sizing and analysis of renewable energy and battery
systems in residential microgrids. IEEE Trans Smart Grid 7(3):1–10
3. Bizzarri G, Morini GL (2006) New technologies for an effective energy retrofit of
hospitals. Appl Therm Eng 26(2–3):161–169
4. Mavrotas G, Diakoulaki D et al (2008) A mathematical programming framework
for energy planning in services' sector buildings under uncertainty in load demand:
the case of a hospital in Athens. Energy Policy 36(7):2415–2429
5. Moradi MH, Eskandari M (2014) A hybrid method for simultaneous optimization
of DG capacity and operational strategy in microgrids considering uncertainty in
electricity price forecasting. Int J Electr Power Energy Syst 56(7):241–258
6. Pagliarini G, Corradi C, Rainieri S (2012) Hospital CHCP system optimization
assisted by TRNSYS building energy simulation tool. Appl Therm Eng 44(6):150–
158
7. Renedo C, Ortiz A et al (2006) Study of different cogeneration alternatives for a
Spanish hospital center. Energy Build 38(5):484–490
8. Sharafi M, Elmekkawy TY (2014) Multi-objective optimal design of hybrid
renewable energy systems using PSO-simulation based approach. Renew Energy
68:67–79
9. Theo WL, Lim JS et al (2016) An MILP model for cost-optimal planning of an
on-grid hybrid power system for an eco-industrial park. Energy 116:1423–1441
10. Thomaidis NS, Santos-Alamillos FJ et al (2016) Optimal management of wind and
solar energy resources. Comput Oper Res 66:284–291
11. Vaziri SM, Rezaee B, Monirian MA (2017) Bi-objective integer programming of
hospitals under dynamic electricity price. In: Proceedings of the tenth international
conference on management science and engineering management. Springer, pp 409–
417
12. Wang YC, Mcpherson K et al (2011) Health and economic burden of the projected
obesity trends in the USA and the UK. Lancet 378(9793):815–825
13. Yoshida S, Ito K, Yokoyama R (2007) Sensitivity analysis in structure optimization
of energy supply systems for a hospital. Energy Convers Manag 48(11):2836–2843
Life Cycle Assessment of Waste Mobile Phone
Recycling–A Case Study in China
1 Introduction
Electronic waste, also known as e-waste, has been one of the major contributors to the waste stream since the rapid growth of advanced technological products [6]. E-waste includes waste electrical and electronic products such as televisions, desktop computers, microwaves, and mobile phones. In recent years, economic and technological development in China has seen massive growth in mobile phone penetration and a consequent rise in mobile phone waste.
Currently, over 40 million tonnes of e-waste is generated annually worldwide, and the amount grows every year. Of all the kinds of e-waste, mobile phone waste has grown three times faster than total annual waste [19]. Current mobile phone usage data indicate that there were 7.2 billion active mobile phone subscriptions in 2016, over 1.305 billion of them in China, a penetration rate of 93.1%
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_113
1352 T. Liu et al.
Fig. 1. Global mobile-cellular subscriptions, total and per 100 inhabitants, 2005–2016.
Note: * Estimate
Fig. 2. China mobile-cellular subscriptions, total and per 100 inhabitants, 2005–2015
(see Figs. 1 and 2) [25]. About 77 million mobile phones are discarded every year in China, a figure the United Nations Environment Programme has predicted will be 7 times higher by 2020 [11].
The growing quantity of e-waste has begun to have a severe impact on the environment, especially in developing countries. As the world's leading manufacturing country, China has become an "electronic waste treatment plant", with informal mobile phone recycling having become a major source of toxic pollution [13]. With rising concerns about climate change and water and land pollution, mobile phone recycling waste has attracted increasing attention, forcing the mobile phone production industry to find solutions that decrease the environmental impact. In addition, various methods have been developed for mobile phone recycling to reduce the environmental damage. There has been a growing body of research into the recycling of waste mobile phones in China and its inherent problems: Wang et al. evaluated the potential yield of indium recycled from waste mobile phones in China [21], Yi-Bo et al. and several others have examined the recovery and recycling of old mobile phones in China [23], and other scholars have conducted related studies [4].
The main methods that have been applied to waste management analysis are data envelopment analysis (DEA) [2], the index decomposition analysis method (IDA) [12], and the structural decomposition analysis method (SDA) [1]. Life Cycle Assessment (LCA) was first used in 1969, when the Midwest Research Institute tracked and quantitatively analyzed the complete process of beverage containers from raw material extraction to final disposal [3]. As it developed, LCA was included in the ISO 14000 series of environmental management standards and has become an important support tool for international environmental management and product design [8]. LCA has been widely used in environmental assessments in areas such as food waste management [14], waste water [18], and plastic production, as well as in waste electrical and electronic equipment (WEEE) management [10].
As an environmental management tool, LCA has expanded the boundaries of research systems, as it encompasses the entire life cycle of resource use, energy consumption, and waste discharge, and provides an evaluation of the potential environmental impact at each stage. This paper assesses the recycling of waste mobile phones in China using LCA. To date, analyses of the environmental impact of recycling waste mobile phones have lacked a systematic and scientific focus and have tended to ignore the management of discarded mobile phones [15]. In this paper, a life cycle assessment of the discarded mobile phone recycling process is carried out quantitatively to identify the main factors influencing the environment and to provide decision support for the waste management of other small electronic devices.
The remainder of this paper is structured as follows. In Sect. 2, the current recycling process in China is outlined, a path analysis of abandoned mobile phones is conducted, and the latest recycling process is explained. Section 3 applies the LCA to the mobile phone recycling process and gives the results. Section 4 gives suggestions and recommendations based on the analyses.
China has seen rapid growth in e-waste streams over the past ten years [11],
with mobile phone waste growing most rapidly. Because no regimented recycling
system has been established in China, most waste mobile phones are kept, donated
to others or discarded with other household waste; the formal recovery rate has
been only about 1% [25]. The recycling of batteries and other waste mobile phone
parts has mainly been conducted through nonprofessional channels such as street
vendors and small electronics shops, or through recycling platforms on the
internet. After recovery, refurbished mobile phones are generally sold in
second-hand markets or sold at a low price in remote towns and villages. Mobile
phones that cannot be refurbished often end up in illegal dismantling workshops,
where workers extract the copper, gold, silver and other valuable metals using
processes such as acid leaching and open burning,
1354 T. Liu et al.
processes which not only damage the health of workers but also severely pollute
the surrounding environment.
From product to waste, a mobile phone can change hands hundreds of times;
it can be retained by the consumer, returned to a retailer, sold to a second-hand
shop or sent for recycling. The specific circulation paths are shown in Fig. 3.
[Fig. 3 depicts the circulation paths among the holder, retailer, second-hand shop, junk shop, dealer/trader, waste collection, recycling centre and recycling factory.]
This paper evaluates the pollution produced in the recycling phase from the
perspective of material flows using the standard life cycle assessment method,
examines the waste mobile phone recycling process in formal recycling facilities
in China as a practical case study (see Fig. 4) and identifies the factors that
adversely impact the environment in the recycling phase.
Briefly, the recycling process is as follows: after being transported to the
waste treatment plant, the mobile phones are manually disassembled and the
phone casings, printed circuit boards, lithium batteries, LCD screens and other
main components are extracted. Each of these components is then processed
separately. The ABS/PC plastic casings are crushed into plastic pellets and
reused as raw materials; the stainless steel, copper and other metals are melted,
with a further smelting step needed for the printed circuit boards and lithium-ion
batteries (with recovery of the electrolyte solvent) to recover the copper, gold,
silver, palladium, cobalt, lithium, tin and other metals. LCD displays are
generally incinerated to reduce environmental waste, and the remaining residues
are landfilled.
Two points should be noted about this process: (1) the life cycle of the mobile
phone covers each stage from production through use to disposal, including the
transportation of the various materials; however, transportation and the
landfilling of residues after disassembly are not taken into account; (2) only
scrap lithium-ion mobile phone batteries and mobile phone LCD displays are
considered.
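The disassembly step can be sketched as a simple mass balance over the functional unit; the component fractions below are invented for illustration and are not taken from the case study.

```python
# Illustrative mass balance for manual disassembly of waste mobile phones.
# The functional unit is 1000 kg of mixed handsets; the component shares
# below are assumed for illustration only, not measured case-study data.

FUNCTIONAL_UNIT_KG = 1000.0

# Assumed mass fractions of the main components after disassembly.
component_fractions = {
    "ABS/PC plastic casing": 0.40,
    "printed circuit board": 0.25,
    "lithium-ion battery":   0.20,
    "LCD screen":            0.10,
    "other residues":        0.05,
}

def disassembly_outputs(input_kg, fractions):
    """Split the input mass across components according to the fractions."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {name: input_kg * frac for name, frac in fractions.items()}

outputs = disassembly_outputs(FUNCTIONAL_UNIT_KG, component_fractions)
for name, kg in outputs.items():
    print(f"{name}: {kg:.0f} kg")
```

Such a balance makes it easy to check that the downstream streams (crushing, smelting, incineration, landfill) account for the entire reference flow.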
LCA has four interrelated steps, which we applied to the mobile phone
recycling process; their rough contents are shown in Fig. 5.
[Fig. 5 shows the four interrelated LCA steps; the interpretation step comprises analysis and suggestions.]
The study object in this paper is the abandoned mobile phone. The functional
unit is set to one tonne of phones of various brands and models, with the data
for each component set at an average value. The study scope for the system
is set to cover disassembly, waste collection, sorting, processing and final
disposal (Fig. 6), and an inventory analysis and impact assessment of the energy
and resource consumption and pollutant emissions at each stage are established.
Drawing on research data regarding domestic and foreign waste mobile phone
processing, combined with current Chinese waste recycling practice, the main
process inventory data related to the recycling process were determined (see
Table 1). The data in this paper were taken from the ecoinvent database [7];
as an overview and methodological framework for LCA, the ecoinvent database
is the most consistent and transparent life cycle inventory database in the
world [22].
Inventory analysis involves tracking the input and output flows of a product
system, including the materials, water and energy used and the waste released
to the air, land and water [17]. In the model developed in this paper, energy
and raw material consumption is taken into account, and GaBi, a powerful
life-cycle assessment tool, is used to generate the model from the inventory
data and assess sustainability solutions [24].
As shown in Fig. 7, the input for this model is 1000 kg of waste mobile phones.
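GaBi's model files cannot be reproduced here, but the core inventory arithmetic is a scaling of unit-process flows to the reference flow. A minimal stand-alone sketch, with made-up per-kilogram figures rather than ecoinvent data, might look like:

```python
# Minimal life cycle inventory (LCI) scaling sketch.
# Unit-process data (per kg of waste phone processed) are invented for
# illustration; real figures would come from the ecoinvent database.

unit_process = {
    "electricity_kWh": 0.8,   # assumed energy demand per kg
    "diesel_MJ":       0.3,   # assumed transport fuel per kg
    "CO2_kg":          0.5,   # assumed direct emission per kg
}

def scale_inventory(unit_flows, reference_flow_kg):
    """Scale per-kg input/output flows to the chosen functional unit."""
    return {flow: amount * reference_flow_kg
            for flow, amount in unit_flows.items()}

# Functional unit from the case study: 1000 kg of waste mobile phones.
inventory = scale_inventory(unit_process, 1000.0)
print(inventory)
```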
3.4 Interpretation
In this section, the evaluation of and improvements to the mobile phone
recycling process are discussed, after which, with the aim of reducing the burden
on the environment, suggestions are given to improve the current recycling mode
in China. In line with the goals and scope of the analysis, the recycling process
results were evaluated, from which conclusions were drawn and recommendations
developed.
The inventory results from the model focused on three main aspects. The
first was the recovery of resources and materials. Valuable materials, such as
copper, gold, and silver, which can be recycled and reused, were retrieved and
the other materials treated as waste. The second aspect was human health
concerns. Many related environmental issues, such as carcinogenic materials,
climate change, ozone depletion, radiation, and organic and inorganic respiratory
irritants, can cause various diseases, premature death and other non-normal
deaths, reducing life expectancy. This aspect was measured in units of
disability-adjusted life years (DALY), which combine two kinds of health loss,
disability and death, to determine the social impact of possible medical
conditions on population health; this indicator has also been adopted by the
World Bank and the World Health Organization. The third aspect was ecosystem
quality, which refers to the loss of ecological biodiversity within a given time
and space caused by environmental issues such as the acidification/eutrophication
of land and water bodies, toxicity, land-use changes [5], and air emissions [9].
The measurement units for this aspect were the potentially disappeared fraction
(PDF) m2·yr and the potentially affected fraction (PAF) m2·yr.
4 Discussion
The Chinese waste mobile phone recycling process was examined in this paper
using LCA as the evaluation tool. A model was proposed and three possible
impact factors (resources, human health, and ecosystem quality) were analyzed
to assess the environmental impact of the recycling process. In the model
building procedure, resource consumption and material emissions were found to
have a certain degree of impact on the environment, and further research could
extract specific data to analyze the effects of specific substances such as heavy
metals, CO2 and NO2. Because of the speed of mobile phone technological
development, the rate of mobile phone disposal is expected to increase, further
impacting the environment. If waste mobile phones can be effectively recycled,
there will be fewer environmental and health effects, improving both the economy
and residents' quality of life. This paper assessed the recycling process of waste
mobile phones in China and identified the main factors affecting the environment.
On the basis of this analysis, proposals were made to improve the eco-efficiency
of the recycling process. First, relevant policies should be introduced to
standardize the recycling behavior of citizens and the market. Second, the
recycling technology should be improved. Third, a more eco-efficient recycling
model is needed; the three main factors in this paper (resources, human health,
and ecosystem quality) will be taken into account in that model.
References
1. Ang BW, Xu XY, Su B (2015) Multi-country comparisons of energy performance:
the index decomposition analysis approach. Energy Econ 47:68–76
2. Barba-Gutiérrez Y, Adenso-Dı́az B, Lozano S (2009) Eco-efficiency of electric and
electronic appliances: a data envelopment analysis (DEA). Environ Model Assess
14(4):439–447
3. Beccali M, Cellura M, Finocchiaro P (2014) Life cycle assessment performance
comparison of small solar thermal cooling systems with conventional plants assisted
with photovoltaics. Energy Procedia 104(1):893–903
4. Bian J, Bai H, Li W (2016) Comparative environmental life cycle assessment of
waste mobile phone recycling in China. J Clean Prod 131:209–218
5. Bieda B (2008) The use of the life cycle assessment (LCA) conception for the
ArcelorMittal Steel Poland S.A. energy generating Kraków plant case study.
Wydawnictwo Politechniki Krakowskiej im. Tadeusza Kościuszki
6. Buekens A, Yang J (2014) Recycling of WEEE plastics: a review. J Mater Cycles
Waste Manag 16(3):415–434
7. Domain TE (2016) Life cycle inventory database. http://www.ecoinvent.org/
8. Guinee JB (2002) Handbook on life cycle assessment: operational guide to the ISO
standards. Int J Life Cycle Assess 7(5):311–313
9. Gurjar BR, Butler TM, Lawrence M (2008) Evaluation of emissions and air quality
in megacities. Atmos Environ 42(7):1593–1606
10. Harris R (2015) Life cycle assessment of post-consumer plastics production from
waste electrical and electronic equipment (WEEE) treatment residues in a Central
European plastics recycling plant. Sci Total Environ 529:158–167
11. Herat S, Agamuthu P (2012) E-waste: a problem or an opportunity review of issues,
challenges and solutions in Asian countries. Waste Manag Res J Int Solid Wastes
Public Clean Assoc Iswa 30(11):1113–1129
12. Korica P, Cirman A, Gotvajn A (2016) Decomposition analysis of the waste gen-
eration and management in 30 European countries. Waste Manag Res
13. Liu X, Tanaka M, Matsui Y (2006) Electrical and electronic waste management in
China: progress and the barriers to overcome. Waste Manag Res J Int Solid Wastes
Public Clean Assoc Iswa 24(1):92–101
14. Lundie S, Peters GM (2005) Life cycle assessment of food waste management
options. J Clean Prod 13(3):275–286
15. Middendorf A (2008) Assessment of toxicity potential of metallic elements in dis-
carded electronics. J Environ Sci 20(11):1403–1408
16. Navarro E, Baun A, Behra R (2008) Environmental behavior and ecotoxicity of
engineered nanoparticles to algae, plants, and fungi. Ecotoxicology 17(5):372–86
17. International Organization for Standardization (2006) Environmental management -
life cycle assessment - principles and framework. International Standard ISO
18. Pradel M, Aissani L, Villot J (2015) From waste to product: a paradigm shift in
LCA applied to wastewater sewage sludge. In: Setac Europe meeting
19. Ramesh BB, Parande AK, Ahmed BC (2007) Electrical and electronic waste: a
global environmental problem. Waste Manag Res J Int Solid Wastes Public Clean
Assoc Iswa 25(4):307–318
20. Russo G, Vivaldi GA, Gennaro BD (2015) Environmental sustainability of different
soil management techniques in a high-density olive orchard. J Clean Prod 107:498–
508
1 Introduction
China's industrial development has achieved a great deal in recent years, but
the past extensive industrial mode came at the cost of environmental pollution
and excessive resource consumption, making the problems of resource constraints
and environmental pressure ever more visible. Resource utilization has also
become an important factor in measuring the competitiveness of a country's
manufacturing industry, and green industrial development is the necessary path
to stronger international competitiveness. To change the industrial mode of high
input, high consumption and high pollution, and to promote regional industrial
economic development, the outline of the 13th Five-Year Plan for national
economic and social development put forward higher requirements for green
industrial development.
Sichuan province is the most important economic area of western China; at
what level its industrial green development stands, and whether its industrial
green efficiency has improved, are questions whose deep study is of great
theoretical and practical significance. Based on this urgent need for green
development, this paper takes prefecture-level cities as its research objects,
uses the DEA method to dynamically evaluate the green efficiency of industrial
inputs and outputs, and proposes countermeasures and suggestions to improve
the industrial green efficiency of Sichuan province.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 114
1362 X. Liu and X. Jie
2 Literature Review
Industrial green development is the inner requirement and inevitable choice of
new industrialization. The United Nations Industrial Development Organization
(UNIDO) defines green development as a sustainable mode of industrial production
and consumption that uses energy resources effectively, with low carbon emissions
and low waste, while expanding during industrialization to eliminate poverty and
create employment opportunities. This paper considers industrial green
development to be a development mode that, guided by the theories of sustainable
development and the circular economy, achieves high utilization of energy
resources, low environmental disruption and high comprehensive benefits through
controls on green design, green production, green management and other aspects.
At present, foreign scholars have focused on the connotation of industrial
green development and the evaluation of its efficiency. Eiadat et al. [2] and
Nagesha [11] used the number of green process patents and carbon dioxide
emissions to assess industrial green efficiency, while Honma and Hu [6] used a
stochastic frontier model to evaluate it. Other scholars have studied the factors
influencing green development in specific industries: Vasauskaite et al. [15]
showed that R&D investment, energy-saving technology and energy management
systems can promote the sustainable development of the Lithuanian furniture
industry. Fleiter et al. [4] evaluated 17 process technologies in the German pulp
and paper industry and concluded that innovation in paper mills and in
paper-drying technologies with heat recovery systems is the most influential
technology for energy conservation and green development. Mukherjee et al. [9]
proposed that computer-aided process design can optimize energy and material
flows to minimize the impact of harmful emissions. Kanagaraj et al. [10] reviewed
different cleaner technological methods for reducing the generation of solid and
liquid wastes in the leather industry.
Among domestic scholars, Chen et al. [1] studied the factors influencing
industrial green development. Wang and Huang [16], using five indexes covering
industrial and social development, recycling, resource reduction, pollution
reduction, and resource and environmental security, applied the AHP method to
evaluate the development of the industrial circular economy in Jiangsu province
from 1985 to 2003; they also introduced the concept of "obstacle degree" to
quantitatively diagnose obstacles at different stages of that development. Tu
and Xiao [14] established an environmental frontier function model using a
directional distance function and calculated the green total factor productivity
of industrial enterprises above designated size, accounting for energy
consumption and environmental pollution, in 30 provinces of China over
1998–2005; the pollutants considered included emissions of industrial sulfur
dioxide. They found that industrial development in the eastern coastal areas was
more harmonious with the environment, while environmental technology efficiency
was generally low in the central and western regions. Mei et al. [8] established
an evaluation index system for the green development efficiency of the coal
industry
A Malmquist Index-Based Dynamic Industrial Green Efficiency Evaluation 1363
the utilization efficiency of resources and energy. Figure 2 shows that the
discharge of industrial waste water, waste gas and solid waste (hereinafter,
the "three wastes") per unit of GDP in Sichuan Province decreased year by year
over 2005–2013, at decreasing rates of 38.96%, 32.07% and 85.19% respectively.
These exceeded the decreasing rates of national industrial three-wastes emissions
(30.63%, 21.85% and 72.89%), indicating that, compared with the nation as a
whole, Sichuan Province mitigated environmental pollution to a greater degree
during its industrial development. In addition, although industrial waste water
and waste gas per unit of GDP in Sichuan province have been lower than the
national levels since 2010, industrial solid waste generated per unit of GDP has
remained higher than the national level in recent years. The environmental
pollution situation in Sichuan province thus remains grim, and strengthening
green industrial transformation is an important way to ease the pressure on
resources and the environment.
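The decreasing rates quoted above are start-to-end percentage declines over the period; with hypothetical endpoint intensities, the calculation is simply:

```python
def decline_rate(start, end):
    """Percentage decline from the first to the last year of a series."""
    return (start - end) / start * 100.0

# Hypothetical emission intensities (units per billion yuan of GDP)
# in the first and last year of an observation window:
print(round(decline_rate(2.0, 1.2), 2))  # a 40.0% decline
```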
The Malmquist index was proposed by the economist Sten Malmquist in 1953;
Caves et al. and Färe et al. later applied it to productivity estimation. It can
describe production technologies with multiple input and multiple output
variables, and it is a method of measuring total factor productivity using panel
data by
Fig. 2. Comparison of "three wastes" emissions per unit of GDP in Sichuan
province and in China, 2005–2013: industrial waste water discharged (million
tons/billion yuan), industrial waste gas discharged (billion standard cubic
meters/billion yuan) and industrial solid wastes generated (million tons/billion
yuan) (Source: calculated according to the Chinese Statistical Yearbook
2006–2014)
M0(x^{t+1}, y^{t+1}, x^t, y^t) = [D0^{t+1}(x^{t+1}, y^{t+1}) / D0^t(x^t, y^t)]
× {[D0^t(x^{t+1}, y^{t+1}) / D0^{t+1}(x^{t+1}, y^{t+1})]
× [D0^t(x^t, y^t) / D0^{t+1}(x^t, y^t)]}^{1/2}.  (1)
In formula (1), M0 is the Malmquist index for decision-making unit DMU0
between periods t and t+1; (x^t, y^t) and (x^{t+1}, y^{t+1}) are the
input-output vectors in periods t and t+1 respectively, and D0^t and D0^{t+1}
are the distance functions taking the period-t and period-(t+1) technologies
as references. Under the assumption of variable returns to scale, formula (1)
can be rewritten as follows:
M0(x^{t+1}, y^{t+1}, x^t, y^t) = [D0^{t+1}(x^{t+1}, y^{t+1}|V) / D0^t(x^t, y^t|V)]
× {[D0^{t+1}(x^{t+1}, y^{t+1}|C) / D0^{t+1}(x^{t+1}, y^{t+1}|V)]
× [D0^t(x^t, y^t|V) / D0^t(x^t, y^t|C)]}
× {[D0^t(x^{t+1}, y^{t+1}) / D0^{t+1}(x^{t+1}, y^{t+1})]
× [D0^t(x^t, y^t) / D0^{t+1}(x^t, y^t)]}^{1/2}.  (2)
In formula (2), V denotes variable returns to scale and C denotes constant
returns to scale. The first term represents the pure technical efficiency index
(Pech), the second the scale efficiency index (Sech), and the third the technical
progress index (Techch); that is, Tfpch = Effch × Techch = Pech × Sech × Techch.
Among them, the technical efficiency index measures the degree of change
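Given the four distance function values for one DMU, the decomposition into efficiency change and technical change can be checked numerically; the values below are invented for illustration.

```python
from math import sqrt

# Hypothetical CRS distance function values for one DMU in periods t and t+1.
d_t_t   = 0.85   # D0^t(x^t, y^t)
d_t_t1  = 1.05   # D0^t(x^{t+1}, y^{t+1})
d_t1_t  = 0.80   # D0^{t+1}(x^t, y^t)
d_t1_t1 = 0.95   # D0^{t+1}(x^{t+1}, y^{t+1})

# Efficiency change (catch-up) and technical change (frontier shift).
effch  = d_t1_t1 / d_t_t
techch = sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))

# Total factor productivity change, as in formula (1).
tfpch = effch * techch

# Algebraic identity: the decomposition equals the geometric-mean form.
assert abs(tfpch - sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))) < 1e-12

print(f"Effch={effch:.3f}, Techch={techch:.3f}, Tfpch={tfpch:.3f}")
```

A value of Tfpch above 1 indicates productivity growth between the two periods; the assertion verifies that Effch × Techch reproduces the original index.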
(1) Standardizing the industrial emissions of waste water, waste gas (SO2) and
smoke (powder) dust, where x_ij denotes the value of index j in region i.
(2) Calculating the entropy of each index, where m is the number of evaluation
units and n the number of indicators:
H_j = −(1 / ln m) Σ_{i=1}^{m} (r_ij / Σ_{i=1}^{m} r_ij) ln(r_ij / Σ_{i=1}^{m} r_ij),
where a term with r_ij / Σ_{i=1}^{m} r_ij = 0 is taken as 0.
(3) Obtaining the entropy weight of each index. To preserve the longitudinal
comparability of the dynamic evaluation results, following the method of Zhang
and Zhou [23], this paper takes the average entropy weight as the final entropy
weight to avoid differences between years:
W_j = (1 − H_j) / (n − Σ_{j=1}^{n} H_j).
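The entropy-weight steps above can be sketched directly; the data matrix below is invented for illustration.

```python
import math

# Entropy-weight method sketch: rows are evaluation units (regions),
# columns are indicators. The matrix is invented for illustration only.
r = [
    [0.2, 0.7, 0.1],
    [0.5, 0.2, 0.4],
    [0.3, 0.1, 0.5],
]
m, n = len(r), len(r[0])

def entropy(col):
    """H_j = -(1/ln m) * sum_i p_ij ln p_ij, with 0*ln(0) taken as 0."""
    total = sum(col)
    h = 0.0
    for v in col:
        p = v / total
        if p > 0:
            h += p * math.log(p)
    return -h / math.log(m)

H = [entropy([r[i][j] for i in range(m)]) for j in range(n)]
W = [(1 - H[j]) / (n - sum(H)) for j in range(n)]

print([round(w, 3) for w in W])
```

By construction the weights sum to one, and an indicator whose values are more dispersed across regions (lower entropy) receives a larger weight.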
5 Results
5.1 Overall Analysis of Industrial Green Efficiency in Sichuan
This paper uses DEAP 2.1 software to calculate the Malmquist index of the
industrial green efficiency of Sichuan province and its decomposition over
2005–2014, as shown in Table 2. The level of Sichuan's industrial green
efficiency is low, but the trend is increasing year by year. This is consistent
with Jia's [7] results: Sichuan's green industrial development rate still lags,
while energy efficiency has gradually increased; energy pressure remains large,
and environmental pollution has been curbed but the situation is still grim.
From the index decomposition, the impetus for growth in industrial green
efficiency comes from technical efficiency, with an average annual growth rate
of 2.1%. The improvement in technical efficiency is mainly due to pure technical
efficiency and scale efficiency, which increased by 1.6% and 0.6% respectively.
But technical progress efficiency is low, far below the effective value of 1,
indicating that R&D and the introduction, digestion and absorption of green
technology should be increased to drive the green development of Sichuan's
industry. The highest level of Sichuan's industrial green development occurred
in 2013–2014, with an increase of 16.6%, mainly driven by improvement in the
technological progress index. This may be because governments and enterprises
have paid more attention to green industrial development in recent years, and
because the lagged effect of green technology has begun to appear.
Table 2. Average growth rate per annum and decomposition of TFP in Sichuan
province (2005–2014)
Table 3. Average growth rate per annum and decomposition of TFP in cities (2005–
2014)
Combining Table 3 and Fig. 3, we analyze in depth the dynamic change trend
and the distribution of the Malmquist index decomposition items across Sichuan
province. Figure 3 reflects the growth pattern and characteristics of industrial
green efficiency. The crossing point of the axes is the mean point of the
technical efficiency index and the technical progress index, which divides the
18 prefecture-level cities of Sichuan province into four quadrants. In the first
quadrant, Effch is higher and Techch lower than the average, indicating regions
with a low technology contribution and a simple factor-driven mode. In the
second quadrant, both Effch and Techch are above the average; these regions have
a reasonable factor input scale and a high contribution rate from technology
import and absorption, which promotes the growth of the
Fig. 3. The Techch and Effch distribution of the industrial green efficiency of
the 18 prefecture-level cities of Sichuan province (quadrants I–IV; the cities
plotted are Chengdu, Mianyang, Deyang, Yibin, Dazhou, Panzhihua, Ziyang,
Luzhou, Zigong, Nanchong, Suining, Neijiang, Guang'an, Ya'an, Meishan, Leshan,
Guangyuan and Bazhong)
industrial green efficiency. In the third quadrant, Techch is higher than Effch,
which shows that the technology level of these regions is above the production
frontier, but scale efficiency and pure technical efficiency still need to
improve. In the fourth quadrant, both Effch and Techch are below the provincial
average; these regions have a low level of industrial greening and weak
sustainable development owing to technical constraints and scale bottlenecks.
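The quadrant classification described above reduces to comparing each city's Effch and Techch with the provincial means; a sketch with hypothetical index values:

```python
# Classify regions into the four quadrants of Fig. 3 relative to the mean
# Effch and Techch. City names and index values are hypothetical.

cities = {
    "CityA": (1.08, 1.02),   # (Effch, Techch)
    "CityB": (1.12, 0.86),
    "CityC": (0.94, 1.01),
    "CityD": (0.92, 0.84),
}

mean_eff = sum(e for e, _ in cities.values()) / len(cities)
mean_tech = sum(t for _, t in cities.values()) / len(cities)

def quadrant(effch, techch):
    """I: high Effch/low Techch; II: high/high; III: low/high; IV: low/low."""
    if effch >= mean_eff:
        return "II" if techch >= mean_tech else "I"
    return "III" if techch >= mean_tech else "IV"

for name, (e, t) in cities.items():
    print(name, quadrant(e, t))
```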
In the meantime, the data for the four major economic regions (the Chengdu
Plain, Northeast Sichuan, South Sichuan and Panxi; the northwest Sichuan
economic region is not studied here because of large amounts of missing data)
indicate that industrial green efficiency declines stepwise from north to south
and from west to east, as shown in Table 4.
The industrial green efficiency of Panxi heads the list, followed by the
Chengdu Plain and South Sichuan; the worst is Northeast Sichuan, with an
average annual decline of 12.7%. The Malmquist index and its decomposition
show that the technical progress indexes of all four economic regions are in an
inefficient state. Except for South Sichuan, the technical efficiency and its
decomposition items of the other three economic regions are improving, which
reveals that their resource allocation ability, resource utilization rate and
scale efficiency are effective.
The driving force of technical efficiency in the Chengdu Plain Economic Zone
is mainly pure technical efficiency. This also shows that the radiating role of
Chengdu, as the capital of Sichuan Province, brings significant advantages in
production factors, management level and system construction. The improvement
in the technical efficiency of the Panxi Economic Zone mainly originates in its
advantage in scale efficiency (an increase of 6.8%), showing that its resource
inputs and their utilization efficiency have achieved scale benefits. Conversely,
the scale efficiency of the South Sichuan Economic Zone has decreased by 0.6%,
which may be related to an unreasonable input structure of production factors.
Table 4. Average growth rate per annum and decomposition of TFP in economic zone
(2005–2014)
Combined with the existing problems and deficiencies in the process of industrial
green development in Sichuan province, several measures could be taken.
(1) Adjusting the industrial structure within industry.
Eliminate backward production capacity with high energy consumption and low
added value, and reduce the proportion of high-energy-consumption,
high-pollution industries [8] such as steel and cement. Actively develop
renewable energy sources such as wind, solar and bio-energy to improve the
coal-based industrial structure. In addition, the scale of the environmental
protection industry, for instance the solid waste treatment and sewage treatment
industries, can be expanded.
(2) Promoting technological innovation.
It is beneficial to reform and upgrade traditional industries and to strengthen
technological transformation, thus promoting the development of a green
recycling economy. Environmentally friendly technologies and industrial
processes, such as industrial energy and water saving, clean production and
recycling, should be actively implemented to enhance green technology capability
[12]. University-industry collaboration should be encouraged to promote the
industrialization of green technology. Protecting intellectual property enhances
enterprises' enthusiasm for developing green technology and removes market
barriers. The government can set up innovation platforms, taking advantage of
the transfer of eastern industry to the west, to enhance the capability for
independent innovation.
(3) Adjusting the scale structure to improve industrial green efficiency.
Use "incentive" environmental regulation tools based on market mechanisms, and
establish market-based emissions regulations and industry dynamics to promote
industrial scale efficiency [5]. Increase the environmental surveillance of
medium-sized enterprises and implement binding environmental measures. In
industrial development policy, leading enterprises should play an exemplary
role, the joint development of small and medium enterprises should be
encouraged, and linkages with large enterprises in the upstream and downstream
of the industry chain should be enhanced. In addition, establishing green supply
chain networks can reduce damage to the ecological environment and achieve the
goal of improving industrial green efficiency.
Acknowledgements. This work was supported by the Key Program (Grant No.
16AGL003) of the National Social Science Foundation of China. We also
acknowledge the support of the Social Science Planning Annual Project of Sichuan
Province (SC16C011), the Soft Science Research Plan Project of Sichuan Province
(2017ZR0098), the Soft Science Research Project of Chengdu City
(2015-RK00-00156-ZF), and the Central University Basic Scientific Research
Business Expenses Project of Sichuan University (skqy201623).
References
1. Chen C, Han J, Fan P (2016) Measuring the level of industrial green development
and exploring its influencing factors: empirical evidence from China’s 30 provinces.
Sustainability 8(2):153
2. Eiadat Y, Kelly A et al (2008) Green and competitive? an empirical test of the
mediating role of environmental innovation strategy. J World Bus 43(2):131–145
3. Fare R, Grosskopf S, Norris M (1994) Productivity growth, technical progress, and
efficiency change in industrialized countries: reply. Am Econ Rev 84(5):1040–1044
4. Fleiter T, Fehrenbach D et al (2012) Energy efficiency in the German pulp and
paper industry - a model-based assessment of saving potentials. Energy 40(1):84–99
5. Fowlie M, Reguant M, Ryan S (2016) Market-based emissions regulation and indus-
try dynamics. J Polit Econ 124:249–302
6. Honma S, Hu JL (2014) A panel data parametric frontier technique for measuring
total-factor energy efficiency: an application to Japanese regions. Energy 78:732–
739
7. Jia RM (2011) The ecological character of evolution in Sichuan province and
regulatory strategy. Ind Technol Econ 8:98–104
8. Mei S, Wang LJ, Li WB (2009) Efficiency assessment of coal industry sustainable
development by improved data envelopment analysis. In: International conference
on management science and engineering, pp 1830–1835
9. Mukherjee R, Sengupta D, Sikdar SK (2015a) Sustainability in the context of
process engineering. Clean Technol Environ Policy 17(4):833–840
1 Introduction
The objective for the coming decades is to provide competitive and sustainable
energy to all countries, allowing households, industry and transportation to
reduce their dependence on fossil fuels. Wind power and Concentrated Solar
Power (CSP) are two of the main renewable energy sources; their importance in
the energy market has become essential, and forecasts show this will continue
in the near future.
The renewable energy industry requires significant improvements in
reliability, lifetime and availability, which are achieved through efficient
maintenance based on Condition Monitoring Systems (CMS). Condition monitoring
is the process of determining the condition of a system [37,38,49], one of whose
main purposes is to identify changes between the two main states of the
structure, damaged and undamaged [36,39,41]. These states provide different
patterns that are used to analyze the condition of the system [19].
A complex CMS contains numerous sensors that generate different types of
signals, with different information content and sampling frequencies [41]. These
signals need to be processed, preferably on-line, with minimal computational
cost and with high accuracy to reduce false alarms [39].
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference on Management Science and Engineering Management, Lecture Notes on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_115
1378 A.A. Jiménez et al.
This paper presents a novel approach to reduce the number of sensors while maximizing the signal processing analysis to detect different damages. Guided wave ultrasonic signals are employed for fault detection and classification.
Structural Health Monitoring (SHM) is a technology that combines advanced CMS with signal processing to determine the condition of structures, online or offline [19,36,41]. SHM increases the Reliability, Availability, Maintainability and Safety (RAMS) of the system [16,41]. SHM also makes it possible to determine the severity level of a defect, which is useful for an optimal maintenance management that reduces costs and increases profitability.
Farrar and Sohn [16] considered pattern recognition in SHM. This methodology consists of four stages, shown in Fig. 1.
2 Approach
Figure 2 shows the schematic approach for determining the level of damage or
anomaly based on SHM employing ultrasonic guided waves.
Machine Learning and Neural Network for Maintenance Management 1379
Sensors generate noise from a variety of sources, and therefore signal pre-processing is necessary to eliminate or reduce the data that do not provide useful information [45]. Standard statistical techniques have been employed in references [10,53]. Yu et al. [64] were able to reduce global noise using averaging techniques and Daubechies Wavelets (DW) to eliminate local high-frequency perturbations. Signal denoising and compression in Guided Waves (GW) based on the Discrete Wavelet Transform (DWT) were employed by Rizzo and Scalea [55]. There is a large body of research and review articles on filtering in the treatment of ultrasonic guided waves [46]. Hamming [21] presents a review of the low-pass filters available for data smoothing. This paper considers wavelet transforms for filtering the signals.
The denoising of the signal is performed using a multilevel 1-D wavelet analysis with the Daubechies family. The wavelet decomposition structure of the signal is extracted. The threshold for denoising is obtained by a wavelet coefficient selection rule using the penalization method of Birgé-Massart [47]. Overly aggressive filtering could eliminate data that should be preserved, for example small echoes coming from defects. Figure 3 shows the original signal and the denoised signal after the wavelet denoising filter is applied. In contrast to other digital filters, wavelet denoising does not produce an unwanted signal delay.
It is observed that the filter removes noise significantly without eliminating information related to the structural features.
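The denoising step described above can be sketched in code. This is a minimal illustration rather than the authors' implementation: it uses a single-level Haar transform and the universal soft threshold in place of the multilevel Daubechies decomposition and the Birgé-Massart penalized rule used in the paper.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x):
    """Soft-threshold the detail coefficients with the universal threshold
    (NOT the Birge-Massart rule used in the paper)."""
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745              # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))        # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
    return haar_idwt(a, d)

# synthetic example: a clean tone plus Gaussian noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2.0 * np.pi * 5.0 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = denoise(noisy)
```

Because the high-frequency detail coefficients carry most of the noise, thresholding them reduces the error without introducing the delay of a causal low-pass filter.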
2.2 FE Methods
[Figure: three stacked panels — noisy signal, denoised signal and residual noise — voltage versus time (samples).]
Fig. 3. Decomposition detail five (D5), De-noised D5 and extracted residual noise
2.3 FS Methods
The Akaike Information Criterion (AIC) is one of the most efficient techniques for order optimization [35]. The AIC measures the goodness of fit of an estimated statistical model, based on the trade-off between fitting accuracy and the number of estimated parameters.
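The trade-off can be made concrete with a small sketch: autoregressive models of increasing order are fitted by least squares and scored with a common AIC variant, n ln(RSS/n) + 2k. The AR setting, the seed and the formula variant are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def ar_aic(x, p):
    """Least-squares AR(p) fit; returns AIC = n*log(RSS/n) + 2*(p+1)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - p
    y = x[p:]
    # column i holds the series lagged by i+1 periods
    X = np.column_stack([x[p - 1 - i : p - 1 - i + n] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return n * np.log(rss / n) + 2.0 * (p + 1)

# simulate an AR(2) process and compare candidate orders
rng = np.random.default_rng(42)
x = np.zeros(600)
for t in range(2, 600):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.standard_normal()
aics = {p: ar_aic(x, p) for p in range(1, 6)}
# an under-fitted AR(1) should score clearly worse than AR(2)
```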
2.4 Classifiers
(1) Machine Learning Approach
1 Decision Trees
Decision Tree (DT) is a classifier used in many fields to study whether the data contain different classes of objects that can be interpreted meaningfully in the context of a substantive theory [34,48,51,52].
A DT generates a partition of the feature space from a labeled training set. The objective is to separate the elements of each class into different labeled regions (leaves), minimizing the local error. Each internal node in the tree is a question (decision) that determines which branch of the tree must be taken to reach a leaf. A DT is determined by: (1) how the space is split (the Splitting Rules, SR); (2) the stopping condition for splitting; (3) the labeling function of a region; and (4) the measurement of error. The purpose of the splitting rule is to minimize the impurity of the node. The recursive splitting algorithm stops when any of the following conditions holds: the node is at a pre-set maximum depth; all elements of the node belong to the same class; a split would produce an empty sub-node; or the SR does not reach a pre-set value.
Labeling a leaf or region, once it is considered terminal, is part of developing a DT. Equation (1) establishes the labeling function:

$$l^{*} = \arg\min_{l'} \sum_{l=1}^{k} N_{l} \times c(l, l'), \qquad (1)$$

where $N_{l}$ is the number of elements of class $l$ in the region, $l'$ is the candidate label, and $c(l, l')$ is the labeling cost.
The cost $c(l, l')$ is calculated for all classes and the label $l^{*}$ that minimizes the error is selected; the label that minimizes the error of a region is that of its most populated class.
In case of a tie, a random one is chosen. Equation (2) gives the classification
average error
$$\varepsilon = \frac{1}{N} \sum_{i=1}^{N} \mathrm{err}(l, l_{i}), \qquad (2)$$

where $\mathrm{err}(l, l_{i})$ is the error of labeling an element of class $l_{i}$ as $l$. This error is minimized by splitting the space and assigning a label to each split.
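The labeling rule of Eq. (1) can be written down directly; with 0-1 costs it reduces to the majority vote described above. Class names, counts and costs below are hypothetical.

```python
def label_leaf(counts, cost):
    """Eq. (1): choose the label l' minimizing sum_l counts[l] * cost[l][l']."""
    classes = list(counts)
    return min(classes, key=lambda lp: sum(counts[l] * cost[l][lp] for l in classes))

# 0-1 labeling costs: the minimizer is the most populated class
counts = {"undamaged": 7, "damaged": 3}
cost = {l: {lp: 0.0 if l == lp else 1.0 for lp in counts} for l in counts}
label = label_leaf(counts, cost)                  # -> "undamaged"
error = counts["damaged"] / sum(counts.values())  # leaf error rate 0.3
```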
2 Discriminant Analysis (DA)
The ultrasonic signals considered are homogeneous for the same frequency and, therefore, the classifier that provides the best results is Linear Discriminant Analysis (LDA) [15]. This classifier is valid for ultrasonic signals obtained at different frequencies and is also used for speech recognition. This type of classifier is based on a geometrical approach: a linear function representing a decision boundary divides the feature space into regions that have common properties. The aim of the method is to find the expressions of the linear discriminant functions that enable the classification of objects into the considered classes.
3 Support Vector Machine (SVM)
SVM [9] is a supervised multivariate classification method. "Supervised" refers to a training step in which the algorithm learns the differences between the pre-specified groups to be classified [59]. SVM treats each feature vector as a point in a high-dimensional space, the number of dimensions being equal to the number of rating levels. Each point is assigned to a group, and the linear classification function (3) learns the characteristics that discriminate among the five groups. A decision boundary, or hyperplane (a generalization of a plane: an (n − 1)-dimensional subspace dividing an n-dimensional space), must be defined to separate the data based on class membership and to classify the dataset linearly. However, for a linearly separable problem there is an infinite number of hyperplanes that correctly classify the data. The SVM algorithm finds the optimal one, characterized by the largest margin between classes. The margin is defined as the distance from the hyperplane to the closest training data points. These points are the most difficult to classify and are called support vectors. The hyperplane is defined by a weight vector, which is a linear combination of the support vectors and specifies both a direction and a displacement; together these define the maximum-margin classifier.
The decision function D(x) is given by Eq. (3),

$$D(x) = w \cdot \phi(x) + b, \qquad (3)$$

where $w$ and $b$ are the SVM parameters and $\phi(x)$ is a kernel function. The hyperplane is defined by Eq. (3), and the distance between the hyperplane and a pattern $x$ can be written as Eq. (4):

$$\frac{D(x)}{\lVert w \rVert}. \qquad (4)$$

The maximum-margin hyperplane is obtained by minimizing Eq. (5):

$$J = \frac{\lVert w \rVert^{2}}{2}. \qquad (5)$$
In this case, the linear kernel function and the one-vs-one method have been employed [43].
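Equations (3) and (4) can be illustrated with a plain linear decision function, using the identity feature map in place of the kernel map φ; the weights below are hypothetical rather than trained.

```python
import numpy as np

def decision(w, b, x):
    """D(x) = w · x + b, i.e. Eq. (3) with phi(x) = x."""
    return float(np.dot(w, x) + b)

def distance_to_hyperplane(w, b, x):
    """Signed distance D(x)/||w|| from pattern x to the hyperplane (Eq. (4))."""
    return decision(w, b, x) / float(np.linalg.norm(w))

w, b = np.array([3.0, 4.0]), -5.0   # hyperplane 3*x1 + 4*x2 - 5 = 0, ||w|| = 5
on_plane = np.array([1.0, 0.5])     # D = 3 + 2 - 5 = 0
off_plane = np.array([3.0, 4.0])    # D = 9 + 16 - 5 = 20, distance 20/5 = 4
```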
4 k-Nearest Neighbors
k-Nearest Neighbors (k-NN) [11,12] is a high-performance classifier widely used in Machine Learning [42]. The k-NN classification rule is an extension of the Nearest-Neighbor (NN) rule: each sample is assigned the label most frequently represented among its k nearest samples [15].
k-NN search technique and k-NN based algorithms are widely used as benchmark
learning rules. The relative simplicity of the k-NN search technique makes it easy
to compare the results from other classification techniques to k-NN results.
The accuracy of k-NN classification significantly depends on the metric used
to compute distances between different samples [61]. In most cases, the best performing classifier is Fine k-NN [62], using the Euclidean distance metric of Eq. (6):

$$d_{st}^{2} = (x_{s} - y_{t}) \cdot (x_{s} - y_{t}). \qquad (6)$$
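A minimal k-NN classifier using the squared Euclidean metric of Eq. (6) might look as follows; the toy training set and labels are hypothetical.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d2 = np.sum((X_train - x) ** 2, axis=1)  # d2_st = (xs - yt)·(xs - yt), Eq. (6)
    nearest = np.argsort(d2)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = ["undamaged", "undamaged", "damaged", "damaged"]
```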
The recommendations by Demšar [4], and the extensions by García and Herrera [17], have been employed to perform the comparative analysis of classifiers. The Friedman test is used to test the null hypothesis that all classifiers achieve the same average rank. The Bonferroni-Dunn test is applied to determine significant differences between the top-ranked classifier and the following ones. The Holm test is used to contrast the results.
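The Friedman statistic underlying this comparison can be computed directly from an N × k matrix of classifier scores. The sketch below assumes higher scores are better and that there are no ties within a row.

```python
import numpy as np

def friedman_statistic(scores):
    """Friedman chi-square for an (N datasets x k classifiers) score matrix."""
    N, k = scores.shape
    # rank within each row: the best score gets rank 1
    order = np.argsort(-scores, axis=1)
    ranks = np.empty_like(scores)
    ranks[np.arange(N)[:, None], order] = np.arange(1, k + 1)
    mean_ranks = ranks.mean(axis=0)
    chi2 = 12.0 * N / (k * (k + 1)) * (np.sum(mean_ranks ** 2) - k * (k + 1) ** 2 / 4.0)
    return chi2, mean_ranks

# one classifier dominating on every dataset gives the maximal statistic
scores = np.array([[0.9, 0.8, 0.7]] * 4)
chi2, mean_ranks = friedman_statistic(scores)
```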
3 Approach Scheme
The methodological process is represented schematically in Fig. 4. Firstly, for
each damage or anomaly, the best classifier is selected.
[Fig. 4. Scheme: data acquisition and filtering → feature extraction and feature selection → classifier (machine learning and neural network) → classifier evaluation → decision making.]
The methodological process will follow the scheme given in Fig. 5 when the
best classifier for each damage is set.
[Fig. 5. Scheme: data acquisition and filtering → feature extraction → feature selection → one classifier per damage (Damage-1, Damage-2, …, Damage-n).]
4 Conclusions
The paper presents a novel approach to optimizing the sensors in a condition monitoring system employing ultrasonic waves. The approach can detect different potential faults, such as delamination, or mud or ice on the blades of wind turbines, with a single signal emitted by a sensor.
References
1. Akaike H (1969) Fitting autoregressive models for prediction. Ann Inst Stat Math
21(1):243–247
2. Akaike H (1974) A new look at the statistical model identification. IEEE Trans
Autom Control 19(6):716–723
3. Alazrai R, Lee CSG (2012) A NARX-based approach for human emotion identification. In: IEEE/RSJ international conference on intelligent robots and systems, pp 4571–4576
4. Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7(1):1–30
5. Bradley AP (1997) The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit 30(7):1145–1159
6. Brankovic A, Falsone A et al (2016) Randomised algorithm for feature selection and classification. arXiv preprint
7. Breiman L (2001) Random forests. Mach Learn 45(1):5–32
8. Breiman L, Friedman J, et al (1984) Classification and regression trees. CRC Press
9. Burges CJC (1998) A tutorial on support vector machines for pattern recognition.
Data Min Knowl Disc 2(2):121–167
10. Chopra I (2002) Review of state of art of smart structures and integrated systems.
AIAA J 40(11):2145–2187
11. Cover T, Hart P (1967) Nearest neighbor pattern classification. IEEE Trans Inf
Theory 13(1):21–27
12. Dasarathy BV (1990) Nearest neighbor (NN) norms: NN pattern classification techniques. IEEE Computer Society Press, Los Alamitos
13. De Lautour OR, Omenzetter P (2010) Damage classification and estimation in
experimental structures using time series analysis and pattern recognition. Mech
Syst Signal Process 24(5):1556–1569
14. Dietterich TG (2000) Ensemble methods in machine learning. In: International
workshop on multiple classifier systems, pp 1–15
15. Duda RO, Hart PE, Stork DG (2012) Pattern classification. Wiley
16. Farrar CR, Doebling SW, Nix DA (2001) Vibration based structural damage iden-
tification. Philos Trans R Soc B Biol Sci 359(359):131–149
17. García S, Herrera F (2008) An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons. J Mach Learn Res 9(12):2677–2694
18. Ghosh S, Maka S (2009) A narx modeling-based approach for evaluation of insulin
sensitivity. Biomed Signal Process Control 4(1):49–56
43. Milgram J, Cheriet M, Sabourin R (2006) “one against one” or “one against all”:
which one is better for handwriting recognition with svms? In: Proceedings of
international workshop on frontiers in handwriting recognition
44. Møller MF (1993) A scaled conjugate gradient algorithm for fast supervised learn-
ing. Neural Netw 6(4):525–533
45. Muñoz JMC, Márquez FPG, Papaelias M (2013) Railroad inspection based on ACFM employing a non-uniform B-spline approach. Mech Syst Signal Process 40(2):605–617
46. Munoz CG, Márquez FG (2016) A new fault location approach for acoustic emis-
sion techniques in wind turbines. Energies 9(1):40
47. Munoz CQG, Márquez FPG, Tomás JMS (2016) Ice detection using thermal
infrared radiometry on wind turbine blades. Measurement 93:157–163
48. Pal M, Mather PM (2003) An assessment of the effectiveness of decision tree meth-
ods for land cover classification. Remote Sens Environ 86(4):554–565
49. Papaelias M, Cheng L et al (2016) Inspection and structural health monitoring
techniques for concentrated solar power plants. Renew Energy 85:1178–1191
50. Prechelt L (1998) Automatic early stopping using cross validation: quantifying the
criteria. Neural Netw Off J Int Neural Netw Soc 11(4):761
51. Quinlan JR (1986) Induction of decision trees. Mach Learn 1(1):81–106
52. Quinlan JR (2014) C4.5: Programs for machine learning. Elsevier
53. Raghavan AC, Cesnik CES (2007) Review of guided-wave structural health moni-
toring. Shock Vib Dig 39(2):91–114
54. Rijsbergen CJV (1979) Information retrieval. Butterworth-Heinemann
55. Rizzo P, Scalea FLD (2004) Discrete wavelet transform to improve guided-wave-
based health monitoring of tendons and cables. Proc SPIE Int Soc Optical Eng
5391:523–532
56. Rumelhart DE, McClelland JL et al (1988) Parallel distributed processing, vol 1.
IEEE
57. Staszewski WJ, Robertson AN (2007) Time-frequency and time-scale analyses
for structural health monitoring. Philos Trans R Soc A Math Phys Eng Sci
365(1851):449
58. Tang J, Alelyani S, Liu H (2014) Feature selection for classification: a review. In: Data classification: algorithms and applications. CRC Press, pp 313–334
59. Vapnik VN (1999) An overview of statistical learning theory. IEEE Trans Neural
Netw 10(10):988–999
60. Wei WWS (1994) Time series analysis. Addison-Wesley publ Reading
61. Weinberger KQ, Saul LK (2009) Distance metric learning for large margin nearest
neighbor classification. J Mach Learn Res 10(1):207–244
62. Xu Y, Zhu Q et al (2013) Coarse to fine k nearest neighbor classifier. Pattern
Recognit Lett 34(9):980–986
63. Yang Y (1999) An evaluation of statistical approaches to text categorization. Inf
Retr J 1(1):69–90
64. Yu L, Bao J, Giurgiutiu V (2004) Signal processing techniques for damage detection
with piezoelectric wafer active sensors and embedded ultrasonic structural radar.
Proc SPIE Int Soc Optical Eng 5391:492–503
Volatility Spillover Between Foreign Exchange
Market and Stock Market in Bangladesh
Knowledge about the volatility helps investors in their risk management, portfolio allocation and stock trading strategies, and is considered one of the ways of developing a capital market. However, there is no notable study in Bangladesh that has investigated the volatility of these financial markets, especially how these two markets are linked through volatility.
The contribution of this research lies in the fact that no notable study has so far investigated the link between the volatility of the capital market and the foreign exchange market in Bangladesh. It is hoped that this study will inform policy-making decisions relating to these two markets and help investors in making investment decisions.
1.1 Methodology
The study uses the CSE General Index of the Chittagong Stock Exchange (the second largest, in the port city) from January 1, 2009 to December 12, 2016. The official Taka/USD exchange rate for the same period was also used. These data were collected from the Chittagong Stock Exchange and from the central bank of Bangladesh, 'Bangladesh Bank', respectively. The Taka/USD exchange rates and the CSE General Index for the sample period are shown in Figs. 1 and 2 respectively.
The return of the stock market was calculated as follows:
[Figure: Taka/USD buy (USDBuy) and sell (USDSell) rates, ranging roughly from 70 to 85 over the sample period.]
Fig. 1. Taka/USD exchange rate (From January 1 2009 to December 12, 2016)
[Fig. 2. CSE General Index for the sample period.]
The daily spread between the buy and sell rates of Taka/USD was used to measure the return on the Taka/USD exchange rate. The Taka/USD exchange rate returns and the CSE General Index daily returns for the sample period are shown in Figs. 3 and 4 respectively.
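For concreteness, the two return measures can be computed as below. The log-return definition is a common convention assumed here (the paper's exact formula is not reproduced in this excerpt), and all price levels are hypothetical.

```python
import numpy as np

def log_returns(prices):
    """Daily log returns r_t = ln(P_t / P_{t-1}) (an assumed convention)."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

index = np.array([4500.0, 4545.0, 4522.3, 4590.0])  # hypothetical CSE levels
usd_buy = np.array([77.10, 77.15, 77.20])           # hypothetical Taka/USD buy
usd_sell = np.array([77.60, 77.70, 77.65])          # hypothetical Taka/USD sell
stock_returns = log_returns(index)
spread = usd_sell - usd_buy   # daily buy-sell spread, used as the FX "return"
```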
Fig. 4. CSE General Index stock return (January 1, 2009 to December 12, 2016)
standard errors and confidence intervals. GARCH models correct for this by treating heteroskedasticity as a variance to be modeled.
The ARCH model was the forerunner of the GARCH specifications. Before it, volatility used to be measured using rolling standard deviations over past time periods. The problem with this approach is that each time period receives equal weight in the estimation of the current period's variance. The ARCH process, proposed by Engle [5], solves this by letting these weights be parameters to be estimated, which allows the data to determine the best weights for the present volatility [6]. An ARCH(q) model is presented below:
$$r_{t} = \pi + \varepsilon_{t},$$
$$\sigma_{t}^{2} = c + \sum_{i=1}^{q} \alpha_{i}\,\varepsilon_{t-i}^{2},$$

where $\varepsilon_{t} = \sigma_{t} z_{t}$; $z_{t}$ is an i.i.d. random variable with mean zero and variance one, and $\sigma_{t}^{2}$ is the conditional variance in the model.
The conditional variance is thus described as a distributed lag of past squared innovations [3]: the variance of the current error term is a function of the size of the previous periods' squared error terms. In order to avoid a very large number of coefficients in a high-order polynomial, Bollerslev [2] developed the GARCH model as a generalization of Engle's [5] ARCH model. This yields declining weights that never reach zero [6]. GARCH(1, 1) is the most commonly used model in empirical research. A GARCH(1, 1) model is presented below:
$$r_{t} = \pi + \varepsilon_{t},$$
$$\sigma_{t}^{2} = c + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2}.$$
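The GARCH(1, 1) recursion can be simulated directly, which also illustrates persistence α + β: the closer it is to one, the more slowly volatility shocks decay. The parameter values below are illustrative, not estimates from the Bangladeshi data.

```python
import numpy as np

def simulate_garch11(c, alpha, beta, pi=0.0, n=2000, seed=1):
    """Simulate r_t = pi + eps_t with
    sigma_t^2 = c + alpha*eps_{t-1}^2 + beta*sigma_{t-1}^2."""
    assert alpha + beta < 1.0, "covariance stationarity requires alpha + beta < 1"
    rng = np.random.default_rng(seed)
    sigma2 = np.empty(n)
    eps = np.empty(n)
    sigma2[0] = c / (1.0 - alpha - beta)  # start at the unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        sigma2[t] = c + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return pi + eps, sigma2

returns, sigma2 = simulate_garch11(c=0.05, alpha=0.08, beta=0.90)
# persistence alpha + beta = 0.98: volatility clusters and decays slowly
```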
Table 1. Persistence and asymmetry of CSE General Index and Taka/USD exchange
rate
3 Conclusion
This study investigates and analyses the link between the stock market and the foreign currency (ForEx) market of Bangladesh. The study was done to gain insight into volatility and its spillover effect between the stock and foreign exchange markets, and consequently the degree of their integration, in order to expand the information set available to international portfolio managers, multinational corporations and policymakers for decision-making and policy formulation.
It was found that the persistence of volatility in the stock market in Bangladesh is very high, which indicates that volatility clusters in the stock market and decays slowly. This finding is similar to the studies of Wu [11] and Jebran and Iqbal [7]. However, the study found low persistence in the Taka/USD currency market. The study also found asymmetry in the volatility of the stock market, i.e. negative news causes higher volatility than good news. The results show that volatility in the CSE General Index spills over to the Taka/USD exchange rate, but not the opposite.
1394 S. Rubayat and M. Tareq
References
1. Bala DA, Asemota JO (2013) Exchange-rates volatility in Nigeria: application of GARCH models with exogenous break. J Appl Stat 4(1):89–116
2. Bollerslev T (1986) Generalized autoregressive conditional heteroskedasticity. J Econom 31(3):307–327
3. Campbell JY, Lo AW et al (1997) The econometrics of financial markets. Princeton University Press, Princeton
4. Cheung YW (2001) Equity price dynamics before and after the introduction of the
euro: a note. Multinational Financ J 5(2):113–128
5. Engle RF (1982) Autoregressive conditional heteroscedasticity with estimates of
the variance of United Kingdom inflation. Econometrica 50(4):987–1007
6. Engle RF, Patton AJ (2001) What good is a volatility model? Quant Financ
1(2):237–245
7. Jebran K, Iqbal A (2016) Dynamics of volatility spillover between stock market and
foreign exchange market: evidence from Asian countries. Financ Innov 2(1):1–20
8. Levine R (1999) Financial development and economic growth: views and agenda.
J Econ Lit 35(2):688–726
9. Levine R, Zervos S (1996) Stock markets, banks, and economic growth. Am Econ
Rev 88(3):537–558
10. Siddiqui J (2010) Development of corporate governance regulations: the case of an
emerging economy. J Bus Ethics 91(2):253–274
11. Wu RS (2005) International transmission effect of volatility between the financial
markets during the Asian Financial Crisis. Transition Stud Rev 12(1):19–35
12. Zivot E (2009) Practical issues in the analysis of univariate GARCH models.
Springer, Heidelberg
Cost/Efficiency Assessment of Alternative
Maintenance Management Policies
1 Introduction
Maintenance interventions in deteriorating equipment are aimed at guaranteeing
its availability, reliability and safe performance whilst maximizing output and
minimizing waste. Such interventions may be predictive, preventive, or correc-
tive. Regarding the planning horizon, whereas corrective maintenance is reac-
tive (interventions come as a response to a failure), predictive and preventative
interventions are typically scheduled using either time-based or condition-based
schemes. Time-based interventions are usually scheduled according to an age-
based regime or following a pre-determined calendar (usually as indicated by
the producer or by certain legal requirements). In this work we focus our atten-
tion on preventative maintenance interventions.
There is a vast literature addressing the problem of scheduling preventative maintenance interventions for one single machine; some recent references are [3–6,12,13]. For good reviews of earlier contributions, see [1,8,20].
However, it is hard to find references in the literature addressing the problem of scheduling maintenance interventions for more than one machine; among them are [10,15,21]. Moreover, in most of the work addressing single-machine maintenance, attention is focused on finding optimal intervention times given the information gathered about the current condition of the machine, without taking into consideration the costs associated with the machine's operation, interventions and potential breakdowns. These costs become particularly relevant when designing maintenance schedules for multiple machines, as the trade-off between intervention/operation and breakdown becomes more pronounced.
1396 D. Ruiz-Hernández and J.M. Pinar-Pérez
Notwithstanding the shortage of academic work, the technical literature shows that maintenance tasks are typically allocated following either calendar-based programmes or, alternatively, regimes based on information about the current state of the equipment [2,7,11,14,18,22]. Based on this observation, in this work we compare the performance of a number of maintenance scheduling policies which broadly replicate the approaches taken in real-life problems.
2 Problem Formulation
¹ Please notice that, even if the machines' condition or level of wear is not necessarily a discrete variable, if the deterioration of the machine occurs smoothly over a linear space, then the state space can be discretized or uniformised in order to obtain a discrete state space [17].
Cost/Efficiency of Maintenance Management 1397
3 Maintenance Policies
In this section we briefly describe the five maintenance regimes that have been used in our work. All of them aim at minimizing the overall operation/maintenance cost of the equipment. Three of them are condition-based and presume the existence of a condition monitoring system that provides timely information about the system's state. Most of the condition-based policies used in real life fit (with minimal adaptations) within the more general frameworks described below. However, because of the large investment costs involved, in some cases condition monitoring systems may not be economically feasible (see, for example, [16]). In such cases time-based regimes are particularly useful. For this reason, the fourth policy discussed in this work is a purely calendar-based one. Finally, the last policy discussed below combines features of both condition-based and time-based regimes.
(1) Threshold Policy
At each decision epoch, the manager observes the state of the system and assigns maintenance tasks to those machines whose state is equal to or above a certain threshold. If the number of candidate machines is larger than the number of technicians, the effort is allocated to the machines with the largest state of wear; ties are broken randomly. More precisely, the action set of the threshold policy for a threshold $T$ at time $t$ is given by

$$a(t) = \Big\{a_{m}(t) : s_{m}(t) \ge T, \ \sum_{m=1}^{M} a_{m}(t) \le R\Big\}.$$
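A minimal sketch of this allocation rule (with deterministic tie-breaking by machine id rather than the random tie-breaking described above; states and parameters are hypothetical):

```python
def threshold_policy(states, T, R):
    """Intervene in machines whose wear state is at least T,
    at most R of them (one per technician), most worn first."""
    candidates = [m for m, s in enumerate(states) if s >= T]
    candidates.sort(key=lambda m: -states[m])
    return candidates[:R]

# four machines, threshold T = 5, R technicians
chosen_one = threshold_policy([3, 7, 9, 2], T=5, R=1)   # -> [2]
chosen_all = threshold_policy([3, 7, 9, 2], T=5, R=3)   # -> [2, 1]
```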
Index policies exploit all the information available for each machine at any time in order to compute a so-called activity or intervention index. The indices are calibrated for each machine/state pair and the policy prescribes intervening in the machines with the largest indices. Typically, the indices will have negative values for early states of wear, when the machine is supposed to be in an as-good-as-new condition; consequently, only machines with positive indices will be intervened. This rule has been extended to all the other policies evaluated in this article, imposing that no machine be intervened whenever it lies in a state whose index has a negative value².
If we let $I_{m,s_{m}}$ represent the index of machine $m$ at state $s_{m}$, and $I^{*}$ is the $R$-th largest index, then the actions available at time $t$ are given by

$$a(t) = \{a_{m} \mid a_{m} = 1 \ \forall m : I_{m,s_{m}} > I^{*}; \ a_{m} = 0 \ \forall m : I_{m,s_{m}} \le I^{*}\}.$$
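A corresponding sketch of the index-based selection, under the assumption that taking the R largest strictly positive indices implements the I* rule; the index values are hypothetical:

```python
def index_policy(indices, R):
    """Intervene in up to R machines with the largest strictly
    positive intervention indices."""
    candidates = [m for m, v in enumerate(indices) if v > 0.0]
    candidates.sort(key=lambda m: -indices[m])
    return candidates[:R]

current_indices = [-1.0, 0.5, 2.0, -0.2]  # one index per machine, current states
picked = index_policy(current_indices, R=1)       # -> [2]
picked_more = index_policy(current_indices, R=3)  # -> [2, 1] (negatives skipped)
```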
(4) Calendar Based Policy
Periodical or calendar-based policies follow a pre-defined intervention plan. In our setting, a periodical intervention time is fixed for all the machines, with the first intervention scheduled after a certain sojourn time (when the machine is operated from new). Even though in practice this calendar is determined by the acquisition date and the periodical interventions prescribed by the maker, in our formulation the intervention calendar is determined as follows: once the sojourn period is finished, at each period the decision-maker intervenes in as many machines as technicians are available, according to their level of wear (pristine machines are not intervened). Once a machine has been intervened, a new intervention is scheduled at a certain time in the future (the interval between interventions is the same for all machines and exogenously determined). Once all the machines have been scheduled and intervened, a new cycle begins. For the sake of the results' comparability with the index and other pure condition-based regimes, a variant of this policy that prevents a machine from being intervened in early states of wear has been included. These states coincide with the minimal state for which all the indices computed for the index policy are non-negative.
(5) Dynamic Calendar Policy
When the collection of machines includes equipment of different versions, ages or makers, their deterioration rates may differ and a rigid calendar-based policy may not be completely appropriate. To address these cases, we propose an extension of the pure calendar-based policy which is a hybrid of calendar- and condition-based policies: at each decision epoch, the least deteriorated machines in the plan are substituted by other, more deteriorated ones which are not in the current period's plan. The machines whose intervention is postponed join a queue or buffer of delayed machines that will be included in future plans.
This policy, depicted in Fig. 1, works as follows. First, all the machines are assigned an intervention period as in the calendar-based policy. Once the intervention stage has started, at each period the manager checks the condition of
² Please notice that this condition has only been imposed for the sake of results' comparability and does not apply in more general cases.
all the machines. If any machine scheduled for intervention in the current period is still in the pristine state, it is dropped from the list and its identifier is placed in the buffer. Simultaneously, if there are non-pristine machines in the buffer, they are immediately included, in order of wear, in the empty slots of the current plan. The decision-maker then checks the condition of all the other, non-scheduled, machines and identifies those units whose state of wear is worse than that of the best machine in the plan. If no machines are identified, the intervention stage starts and the maintenance tasks are deployed. If, otherwise, some machines are picked during the check, they are either allocated to an empty slot in the current plan (if any) or substitute scheduled machines which are in a better condition. The machine (or machines, if there are ties) with the largest state of wear in the current intervention plan is never substituted. The displaced machines are included in the buffer and the intervention actions are deployed. As with the purely calendar-based policy, a variant of this policy that prevents a machine from being intervened in early states of wear has also been considered.
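The plan-update step just described can be sketched as follows. This is a simplified reading of the policy (the scheduling of future intervention dates and the randomized tie-handling are omitted), with hypothetical machine ids and wear states.

```python
def dynamic_calendar_step(plan, buffer, others, states):
    """One decision epoch of the dynamic calendar policy (simplified sketch).
    plan: machine ids scheduled this period; buffer: postponed ids;
    others: non-scheduled ids; states[m]: wear level, 0 = pristine."""
    size = len(plan)
    # 1. Pristine machines are dropped from the plan into the buffer.
    buffer = list(buffer) + [m for m in plan if states[m] == 0]
    plan = [m for m in plan if states[m] > 0]
    # 2. Empty slots are refilled from the buffer, most worn first.
    pulls = sorted((m for m in buffer if states[m] > 0),
                   key=lambda m: -states[m])[: size - len(plan)]
    plan += pulls
    buffer = [m for m in buffer if m not in pulls]
    # 3. Non-scheduled machines more worn than the least worn scheduled one
    #    displace it; the most worn scheduled machine is never displaced.
    for m in sorted(others, key=lambda m: -states[m]):
        if len(plan) < size:
            if states[m] > 0:
                plan.append(m)
            continue
        least_worn = min(plan, key=lambda i: states[i])
        most_worn = max(plan, key=lambda i: states[i])
        if states[m] > states[least_worn] and least_worn != most_worn:
            plan.remove(least_worn)
            buffer.append(least_worn)
            plan.append(m)
    return plan, buffer

# machine 0 is pristine, machine 3 (wear 5) is not scheduled yet
states = {0: 0, 1: 3, 2: 1, 3: 5, 4: 2}
plan, buffer = dynamic_calendar_step([0, 1], [2], [3, 4], states)
```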
4 Numerical Experiments
A number of numerical experiments were conducted in order to compare the performance of each of the policies discussed in Sect. 3 under a collection of different scenarios characterized by the following variables: maintenance and operation costs; breakdown costs; transition rate; breakdown probability; and number of machines and technicians.
after one period of operation (notice that $s^{*} \in (0, s, s+1)$); finally, $C(s)$ is the cost of intervention when the machine is in state $s$.⁴
Once the indices are obtained, the threshold policy is computed by fixing the threshold equal to the value Forbid; likewise, the pure condition-based policy is only allowed to intervene in machines whose current state is larger than Forbid. For the calendar-based policies two alternative approaches were taken: in the first, the procedure was allowed to intervene in any machine with a state different from zero; in the second, intervention was only admitted for states larger than Forbid.
A summary of the results obtained in a benchmark example is shown in Table 1. The columns represent the seven alternative policies, with the suffix (−F) indicating that the parameter Forbid has been used. The rows show, respectively, the average visited state and its variance, the maximum visited state, the number of breakdowns over 1040 periods, the average intervened state and its variance, the total operation/maintenance cost and the total number of interventions. The cost structure and the indices are illustrated in Fig. 2.
Figure 3 illustrates the evolution of an isolated machine under two of the most relevant policies, Index and Calendar. It can be seen that, under the index policy, interventions tend to occur, on average, at state 5, whereas the evolution under the calendar policy tends to be more erratic, as this policy does not take the state of the machine into consideration when scheduling interventions.
Given that the numerical values of the results are not as important as the patterns detected during the numerical computations, we limit ourselves to presenting a graphical summary of our main findings. Firstly, we notice that the average cost of the operation/maintenance schedule over the
4 Closed-form expressions for these equations, which have been omitted for the sake of
brevity, are available from the authors upon request.
[Figure residue: Fig. 2 plotted Costs/Operation Costs and the Index values against States; Fig. 3 plotted Machine State against Period over 1000 periods under the index and calendar policies.]
20-year planning horizon follows a very similar pattern for all policies, with
the time-dependent policies showing larger costs (on average) than the state-dependent
ones. This can be seen in panel (a) of Fig. 4, where the index (blue),
Calendar-F (red) and threshold (green) policies are compared5. This panel
shows the total cost incurred by the seven policies over the 19440 scenarios
(the values are averages over 30 repetitions for each case)6.
The sketch suggests the existence of four well-defined blocks, corresponding to
each of the four different specifications for the number of machines. Each of these
main blocks is composed of three more or less differentiated segments, which
5 For the sake of simplicity, the dynamic calendar policies and the simple version of the
calendar policy have not been illustrated in this figure. However, their behavior does
not present significant differences with respect to the results shown in the displays.
6 Each point in the graph corresponds to the average cost, computed over 30 randomly
generated problems, of a particular combination of operation, maintenance
and breakdown costs, deterioration rates, breakdown probabilities and number of
machines and repairmen.
Cost/Efficiency of Maintenance Management 1403
[Figure residue: panels (a) Operation/Maintenance Costs vs. Cases, (b) Average State vs. Cases, (c) Number of Breakdowns (60 Machines), (d) Number of Interventions (60 Machines); legend: Index, Calendar, Threshold.]
Fig. 4. Cost performance of the index, Calendar-F and threshold policies over a 20-year
planning horizon
based policies intervene much less than the time-based ones. Moreover, the index
policy clearly outperforms the threshold one. It is also interesting to see that the
calendar policy intervenes on a constant number of machines irrespective of the
specific cost structure of the particular scenario, with interventions depending
only on the number of machines and technicians available.
[Figure residue: Fig. 5, panels (a) and (b), Operation/Maintenance Costs (×10^7) for 100 Machines and 4 Technicians; legend Bc = 500, 1500, 3000; sections Det. Rate Case 1 and Det. Rate Case 2; Cases 1–5.]
We finally take a look inside our plots in order to illustrate the inner
structure of the results. In order to do so, we pick the case where there are 100
machines, four technicians and the costs are quadratic. Figure 5 illustrates the
corresponding 810 results obtained for the index policy. Vertically, each panel
is divided into two sections, corresponding to the two alternative deterioration
rates proposed. Each section consists of five columns representing each
of the possible configurations of the breakdown probabilities. Horizontally, it is
possible to distinguish three blocks consisting of three groups of points (red, blue
and magenta). Each of these groups is associated with a different value of the
breakdown cost, as illustrated by the legend. Additionally, each of the three main
levels (consisting of three colors each) corresponds to a different operation cost.
For example, the southernmost red group corresponds to the smallest operation
and breakdown costs; the next one (blue) corresponds to the lowest operation and
the second breakdown cost, and so on. It can be seen that the colored points are
grouped in small clusters. Each of these clusters corresponds to the nine different
combinations of the maintenance cost parameters. The second panel depicts the
cases of a fixed maintenance cost and two alternative (high and low) operation
costs, together with the three alternatives for the breakdown cost. This panel
allows a clearer visualization of the behavior of our formulation. It can be
seen, for example, that a larger deterioration rate implies larger costs for all the
configurations (comparing the right- and left-hand side sections). In general,
the costs are larger as the likelihood of a breakdown increases; however, it is
worth noticing that for low breakdown costs this relationship may become negative.
This may reflect an incentive to intervene less when the replacement
cost of the equipment is small compared with the maintenance costs. This structure
can be observed throughout the set of experiments, with only small variations
between policies.
5 Conclusion
In this article we presented a collection of general-purpose machine maintenance
scheduling policies that mimic the intervention strategies used in practice.
These policies can be grouped into two broad categories: time-based and condition-based
policies. We additionally suggest a novel alternative consisting of a mixture
of both. Our aim has been to assess their relative performance
under different combinations of operation, intervention and failure costs, deterioration
rates and breakdown probabilities.
With this aim, we conducted a large number of experiments. In general, the
results accord with what intuition would suggest: condition-based policies in
general perform better than time-based ones. We also found that threshold policies
in general outperform the pure condition-based ones and are almost as efficient
as the more sophisticated index policies. Our results show that when intervention
and breakdown costs are low, time-based policies may perform better
than the index one, as the latter will try to postpone intervention tasks, increasing
the probability of breakdowns.
Our results are consistent with the trend observed in the last
few years in heavy-equipment-based industries: time-based policies
have gradually been relegated in favor of more efficient condition-based strategies.
However, time-based policies are still relevant, both for warranty, safety
and maintainability purposes and because in many cases condition monitoring
systems require an important investment that cannot always be afforded
by small firms.
1 Introduction
Because of the decreasing self-sufficiency rate, the declining number of farmers,
the increasing average age of farmers and limited agricultural land, the environment
of the agricultural business is in a tough phase [6]. In addition, the extreme narrowness of
the agricultural land area hinders production cost reduction, which leads to a decline in
the price competitiveness of domestic agricultural products against imported goods.
For this reason, in recent years the government has considered ways of agricultural
reform and the promotion of more efficient production activities. As a
result, the revised agricultural land law was implemented in 2009. Many
companies therefore try to adopt management approaches developed in manufacturing
industries, such as TPS (Toyota Production System), and aim for a stable and
efficient supply. However, this scientific approach to hidden problems was
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 118
1408 R. Aoki and H. Katayama
2 Related Study
Vassian [7] logically derived the periodic production ordering method that
achieves the minimization of inventory fluctuation by using the demand fore-
cast values, past production ordering and the term end stock amount. Based
on this, Chiyoma et al. [1] proposed the production planning model for agricul-
tural supply chain by the following mathematical Eqs. (1), (2), and (3), where
consideration is given to varied harvest quantity that peculiarly happens on
agricultural products.
$$P_t = \sum_{i=1}^{L} \hat{D}_{t,t+i} - \sum_{i=1}^{L-1} P_{t-i} - I_t + SS, \qquad (1)$$
$$I_t = I_{t-1} + Q_t - D_t, \qquad (2)$$
$$Q_t = P_{t-L} + \varepsilon_t. \qquad (3)$$
Symbols
Pt : Production quantity planned at period t, completed at period t + L;
D̂t,t+i : Demand quantity of period t + i predicted at the end of period t;
It : Inventory quantity at the end of period t;
SS : Safety stock level;
L : Production lead time + 1;
Dt : Demand quantity at period t;
Qt : The yield quantity at period t, where it does not always coincide with
the planned quantity;
εt : The difference between Qt and Pt−L .
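The ordering rule in Eqs. (1)–(3) can be sketched directly from the symbol list. This is a hedged illustration under the stated definitions; the function names and the numerical inputs are hypothetical, not from the paper.

```python
# Sketch of Eqs. (1)-(3); variable names follow the symbol list
# (P: planned production, D_hat: demand forecasts, I: inventory, SS: safety stock).

def plan_production(D_hat, past_P, I_t, SS, L):
    """Eq. (1): order enough to cover the L-period forecast, net of
    orders already in the pipeline and current inventory, plus safety stock."""
    return sum(D_hat[:L]) - sum(past_P[:L - 1]) - I_t + SS

def update_inventory(I_prev, Q_t, D_t):
    """Eq. (2): inventory balance at the end of period t."""
    return I_prev + Q_t - D_t

def yield_quantity(P_lagged, eps_t):
    """Eq. (3): realized yield deviates from the lagged plan by eps_t."""
    return P_lagged + eps_t

# One period with L = 3: forecasts for t+1..t+3 and two outstanding orders.
P_t = plan_production(D_hat=[50, 55, 60], past_P=[48, 52], I_t=30, SS=20, L=3)
print(P_t)  # 165 - 100 - 30 + 20 = 55
```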
Estimates δ̂i and θ̂k can be evaluated by the least-squares method through minimizing
the total difference between the actual harvest data and the estimated
quantity, as shown in Eqs. (4), (5), (6) and (7):
$$\min \sum_{i \in N} (\hat{C}_i - C_i)^2, \qquad (4)$$
where the sample data of δ̂i is obtained by the formula $\hat{\delta}_i = C_i / (A_i \hat{\theta}_k)$, which is
deduced from formula (5), and S is the operator for the standard deviation.
Step 3. Calculation formula of γ̂:
$$A_i = T/N, \qquad (11)$$
$$\hat{C}_i = A_i \hat{\delta}_i \theta, \qquad (12)$$
$$\hat{d}_i = R_i + S_i. \qquad (13)$$
Symbols
N : Number of segments of farmland;
T : Seeding capacity of entire farmland [kg];
Ai : Seeding quantity per segment at planning term i [kg/seg];
Ĉi : Estimated harvest quantity at planning term i [kg/seg];
δ̂i : Estimated value of aggregated yield of seeding quantity at planning term i;
θ : Supposed weight gain rate (percentage of weight increase);
dˆi : Estimated harvest date of seeded plants at term i;
Ri : Growth duration of seeds at planning term i;
Si : Seeding date at planning term i.
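The Step-3 formulas can be sketched as follows; a minimal illustration under the symbol-list definitions, with all numerical values hypothetical.

```python
# Sketch of Eqs. (11)-(13); names follow the symbol list.

def seeding_per_segment(T, N):
    """Eq. (11): spread the total seeding capacity evenly over N segments."""
    return T / N

def estimated_harvest(A_i, delta_hat_i, theta):
    """Eq. (12): harvest = seeding quantity x aggregated yield x weight gain."""
    return A_i * delta_hat_i * theta

def estimated_harvest_date(R_i, S_i):
    """Eq. (13): estimated harvest date = growth duration + seeding date."""
    return R_i + S_i

A = seeding_per_segment(T=90.0, N=9)              # 10 kg per segment
C_hat = estimated_harvest(A, delta_hat_i=1.2, theta=5.0)
print(C_hat)  # 10 * 1.2 * 5 = 60.0 kg/seg
```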
In addition, an example of a seeding schedule is shown in Fig. 4, and an example
of input and output dates is shown in Fig. 5, where the number of segments is 9.
5 Result
In this Sect. 2 results derived from the model/procedure described in the previous
chapter are summarized.
Heijunka Operation Management of Agri-Products Manufacturing 1413
Table 3. Result of “Mura” reduction: first experiment (∗ is focused variable and performance)
• The before/after ratio of the standard deviation of the harvest interval is approximately
0.56 (from 7.71 to 4.32), and that of the harvest quantity is approximately 0.26
(from 18.72 to 4.89).
• The before/after ratio of the average harvest interval is approximately 0.7 (from
8.28 to 5.82), and that of the harvest quantity is approximately 0.8 (from 71.27
to 59.29).
The relationship between before and after improvement is given in Fig. 6. By the above
procedure, harvest date and quantity can be predicted more accurately by reducing
Mura, which is a general turbulence factor, and stock volume and space are
expected to be improved by reducing both seeding interval and quantity, as mentioned
in Fig. 3. This procedure leads us toward the Heijunka world. Workers' skills
and the number of necessary workers can be improved, and the
Muri of the current system can then be eliminated.
6 Concluding Remarks
In this study, an agricultural production management method is suggested for
achieving Heijunka operation and applied to the production management division
of a collaborating agri-company. The turbulence factors in the agricultural production
process were clarified and their effect on performance was analyzed by simulation
experiments controlling the levels of these factors. The obtained results revealed that
the elimination of turbulence elements such as seeding interval, growth duration and
aggregated yield is critical for performance improvement. Furthermore, a seeding
schedule on segmented farmland can respond flexibly to demand changes under
restricted usability of farmland. The obtained results can serve as a fine example of lean
management transfer to the agriculture industry.
References
1. Chiyoma H, Murata K, Katayama H (2014) On performance analysis of sustainable
closed logistics system in agricultural industry. In: Proceedings of the 17th annual
conference of Japan society of logistics systems, pp 101–106
2. Hatake Company Co., Ltd. HP (2016) (in Japanese). http://www.t-k-f.jp/company.
html
3. Ito K (2006) Mathematical programming problems in agricultural management.
Oper Res Manage Sci 5:264–267 (in Japanese)
4. Katayama H (2016) Transfer activities of lean management to other industries-
transplanting Heijunka concept for leanised operations. In: Proceedings of the 20th
Cambridge international manufacturing symposium, pp 1–17
5. Katayama H, Aoki R (2016) On Heijanka manufacturing system for agricultural
products. In: Proceedings of the 46th computers and industrial engineering, p 8
6. Ministry of Agriculture, Forestry and Fisheries HP (2015). http://www.maff.go.jp/
7. Vassian HJ (1955) Application of discrete variable servo theory to inventory control.
J Oper Res Soc Am 3:272–282
Optimizing Reserve Combination with
Uncertain Parameters (Case Study: Football)
1 Introduction
Selecting an optimal reserve set and implementing the best replacement strategy
is a challenge for most industries. Likewise, in multiplayer sports, choosing
the right combination for the reserve team and adopting a proper replacement
strategy requires advance planning and optimization. Inaccurate arrangements can
cause significant financial loss as well as reputational damage.
When a team enters a tough league to play against different opponents with
different methods, these issues become even more critical. The success or failure
of each team depends on its members' abilities and skills. Therefore, it is essential
to arrange the best combination in both the main and the reserve team in order to
obtain optimal results [2,9]. Few structured models have addressed this
problem. Most of the studies emphasize the combination of the main team [1].
Since decisions are made prior to the games, we face probability and uncertainty
in the problem, for which we use uncertain programming. It is a powerful tool
for modeling industrial, human and natural systems involving uncertainty and probability.
Certainly, in intensive games it is impossible to rely on only 11 players. Due
to fatigue, injury, disciplinary cases and other reasons, players are not available in all
1418 M.A. Monirian et al.
games, and the coach should use both main and reserve players to achieve victory and
profitability. The importance of reserve players, when substituted at the proper time
for a suitable player, is hidden to no one: with their fresh energy,
skills and power they can be influential. For instance, in the final game of the
2014 World Cup, substitute players led the German team to win
the game in extra time.
Players are divided into five posts: goalkeeper, defender, midfielder, sides and forward.
According to the game's technique, the number of players in each post (except the
goalkeeper) can differ. This classification is done according to each player's
abilities; depending on their skills, players can be suitable for several posts.
Broadly speaking, in selecting the reserve team we assume that we have a main team
and that, according to its constraints, situation and game circumstances, a reserve
team should be selected. The replacement strategy, conforming to the game process
and the players' situation, declares which player on the pitch should be replaced
by which player from the reserve team, and at what time, in order to preserve the quality
of the team. A football team plays in diverse leagues. Sometimes the schedule
becomes so compressed that there may even be 3 or 4 games in a week. This
causes player exhaustion. Consequently, the main combination is generally not
considered fixed, so the reserve combination selection problem and the replacement
planning are very important issues. The first and main step in selecting the reserve
team and replacement strategy is the recognition and evaluation of the main team and
the desired options for the reserve, so that we are able to compare different players in each
post with stable criteria. First, players are evaluated with belief degrees and, by using a
mathematical model, the reserve team is chosen so as to maintain team quality at the maximum
level. For rating players we need to consider a number of quantitative and
qualitative characteristics, such as individual skills, statistical
data from previous matches, physical readiness, psychological factors, injury and
also the opponent's condition [8]. On average, each football team in a normal situation
has more than 24 players, of whom 10 play in the main combination and 6
are in the reserve team. In this problem, we do not consider goalkeeper
selection: because goalkeepers run less during the game, their fatigue is lower; however, for
compulsive situations such as injury or expulsion it is necessary to have a goalkeeper
in the reserve team [3,8]. For scoring and evaluating players, 18 indices and criteria
such as heading, jumping, dribbling, tackling and so on are defined; these
criteria are approved by football simulation companies. We value these criteria
according to the coach's opinion. These criteria are presented in Table 1.
Uncertainty theory was founded by Liu [5] in 2007 and has been studied by many
researchers. It is a new method for modeling undetermined
phenomena based on the normality, duality, subadditivity and product axioms.
Optimizing Reserve Combination with Uncertain Parameters 1419
2 Preliminary
Uncertainty theory is a branch of axiomatic mathematics based on normality,
duality, subadditivity and product axioms. In this section, we introduce some
concepts in uncertainty theory, which are used throughout this paper.
{ξ ∈ B} = {γ ∈ Γ |ξ(γ) ∈ B} (3)
is an event.
φ(x) = M {ξ ≤ x} (5)
provided that the expected value E [ξ] exists.
3 Uncertain Programming
Uncertain programming, which was first proposed by Liu [5], is a type of mathematical
programming involving uncertain variables. Assume that x is a decision
vector and ξ is an uncertain vector. Since the uncertain programming
model contains the uncertain objective function f (x, ξ) and the uncertain constraints
gj (x, ξ) ≤ 0, j = 1, 2, · · · , p, Liu [5] proposed the following uncertain programming
model:
$$\min_x\; E[f(x, \xi_1, \xi_2, \cdots, \xi_n)] \qquad (13)$$
$$\text{s.t.}\quad \mathcal{M}\{g_j(x, \xi_1, \xi_2, \cdots, \xi_n) \le 0\} \ge \alpha_j, \quad j = 1, 2, \cdots, p.$$
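The structure of model (13) — minimize an expected objective subject to measure-constrained feasibility at level α — can be illustrated by sampling. Note that this is only an illustrative stand-in: uncertainty theory's measure M is not a probability, and here scenario frequencies are used in its place; all functions and data are hypothetical.

```python
import random

# Illustrative sample-average sketch of the shape of model (13):
# minimize the expected objective over decisions that satisfy the
# constraint in at least a fraction alpha of sampled scenarios.

def evaluate(x, scenarios, f, g, alpha):
    """Return (expected objective, feasible?) for decision x over sampled xi."""
    exp_f = sum(f(x, xi) for xi in scenarios) / len(scenarios)
    ok = sum(g(x, xi) <= 0 for xi in scenarios) / len(scenarios) >= alpha
    return exp_f, ok

random.seed(0)
scenarios = [random.uniform(0.8, 1.2) for _ in range(1000)]
f = lambda x, xi: (x - xi) ** 2          # hypothetical objective
g = lambda x, xi: xi - x                 # hypothetical constraint: x >= xi
best = min((x for x in [0.9, 1.0, 1.1, 1.2, 1.3]
            if evaluate(x, scenarios, f, g, 0.9)[1]),
           key=lambda x: evaluate(x, scenarios, f, g, 0.9)[0])
```

Among the candidate decisions, only those covering at least 90% of the sampled disturbances are feasible, and the cheapest feasible one is returned.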
Parameters:
Decision variable:
$$x_{ij} = \begin{cases} 1, & \text{if there is a need to replace player } i \text{ with player } j\\ 0, & \text{otherwise.} \end{cases}$$
The proposed model is linear. The objective function (15) aims to maximize
the reserve team's quality, which is obtained by multiplying each player's value in
the desired post by the probability of the need for replacement in that post. Constraint
(16) determines the maximum number of allowable replacements. Constraint (17)
indicates the availability of players, because it is possible that a player is not
in the team owing to injury or suspension. Constraint (18) states
that each post should not be vacant. Constraint (19) indicates the decision variable
type.
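The selection logic behind (15)–(19) can be sketched on a toy instance by brute force. This is not the authors' CPLEX model: the player data, need probabilities and team size are hypothetical, and only the availability and no-vacant-post constraints are shown.

```python
from itertools import combinations

# Toy sketch of the model's structure: pick a reserve set maximizing
# sum over posts of (replacement-need probability x player value),
# subject to availability (17) and no vacant post (18).

players = {                      # player -> (post, value, available)
    "A": ("defender", 0.8, True),
    "B": ("defender", 0.6, True),
    "C": ("midfielder", 0.9, True),
    "D": ("midfielder", 0.7, False),   # injured: excluded by constraint (17)
    "E": ("forward", 0.85, True),
}
need = {"defender": 0.5, "midfielder": 0.7, "forward": 0.4}
posts = set(need)

def quality(team):
    """Objective (15) on a candidate reserve set."""
    return sum(need[players[p][0]] * players[p][1] for p in team)

best = max(
    (set(t) for t in combinations([p for p in players if players[p][2]], 3)
     if {players[p][0] for p in t} == posts),       # constraint (18)
    key=quality,
)
print(best)  # {'A', 'C', 'E'}
```

For realistic instance sizes the model would of course be handed to an integer programming solver rather than enumerated.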
6 Experimental Results
The proposed model was solved with IBM ILOG CPLEX Optimizer v12.3. We
chose a team with 24 players and solved the problem with these players. The
belief degrees were obtained from the coach's opinion. The results are presented in
the appendix. In football games the maximum number of allowed replacements is 3. The
results of solving the models using these data are examined in this section. The
main players' numbers, determined according to the coach's opinion, are as
follows; this is an input for the model. The reserve team is chosen based on
the model's input.
22 20 18 15 14 9 5 2 7 1
Table 2 shows the comparison of the objective functions in the two cases, which
indicates the model's efficiency.
7 Conclusion
The purpose of this article is to propose a model for choosing the reserve team so
that the quality reduction is minimized. We proposed a linear mathematical model.
To obtain the uncertain parameters we use belief degrees and uncertain programming.
For future work, one can investigate main combination selection considering
future games while simultaneously selecting the reserve team.
Acknowledgement. This work was supported by a grant from Professor Baoding Liu, Tsinghua
University. The authors appreciate his financial and scientific support. We also appreciate
Professor Mitsuo Gen for his guidance.
Appendix
See Tables 3, 4, 5, 6, and 7.
Criteria | Defender | Midfielder | Sides | Forwards | Criteria | Defender | Midfielder | Sides | Forwards
C1 (0.9,1) (0.4,0.5) (0.6,0.75) (0.85,1) C10 (0.45,0.55) (0.6,0.75) (0.6,0.75) (0.85,1)
C2 (0.45,0.55) (0.6,0.75) (0.6,0.75) (0.85,1) C11 (0.85,1) (0.6,0.75) (0.45,0.55) (0.85,1)
C3 (0.85,1) (0.85,1) (0.85,1) (0.6,0.75) C12 (0.45,0.55) (0.45,0.85) (0.45,0.55) (0.6,0.75)
C4 (0.45,0.55) (0.6,0.75) (0.85,1) (0.6,0.75) C13 (0.4,0.5) (0.85,1) (0.4,0.5) (0.6,0.75)
C5 (0.6,0.75) (0.85,1) (0.85,1) (0.85,1) C14 (0.6,0.75) (0.45,0.55) (0.85,1) (0.85,1)
C6 (0.45,0.55) (0.9,1) (0.6,0.75) (0.85,1) C15 (0.3,0.45) (0.85,1) (0.6,0.75) (0.6,0.75)
C7 (0.45,0.55) (0.4,0.5) (0.6,0.75) (0.85,1) C16 (0.3,0.45) (0.6,0.75) (0.85,1) (0.85,1)
C8 (0.45,0.55) (0.6,0.75) (0.85,1) (0.85,1) C17 (0.6,0.75) (0.85,1) (0.6,0.75) (0.85,1)
C9 (0.45,0.55) (0.85,1) (0.9,1) (0.6,0.75) C18 (0.6,0.75) (0.85,1) (0.85,1) (0.6,0.75)
Player number 1 2 3 4 5 6 7 8
P (0.6,0.75) (0.3,0.5) (0.1,0.3) (0.1,0.2) (0.3,0.5) (0.3,0.5) (0.1,0.3) (0.3,0.4)
Player number 9 10 11 12 13 14 15 16
P (0.4,0.5) (0.5,0.6) (0.3,0.4) (0.55,0.8) (0.2,0.3) (0.3,0.5) (0.6,0.8) (0.3,0.45)
Player number 17 18 19 20 21 22 23 24
P (0.3,0.4) (0.5,0.7) (0.1,0.3) (0.1,0.3) (0.3,0.5) (0.3,0.45) (0.3,0.5) (0.3,0.4)
References
1. Abdulwhab A, Billinton R et al (2004) Maintenance scheduling optimization using a
genetic algorithm (GA) with a probabilistic fitness function. Electr Power Compon
Syst 32(12):1239–1254
2. Ahmed F, Deb K, Jindal A (2013) Multi-objective optimization and decision making
approaches to cricket team selection. Appl Soft Comput 13(1):402–414
3. Liu B (2002) Theory and Practice of Uncertain Programming. Physica-Verlag, Hei-
delberg
4. Liu B (2009) Some research problems in uncertainty theory. J Uncertain Syst 3(1):3–
10
5. Liu B (2009) Uncertain entailment and modus ponens in the framework of uncertain
logic. J Uncertain Syst 3(4):243–251
6. Liu B (2010) Uncertainty Theory: A Branch of Mathematics for Modeling Human
Uncertainty. DBLP
7. Liu B (2015) Uncertainty theory. Stud Comput Intell 154(3):1–79
8. Tavana M, Azizi F et al (2013) A fuzzy inference system with application to player
selection and team formation in multi-player sports. Sport Manag Rev 16(1):97–110
9. Trninić S, Papić V et al (2008) Player selection procedures in team sports games.
Acta Kinesiologica 2(1):24–28
A Bayesian-Based Co-Cooperative Particle
Swarm Optimization for Flexible Manufacturing
System Under Stochastic Environment
1 Introduction
Over the past sixty years, a great deal of research has been conducted
on the job shop scheduling problem (JSP), which is a branch of the scheduling
problem and highly popular in the manufacturing industry. The JSP is a classical
combinatorial optimization problem, and it is recognized as NP-hard under
precedence and resource constraints [2,5]. The flexible job shop scheduling problem
(FJSP) is a generalization of the classical JSP for flexible manufacturing
systems (FMSs).
A Bayesian-Based Co-Cooperative Particle Swarm Optimization 1429
nodes represent variables and arcs show the relationships between linked
nodes. So, if we can use a Bayesian network to show the relationships among the
variables, we no longer need to determine the number of subcomponents
and their sizes. Among applications of Bayesian networks, the Bayesian optimization
algorithm (BOA) is widely used in scheduling, such as resource assignment, task
scheduling and so on. In 2011, Yang et al. [15] presented a novel scheduling algorithm
based on BOA for heterogeneous computing environments. Li et al. [7]
proposed a Bayesian optimization algorithm to solve task assignment problems
in heterogeneous computing systems. Hao et al. [4] proposed a novel cooperative
Bayesian optimization algorithm (CoBOA) to overcome the challenges mentioned
in the field of the multiple resources scheduling problem (MRSP). Inspired
by BOA, in this paper we propose a learning-based grouping mechanism in
which we apply BOA to obtain the optimal Bayesian network structure for finding
the potential relationships among variables, and then divide the variables
according to those relationships (the network structure).
The remainder of this paper is organized as follows: Sect. 2 gives the formulation
process of the S-fJSP and its mathematical programming model; Sect. 3 describes
the BNPSO, which combines BOA and particle swarm optimization (PSO) with
multiple subpopulations, in detail; Sect. 4 presents detailed numerical experiments and
computational results; finally, Sect. 5 concludes the paper.
2 Mathematical Formulation
In order to solve the stochastic flexible job-shop scheduling problem (S-fJSP), we
assume that the probability distribution of the processing time is known in
advance. In this paper, we use a pure integer programming model in which
the processing times are treated as stochastic variables. The S-fJSP can be formulated
as an extended version of the fJSP. In contrast to the conventional fJSP, each
operation oij is carried out under uncertain random disturbance with a pre-given
expected value E [pij ] and variance vij , where pij is the processing time of oij
on the machines. The distribution of the variance can be predicted from
experimental data, such as a normal distribution, uniform distribution, exponential
distribution, etc. The model also rests on several assumptions, as follows:
The S-fJSP has already been confirmed as one of the NP-hard combinatorial
problems. There are N jobs and M machines to be scheduled; furthermore, each
job is composed of a set of operations and the operation order on machines is
Parameter:
Decision variables:
$$x_{ikj} = \begin{cases} 1, & \text{if } o_{ik} \text{ is performed on machine } j\\ 0, & \text{otherwise.} \end{cases}$$
The objective function is the minimization of the expected makespan, as follows:
$$\min\; E[C_M] = E\left[\max_{i}\,\max_{k}\,\max_{j}\; \xi_{c_{ikj}}\right]. \qquad (1)$$
In the scheduling process, operations are not allowed to be interrupted until
they are completed. For each resource, the resource constraints are described
as follows in Eq. (2):
Each task consists of several operations and each operation is allocated
to a machine; although each operation can be processed on different
resources, the operation sequence within each task must be observed. Note that
of the two constraints (3) and (4), only one needs to be satisfied at a time.
Each resource admits only two cases: chosen or not chosen. Constraints
(6) and (7) give the restrictions on the decision variables.
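For a fixed machine assignment and dispatch order, the expected makespan of objective (1) can be estimated by sampling the stochastic processing times. This is a hedged sketch only: the schedule representation, the Gaussian disturbance and all data are hypothetical, not the paper's encoding.

```python
import random

# Sketch: estimate E[C_M] for a fixed schedule by Monte Carlo sampling
# of the stochastic processing times.

def makespan(ops, times):
    """ops: list of (job, machine) in dispatch order; times: processing times.
    Each operation starts when both its job and its machine are free."""
    job_ready, mach_ready = {}, {}
    end = 0.0
    for idx, (job, mach) in enumerate(ops):
        start = max(job_ready.get(job, 0.0), mach_ready.get(mach, 0.0))
        finish = start + times[idx]
        job_ready[job] = mach_ready[mach] = finish
        end = max(end, finish)
    return end

random.seed(1)
ops = [(0, 0), (1, 1), (0, 1), (1, 0)]           # two jobs, two machines
mean = [3.0, 2.0, 4.0, 3.0]                       # pre-given expected times
samples = [makespan(ops, [random.gauss(m, 0.3) for m in mean])
           for _ in range(2000)]
expected_cmax = sum(samples) / len(samples)       # close to the mean-time value 7.0
```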
3 Algorithm Design
BNPSO starts by randomly grouping all decision variables, encoded by real numbers,
into several subcomponents, each containing s variables, as shown
in Fig. 1. Then, the global optimum is sought within the co-evolutionary
framework. During evolution, the data set for BN learning is recorded
according to the value variation of each decision variable.
Every 50 iterations, the structure of the BN is updated following BOA and the
grouping scheme is adjusted. Meanwhile, the parameters used in the evolutionary
algorithm are self-adjusted. Several key parts of BNPSO are detailed in the following subsections.
$$v_i(t+1) = \omega v_i(t) + c_1\,\mathrm{rand}_1\,[p_{best}(t) - x_i(t)] + c_2\,\mathrm{rand}_2\,[l_{best}(t) - x_i(t)], \qquad (8)$$
$$x_i(t+1) = x_i(t) + v_i(t+1). \qquad (9)$$
In order to explore a larger solution space, a velocity and position update
model with both Cauchy and Gaussian distributions was proposed in [15]. This
model updates the position directly through the personal best pbest and the local best lbest.
pbest is the best particle in the search history of each particle; lbest is
the best among the ith, (i − 1)th and (i + 1)th particles in each
iteration. rand and p are used to decide which formula is selected: rand is
a random value within [0,1] and p is a value given at the initial stage. C(1) denotes
a Cauchy-distributed value with location 1; N (0, 1) denotes a normally
distributed value with mean 0 and standard deviation 1. The position
updating formula is shown below; each time, rand is generated and compared
with p to decide which equation is chosen. If rand ≤ p, the position is
updated following the first formula; otherwise, the second.
$$x(t+1) = \begin{cases} p_{best}(t) + C(1)\,\big|p_{best}(t) - l_{best}(t)\big|, & \text{if } rand \le p\\ l_{best}(t) + N(0,1)\,\big|p_{best}(t) - l_{best}(t)\big|, & \text{otherwise.} \end{cases} \qquad (10)$$
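The Cauchy/Gaussian position update of Eq. (10) can be sketched in a few lines. This is an illustrative sketch, not the authors' code: C(1) is emulated as 1 plus a standard Cauchy sample via the inverse CDF, and all numerical values are hypothetical.

```python
import math
import random

# Sketch of Eq. (10): with probability p the particle jumps around pbest
# using a heavy-tailed Cauchy step (exploration); otherwise it moves around
# lbest with a Gaussian step (exploitation).

def cauchy(loc=1.0):
    """Cauchy sample located at `loc`, drawn by the inverse-CDF method."""
    return loc + math.tan(math.pi * (random.random() - 0.5))

def update_position(pbest, lbest, p):
    spread = abs(pbest - lbest)
    if random.random() <= p:
        return pbest + cauchy(1.0) * spread    # C(1) branch of Eq. (10)
    return lbest + random.gauss(0.0, 1.0) * spread

random.seed(7)
positions = [update_position(pbest=1.0, lbest=0.5, p=0.5) for _ in range(5)]
```

The heavy tails of the Cauchy branch occasionally produce long jumps, which is exactly why it is paired with the better-behaved Gaussian branch.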
Table 3. Mean value and variance value of makespan for all algorithms
5 Conclusion
This paper presents an effective BNPSO, which solves the S-fJSP with uncertainty
in the processing times. It minimizes the expected makespan within a
reasonable time. Within the framework of the proposed BNPSO, we construct the BN
structure according to the data showing the relationships among variables and
adjust the grouping scheme based on the independence revealed by the structure. We first
proved the effectiveness of the BN-based grouping mechanism, and then we compared
the proposed algorithm with other well-known algorithms; the results
show that BNPSO performed better than the other algorithms. In our future work,
1438 L. Sun et al.
we will extend BNPSO to real case studies based on multiobjective stochastic
flexible job-shop models (moS-JSP).
Exchange Rate Movements,
Political Environment and Chinese Outward FDI
in Countries Along “One Belt One Road”
1 Introduction
Since China adopted its “going out” policy in 2001, its OFDI flows have grown
rapidly, reaching more than $100 billion in 2013. As an upgraded strategy for
foreign investment, the “OBOR” was initiated in 2013 to better serve China’s
“going out” policy. In the area defined by this “OBOR” initiative, more than
60 countries have expressed their interest in cooperating with this program.
In addition, the China-led Asian Infrastructure Investment Bank (AIIB) was
created to provide professional, efficient financial support for the tremendous
infrastructure needs under the “OBOR” initiative. At present, a total of 57
countries covering five continents have been approved as founding members of
this international financial institution. As an Asian regional multilateral development agency, the AIIB will play a constructive role in strengthening infrastructure construction as an engine of economic growth, improving capital utilization efficiency, and raising the level of regional development.
2 Literature Review
Theoretical and empirical studies have looked at the foreign investment behavior
of multinationals and have identified exchange rate movements and political
environment as important determinants of outward FDI. We summarize these studies as follows.
The interest of policy makers in the impact of exchange rates and their volatility on international capital flows such as FDI is growing. On the empirical side, many scholars have investigated the relationship between exchange rates and FDI using samples of developed and newly emerging countries, work that is critical to the formulation of FDI policies.
With regard to the exchange rate level, one strand of the literature empha-
sizes the positive correlation between the appreciation of source country currency
and FDI outflows based on the relative wealth effect, the expectation of future
profitability and capital market imperfection. Schmidt and Broll [23] showed
that the real exchange rate level has a positive effect on outward FDI flows in nine US industries. Takagi and Shi [24] used panel data on Japanese FDI flows to nine dynamic Asian economies during 1987–2008 and found that the current depreciation of host-country currencies significantly increased FDI inflows from Japan.
from Japan. However, another strand emphasizes the ambiguous impact of cur-
rency appreciation. Pain and Welsum [21] provided no clear conclusions as to the
impact of exchange rate movements on FDI due to the types of multinational
activities undertaken in different countries. Lee and Min [17] took Korea’s eight
major FDI source countries in three different regions in the world as samples and
tried to identify the changing behavior of foreign investors in Korea following the
1997 crisis. The change in FDI in response to exchange rate level is quite mixed,
which is consistent with recently developed real option-based FDI theory.
The theoretical and empirical literature reaches contradictory conclusions about the influence of exchange rate volatility on FDI flows. Risk-averse multinationals consider information-searching costs and may decide to put off investing overseas when faced with dramatic exchange rate volatility. Udomkerdmongkol et al. [25] found evidence of a negative impact of exchange rate variation on US FDI in emerging countries. However, another group of studies
highlight the positive impact of exchange rate volatility on FDI. Cushman [6], in
his theoretical models, concluded that the exchange rate uncertainty may posi-
tively affect FDI. In response to risk, the multinational firm reduces exports to
the foreign country but offsets this somewhat by increasing foreign capital and
stimulating direct investment. Goldberg and Kolstad [11] also came to a similar
conclusion, namely, an increase of the uncertainty stimulates FDI. Deseatnicov
and Akiba [8] employed a panel data analysis of 56 developed and developing
countries (at the country and industry level) and found that exchange rate volatility positively affects Japanese FDI activities across all industries. The results suggest that Japanese MNCs can tolerate a slight increase in exchange rate volatility because its level may remain far below what would be necessary to deter investment.
Although Chinese OFDI has become a more interesting topic, relatively few
empirical studies have been conducted. Some research has contributed to the literature by examining the major determinants of Chinese OFDI, including ER risks. But the results are controversial, and these studies apparently did not include samples from enough countries. For example, Jin [14] found that the
appreciation of RMB promotes FDI after the reforms in the ER regime in 2005.
Hu [13] used panel data covering 49 countries from 2003 to 2010 and found that Chinese OFDI was positively related to the ER level and negatively related
to ER volatility. Liu and Deseatnicov [19] found that Chinese MNCs tend to invest in locations with higher financial uncertainty: exchange rate volatility increases the competitive advantage of Chinese MNCs in developing countries relative to MNCs from developed countries, motivating an increase in Chinese OFDI.
1442 W. Zu and H. Liu
population of 3.08 billion, accounting for about 44% of the world’s population.
The total GDP of this area reaches $12.8 trillion, accounting for 17% of the
global economy.
Fig. 1. Chinese OFDI stock in the “OBOR” countries (2003–2015). Source: Statistical Bulletin of China’s Outward Foreign Direct Investment.
Table 2. Top ten countries of Chinese OFDI stock along the “OBOR” by the end of
2015
region is highest among all regional groups. Kazakhstan, Mongolia and Russia
are currently among the largest FDI recipients in the region. It is worth noting that the transitional region might become a new focus of Chinese foreign investment, especially as motivated by the “OBOR” initiative.
(2) The industry structure and investment projects of Chinese OFDI
There has been a trend of diversification in the industrial structure of Chinese
OFDI along the “OBOR”. In 2005, the Chinese large-scale project investment
along the “OBOR” involved only the energy industry, mainly dominated by oil and supplemented by natural gas and coal. Between 2006 and 2008, China’s
large-scale project investment extended to the metal ore industry, real estate,
transportation and other industries. The Chinese enterprises further expanded
their investment to high-tech, agriculture, finance, and chemical industries from
2009 to 2015. These changes show that China’s OFDI along the “OBOR” has undergone a steady process of upgrading. In general, the dominant industry is energy; metal ore, real estate, and transportation rank second to fourth, while agriculture, chemicals, and high-tech industries account for a smaller share of the total.
The “OBOR” initiative treats the interconnection of infrastructure as a breakthrough area for cooperation. Driven by this initiative, China is actively developing high-speed railway networks, expressway networks, and regional aviation networks. In 2015, China planned to invest 1.04 trillion yuan in infrastructure construction along the “OBOR”, especially in railways, water conservancy and port engineering, and airports, in that order (Fig. 2).
Fig. 2. China’s planned investment projects in “OBOR” countries (2015). Source: Minsheng Securities.
Fig. 3. Chinese investment subjects in “OBOR” countries (by the end of the first half of 2014). Source: Minsheng Securities.
4 Data
4.1 Sample Selection
The aim of this paper is to examine the impact of exchange rate movements and
political environment on Chinese OFDI along the “OBOR”. Since the Ministry
of Commerce, National Bureau of Statistics, and State Administration of Foreign
Exchange started to publish a detailed Statistical Bulletin of China’s Outward
Foreign Direct Investment from 2003, our dataset contains the Chinese OFDI
net flows to selected countries for a period of 2003–2015.
We recognize that a large number of country samples is critical to our analysis. The land and maritime Silk Roads pass through more than 60 countries, carrying flows of culture, religion, and trade. However, these vast networks cover many small developing countries for which detailed official FDI data cannot be found in the Statistical Bulletin of China’s Outward Foreign Direct Investment, so our study faced difficulties in collecting enough country samples along the “OBOR”. We therefore also include countries that are not geographically located on the “OBOR” but belong to its coverage, such as European countries, Africa, and the Far East. In this paper, the total sample consists of 65 countries, on the basis of all available information (see Table 4 in the Appendix).
LMEANR = log( Σ_{i=1}^{24} x_i / 24 ).
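Under the reading that the x_i are the 24 monthly exchange-rate observations for a country, LMEANR can be computed as below. This is a sketch; the function and variable names are ours, not the authors'.

```python
import math

def lmeanr(monthly_rates):
    """Log of the 24-month mean exchange rate, per the formula above."""
    if len(monthly_rates) != 24:
        raise ValueError("expected 24 monthly observations")
    return math.log(sum(monthly_rates) / 24)
```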
actions can result in threats or harm to the business climate. Thus, we calculate the sum of six PRS indicators to capture the features of the political environment in the host country. An increasing value (from 0 to 6) represents lower political risk and a better political environment.
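As a toy illustration of this PE measure, assuming each PRS component has been rescaled to [0, 1] so that the sum runs from 0 to 6 (the component names and values below are placeholders, not PRS data):

```python
# Hypothetical rescaled PRS components for one host country, each in [0, 1].
prs_components = {
    "government_stability": 0.8,
    "corruption": 0.5,
    "law_and_order": 0.7,
    "internal_conflict": 0.6,
    "external_conflict": 0.9,
    "bureaucracy_quality": 0.4,
}

# PE is the sum of the six indicators: higher (toward 6) = lower political risk.
pe = sum(prs_components.values())
```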
(3) Control variables
We look at the macroeconomic determinants of outward FDI. The host country’s CPI index is used to deflate nominal values, with 2010 as the base year. Following previous research, we tested the following main factors:
• LRGDP is the natural logarithm of host country’s real GDP (Deflated by
CPI) based on current US dollars in a given year, which reflects the host
country’s future market potential or absorptive capacity.
• NATURAL is the natural logarithm of total natural resources rents (% of
GDP). The total natural resource rents are defined as the sum of oil rents, nat-
ural gas rents, coal rents (hard and soft), mineral rents, and forest rents. The
estimates of natural resources rents are calculated as the difference between
the price of a commodity and the average cost of producing it. This indica-
tor could be used to measure the abundance of natural resource in the host
country.
• LRW is the natural logarithm of real wage in host country. GDP per capita
based on purchasing power parity (current US $) serves as a proxy for real
wage due to the difficulties of getting average wage data for most of the
“OBOR” countries. It reflects the labor cost in the host country and is widely used to measure the advantage of human capital in attracting foreign manufacturing firms.
• LOPEN is the natural logarithm of the sum of annual imports and exports
volume of goods and services (% of GDP). It stands for the degree of host
country’s openness in the FDI recipient economy, which is considered to be
an important factor affecting FDI-related barriers.
• LTECH is the natural logarithm of host country’s high-technology exports (%
of real GDP). High-technology exports are products with high R&D intensity,
such as in aerospace, computers, pharmaceuticals, scientific instruments, and
electrical machinery. Data are in current U.S. dollars. This variable measures the level of technology in host countries, which has become one consideration in Chinese MNCs’ foreign investment activities.
• PLCY is a dummy variable that can be interpreted as the effect of the
“OBOR” initiative. The PLCY dummy takes a value of zero if the time
period is 2003–2013 and one otherwise. We split the sample around 2013 because Chinese OFDI activities in the countries along the “OBOR” are believed to be strongly driven by the “OBOR” initiative, first launched by the Chinese government in 2013.
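The PLCY dummy can be built directly from the year index; a minimal pandas sketch (the toy panel rows are invented for illustration — only the construction rule comes from the text):

```python
import pandas as pd

# Toy panel rows; only the PLCY construction rule is from the text.
panel = pd.DataFrame({
    "country": ["Kazakhstan", "Kazakhstan", "Kazakhstan", "Kazakhstan"],
    "year": [2012, 2013, 2014, 2015],
})

# PLCY = 0 for 2003-2013 and 1 otherwise (post-"OBOR"-launch years).
panel["PLCY"] = (panel["year"] > 2013).astype(int)
```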
estimator because it addresses the problem that lagged levels of the endogenous variables are poor instruments for first differences. Because observations are missing for some countries and years, our dataset is an unbalanced panel. We therefore use the “forward orthogonal deviations” transformation of the variables proposed by Arellano and Bond [7] in order to increase the number of observations used in the analysis.
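The forward-orthogonal-deviations transform can be sketched for a single unit's time series as follows. This is our own minimal implementation of the standard formula, not tied to any estimation package:

```python
import numpy as np

def forward_orthogonal_deviations(series):
    """Forward orthogonal deviations of one unit's time series.

    Each observation (except the last) is replaced by its deviation from
    the mean of all *future* observations, rescaled so that homoskedastic,
    serially uncorrelated errors remain so. Unlike first differencing,
    only the final observation is lost, which preserves more data in
    unbalanced panels.
    """
    x = np.asarray(series, dtype=float)
    T = len(x)
    out = np.empty(T - 1)
    for t in range(T - 1):
        future_mean = x[t + 1:].mean()
        scale = np.sqrt((T - t - 1) / (T - t))
        out[t] = scale * (x[t] - future_mean)
    return out
```

A constant series maps to all zeros, as a within-unit demeaning transform should.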
As for the choice of instruments, the independent variables were treated as strictly exogenous, so the system estimator uses the first differences of all exogenous variables as standard instruments and the lags of the endogenous variables to generate GMM-type instruments. Finally, we added time dummies to make the assumption of no correlation across individuals in the idiosyncratic disturbances more likely to hold, as suggested by Roodman [22].
As a robustness check on our SYS-GMM regressions, we report feasible generalized least squares (FGLS) estimates with a heteroskedastic error term. We include time dummies in the FGLS estimations as well, and we exclude the lagged FDI variable in order to avoid the autocorrelation problem in the FGLS method.
We estimate the equation:
y_it = δ y_{i,t−1} + X_it β + ε_it ,    (1)
where y_it is the logarithm of annual outward FDI from China into host country i at time t, and X_it denotes a (1 × k) vector of exogenous variables that vary in the cross-section and time dimensions. δ is a scalar, y_{i,t−1} is the lagged dependent variable, and ε_it is a stochastic error term assumed to be uncorrelated over all i and t. For the FGLS estimation we omit the δ y_{i,t−1} term.
All system GMM estimation results for Eq. (1) are shown in Table 3. In our
specifications, the Hansen test of overidentifying restrictions indicates that the
joint null hypothesis of valid instruments is not rejected. Besides, the FGLS
results displayed in Table 3 show that the results of system GMM estimation are
consistent and robust. The regressions disclose some interesting findings, for which we offer interpretations and evaluations.
In particular, after 2005 the yuan appreciated by over 30% [28]. In this case, the yuan’s bias toward relatively large appreciation shocks is associated with expectations of further appreciation, which lower the cost of future reinvestment in the host country and stimulate foreign investment.
Table 3. Panel regression by System GMM and FGLS for the 65 “OBOR” Countries
(Dependent variable: LRFDI)
RMB appreciation. During our sample years, China’s central bank intervened to manage unexpected RMB exchange rate fluctuations, and the RMB is usually expected to appreciate gradually. Thus, Chinese multinationals pay less attention to the variance of the ER than to its level in managing financial risk. Secondly, unlike multinationals in developed countries, the motivation of Chinese MNCs is not to substitute for exports but to obtain higher-value assets or access to natural resources in a host country. ER volatility is a sign of instability, but at the same time it discourages potential investors from developed countries from entering the competition; Chinese MNCs may therefore be indifferent to exchange rate volatility. Thirdly, few hedging tools and RMB investment instruments designed to safeguard against exchange risks are used in international settlement practice by Chinese MNCs, which suffer large exchange losses due to a lack of awareness of exchange rate volatility. This is a rather dangerous trait, as China is going to make the RMB more flexible and the Chinese central bank will gradually withdraw from routine intervention.
effectively compensate for the loss of the assets value and expected return due
to the changes of political environment.
Third, China’s political relations with potential hosts significantly influence firms’ overseas investment decisions and patterns. Countries maintaining better interstate relations with China tend to receive more foreign investment from Chinese multinationals [18]. The “OBOR” initiative covers many countries that have
developed good long-term relationships, supporting each other politically and
co-operating economically. For instance, the members of SCO (Kazakhstan and
Russia), some ASEAN countries and African countries have long-term strategic
partnerships with China.
The findings of our study have some policy implications for both the government
and firms under the “OBOR” initiative. The empirical results reveal that ER
level is a highly statistically significant determinant of Chinese OFDI while PE
and ER volatility are not. Based on this result, the Chinese government should
execute an active and strategy-focused diplomatic policy in the “OBOR” coun-
tries in order to better support and promote the Chinese multinationals’ foreign
investment. Research agencies and the Department of Commerce should coop-
erate to issue PE risk reference and precaution for the firms that have already
invested or have the potential to invest. Industry associations should also play
a role in guiding the firms to invest rationally at a steady pace.
Exchange rate risks should be taken into consideration as China moves towards a more flexible exchange rate regime. In particular, China’s central bank has taken further action on RMB internationalization by widening the RMB exchange rate’s floating band in the inter-bank market from 0.5% to 1.0%. Effective April 16, 2012, this policy has been seen as a strong signal to the world that the RMB exchange rate regime will be driven increasingly by market forces rather than by government intervention. Thus, exchange
rate volatility will increase as RMB exchange rate reforms proceed. Financial institutions, such as commercial banks, should provide more hedging tools for Chinese outgoing firms to reduce exchange rate risks. As for Chinese firms, it is vital to monitor ER indicators, because these largely determine the safety
of investment and the overall profitability calculated in the home currency. In conclusion, direct investment in the “OBOR” countries can be expected to increase steadily, and it is vital for Chinese multinationals to be more aware of risks in order to invest abroad efficiently.
Appendix
Table 4. Sample countries along the “OBOR” (65)
Middle East and Africa (15): Afghanistan, Angola, United Arab Emirates, Israel, Algeria, Egypt, Liberia, Libya, Oman, Qatar, Iraq, Iran, Syrian Arab Republic, Saudi Arabia, Yemen, Rep.
Europe (27): Bulgaria, Belarus, Austria, Belgium, Switzerland, Czech Republic, Germany, Denmark, Spain, Finland, France, United Kingdom, Ireland, Romania, Russian Federation, Georgia, Italy, Portugal, Luxembourg, Netherlands, Norway, Poland, Ukraine, Sweden, Iceland, Greece, Hungary
East, South and Central Asia (23): Azerbaijan, Brunei Darussalam, Hong Kong, Indonesia, India, Jordan, Kazakhstan, Lao PDR, Pakistan, Philippines, Bangladesh, Sri Lanka, Singapore, Uzbekistan, Vietnam, Turkey, Thailand, Turkmenistan, Macao, Cambodia, Myanmar, Mongolia, Malaysia
Variables Sources
LRFDI Statistical bulletin of China’s outward foreign direct investment
LMEANR International financial statistics
LVARR International financial statistics
PE Political risk services group
LRGDP World Bank database
NATURAL World Bank database
LRW World Bank database
LOPEN World Bank database
LTECH United Nations, Comtrade Database
References
1. Blonigen BA (1997) Firm-specific assets and the link between exchange rates and
foreign direct investment. Am Econ Rev 87(3):447–465
2. Buckley PJ, Zheng P (2009) The determinants of Chinese outward foreign direct investment. J Int Bus Stud 40(2):353–354
3. Busse M, Hefeker C (2007) Political risk, institutions and foreign direct investment.
Eur J Polit Econ 23(2):397–415
4. Cezar R, Escobar OR (2015) Institutional distance and foreign direct investment.
Rev World Econ 151(4):713–733
5. Chakrabarti R, Scholnick B (2002) Exchange rate expectations and foreign direct
investment flows. Rev World Econ 138(1):1–21
6. Cushman DO (1985) Real exchange rate risk, expectation, and the level of direct investment. Rev Econ Stat 67(2):297–308
7. Arellano M, Bond S (1991) Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Rev Econ Stud 58(2):277–297
8. Deseatnicov I, Akiba H (2016) Exchange rate, political environment and FDI deci-
sion. Int Econ 148:16–30
9. Duanmu JL (2011) The effect of corruption distance and market orientation on
the ownership choice of MNEs: evidence from China. J Int Manage 17(2):162–174
10. Froot KA, Stein JC (1991) Exchange rates and foreign direct investment: an imper-
fect capital markets approach. Q J Econ 106(4):1191–1217
11. Goldberg LS, Kolstad CD (1995) Foreign direct investment, exchange rate vari-
ability and demand uncertainty. Int Econ Rev 36(4):855–873
12. Hayakawa K, Kimura F, Lee HH (2013) How does country risk matter for foreign
direct investment? Dev Economies 51(1):60–78
13. Hu B (2012) RMB exchange rate and Chinese outward foreign investment-a cross-
country panel analysis. Contemp Econ Res 11:77–82 (in Chinese)
14. Jin W, Zang Q (2013) Impact of change in exchange rate on foreign direct invest-
ment: evidence from China. Lingnan J Bank Finan Econ 4(1):1
15. Kolstad I, Wiig A (2009) What determines Chinese outward FDI? J World Bus
47(1):26–34
16. Kurul Z (2016) Nonlinear relationship between institutional factors and FDI flows:
dynamic panel threshold analysis. Int Rev Econ Finan 48:148–160
17. Lee BS, Min BS (2011) Exchange rates and FDI strategies of multinational enter-
prises. Pacific-Basin Finan J 19(19):586–603
18. Li Q, Sr GL (2012) Political relations and Chinese outbound direct investment:
evidence from firm-and dyadic-level tests. SSRN Electron J. doi:http://dx.doi.org/
10.2139/ssrn.2169805
19. Liu HY, Deseatnicov I (2016) Exchange rate and Chinese outward FDI. Appl Econ
51:1–16
20. Mehlum H, Moene K, Torvik R (2006) Institutions and the resource curse. Econ
J 116(508):1–20
21. Pain N, Welsum DV (2003) Untying the gordian knot: the multiple links between
exchange rates and foreign direct investment. JCMS J Common Mark Stud
41(5):823–846
22. Roodman D (2009) A note on the theme of too many instruments. Oxford Bull
Econ Stat 71(1):135–158
23. Schmidt CW, Broll U (2009) Real exchange-rate uncertainty and US foreign direct
investment: an empirical analysis. Rev World Econ 145(3):513–530
24. Takagi S, Shi Z (2011) Exchange rate movements and foreign direct investment (FDI): Japanese investment in Asia, 1987–2008. Jpn World Econ 23(4):265–272
25. Udomkerdmongkol M, Morrissey O, Görg H (2009) Exchange rates and outward
foreign direct investment: US FDI in emerging economies. Rev Dev Econ 13(4):754–
764
26. Wang Y (2015) China’s evaluation of investment risks in countries along ‘one belt,
one road’ initiative. China Opening J 181(4):15–21
27. Chung Yeung HW, Liu W (2008) Globalizing China: the rise of mainland firms in
the global economy. Eurasian Geogr Econ 49(1):57–86
28. Yue LH, Qiang J, Kai TY (2016) Determination of renminbi equilibrium exchange
rate, misalignment, and official intervention. Emerg Mark Finan Trade 52(2):420–
433
Post-Traumatic Stress Disorder Among
Survivors in Hard-Hit Areas of the Lushan
Earthquake: Prevalence and Risk Factors
1 Introduction
On April 20, 2013 at 8:02 am, an earthquake measuring 7.0 on the Richter scale
occurred in Lushan County, Sichuan Province, China. Its epicenter was at 30.308°N, 102.888°E, with a focal depth of 14.0 km, according to the U.S. Geological Survey. The Lushan earthquake occurred in the
southwestern part of the Longmen Shan fault zone [5]. It was the strongest earthquake in Sichuan Province, China, since the Wenchuan earthquake of 12 May 2008 (Ms 8.0 per the China Earthquake Data Center; Mw 7.9 per the U.S. Geological Survey). Unlike other natural disasters, earthquakes usually occur without any
warning and can cause widespread devastation and expose thousands of people
to sudden bereavement, injury, loss of property, homelessness, and displacement
The Lushan earthquake caused strong shaking throughout the communities around its
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 122
1458 Z. Xie and H. Xiu
epicenter. Since the source region of the Lushan earthquake is highly populated and mountainous, it resulted in more than 200 deaths or missing persons, more than 10,000 injuries, and huge economic losses, according to the local government’s official report. Beyond deaths, physical injuries, and economic losses, earthquakes can also have serious psychological impacts on survivors, including various mental health problems [1].
Many survivors of natural disasters experience post-traumatic stress disorder
(PTSD) in their adjustment to the loss of resources (e.g. housing, belongings)
or loved ones (e.g. family members) [1]. PTSD is the most frequently reported
psychological sequela among victims of natural disasters [15]. Numerous studies have estimated the prevalence of PTSD among earthquake survivors. Through a longitudinal survey of the onset and development of DSM-IV post-traumatic stress disorder (PTSD) after the Zhangbei earthquake in North China, Wang et al. reported that the rate of onset of PTSD within 9 months in severely affected villages (30.3%) was higher than that in lightly affected villages (19.8%) [13]. Another survey two and a half months after the earthquake
in Beichuan and Langzhong Counties in Sichuan Province found that the preva-
lence rates of suspected PTSD in heavily (Beichuan County) and moderately
(Langzhong County) damaged counties reached 45.5% and 9.4%, respectively [6].
To summarize, previous assessments among survivors of natural disasters have shown that PTSD is common, and prevalence rates among hard-hit survivors were far higher than those among lightly hit survivors. Understanding PTSD is therefore essential for identifying vulnerable populations and developing culturally specific mental health interventions [11]. For the purpose of public health emergency response, we conducted a rapid assessment of the prevalence of PTSD symptoms and associated factors among random samples of survivors in the five counties of Lushan, Baoxing, Mingshan, Qionglai, and Tianquan, which were the most severely affected by the Lushan earthquake.
2 Methods
2.1 Subjects
According to the hard-hit counties list published by the Central People’s Gov-
ernment of the People’s Republic of China, we conducted a cross-sectional sur-
vey in heavily damaged counties. The hard-hit counties were Lushan, Baoxing,
Mingshan, Tianquan, and Qionglai. These counties were selected because they
had suffered more extensive damage than other counties in China. The inclusion criteria were having a high degree of exposure to the earthquake and having experienced the complete course of the earthquake, with a fair distribution of sex, age, and place.
Temporary survey teams were established, consisting of well-trained master’s-level psychology students. They participated in a 6-day training program that included lectures describing the study protocol and instruments, role-play interviews, and mutual discussion. The survey comprised all assessments
and was administered by senior staff psychiatrists and psychologists from Sichuan
University’s Medical School.
Before conducting the formal investigation, a pilot test was carried out in
August and September 2013, with a group of randomly selected survivors par-
ticipating. Minor modifications and adjustments were made according to the
feedback from the pilot test. The final version of the questionnaire was used in
the formal investigation. All assessment forms were translated from English to
Chinese and back-translated by a bilingual team of professionals.
From December 2013 to January 2014, master’s level psychology students
working as research assistants approached the participants in their own homes or
in temporary accommodation. To ensure privacy, interviewers and participants
were encouraged to complete the questionnaires in private places. Supervision was provided on a day-to-day basis throughout the survey.
2.2 Instruments
3 Results
Characteristics of the entire study group in the hard-hit areas of the Lushan earthquake were summarized. The demographic data of the 500 participants are shown in Table 1. Males comprised 48.4% of the participants, compared
with 51.6% for females. Of the participants, 489 (97.8%) were of Han national-
ity, and 11 (2.2%) were of Tibetan nationality. The average age was 38.0 years,
with 44.6% of the participants in the 31–50 year age group; 71.8% of the par-
ticipants were married. Only 4 participants (0.8%) had a graduate education,
144 (28.8%) had a bachelor education and 352 (70.4%) had an elementary school
education or were illiterate. Among our subjects, 18.4% had a monthly income
of over 2000 yuan, 38.2% had an income of 1000–2000 yuan and 38.4% earned
less than 1000 yuan. With regard to the survivors’ fear during the earthquake,
65.2% reported “yes”. Of the survivors, 51.7% reported that their social sup-
port was high in the past year, whereas 28.6% reported low. One hundred and
eight participants (21.6%) reported that they suffered from at least one physical disease, and no participants reported suffering from psychiatric diseases.
Percentages of PTSD status, as well as their scores, among males and females
are shown in Table 2. Based on the PCL-C total scores, participants were clas-
sified as suffering from PTSD if the score was equal to or higher than 50. The
overall percentage of PTSD was 35.6% (n = 178). The percentage of PTSD was
significantly higher among females (43.4%) than among males (27.3%), as were
PCL-C total scores (32.78 vs. 26.64, respectively).
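As an illustration of the classification rule described above, the following sketch applies the PCL-C cut-off of 50 to simulated totals; the score distribution and sex labels are invented, not the survey data:

```python
import random

# Hypothetical illustration, not the study data: apply the PCL-C
# cut-off of 50 used in the paper to simulated totals and compute
# the prevalence of probable PTSD by sex.
random.seed(7)

def probable_ptsd(score, cutoff=50):
    """Probable PTSD if the PCL-C total is at or above the cut-off."""
    return score >= cutoff

# Simulated (score, sex) pairs; the real survey had 500 participants.
participants = [(random.gauss(35, 15), random.choice("MF"))
                for _ in range(500)]

for sex in "MF":
    flags = [probable_ptsd(s) for s, g in participants if g == sex]
    print(f"{sex}: {100 * sum(flags) / len(flags):.1f}% of {len(flags)}")
```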
Table 3 compares PTSD status among different age groups. Overall, there was
a significant difference between the age groups with regard to PCL-C total scores
and rates. The PCL-C total scores were significantly higher in the 41 to 50 year
age group, whose prevalence rate was also higher than that of the other age
groups. The lowest
prevalence rates and PCL-C total scores were found in the 15 to 30 year age
group.
Table 4 shows the factors that affected post-traumatic stress disorder (PTSD)
symptoms by multivariate logistic regression. Various demographic and loss vari-
ables were entered into the models. The prevalence rate of probable PTSD
was 35.6% (n = 178) based on the PCL-C cut-off score of 50 (Table 4).
Results of the multivariate logistic regression analyses indicated that the
prevalence of probable PTSD was significantly higher among individuals with
Post-Traumatic Stress Disorder Prevalence and Risk Factors 1461
Table 1. Demographic characteristics of the participants (N = 500)

Variable          Category                   n     %
Location          Rural                      381   76.2
                  Urban                      119   23.8
Gender            Male                       242   48.4
                  Female                     258   51.6
Age (years)       15–30                      191   38.2
                  31–40                      88    17.6
                  41–50                      135   27.0
                  ≥ 51                       86    17.2
                  Mean ± S.D.: 38.0 ± 13.7
Monthly income    < 1000 RMB                 192   38.4
                  1000–2000 RMB              191   38.2
                  2000–3000 RMB              85    17.0
                  > 3000 RMB                 32    6.4
Ethnic group      Han                        489   97.8
                  Tibetan                    11    2.2
Education level   Graduate                   4     0.8
                  Bachelor                   144   28.8
                  No degree                  352   70.4
Loss^a            No                         417   83.4
                  Yes                        83    16.6
                  Injury of body             39    7.8
                  Family member injured      42    8.4
                  Family member died         7     1.4

^a Loss was defined as death of a family member, or injury to a family
member or self as a result of the quake.
and in the over-51-year age group, which had higher PTSD symptoms and odds
ratios (OR). PTSD was significantly higher among victims who sustained an
injury or whose family member was injured during the quake than among
noninjured survivors: the ORs for the risk of PTSD in these two groups were
1.88 and 1.53, respectively, compared with the noninjured survivors (Table 4).
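The multivariate analysis itself is not reproduced in this excerpt, but the logic of estimating an odds ratio from survey responses can be sketched as follows; the data are simulated, the assumed true odds ratio of 1.9 merely echoes the reported OR of 1.88, and a plain gradient-descent fit stands in for a standard logistic regression routine:

```python
import math
import random

# Illustrative sketch only (synthetic data, not the survey): fit a
# logistic regression of probable PTSD on a binary "injured" indicator
# and report the odds ratio exp(b1).
random.seed(1)

def fit_logistic(xs, ys, lr=0.5, steps=1500):
    """Gradient-descent fit of P(y = 1) = sigmoid(b0 + b1 * x)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Simulate: ~20% injured; assumed true odds ratio 1.9, baseline odds 0.4.
xs = [1.0 if random.random() < 0.2 else 0.0 for _ in range(1000)]
ys = []
for x in xs:
    odds = 0.4 * (1.9 if x else 1.0)
    ys.append(1 if random.random() < odds / (1 + odds) else 0)

b0, b1 = fit_logistic(xs, ys)
print(f"estimated odds ratio for injury: {math.exp(b1):.2f}")
```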
4 Discussion
4.1 Prevalence of Probable PTSD
PTSD was still common eight months after the earthquake. Compared with the
prevalence of probable PTSD among hard-hit survivors soon after the
earthquake, the prevalence after eight months had declined but remained
significant. The elevated prevalence rates of psychological symptoms show
that PTSD is a common mental health problem in the hard-hit areas after
exposure to this natural disaster, and that it remains alarmingly high.
The present study found that more than one third of participants (35.6%)
suffered from probable PTSD based on the PCL-C cut-off score of 50. PTSD was
thus common among survivors in the hard-hit areas of the Lushan earthquake.
Our findings indicate a steady decline in the prevalence of PTSD over time
[4]. The gradual reduction in mental health problems among survivors in
hard-hit areas may be associated with relatively good living conditions and
the substantial social support given by the government and other aid
organizations. Nevertheless, the prevalence of PTSD in our sample was high
compared with the rates reported in previous studies after earthquakes: the
prevalence of probable PTSD among Wenchuan earthquake victims in hard-hit
areas was 26.3% [15], and that among survivors in the hard-hit areas of the
Yushu earthquake was 33.7% [16].
After exposure to earthquake trauma, females were nearly twice as likely as
males to develop PTSD (OR = 1.84). This may be partly due to cultural
factors: in traditional societies such as China, women tend to be more
dependent, and such disasters cause heavy losses to those they depend on.
Initiatives should give priority to females, as they are more likely than men
to develop mental health problems.
In our study, ethnicity was another significant predictor for PTSD in hard-hit
areas, which agrees with the conclusions of many previous studies [3]. China has
a myriad of ethnic minority groups, including the Tibetan. This study found that
the odds of PTSD (OR = 1.87), in members of the Tibetan ethnic group were
higher compared with the Han ethnic group. After the earthquake, the ethnic
1464 Z. Xie and H. Xiu
minorities should receive more support, care, and help from the government,
aid organizations, and volunteers. However, a few studies have reported
findings opposite to ours [10].
In the present study, being injured or having family members injured was also
a significant risk factor for the incidence of probable PTSD in the hard-hit areas.
The findings are consistent with the conclusion of many empirical studies that
the intensity of exposure to a disaster or loss is among the most robust predictive
factors for mental disorders [9].
Many studies have documented the significant relationship between social
support and PTSD in the aftermath of disaster [4]. Consistent with previous
studies on post-traumatic psychological health, the present study confirmed that
social support has a protective function [1].
4.3 Limitations
5 Conclusion
Despite the limitations, to the authors’ knowledge, this study has played an
exploratory role in revealing the prevalence and risk factors of probable
PTSD among Lushan earthquake survivors in the hard-hit areas. The findings
revealed that PTSD (35.6%) was a common mental health problem in hard-hit
areas even eight months after the earthquake. Female sex, Tibetan ethnicity,
being injured, having family members injured and lacking social support were
significant risk factors in heavily damaged areas. This study is also one of
a handful on the psychological sequelae of catastrophic natural disasters
among non-Western populations.
References
1. Andrews B, Brewin CR et al (2007) Delayed-onset posttraumatic stress disorder:
a systematic review of the evidence. Am J Psychiatry 164(9):1319–1326
2. Elhai JD, Gray MJ et al (2005) Which instruments are most commonly used to
assess traumatic event exposure and posttraumatic effects?: a survey of traumatic
stress professionals. J Trauma Stress 18(5):541–545
3. Galea S, Nandi A, Vlahov D (2005) The epidemiology of post-traumatic stress
disorder after disasters. Epidemiol Rev 27(1):78–91
4. Hafezalkotob A, Alavi A, Makui A (2015) Government financial intervention in
green and regular supply chains: multi-level game theory approach. Int J Manag
Sci Eng Manag 11:167–177
5. Jia K, Zhou S et al (2014) Possibility of the independence between the 2013 Lushan
Earthquake and the 2008 Wenchuan Earthquake on Longmen Shan Fault, Sichuan,
China. Seismol Res Lett 85(1):60–67
6. Kun P, Chen X et al (2009) Prevalence of post-traumatic stress disorder in Sichuan
Province, China after the 2008 Wenchuan earthquake. Public Health 123(11):703–
707
7. Li H, Wang L et al (2010) Diagnostic utility of the PTSD checklist in detecting
PTSD in Chinese earthquake victims. Psychol Rep 107(3):733–739
8. McMillen JC, North CS, Smith EM (2000) What parts of PTSD are normal:
intrusion, avoidance, or arousal? Data from the Northridge, California, earthquake. J
Trauma Stress 13(1):57–75
9. Panda S, Modak NM, Pradhan D (2014) Corporate social responsibility, channel
coordination and profit division in a two-echelon supply chain. Int J Manag Sci
Eng Manag 11(1):22–33
10. Sharan P, Chaudhary G et al (1996) Preliminary report of psychiatric disorder in
survivors of a severe earthquake. Am J Psychiatry 153(4):556–558
11. Van Griensven F, Chakkraband MLS et al (2006) Mental health problems among
adults in tsunami-affected areas in Southern Thailand. JAMA J Am Med Assoc
296(5):537–548
12. Wang L, Zhang Y et al (2009) Symptoms of posttraumatic stress disorder among
adult survivors three months after the Sichuan earthquake in China. J Trauma
Stress 105(5):879–885
13. Wang X, Gao L et al (2000) Longitudinal study of earthquake-related PTSD
in a randomly selected community sample in north China. Am J Psychiatry
157(8):1260–1266
14. Xu J, Song X (2011) A cross-sectional study among survivors of the 2008 Sichuan
Earthquake: prevalence and risk factors of posttraumatic stress disorder. Gen Hosp
Psychiatry 33(4):386–392
15. Zhang Z, Shi Z et al (2011) One year later: mental health problems among survivors
in hard-hit areas of the Wenchuan earthquake. Public Health 125(5):293–300
16. Zhang Z, Wang W et al (2012) Mental health problems among the survivors in the
hard-hit areas of the Yushu earthquake. PLoS ONE 7(10):e46449
How Corruption Affects Economic Growth:
Perception of Religious Powers
for Anti-corruption in Iraq
Abstract. Iraq is one of the most corrupt countries; since it entered TI’s
statistics, it has occupied the lowest positions. This paper aims to analyze
and understand perceptions of corruption in Iraq and to provide a general
framework for anti-corruption based on social tools. The educated population
was selected as the study sample: questionnaires were distributed randomly
to university students, reaching 600 students, and economic data from 1979
to 2015 were collected. Multiple regression analysis, percentages, and charts
are used to present the results. The findings indicate that corruption is a
factor behind the deterioration of the country. The regression tests show a
negative relationship between corruption and economic growth, and between
corruption and religious powers. In addition, embezzlement and bribery are
the most common forms of corruption appearing in state institutions. The
value of the study is that it suggests answers outside the scope of
government procedures in the fight against corruption, providing social tools
that relate self-behavior to corruption problems.
1 Introduction
The most common definition of corruption is the misuse of public office for
private gain [15]. It has serious consequences, reducing economic levels and
development [5] as well as per capita income [22,24]. It hurts the poor more
than the rich [10], imposes large economic and social costs on institutions
[18], and thus increases poverty levels [3,9]. Corruption is not only a
problem of theft but includes many forms, such as bribery, which may take the
form of payments under the table, and fraud, such as the falsification of contracts, assault,
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 123
How Corruption Affects Economic Growth 1467
for example, on examination papers, as well as wasting time, selling state
secrets, nepotism seeking the benefit of friends and relatives, and the
misappropriation of public funds. What makes matters worse is the inability
of beneficiaries to record information about the services they receive [21].
It is worth mentioning
that some corrupt behavior, including bribery, yields a rate of return of
more than 11–15 times [12]. Some authors divide corruption into two types
[8,20,21]: grand corruption, meaning official corruption at the upper levels,
and petty corruption, the routinely improper practices of staff at the lower
levels. Countries, especially developing countries, are trying to build
effective strategies to fight corruption. In Iraq, regulatory tools for
fighting corruption exist, such as the Integrity Commission, the Financial
Inspection Office, the offices of the inspectors, and the media, but
discontent with the Iraqi state institutions has reached alarming levels.
Since Iraq first entered the Corruption Perceptions Index in 2003, it has
occupied low positions: it was ranked No. 133 in 2003 and, in spite of
increased measures to combat it, corruption kept growing, with Iraq ranked
No. 167 in 2015, as shown in Table 1. Also, the private sector is weak, and
the levels of economic growth and individual income are unstable; for these
reasons we focused on Iraq in our study. This raises two puzzles: first, is
the Iraqi economy negatively affected by corruption, or is it similar to the
Chinese economy, which grows even as corruption levels increase? Second, what
motivates corruption in the country under study?
In this paper, we investigate the impact of corruption on both economic
growth and per capita income in Iraq, and provide estimates of their relative
2 Research Methodology
The study analyzes citizens’ conceptions of corruption in Iraq in order to
determine the actual behavior of society. A questionnaire was distributed to
a sample of 600 university students, considering that young students are the
heart and the future leaders of the country. The questionnaire was
distributed manually to 714 students; 637 were returned, of which 37 were
discarded because of the respondents’ health or for other reasons, for a
response rate of 89%. The questionnaire was based on [20]. In addition, 37
observations on corruption and economic growth variables were obtained from
the Integrity Commission and the Iraqi Ministry of Planning, covering 1979 to
2015. Multiple regression analysis, percentages, tables, and diagrams were
used in the data analysis.
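The sample accounting described above can be checked directly:

```python
# Checking the sample accounting described in the text.
distributed, returned, discarded = 714, 637, 37
valid = returned - discarded
response_rate = 100 * returned / distributed

print(valid)                  # 600 usable questionnaires
print(round(response_rate))   # 89 (per cent), as reported
```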
Bryant and Javalgi [4] found that corruption affects economic growth and
investment in each country, particularly through its impact on investment,
where it acts like a tax on profits in developing countries. In South Korea,
however, more corruption was found to lead to more economic growth, as it
works on the development of economic strength [11]. In the same way, Ajao et
al. [2] explained the impact of corruption on economic growth, emphasizing
that corruption is the biggest inhibitor of growth in most countries of the
world. A corrupt system gives contracts to dealers who pay the highest
bribes, regardless of their level of efficiency, which acts as a tax that
ends up in officials’ pockets rather than in the state treasury [13]. We
summarize the business literature on corruption in three hypotheses:
where gcapi is the growth rate of per capita income, all other variables are
as defined in the equation above, and ω is the error term of the per capita
income growth equation.
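The regression itself is not shown in this excerpt; as a hedged sketch, a single-regressor OLS fit on invented annual data (the variable names cpi and gcap, the index scale, and the true slope of −1.5 are all assumptions) illustrates the kind of negative corruption–growth estimate the paper reports:

```python
import random

# Illustrative sketch only: the paper's regression uses 37 annual
# observations (1979-2015). Here both series are invented, and a
# single-regressor OLS stands in for the paper's multiple regression.
random.seed(3)

def simple_ols(x, y):
    """Closed-form OLS intercept and slope for one regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    return my - beta * mx, beta

years = range(1979, 2016)                          # 37 observations
cpi = [random.uniform(1.0, 3.0) for _ in years]    # corruption index (invented)
gcap = [4.0 - 1.5 * c + random.gauss(0, 0.5) for c in cpi]  # growth (invented)

alpha, beta = simple_ols(cpi, gcap)
print(f"estimated effect of corruption on growth: {beta:.2f}")  # negative
```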
In Fig. 1, we note that the highest percentage of the sample, about 90%,
agreed with the statement that corruption is a major problem in
the country, with about 50% selecting the “always” scale and about 40% the
“often” scale. This very high percentage confirms the extent to which the
manifestations of corruption are recognized in society.
In this study, Fig. 3 shows that embezzlement is the most common form of
corruption in Iraq, identified by about 41.3% of the sample. The second most
common form is bribery, indicated by 29.8% of respondents; 16% considered
favoritism the third form, 8% selected extortion, and the remaining 4.7%
considered the misuse of information a type of corruption. Ajao et al. [2]
confirmed that corruption takes multiple and varying forms, including
conflict of interest, embezzlement, extortion, bribery and many others, all
of which transfer or seize valuables and property for someone
1472 M.A. Mahmood et al.
who has no right to them but uses his position to obtain them, resorting to
the use of misleading and erroneous information [21].
Wedeman [27] showed that corruption is widespread at all levels and
concentrated at the highest levels. This is consistent with our study sample,
which confirmed that the higher levels of the state were the most corrupt.
Tabish and Jha [25] studied anti-corruption strategies in the Indian public
sector in terms of four latent constructs: leadership, rules & regulations,
training, and fear of punishment. The findings showed that all of them help
institutions to understand the role of anti-corruption. Figure 4 describes
the reasons for corruption as seen by the educated population. A large share
of the sample, about 30.5%, believes that bribes are paid at the employee’s
request, whereas about 25.2% point to the lack of services received if a
bribe is not paid, 17% pay to speed up transactions, 16.1% pay to avoid
problems, and finally 11.2% pay because the transaction itself is not
legitimate.
This study tried to ascertain the rules and capabilities of the government
for anti-corruption, as seen by the educated population, in five parts, to
give an overview of the size of the problem and the effect of corruption. In
Fig. 5, the highest percentage, 44.2% of respondents, confirmed that
governmental anti-corruption measures have low efficiency; 31.7% held that
the measures are fundamentally inefficient and ineffective; 17.5% considered
the effect mild; and 5.2% agreed that the measures are
efficient and effective, while 1.3% found them very effective and
influential. Sampford et al. [21] note that delays in implementing judicial
proceedings against the corrupt, or failures to implement them at all, make
anti-corruption work less effective. Lack of public confidence in the
judiciary, the courts, and the police discourages people from giving
information about corruption, because they believe that the judge or the
court staff deal in bribery; this may reflect individuals’ personal
experience in this regard. Corruption increases uncertainty about and
skepticism toward the judicial and legal system, and the presence of an
authority with indisputable sovereignty supports opportunistic corruption.
Also, the performance of organizations decreases under high levels of
corruption, because corruption gives some individuals an unfair competitive
advantage [19]. The reasons may include the failure of recent technological
advances to deliver lower pollution emission intensity [28] (Fig. 6).
4 Conclusions
This analytical investigation aimed to expand the understanding of the impact
of corruption on economic growth. The Iraqi educated population considers
cultural focus a key stimulus for anti-corruption. Corruption is the major
problem, barely absent anywhere in the country; it inhibits the development
of the nation and is an inhibitor of economic development and growth.
Poverty, or the economic hardship of the individual, may be the most
important reason driving the average employee to corruption, paying the
bribes demanded of him; on the other hand, the high cost of luxury, greed,
and panting after distinction push the highest levels in the country to
engage in corruption. We found evidence that corruption levels are
detrimental to long-run growth. This evidence is robust
Acknowledgements. The authors would like to thank Kufa University in Iraq for
supporting the data collection for this research.
References
1. Aidt TS (2009) Corruption, institutions, and economic development. Oxf Rev Econ
Policy 25(2):271–291
2. Ajao OS, Samuel D, Samuel O (2013) Application of forensic accounting technique
in effective investigation and detection of embezzlement to combat corruption in
Nigeria. Unique J Bus Manage Res 1:65–70
3. Beekman G, Bulte EH, Nillesen EEM (2013) Corruption and economic activity:
micro level evidence from rural Liberia. Eur J Polit Econ 30(30):70–79
4. Bryant CE, Javalgi RG (2016) Global economic integration in developing countries:
the role of corruption and human capital investment. J Bus Ethics 136(3):1–14
5. Chen J, Jie X, Han S (2017) Tourism economic effect divergence analysis-panel
data analysis of Hunan Province. Springer, Singapore
6. Daskalopoulou I (2016) Rent seeking or corruption? An analysis of income
satisfaction and perceptions of institutions in Greece. Soc Sci J 53(4):477–485
7. Gould DJ, Reyes JAA (1983) The effects of corruption on administrative perfor-
mance. World Bank Staff Working Pap 580:2514
8. Graycar A, Monaghan O (2015) Rich country corruption. Int J Publ Adm 38(8):
1–10
9. Gupta S, Davoodi H, Alonso-Terme R (2002) Does corruption affect income
inequality and poverty? Econ Gov 3(1):23–45
10. Gyimah-Brempong K (2002) Corruption, economic growth, and income inequality
in Africa. Econ Gov 3(3):183–209
11. Huang CJ (2016) Is corruption bad for economic growth? Evidence from
Asia-Pacific countries. North Am J Econ Finance 35:247–256
12. Jain PK, Kuvvet E, Pagano MS (2016) Corruption’s impact on foreign portfolio
investment. Int Bus Rev 26:25–35
13. Kunieda T, Okada K, Shibata A (2014) Corruption, capital account liberalization,
and economic growth: theory and evidence. Int Econ 139(139):80–108
14. Lisciandra M, Millemaci E (2016) The economic effect of corruption in Italy: a
regional panel analysis. Regional Studies, pp 1–12
How Corruption Affects Economic Growth 1475
15. Lučić D, Radišić M, Dobromirov D (2016) Causality between corruption and the
level of GDP. Econ Res-Ekonomska Istraživanja 29(1):360–379
16. Nazim M, Saeed R et al. (2017) Parametric analysis of leadership styles on orga-
nizational performance and the mediating role of organizational innovativeness.
Springer, Singapore
17. Paldam M (2001) Corruption and religion adding to the economic model. Kyklos
54(2–3):383–413
18. United States Institute of Peace (2013) Governance, corruption, and conflict. In: A
study guide series on peace and conflict for independent learners and classroom
instructors, pp 1–54
19. Rocca ML, Cambrea DR, Cariola A (2017) The role of corruption in shaping the
value of holding cash. Finance Res Lett 20:104–108
20. Sahu SK, Gahlot R (2014) Perception about corruption in public services: a case
of BRICS countries. J Soc Sci Policy Implic 2:109–124
21. Sampford CJ, Shacklock AH, Connors C (2006) Measuring corruption. Ashgate,
Farnham
22. Sanchez JI, Gomez C, Wated G (2008) A value-based framework for understanding
managerial tolerance of bribery in Latin America. J Bus Ethics 83(2):341–352
23. Shadabi L (2013) The impact of religion on corruption. J Bus Inq 12:102–117
24. Stevens A (2016) Configurations of corruption: a cross-national qualitative com-
parative analysis of levels of perceived corruption. Int J Comp Sociol 57(4):183–206
25. Tabish S, Jha KN (2012) The impact of anti-corruption strategies on corruption
free performance in public construction projects. Constr Manage Econ 30(1):21–35
26. Treisman D (2000) The causes of corruption: a cross-national study. J Public Econ
76(3):399–457
27. Wedeman A (2005) Anticorruption campaigns and the intensification of corruption
in China. J Contemp China 14(42):93–116
28. Yang A, Lan X et al. (2017) An empirical study on the prisoners’ dilemma of
management decision using big data. Springer, Singapore
Procurement Risk Mitigation for Rebar Using
Commodity Futures
This paper is based on a case study in rebar procurement for a Chinese met-
allurgical machinery company that supplies iron and steel mills in China with
drill pipe, drill head, steel making and refining equipment, and blast furnaces.
The commodities markets in China are now mature and well established, with
plentiful supply and fast delivery, and spot procurement is widely used by
many companies to avoid unnecessary inventory build-up. In recent years,
however, the spot prices of steel materials have become much more volatile,
and companies that want to remain competitive cannot safeguard against such
price volatility by raising the final product price or by imposing a contractual
Risk Mitigation for Rebar Using Commodity Futures 1477
a financial hedge using the London Metal Exchange (LME) copper futures con-
tracts. Another analytical framework was developed by Caldentey and Haugh [2]
to investigate the appropriate hedging strategies for hedging against a possible
drop in profit of a company when profit is correlated with certain returns in the
financial markets. These two studies suggest that a financial hedging strategy,
if carefully designed, can effectively mitigate the procurement risk. This was
followed by Ni et al. [11], who developed a multistage hedging strategy to
mitigate procurement risk using a dynamic price and demand information
updating process, and concluded that using appropriate information to update the hedging
strategy can substantially improve the effectiveness of the hedge. Such updat-
ing makes an interim multistage rebalancing of the futures position possible,
thereby providing a timely adjustment using up-to-date information. This study
differs from that of Ni et al. [11] in that it develops a hedging strategy that
focuses on budgetary control. The management of the machinery company used
in this study are very concerned with budgetary control and they emphasized the
need to mitigate the risk of budget overruns, especially during the current
economic uncertainties. With this in mind, a procurement budgetary control
model (PBCM) is developed that aims at controlling the risk of excessive
budget overruns. The PBCM does not seem amenable to a direct solution,
and therefore a heuristic approach had to be used to obtain efficient suboptimal
solutions. Using the heuristic procedure, numerical experiments are conducted
to determine the effectiveness of the multistage hedging strategy in controlling
procurement spending.
In summary, this paper develops a financial hedging strategy for procurement
risk mitigation in the form of an interim multistage rebalancing and dynamic
information updating process, with the general objective to address the need to
maintain adequate budgetary control.
dealing with one stage can be a period of one week to one month, depending on
the availability of updated information for both commodity price and customer
demand. As shown in Fig. 1, at the beginning a budget plan is produced by the
management that serves as a benchmark that the subsequent financial hedging
strategy aims to meet. Using the price and demand information at stage 0, the
multistage financial hedge is initialised by entering into a long position of the
commodity futures that expire at the end of the final stage. At the subsequent
stages, the futures position will be rebalanced to achieve the best hedge using
the latest information available at each stage. At the end of the final stage,
the actual procurement of the physical commodity takes place and the futures
position is settled in cash. The cash settlement will completely or partially offset
any overspending incurred because of the variability in both commodity price
and customer demand.
The multistage hedging strategy is aimed at balancing the uncertainty of
the unhedged procurement spending, C, which is defined as the product of the
volatile commodity price P and the uncertain procurement quantity X, or C =
P X. The hedge can generate a payoff from the cash settlement of the futures
position, which leads to the hedged procurement spending as follows:
C̄ = C − H. (1)
For the multistage hedging strategy, the payoff can be calculated by analyzing
the stage-by-stage initialisation/rebalancing volumes yt and the futures position
zt at each stage t (1 ≤ t ≤ T − 1). The variables yt and zt are related as follows:

zt = yt for t = 0; zt = zt−1 + yt, ∀ 1 ≤ t < T. (2)

The payoff of the futures position is then

H = P × zT−1 − Σ_{t=0}^{T−1} Ft × yt. (3)
Note that we do not consider transaction costs, because of the low
rebalancing frequency (weekly at most) and the low ratio of transaction cost
to settlement amount (less than 0.02% on the SHFE). From Eqs. (1) and (3), the
hedged procurement spending will be:
C̄ = P (X − zT−1) + Σ_{t=0}^{T−1} Ft × yt. (4)
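To make Eqs. (1)–(4) concrete, the following numeric sketch computes the futures position, the hedge payoff, and the hedged spending; all prices and volumes are invented, not data from the case study:

```python
# Numerical sketch of Eqs. (1)-(4) with invented prices and volumes.
F = [3800.0, 3900.0, 4050.0]   # futures prices at stages 0..T-1 (invented)
y = [100.0, 20.0, -10.0]       # initialisation/rebalancing volumes y_t
P = 4100.0                     # spot price at delivery (invented)
X = 105.0                      # realised procurement quantity (invented)

# Eq. (2): z_0 = y_0, z_t = z_{t-1} + y_t
z = []
for t, yt in enumerate(y):
    z.append(yt if t == 0 else z[-1] + yt)

# Eq. (3): H = P * z_{T-1} - sum_t F_t * y_t
H = P * z[-1] - sum(Ft * yt for Ft, yt in zip(F, y))

C = P * X                      # unhedged spending
C_bar = C - H                  # Eq. (1)

print(z)                       # [100.0, 120.0, 110.0]
print(H)                       # payoff of the futures position: 33500.0
print(C, C_bar)                # 430500.0 397000.0
```

The cash settlement H offsets part of the overspending: C̄ here equals P(X − z_{T−1}) + Σ F_t y_t, matching Eq. (4).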
where Ŷ = {ŷt}_{t=0}^{T}, ŷ0 = E0(X), and ŷt = 0 for all 1 ≤ t ≤ T.
In the heuristic used, the value of α̂ is calculated on the assumption of a static
hedge. It is in fact a simplified way to take into account the effect of financial
hedging on the value of α. The effectiveness of α̂ is examined in Sect. 4.
When α is constant, the objective of the PBCM model can be approximated
by an α-PBCM model as follows:
min_Y E[C̄ + α(C̄ − K)²], (14)
Assumption: if, for every stage t (0 ≤ t < T ), X and Ft+1 are independent,
then:
ln (St ) = χt + ξt , (19)
dχt = −κχt dt + σχ dWt , (20)
dξt = μξ dt + σξ dBt , (21)
where dWt and dBt are correlated with dWt dBt = ρdt. Equations (19), (20) and
(21) produce the estimates (see Ni et al. [11]) as follows:
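A minimal Euler discretisation of Eqs. (19)–(21) can be sketched as follows; all parameter values and the starting price are assumptions for illustration, not the estimates from the case study:

```python
import math
import random

# Euler simulation of the two-factor log-price model in Eqs. (19)-(21);
# every parameter value below is assumed for illustration.
random.seed(5)

kappa, sigma_chi = 1.5, 0.25         # short-term mean reversion / volatility
mu_xi, sigma_xi = 0.02, 0.10         # long-term drift / volatility
rho, dt, steps = 0.3, 1.0 / 52, 52   # weekly steps over one year

chi, xi = 0.0, math.log(3800.0)      # start at an assumed rebar price
for _ in range(steps):
    # correlated Brownian increments with corr(dW, dB) = rho
    e1, e2 = random.gauss(0, 1), random.gauss(0, 1)
    dW = math.sqrt(dt) * e1
    dB = math.sqrt(dt) * (rho * e1 + math.sqrt(1 - rho ** 2) * e2)
    chi += -kappa * chi * dt + sigma_chi * dW      # Eq. (20)
    xi += mu_xi * dt + sigma_xi * dB               # Eq. (21)

S = math.exp(chi + xi)                             # Eq. (19)
print(f"simulated spot price after one year: {S:.1f}")
```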
In connection with the stochastic product demand, Choi et al. [4] describe
a Bayesian information update used in a two-stage application. However, for
the machinery company used in this study, the Bayesian method needs to be
extended to a multistage process. The raw material (rebar) required in this
company for physical procurement is an aggregation of customer orders over τ
stages. It is not unreasonable to assume that customer orders arrive randomly
as a Poisson process. If λ is the average number of customer orders that are
received in one stage, it follows that the number of orders Ñ arriving in τ stages
is a random number from the Poisson distribution:
P{Ñ = k} = (λτ)^k e^{−λτ} / k!. (25)
and
Since the material demand can be computed almost instantly upon receiving
customer orders, the planned rebar supply quantities, D̃, required in stages T −
τ + 1 to T can be estimated as follows:
D̃ = Ñ · d̃. (30)
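Equations (25) and (30) can be sketched as follows; the order rate, horizon, and tonnes of rebar per order are assumed values:

```python
import math
import random

# Sketch of Eqs. (25) and (30): orders over tau stages are
# Poisson(lambda * tau); planned supply is orders times the (assumed)
# average rebar quantity per order d_tilde.
random.seed(2)

def poisson_pmf(k, lam):
    """P{N = k} for a Poisson(lam) count, Eq. (25) with lam = lambda*tau."""
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

def sample_poisson(lam):
    """Inverse-CDF sampling of one Poisson variate."""
    u, k, cdf = random.random(), 0, 0.0
    while True:
        cdf += poisson_pmf(k, lam)
        if u <= cdf:
            return k
        k += 1

lam_per_stage, tau = 4.0, 3        # assumed order rate and horizon
d_tilde = 25.0                     # assumed tonnes of rebar per order

N = sample_poisson(lam_per_stage * tau)
D = N * d_tilde                    # Eq. (30)
print(f"{N} orders -> planned supply {D:.0f} tonnes")
```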
Table 2. Hedging performance (SD and EBO of procurement spending)

Strategy              SD        EBO
No hedge              1,482.70  1,038.20
Multistage strategy   1,176.30  701.50
Table 2 shows that both the SD and EBO are significantly reduced by
the multistage financial hedge. The significant reduction of SD (from 1,482.7
to 1,176.3) indicates that the stability of the overall procurement spending is
improved by the multistage hedge. Further, the substantial reduction in EBO
(from 1,038.2 to 701.5) suggests that the procurement budget is much less
violated under the multistage hedge than with no hedging. Therefore, it can
be safely concluded that the multistage hedging strategy is effective in procure-
ment risk mitigation from the procurement budget perspective. Moreover, the
percentage reduction in EBO under the multistage hedge is larger than that in
SD: EBO is reduced by 32.4% while SD is reduced by 20.6% (see Table 2). These
reductions suggest that the proposed multistage hedging strategy is more
effective in reducing overspending of the procurement budget than in reducing
the uncertainty of the overall total procurement spending.
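The quoted reductions can be recomputed directly from Table 2 (at full precision the SD reduction rounds to 20.7%):

```python
# Recomputing the percentage reductions from the Table 2 figures.
sd_no_hedge, sd_hedged = 1482.70, 1176.30
ebo_no_hedge, ebo_hedged = 1038.20, 701.50

sd_reduction = 100 * (sd_no_hedge - sd_hedged) / sd_no_hedge
ebo_reduction = 100 * (ebo_no_hedge - ebo_hedged) / ebo_no_hedge

print(f"SD reduced by {sd_reduction:.1f}%")    # 20.7%
print(f"EBO reduced by {ebo_reduction:.1f}%")  # 32.4%
```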
A sensitivity analysis is conducted to judge the effectiveness of the multistage
hedging strategy when the price and demand become highly volatile, and also
when their volatility can be regarded as low. The sensitivity analysis first exam-
ines the influence of changing volatility in demand. This is done by assessing
the performance of the hedge with different levels of demand volatility, i.e., by
increasing/decreasing the standard deviation Σ0 through a changing ratio, such
that:
SDchanged = SDinitial × (1 + changing ratio). (33)
The influence of changing price volatility, represented by both the short-term
volatility σχ and the long-term equilibrium volatility σξ, is also investigated.
Here these two volatilities are simultaneously increased or decreased by a ratio
percentage. With either demand or price volatility changing by a ratio ranging
from −70% to 70%, the effects of such changes on the hedging performance are
shown in Fig. 2 (demand volatility) and Fig. 3 (price volatility).
Figure 2 shows that changing the volatility of demand will affect the performance
of the multistage strategy, an obvious result. However, the hedged EBO
remains below the unhedged EBO for all changes in Σ0, suggesting that the
multistage hedging strategy remains effective throughout. When Σ0
decreases, the hedged EBO is reduced by a larger ratio than the unhedged EBO,
indicating that the procurement budget is less likely to be violated. This sug-
gests that the multistage hedging strategy becomes relatively more powerful as
Σ0 decreases. On the other hand, the reduction ratio of the hedged EBO when
compared to the unhedged EBO will decrease as Σ0 increases. So, the multistage
hedging strategy will be relatively less powerful as Σ0 increases, an obvious
result, since any procurement policy can be expected to deteriorate as volatility
increases.
1488 J. Ni et al.
5 Concluding Comments
Ubiquitous Healthcare and Ubiquitousness
of Chronic Disease Prevention and Control:
Theory and Design
Zhihan Liu(B)
1 Introduction
Community healthcare service was identified by the World Health Organiza-
tion (WHO) as an effective measure for chronic disease prevention and control.
However, since China started the reform of its urban healthcare system and the
development of community healthcare services in 1997, the burden of chronic
diseases has not improved as much as expected, and medical expenses have
risen rapidly. Why is it that community healthcare services in China have not
had satisfactory effects on residents’ health? In addition
to service-object factors (e.g., people’s health consciousness, lifestyle
and stereotype of treatment selection) and the policy factors (e.g., government
investment and allocation of health resources), the traditional form of commu-
nity healthcare service, the content homogenization with hospitalization service,
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 125
and the maladaptation to the needs of patients with chronic diseases are also
worthy of consideration.
In fact, it is difficult to build an integrated system of prevention and control
for the growing epidemic of chronic diseases through traditional medical
information management alone. In 2002, Kirn [6] for-
mally put forward the concept of Ubiquitous Healthcare (u-Healthcare or u-
Health). It is an information system configuration enabling individual consumers
to access any type of health service through mobile computing devices
anytime and anywhere [1–3,5,8–13]. This concept actually originated from the
idea of Ubiquitous Computing (ubicomp or Pervasive Computing) in the field of
information science and technology. Transcending traditional desktop computing,
ubiquitous computing is a brand-new computing mode aimed at the fusion
of cyberspace and physical space, under which people can acquire digital
services freely and transparently [7].
Because of its humanistic orientation, ubiquitous computing has been destined
since its birth to accomplish great things in the fields of medicine and healthcare
(especially in the health management area), which are closely related to human
health and well-being. As the 10th International Conference on Ubiquitous Healthcare
held in Yokohama, Japan in 2013 declared: “Enhancement in the welfare for
future requires the change to the current healthcare system. Our concern for
the healthcare is shifting from ‘recovery from illness’ to ‘maintaining wellness
and improving quality of life’. For the care of daily health level we need special
kinds of methods and technologies that we can be applied into our daily life
smoothly” [10]. Many developed countries have established relatively complete
u-Health systems with the following typical architecture [11] (see Fig. 1). Based
on the development of Internet of Things (IoT) and cloud computing, the med-
ical industry in Taiwan, China, has also witnessed the transition from “e-Health”
to “m-Health” and eventually to “u-Health”, in a short span of ten years. How-
ever, due to the late start of ubiquitous computing research in mainland China,
previous work on u-Health (especially on chronic disease management) is
quite limited at this time [12,13]. To address this problem, this study is the
first in mainland China to explore the application of u-Health in the chronic
disease scenario. Not only are the theory and the functions introduced, but
the technical route and the implementation methods are also presented, aiming
to help the Chinese government implement its comprehensive strategy for
chronic disease prevention and control.
diseases. It is important to note that this study adopted the theory of Yang [14]
which insists that the receptors of community prevention and control should not
only include the patients with chronic diseases, but also the high-risk groups and
even the healthy ones, from the perspective of community health management.
Thus, the application system of this platform can be divided into two subsys-
tems: community health service center and community residents (not just the
patients). The main goal of the system is to move forward the “strategic pass” of
chronic disease management (CDM) to help control costs, guided by real-time
acquisition of health data, and to carry out a systematic, coordinated and
integrated strategy on chronic disease prevention and control through in-depth
exploration of the health data. The
specific steps include: (a) collecting, managing and analyzing the personal health
information used in health risk factor assessment, tracking, and health behavior
guidance, for dynamic monitoring of community residents’ state of health; (b)
applying health interventions to the community residents based on the evalu-
ation results; and (c) tracking and evaluating the effects of interventions (see
Fig. 2).
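Steps (a)–(c) above can be sketched as a minimal processing loop. All names here (`HealthRecord`, `assess_risk`, the toy blood-pressure threshold) are illustrative assumptions, not part of the platform's actual specification:

```python
from dataclasses import dataclass, field

@dataclass
class HealthRecord:
    resident_id: str
    readings: list = field(default_factory=list)  # e.g. systolic blood pressure

def assess_risk(record):
    # (a) analyze collected personal health information; a toy threshold
    # stands in for the platform's real risk-factor assessment.
    return "high" if max(record.readings) > 140 else "normal"

def intervene(record, risk):
    # (b) apply a health intervention based on the evaluation result.
    if risk == "high":
        return f"schedule follow-up for {record.resident_id}"
    return "routine health behavior guidance"

def track(record, action):
    # (c) track and evaluate the effect of the intervention.
    return {"resident": record.resident_id, "action": action}

record = HealthRecord("R001", readings=[150, 135, 142])
print(track(record, intervene(record, assess_risk(record))))
```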
through the loose coupling between the layers, while the loose coupling between
modules on the same layer was implemented in the form of services. All of
this equips the platform with good extensibility.
Fig. 3. Overall architecture of the mobile service platform for community health man-
agement of chronic disease
4 Conclusion
As Dr. Robert M. Kaplan, then Associate Director for Behavioral and
Social Sciences at the US National Institutes of Health, put it in his
paper: “Health-related information collected in psychological laboratories may
not be representative of people’s everyday health. For at least 70 years, there
has been a call for methods that sample experiences from everyday environ-
ments and circumstances” [4]. Ubiquitous healthcare not only provides us with
this method, but also with the development direction of medical information con-
struction. Under the u-Health theory and aiming at the key requirements and
typical applications of CDM, this study is the first in mainland China to present
a mobile service platform for the big data collection and analysis of chronic
disease, on the comprehensive basis of cloud computing, intelligent sensing and
perception technologies of IoT, data acquisition, storage and processing tech-
nologies, and computer decision support technology. The platform can be used
for health data acquisition, supervision, and forecasting for community residents,
contributing to improved community health services and a higher population
health level. It is expected to help realize humanistic self-health management
for community residents, and to provide community health centers with
ubiquitous, comprehensive chronic disease prevention and control services,
so as to accelerate supply-side structural health reform in the field of CDM.
In view of the rapid international progress of information technology, a
direction for future research is to put the platform into practice as soon as
possible, adjusting and improving the system in a timely manner as China’s
healthcare system reform proceeds.
References
1. Arnrich B, Mayora O, Bardram J (2010) Pervasive or ubiquitous healthcare? Meth-
ods Inf Med 49:65–66
2. Conejar RJ, Kim HK (2015) Designing u-healthcare web services system. Int J
Softw Eng Appl 9(3):209–216
3. He C, Fan X, Li Y (2013) Toward ubiquitous healthcare services with a novel
efficient cloud platform. IEEE Trans Biomed Eng 60(1):230–234
4. Kaplan RM, Stone AA (2013) Bringing the laboratory and clinic to the community:
mobile technologies for health promotion and disease prevention. Ann Rev Psychol
64:471–498
5. Kim J, Ahn CW (2011) Diabetes management system based on ubiquitous health-
care. J Paediatr Child Health 12(3):133
6. Kirn S (2002) Ubiquitous healthcare: The OnkoNet mobile agents architecture. In:
Objects, Components, Architectures, Services, and Applications for a Networked
World. Springer, Heidelberg, pp 265–277
7. Krumm J (2010) Ubiquitous computing fundamentals. Ergonomics 53(5):724–725
8. Panagiotakopoulos T, Fengou MA et al (2009) Ubiquitous healthcare. In: Biocom-
putation and Biomedical Informatics: Case Studies and Applications, p 254
9. Shin D, Shin D, Shin D (2016) Ubiquitous healthcare platform for chronic patients.
In: International Conference on Platform Technology and Service, pp 1–6
10. Tamura T, Park KS (2016) Invitation to the 10th International Conference on
Ubiquitous Healthcare (u-healthcare 2013). http://u-healthcare2013.l-bmi.org/
11. Touati F, Tabish R (2013) U-Healthcare system: state-of-the-art review and chal-
lenges. J Med Syst 37(3):1–20
12. Wu X, Ye M et al (2012) Pervasive medical information management and services:
key techniques and challenges. Chin J Comput 35(5):827–845
13. Xie Y (2016) Ethical perspectives of ubiquitous healthcare. J Jishou Univ (Social
Science Edition) 37(3):56–62 (in Chinese)
14. Yang JX (2010) Thoughts about strategies for chronic disease integrated control
on the basis of community health management. Chin Health Econ 29(7):67–69 (in
Chinese)
Can Media Supervision Improve the Quality
of the Firms Internal Control?
1 Introduction
With the development of information technology and the spread of the internet,
the media’s supervision and governance functions are playing increasingly significant
roles in the economy. The media, regarded as a fourth power independent of
the legislature, the administration and the judiciary, plays an essential role in
disseminating information, restraining incompliant actions of governments and
businesses, and enhancing market efficiency. An increasing number of scholars in
economics and management have focused on studying the corporate governance role of the media
as well as its influence mechanism. Dyck and Zingales [8] found that the pressure
from media supervision contributes drastically to reducing the personal interests
of the controlling shareholders from their controlling rights. Dyck and Zingales
[9] discovered that media supervision is able to effectively suppress or mod-
ify the corporate decisions that may harm external investors’ interests. Miller
[20] concluded that the media can disclose accounting fraud in advance, either
by rebroadcasting information from intermediaries such as securities analysts
or through original investigation. Joe et al. [15] claimed that an inefficient
board would actively take measures to improve efficiency after being exposed by
1498 T. Yang et al.
media. Maistriau and Bonardi [19] reported that media visibility has a positive
effect on the CSR performance of a sample of British firms, and Zyglidopoulos et al.
[22] found that media coverage is positively associated with heightened levels of
CSR performance in a sample of S&P 500 firms.
Using a large cross-country sample of firms, Ghoul et al. [10] further found
strong evidence that firms engage in more CSR activities if located in coun-
tries where the media has more freedom. Dai et al. [5] investigated whether the
media plays a role in corporate governance by disseminating news. Using a com-
prehensive data set of corporate and insider news coverage for the 2001–2012
period, they showed that the media reduces insiders’ future trading profits by
disseminating news on prior insiders’ trades available from regulatory filings.
Based on a sample of over two million newspaper articles, Kim et al. [17] found
the media in China has an incremental impact on stock price efficiency, and a
market-driven media can play the role of compensating for the underdeveloped
governance institutions in transitional economies such as China.
Although prior studies accumulate evidence supporting the monitoring role
of the media, they ignore an important issue: can media supervision encourage
firms to take precautions and take the initiative to improve the quality of their
internal controls and information disclosures? Previous literature regards media
supervision as a substitute for internal controls in corporate governance, focuses
only on inherent corporate characteristics when investigating the determinants
of internal controls, and neglects the potential interaction between the media’s
external supervision and corporate internal controls. Internal controls refer to a
series of restrictive organizational plans, procedures and methods executed within
entities, aiming to boost operating efficiency, acquire and use resources effectively,
and achieve given objectives. The establishment, refinement and effective operation
of internal controls have a direct impact on disclosures of financial
information and investors’ interests. Therefore, research on how media
supervision influences corporate governance has both theoretical and practical
significance. This article selects data on GEM firms in China’s stock market from
2011 to 2015, and studies impacts of media coverage on internal controls and
related information disclosures. The contributions of this paper mainly lie in the
following two aspects.
Firstly, this paper enriches the literature on the corporate governance role of
media. Existing studies on the corporate governance role of media mainly focus
on the effect of media coverage on firms’ socially beneficial behaviors such as
CSR performance, or on firms’ frauds and misconducts hurting the interests of
the investors. This paper extends the scope of this research area by investigating
the effect of media coverage on firms’ internal control quality and internal control
information disclosure.
Secondly, this paper also enriches the literature on the determinants of inter-
nal control quality and disclosure. Most prior studies on the determinants of
internal control quality and disclosure mainly focus on corporate internal charac-
teristics (e.g., [4,7,11]), while very few studies have investigated the role of external
factors, such as auditor expertise (e.g., [12,21]). In this paper, we find that media
Dyck and Zingales [9] further found that the media exposure could encourage
enterprises to correct violations of the rights and interests of external investors.
Zyglidopoulos et al. [22] found that media coverage is positively associated with
heightened levels of CSR performance. Dai et al. [5] showed that media cover-
age can reduce insiders’ future trading profits by disseminating news on prior
insiders’ trades available from regulatory filings.
We can conclude that existing research has provided sufficient evidence
for the corporate governance role of media supervision. Therefore, we posit
that media coverage can urge firms to improve internal control quality and to
reduce actions adverse to investors as far as possible, so as to avoid a greater
negative market response after being exposed by the media. At the same time,
media coverage also encourages firms to build a good image among external
stakeholders and enhance business value through improving internal controls. Hence, we can obtain the
following hypothesis:
Hypothesis 1: Media coverage has a positive effect on the firms’ quality of internal
controls.
One important characteristic of internal controls is that they consist of internal
managerial processes that are not directly observable to investors. Investors’
judgment of the effectiveness of these internal controls is therefore based on the
information disclosed by managers.
From the shareholders’ perspective, higher levels of internal control disclosure
reduce investors’ information risk and lower their required rate of return (e.g.,
[3,6]). At the same time, higher levels of disclosure enable shareholders to
monitor managers more closely, resulting in fewer agency problems.
From the manager’s perspective, the decision to voluntarily disclose informa-
tion is based on a tradeoff between the expected benefits and costs of disclosing
[14]. The main benefit of voluntarily disclosing information on the firm’s inter-
nal controls is that it adds to managerial reputation building. Not having a
reputation for credible reporting not only reduces the effectiveness of the man-
ager’s communication efforts, but also adversely affects her reputation in the
managerial labor market [18]. Establishing a reputation for credible reporting
requires disclosure of accurate and timely information as well as of information
that is complete [14,20]. This means that career concerns incentivize a manager
to voluntarily disclose information, even if the information is not favorable [3].
Hammersley et al. [13] find a significantly negative abnormal return following
a firm’s announcement that the firm’s internal controls were not effective, but
more importantly also document that the adverse effects on returns are more
pronounced when the firm’s managers claim that the internal controls are effec-
tive but the independent auditor report indicates that they were not.
An important cost of voluntarily disclosing information on the firm’s internal
controls is that it may have legal consequences. If a manager discloses
inaccurate or incomplete information, she could be sued, face legal liability,
and owe damages [3]. Moreover, once this
matter is known to the public, it will also adversely influence the manager’s rep-
utation on the managerial labor market. To guarantee the accuracy and quality
3 Research Design
This article selects GEM (Growth Enterprise Market) firms from 2011 to 2015 as
the initial sample, and eliminates observations with missing data on the key
variables of this paper. We choose GEM firms because these firms, which in
most cases are small and young, are opaque in information disclosure and face
higher operating risks. Given this, the quality
of internal controls and information disclosure of GEM firms are of great impor-
tance for investors to understand their business situations. Besides, this article
uses balanced panel data, keeping only the firms with public financial data in
each year from 2011 to 2015. The final sample of this paper includes 151 GEM
firms, 755 observations in the period ranging from 2011 to 2015.
The corporate internal control quality index and its information disclosure
quality index derive from the DIB Internal Control and Risk Management Data-
base. Higher values of the two indexes represent higher qualities of the internal
control and information disclosure respectively. The data of media coverage is
from the News and Reports Database of WIND Info, which records the Chinese
listed firms’ daily news reported by more than 100 major financial newspapers
and websites in China, covering nearly all the relevant news about public firms
from major financial media in China. Using this database, we count the annual
number of news reports for each GEM listed firm, and construct a media coverage
indicator by taking the natural logarithm of one plus the number of news reports.
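The indicator construction just described amounts to:

```python
import math

# Media coverage indicator: natural log of one plus the annual number of
# news reports for a firm (zero coverage maps to an indicator of zero).
def media_coverage(num_reports):
    return math.log(1 + num_reports)

print(media_coverage(0))              # 0.0 for a firm with no coverage
print(round(media_coverage(99), 3))   # 4.605
```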
Referring to existing research results about determinants of internal control
quality and information disclosure quality (e.g., [7]), our paper includes a set of
basic financial characteristics and corporate governance variables as control vari-
ables. The basic financial characteristics variables include the firm size (natural
logarithm of total assets), asset-liability ratio, return on assets, and sales growth.
Table 2. The media coverage’s effects on the quality of internal control and internal
control information disclosure
                (1)        (2)        (3)        (4)        (5)        (6)
                IC         IC         IC         Disclosure Disclosure Disclosure
Media           0.099*     0.161**    0.169***   0.146**    0.168***   0.153**
                (1.89)     (2.15)     (2.66)     (2.13)     (2.72)     (2.49)
Size                       0.297***   0.137**               0.135**    0.131**
                           (3.86)     (2.27)                (2.15)     (1.98)
LEV                        -0.030     -0.029                -0.196*    -0.170
                           (-0.31)    (-0.30)               (-1.94)    (-1.49)
ROA                        0.256**    0.180**               0.245**    0.255**
                           (2.39)     (1.99)                (2.11)     (2.36)
Growth                     0.069      0.081                 0.195*     0.181
                           (0.89)     (1.11)                (1.71)     (1.50)
Ten                                   0.004                            0.111
                                      (0.04)                           (0.96)
Mshare                                0.104                            0.060
                                      (1.14)                           (0.65)
Herf_5                                -0.086                           0.039
                                      (-0.91)                          (0.34)
Separation                            -0.050                           0.070
                                      (-0.56)                          (0.89)
Constant        0.935***   -0.755***  0.549      1.116***   -0.645***  1.211***
                (18.40)    (-3.21)    (1.39)     (23.32)    (-2.89)    (2.71)
Industry dummy  YES        YES        YES        YES        YES        YES
N               755        755        755        755        755        755
R2              0.158      0.188      0.236      0.133      0.207      0.212
Note: ***, **, * denote significance at the 1%, 5% and 10% levels respectively.
The t-statistics reported in parentheses are based on standard errors clustered by firm.
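The star convention in the table's note can be reproduced with the usual two-sided normal-approximation cutoffs (a sketch; with t-statistics clustered over 151 firms, the paper's exact critical values may differ slightly):

```python
# Map a t-statistic to the table's significance stars: *** (1%), ** (5%),
# * (10%), using two-sided normal cutoffs 2.576, 1.960 and 1.645.
def stars(t):
    t = abs(t)
    if t >= 2.576:
        return "***"
    if t >= 1.960:
        return "**"
    if t >= 1.645:
        return "*"
    return ""

# The Media coefficients in columns (1)-(3) have t = 1.89, 2.15, 2.66:
print([stars(t) for t in (1.89, 2.15, 2.66)])  # ['*', '**', '***']
```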
that media coverage has a positive and significant effect on internal control qual-
ity. In column 2, we controlled the basic financial characteristics of the firms,
including firm Size (Size), asset-liability ratio (LEV), return on assets (ROA),
and the firms’ sales growth (Growth). The positive regression coefficient of media
coverage, with a value of 0.161, is significant at 5% level, indicating that the pos-
itive relationship between media coverage and internal control quality becomes
more apparent after controlling the influences of these variables of firms’ finan-
cial characteristics. In column 3, we further controlled the corporate governance
characteristics, including ownership concentration (the total share ratio of the
top 10 major shareholders), executives’ shareholding, equity separation degree,
and the separation of ownership and control. The results show that after the
control of corporate governance, media coverage shows an even greater
positive effect (β1 = 0.169, p < 0.01) on internal control quality. Thus, we
can conclude that media coverage does help improve the quality of the firm’s
internal control, which is consistent with Hypothesis 1 above.
Columns 4 to 6 use the quality of internal control information disclosure as the
dependent variable. Whether media coverage is used as an explanatory
variable alone, or the firm’s basic financial characteristics and corporate
governance characteristics are controlled for, media coverage retains a positive
effect on the quality of internal control information disclosure. Thus,
media supervision not only helps to improve the quality of a company’s
internal control, but also helps to improve the company’s internal control
information disclosure, thereby enhancing the company’s transparency and self-restraint.
In the above regressions, the measure of media coverage includes news reports
from the mainstream financial media channels in China, but excludes Wind
Info’s own news reports. Although Wind Info is not a common source of
information for retail investors, it is widely regarded by institutional investors and
analysts as one of the important sources of information. To test the reliability
of the previous empirical results, this paper conducts a robustness test using
an alternative measure of media coverage that includes Wind Info’s own news
reports. The regression results in Table 3 show that the main
conclusions of the paper do not change with this replacement of the media
coverage measure. With the alternative measure, denoted Media_A, the
regression results still show that media coverage has a positive effect on the
quality of internal control and on the quality of internal control information
disclosure, and this conclusion holds regardless of whether we control for
the company’s basic financial characteristics or corporate governance
variables.
Table 3. Robustness test with the alternative measure of media coverage
                (1)        (2)        (3)        (4)        (5)        (6)
                IC         IC         IC         Disclosure Disclosure Disclosure
Media_A         0.142*     0.186**    0.205**    0.129*     0.180**    0.193**
                (1.81)     (2.08)     (2.39)     (1.71)     (2.10)     (2.46)
Size                       0.207***   0.133**               0.132**    0.135**
                           (3.01)     (2.15)                (2.03)     (2.18)
LEV                        0.042      -0.012                -0.190*    -0.154
                           (0.43)     (-0.12)               (-1.81)    (-1.29)
ROA                        0.232**    0.182**               0.248**    0.260**
                           (2.18)     (2.07)                (2.16)     (2.48)
Growth                     0.035      0.017                 0.188      0.175
                           (0.35)     (0.17)                (1.62)     (1.43)
Ten                                   0.003                            0.105
                                      (0.03)                           (0.90)
Mshare                                0.117                            0.072
                                      (1.28)                           (0.78)
Herf_5                                -0.055                           0.068
                                      (-0.59)                          (0.58)
Separation                            -0.085                           0.093
                                      (-0.91)                          (1.00)
Constant        0.658***   -0.767***  0.728*     1.726***   -0.649***  1.375***
                (8.86)     (-3.26)    (1.75)     (19.84)    (-2.80)    (2.93)
N               755        755        755        755        755        755
R2              0.168      0.194      0.234      0.121      0.194      0.210
Note: ***, **, * denote significance at the 1%, 5% and 10% levels respectively.
The t-statistics reported in parentheses are based on standard errors clustered
by firm.
5 Conclusion
Based on the financial data of 151 listed companies in GEM from 2011 to 2015,
this paper analyzes the impact of media coverage on the internal control qual-
ity and its information disclosure quality. The results show that media coverage
positively relates to the quality of internal control and the quality of internal
control information disclosure, and the conclusion is still valid after control-
ling corporate financial characteristics and corporate governance characteristics.
This article provides new evidence for the corporate governance role of media
supervision from the perspective of corporate internal control. The conclusion of
this paper implies that, under China’s imperfect legal system and market
institutional environment, the media, as a supervision mechanism alternative
to the legal system, plays an important role in regulating the behavior
of listed companies and protecting the interests of small and medium sharehold-
ers. In the current stage of China’s market-oriented reform, strengthening the
external supervision role of the media is of great significance to improving cor-
porate governance of listed companies and protecting the interests of investors.
References
1. Ashbaugh-Skaife H, Collins DW et al (2008) The effect of sox internal control
deficiencies and their remediation on accrual quality. Acc Rev 83(1):217–250
2. Barber BM, Odean T (2008) All that glitters: the effect of attention and news
on the buying behavior of individual and institutional investors. Rev Finan Stud
21(2):785–818
3. Campbell JL, Chen H et al (2014) The information content of mandatory risk
factor disclosures in corporate filings. Rev Acc Stud 19(1):396–455
4. Chen Y, Knechel WR et al (2016) Board independence and internal control weak-
ness: evidence from sox 404 disclosures
5. Dai L, Parwada JT, Zhang B (2015) The governance effect of the media’s news
dissemination role: evidence from insider trading. J Acc Res 53(2):331–366
6. Dan D, Hogan C et al (2011) Internal control disclosures, monitoring, and the cost
of debt. Acc Rev 86(4):1131–1156
7. Doyle J, Ge W, Mcvay S (2007) Determinants of weaknesses in internal control
over financial reporting. J Acc Econ 44(1–2):193–223
8. Dyck A, Zingales L (2004) Private benefits of control: an international comparison.
J Finan 59(2):537–600
9. Dyck A, Zingales L (2008) The corporate governance role of the media: evidence
from Russia. J Finan 63(3):1093–1135
10. Ghoul SE, Guedhami O et al (2016) New evidence on the role of the media in
corporate social responsibility. J Bus Ethics 62:1–29
11. Guo J, Huang P et al (2015) The effect of employee treatment policies on inter-
nal control weaknesses and financial restatements. Soc Sci Electron Publishing
91(4):1167–1194
12. Haislip JZ, Peters GF, Richardson VJ (2016) The effect of auditor it expertise on
internal controls. Int J Acc Inf Syst 20:1–15
13. Hammersley JS, Myers LA, Shakespeare C (2008) Market reactions to the disclo-
sure of internal control weaknesses and to the characteristics of those weaknesses
under section 302 of the sarbanes oxley act of 2002. Rev Acc Stud 13(1):141–165
14. Healy PM, Palepu KG (2001) Information asymmetry, corporate disclosure, and
the capital markets: a review of the empirical disclosure literature. J Acc Econ
31(1–3):405–440
15. Joe JR, Louis H, Robinson D (2009) Managers’ and investors’ responses to media
exposure of board ineffectiveness. J Finan Quant Anal 44(3):579–605
16. Kim JB, Song BY, Zhang L (2011) Internal control weakness and bank loan con-
tracting: evidence from sox section 404 disclosures. Acc Rev 86(4):1157–1188
Can Media Supervision Improve the Quality of the Firms Internal Control? 1507
17. Kim JB, Yu Z, Zhang H (2016) Can media exposure improve stock price efficiency
in China and why? China J Acc Res 9(2):83–114
18. Kothari SP, Shu S, Wysocki PD (2009) Do managers withhold bad news? J Acc
Res 47(1):241–276
19. Maistriau EA, Bonardi JP (2014) How much does negative public exposure on envi-
ronmental issues increase environmental performance? Academy of Management
Annual Meeting Proceedings 2014(1):11328
20. Miller GS (2006) The press as a watchdog for accounting fraud. J Acc Res
44(5):1001–1033
21. Schroeder JH, Shepardson ML (2015) Do sox 404 control audits and management
assessments improve overall internal control system quality? Acc Rev A Q J Am
Acc Assoc 91:1513–1541
22. Zyglidopoulos SC, Georgiadis AP et al (2012) Does media attention drive corporate
social responsibility? J Bus Res 65(11):1622–1627
Fuzzy Chance Constrained Twin Support Vector
Machine for Uncertain Classification
1 Introduction
Nowadays, support vector machines (SVMs) are considered one of the most effective learning methods for classification, having emerged from research on statistical learning theory [10,14]. The main idea of this classification technique is to map the data into a higher-dimensional space with a kernel method and then determine a hyperplane separating the binary classes with maximal margin [9,19,26].
In recent years, SVM classification methods have made breakthrough progress and enjoyed great success in many fields. Mangasarian et al. [15] proposed the generalized eigenvalue proximal support vector machine (GEPSVM). Motivated by GEPSVM, Jayadeva et al. [12] proposed the twin support vector machine (TWSVM) for binary classification. The main idea of TWSVM is to generate two nonparallel planes such that each plane is closest to one class and as far as possible from the other. The ν-TWSVM [18] was proposed as an extension of TWSVM for handling outliers. Further extensions of the TWSVM can be found in [8,13,20].
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_127
For the methods mentioned above, the parameters in the training sets are implicitly assumed to be known exactly. However, in real-world applications the parameters are perturbed, since they are estimated from data subject to measurement and statistical errors [11]. When the data points are uncertain, different approaches have been proposed to formulate the traditional SVM with uncertainties. Bi et al. [5] assumed the data points are subject to an additive noise bounded in norm and proposed a very direct model; however, this model cannot guarantee generally good performance on the uncertainty set. To guarantee optimal performance while the worst-case constraints are still satisfied, robust optimization is utilized. Trafalis et al. [22,23,25] proposed a robust optimization model for the case where the perturbation of the uncertain data is bounded in norm. Robust optimization [4,21] is also used when the constraint is a chance constraint, which ensures a small probability of misclassification for the uncertain data. Based on different bounding inequalities, Ben-Tal et al. [1,2] employed moment information of the uncertain training points to develop different chance-constrained SVM models. However, to the best of our knowledge, no research has considered chance-constrained optimization in the TWSVM setting. It is therefore interesting and important to study the TWSVM with chance constraints for the uncertain data classification problem, and the main purpose of this paper is to make an attempt in this direction.
Combining the capability of chance constraints to handle uncertainty with the benefits of TWSVM, in this paper we propose a fuzzy chance constrained twin support vector machine (FCC-TWSVM). The main method of this paper is to use the moment information of the uncertain data to transform the model into a second-order cone program (SOCP). The rest of this paper is organized as follows. Section 2 recalls SVM and TWSVM briefly. In Sect. 3, we introduce the FCC-TWSVM model. Experimental results on uncertain data sets are presented in Sect. 4, and conclusions are provided in Sect. 5.
2 Preliminaries
In this section, we briefly recall some concepts of TWSVM and CC-TWSVM for the binary classification problem.
2.1 TWSVM

Let the rows of $A \in \mathbb{R}^{l_1\times n}$ be the training points of the positive class and the rows of $B \in \mathbb{R}^{l_2\times n}$ those of the negative class. TWSVM seeks two nonparallel hyperplanes

$$x^T w_+ + b_+ = 0 \quad \text{and} \quad x^T w_- + b_- = 0, \tag{1}$$

obtained from the pair of quadratic programs

$$\min_{w_+,b_+}\ \frac{1}{2}\|Aw_+ + e_+ b_+\|_2^2 + C_1 e_-^T\xi \quad \text{s.t. } -(Bw_+ + e_- b_+) + \xi \ge e_-,\ \xi \ge 0, \tag{2}$$

and

$$\min_{w_-,b_-}\ \frac{1}{2}\|Bw_- + e_- b_-\|_2^2 + C_2 e_+^T\eta \quad \text{s.t. } (Aw_- + e_+ b_-) + \eta \ge e_+,\ \eta \ge 0, \tag{3}$$

where $C_1, C_2$ are positive numbers and $e_+, e_-$ are vectors of ones of corresponding dimensions. The nonparallel hyperplanes of Eq. (1) are obtained by solving Eqs. (2) and (3). A new point $x$ is then assigned to the class $r$ whose hyperplane it is nearest to:

$$\operatorname{class}(x) = \arg\min_{r=+,-}\,|x^T w_r + b_r|. \tag{4}$$
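Once problems (2) and (3) are solved, the decision rule in Eq. (4) is a simple nearest-plane test. A minimal Python sketch (the two planes below are hand-picked illustrative values, not the output of the optimization):

```python
# Nearest-plane decision rule of TWSVM (Eq. (4)): assign x to the class
# whose hyperplane x^T w_r + b_r = 0 it lies closest to.

def twsvm_predict(x, w_pos, b_pos, w_neg, b_neg):
    d_pos = abs(sum(xi * wi for xi, wi in zip(x, w_pos)) + b_pos)
    d_neg = abs(sum(xi * wi for xi, wi in zip(x, w_neg)) + b_neg)
    return 1 if d_pos <= d_neg else -1

# Illustrative planes: positive class clustered near x2 = 0, negative near x2 = 3.
w_pos, b_pos = [0.0, 1.0], 0.0    # plane x2 = 0
w_neg, b_neg = [0.0, 1.0], -3.0   # plane x2 = 3

print(twsvm_predict([1.0, 0.2], w_pos, b_pos, w_neg, b_neg))  # prints 1
print(twsvm_predict([1.0, 2.9], w_pos, b_pos, w_neg, b_neg))  # prints -1
```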
When uncertainty exists in the data points, the TWSVM model needs to be modified to incorporate the uncertain information. Suppose there are $l_1$ and $l_2$ training points in $\mathbb{R}^n$. Let $\tilde{A}_i = [\tilde{A}_{i1}, \cdots, \tilde{A}_{in}]$, $i = 1, \cdots, l_1$, denote the uncertain data points with positive label $+1$, and let $\tilde{B}_i = [\tilde{B}_{i1}, \cdots, \tilde{B}_{in}]$, $i = 1, \cdots, l_2$, denote the uncertain data points with negative label $-1$. Then $\tilde{A} = [\tilde{A}_1, \cdots, \tilde{A}_{l_1}]^T$ and $\tilde{B} = [\tilde{B}_1, \cdots, \tilde{B}_{l_2}]^T$ represent the two data sets. The chance-constrained program ensures a small probability of misclassification for the uncertain data. The chance-constrained TWSVM (CC-TWSVM) formulation is

$$\min_{w_+,b_+}\ \frac{1}{2}E\{\|\tilde{A}w_+ + e_+ b_+\|_2^2\} + C_1\sum_{i=1}^{l_1}\xi_i \quad \text{s.t. } P\{-(\tilde{B}_i w_+ + b_+) \le 1-\xi_i\} \le \varepsilon,\ \ \xi_i \ge 0,\ i=1,\cdots,l_1, \tag{5}$$

and

$$\min_{w_-,b_-}\ \frac{1}{2}E\{\|\tilde{B}w_- + e_- b_-\|_2^2\} + C_2\sum_{i=1}^{l_2}\eta_i \quad \text{s.t. } P\{(\tilde{A}_i w_- + b_-) \le 1-\eta_i\} \le \varepsilon,\ \ \eta_i \ge 0,\ i=1,\cdots,l_2, \tag{6}$$

where $E\{\cdot\}$ denotes the expectation under the corresponding distribution, $C_1, C_2$ are positive numbers, $e_+, e_-$ are vectors of ones of corresponding dimensions, $0 < \varepsilon < 1$ is a parameter close to 0, and $P\{\cdot\}$ is the probability under the corresponding distribution. The model ensures an upper bound on the misclassification probability.
3 FCC-TWSVM

By weighting the contributions of different misclassified points with fuzzy memberships, the chance-constrained program is made to minimize the misclassification rate for the uncertain data. The fuzzy chance-constrained TWSVM (FCC-TWSVM) formulation is
$$\min_{w_+,b_+}\ \frac{1}{2}E\{\|\tilde{A}w_+ + e_+ b_+\|_2^2\} + C_1\sum_{i=1}^{l_1}t_i\xi_i \quad \text{s.t. } P\{-(\tilde{B}_i w_+ + b_+) \le 1-\xi_i\} \le \varepsilon,\ \ \xi_i \ge 0,\ i=1,\cdots,l_1, \tag{7}$$

and

$$\min_{w_-,b_-}\ \frac{1}{2}E\{\|\tilde{B}w_- + e_- b_-\|_2^2\} + C_2\sum_{i=1}^{l_2}t_i\eta_i \quad \text{s.t. } P\{(\tilde{A}_i w_- + b_-) \le 1-\eta_i\} \le \varepsilon,\ \ \eta_i \ge 0,\ i=1,\cdots,l_2, \tag{8}$$
where $E\{\cdot\}$ denotes the expectation under the corresponding distribution, $C_1, C_2$ are positive numbers, $t_i \in (0, 1]$ denotes the fuzzy membership of the positive and negative samples, $0 < \varepsilon < 1$ is a parameter close to 0, and $P\{\cdot\}$ is the probability under the corresponding distribution. The model ensures an upper bound on the misclassification probability, but the two quadratic optimization problems (7) and (8) with chance constraints are typically non-convex, so the model is very hard to solve.
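The construction of the memberships $t_i$ is not restated here; one common heuristic from the fuzzy SVM literature [14] grades each training point by its distance to its class center so that outliers receive small weights. A sketch of that heuristic (an assumption for illustration, not necessarily the construction used in this paper):

```python
import math

def fuzzy_memberships(points, delta=1e-6):
    """Distance-based fuzzy memberships in (0, 1]: points near the class
    center get weights close to 1, outliers get weights close to 0."""
    dim = len(points[0])
    center = [sum(p[j] for p in points) / len(points) for j in range(dim)]
    dists = [math.dist(p, center) for p in points]
    r = max(dists)
    return [1.0 - d / (r + delta) for d in dists]

pts = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [3.0, 3.0]]  # last point is an outlier
t = fuzzy_memberships(pts)
print(t.index(min(t)))  # prints 3: the outlier gets the smallest membership
```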
The standard way to deal with such chance constraints is to replace them using different bounding inequalities. When the mean and covariance matrix of the uncertain data points are known, the multivariate Chebyshev bound [3,16,17] used in robust optimization can express the chance constraints under special conditions [4,21].
Lemma 1 [3,16]. Let $X \sim (\mu, \Sigma)$ denote a random vector $X$ with mean $\mu$ and covariance matrix $\Sigma$. The multivariate Chebyshev inequality states that for any closed convex set $S$, the supremum of the probability that $X$ takes a value in $S$ is

$$\sup_{X\sim(\mu,\Sigma)} P\{X \in S\} = \frac{1}{1+d^2}, \qquad d^2 = \inf_{X\in S}\,(X-\mu)^T\Sigma^{-1}(X-\mu). \tag{9}$$
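Lemma 1 can be sanity-checked numerically: for any particular distribution with the stated moments (a Gaussian, say), the probability of landing in a closed convex set $S$ cannot exceed $1/(1+d^2)$. The sketch below uses a half-space $S = \{x : a^T x \le c\}$ in $\mathbb{R}^2$, for which $d^2 = (a^T\mu - c)^2/(a^T\Sigma a)$ when $\mu$ lies outside $S$; all numbers are illustrative:

```python
import random

random.seed(0)

mu = [2.0, 0.0]              # mean, chosen to lie outside S
a, c = [1.0, 0.0], 0.0       # half-space S = {x : a^T x <= c}, i.e. x1 <= 0
# Identity covariance: independent unit-variance coordinates.

# Closed-form Mahalanobis distance from mu to the half-space (valid: a^T mu > c).
d2 = (mu[0] * a[0] + mu[1] * a[1] - c) ** 2 / (a[0] ** 2 + a[1] ** 2)
bound = 1.0 / (1.0 + d2)     # Chebyshev bound of Lemma 1: here 1/5 = 0.2

# Monte Carlo estimate of P{X in S} for a Gaussian with these moments.
n = 100_000
hits = sum(1 for _ in range(n)
           if random.gauss(mu[0], 1.0) * a[0] + random.gauss(mu[1], 1.0) * a[1] <= c)
print(hits / n, "<=", bound)  # the empirical probability respects the bound
```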
Theorem 1. Assume the first and second moment information of the random variables $\tilde{A}_i$ and $\tilde{B}_i$ is known. Let $\mu_i^+ = E[\tilde{A}_i]$ and $\mu_i^- = E[\tilde{B}_i]$ be the mean vectors, and let $\Sigma_i^+ = E[(\tilde{A}_i - \mu_i^+)^T(\tilde{A}_i - \mu_i^+)]$ and $\Sigma_i^- = E[(\tilde{B}_i - \mu_i^-)^T(\tilde{B}_i - \mu_i^-)]$ be the covariance matrices of the uncertain points of the two data sets, respectively. Then problems (7) and (8) can be reformulated respectively as

$$\min_{w_+,b_+}\ \frac{1}{2}w_+^T G^+ w_+ + w_+^T {\mu^+}^T b_+ + \frac{1}{2}l_1 b_+^2 + C_1\sum_{i=1}^{l_1}t_i\xi_i \quad \text{s.t. } -(\mu_i^- w_+ + b_+) \ge 1 - \xi_i + k\,\|{\Sigma_i^-}^{1/2} w_+\|,\ \ \xi_i \ge 0, \tag{10}$$

and

$$\min_{w_-,b_-}\ \frac{1}{2}w_-^T G^- w_- + w_-^T {\mu^-}^T b_- + \frac{1}{2}l_2 b_-^2 + C_2\sum_{i=1}^{l_2}t_i\eta_i \quad \text{s.t. } \mu_i^+ w_- + b_- \ge 1 - \eta_i + k\,\|{\Sigma_i^+}^{1/2} w_-\|,\ \ \eta_i \ge 0, \tag{11}$$

where $k = \sqrt{(1-\varepsilon)/\varepsilon}$ and

$$G^+ = \sum_{i=1}^{l_1}\big({\mu_i^+}^T\mu_i^+ + \Sigma_i^+\big), \quad \mu^+ = \sum_{i=1}^{l_1}\mu_i^+, \qquad G^- = \sum_{i=1}^{l_2}\big({\mu_i^-}^T\mu_i^- + \Sigma_i^-\big), \quad \mu^- = \sum_{i=1}^{l_2}\mu_i^-.$$
Proof. We first prove that problem (7) can be reformulated as Eq. (10). In fact, it follows from Eq. (7) that

$$\begin{aligned}
&\min_{w_+,b_+}\ \frac{1}{2}E\{\|\tilde{A}w_+ + e_+ b_+\|_2^2\} + C_1\sum_{i=1}^{l_1}t_i\xi_i\\
&\quad= \min_{w_+,b_+}\ \frac{1}{2}E\Big\{\sum_{i=1}^{l_1}(\tilde{A}_i w_+)^2 + 2\sum_{i=1}^{l_1}\tilde{A}_i w_+ b_+ + l_1 b_+^2\Big\} + C_1\sum_{i=1}^{l_1}t_i\xi_i\\
&\quad= \min_{w_+,b_+}\ \frac{1}{2}w_+^T E\Big\{\sum_{i=1}^{l_1}\tilde{A}_i^T\tilde{A}_i\Big\}w_+ + \sum_{i=1}^{l_1}E\{\tilde{A}_i\}w_+ b_+ + \frac{1}{2}l_1 b_+^2 + C_1\sum_{i=1}^{l_1}t_i\xi_i\\
&\quad= \min_{w_+,b_+}\ \frac{1}{2}w_+^T \sum_{i=1}^{l_1}E\{\tilde{A}_i^T\tilde{A}_i\}w_+ + \sum_{i=1}^{l_1}E\{\tilde{A}_i\}w_+ b_+ + \frac{1}{2}l_1 b_+^2 + C_1\sum_{i=1}^{l_1}t_i\xi_i\\
&\quad= \min_{w_+,b_+}\ \frac{1}{2}w_+^T \sum_{i=1}^{l_1}\big(E\{\tilde{A}_i\}^T E\{\tilde{A}_i\} + \Sigma_i^+\big)w_+ + \sum_{i=1}^{l_1}E\{\tilde{A}_i\}w_+ b_+ + \frac{1}{2}l_1 b_+^2 + C_1\sum_{i=1}^{l_1}t_i\xi_i\\
&\quad= \min_{w_+,b_+}\ \frac{1}{2}w_+^T \sum_{i=1}^{l_1}\big({\mu_i^+}^T\mu_i^+ + \Sigma_i^+\big)w_+ + \sum_{i=1}^{l_1}\mu_i^+ w_+ b_+ + \frac{1}{2}l_1 b_+^2 + C_1\sum_{i=1}^{l_1}t_i\xi_i\\
&\quad= \min_{w_+,b_+}\ \frac{1}{2}w_+^T G^+ w_+ + w_+^T{\mu^+}^T b_+ + \frac{1}{2}l_1 b_+^2 + C_1\sum_{i=1}^{l_1}t_i\xi_i,
\end{aligned}$$

where $G^+ = \sum_{i=1}^{l_1}\big({\mu_i^+}^T\mu_i^+ + \Sigma_i^+\big)$ and $\mu^+ = \sum_{i=1}^{l_1}\mu_i^+$.
Moreover, for the constraint of (7), we know that the set $\{-(Xw_+ + b_+) \le 1 - \xi_i\}$ is a half-space produced by a hyperplane, and so it is a closed convex set. Using Lemma 1, we obtain

$$\sup_{\tilde{B}_i\sim(\mu_i^-,\Sigma_i^-)} P\{-(\tilde{B}_i w_+ + b_+) \le 1-\xi_i\} = \frac{1}{1+d^2}, \qquad d^2 = \inf_{-(Xw_+ + b_+)\le 1-\xi_i}\,(X-\mu_i^-)^T\,{\Sigma_i^-}^{-1}\,(X-\mu_i^-).$$

If $-(\mu_i^- w_+ + b_+) \le 1 - \xi_i$, then $d = 0$ and

$$\sup_{\tilde{B}_i\sim(\mu_i^-,\Sigma_i^-)} P\{-(\tilde{B}_i w_+ + b_+) \le 1-\xi_i\} = 1;$$

otherwise the infimum is attained on the bounding hyperplane, which gives

$$d^2 = \frac{\big(-(\mu_i^- w_+ + b_+) - 1 + \xi_i\big)^2}{w_+^T\,\Sigma_i^-\,w_+}.$$

For the constraint of (7), requiring

$$\sup_{X\sim(\mu_i^-,\Sigma_i^-)} P\{-(Xw_+ + b_+) \le 1-\xi_i\} \le \varepsilon$$

is thus equivalent to $1/(1+d^2) \le \varepsilon$, i.e. $d \ge k = \sqrt{(1-\varepsilon)/\varepsilon}$, which yields the second-order cone constraint of Eq. (10).
The Lagrangian of problem (10) is

$$L(w_+,b_+,\xi,\lambda,\beta) = \frac{1}{2}w_+^T G^+ w_+ + w_+^T{\mu^+}^T b_+ + \frac{1}{2}l_1 b_+^2 + C_1\sum_{i=1}^{l_1}t_i\xi_i - \sum_{i=1}^{l_1}\lambda_i\Big[-(\mu_i^- w_+ + b_+) - 1 + \xi_i - k\,\|{\Sigma_i^-}^{1/2}w_+\|\Big] - \sum_{i=1}^{l_1}\beta_i\xi_i, \tag{15}$$

where $\lambda_i, \beta_i \ge 0$.
Recall that for any $x \in \mathbb{R}^n$ we have $\|x\| = \max_{\|\nu\| \le 1} x^T\nu$. The equivalent model of Eq. (15) is then given as follows:

$$L_1(w_+,b_+,\xi,\lambda,\beta,\nu) = \frac{1}{2}w_+^T G^+ w_+ + w_+^T{\mu^+}^T b_+ + \frac{1}{2}l_1 b_+^2 + C_1\sum_{i=1}^{l_1}t_i\xi_i - \sum_{i=1}^{l_1}\lambda_i\Big[-(\mu_i^- w_+ + b_+) - 1 + \xi_i - k\,({\Sigma_i^-}^{1/2}w_+)^T\nu\Big] - \sum_{i=1}^{l_1}\beta_i\xi_i, \tag{16}$$

where $\|\nu\| \le 1$.
Similar to the discussion in Sect. 5 of [6], solving Eq. (10) is equivalent to finding the saddle-point of the Lagrangian $L_1$. This fact, combined with convexity, implies

$$\min_{w_+,b_+,\xi}\ \max_{\lambda,\beta,\nu}\ L_1(w_+,b_+,\xi,\lambda,\beta,\nu) = \max_{\lambda,\beta,\nu}\ \min_{w_+,b_+,\xi}\ L_1(w_+,b_+,\xi,\lambda,\beta,\nu). \tag{17}$$

By eliminating the primal variables in Eq. (17), we can obtain the dual problem.
Taking partial derivatives of $L_1$ with respect to $w_+$, $b_+$, and $\xi$, respectively, one has

$$\begin{cases}
\dfrac{\partial L_1(w_+,b_+,\xi,\lambda,\beta)}{\partial w_+} = G^+ w_+ + {\mu^+}^T b_+ + \displaystyle\sum_{i=1}^{l_1}\lambda_i{\mu_i^-}^T + k\sum_{i=1}^{l_1}\lambda_i{\Sigma_i^-}^{1/2}\nu,\\[2mm]
\dfrac{\partial L_1(w_+,b_+,\xi,\lambda,\beta)}{\partial b_+} = w_+^T{\mu^+}^T + l_1 b_+ + \displaystyle\sum_{i=1}^{l_1}\lambda_i,\\[2mm]
\dfrac{\partial L_1(w_+,b_+,\xi,\lambda,\beta)}{\partial \xi_i} = C_1 t_i - \lambda_i - \beta_i.
\end{cases} \tag{18}$$
Since $H^+$ is positive definite, it is easy to see that the solution of Eq. (20) can be obtained as

$$w_+ = -H_1^+\Big(\sum_{i=1}^{l_1}\lambda_i{\mu_i^-}^T + k\sum_{i=1}^{l_1}\lambda_i{\Sigma_i^-}^{1/2}\nu\Big) = H_1^+ s^+, \qquad b_+ = -H_2^+\Big(\sum_{i=1}^{l_1}\lambda_i{\mu_i^-}^T + k\sum_{i=1}^{l_1}\lambda_i{\Sigma_i^-}^{1/2}\nu\Big) = H_2^+ s^+, \tag{21}$$

where ${H^+}^{-1} = [H_1^+, H_2^+]$ and $s^+ = -\sum_{i=1}^{l_1}\lambda_i\big({\mu_i^-}^T + k\,{\Sigma_i^-}^{1/2}\nu\big)$.

According to Eqs. (16), (19) and (21), we can obtain the dual problem of Eq. (10) as follows:

$$\begin{aligned}
\max_{\lambda,\nu}\ &\sum_{i=1}^{l_1}\lambda_i - \frac{1}{2}{s^+}^T{H_1^+}^T G^+ H_1^+ s^+ - \frac{1}{2}l_1\,{s^+}^T{H_2^+}^T H_2^+ s^+ - {s^+}^T{H_1^+}^T{\mu^+}^T H_2^+ s^+\\
\text{s.t. }\ & s^+ = -\sum_{i=1}^{l_1}\lambda_i\big({\mu_i^-}^T + k\,{\Sigma_i^-}^{1/2}\nu\big),\\
& 0 \le \lambda_i \le C_1 t_i, \quad \|\nu\| \le 1.
\end{aligned}$$

Similarly, we know that the dual problem of Eq. (11) can be expressed as Eq. (14). This completes the proof.
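The practical content of Theorem 1 is that each probabilistic constraint of (7) becomes a deterministic second-order cone inequality that a solver can impose directly. A small sketch of checking such a constraint, with illustrative moments (each $\Sigma_i^-$ taken diagonal so the norm term stays elementary) and hypothetical values of $w_+$, $b_+$:

```python
import math

def soc_constraint_ok(mu_neg, sigma_diag, w, b, xi, eps):
    """Check the SOC constraint of Eq. (10):
    -(mu_i^- w + b) >= 1 - xi + k * ||Sigma_i^{1/2} w||,  k = sqrt((1-eps)/eps).
    sigma_diag holds the diagonal of Sigma_i^- (illustrative simplification)."""
    k = math.sqrt((1.0 - eps) / eps)
    lhs = -(sum(m * wi for m, wi in zip(mu_neg, w)) + b)
    norm = math.sqrt(sum(s * wi * wi for s, wi in zip(sigma_diag, w)))
    return lhs >= 1.0 - xi + k * norm

# Illustrative numbers: the negative-class mean sits well on the negative side.
mu_neg = [-3.0, 0.0]
sigma_diag = [0.05, 0.05]
w, b, xi, eps = [1.0, 0.0], 0.0, 0.0, 0.1   # hypothetical solution candidate

print(soc_constraint_ok(mu_neg, sigma_diag, w, b, xi, eps))  # prints True
```

Note that a smaller ε enlarges k and makes the constraint harder to satisfy, which is exactly how the model trades margin for misclassification probability.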
4 Numerical Experiments
In this section, our FCC-TWSVM model is illustrated by numerical tests on two types of data sets. The first test certifies the performance of FCC-TWSVM on artificial data; in the second, we evaluate the FCC-TWSVM model on real-world classification data sets from the UCI Machine Learning Repository. All results were averaged over 10 train-test experiments and were produced in Matlab R2012a on a 2.5 GHz CPU with 2.5 GB usable RAM. The SeDuMi solver is employed to solve the SOCP problems of FCC-TWSVM.
4.3 Application

In applications the moments must be estimated from samples: the unbiased sample covariance

$$S_i = \frac{1}{N-1}\sum_{k=1}^{N}(x_{ik} - \bar{x}_i)(x_{ik} - \bar{x}_i)^T$$

is used in place of the true covariance

$$\Sigma_i = E[(x_i - \mu_i)(x_i - \mu_i)^T].$$
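A minimal sketch of those sample estimates in pure Python (the data below are illustrative; in the experiments the estimates come from the training samples themselves):

```python
def sample_mean_cov(samples):
    """Sample mean and unbiased sample covariance
    S = 1/(N-1) * sum_k (x_k - xbar)(x_k - xbar)^T."""
    n_samples, dim = len(samples), len(samples[0])
    mean = [sum(x[j] for x in samples) / n_samples for j in range(dim)]
    cov = [[sum((x[j] - mean[j]) * (x[m] - mean[m]) for x in samples) / (n_samples - 1)
            for m in range(dim)] for j in range(dim)]
    return mean, cov

data = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]   # illustrative, perfectly correlated
mean, cov = sample_mean_cov(data)
print(mean)  # prints [2.0, 4.0]
print(cov)   # prints [[1.0, 2.0], [2.0, 4.0]]
```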
However, these estimates can introduce errors. Some special cases have been proposed for when the mean vector μ_i and covariance matrix Σ_i are not exactly known; Pardalos et al. [24] discussed ways of processing these special cases. In our experiments, similar to [24], we employ the mentioned methods to modify the estimation and make the result easier to interpret.

Since the data sets are uncertain, the performance measures are worth discussing. Ben-Tal et al. [2] proposed using the nominal error and the optimal error to evaluate performance. In our experiments, we use these indices to measure the accuracy of our model.
The expression for NomErr is

$$\mathrm{NomErr} = \frac{\sum_i 1_{\{y_i^{\mathrm{pre}} \ne y_i\}}}{\text{the amount of training data}} \times 100\%.$$
The optimal error is based on the probability of misclassification. The chance constraints in models (7) and (8) can be reformulated into Eqs. (10) and (11), from which we can derive the least feasible value of $\varepsilon$, called $\varepsilon_{\mathrm{opt}}$. The OptErr of data point $x_i$ is then

$$\mathrm{OptErr} = \begin{cases} 1 & \text{if } y_i^{\mathrm{pre}} \ne y_i,\\ \varepsilon_{\mathrm{opt}} & \text{if } y_i^{\mathrm{pre}} = y_i. \end{cases}$$
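Both measures are straightforward to compute once the predictions and $\varepsilon_{\mathrm{opt}}$ are available; a sketch with illustrative labels:

```python
def nom_err(y_true, y_pred):
    """Nominal error: percentage of misclassified training points."""
    wrong = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return 100.0 * wrong / len(y_true)

def opt_err(y_true, y_pred, eps_opt):
    """Per-point optimal error: 1 on a miss, eps_opt on a correct prediction."""
    return [1.0 if t != p else eps_opt for t, p in zip(y_true, y_pred)]

y_true = [1, 1, -1, -1]
y_pred = [1, -1, -1, -1]              # one misclassification
print(nom_err(y_true, y_pred))        # prints 25.0
print(opt_err(y_true, y_pred, 0.05))  # prints [0.05, 1.0, 0.05, 0.05]
```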
The average results for the VC data set and the Ionosphere data set are shown in Fig. 2. The boxplots of the results for the two data sets show that the misclassification rate decreases as ε is reduced. In addition, the OptErr is always larger than the NomErr, and the experiment time is stable across different values of the parameter ε.
5 Conclusions
This paper studied twin support vector machine classification when the data points are statistically uncertain. A new fuzzy chance constrained twin support vector machine (FCC-TWSVM) was proposed via a chance constrained programming formulation, which can efficiently handle data with measurement noise. With some properties of the distribution known, the FCC-TWSVM model ensures a small probability of misclassification for the uncertain data. The FCC-TWSVM model can be transformed into a second-order cone program (SOCP) using the moment information of the uncertain data, and the dual problem of the SOCP model was also introduced; the twin hyperplanes are then obtained by solving the dual problem. In addition, we demonstrated the performance of the FCC-TWSVM model on artificial and real data through numerical experiments. In future work we will consider how to make the model more robust; handling nonlinear classification with chance constraints is also of interest.
References
1. Ben-Tal A, Nemirovski A (2008) Selected topics in robust convex optimization.
Math Program 112(1):125–158
2. Ben-Tal A, Bhadra S et al (2011) Chance constrained uncertain classification via
robust optimization. Math Program 127(1):145–173
3. Bertsimas D, Popescu I (2005) Optimal inequalities in probability theory: a convex
optimization approach. SIAM J Optim 15(3):780–804
4. Bhattacharyya C, Grate LR et al (2004) Robust sparse hyperplane classifiers: appli-
cation to uncertain molecular profiling data. J Comput Biol 11(6):1073–1089
5. Bi J, Zhang T (2004) Support vector classification with input data uncertainty.
Proc Neural Inf Process Syst 17:161–168
6. Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press,
Cambridge
7. Cao Q, Lu Y et al (2013) The roles of bridging and bonding in social media
communities. J Assoc Inf Sci Technol 64(8):1671–1681
8. Carrasco M, López J, Maldonado S (2016) A second-order cone programming for-
mulation for nonparallel hyperplane support vector machine. Expert Syst Appl
54(C):95–104
9. Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM
Trans Intell Syst Technol 2(3), Article 27
10. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297
11. Goldfarb D, Iyengar G (2003) Robust convex quadratically constrained programs.
Math Program 97(3):495–515
12. Jayadeva, Khemchandani R, Chandra S (2007) Twin support vector machines for
pattern classification. IEEE Trans Pattern Anal Mach Intell 29(5):905–910
13. Lee YJ, Mangasarian OL (2001) SSVM: a smooth support vector machine for
classification. Comput Optim Appl 20(1):5–22
14. Lin CF, Wang SD (2002) Fuzzy support vector machines. IEEE Trans Neural Netw
13(2):464–471
15. Mangasarian OL, Wild EW (2006) Multisurface proximal support vector machine
classification via generalized eigenvalues. IEEE Trans Pattern Anal Mach Intell
28(1):69–74
16. Marshall AW, Olkin I (1960) Multivariate Chebyshev inequalities. Ann Math Stat
31(4):1001–1014
17. Nemirovski A, Shapiro A (2006) Convex approximations of chance constrained
programs. SIAM J Optim 17(4):969–996
18. Peng X (2010) A v-twin support vector machine (v-TWSVM) classifier and its
geometric algorithms. Inf Sci 20(180):3863–3875
19. Scholkopf B, Smola AJ (2003) Learning with kernels: support vector machines,
regularization, optimization, and beyond. MIT Press, Cambridge
20. Shao YH, Deng NY (2012) A coordinate descent margin based-twin support vector
machine for classification. Neural Netw Official J Int Neural Netw Soc 25(1):114–
121
21. Shivaswamy PK, Bhattacharyya C, Smola AJ (2006) Second order cone program-
ming approaches for handling missing and uncertain data. J Mach Learn Res
7(7):1283–1314
22. Trafalis T, Gilbert R (2007) Robust support vector machine for classification and
computational issues. Optim Methods Softw 22(1):187–198
23. Trafalis TB, Gilbert RC (2006) Robust classification and regression using support
vector machines. Eur J Oper Res 173(3):893–909
24. Wang X, Fan N, Pardalos PM (2015) Robust chance-constrained support vector
machines with second-order moment information. Ann Oper Res 253:1–24
25. Xanthopoulos P, Pardalos PM et al (2013) Robust data mining. Springer briefs in
optimization
26. Yang B, Wang MH et al (2016) Ramp loss quadratic support vector machine for
classification. Nonlinear Anal Forum 1(21):101–115
A Projection Pursuit Combined Method
for PPP Risk Evaluation
1 Introduction
In recent years, the PPP (Public-Private Partnership) pattern has been introduced to China as a new mode, aiming to attract foreign capital to participate in infrastructure construction projects, to make up for the funding shortfall of the public sector, and to improve operational efficiency. The mode is thus widely regarded as a new infrastructure project financing tool.
However, due to the long cooperation process, the wide investment scale, and the large number of participants, PPP projects have more potential and complex risk factors over the whole life cycle than average engineering projects, and risk assessment is a crucial link in the risk management process. Currently, however, few domestic cases have involved the implementation of PPP projects, so PPP risk assessment mostly depends on experts' subjective judgment, such as the fuzzy mathematical evaluation method and the decision tree method often used in related research and practice [3].
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_128
In recent research, the methods for PPP assessment are mainly quantitative assessment methods. For example, a fuzzy-AHP-based risk assessment and a fuzzy synthetic evaluation approach were established to assess risk factors in a PPP expressway project in China [8]. Another study identifies the risk factors in PPP projects through a comprehensive literature review and then introduces fuzzy logic into the pairwise comparison [12]. A few studies deal with uncertainty by presenting several estimates based on different values of the exogenous inputs using sensitivity analysis. Sensitivity analysis is a typical way to address uncertainty via variations in key inputs; it provides insight into what happens if some variables' values differ from the base case [7].
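The one-at-a-time sensitivity analysis described above can be sketched with a toy NPV model (all cashflows and rates are illustrative, not taken from any cited project):

```python
def npv(discount_rate, cashflows):
    """Net present value of yearly cashflows, year 0 first."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cashflows))

base = [-100.0, 30.0, 30.0, 30.0, 30.0, 30.0]   # illustrative project cashflows

# Vary one key input (the discount rate) and observe the evaluation index.
for rate in (0.06, 0.08, 0.10):
    print(rate, round(npv(rate, base), 2))      # NPV shrinks as the rate rises
```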
As research shows, system dynamics (SD) modelling aims at developing a
dynamic model to assess demand risk by evaluating how different variables
jointly affect demand for services provided by PPP infrastructure projects. The
research objectives thus involve identifying, understanding, mapping and mea-
suring these variables and their complex interrelations [1].
Fuzzy synthetic evaluation (FSE) is a branch of fuzzy set theory (FST), and a number of researchers have attempted to exploit FSE in the construction project risk management field. Yeung et al. present a model to assess the level of risk of PPP toll road projects in which experiential knowledge based on linguistic variables was incorporated into the analysis using FSE [2].
In summary, current PPP project risk assessment faces a series of complex problems. The original project data involve various risk factors with no obvious functional relationships between them, and it is difficult to describe PPP risks precisely with an accurate mathematical model, since most risk assessment methods are limited by subjective factors. Moreover, risk assessment is a high-dimensional problem that current assessment methods are unable to solve. Good risk management therefore needs a risk estimation model that can convert the high-dimensional problem into a low-dimensional one.
Considering the deficiencies of the existing evaluation methods, we present the projection pursuit regression (PPR) method, based on data analysis, as an exploratory method of risk assessment. The method converts the high-dimensional risk variables into low-dimensional projection variables while building the relationship between the projection variables and the economic evaluation index variables [13].
The realization of the new method is further elaborated in four parts. Section 2 introduces a PPP risk assessment method with PPR; Sect. 3 presents the building and parameter optimization process of the new PPP risk assessment model, followed by a case study applying the model to assess the risks of an ecological landscape PPP project in Sect. 4. We present our conclusions in Sect. 5.
The FSE method is capable of handling imprecise, non-numerical (linguistic) terms and accounting for the fuzziness in expert knowledge that typifies risk assessment [11]. As Fig. 1 shows, a hierarchical analysis approach is utilized for assessing the risk level of the principal risk factors.
Fig. 1. The hierarchical structure of project risk: the risk factors x1, x2, · · · , xn, weighted by w1, w2, · · · , wn, aggregate through the factor layers into the comprehensive risk membership P via the first-class weights b1, b2.
where $x_i$ represents the $i$th risk factor of the risk factor layer ($i = 1, 2, \cdots, n$); $r_1, r_2, r_3$ represent the second-class (sub-)factor layer; $R_1, R_2$ represent the first-class risk factor layer; and $P$ represents the comprehensive risk membership of the project. $w_i$ ($i = 1, 2, \cdots, n$) represents the $i$th risk factor's weight in the second-class risk factor layer, $c_1, c_2, c_3$ represent the second-class risk factors' weights in the first-class factor layer, and $b_1, b_2$ represent the first-class factors' weights in the comprehensive risk membership of the project. The relationships among these parameters can be presented as

$$X = \{x_1, x_2, x_3, \cdots, x_n\}, \tag{1}$$
$$r = \{r_1, r_2, r_3\}, \tag{2}$$
$$R = \{R_1, R_2\}, \tag{3}$$
$$C = \{c_1, c_2, c_3\}, \tag{4}$$
$$B = \{b_1, b_2\}, \tag{5}$$
$$W = \{w_1, w_2, \cdots, w_n\}. \tag{6}$$

And then

$$r = W \times X, \tag{7}$$
$$R = r \times C, \tag{8}$$
$$P = R \times B. \tag{9}$$
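The chain of products in Eqs. (7)–(9) is repeated weighted aggregation from the factor layer up to the comprehensive membership $P$. A minimal sketch with a reduced illustrative hierarchy (four factors, two groups, one top layer; all weights and memberships are made up for the example):

```python
def weighted_agg(weights, values):
    """One aggregation layer: sum_i w_i * v_i."""
    return sum(w * v for w, v in zip(weights, values))

x = [0.8, 0.4, 0.6, 0.2]             # factor-layer risk memberships
w_groups = [[0.7, 0.3], [0.5, 0.5]]  # factor weights within each group
r = [weighted_agg(w_groups[0], x[:2]),   # lower aggregation layer
     weighted_agg(w_groups[1], x[2:])]
b = [0.6, 0.4]                       # top-layer weights
p = weighted_agg(b, r)               # comprehensive risk membership P
print(round(p, 3))                   # prints 0.568
```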
After several transformations of matrices, we get the value of the evaluation index. Though the FSE method has been widely used in the risk assessment field, it still has its limitations:
(1) The mechanism of how a risk factor exerts impact on the project objectives is unclear, so the actual process might be fuzzier.
where $a_{ji}$ is the $i$th component of the projection direction of the $j$th smooth ridge function. The acquisition of the data needed in the function and the algorithm of the PPR calculation will be presented in Sect. 3.
(Figure: the modeling and calculation process — scene analysis of external environmental and historical data, generation of the input variables, projection pursuit regression of the back model, and a test of the fitting precision.)
Step 2. Extract random numbers through Monte Carlo simulation and obtain the sample observation values.
We establish the project's economic evaluation model and set the known variables as constant parameters of the model. We set $I_i$ as an input random variable, then select an economic evaluation index (EEI) according to the demands of the project's economic evaluation and set it as the output variable. In order to obtain more
We send $l$ groups of data to the PPR model, where $X_k$ represents the $k$th set of the input variables $x_{ik}$ ($i = 1, 2, \cdots, n$) and $y_k$ is the corresponding output variable. The other parameters mentioned in Eq. (15) are described in Sect. 2. There are many forms of unit function that can be fitted, such as the numerical function [13] and the polynomial function [11].
Step 4. Output the results.
After the algorithm realization process, we obtain $\alpha_j$ and $RE_k$ of the regression model. Generally, the smaller the $RE_k$, the better the fitting effect. Then, according to the distribution of the EEI, we can draw conclusions about the project's risk rank.
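Steps 2–4 above — draw risk inputs by Monte Carlo, push them through the project's economic evaluation model, and hand the resulting $(X_k, y_k)$ pairs to the regression — can be sketched as follows (the triangular risk distributions and the toy economic model are illustrative assumptions, not taken from the paper):

```python
import random

random.seed(1)

def draw_risk_inputs():
    """Step 2: Monte Carlo draw of risk impact values (illustrative triangular laws)."""
    cost_overrun = random.triangular(0.0, 0.3, 0.1)   # low, high, mode
    delay = random.triangular(0.0, 0.2, 0.05)
    return [cost_overrun, delay]

def economic_model(risks):
    """Toy stand-in for the economic evaluation model: the EEI falls with risk."""
    cost_overrun, delay = risks
    return 0.12 - 0.2 * cost_overrun - 0.15 * delay

# Build the (X, y) sample that Step 3 hands to the PPR model.
sample = [draw_risk_inputs() for _ in range(1000)]
eei = [economic_model(r) for r in sample]
print(len(sample))   # prints 1000
```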
(1) Generate $m$ projection directions randomly in the range $[-1, 1]$, followed by the selection, crossover and mutation operations of the genetic algorithm to generate $3 \times n$ new projection directions [14].
(2) Project the $m$ construction parameters of the $l$ samples onto each projection direction, obtaining $l$ projection variables for each direction.
In Eq. (16), $y$ represents the actual value of the economic evaluation index, $\hat{y}$ is the corresponding fitting value, and $\bar{y}$ stands for the average value of the actual internal rate of return.
(4) For the $3 \times n$ projection directions, the deterministic coefficient $d$ is calculated, and the $n$ projection directions with superior deterministic coefficients are chosen to enter the next round of genetic algorithm optimization.
(5) After several rounds of optimization, when the absolute difference of the deterministic coefficient between two successive calculations falls below a prescribed small positive number, the optimization of the first unit function finishes.
(6) Output the polynomial and its corresponding projection direction, and calculate the relative error between the fitting value and the actual value. According to the risk assessment, if the relative error is less than 10%, the model parameters are determined [6]. Otherwise, a single unit function cannot meet the fitting requirements, and we move to the next step.
(7) In this step we follow the same principle: the residual $\Delta y = y - \hat{y}$ from the first step's fitting replaces $y$, and the first step's work is repeated to optimize the second unit function, continuing until the fitted unit functions and projection directions meet the requirements [9].
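Steps (1)–(7) amount to the classical stagewise loop of projection pursuit regression [6]: choose a projection direction, fit a one-dimensional ridge function to the projected data, subtract the fit, and repeat on the residuals. A compact pure-Python sketch (random direction search in place of the genetic algorithm, and quadratic polynomials as the unit functions — both simplifications of the procedure above):

```python
import random

random.seed(42)

def fit_quadratic(z, y):
    """Least-squares fit y ~ c0 + c1*z + c2*z^2 via the 3x3 normal equations."""
    cols = [[1.0] * len(z), z, [v * v for v in z]]
    m = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] +
         [sum(a * b for a, b in zip(ci, y))] for ci in cols]
    for i in range(3):                      # Gauss-Jordan elimination
        m[i] = [v / m[i][i] for v in m[i]]
        for j in range(3):
            if j != i:
                m[j] = [vj - m[j][i] * vi for vj, vi in zip(m[j], m[i])]
    c = [m[i][3] for i in range(3)]
    return lambda v: c[0] + c[1] * v + c[2] * v * v

def ppr_fit(X, y, n_terms=2, n_dirs=200):
    """Stagewise PPR: each stage keeps the best of n_dirs random directions,
    then continues on the residuals, mirroring steps (1)-(7)."""
    residual, terms = list(y), []
    for _ in range(n_terms):
        best = None
        for _ in range(n_dirs):
            a = [random.uniform(-1.0, 1.0) for _ in X[0]]
            z = [sum(ai * xi for ai, xi in zip(a, x)) for x in X]
            g = fit_quadratic(z, residual)
            sse = sum((r - g(v)) ** 2 for r, v in zip(residual, z))
            if best is None or sse < best[0]:
                best = (sse, a, g)
        _, a, g = best
        terms.append((a, g))
        residual = [r - g(sum(ai * xi for ai, xi in zip(a, x)))
                    for r, x in zip(residual, X)]
    return terms, residual

X = [[random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)] for _ in range(100)]
y = [x[0] + 2.0 * x[1] ** 2 for x in X]
terms, residual = ppr_fit(X, y)
sse = sum(r * r for r in residual)
sst = sum((v - sum(y) / len(y)) ** 2 for v in y)
print(sse < sst)   # prints True: the ridge functions explain part of the variance
```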
4 Case Study
The PPP risk assessment model based on the projection pursuit regression method was validated through a case study: an ecological landscape PPP project from Sichuan province. In this case, we identified 10 risk factors based on the historical data: cost overrun risk, construction duration extension risk, design risk, construction safety risk, operation risk, quality risk, policy change risk, force majeure risk, tax increase risk, and legal risk. In the PPP risk assessment model, we select the impact value of the $i$th risk as the independent variable and $\Delta IRR$, the increment of the internal rate of return, as the dependent variable, reflecting the economic viability of the project.
Fig. 7. The comparison between the actual values and the fitting values of ΔIRR
Fig. 8. The comparison between the actual values and the tested values
The percentage of relative error | The sample size | The percentage of all samples
More than 20%                    | 25              | 6%

The percentage of relative error | The sample size | The percentage of all samples
More than 20%                    | 6               | 20%
The results above show that PPR achieves relatively accurate fitting performance for 88% of the samples. Moreover, the majority of the relative errors are less than 10%, which indicates that the variation range of the IRR is small; that is, the risk resistance of this PPP project is good. Meanwhile, taking the risks into account, investors can compare the IRR with the benchmark yield to decide whether to invest in the project.
5 Conclusion
This study introduces the projection pursuit regression method as a new risk assessment method, establishing a new risk assessment model related to the traditional methods. First, we fit the probability distributions of the risk values based on historical data and experts' subjective opinions. Second, we take random numbers generated by Monte Carlo simulation as input variables and economic indicators as output variables to obtain the sample data. Third, we build the interrelation between the two sets of variables from these data, and then establish a risk assessment model by projection pursuit regression combined with the actual case, which handles the sample data regression with a reasonable calculation effect. The results show that the model can reveal how the risk factors affect the PPP project's economic evaluation index. The new assessment model captures the risk features of a PPP project influenced by many factors and gives an integrated result through dimension reduction.
The proposed evaluation model establishes direct interrelations between the risk factors and the economic evaluation of PPP projects, reducing the intricacies of traditional risk assessment and making the risk assessment process more effective and rapid. At the same time, the evaluation model is also more accurate than its predecessors. However, some shortcomings exist in this study: (1) the existing data are not sufficient to completely fit the probability distributions of all risks, which implies that the assessment still partly relies on expert judgment; (2) the evaluation model is relatively stable under normal circumstances, but some deviation still persists. The accuracy of the algorithm therefore needs to be improved through model optimization in future work.
Optimal Ownership Pattern to Control Agency
Conflict in Manufacturing Industry of Pakistan
1 Introduction
Since Jensen [22] developed the research stream on agency cost, extensive empirical evidence has accumulated that the presence of agency conflicts has an adverse effect on business sustainability. Two types of agency cost are distinguished: the agency cost of equity (ACE) and the agency cost of debt (ACD). If the interests of owners and managers are not aligned, this is hazardous
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 129
1536 M.K. Khan et al.
is more committed towards the stability and continued existence of the firm because of their reputations [10]. Family firms perform better than non-family firms [4]. Due to a higher proportion of insider ownership, founding-family firms have the lowest agency cost [41]. Founding-family ownership is also associated with a significantly lower cost of debt financing [4]. Family firms control the agency problem by paying larger dividends to shareholders and by employing more debt in the capital structure [43]. In family-owned firms, however, managers act for the controlling family rather than for shareholders in general [37]. Following [3,4,41], this study also hypothesizes an inverse relationship between agency cost and family ownership.
Managers of a firm whose ownership is held in blocks of shares are less able to use firm resources for their discretionary activities [9,14,17]. The greater the outside block holding of the firm, the greater the monitoring of managerial activities [44]. Recent research has proposed a signaling theory of block-holding ownership, according to which an entrepreneur chooses to attract an outside blockholder in order to signal a low "propensity to expropriate" [45]. Firms with a large board and small non-management block-holder ownership face severe agency problems and poor corporate governance [30]. But there is also evidence that a high level of block holding is responsible for a higher level of agency cost [41]. If ownership is highly concentrated, it can also create agency conflict [16], and some researchers have found that block-holders are not expert enough at monitoring [24].
Bank debt is an effective device for controlling the agency problem. Bank credit signals the creditworthiness of the borrower, reducing information asymmetry, which affects ACD [19]. Agency cost decreases as monitoring by banks increases [6]. Bank debt reduces cash holdings [17], and institutions monitor and evaluate the company regularly when extending credit [22].
3 Empirical Implementation
turnover indicates that managers are using assets for unproductive purposes rather than deploying them in activities that generate cash flows [27]. Hence a firm with a low asset turnover ratio bears more agency conflict than a firm with a high turnover ratio; see also [6,19,36,44]. In addition, we assess the effect of different ownership structures on the agency cost of debt. This notion is examined by substituting the agency-cost-of-equity variable ACE with the agency-cost-of-debt variable ACD; the new model is described in Eq. (2). Debt agency cost is defined as the conflict between shareholders and bondholders [13]. Manso defined ACD as the difference between the total values of the all-equity and the levered firm [34]. We obtained the proxy for the agency cost of debt (ACD) from previous studies [13,26,40]. If a major proportion of capital is invested in fixed assets, managers have fewer liquid resources to spend on their own luxuries. If a firm holds more liquid assets, managers may find it easy to use these assets for their own requisites, or to transfer cash to shareholders as dividends, rather than retaining it for future interest or principal payments to debt holders [25].
We also hypothesize a positive relationship between firm size and agency cost. The model also includes a growth-opportunities variable, because the literature suggests that when firms face growth opportunities, more future cash generation is expected. When firms expect incremental growth opportunities, they hold extra cash to invest in future projects, which leaves extra cash available to managers for wasteful expenditures. Thus the chance of an agency problem increases with the level of growth opportunities [17,19]. Conflicts of interest between shareholders and managers vary with growth opportunities and free cash flow [49] (Fig. 1).
Out of 819 companies listed on the Karachi Stock Exchange (KSE), the largest stock exchange of Pakistan, we selected the 100 top-capitalized firms. Our sample contains only non-financial firms, so we excluded the 353 firms related to financial services in Pakistan. The study thus obtains a 22% representation of all the non-financial firms in Pakistan present during the period 2010 to 2015, and the sample covers all the manufacturing sectors listed on the KSE for that time. The manufacturing sector is the largest sector of the Pakistani economy.
Table 2. Estimation results of Models 1 and 2 by fixed-effect and random-effect models

Variable     Model 1              Model 2
MO           2.612*               −0.227
FO           −0.914               −0.169
IO           1.053*               0.153*
BO           0.617*               −0.047***
D            0.166**              −0.058*
DIV          −0.007               0
Size         0.071**              0.021*
Q            0.160*               0.033*
R²           0.481                0.083
Model type   Fixed-effect model   Random-effect model

Estimation by FE, RE and the Hausman test. The dependent variables are ACE and ACD in Models 1 and 2 respectively. *, **, *** denote significance at the 1%, 5% and 10% levels respectively.
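The fixed-effect estimates in Model 1 correspond to the standard within (entity-demeaning) estimator. A sketch on synthetic panel data follows; the panel shape mimics the 100-firm, 2010-2015 sample, but the regressor, the coefficient value and the noise are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_years = 100, 6                 # mimics the 100-firm, six-year panel
firm = np.repeat(np.arange(n_firms), n_years)

# Synthetic regressor (e.g. managerial ownership) plus firm fixed effects.
x = rng.uniform(0, 1, n_firms * n_years)
fe = rng.normal(0, 2, n_firms)[firm]      # unobserved firm heterogeneity
beta_true = 2.6
y = beta_true * x + fe + rng.normal(0, 0.5, x.size)

# Within transformation: demean y and x by firm, then OLS slope on the
# demeaned data, which sweeps out the fixed effects.
def demean_by(group, v):
    means = np.bincount(group, weights=v) / np.bincount(group)
    return v - means[group]

y_w, x_w = demean_by(firm, y), demean_by(firm, x)
beta_fe = (x_w @ y_w) / (x_w @ x_w)
print(f"within (FE) estimate of beta: {beta_fe:.3f}")
```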
5 Conclusion
This research examines and provides further evidence on agency theory. The present study looks into the impact of ownership structures on the equity agency cost (ACE) and the debt agency cost (ACD). Equity agency cost is the divergence of interest between shareholders and managers, whereas debt agency cost is the divergence of interest between shareholders and debt-holders, or between debt-holders and managers. Equity agency cost is incurred when managers do not act in the best interest of shareholders and acquire perquisites for themselves. The agency cost of debt arises when managers, by themselves or on behalf of shareholders, expropriate wealth from debt-holders to increase shareholder wealth, or take discretionary actions at the expense of debt-holders' interests. The study incorporates four types of ownership structure, namely managerial, family, institutional and block-holder ownership. We used different proxies for ACE and ACD: ACE is measured by the asset turnover ratio; ACD is measured by the amount of firm assets not invested in fixed plant and equipment and by the liquidity of firm assets. Our sample consists of companies from the KSE 100 index and the study window is 2010-2015. We found no prior empirical study that has explored agency cost with the set of hypotheses used here.
Our research tested the convergence-of-interest hypothesis. In Pakistan, this hypothesis holds, because managerial ownership results in a reduction of agency cost. Family ownership, by contrast, reduces the efficient utilization of assets. However, firms with managerial and family ownership exhibit a lower agency cost of debt, which suggests that insiders in Pakistan protect the interests of debt-holders. Due to their greater monitoring abilities, institutions and block-holders have a positive impact on firm performance by reducing the agency cost of equity: in firms owned more heavily by institutions and block-holders, shareholders' interests are better protected in Pakistan. But institutions only take care of shareholders' wealth and do not safeguard the interests of debt-holders; the agency cost of debt is present in firms owned more heavily by institutions. Block-holder ownership, however, not only reduces the equity agency cost but also mitigates the debt agency cost by protecting debt-holders' wealth and interests. Thus block-holder and managerial ownership are the equity ownership structures that reduce both types of agency cost. Pakistani firms with a greater debt ratio also have more efficient asset turnover and a lower proportion of assets that are difficult for debt-holders to monitor. Size worsens both types of agency problem, whereas growth opportunities reduce the agency cost of equity but exacerbate the agency cost of debt.
The present study classified block-holders as all shareholders holding more than 10% of equity. Future researchers may refine this concept by classifying block-holders further. For example, there are two major types of block-holders in Pakistan: family block-holders (more than 10% of equity owned by a certain family) and institutional block-holders (more than 10% of equity owned by institutions). Many types of institutions in Pakistan hold ownership in companies, such as the NIT, pension fund managers and insurance companies. Future research should examine the individual monitoring impact of each type of institution in mitigating agency costs. The study employs five years of data (2010-2015) from the Pakistani economy, a period recognized for unchecked inflation but a stable or declining interest rate. The present study did not control for industry classification, although asset utilization, like other variables, clearly differs across industries. Unfortunately, Pakistan has no industry codes comparable to the SIC codes of other countries; if such codes were used to control for industry, the results might be better.
Appendix

Variable                      Formula
Asset Turnover Ratio          Net income divided by total assets
Agency Cost of Debt           Proportion of firm assets not invested in fixed plant and equipment, i.e. one minus the ratio of fixed assets to total assets
Managerial Ownership          Number of common shares owned by insiders divided by total common shares outstanding; insiders consist of officers, affiliated directors, beneficial owners and principal shareholders [26]
Institutional Ownership       Number of shares held by institutions as a proportion of the company's total ordinary shares [36]
Family Ownership              Ratio of shares held by the family as a group to total shares [43]
Ownership of Block-holders    Total number of shares held by block-holders divided by total shares outstanding, where block-holders are shareholders holding more than 10% of total equity outstanding
Size                          Natural logarithm of total assets
Debt Ratio                    Book value of contractual long-term debt divided by book value of total assets
Dividend                      Dividend per share divided by earnings per share
Growth                        Ratio of (market value of equity + book value of debt) to book value of total assets
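The appendix formulas translate directly into code. Below is a sketch with a hypothetical firm record; the function name, field names and all input figures are invented for illustration:

```python
import math

def agency_proxies(net_income, total_assets, fixed_assets,
                   long_term_debt, dps, eps, mv_equity, bv_debt):
    """Compute the variable proxies from the appendix definitions."""
    return {
        "asset_turnover": net_income / total_assets,   # ACE proxy
        "acd": 1 - fixed_assets / total_assets,        # ACD proxy
        "size": math.log(total_assets),
        "debt_ratio": long_term_debt / total_assets,
        "dividend": dps / eps,                         # payout ratio
        "growth": (mv_equity + bv_debt) / total_assets,
    }

# Hypothetical firm: 1,000 total assets, 600 in fixed plant and equipment.
p = agency_proxies(net_income=120, total_assets=1000, fixed_assets=600,
                   long_term_debt=250, dps=2.0, eps=8.0,
                   mv_equity=900, bv_debt=300)
print(p)
```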
References
1. Agrawal A, Knoeber CR (1996) Firm performance and mechanisms to con-
trol agency problems between managers and shareholders. J Financ Quant Anal
31(3):377–397
2. Ahmed HJA (2008) Managerial ownership concentration and agency conflict using
logistic regression approach: evidence from Bursa Malaysia. J Manage Res 1(1):1–
10
3. Anderson RC, Reeb DM (2003) Founding-family ownership, corporate diversifica-
tion, and firm leverage. J Law Econ 46(2):653–684
4. Anderson RC, Mansi SA, Reeb DM (2003) Founding family ownership and the
agency cost of debt. J Financ Econ 68(2):263–285
5. Ang JS, Cox DR (1997) Controlling the agency cost of insider trading. J Financ
Strateg Decis 10(1):15–26
6. Ang JS, Cole RA, Lin JW (2000) Agency costs and ownership structure. J Finance 55(1):81–106
7. Borokhovich KA, Brunarski KR et al (2005) Dividends, corporate monitors and
agency costs. Financ Rev 40(1):37–65
8. Brockman P, Unlu E (2009) Dividend policy, creditor rights, and the agency costs
of debt. J Financ Econ 92(2):276–299
9. Byrd JW (2010) Financial policies and the agency costs of free cash flow: evidence
from the oil industry. SSRN Electron J https://ssrn.com/abstract=1664654
10. Casson M (1999) The economics of the family firm. Scand Econ Hist Rev 47(1):10–
23
11. Demsetz H, Villalonga B (2001) Ownership structure and corporate performance.
J Corp Finance 7(3):209–233
12. Dharwadkar B, George G, Brandes P (2000) Privatization in emerging economies:
an agency theory perspective. Acad Manag Rev 25(3):650–669
13. Doukas JA, Pantzalis C (2000) Security analysis, agency costs, and company char-
acteristics. Int Rev Financ Anal 14(5):493–507
14. Ducassy I, Guyot A (2017) Complex ownership structures, corporate governance and firm performance: the French context. Res Int Bus Finance 39:291–306
15. Faccio M, Lang LHP, Young L (2000) Dividends and expropriation. Am Econ Rev
91(1):54–78
16. Fan JPH, Wong TJ (2001) Corporate ownership structure and the informativeness
of accounting earnings in East Asia. CEI Working Pap 33(3):401–425
17. Ferreira MA, Vilela AS (2004) Why do firms hold cash? evidence from EMU coun-
tries. Eur Financ Manage 10(2):295–319
18. Fleming G, Heaney R, Mccosker R (2005) Agency costs and ownership structure
in Australia. Soc Sci Electron Publ 13(1):29–52
19. Florackis C (2008) Agency costs and corporate governance mechanisms: evidence
for UK firms. Int J Manag Finance 4(1):37–59
20. Guariglia A, Yang J (2015) A balancing act: managing financial constraints and
agency costs to minimize investment inefficiency in the Chinese market. J Corp
Finance 36:111–130
21. Jensen MC (1986) Agency costs of free cash flow, corporate finance, and takeovers. Am Econ Rev 76(2):323–329
22. Jensen MC, Meckling WH (2000) The theory of the firm: managerial behavior,
agency costs and ownership structure. Theory Firm, Bd 1:248–306
23. Jong AD, Dijk RV (2007) Determinants of leverage and agency problems: a regres-
sion approach with survey data. Eur J Finance 13(6):565–593
24. Jung K, Kwon SY (2002) Ownership structure and earnings informativeness: evi-
dence from Korea. Int J Account 37(3):301–325
25. Kalcheva I, Lins KV (2007) International evidence on cash holdings and expected
managerial agency problems. Rev Financ Stud 20(20):1087–1112
26. Kayakachoian G (2000) On agency costs and firms’ decisions
27. Kim KA, Kitsabunnarat P, Nofsinger JR (2004) Ownership and operating performance in an emerging market: evidence from Thai IPO firms. J Corp Finance 10(3):355–381
28. Kim S, Lee H, Kim J (2016) Divergent effects of external financing on technology
innovation activity: Korean evidence. Technol Forecast Soc Change 106:22–30
29. Kumar J (2003) Ownership structure and corporate firm performance. Finance
https://core.ac.uk/download/pdf/9315361.pdf
30. Kusnadi Y, Kusnadi Y (2004) Corporate cash holdings and corporate governance
mechanisms. SSRN Electron J https://ssrn.com/abstract=479401
31. Lafond R, Roychowdhury S (2008) Managerial ownership and accounting conser-
vatism. J Account Res 46(1):101–135
32. Lasfer M (2002) Board structure and agency costs. SSRN Electron J https://ssrn.
com/abstract=314619
33. Lie E (2000) Excess funds and agency problems: an empirical study of incremental
cash disbursements. Rev Financ Stud 13(1):219–247
34. Manso G (2008) Investment reversibility and agency cost of debt. Econometrica
76(2):437–442
35. Mao CX (2003) Interaction of debt agency problems and optimal capital structure:
theory and evidence. J Financ Quant Anal 38(2):399–423
36. Mcknight PJ, Weir C (2009) Agency costs, corporate governance mechanisms and
ownership structure in large UK publicly quoted companies: a panel data analysis.
Q Rev Econ Finance 49(2):139–158
37. Morck R, Shleifer A, Vishny RW (1988) Management ownership and market val-
uation: an empirical analysis. J Financ Econ 20(88):293–315
38. Moussa FB, Chichti J (2011) Interactions between free cash flow, debt policy and
structure of governance: 3SLS simultaneous model. J Manage Res 3(2):1–34
39. Mustapha M, Ahmad AC (2011) Agency theory and managerial ownership: evi-
dence from Malaysia. Manag Audit J 26(5):419–436
40. Prowse SD (1990) Institutional investment patterns and corporate financial behav-
ior in the United States and Japan. Brain Res 61(1):267–78
41. Randøy T, Goel S (2003) Ownership structure, founder leadership, and performance in Norwegian SMEs: implications for financing entrepreneurial opportunities. J Bus Ventur 18(5):619–637
42. Schulze WS, Lubatkin MH, Dino RN, Buchholtz AK (2001) Agency relationship
in family firms: theory and evidence. Organ Sci 12(2):99–116
43. Setia-Atmaja L, Tanewski GA, Skully M (2007) How do family ownership and control affect board structure, dividends and debt? Australian evidence. In: Proceedings of the 16th European Financial Management Association Conference. Citeseer, Vienna, Austria, pp 27–30
44. Singh M, Davidson WN III (2003) Agency costs, ownership structure and corporate governance mechanisms. J Bank Finance 27(5):793–816
45. Stepanov S, Suvorov A (2009) Agency problem and ownership structure: Outside
blockholder as a signal. SSRN Electron J 133:87–107
46. Ugurlu M (2000) Agency costs and corporate control devices in the Turkish man-
ufacturing industry. J Econ Stud 27(6):566–599
47. Vilasuso J, Minkler A (2001) Agency costs, asset specificity, and the capital struc-
ture of the firm. J Econ Behav Organ 44(1):55–69
48. Woidtke T (2002) Agents watching agents?: evidence from pension fund ownership
and firm value. J Financ Econ 63(1):99–131
49. Wu L (2004) The impact of ownership structure on debt financing of Japanese
firms with the agency cost of free cash flow. SSRN Electron J
50. Yeo GHH, Tan PMS, Ho KW, Chen S (2002) Corporate ownership structure and
the informativeness of earnings. J Bus Finance Account 29(7–8):1023–1046
The Impact of Institutional Investors on Stock
Price Synchronicity: Evidence from the
Shanghai Stock Market
Abstract. The article consists of four sections: the first is the introduction and literature review, the second reviews the background of institutional investors and stock price synchronicity, the third presents the empirical research, and the last concludes. This paper uses the literature review method, descriptive statistics and OLS model analysis to explore the impact of institutional investors on stock price synchronicity. Using data on 569 firms listed on the Shanghai stock market from 2009 to 2012, this paper investigates the impact of institutional investors' behaviors on stock price synchronicity. We find that institutional investors' long-term investments can reduce stock price synchronicity and increase the information efficiency of the stock market. However, short-term trading by institutional investors does not necessarily reduce stock price synchronicity. Therefore, Chinese policy-makers should regulate institutional investors and encourage constructive long-term investment behaviors, so as to genuinely enhance the informational effectiveness of the capital market and improve the efficiency of capital-market resource allocation. The main contribution of this paper is that we investigate stock price synchronicity from the perspective of institutional investors' behaviors in the A-share market.
1 Introduction
Morck and Yeung [12] examined stock price synchronicity in the major stock markets around the world during the first 26 weeks of 1995. They found that average stock price synchronicity was higher in less developed countries. In the Chinese stock markets, approximately 79% of stocks moved in the same direction, second only to Poland, where all stocks moved in the same direction. After a lapse of nearly two decades, what can be observed regarding stock price synchronicity in the Chinese stock markets? Figure 1 illustrates the price synchronicity of 1,218 stocks on the Shanghai Stock Market during the first 26 weeks of 2013. As Fig. 1 shows, on average more than 70% of stocks were moving in the same direction, suggesting that price synchronicity still remained at a high level in China.
The remainder of this paper is organized as follows. Section 2 provides the the-
oretical background. Section 3 presents the research design and the main empir-
ical results. Section 4 concludes the paper.
The sample consists of firms listed on the Shanghai Stock Market from 2009 to 2012, and we exclude the following firms: (1) firms without complete data; (2) financial institutions; (3) utility firms; (4) firms that conducted an IPO or major asset restructuring in the current year; and (5) firms whose stocks were delisted or suspended. Finally we obtain 569 eligible companies, for a total of 2,276 panel-data observations. All data are from the China Stock Market & Accounting Research (CSMAR) database and the RESSET database. Table 2 shows the descriptive statistics.
where r_{i,w} is the return of firm i in week w and r_{m,w} is the market return in week w.
Since R² is highly skewed and bounded between 0 and 1, we apply a logistic transformation to obtain a nearly normally distributed variable, SYNCH. A higher value of SYNCH indicates that the stock price is more synchronized:

SYNCH_{i,t} = ln( R²_{i,t} / (1 − R²_{i,t}) ).    (2)
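Computing R² from the market-model regression and then applying the logistic transformation of Eq. (2) can be sketched as follows. The weekly return series are simulated here for illustration, whereas the paper uses CSMAR/RESSET data:

```python
import numpy as np

rng = np.random.default_rng(2)
weeks = 52
r_m = rng.normal(0.002, 0.02, weeks)            # market weekly returns
r_i = 0.9 * r_m + rng.normal(0, 0.01, weeks)    # firm i, partly systematic

# Market-model regression r_i = a + b * r_m + e, via least squares.
A = np.column_stack([np.ones(weeks), r_m])
coef, *_ = np.linalg.lstsq(A, r_i, rcond=None)
resid = r_i - A @ coef
r2 = 1 - resid.var() / r_i.var()                # R^2 of the market model

# Logistic transformation of Eq. (2): SYNCH = ln(R^2 / (1 - R^2)).
synch = np.log(r2 / (1 - r2))
print(f"R^2 = {r2:.3f}, SYNCH = {synch:.3f}")
```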
has a stronger ability to avoid risk, leading to less stock price volatility; it can thus be argued that a larger company size brings about more significant stock price synchronicity. Others, however, hold that due to its greater scale, a larger company garners more attention, which adds more firm-specific information to its stock price and attenuates price synchronicity. Because of this controversy, we do not form an expectation about the relation between SIZE and stock price synchronicity.
LEV is the book value of all liabilities scaled by total assets at the end of the
fiscal year. LEV can reflect the level of a company’s financial risk, and a higher
LEV indicates a higher financial risk in a company. In this case, the company is
more sensitive to credit risk, and thus the price fluctuation is mainly affected by
the firm-specific financial risk. Thus, we expect a negative correlation between
LEV and stock price synchronicity.
CEN is the concentration of corporate ownership; a higher CEN means stronger control by large shareholders and more access to internal information. These large shareholders can transfer firm-specific information into stock prices through their transactions, accordingly raising the firm-level information content of the stock price [5]. However, some researchers have found a concave relationship between stock price synchronicity and CEN: synchronicity increases at a decreasing rate as CEN rises, up to a maximum point, and then starts to fall [11]. This suggests the relationship is unclear, and any clarification would require further research.
where the explained variable SYNCH_{i,t} is the measure of stock price synchronicity of firm i in year t, LI_{i,t} is the proportion of shares held by institutional investors in firm i in year t, and SI_{i,t} is the transaction activity in firm i conducted by institutional investors during fiscal year t, a proxy for the short-term behavior of institutional investors.
In order to decide whether to use a fixed-effects or a random-effects panel data model, we conduct a Hausman test on the foregoing model.
This table shows the Hausman test result for the institutional investors' overall investment behavior; the significant result leads us to apply a fixed-effects model in the regression process.
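The Hausman statistic compares the FE and RE coefficient vectors, H = (b_FE − b_RE)' (V_FE − V_RE)^{-1} (b_FE − b_RE), which is chi-squared distributed under the null that the random-effects estimator is consistent. A generic sketch follows; the coefficient vectors and covariance matrices below are invented for illustration, not the paper's estimates:

```python
import numpy as np
from scipy import stats

def hausman(b_fe, b_re, v_fe, v_re):
    """Hausman specification test; H0: random effects is consistent.
    Returns the chi-square statistic, degrees of freedom and p-value."""
    diff = b_fe - b_re
    h = float(diff @ np.linalg.inv(v_fe - v_re) @ diff)
    df = diff.size
    return h, df, stats.chi2.sf(h, df)

# Illustrative FE/RE estimates and covariance matrices (two regressors).
b_fe = np.array([-0.49, 0.07])
b_re = np.array([-0.30, 0.02])
v_fe = np.diag([0.010, 0.004])
v_re = np.diag([0.006, 0.002])

h, df, p = hausman(b_fe, b_re, v_fe, v_re)
print(f"H = {h:.2f}, df = {df}, p = {p:.4f}")
```

A small p-value rejects the null, favouring the fixed-effects model, as in the paper.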
1556 L. Kun et al.
Table 3 reports the estimated coefficients and associated p values. From this table we find the estimated coefficient of LI is −0.487 in Model (1) and −0.527 in Model (3), and both are significant at the 5% level. This demonstrates a significantly negative correlation between the proportion of institutional shareholding and stock price synchronicity: stocks with a higher proportion of institutional holding have less synchronous prices. This supports the argument that institutional investors' behavior helps convey firm-specific information to the market and facilitates a company's signal transmission mechanism, which can improve the efficiency of price-based resource allocation.
The other explanatory variable, SI, is a proxy for institutional short-term transactions. The empirical results show a negative coefficient (−0.055) in Model (2) and a positive coefficient (0.149) in Model (3), neither of which is significant; this is consistent with the expectations stated earlier in this paper and indicates an insignificant correlation between institutional investor trading activity and stock price synchronicity.
In this paper, we believe this insignificant correlation is caused by the dual effects of institutional investors' speculative behavior. On the one hand, frequent short-term trading transmits corporate-specific information into prices, which raises the information content of stock prices and helps relieve high stock price synchronicity. On the other hand, herding is visible in the behavior of these informed investors, causing stock prices to fluctuate in the same direction. The two offsetting effects therefore lead to an insignificant net effect on stock price synchronicity.
The empirical results show that ROE and BM have significant influences on stock price synchronicity, both at the 1% level in all three models. First, the higher the ROE, the lower the price synchronicity. A higher ROE suggests better corporate management; consequently, the firm gains more attention from institutional investors and the public, and firm-specific information is readily conveyed in the price. However, the absolute value of its estimated coefficient is small, which illustrates a limited
4 Conclusion
The empirical results of this paper suggest that institutional investment can significantly decrease stock price synchronicity on the Shanghai Stock Market, and that this influence is achieved through long-term shareholding rather than frequent transactions. This paper provides empirical grounds for encouraging institutional investors to engage in long-term value investment, which can increase the efficiency of price-based resource allocation in our capital markets. Policy-makers should improve the regulation of China's capital market in order to create a favorable external environment for institutional investors. They should also guide institutional investors toward long-term value investment, enhance confidence in long-term value investment, and improve the quality of listed companies.
References
1. An H, Zhang T (2013) Stock price synchronicity, crash risk, and institutional
investors. J Corp Financ 21(1):1–15
2. Boubaker S, Mansali H, Rjiba H (2014) Large controlling shareholders and stock
price synchronicity. J Bank Financ 40(1):80–96
3. Jin L, Myers SC (2006) R² around the world: new theory and new tests. J Financ Econ 79(2):257–292
4. Epps TW, Epps ML (1976) The stochastic dependence of security price changes
and transaction volumes: implications for the mixture-of-distributions hypothesis.
Econometrica 44(2):305–321
5. Han JF, Wang ZY (2012) Can the major shareholders trading improve market
efficiency-from the perspective of the information content of stock price. J Shanxi
Financ Econ Univ 7:38–45 (in Chinese)
6. Hasan I, Song L, Wachtel P (2014) Institutional development and stock price syn-
chronicity: evidence from China. J Comp Econ 42(1):92–108
7. Jin Y, Yan M et al (2016) Stock price synchronicity and stock price crash risk:
based on the mediating effect of herding behavior of qfii. China Financ Rev Int
6(3):230–244
8. Rao Y, Min L (2013) The influence of qfii holding on stock price synchronicity in
China. J Manage Eng 02:202–208 (in Chinese)
9. Skaife HA, Gassen J, LaFond R (2006) Does stock price synchronicity represent
firm-specific information? The international evidence. MIT Sloan Research Paper
10. Song L (2015) Accounting disclosure, stock price synchronicity and stock crash
risk: an emerging-market perspective. Int J Acc Inf Manage 23(4):851–854
11. Wang X (2013) Ownership concentration and stock price synchronicity: the evi-
dence from Chinese listed companies. Contemp Econ 14:94–95 (in Chinese)
12. Morck R, Yeung B, Yu W (2000) The information content of stock markets: why do emerging markets have synchronous stock price movements? J Financ Econ 58(1–2):215–260
Research on Risk Allocation Model in PPP
Projects: Perspectives from the SOEs
1 Introduction
while private enterprises account for only 20% of the private sector [14]. Given the large investments, long-term cooperation, technical complexity and uncertainty involved, appropriate risk allocation is not only a critical success factor (CSF) of PPP but also an important driver of achieving Value for Money [4,6]. Although the country has issued relevant legal policies to standardize the operation and execution of PPP projects, the guidance on risk allocation remains relatively broad and vague [7].
At present, the balance of power between government and SOEs is weak, and the government always holds an absolute competitive advantage. Initially, some risks are allocated to SOEs that have an underperforming ability to control risk and no willingness to take risks. Once these risks occur, the SOEs are unable to control the consequences, so the government has to bear the losses, which inevitably causes most risks to be transferred back to the government at a higher cost. In addition, the complex contractual arrangements in PPP projects can lead to risk exposure, and inappropriate risk allocation merely transfers infrastructure debt from local government balance sheets to SOEs; it cannot effectively reduce the overall leverage of the public sector. In terms of risk allocation, the government and SOEs therefore face severe challenges, strict requirements and high standards. However, there are few systematic analyses of state-owned enterprises participating in PPP projects in Chinese journals. Therefore, this paper establishes a risk allocation model that combines the fuzzy comprehensive evaluation method and the entropy coefficient method to optimize the allocation of risk from the perspective of the SOEs.
This paper is organized as follows. Section 1 begins with an introduction;
Sect. 2 reviews the literature on PPP risk allocation; Sect. 3 describes
the modeling ideas and methods, combining the fuzzy comprehensive evaluation
method and the entropy coefficient method to optimize the multi-objective
decision of risk allocation in PPP projects from the perspective of the SOEs;
Sect. 4 takes the case of the Mianzhu Integration of Water Supply and Drainage project to
demonstrate the effectiveness of the model; Sect. 5 presents a brief conclusion.
2 Literature Review
Zhang put forward that the research topics of PPP papers in Chinese and Inter-
national Journals mainly include PPP model’s application, risk management,
financing and economic issues, legal and procurement issues and government
regulation [16]. The foundation of risk-sharing in PPP projects is the
accurate identification of risk factors. Grimsey identified nine risks: financial,
political, environmental, construction, technological, infrastructure operational,
infrastructure recovery, force majeure and project default risks, and
introduced risk evaluation indices and evaluation methods from
the perspectives of different project stakeholders [9]. Bing creatively divided
risks into three levels: macro, meso and micro
[2]. Zou identified twenty-seven critical risk factors influencing the success of
PPP projects from five dimensions: the owner, design, contractors, government
institutions and the external environment [17].
The academic research on risk-sharing mechanisms in PPP projects mainly
concentrates on risk-sharing principles, methods and models. Abednego put forward
the connotative meaning and conditions of risk sharing [1]. Deng analyzed
and summarized nine principles of risk-sharing in PPP projects, such as the
responsibility principle and the justice principle [8]. Bing proposed that risk
bearers be divided into the private sector, the public sector, or joint undertaking,
depending on the specific conditions of the project [2]. Jin and Chang
explained the risk allocation of PPP projects from the perspectives of transaction
cost economics (TCE) and the resource-based view (RBV) [5,11]. At present, most studies
tend to adopt quantitative methods to optimize the risk-sharing mechanism and
process of PPP projects. The questionnaire survey is a typical quantitative
research method in risk allocation [16]: Ke surveyed experienced
practitioners to determine preferences in PPP project risk allocation [13],
Guo adopted the real-option method to study the risk allocation strategy of
highway PPP projects under delayed investment decisions [10], and Jin used
artificial neural networks to realize the allocation [12].
In conclusion, with the development of the theory and practice of
risk-sharing in PPP projects, scholars at home and abroad have studied risk
allocation thoroughly, but this work is mostly confined to which specific risks should be
allocated to which stakeholders, the government or the private sector. However,
few studies address specific allocation strategies and proportions, and there
is almost no study of risk allocation from the perspective of the SOEs.
Fig. 1. The procedure of risk allocation from the perspective of the SOEs in PPP
projects
3 Method
The quantitative methods adopted to optimize risk allocation mainly include
game theory, real option theory, artificial neural networks and fuzzy
comprehensive evaluation theory. Game theory, option theory and artificial neural
networks can objectively reflect the actual negotiation process, but their models
are complex and difficult to apply in practice. Considering the subjectivity and
uncertainty of the risk-decision mechanism, fuzzy comprehensive evaluation (FCE),
based on fuzzy mathematics, is a reasonable choice for optimizing risk allocation. It
can turn qualitative questions into quantitative analysis based on membership
degree theory. In addition, the method produces clear results, is strongly
systematic and handles problems that are difficult to quantify, so it has been
widely applied in engineering and applied science, such as water quality
evaluation, clinical diagnosis, information technology, air quality assessment
and product design. Some scholars have also gradually attempted to apply fuzzy
mathematics to construction risk management. This paper establishes a PPP
project risk allocation model to optimize the multi-objective decision-making
of risk allocation from the perspective of the SOEs. Figure 1 summarizes the
procedure of PPP project risk allocation.
3.2 Determine the Risk Evaluation Index System for PPP Project
This section discusses the risk evaluation index system based on the allocation
principles and influencing factors. Since many factors and principles affect the
risk allocation of state-owned enterprises, the paper uses qualitative research
to comb through them. The influencing factors include the project's system
properties, stakeholders' misunderstanding of the PPP model, and the
risk-taking attitudes and intentions of stakeholders.
Combined with theoretical research and case studies, the guiding principles of
PPP projects can be summarized as follows: the equity principle, the liability
criterion, the effective control principle, risk premium parity, the upper-limit
principle and the dynamic principle. The evaluation index system should be set up
in accordance with the principles of quantifiability, criticality, objectivity,
systematicness, operability, independence, comparability and representativeness,
so that it can fully and accurately reflect the comprehensive ability of the
specific subject to control risk. Table 1 shows the evaluation index system.
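The index weights W used in the composition later come from the entropy coefficient method, whose computation is not reproduced in this excerpt. A minimal sketch of the standard entropy-weight procedure, run on hypothetical expert scores rather than the authors' survey data, might look like:

```python
import math

# Standard entropy-coefficient weighting (a sketch, not the authors' code).
# Rows are evaluation indices, columns are hypothetical expert scores: a row
# whose scores are nearly uniform carries little information and therefore
# receives a small weight.

def entropy_weights(scores):
    """Return normalized index weights from a score matrix."""
    n = len(scores[0])                          # samples per index
    divergences = []
    for row in scores:
        total = sum(row)
        probs = [x / total for x in row]
        # Shannon entropy of the row, normalized to [0, 1] by ln(n)
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(n)
        divergences.append(1.0 - e)             # degree of divergence
    s = sum(divergences)
    return [d / s for d in divergences]

scores = [[0.9, 0.2, 0.7],    # informative index
          [0.5, 0.5, 0.5],    # perfectly uniform index
          [0.3, 0.8, 0.4]]
weights = entropy_weights(scores)
# the uniform second row receives (essentially) zero weight
```

In the paper's notation, such weights would play the role of W in the composition B = W ∘ R below.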
The triangular fuzzy function is T = (f, g, h), with grades Very low = (0%, 15%,
30%), Low = (15%, 25%, 35%), Medium = (35%, 45%, 55%), High = (55%, 65%, 75%)
and Very high = (75%, 85%, 95%). Through the formula V(T) = (f + g + h)/3, the
fuzzy evaluation values are calculated as V = (15%, 25%, 45%, 65%, 85%).
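The defuzzification of the five grades can be checked directly; a small sketch, with the grade tuples taken from the text:

```python
# Triangular fuzzy grades T = (f, g, h) from the text, collapsed to crisp
# values with V(T) = (f + g + h) / 3.

GRADES = {
    "very low":  (0.00, 0.15, 0.30),
    "low":       (0.15, 0.25, 0.35),
    "medium":    (0.35, 0.45, 0.55),
    "high":      (0.55, 0.65, 0.75),
    "very high": (0.75, 0.85, 0.95),
}

def defuzzify(t):
    """Crisp (mean-of-vertices) value of a triangular fuzzy number."""
    f, g, h = t
    return (f + g + h) / 3

V = [round(defuzzify(t), 2) for t in GRADES.values()]
# reproduces the evaluation-value vector V = (0.15, 0.25, 0.45, 0.65, 0.85)
```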
        ⎛ r11 r12 · · · r1m ⎞
    R = ⎜ r21 r22 · · · r2m ⎟ .
        ⎜  ·   ·  · · ·  ·  ⎟
        ⎝ rn1 rn2 · · · rnm ⎠
We use the Delphi method to determine the evaluation values. Based on the index
system and evaluation standard, this paper establishes the membership matrix R,
whose row vector ri = (ri1 , ri2 , · · · , rim ) gives the membership value rij
of index i in evaluation grade j.
The commonly used fuzzy operators include M (∧, ∨), M (•, ∨), M (∧, ⊕) and M (•, ⊕).
M (•, ⊕) comprehensively considers the membership vectors and the weights of the
indices, whereas the other fuzzy operators lose some information: M (∧, ∨) and M (•, ∨)
are suitable for single-constraint models, which take only the main indicators
into account and ignore the secondary indices. The fuzzy operators' characteristics
are shown in Table 2.
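The difference between the operator families can be made concrete with a small sketch on illustrative numbers (not the paper's data): M(∧, ∨) keeps only the dominant index through min/max, while M(•, ⊕), a weighted sum bounded at 1, blends every index.

```python
# Two fuzzy composition operators B = W ∘ R (illustrative data only).

def compose_min_max(w, R):
    """M(∧, ∨): b_j = max_i min(w_i, r_ij) - single-constraint style."""
    return [max(min(wi, row[j]) for wi, row in zip(w, R))
            for j in range(len(R[0]))]

def compose_weighted_sum(w, R):
    """M(•, ⊕): b_j = min(1, Σ_i w_i · r_ij) - uses all indices."""
    return [min(1.0, sum(wi * row[j] for wi, row in zip(w, R)))
            for j in range(len(R[0]))]

w = [0.6, 0.3, 0.1]              # index weights
R = [[0.7, 0.2, 0.1],            # membership of index 1 in each grade
     [0.1, 0.6, 0.3],            # index 2
     [0.0, 0.2, 0.8]]            # index 3

b_mm = compose_min_max(w, R)       # dominated by the heaviest index
b_ws = compose_weighted_sum(w, R)  # reflects secondary indices too
```

With these numbers, `b_mm` caps the contribution of the lightly weighted third index at its weight, while `b_ws` still averages all three membership rows.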
Based on the above analysis, this paper chooses M (•, ⊕) to calculate the
comprehensive evaluation vector B = (b1 , b2 , b3 , · · · , bm ). The methods widely
adopted to process the evaluation vector are the weighted average method and
the principle of maximum membership degree. However, the latter keeps only the
best-ranked grade in the evaluation scheme and can therefore miss key
information. This section thus selects the weighted average method to
normalize the index system, which converts the comprehensive evaluation vector
B = (b1 , b2 , b3 , · · · , bm ) into a comprehensive score S: S = B × V^T, where V
represents the values of the index evaluation standard, V = (0.15, 0.25, 0.45, 0.65, 0.85).
4 Case Study
To further integrate the existing system of water supply and drainage, improve
the systematicness, harmonization, sharing and economy of water supply and
drainage, the government of Mianzhu, Sichuan province proposed to launch the
Mianzhu Integration of Water Supply and Drainage project in November 2015.
The project’s investment is 680 million yuan, and its concession period is about
30 years, which includes the construction and reform period of 3 years. Mianzhu
[Figure: Project structure of the Mianzhu Integration of Water Supply and
Drainage project. The Mianzhu government authorizes a Special Purpose Vehicle
(SPV) through franchise agreements and an asset transfer agreement; the SPV
operates under a BOT+TOT model, organizes loan financing from financing
institutions, and is formed through a joint venture agreement with Pochuan
Water Investment Co. Ltd (equity ratio 90%).]
Table 3. Risk identification list of Mianzhu Integration of Water Supply and Drainage
project
B1 = W ◦ R1
   = (0.09, 0.06, 0.12, 0.10, 0.10, 0.12, 0.08, 0.16, 0.14, 0.03)
     ⎛ 0.50 0.23 0.20 0.04 0.03 ⎞
     ⎜ 0.15 0.42 0.24 0.13 0.06 ⎟
     ⎜ 0.01 0.03 0.26 0.40 0.31 ⎟
     ⎜ 0.59 0.32 0.03 0.03 0.03 ⎟
     ⎜ 0.01 0.01 0.22 0.50 0.26 ⎟
   ◦ ⎜ 0.08 0.04 0.53 0.26 0.09 ⎟
     ⎜ 0.02 0.14 0.37 0.34 0.14 ⎟
     ⎜ 0.02 0.01 0.53 0.40 0.04 ⎟
     ⎜ 0.06 0.13 0.52 0.23 0.06 ⎟
     ⎝ 0.33 0.27 0.14 0.17 0.09 ⎠
   = (0.15, 0.13, 0.34, 0.27, 0.11),
B2 = W ◦ R2
   = (0.09, 0.06, 0.12, 0.10, 0.10, 0.12, 0.08, 0.16, 0.14, 0.03)
     ⎛ 0.29 0.34 0.25 0.12 0.01 ⎞
     ⎜ 0.02 0.10 0.37 0.34 0.17 ⎟
     ⎜ 0.00 0.05 0.44 0.34 0.17 ⎟
     ⎜ 0.05 0.20 0.27 0.24 0.24 ⎟
     ⎜ 0.07 0.12 0.49 0.25 0.07 ⎟
   ◦ ⎜ 0.01 0.05 0.49 0.32 0.14 ⎟
     ⎜ 0.07 0.10 0.51 0.25 0.07 ⎟
     ⎜ 0.01 0.07 0.59 0.24 0.10 ⎟
     ⎜ 0.01 0.05 0.65 0.22 0.07 ⎟
     ⎝ 0.08 0.21 0.37 0.21 0.13 ⎠
   = (0.05, 0.11, 0.47, 0.25, 0.11).
4.5 Determine the Sharing Subject and Sharing Proportion for Tax
Regulation Change Risk
The comprehensive evaluation vectors B = (b1 , b2 , b3 , · · · , bm ) are then
converted into comprehensive scores S by the weighted average method:

S1 = B1 × V^T = (0.15, 0.13, 0.34, 0.27, 0.11) × (0.15, 0.25, 0.45, 0.65, 0.85)^T = 0.48080,
S2 = B2 × V^T = (0.05, 0.11, 0.47, 0.25, 0.11) × (0.15, 0.25, 0.45, 0.65, 0.85)^T = 0.50897.
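As a numerical cross-check, the composition and scoring can be reproduced from the printed W, R1 and R2 (with M(•, ⊕) reducing to an ordinary weighted sum, since the weights total 1); the small deviations from the printed S1 = 0.48080 and S2 = 0.50897 come from the two-decimal rounding of the published matrices.

```python
# Recompute B_k = W ∘ R_k and S_k = B_k · V^T from the case-study matrices.

W = [0.09, 0.06, 0.12, 0.10, 0.10, 0.12, 0.08, 0.16, 0.14, 0.03]
V = [0.15, 0.25, 0.45, 0.65, 0.85]

R1 = [[0.50, 0.23, 0.20, 0.04, 0.03],
      [0.15, 0.42, 0.24, 0.13, 0.06],
      [0.01, 0.03, 0.26, 0.40, 0.31],
      [0.59, 0.32, 0.03, 0.03, 0.03],
      [0.01, 0.01, 0.22, 0.50, 0.26],
      [0.08, 0.04, 0.53, 0.26, 0.09],
      [0.02, 0.14, 0.37, 0.34, 0.14],
      [0.02, 0.01, 0.53, 0.40, 0.04],
      [0.06, 0.13, 0.52, 0.23, 0.06],
      [0.33, 0.27, 0.14, 0.17, 0.09]]

R2 = [[0.29, 0.34, 0.25, 0.12, 0.01],
      [0.02, 0.10, 0.37, 0.34, 0.17],
      [0.00, 0.05, 0.44, 0.34, 0.17],
      [0.05, 0.20, 0.27, 0.24, 0.24],
      [0.07, 0.12, 0.49, 0.25, 0.07],
      [0.01, 0.05, 0.49, 0.32, 0.14],
      [0.07, 0.10, 0.51, 0.25, 0.07],
      [0.01, 0.07, 0.59, 0.24, 0.10],
      [0.01, 0.05, 0.65, 0.22, 0.07],
      [0.08, 0.21, 0.37, 0.21, 0.13]]

def compose(w, R):
    """M(•, ⊕) composition: b_j = min(1, Σ_i w_i · r_ij)."""
    return [min(1.0, sum(wi * row[j] for wi, row in zip(w, R)))
            for j in range(len(R[0]))]

def score(b, v):
    """Weighted-average comprehensive score S = B · V^T."""
    return sum(bj * vj for bj, vj in zip(b, v))

B1, B2 = compose(W, R1), compose(W, R2)
S1, S2 = score(B1, V), score(B2, V)
# rounded: B1 ≈ (0.15, 0.13, 0.34, 0.27, 0.11), B2 ≈ (0.05, 0.11, 0.47, 0.25, 0.11)
# S1 ≈ 0.480, S2 ≈ 0.510
```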
5 Conclusion
Risk allocation is a key factor in the success of infrastructure PPP projects,
and reasonable risk allocation can effectively reduce the risk level. This
paper explores an effective approach to risk allocation from the perspective
of the SOEs and builds an allocation model based on fuzzy comprehensive
evaluation and the entropy coefficient method. In addition, the paper demonstrates
the objectivity, rationality and feasibility of the allocation model, which provides
scientific methods and guidance for SOEs to participate in PPP projects,
further deepening the reform of state-owned enterprises, gradually shifting
economic growth from factor-driven to innovation-driven, and promoting the
transformation of the public ownership economy.
References
1. Abednego MP, Ogunlana SO (2006) Good project governance for proper risk allo-
cation in public-private partnerships in Indonesia. Int J Project Manage 24(7):622–
634
2. Bing L, Akintoye A et al (2005) The allocation of risk in PPP/PFI construction
projects in the UK. Int J Project Manage 23(1):25–35
3. China Public Private Partnerships Center (2016) Quarterly report on the project
library of the national PPP integrated information platform. http://www.cpppc.
org/en/Quarterly/4007.jhtml
4. Chan APC, Lam PTI (2010) Critical success factors for PPPs in infrastructure
developments: Chinese perspective. J Constr Eng Manage 136(5):484–494
5. Chang C (2013) A critical review of the application of TCE in the interpretation
of risk allocation in PPP contracts. Constr Manage Econ 31(2):99–103
6. Chen C, Chen P, Wang Q (2017) Comparing the efficiency of public-private part-
nerships with the traditional procurement: based on the Chengdu No. 6 water plant
B. In: Proceedings of the tenth international conference on management science
and engineering management, pp 487–501
7. Chou JS, Pramudawardhani D (2015) Cross-country comparisons of key drivers,
critical success factors and risk allocation for public-private partnership projects.
Int J Project Manage 33(5):1136–1150
8. Deng X, Qiming LI et al (2008) Summary and application of the principles of risk
allocation in PPP model. Constr Econ 09:32–35 (in Chinese)
9. Grimsey D, Lewis MK (2002) Evaluating the risks of public private partnerships
for infrastructure projects. Int J Project Manage 20(2):107–118
10. Jian G (2013) The risk allocation strategy research of PPP project in highway
traffic infrastructure. Manage Rev 25(7):11–19
11. Jin XH (2009) Determinants of efficient risk allocation in privately financed public
infrastructure projects in Australia. J Constr Eng Manage 136(2):138–150
12. Jin XH, Zhang G (2011) Modelling optimal risk allocation in PPP projects using
artificial neural networks. Int J Project Manage 29(5):591–603
13. Ke Y, Wang SQ et al (2010) Preferred risk allocation in China’s public-private
partnership (PPP) projects. Int J Project Manage 28(5):482–492
14. Liu S (2016) Promote the reform of state-owned enterprises in the PPP way. Mod
SOE Res 9:78–81 (in Chinese)
15. The State Council (2015) China urges SOE modernization through mixed own-
ership reform. http://english.gov.cn/policies/latest releases/2015/09/24/content
281475197422388.htm
16. Zhang S, Chan APC et al (2016) Critical review on PPP research-a search from
the Chinese and international journals. Int J Project Manage 34(4):597–612
17. Zou PXW, Zhang G, Wang J (2007) Understanding the key risks in construction
projects in China. Int J Project Manage 25(6):601–614
An Exploratory Case Study of the Mature
Enterprise’s Corporate Brand Building
Based on Strategic Perspective
1 Introduction
Over the past two decades, corporate branding has been advocated by scholars
as an effective way for companies to build competitive advantage [4], and it has
become a key strategic resource of enterprises [9]. In the contemporary global
competitive environment, products and services typically become more similar,
which makes it difficult for consumers to differentiate offerings coming from
different enterprises. Consequently, promoting the entire company as a brand has
become an efficient approach to creating differentiation [8]. The corporate brand
has significant value in attracting consumer attention, supporting products,
communicating core messages and motivating employees [13], and it can help an
enterprise reduce costs, give customers a sense of belonging and build consensus
among stakeholders [11]. Therefore, studying the corporate brand from the
perspective of strategy is significant.
To our knowledge, existing research about corporate brand mainly focuses on
corporate brand itself [3,5,10,12,14], or on the marketing area, which explores
how corporate brand affects the consumers’ reviews, attitudes or purchasing ten-
dency [3,5,15,16]. Besides, the relationships among corporate brand and orga-
nization identity, image, reputation, etc. [7,12], the influence factors of corpo-
rate brand, and the effect of corporate brand on other aspects [7,8,16,17] are
also research hotspots now. But there is little research on the inner
relationship between corporate brand and enterprise strategy. Especially for
mature enterprises seeking to promote the corporate brand, how to face
competition from start-ups and break free from past traces becomes an
essential issue to be resolved.
This research therefore carries out an exploratory case study of two mature
companies, to analyze the role and impact of strategic actions, taken during
the enterprise development process, on corporate brand rebuilding.
2 Literature Review
The research on corporate brand sprang up in the 1930s. To some extent, the
corporate brand is defined from the organization as a whole: it is the platform
for all corporate brands and the carrier used to promote various differentiated
services [1]. Knox and Bickerton [12], however, regarded the corporate brand as
the manifestation of an organization's unique business model in vision,
diffusion and behavior. Hatch and Schultz [11] defined the corporate brand as a
brand umbrella covering all corporate products and lending luster to them.
Aaker [1] concluded that the elements of the corporate brand are historical
foundation, corporate performance, employee performance, values and priorities,
local and global orientation, and social resonance. Hatch and Schultz [9] proposed
that the corporate brand is made up of vision, culture and image: vision comes from
the leadership consciousness of senior leaders; culture comes from employees and
forms internal cognition and cohesion; and image is the evaluation of the external
public. Balmer [6,10] pointed out that the components of the corporate brand
include brand vision, culture, positioning, personality, public relations and
information.
Brand building and maintenance require a systematic and clear guidance plan
designed at the organizational strategic level. Philip Kotler [2] argued that
enterprises should regularly recognize the advantages and disadvantages of their
brands and constantly maintain brand development, and proposed four aspects of
brand construction: positioning, name selection, ownership decisions and
development strategy. With gradually increasing competition, corporations pay
more attention to seeking differential brand competition through unique emotional
experience rather than function or characteristics in brand construction [10].
It is clear that theoretical research on the definition, composition, role and
features of the corporate brand has made some achievements, but corporate
brand management research is somewhat lacking in guiding practice. Mukherjee
and Balmer [14] clearly pointed out that current articles about the corporate
brand are too idealized, lacking empirical data support and descriptions of
measures for possible conditions. Many scholars are developing related research
on corporate branding: for example, Knox and Bickerton [12] suggested that
corporate brand management consider four aspects, namely vision management,
culture management, image management and competition management, while Hatch and
Schultz [11] proposed that corporate brand management is the process of
eliminating the differences among corporate vision, culture and image.
In conclusion, the corporate brand not only has a profound relationship with
corporate strategic action but is also closely related to the corporate
development stage and business type. A strategic action is a strategic operation
affecting the future development direction of the corporation and the result of
executing the corporate strategic plan, reflecting the corporate strategic
choice. Strategic choices in turn directly promote the formation and change of
the corporate brand. Through certain strategic actions, the corporate brand can
concentrate and abstract the corporate vision, culture, values, behavior and
the expectations of stakeholders into a kind of contractual relationship, so
that the corporation and its stakeholders can establish a stable 'emotional
connection'. For many emerging enterprises, to obtain competitive advantage
through corporate brand management in the growing period, strategic actions
with great influence on the corporate brand must be adopted.
3 Research Design
Table 2. Corporate brand development process based on strategic actions for Luzhou
Laojiao
reform and opening-up policy, Jinjiang Hotel entered a modern enterprise
administration phase of independent operation and self-financing, and the era
gave Jinjiang Hotel a great new identity. In 1995, Jinjiang Hotel was given the
title of five-star hotel by the National Tourism Administration, becoming the
first five-star hotel in southwest China. But with the flourishing of the
Chengdu tourist market and the increasing number of new hotels entering it,
fierce competition restricted Jinjiang Hotel's development once again.
From 2001, the management team of Jinjiang Hotel decided to take the initiative
in reshaping and rebuilding the brand. It took "building a national top brand,
being the best in the hotel industry" as its vision, offering services and
products for the high-end business market. It redesigned the brand logo and
invited a famous designer to redesign and redecorate the whole hotel. Jinjiang
Hotel then started building a corporate culture and an educational training
system matched to the corporate brand, and joined hands with strong brand
enterprises. Through more than ten years of development, Jinjiang Hotel has
successfully rebuilt its corporate brand through step-by-step strategic actions.
The corporate brand development process based on strategic actions for Jinjiang
Hotel is shown in Table 3.
Table 3. Corporate brand development process based on strategic actions for Jinjiang
Hotel
Period and enterprise development status | Important strategic actions affecting the corporate brand | Corporate brand development situation

1961–1979 (proud of the "Yellow Vest"):
- Strategic actions: building and opening led by the government.
- Corporate brand development: (1) initially establishing the corporate brand;
(2) political overtones building a strong brand image and basis; (3) official
hostel, lacking awareness of business operations.

1979–2001 (development of a new identity):
- Strategic actions: (1) getting rid of government plans, becoming a
self-managed modern enterprise; (2) establishing an enterprise management
system and reforming the official human resource system; (3) initiating a union
and establishing a hotel management company; (4) carrying out hardware
upgrading; (5) regulating management; (6) being awarded the title of five-star
hotel; (7) intense competition and inner defects.
- Corporate brand development: (1) reforming the corporate brand, laying the
foundation of a systematic brand management project; (2) learning operation and
management; (3) becoming the top hotel in southwestern China; (4) having some
management infrastructure; (5) finding it difficult to resist external
competition.

Since 2001 (vigorous development of the brand):
- Strategic actions: (1) selecting a brand development strategy; (2)
establishing the vision, defining the position, designing the corporate logo;
(3) upgrading hardware facilities, improving product and service quality; (4)
embedding ideas through culture to improve service quality; (5) starting a
service personnel training system; (6) focusing on the opinions of leaders; (7)
group expansion; (8) extending the brand network; (9) associating with strong
brand enterprises; (10) re-changing the management system.
- Corporate brand development: (1) carrying out a series of measures to rebuild
the corporate brand; (2) discovering historical heritage, increasing the
essence of the corporate brand; (3) initially realizing the vision of a
"nation's top and industry leader's" brand; (4) becoming the leading enterprise
in Sichuan's tourism and hotel industry; (5) gaining the ability to compete
with international brands.
(1) Building the innovation expectation of the corporate brand. The
construction of a corporate brand has to be adjusted according to changes in
market conditions. At first, the two sample enterprises were immersed in their
past distinctive achievements without improvement, so their corporate brands
lacked appropriate adjustment and revolution. Afterwards, to break through the
management dilemma, Luzhou Laojiao conducted a series of strategic actions to
maintain its industry-leading position.
(2) Knowing the methods of corporate brand system construction. Although the
sample enterprises were able to comprehend the significance of building a
corporate brand, they still ran into crises when they did not adopt proper
strategic actions. Luzhou Laojiao ignored the intrinsic connection between
product brand and corporate brand in the process of exploring corporate brand
management, so the abuse of product brands weakened the value of the corporate
brand. However, through a series of implementations, the corporate brand system
was rebuilt. Mature enterprises should therefore first clarify the position and
role of the corporate brand in enterprise brand construction. Designing the
network structure of the corporate brand and adopting strategic actions, such
as putting the corporate brand at the core, to adjust the brand structure is
helpful for the strategic planning of corporate brand construction and can
clarify the relationship and manner of influence between corporate brand and
product brand.
(3) Exploring the path of corporate brand rule development. Enterprises in
different periods of the lifecycle have different resources and capabilities
and face different strategic choices, so the management approaches suited to
the corporate brand differ. Differences in enterprise scale, competitive
position, business type and operation mode can also lead to changes in
corporate brand management. Enterprises should constantly adjust the corporate
brand development path in accordance with their development phases and
strategic planning.
5 Conclusions
The research is based on a strategic view and integrates strategic management
and brand management theory. It adopts a longitudinal case study method and
analyses the ways of building corporate brands in mature manufacturing and
service-oriented enterprises. The processes of the sample enterprises show that
implementing a series of strategic actions that benefit the prospects,
circumstances and direction of future corporate brand development can boost the
corporate brand construction of mature enterprises. Moreover, mature
enterprises can adopt certain tactics to break through the dilemma of corporate
brand management and thus realize corporate brand construction. It can be
concluded that corporate brand construction and enterprises' strategic actions
are correlated: mature enterprises' brand construction is the consequence of
related strategic actions, and conversely, corporate brand construction has a
certain impact on enterprises' strategic actions. Further discussion is
warranted regarding the detailed roles, paths and ways in which the corporate
brand influences enterprises' strategic actions.
References
1. Aaker DA (2006) Brand portfolio strategy. Strateg Dir 22(10):468–468
2. Armstrong G (2009) Marketing: an introduction. Pearson Education, New Jersey
3. Balmer J (2017) Advances in corporate brand, corporate heritage, corporate iden-
tity and corporate marketing scholarship. Emerald
4. Balmer JMT (2012) Strategic corporate brand alignment: perspectives from iden-
tity based views of corporate brands. Eur J Mark 46(7/8):1064–1092
5. Balmer JMT, Abratt R, Kleyn N (2016) Corporate brands and corporate market-
ing: emerging trends in the big five eco-system. J Brand Manag 23(1):3–7
6. Balmer JMT, Powell SM et al (2016) Advances in corporate branding. Palgrave
Macmillan, UK
7. Buil I, Catalán S, Martínez E (2016) The importance of corporate brand identity
in business management: an application to the UK banking sector. Bus Res Q
19(1):3–12
8. Chang A, Chiang HH, Han TS (2015) Investigating the dual-route effects of cor-
porate branding on brand equity. Asia Pac Manag Rev 5(3):120–129
9. Dinnie K (2009) Taking brand initiative: how companies can align strategy, culture,
and identity through corporate branding. J Brand Manag 16(7):496–498
10. Harris F, Chernatony LD (2001) Corporate branding and corporate brand perfor-
mance. Eur J Mark 35(3/4):441–456
11. Hatch MJ, Schultz M (2001) Are the strategic stars aligned for your corporate
brand? Harvard Bus Rev 79(2):128
12. Knox S, Bickerton D (2003) The six conventions of corporate branding. Eur J Mark
37(7/8):998–1016
13. Lewis S (2000) Let’s get this in perspective. Unpublished presentation given at the
Confederation of British Industry, Branding and Brand Identity Seminar, Bradford
University School of Management
14. Mukherjee A, Balmer JMT (2008) Preface: New frontiers and perspectives in cor-
porate brand management: in search of a theory. Int Stud Manag Organ 37(4):3–20
15. Srivastava K, Sharma NK (2013) Service quality, corporate brand image, and
switching behavior: the mediating role of customer satisfaction and repurchase
intention. Serv Mark Q 34(34):274–291
16. Tu YT, Li ML, Chih HC (2013) An empirical study of corporate brand image,
customer perceived value and satisfaction on loyalty in shoe industry. J Econ Behav
Stud 5:469
17. Voss KE, Mohan M (2016) Corporate brand effects in brand alliances. J Bus Res
69(10):4177–4184
Case Study: Packing and Distribution Logistics
Optimization of Fashion Goods
1 Introduction
1.1 Kolon Sport
Kolon Sport (K/S) is the signature brand of Kolon Industries, Inc., which was
founded in 1957 in South Korea and operates more than 10 different fashion
brands under its business unit. The total revenue of the Kolon Industries was
around US $5 billion in 2014 [3]. The K/S brand was first established in 1973
and is now one of the top three outdoor fashion brands in Korea, which as of
2014 held the world's second-largest share of the global outdoor fashion
market after the United States. K/S operates more than 250 stores across the
country and designs, manufactures, and distributes more than 4,000 different
types of products in every sales season. More information about the brand can be
found at its website (http://us.kolonsport.com/). Note here that K/S refers to
both the brand name and the department unit in Kolon Industries responsible
for K/S operations. We use the term K/S interchangeably for the two definitions.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 133
of different box configurations was restricted to less than 10. Consequently, K/S
should decide how to design the box configurations and determine which con-
figuration and how many boxes are distributed to a store. These decisions are
collectively called packing and distribution in K/S. Tables 1 and 2 show an exam-
ple of packing and distribution decisions. The decisions on the configuration for
each box type and the number of boxes for each type should be made in the
packing and distribution process. In addition, which types and how many boxes
of each type should be sent to each store must also be determined.
Because the OEM manufacturers charge packing fees based on the number of
configuration types and the number of boxes shipped to the stores, appropriate
packing and distribution can significantly reduce costs. However, due to the
complexity of the configuration, decisions had been made on an ad-hoc basis,
causing inefficiency: some stores received more items than needed and some
received fewer.
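The distribution side of these decisions can be illustrated with a small brute-force sketch. The box configurations, demands, and store names below are hypothetical; the actual K/S system also designs the configurations themselves and solves the joint problem with mixed integer programming rather than enumeration.

```python
from itertools import product as cartesian

# Toy illustration of the distribution decision: given fixed box
# configurations, choose how many boxes of each type to ship to each store
# so that shipped quantities track store demand.

BOX_CONFIGS = {               # box type -> units packed per product
    "A": {"jacket": 2, "pants": 1},
    "B": {"jacket": 1, "pants": 2},
}
DEMAND = {                    # store -> demanded units per product
    "store1": {"jacket": 4, "pants": 3},
    "store2": {"jacket": 1, "pants": 2},
}

def mismatch(shipped, demand):
    """Total absolute deviation between shipped and demanded units."""
    return sum(abs(shipped.get(p, 0) - q) for p, q in demand.items())

def best_plan(demand, configs, max_boxes=4):
    """Per store, exhaustively pick box counts minimizing the mismatch."""
    plan = {}
    for store, dem in demand.items():
        best = None
        for counts in cartesian(range(max_boxes + 1), repeat=len(configs)):
            shipped = {}
            for n, cfg in zip(counts, configs.values()):
                for p, q in cfg.items():
                    shipped[p] = shipped.get(p, 0) + n * q
            cost = mismatch(shipped, dem)
            if best is None or cost < best[0]:
                best = (cost, dict(zip(configs, counts)))
        plan[store] = best[1]
    return plan

plan = best_plan(DEMAND, BOX_CONFIGS)
# e.g. store2 (1 jacket, 2 pants) is served exactly by one box of type "B"
```

The enumeration explodes combinatorially with realistic numbers of box types and stores, which is why a mixed-integer formulation and solver are used in practice.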
5 Validations
We conducted a simulation-based validation of the packing and distribution
process in one sales season to compare the performance of the proposed method
to that of the legacy procedure at K/S. The sales volume, i.e., number of sales,
from the proposed method was approximately 10.16% better than that with the
legacy method. We also conducted a pilot test of our approach by distributing a
number of actual products for the summer season of 2015; the results show that,
compared with the legacy method of K/S, our method distributed inventory in
closer accordance with store demand. Finally, the proposed method
has now been fully implemented in K/S's internal system for use in the
upcoming packing and distribution season. The details of the validation results
can be found in [5].
Case Study: Packing and Distribution Logistics Optimization 1589
6 Conclusion
In this document, we describe the Business Analytics project jointly conducted
by K/S and KAIST. The project has successfully developed a decision support
system to assist the packing and distribution procedure at K/S. Throughout
the project, the Business Analytics framework was first defined, and the
roles and responsibilities of each team were clearly identified based on that
framework. Then, multiple tasks were listed and performed by the joint team. The
proposed packing and distribution approach includes a statistical data analysis
that estimates the demand of stores for each product, the optimization of the
box configurations and their distribution, and the IT implementation for sustain-
able use. The core technical contributions of the project were the optimization
modeling and the algorithm development. The configuration of each box type and
the distribution quantities of boxes to the stores were optimized by mixed
integer programming. The proposed method was validated with simulation and
actual tests in the stores. As a result, the proposed method showed a significant
effect, and K/S decided to implement it within their internal IT system.
The financial benefit delivered by the project is still under investigation.
Based on the simulation result and on-site tests, we project that the proposed
packing and distribution will improve sales by 5%–10% in terms of revenue. From
the cost-saving viewpoint, we estimate that the man-hour working time for
determining the configurations, the boxes, and shipping and handling
will be significantly reduced. The unquantifiable contribution of this project is
also very significant. The project demonstrates how advanced data-driven meth-
ods and algorithms can be incorporated into a traditional retail fashion business.
It proves that in the modern data-driven society, a high-technology company is
not defined by its product but rather by how it handles its operations. Mr.
J.H. Jang, the Lead Manager of the Big Data Analytics Team, stated that “this
project showed how the scientific method can improve the operation and pro-
vided the direction of the Big Data Analytics Team”.
Moreover, this project has been an example of successful academic-industry
collaboration delivering actual tangible results. Often, academic researchers find
research topics in the academic literature or devise hypothetical
topics in the hope that such problems will prove valuable to
industry. In this project, however, the KAIST researchers identified multiple topics that are not yet
known to academia but are worthy of further investigation. The authors
are also currently working on an academic paper presenting the optimization
algorithm they developed for this project.
Finally, this packing and distribution problem is common across the retail
fashion industry. To the best of our knowledge, few retail fashion companies use
systematic methods for this process. The process, model, and algorithm developed in this
project can be further developed as a service or software solution. We discovered
three patents (two applications and one awarded patent) for solution algorithms
of the optimization problem similar to what we constructed [4,6,7]. Among these
patents, two are from the Oracle Corporation [6,7] and one is from the SAS
Institute [4]. However, we found that there is still room for improvement in the
algorithms presented in these patents.
Appendix
Decision Variables

$x_{bs}$: number of boxes of configuration $b$ allocated to store $s$;
$y_b$: binary variable equal to 1 if box configuration $b$ is used and 0 otherwise;
$u_{is}$: understocking of item $i$ in store $s$;
$o_{is}$: overstocking of item $i$ in store $s$.

Objective

$$\min \sum_{s \in S} \sum_{i \in I} (\alpha_{is} u_{is} + \beta_{is} o_{is})$$

Constraints

$$\sum_{b \in B} c_{bi} x_{bs} - o_{is} + u_{is} = d_{is} \quad \forall i \in I,\ s \in S$$
$$\sum_{b \in B} x_{bs} \ge N_s \quad \forall s \in S$$
$$\sum_{b \in B} x_{bs} \le M_s \quad \forall s \in S$$
$$\sum_{b \in B} y_b \le NB$$
$$\sum_{s \in S} \sum_{b \in B} x_{bs} \le T$$
$$\sum_{s \in S} x_{bs} \le \Big( \sum_{s \in S} M_s \Big) y_b \quad \forall b \in B$$
$$\sum_{s \in S} x_{bs} \ge y_b \quad \forall b \in B$$
$$x_{bs} \ge 0 \text{ and integer} \quad \forall b \in B,\ s \in S; \qquad y_b \in \{0, 1\} \quad \forall b \in B.$$
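To make the model concrete, the following is a minimal sketch that solves a deliberately tiny toy instance of the model above by brute-force enumeration rather than with a MIP solver; all data values (configurations, demands, penalties) are hypothetical and not from the paper.

```python
from itertools import product

# Toy instance (hypothetical data, NOT from the paper): 2 box
# configurations (b), 2 stores (s), 2 items (i).
B, S, I = range(2), range(2), range(2)
c = {(0, 0): 2, (0, 1): 0,          # c[b, i]: items of type i packed in box b
     (1, 0): 1, (1, 1): 1}
d = {(0, 0): 2, (1, 0): 1,          # d[i, s]: demand for item i at store s
     (0, 1): 1, (1, 1): 1}
alpha, beta = 3.0, 1.0              # alpha_is, beta_is: under-/over-stock penalties
NB, T = 2, 10                       # max configurations used, max total boxes
Ns, Ms = 0, 5                       # min/max boxes per store

best = None
# Enumerate all x[b, s] in 0..Ms; at the optimum y, u, o follow from x.
for xs in product(range(Ms + 1), repeat=len(B) * len(S)):
    x = {(b, s): xs[b * len(S) + s] for b in B for s in S}
    y = {b: int(any(x[b, s] > 0 for s in S)) for b in B}
    if sum(y.values()) > NB or sum(x.values()) > T:
        continue
    if any(not Ns <= sum(x[b, s] for b in B) <= Ms for s in S):
        continue
    cost = 0.0
    for i, s in product(I, S):
        supplied = sum(c[b, i] * x[b, s] for b in B)
        u = max(d[i, s] - supplied, 0)          # understocking u_is
        o = max(supplied - d[i, s], 0)          # overstocking  o_is
        cost += alpha * u + beta * o
    if best is None or cost < best[0]:
        best = (cost, x)

print(best[0])  # minimum weighted under-/over-stocking cost for the toy data
```

A real instance would of course be passed to a mixed integer programming solver; the enumeration here only illustrates how the objective and constraints fit together.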
References
1. Fischetti M, Monaci M, Salvagnin D (2015) Mixed-integer linear programming
heuristics for the prepack optimization problem. Discrete Optim 22:195–205
2. Hoskins M, Masson R et al (2014) The PrePack optimization problem. Springer
3. Kolon Industries (2014) Financial information. http://www.kolonindustries.com/Eng/
Service/service02.asp
4. Pratt RW (2014) Computer-implemented systems and methods for pack optimiza-
tion. U.S. Patent Application
5. Sung SW, Jang YJ (2015) Kolon-KAIST business analytics project. Internal report,
Kolon-KAIST Life Innovation Center
6. Vakhutinsky A (2012) Retail Pre-Pack optimization system. U.S. Patent Application
7. Vakhutinsky A, Subramanian S et al (2012) Retail pre-pack optimizer. U.S. Patent
Application
Integrated Project Management
Algorithmic and Program Functions
of Innovation Project Management
in a Technology Park
1 Introduction
The creation of new innovation projects is a key activity of many science and
technology parks. Zouain [13] and Lamperti [7] considered the problems of science
and technology parks from laboratory investigation and manufacturing through to the economic process.
1596 E. Ghuseynov et al.
The first scientific profile of the SSU technology park is one of the priority
directions of the park: the application of progressive information technology,
intelligent systems, and program tools that provide computing functions for many
medical, biological, engineering, and other technical problems important to
citizens. The choice of this scientific profile was justified by the scientific
base formed by the scientists of SSU and by the scientific work they have
conducted for about 50 years. Under the scientific leadership of Rafik Aliyev,
famous in world scientific circles for the development of fuzzy logic theory,
a scientific school of SSU scientists was organized, and their work was
implemented in flexible manufacturing systems at a metallurgy enterprise in the
city of Sumgait. At present, these scientists work at SSU as doctorate holders
and professors in their own specialties, and four laboratories of the departments
of the Faculty of Engineering support all the educational and scientific work of
students and teachers.
The second scientific profile of the SSU scientific technology park is a
multidisciplinary area that connects theoretical and experimental investigations
with the implementation of results in the manufacturing process. Investigations
in these directions are conducted by scientists of the physics and chemistry
faculties, which have four scientific laboratories and special experimental
devices located in the scientific part of SSU.
The third scientific profile of the SSU scientific technology park (SSU STP) is
one of the basic profiles, as it must provide the economic realization of
• Input and presentation of the scientist-user's idea together with the necessary
data about the user;
• Expert analysis of the project annotation;
• Experimental investigation in the laboratory and production of a laboratory
exemplar of the project;
• Embedding of the project in flexible manufacturing and its output.

Elchan Ghuseynov, Javanshir Mammadov, and Gulnara Genjeliyeva [5] considered a
problem of the application of a flexible industrial park in the scientific
technology park of Sumgait State University of Azerbaijan, in which the problems
of innovation project selection and embedding in manufacturing were not considered.
Fig. 1. The structural scheme of SSU STP with innovation project relocation
(1) The professors, teachers, and students of SSU present their own ideas at the
global level of the registration system. First, they enter their registration
information into one of the two panels, "Teacher registration" or "Student
registration". A user of the system then inputs his personal data and information
about his project (its name, purpose, and a short statement of the idea, i.e. the
annotation), which is saved in the "Annotation" panel for future familiarization
with the project by an expert. At this first stage, the "Prototype of a project"
panel is also used for careful analysis in this part of the system.
(2) Within one week, different experts independently check the registration
and annotation information. On the basis of the standard form for receiving a new
project, a decision about acceptance or rejection of the project is made.
The expert's procedures are executed and saved in the "Expert review" panel.
(3) For a thorough evaluation, the projects are presented to the experts within
the determined time. The relevance, novelty, modernity, quality of the
engineering solution, and economic efficiency of the projects are the criteria
for choosing a project for its first realization as an experimental prototype
under laboratory conditions, in accordance with the scientific profiles.
(4) In the experimental laboratory, the best project is handed over for design
engineering, material selection, production of the first experimental form of the
project, and testing of its mechanical, automation, and other functions. At this
stage, the basic technological characteristics of the project are defined, along
with its quality level and the features that distinguish it from its prototypes.
The documents must be officially prepared by the head of the laboratory.
(5) All information of the designer is saved in his database (in the "Project
data base" part).
(6) The commercial department determines the basic rules and the demand in the
external and internal markets, and the economic efficiency of the project is
computed. Čirjevskis [11] presented a dynamic "signature business model"
that can provide durable competitive advantage in the commercial procedures
of a technology park's business incubator. For the presentation of the project to
the local and international markets, Amit and Zott [1] worked out a model
for applying science and technology connections in the business process of an
innovation project. In the "Presentation of the new project" department,
2D and 3D pictures, animations, videos, and the technical characteristics
are saved in the database. The managers of the scientific profiles
choose the client firms, and information about them is saved in the "Client
firm" part. Official meetings between the scientific technology park and a firm
are arranged through this department, and the official documents are prepared.
(7) The procedures for presenting the project documents to the experts are
carried out in reality. At that stage, all the documents registered by the expert
are sent to the flexible manufacturing facility, where the project production
process is executed.
The structural scheme of SSU STP with innovation project management is
given in Fig. 1, which also shows the functional connections between the
departments of SSU STP.
In accordance with the functions of each module, the program for expert assess-
ment of innovation projects (EAIP) was developed; related issues of software
engineering technology and applications were considered by Carver et al. [2].
(1) At the first stage, a designer enters into the system the authors' names,
the title and aim of the project, the annotation of the project, and the date
and time of the project's receipt.
(2) At the second stage, all the project data are saved in the database
management system in tabular form.
(3) At the third stage, an expert checks the information about the project's
designer and reads through the annotation of the project. After checking the
project, the expert assesses it with marks; the result is output by the EAIP
system and sent to the designer.
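The three stages above can be sketched as a small program. This is a hypothetical illustration only: the class and field names are ours, not those of the actual EAIP software.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    # Stage 1: data entered by the designer (field names are illustrative).
    authors: list
    title: str
    aim: str
    annotation: str
    submitted: str
    marks: dict = field(default_factory=dict)   # Stage 3: expert marks

class EAIP:
    """Minimal sketch of the three EAIP stages described above."""
    def __init__(self):
        self.table = []                          # Stage 2: tabular database

    def register(self, project):                 # Stage 1: registration
        self.table.append(project)

    def assess(self, project, expert, mark):     # Stage 3: expert assessment
        project.marks[expert] = mark
        return mark                              # result sent to the designer

system = EAIP()
p = Project(["A. Designer"], "Smart sensor", "prototype", "short idea", "2017-01-10")
system.register(p)
result = system.assess(p, "Expert 1", 8)
print(result)  # → 8
```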
6 Conclusions
On the basis of the investigation of innovation project management, the
following results were obtained:
(1) In accordance with the aim of the investigated problem, an algorithmic
procedure for the management and assessment of an innovation project in a
scientific technology park was proposed.
References
1. Amit R, Zott C (2014) Business model design: a dynamic capability perspective.
Technical report. Citeseer
2. Carver D, Chan WK et al (2016) Special issue on software engineering technology
and applications. J Syst Softw 126:85–86
3. Correia AMM, Gomes MDLB (2014) Potentialities and limits for the local economic
and innovative development: a comparative analysis of technology parks located
in the northeast region of Brazil. Int J Innov Learn 15(3):274–298
4. Díez-Vial I, Fernández-Olmos M (2015) Knowledge spillovers in science and tech-
nology parks: how can firms benefit most? J Technol Transf 40(1):1–15
5. Ghuseynov E, Mammadov J, Genjeliyeva G (2016) Application of flexible industrial
park in the scientific technology park of Sumgait state university of Azerbaijan.
Br J Appl Sci Technol 13(4):17–26
6. Kirchberger MA, Pohl L (2016) Technology commercialization: a literature review
of success factors and antecedents across different contexts. J Technol Transf
41(5):1–36
7. Lamperti F, Mavilia R, Castellini S (2015) The role of science parks: a puzzle of
growth, innovation and R&D investments
8. Murzina SE (2015) International practice of innovation infrastructure creation as a
mechanism for the innovative economy development and the improvement of land
use effectiveness. J Siberian Federal Univ Humanit Soc Sci 8:2535–2544
9. Mussi C, Angeloni MT, Faraco RA (2014) Social networks and knowledge transfer
in technological park companies in Brazil. J Technol Manage Innov 9(2):172–186
10. Prencipe A (2016) Board composition and innovation in university spin-offs: evi-
dence from the Italian context. J Technol Manage Innov 11(3):33–39
11. Čirjevskis A (2016) Designing dynamically “signature business model” that sup-
port durable competitive advantage. J Open Innov Technol Mark Complexity
2(1):15
12. Volkonitskaia K (2015) Business models of technoparks in Russia. Series: Science,
Technology and Innovation, WP BRP 55/STI/2015
13. Zouain DM, Plonski GA (2015) Science and technology parks: laboratories of inno-
vation for urban development - an approach from Brazil. Triple Helix 2(1):1–22
Two-Stage Fuzzy DEA Models with Undesirable
Outputs for Banking System
Abstract. In this paper, we propose two-stage fuzzy DEA models with
undesirable outputs to evaluate the banking system. The banking system
is divided into two subsystems: a production stage and a profit stage. In the
models, two kinds of assumptions (constant returns to scale and variable
returns to scale) are considered, and fuzzy parameters are adopted to
describe the uncertain factors. A chance-constrained operator is used to
handle the proposed models, and equivalent transformations are given to
make the models solvable. We illustrate and validate the proposed models
by evaluating 16 Chinese commercial banks. Some discussions are also
presented to show the differences and advantages of the models.
1 Introduction
The efficiency of commercial banks is reflected particularly in their comprehensive
competitiveness. In terms of investment, supervision, or independent operation,
it is of great significance to evaluate the efficiency of commercial banks.
Data Envelopment Analysis (DEA), a non-parametric mathematical program-
ming approach to evaluate group of Decision Making Units (DMUs) with com-
parative efficiency, is widely applied to analyze the efficiency in the process of
banking operation [1,4,5,12,13,17].
The traditional DEA model, however, treats each DMU as a “black box” and
cannot explain the internal structure inside that box. In a banking
system, taking deposits and making loans are the main activities. Some banks may
have a greater advantage in taking deposits but perform less well in making loans
to realize profit. Hence, it is better to consider the banking system as a two-stage
process and to evaluate not only each stage but also the whole system.
In the banking system, there are desirable outputs, like the total deposit
during the first stage, and the profit during the second stage. But in the oper-
ational process, banks want to get rid of the risk exposure and non-performing
loans/assets, which are undesirable outputs. Scheel [15] concluded that the
undesirable outputs could be handled by direct and indirect approaches. In
indirect approaches, the values of the undesirable outputs are converted to a
monotone decreasing function, and then we can treat the undesirable outputs
as the other normal desirable outputs. Direct approaches avoid data transfor-
mation and incorporate the undesirable outputs as the inputs directly into the
DEA models. In Hu’s work [20], undesirable outputs were treated as the inputs,
and the less of the undesirable outputs (inputs), the better. A new two-stage
DEA model with the undesirable outputs to measure the slacks-based efficiency
of Chinese commercial banks during years 2008–2012 was developed in [2], where
the banking operation process of each bank was divided into a deposit generation
stage and a deposit utilization stage. However, they neglected the undesirable
outputs of the deposit generation stage.
In recent years, scholars have realized that some inputs and outputs
are imprecise in real-life cases. Conventional DEA models use accurate input
and output data, which may not fit real-world cases. To deal with imprecise
data, fuzzy theory was introduced into DEA, yielding fuzzy DEA.
According to Hatami-Marbini [7], the methods of handling the fuzzy DEA can
be divided into four types: (i) The tolerance approach [16]; (ii) The α-level based
approach [8,11]; (iii) The fuzzy ranking approach [6]; and (iv) The possibility
approach [9,10,19]. Puri [14] presented a fuzzy DEA model with undesirable
fuzzy outputs, giving a numerical illustration of the banking sector in India using
fuzzy input/output data for the period 2009–2011, but the internal structure of
banking operation system was not taken into account. Wanke [18] developed
new Fuzzy-DEA models to measure the impact of each model on the efficiency
scores and to identify the most relevant contextual variables of efficiency by using
bootstrap truncated regressions with fixed factors.
In this paper, we propose a two-stage fuzzy DEA model with undesirable
outputs to evaluate 16 Chinese commercial banks. The remainder of this study
is organized as follows. Section 2 presents the problem statement. Section 3 develops
three kinds of two-stage DEA models with undesirable outputs. Section 4 dis-
cusses the solution method. Section 5 gives a case study with 16 banks in China.
Finally, conclusions are given in the last section.
2 Problem Statement
The efficiencies of several banking systems (DMUs) are under evaluation, and
each DMU, indexed by j = 1, · · · , J, has two sub-stages: production stage and
profit stage. During the production stage, funds are collected from depositors
while consuming resources such as the number of employees $x_j^{NE}$, fixed
assets $x_j^{FA}$, operating expenses $x_j^{OE}$, and the number of institutions
$x_j^{NI}$. The total deposit $z_j^{TD}$
is not only the desirable output of the production stage but also the input to
1606 X. Zhou et al.
[Fig. 1. Two-stage structure of the banking system: inputs $x_j^{NE}, x_j^{FA}, x_j^{OE}, x_j^{NI}$ enter the production stage, which produces the intermediate $z_j^{TD}$ and the undesirable output $\tilde{z}_j^{RWA}$; the profit stage consumes $z_j^{TD}$ and produces $y_j^{RP}$ and the undesirable output $\tilde{y}_j^{NPA}$.]
the profit stage. The total risk-weighted asset $\tilde{z}_j^{RWA}$ is the undesirable output of
the production stage. In the profit stage, the deposits $z_j^{TD}$ are used to invest in
other activities to obtain profits $y_j^{RP}$, but the undesirable output of non-performing
loans/assets $\tilde{y}_j^{NPA}$ will also be produced. Because the deposits collected during the
production stage determine the investment decision in the profit stage, we model
this banking system as a two-stage problem, as shown in Fig. 1.
In practice, some data could be imprecise. Total risk-weighted assets refer to
the sum of In-Balance-Sheet assets and Off-Balance-Sheet exposures, weighted
according to different risk coefficients, respectively. The risk coefficients are
affected by macroeconomic environment, current risk coefficient, exposure of
bank, etc., which is difficult to ascertain. The non-performing loans/assets are
loans with signs of difficulty in repaying the loan principal and interest. It is
difficult to ascertain whether the borrowers can repay the loans on time, which
is affected by public policy, the credit of the clients, the actual situation of the clients,
etc. Due to the variability of risk coefficients and the repayment ability of bor-
rowers, we employ fuzzy numbers to describe the total risk-weighted assets and
non-performing loans/assets.
Definition 1. [21] Let $L(\cdot)$ and $R(\cdot)$ be two reference functions. A fuzzy
variable $\xi$ whose membership function has the following form is called an LR
fuzzy number:
$$\mu_{\xi}(x) = \begin{cases} L\left(\dfrac{m-x}{\alpha}\right), & x \le m,\ \alpha > 0, \\[4pt] R\left(\dfrac{x-m}{\beta}\right), & x \ge m,\ \beta > 0. \end{cases}$$
3 Models
The Charnes–Cooper–Rhodes (CCR) model assumes that all the DMUs exhibit Constant
Returns to Scale (CRS) [3]. In some cases we also have Variable Returns to
Scale (VRS). In this section, three kinds of fuzzy DEA models are developed for
commercial banking system evaluation.
When the banking system is considered as a “black box”, we formulate the
CCR model as follows:
$$\max h_0^{CCR} = \frac{u_d y_0^{RP}}{\sum_{i=1}^{4} v_i x_{i0} + w_u \tilde{z}_0^{RWA} + u_u \tilde{y}_0^{NPA}}$$
$$\text{s.t.} \begin{cases} \dfrac{u_d y_j^{RP}}{\sum_{i=1}^{4} v_i x_{ij} + w_u \tilde{z}_j^{RWA} + u_u \tilde{y}_j^{NPA}} \le 1, & j = 1, \cdots, J \\ v_i, w_u, u_d, u_u \ge \varepsilon > 0. \end{cases} \quad (1)$$
Next we discuss the case in which the banking system is divided into two stages.
Supposing all the DMUs exhibit CRS, we develop models (2) and (3) to calculate
the efficiencies of the first stage and the second stage:
$$\max h_0^{CRS_1} = \frac{w_d z_0^{TD}}{\sum_{i=1}^{4} v_i x_{i0} + w_u \tilde{z}_0^{RWA}}$$
$$\text{s.t.} \begin{cases} \dfrac{w_d z_j^{TD}}{\sum_{i=1}^{4} v_i x_{ij} + w_u \tilde{z}_j^{RWA}} \le 1, & j = 1, \cdots, J \\ v_i, w_d, w_u \ge \varepsilon > 0, \end{cases} \quad (2)$$
and
$$\max h_0^{CRS_2} = \frac{u_d y_0^{RP}}{w_d z_0^{TD} + u_u \tilde{y}_0^{NPA}}$$
$$\text{s.t.} \begin{cases} \dfrac{u_d y_j^{RP}}{w_d z_j^{TD} + u_u \tilde{y}_j^{NPA}} \le 1, & j = 1, \cdots, J \\ w_d, u_d, u_u \ge \varepsilon > 0. \end{cases} \quad (3)$$
Then we use $\omega_1$ and $\omega_2$ to represent the weights of the first stage and the
second stage:
$$\omega_1 = \frac{\sum_{i=1}^{4} v_i x_{ij} + w_u \tilde{z}_j^{RWA}}{\sum_{i=1}^{4} v_i x_{ij} + w_u \tilde{z}_j^{RWA} + w_d z_j^{TD} + u_u \tilde{y}_j^{NPA}}, \quad (4)$$
and
$$\omega_2 = \frac{w_d z_j^{TD} + u_u \tilde{y}_j^{NPA}}{\sum_{i=1}^{4} v_i x_{ij} + w_u \tilde{z}_j^{RWA} + w_d z_j^{TD} + u_u \tilde{y}_j^{NPA}}. \quad (5)$$
Then the two-stage fuzzy CRS DEA model with undesirable outputs can be proposed
as Eq. (6):
$$\max h_0^{CRS} = \omega_1 h_0^{CRS_1} + \omega_2 h_0^{CRS_2} = \frac{w_d z_0^{TD} + u_d y_0^{RP}}{\sum_{i=1}^{4} v_i x_{i0} + w_u \tilde{z}_0^{RWA} + w_d z_0^{TD} + u_u \tilde{y}_0^{NPA}}$$
$$\text{s.t.} \begin{cases} \dfrac{w_d z_j^{TD}}{\sum_{i=1}^{4} v_i x_{ij} + w_u \tilde{z}_j^{RWA}} \le 1, & j = 1, \cdots, J \\ \dfrac{u_d y_j^{RP}}{w_d z_j^{TD} + u_u \tilde{y}_j^{NPA}} \le 1, & j = 1, \cdots, J \\ v_i, w_d, w_u, u_d, u_u \ge \varepsilon > 0. \end{cases} \quad (6)$$
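Note that the weighted sum in Eq. (6) collapses into a single ratio precisely because $\omega_1$ and $\omega_2$ in Eqs. (4)–(5) share the denominator containing both stages' virtual inputs. A quick numerical check with arbitrary sample values (ours, purely illustrative) confirms the identity:

```python
# Arbitrary positive sample values for one DMU (illustrative only).
stage1_in = 5.0          # sum_i v_i x_i0 + w_u * z0_RWA
wd_zTD = 4.0             # w_d * z0_TD (stage-1 output, part of stage-2 input)
uu_yNPA = 1.0            # u_u * y0_NPA (undesirable, part of stage-2 input)
ud_yRP = 2.5             # u_d * y0_RP (stage-2 output)

stage2_in = wd_zTD + uu_yNPA
h1 = wd_zTD / stage1_in                          # h0^{CRS1} as in Eq. (2)
h2 = ud_yRP / stage2_in                          # h0^{CRS2} as in Eq. (3)
total = stage1_in + stage2_in
w1, w2 = stage1_in / total, stage2_in / total    # omega_1, omega_2, Eqs. (4)-(5)

combined = (wd_zTD + ud_yRP) / total             # right-hand side of Eq. (6)
print(abs(w1 * h1 + w2 * h2 - combined) < 1e-12)  # → True
```

The identity holds for any positive values, since $\omega_k h^{CRS_k}$ reduces to that stage's output over the shared denominator.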
4 Solution Method
In this section, we propose a solution method for the two-stage DEA models with
fuzzy parameters:
$$\max g_0^{CCR} = u_d y_0^{RP}$$
$$\text{s.t.} \begin{cases} \text{Pos}\Big\{ u_d y_j^{RP} - \sum_{i=1}^{4} v_i x_{ij} - w_u \tilde{z}_j^{RWA} - u_u \tilde{y}_j^{NPA} \le 0 \Big\} \ge \gamma, & j = 1, \cdots, J \\ \text{Pos}\Big\{ \sum_{i=1}^{4} v_i x_{i0} + w_u \tilde{z}_0^{RWA} + u_u \tilde{y}_0^{NPA} \ge 1 \Big\} \ge \gamma \\ \text{Pos}\Big\{ \sum_{i=1}^{4} v_i x_{i0} + w_u \tilde{z}_0^{RWA} + u_u \tilde{y}_0^{NPA} \le 1 \Big\} \ge \gamma \\ v_i, w_u, u_d, u_u \ge \varepsilon > 0. \end{cases} \quad (9)$$
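Under the triangular LR assumption of Definition 1 with $L(t) = \max(0, 1 - t)$, a possibility constraint $\text{Pos}\{\xi \le r\} \ge \gamma$ is known to have the crisp equivalent $m - (1 - \gamma)\alpha \le r$, which is what makes models such as (9) solvable by ordinary optimization software. A small numerical sweep (sample values ours) checks the equivalence:

```python
def pos_le(r, m, alpha):
    # Pos{xi <= r} for a triangular xi with center m and left spread alpha.
    return 1.0 if r >= m else max(0.0, 1.0 - (m - r) / alpha)

def crisp_ok(r, m, alpha, gamma):
    # Deterministic equivalent of Pos{xi <= r} >= gamma.
    return m - (1.0 - gamma) * alpha <= r

m, alpha, gamma = 10.0, 4.0, 0.75   # gamma chosen so the threshold m-(1-gamma)*alpha = 9 is exact
agree = all((pos_le(r, m, alpha) >= gamma) == crisp_ok(r, m, alpha, gamma)
            for r in range(0, 20))
print(agree)  # → True
```

The same substitution turns each Pos constraint in (9) into an ordinary linear inequality in the decision weights.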
After obtaining the optimal solutions $v_r^*$, $w_u^*$, $u_d^*$, and $u_u^*$, the CCR DEA
efficiency $g_0^{CCR}$ of the evaluated whole system can be obtained.
By solving models (11) and (12), the CRS and VRS DEA efficiencies of the
evaluated whole system, $g_0^{CRS}$ and $g_0^{VRS}$, can be obtained, as well as the
efficiencies of each sub-stage, $g_0^{CRS_1}$ ($g_0^{VRS_1}$) and $g_0^{CRS_2}$ ($g_0^{VRS_2}$).
5 Case Study
In this section, we evaluate the efficiencies of the top 16 Chinese banks: Bank Of
China (BOC), Agricultural Bank of China (ABC), Industrial and Commercial
Bank of China (ICBC), China Construction Bank (CCB), Bank of Communica-
tions (BCM), China Citic Bank (CNCB), Ping An Bank (PAB), Hua Xia Bank
(HXB), China Everbright Bank (CEB), China Industrial Bank (CIB), China Mer-
chants Bank (CMB), Bank Of Ningbo (NBB), Bank of Nanjing (NJB), Bank of Bei-
jing (BJB), Shanghai Pudong Development Bank (SPDB), and China Minsheng
Banking (CMSB).
5.1 Data
Table 1 presents the data of the inputs and outputs of the selected 16 Chinese
commercial banks in 2015.
5.2 Results
Based on the data in Table 1, we first set the decision-makers' confidence level
for the two stages as γ = 0.9; the efficiencies of each bank under different
circumstances are then obtained by solving the two-stage fuzzy CCR, CRS, and
VRS DEA models in LINGO. The results are shown in Tables 2
and 3, and Fig. 2 compares the system efficiencies of the 16 commercial banks
under the three proposed models.
Table 1. Data
According to Table 2, ICBC, CCB, CNCB, CEB, CIB, CMB, NBB, BJB, SPDB,
and CMSB are at the frontier under the traditional CCR DEA model,
but these advantages are not maintained under the CRS two-stage DEA model.
At the same time, the efficiencies of BOC, BCM, HXB, and CMSB also decline by
varying amounts. We conclude that the CCR model may often obtain high effi-
ciency scores when the system is considered as a “black box” by using four
inputs (Employees, Fixed assets, Operating expenses and Institutions) to pro-
duce three outputs (Total Risk-Weighted Assets, Retained profits and Non-
Performing loans/assets). However, the efficiencies may no longer be that high
when the intermediates are taken into account. In other words, the traditional
CCR model can only find weakly efficient systems, and it is difficult to
distinguish which banks are truly more efficient.
Table 2. CCR DEA efficiencies and CRS two-stage DEA efficiencies (System, Stage 1, Stage 2)

Bank   CCR   System  Stage 1  Stage 2     Bank   CCR   System  Stage 1  Stage 2
BOC    0.89  0.85    0.95     0.75        CEB    1     0.86    0.99     0.73
ABC    0.79  0.82    0.99     0.66        CIB    1     0.9     0.78     1
ICBC   1     0.92    1        0.84        CMB    1     0.9     1        0.8
CCB    1     0.91    1        0.82        NBB    1     0.94    0.75     1
BCM    0.89  0.85    0.96     0.73        NJB    1     1       1        1
CNCB   1     0.83    1        0.64        BJB    1     0.93    1        0.86
PAB    0.81  0.81    1        0.62        SPDB   1     0.89    0.92     0.85
HXB    0.9   0.84    1        0.69        CMSB   0.99  0.83    0.82     0.84
Table 3. CRS two-stage and VRS two-stage DEA efficiencies (System, Stage 1, Stage 2)

Bank   CRS    System  Stage 1  Stage 2    Bank   CRS    System  Stage 1  Stage 2
BOC    0.85   0.85    0.95     0.75       CEB    0.86   0.93    1        0.74
ABC    0.82   0.82    0.99     0.66       CIB    0.9    0.91    0.8      1
ICBC   0.923  0.924   1        0.84       CMB    0.9    0.91    1        0.8
CCB    0.914  0.915   1        0.82       NBB    0.94   1       1        1
BCM    0.85   0.85    0.96     0.73       NJB    1      1       1        1
CNCB   0.83   0.88    1        0.65       BJB    0.926  0.93    1        0.86
PAB    0.81   0.82    0.97     0.63       SPDB   0.89   0.89    0.92     0.85
HXB    0.84   1       1        1          CMSB   0.834  0.835   0.82     0.85
By using the two-stage DEA model, we can further obtain more discriminating
DEA efficiencies for the DMUs because of the stronger restriction on the production
possibility set. Although ICBC, CCB, CNCB, PAB, HXB, CMB, and BJB are DEA
efficient at the production stage, all of their efficiencies at the profit stage
are below 1. Similarly, CIB and NBB are inefficient at the profit stage while
both of them are DEA efficient at the production stage. That is because the CCR
DEA model neglects the internal interaction between the two stages. It also
proves that model (6) reflects not only the efficiency of each stage, but also the
efficiency of the banking system considering the relationship of two sub-stages.
Interestingly, the efficiency of ABC is improved by using model (6), which
indicates that ABC has a great advantage in its internal operational process. We
find that the ratio of risk-weighted asset and inputs of the NJB is relatively small,
the ratio of deposit and input is relatively high, and the input/output ratio of
the NJB is at a reasonable level. Further analysis shows that both efficiency
values of NJB in model (1) and model (6) are 1, i.e., NJB lies on the frontier in
both models. This is consistent with our analysis, and policy makers can consider
NJB as the industry benchmark.
If all the inputs of the bank have changed in the same proportion, this change
will have an impact on the total outputs of the system. That is, the banking
system will have VRS in the actual operation process. Based on the above, we
propose two kinds of models under the CRS and VRS assumptions. According to
model (7), the corresponding DEA efficiencies can be obtained, as shown in
Table 3. In Table 3, columns 2 and 3 are the overall efficiency values of the
two-stage banking system obtained under the two assumptions, respectively.
It is found that HXB, NBB and NJB are DEA efficient not only in the whole
bank system, but also in both of the two sub-stages. Compared with two-stage
CRS DEA model, the efficiencies of two-stage VRS DEA increase slightly, and
the number of points on the frontier also increases. Accordingly, we can draw
a conclusion that commercial banks will have better performance in the VRS
conditions. Therefore, in order to obtain a reasonable input/output combination
to improve the performance of the system, the decision-makers can improve
the returns to scale through actions such as training staff to improve business
efficiency, upgrading banking equipment, and improving business throughput.
In general, the difference between DEA efficiencies under the different
returns-to-scale assumptions lies in a small range, which shows that the
proposed models have a certain stability.
6 Conclusion
This paper explores the problem of efficiency evaluation of a banking system
under an uncertain environment. We consider the bank operating process to consist
of a production stage and a profit stage, the two sub-stages being connected
by an intermediate factor. Funds are collected from depositors in
the production stage and are used to invest in other activities to obtain profits
in the profit stage. The undesirable outputs of both sub-stages are treated as
inputs. On this basis, we present two-stage fuzzy DEA models with undesirable
outputs based on two different returns-to-scale assumptions.
In this paper, we have three main contributions as follows: First, based on
two assumptions of returns to scale, we present the two-stage fuzzy CRS and
VRS DEA models with undesirable output. The efficiency of the whole system
is defined as the weighted sum of the efficiencies of two sub-stages, where the
weights depend on the importance of the two stages. Second, considering the
variability of risk coefficients and the repayment ability of borrowers, we apply
fuzzy numbers to describe the total risk-weighted assets and non-performing
loans/assets, which is closer to practice. We then apply a chance-constrained
operator to handle the fuzzy factors, and the objective is optimized subject
to the chance constraints under certain confidence levels. Third, a case study
and a detailed comparative discussion are given by examining the top 16 Chinese
commercial banks in 2015.
This paper focuses on the actual operation of commercial banks, considering
their internal operational structure and the undesirable outputs in the process
of operation. Through the application of the model, it is found
that the model can better evaluate the input/output structure of a bank and
can point out the directions for improvement.
References
1. Akther S, Fukuyama H, Weber WL (2013) Estimating two-stage network slacks-
based inefficiency: an application to Bangladesh banking. Omega 41(1):88–96
2. An Q, Chen H et al (2015) Measuring slacks-based efficiency for commercial banks
in China by using a two-stage DEA model with undesirable output. Ann Oper Res
235(1):13–35
3. Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision
making units. Eur J Oper Res 2(6):429–444
4. Chen Y, Cook WD, Zhu J (2010) Deriving the DEA frontier for two-stage processes.
Eur J Oper Res 202(1):138–142
5. Cook WD, Liang L, Zhu J (2010) Measuring performance of two-stage network
structures by DEA: a review and future perspective. Omega 38(6):423–430
6. Hatami-Marbini A, Saati S, Tavana M (2010) An ideal-seeking fuzzy data envel-
opment analysis framework. Appl Soft Comput 10(4):1062–1070
7. Hatami-Marbini A, Emrouznejad A, Tavana M (2011) A taxonomy and review of
the fuzzy data envelopment analysis literature: two decades in the making. Eur J
Oper Res 214(3):457–472
8. Kao C, Liu ST (2011) Efficiencies of two-stage systems with fuzzy data. Fuzzy Sets
Syst 176(1):20–35
9. Khodabakhshi M, Gholami Y, Kheirollahi H (2010) An additive model approach
for estimating returns to scale in imprecise data envelopment analysis. Appl Math
Model 34(5):1247–1257
10. Lertworasirikul S, Fang SC et al (2003) Fuzzy data envelopment analysis (DEA):
a possibility approach. Fuzzy Sets Syst 139(2):379–394
11. Liu ST (2014) Restricting weight flexibility in fuzzy two-stage DEA. Comput Ind
Eng 74:149–160
12. Liu W, Zhou Z et al (2015) Two-stage DEA models with undesirable input-
intermediate-outputs. Omega 56:74–87
13. Puri J, Yadav SP (2013) A concept of fuzzy input mix-efficiency in fuzzy DEA and
its application in banking sector. Expert Syst Appl 40(5):1437–1450
14. Puri J, Yadav SP (2014) A fuzzy DEA model with undesirable fuzzy outputs and
its application to the banking sector in India. Expert Syst Appl 41(14):6419–6432
15. Scheel H (2001) Undesirable outputs in efficiency valuations. Eur J Oper Res
132(2):400–410
16. Sengupta JK (1992) A fuzzy systems approach in data envelopment analysis. Com-
put Math Appl 24(8):259–266
17. Tavana M, Khalili-Damghani K (2014) A new two-stage stackelberg fuzzy data
envelopment analysis model. Measurement 53:277–296
18. Wanke P, Barros C, Emrouznejad A (2016) Assessing productive efficiency of banks
using integrated fuzzy-DEA and bootstrapping: a case of mozambican banks. Eur
J Oper Res 249(1):378–389
Undesirable Outputs for Banking System 1615
19. Wu DD, Yang Z, Liang L (2006) Efficiency analysis of cross-region bank branches
using fuzzy data envelopment analysis. Appl Math Comput 181(1):271–281
20. Hu XY, Cheng XJ, Ma LJ (2013) Efficiency evaluation of commercial banks based
on two-stage DEA model considering undesirable outputs. J Univ Chin Acad Sci
30(4):462–471 (in Chinese)
21. Xu J, Zhou X (2011) Fuzzy-like multiple objective decision making, vol 263.
Springer
Increasing Effect in Lodger Number of Hot
Spring Hotel According to the Started
Operation of Hokuriku Shinkansen
1 Introduction
Japan’s population has entered into a decreasing phase due to the decreasing
birthrate and aging population. Productive population also decreases remark-
ably, therefore a measure to put on the brake for the phenomenon is required.
National budget is also tight due to the increasing of the cost of social security
including medical expense as the society ages [2]. It is essential to build a policy
which could create a new industry and put the brakes on the depopulation in
a rural area. One of the regional-revitalization measures is “tourism industry”.
Human exchange is a basic policy to develop the tourism industry. The following
measures are important in the exchange, namely information and transportation
infrastructures [4].
A comfortable express railway (i.e., a Shinkansen line) had been desired in
the Hokuriku district (Fukui, Ishikawa and Toyama Prefectures) for 50 years.
The newest section of the Hokuriku Shinkansen Line, between Nagano and
Kanazawa, opened on March 14, 2015, about fifty years after the Tokaido
Shinkansen Line started operations. Due to the new service, the number of
visitors to Kanazawa City, the prefectural capital of Ishikawa, increased
considerably (the Shinkansen effect), and the occupancy rate for hotels
exceeded 85%. There are nine main spa areas in the Hokuriku district, namely
Awara, Yamanaka, Yamashiro, Katayamazu, Awazu, Yuwaku, Wakura, Wajima and
Unazuki, and the number of lodgers there has increased by roughly 15 to 30%.
Except for Unazuki (Toyama Prefecture) and Yuwaku (Kanazawa City), these areas
take about an hour and a half by bus from the nearest Shinkansen station,
whereas Unazuki and Yuwaku are within thirty minutes by car or train; the
nearest station to Unazuki is Kurobe-Unazuki-Onsen and the one for Yuwaku is
Kanazawa. The number of visitors to both of these spa areas increased by about
30%, so the start of operation affected them significantly. It is desirable
for this ripple effect to spread across the whole region, and the mass media
reported that visitor numbers increased in all the spa areas. It is therefore
necessary to investigate the effect and apply the survey results to future
strategy.
An outline of the ripple effect can be obtained by examining the monthly
variability of lodger numbers in each spa area, which also makes it possible
to choose suitable dates for various events [1,3,11]. Moreover, it helps in
constructing an effective strategy for increasing guest numbers. In this
study, the variability of lodger numbers in the nine spa areas of the Hokuriku
district over three years is summarized as the Shinkansen effect. The data for
December 2015 were estimated by the authors.
2 Hokuriku Shinkansen
The Hokuriku Shinkansen is a railway route that will connect Tokyo and Osaka
via the Hokuriku district once the entire line is available. When the full
line is open, it can serve as a bypass for the Tokaido Shinkansen. A part of
the line (from Tokyo to Nagano) opened ahead of schedule in 1997 for the
Nagano Olympics (the 1998 Winter Olympics), and it took a further 18 years
until the opening to Kanazawa. This was fifty years later than the opening of
the Tokaido Shinkansen (Tokyo to Shin-Osaka, 1964), the world's first
high-speed railway. A Shinkansen connection to Tokyo had been earnestly
desired for tourism promotion and economic development in the district. After
the Hokuriku Shinkansen Line started operation, the number of rail passengers
tripled compared with before, whereas airplane passengers decreased sharply
(by more than 30%) and the number of flights (and seats) has been falling.
There are three airports in Hokuriku, namely Komatsu (Ishikawa Pref.), Noto
(Ishikawa) and Toyama (Toyama). The maximum speed on the line is restricted to
260 km/h for noise reduction, and raising it to 300 km/h is under study in
consideration of the noise environment.
1618 T. Oyabu and J. Nakamura
The maximum speed of each Shinkansen line is indicated in Table 1; the Tohoku
Shinkansen is the fastest (320 km/h), and further speedup will be important in
future. A photograph of the Hokuriku Shinkansen is shown in Fig. 1. The
sky-blue and copper colors of the body provide a calm atmosphere.
Table 1. Maximum speed of each Shinkansen line

Shinkansen   Max speed (km/h)
Tokaido      285
Sanyo        300
Tohoku       320
Joetsu       240
Kyushu       260
Hokuriku     260
Hokkaido     260
It will take about twenty years [5]. The Linear Chuo Shinkansen will be able
to run between Tokyo and Nagoya in just 40 minutes in 2027, and the line will
be extended from Nagoya to Osaka in 2045, taking 67 minutes between Tokyo and
Osaka. A strategy for capturing this interchange of people is necessary, but
comfort, safety and security must also be considered in addition to speed.
Japan developed the world's first rapid-transit railway, and noise and
environmental assessment should also be kept in mind. The Hokuriku Shinkansen
is a pivot of regional revitalization.
About 8.15 million passengers used the Hokuriku Shinkansen in the roughly ten
months after it started operation (March 14, 2015 to January 20, 2016).
According to JR West, this was about three times the previous number, and fare
revenue increased by 33%. The change in passenger numbers for March 14 to
September 13 (six months) is indicated in Fig. 3; the data were measured
between Joetsu-Myoko and Itoigawa (the section operated by JR West). The
values for March and September are small because each covers only half a
month. The average for April to August is about 0.8 million, and the number of
passengers per day at each station is given in Table 2. The figure for
Kanazawa is less than twice that for Toyama.
The characteristics of lodger numbers over three years (2013–2015) in the
area are shown in Fig. 4. The totals for 2013 and 2014, 1.73 and 1.74 million,
are nearly equal. The shares of the areas are as follows: Yamanaka 25%,
Yamashiro 39%, Katayamazu 27% and Awazu 9%, with Yamashiro the largest. The
mean total of visitors for April to November in 2013 and 2014 was derived and
compared with the same period in 2015, after the Shinkansen opening; it
increased by 16%, and this percentage is regarded as the Shinkansen effect.
The total number of lodgers in the area in 2015 is about 2 million. Lodger
numbers in the spa areas of the Hokuriku district peak in August, the same
tendency as in Japan as a whole. However, there are also small peaks in March
and November, and the monthly patterns resemble each other depending on
tourism resources. Visitors from the Kanto area (population: 42 million)
doubled after the start of the Hokuriku Shinkansen Line, but the seasonal
tendency remained the same.
Fig. 4. Characteristic of lodger number for Kaga four major spa areas
Fig. 6. Correlation diagram of lodger number between Kaga four major spa areas and
Awara spa area
has a very high correlation. If the coefficient were small, the areas could
complement each other's seasonal food and hotel guests within the region. It
is desirable for each facility to have its own specific tourism resources, and
hence for R to be small. The lodger characteristics of the five major spa
areas of Kaga Onsen-Kyo should therefore be diversified through a deliberate
strategy.
Yuwaku spa is located in Kanazawa City (population: 0.46 million) and is
called the inner parlor of Kanazawa. Business firms often give parties there
to entertain their customers. There is the Yumeji Takehisa Kan (a museum),
which many Yumeji fans visit, and Lake Gyokusenko lies at the inner end of the
hot-spring street. By the lake stands the Himuro House (icehouse), in which
the winter snow is stored; the storing is carried out in June every year. In
the Edo period, this snow ice was offered to the Tokugawa family by the Kaga
Domain. Many citizens have the custom of eating Himuro manju (steamed buns) on
July 1 to pray for good health. The area was the setting of Hanasaku Iroha (an
anime drama) broadcast on TV in 2011, and there were many
Shinkansen Line started operation. It became 0.17 million in 2015, an increase
of 30%. The increase rate of lodger numbers for April to November 2015 was
33%. The number of visitors to the Wajima morning market increased by 30%. The
influence of the serial TV drama "Mare", produced by NHK, is also considered:
Wajima was used as a filming location for the drama, which was broadcast from
March to September 2015. There were many visitors to the Osawa area in Wajima
City, which is famous for its magaki (board fences) made of bamboo; the magaki
protect houses from the strong winter sea breeze. The lodger-number
characteristic for the Wajima spa area is presented in Fig. 10; the number
increased after the drama began broadcasting (May). The breakdown of the 30%
increase is estimated as follows: the Shinkansen effect for Wakura spa, near
Wajima, is 20%, so the Shinkansen effect for Wajima is estimated at 20% and
the TV-drama effect at 10%. It is difficult to separate the effects clearly
because of their synergy.
There were various advantages and disadvantages from the start of Hokuriku
Shinkansen operation. The effect spread over the entire Hokuriku area (three
prefectures), but there is light and shade in it [8]. The movement of people
is drawn into an area where an attractive event is held; this phenomenon is
called the "straw effect". Visitors to the area listed some points to be
improved, as follows.
The increase rates of lodger numbers for the six hot spring areas described
above are shown together in Table 3. The rates of three areas (Yuwaku, Wajima
and Unazuki) are over 30%. Yuwaku and Unazuki are near Shinkansen stations,
and Wajima was the filming location for the drama (Mare); consequently, the
rate for Wajima is expected to decrease gradually. The correlation diagram for
lodger numbers between Yuwaku and Unazuki is shown in Fig. 12. The correlation
coefficient R is 0.75, a high value. It is thought that the area offers
several different types of tourism resources for visitors, for example autumn
colors in the canyon, ski slopes and inexpensive accommodation.
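The reported R = 0.75 is a Pearson correlation coefficient over monthly lodger counts. A minimal sketch of the computation, using illustrative monthly figures rather than the actual survey data:

```python
import numpy as np

# Illustrative monthly lodger counts (thousands) for two spa areas;
# these are made-up numbers, not the survey data.
yuwaku  = np.array([10, 9, 12, 11, 12, 10, 13, 16, 12, 13, 14, 11], dtype=float)
unazuki = np.array([ 5, 5,  7,  8, 10,  9, 11, 14, 10, 12,  9,  6], dtype=float)

# Pearson correlation: covariance normalized by both standard deviations.
r = np.corrcoef(yuwaku, unazuki)[0, 1]
```

A coefficient near 1 means the two areas fill up and empty out in the same months, so they compete for the same seasonal demand rather than complementing each other.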
It is thought that visitor numbers to Wajima increased by 10% because it was
the filming location for the TV drama "Mare". There was a synergistic effect
with the start of Shinkansen operation (an increase of about 20%), and many
people visited Wajima to see the drama's filming sites. Tokyo, whose area
population is 42 million, is the starting station of the Hokuriku Shinkansen,
so there is high potential to increase visitor numbers as access becomes
easier. The increase rates of the five spa resorts (the three major Kaga spa
areas, Awazu and Awara) were almost 20%, while the one for Wakura was 21%.
Access to Wakura is relatively good because it is connected directly to
Kanazawa by JR railway and there is also a bus route, so its rate is higher
than those of the five areas mentioned above.
Coefficients of variation (cv = standard deviation / mean) for the six spa
resorts are summarized in Table 4 for 2014 and 2015. The values for Wajima and
Unazuki exceed 0.3; that is, the month-to-month fluctuations in lodger numbers
are larger than in the other areas. When the variation in the occupancy rate
of a hotel is large, maintenance costs for the facilities, including labor
costs, become higher. The coefficient for Wajima has become large.
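The coefficient of variation in Table 4 is computed per area over the twelve monthly lodger counts of a year. A small sketch with illustrative (not actual) figures:

```python
import numpy as np

# Illustrative monthly lodger counts (thousands) with an August peak;
# made-up numbers, not the survey data.
wajima = np.array([8, 7, 9, 12, 18, 15, 14, 22, 16, 13, 11, 8], dtype=float)

# Coefficient of variation: population standard deviation over the mean.
cv = wajima.std() / wajima.mean()
```

A cv above roughly 0.3, as for Wajima and Unazuki, signals strongly uneven monthly demand, which drives up facility and labor costs.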
Fig. 12. Correlation diagram for the lodger number between Yuwaku and Unazuki
Table 3. Increasing rates for six hot spring areas in Hokuriku district
Table 4. Coefficients of variation for six spa resorts in 2014 and 2015
There was one tourism spot at which visitor numbers increased remarkably: the
Asakura clan ruins at Ichijodani, in Fukui Prefecture. A TV commercial was
filmed there, and about 1.1 million people visited the spot [7]. It is
necessary to construct facilities for visitors, such as parking, rest houses
and souvenir shops. Elderly people interested in history also visit, and a
tablet-based service that shows the site as it was in former times is
provided.
Kenrokuen (one of the three great gardens of Japan) is the most-visited spot
in Ishikawa Prefecture, with 2.9 million visitors in 2015 (an increase of
45.5% over 2014). Kanazawa Castle Park, adjacent to Kenrokuen, had 2.25
million visitors (an increase of 81% over the previous year). The number of
foreign visitors to Kenrokuen was 0.292 million (up 24.4% over the previous
year): visitors from Taiwan accounted for 46.5%, Hong Kong for 10%, the USA
for 5.3% and China for 5%, while the share from Korea was small [6].
The Noto Railway Corporation operates in the Noto area, where Wajima and
Wakura (in Nanao City) are located. It is a third-sector railway company and
runs between Wakura and Anamizu (about 40 minutes by train), while JR operates
trains between Nanao and Wakura. Many group tourists have used the Noto
Railway since the Hokuriku Shinkansen line started operation. Some events were
held on the trains, for example serving famous local sweets and special
guiding by the conductor. These events and the TV drama "Mare" contributed to
the increase in visitor numbers, which reached 61 thousand in 2015 (up 73%
over the previous year). Visitors from the Kanto area numbered 42 thousand
(70%), a very large share compared with other areas [10]. The number of
visitors to the Tateyama Kurobe Alpine Route, along which a big snow wall
(named Yuki-no-Otani in Japanese) can be seen, was 0.997 million in 2015 (up
10% over the previous year). One million visitors had been set as the goal for
2015, so the result fell slightly short. In detail, the number of Japanese
visitors was 0.782 million and the number of foreign visitors 0.215 million
(up 12%). Visitors from Taiwan (132 thousand) were the most numerous, followed
by Hong Kong (25 thousand), Thailand (17 thousand) and Korea (16 thousand);
the top two places of origin are the same as for Kenrokuen [9].
5 Conclusion
It is necessary to examine visitor numbers after the Hokuriku Shinkansen line
started operation and to apply the results to future strategy in the district.
It is said that visitor numbers increase remarkably in the first year after an
opening but that the effect does not continue into the second year; this
phenomenon is called the "Shinkansen jinx". Proper, attractive events in every
season are required to break the jinx and secure repeat visitors, and
distinctive local hospitality ("omotenashi" in Japanese) is also necessary to
secure them. Some travel agents say that the traffic infrastructure and the
cost of lodging and transportation are more important for repeat visitors.
Some local residents object to a sharp increase in visitor numbers because it
disrupts the community, so it is necessary to create an area in which
inhabitants and visitors can live together; fostering "civic pride" is also
effective.
In this study, the fluctuation of visitor numbers in spa resort areas was
examined as a Shinkansen effect. Understanding this fluctuation is important
and is also necessary for shaping the social environment of the area in
future. The visitor numbers of the nine spa areas in the Hokuriku district
were investigated. As a result, it became clear that the effect spreads
throughout the whole Hokuriku district. In particular, lodger numbers in the
areas near the two Shinkansen stations increased by about 30%; the increase in
the Noto area was over 20% and that in the Kaga area under 20%. Lodger numbers
in the spa areas near the stations increased remarkably due to the start of
operation. In the areas with large increases, it is necessary to review the
hospitality (omotenashi) offered to guests and the staffing management. It is
also necessary to level the monthly fluctuation in lodger numbers by holding
events, since the numbers vary from month to month; such leveling leads to
more effective use of resources, especially room capacity and meal supply.
In future, foreign visitors (inbound tourists), who have a large economic
effect, need to be examined. This will also contribute to the globalization of
Japan; increasing inbound tourism is one of Japan's national strategies.
References
1. Bureau JT (2013) Viewpoint and practice of the management in tourism spot,
Maruzen
2. Insatsu N (2015) White paper on aged society. Cabinet Office (in Japanese)
3. Oyabu T, Nakajima M, Ohe Y, Hosono M (2013) Tourism and regional develop-
ment. Kaibun-do (in Japanese)
4. Kagaku Sha K (2015) Introduction to tourism informatics. Society for tourism
informatics in Japan (in Japanese)
5. Shinbun H (2016) The demand of Obama-Kyoto. Morning edition (in Japanese)
6. Shinbun HC (2015) Facilities user of Ishikawa prefecture becomes the most. Morn-
ing edition (in Japanese)
7. Shinbun HC (2015) Tourists exceeded one million. Morning edition (in Japanese)
8. Shinbun HC (2015) Tourists increased remarkably in each spot. Morning edition
(in Japanese)
9. Shinbun HC (2016) 0.29 million foreign visitors in Kenroku-en garden. Morning
edition (in Japanese)
10. Shinbun HC (2016) The advancement of the achievement (the aim 60,000 visitors).
Morning edition (in Japanese)
11. Suda H (2009) Tourism. Gakugei-shuppan (in Japanese)
Charging Infrastructure Allocation for Wireless
Charging Transportation System
1 Introduction
Electric vehicles (EVs) have attracted international attention as an alterna-
tive to internal combustion engine (ICE) vehicles, with the aim of reducing the
dependence on petroleum and adverse environmental effects. Many countries and
authorities have devoted much research and development to EVs, and numerous
policies have also been suggested with respect to electric transportation systems
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 137
[15]. The number of EV users has grown in line with this global change, and in
particular the market share of plug-in EVs (PEVs) has become larger. In
addition, wireless power transfer (WPT) technology for EVs has recently been
introduced to overcome the disadvantages of existing PEVs, such as short
travel distance and range anxiety.
The dynamic wireless charging electric vehicle (DWC-EV) is the most advanced
form of EV. It uses a wireless charger embedded under the road, which charges
the vehicle while it moves or stays over the charger. As charging is possible
during operation, the driver does not need to stop for a long time to recharge
the battery, and the battery size can be reduced because the charging
infrastructure supplements the electric power throughout operation. The online
electric vehicle (OLEV) is a DWC-EV developed by the Korea Advanced Institute
of Science and Technology (KAIST). In a DWC-EV system, however, the charging
infrastructure and the battery account for the largest proportion of the
initial investment cost; in Gumi City, about 80% of the total cost of setting
up the system was allocated to these two key elements [3,12]. In other words,
the investment cost is mainly determined by the battery capacity and the
allocation of the charging infrastructure, so we focus on these factors in
this study. To install a DWC-EV system at a reasonable cost, an optimal level
of charging infrastructure and an estimate of the required battery capacity
are needed.
The purpose of this study is to identify the optimal allocation of charging
infrastructure and battery capacities for the stable and economical operation
of a DWC-EV system applied to public transportation buses. We focus on
minimizing the initial investment cost, not the operational cost. We propose a
mixed integer programming (MIP) model that considers a multiple-route public
transportation system. Specific algorithms are typically used to find optimal
solutions for MIP models, but obtaining optimal solutions to complex problems
with these algorithms is difficult. In such cases a genetic algorithm (GA), a
widely used metaheuristic, can be useful for finding optimal or near-optimal
solutions within a short time. We therefore design a GA procedure to solve the
multiple-route problem, analyze the overall performance of the solutions it
suggests, and compare them with the optimal solutions obtained by solving the
MIP problem.
Several studies have been conducted on system design for EVs, some focusing
specifically on infrastructure planning and economic analysis of hybrid EVs or
PEVs [4–6,9,14]. These studies generally address the deployment of charging
stations or the determination of their capacity. Recent studies on the system
design of DWC-EVs have dealt with minimizing the total cost by determining the
battery size and the positions of power transmitters [7,8,10,11,13]. However,
these studies consider single-route cases or do not suggest practical methods
for solving large and complex problems. In fact, the exact optimization
approach is not appropriate when a DWC-EV application is planned at the city
or multi-district scale.
The remainder of this paper is organized as follows. Section 2 presents the
basic characteristics of a DWC-EV system, optimization issues, and several main
1632 M.S. Lee and Y.J. Jang
The DWC-EV system has two main parts: the vehicles and the charging
infrastructure. The vehicle part mainly consists of a pick-up device for
charging, a battery, and a motor for propulsion. The charging infrastructure
is composed of an inverter and an inductive power cable installed beneath the
road; in this paper, we refer to the charging infrastructure as the power
track. When a DWC-EV operates on the inductive cable, power is delivered to
the motor or the battery through the regulator. Figure 1 illustrates the main
components of the DWC-EV system. The wireless power transfer system is
described in detail by Ko and Jang [10] and Ahn et al. [2].
The battery and power tracks are essential elements of the DWC-EV system and
constitute most of the total initial investment. It is important to find an
appropriate balance between battery capacity and the allocation of power
tracks, since it is not economically viable to install power tracks under
entire routes or to equip vehicles with excessively large batteries. Two
extreme examples help to explain the relationship between battery and power
track. If a vehicle has a battery large enough for operation over the whole
route, it does not need to collect electric power during operation, provided
it is charged at the base station before departure; in this case the battery
constitutes most of the investment cost. If power tracks are installed under
the whole route, the vehicle can travel with a very small battery, but the
cost of the power track is then much higher than the battery cost. These
examples show that battery cost and power-track cost have a trade-off
relationship. Therefore, the battery capacity and the allocation of power
tracks should be determined carefully for the commercialization of the DWC-EV
system. The purpose of the optimization is to minimize the initial investment
cost of the DWC-EV system.
3 Optimization Modeling
We consider multiple routes for public transportation systems in our study. The
investment cost will change according to the number and the length of power
tracks, so we must identify the number, length, and location of power tracks.
Suppose that there are a total of N stations and M routes in the system. We
define the integer decision variable Xs as the length of the power track installed
at station s (s = 1, · · · , N ). Also, let ys be the binary variable for whether the
power track is installed at station s or not. The battery capacity of a vehicle in
route r (r = 1, · · · , M ) is denoted by Erc . Figure 2 describes decision variables
for a multiple-route optimization model.
3.3 Constraints
In most cases, battery manufacturers set maximum and minimum energy levels for
battery stability, so the energy level is recommended to remain within a fixed
range. The maximum and minimum levels are expressed by multiplying a constant
by the battery capacity:

E_r^u = n^u × E_r^c,
E_r^l = n^l × E_r^c,
0 ≤ n^l ≤ n^u ≤ 1.
We call this remaining energy the excess energy. The excess energy at the k-th
station of route r is denoted by the variable q_{r,k}. The energy level
E_{r,k} is written in the following form:

E_{r,k} = E_{r,k-1} − d_{r,k-1} + p × X_{I(r,k)} − q_{r,k},

where d_{r,k} denotes the energy consumption between the k-th and (k+1)-th
stations on route r, and p is the amount of energy charged per unit length of
power track.
The system must maintain the battery level within the upper and lower bounds
while in operation, and each vehicle departs with its initial energy level:

E_{r,1} = E_r^c,   r ∈ R.
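The recurrence and the bound constraints can be checked with a simple forward simulation along one route. The sketch below uses hypothetical consumption, track and bound values; charging above the upper bound is discarded, which corresponds to a positive excess energy q:

```python
def simulate_energy(E_init, d, X, p, E_u, E_l):
    """Forward-simulate the battery level along one route.

    E_init : energy level when leaving station 1
    d[k]   : consumption between station k+1 and station k+2 (0-indexed)
    X[k]   : power-track length at station k+1 (0-indexed)
    p      : energy charged per unit track length
    Charging above E_u is discarded as excess energy q.
    Returns the energy level at each station, or None if the
    lower bound E_l is violated somewhere on the route.
    """
    levels = [E_init]
    E = E_init
    for k in range(len(d)):
        if E - d[k] < E_l:            # cannot reach the next station
            return None
        E = E - d[k] + p * X[k + 1]   # consume, then charge at next station
        if E > E_u:                   # excess energy is clipped (q > 0)
            E = E_u
        levels.append(E)
    return levels

# Hypothetical single route with 4 stations
d = [10.0, 12.0, 8.0]        # consumption between consecutive stations
X = [0.0, 20.0, 10.0, 0.0]   # track length at each station
levels = simulate_energy(E_init=20.0, d=d, X=X, p=0.5, E_u=25.0, E_l=5.0)
```

Running this with the values above keeps the level within [5, 25] at every station; shrinking the second track or the initial charge makes the function return None, i.e. constraint (4) is violated.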
The length of the power track cannot exceed the length of the station area,
and installing very short power tracks is not economically viable. The
following inequality states the upper and lower bounds on the length of the
power track:

y_s × X^min ≤ X_s ≤ y_s × X^max,   s ∈ S,

where X^min and X^max are the minimum and maximum lengths of the power track.
We develop the following mathematical model for the optimization problem
using the objective function and constraints explained above.
min Σ_{r∈R} k_r × c^b × E_r^c + Σ_{s∈S} c^v × X_s + Σ_{s∈S} c^f × y_s                      (1)
s.t. E_{r,k} = E_{r,k-1} − d_{r,k-1} + p × X_{I(r,k)} − q_{r,k},   r ∈ R, k = 2, ..., N(r)  (2)
     E_{r,k} ≤ E_r^u,   r ∈ R, k = 1, ..., N(r)                                             (3)
     E_{r,k} − d_{I(r,k)} ≥ E_r^l,   r ∈ R, k = 1, ..., N(r)                                (4)
     E_{r,1} = E_r^u,   r ∈ R                                                               (5)
     y_s × X^min ≤ X_s ≤ y_s × X^max,   s ∈ S                                               (6)
     y_s ∈ {0, 1},   s ∈ S                                                                  (7)
     X_s and E_r^c are zero or positive integers,   s ∈ S.                                  (8)
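On a toy single-route instance, the model (1)–(8) can even be solved by exhaustive enumeration, which makes the battery/track cost trade-off visible. The paper solves realistic instances with CPLEX and the GA; all numbers below are hypothetical, and n^l = 0, n^u = 1 is assumed for simplicity:

```python
import itertools

# Toy instance: one route, 3 stations, hypothetical costs.
d = [10, 10]           # consumption between consecutive stations
p = 0.5                # charge per unit track length
cb, cv_, cf = 8, 1, 5  # unit battery cost, unit cable cost, fixed track cost
kr = 1                 # buses on the route
Xmin, Xmax = 5, 30     # track length bounds (constraint (6))
nu, nl = 1.0, 0.0      # energy-level bounds as fractions of capacity

best = None
for Ec in range(0, 31):                       # candidate battery capacities
    for X in itertools.product([0] + list(range(Xmin, Xmax + 1)), repeat=3):
        # Simulate: start full, clip at the upper bound, check the lower bound.
        E, ok = nu * Ec, True
        for k in range(2):
            if E - d[k] < nl * Ec:            # constraint (4) violated
                ok = False
                break
            E = min(E - d[k] + p * X[k + 1], nu * Ec)
        if not ok:
            continue
        cost = kr * cb * Ec + cv_ * sum(X) + cf * sum(x > 0 for x in X)
        if best is None or cost < best[0]:
            best = (cost, Ec, X)
```

With these costs, the optimum combines the smallest battery that covers the first leg with a single track at the intermediate station, rather than either extreme of battery-only or track-only operation.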
4 Genetic Algorithm
Several algorithms, such as branch and bound, are used to solve MIP problems.
However, these algorithms cannot suggest optimal solutions
capacity or the length of the power track can replenish the energy shortfall
d_{r,k} − E_{r,k}, and we then calculate the corresponding cost of each
additional element. If the additional battery cost is cheaper than the
additional power-track cost, we add battery capacity to the existing battery
capacity of route r instead of extending the power track; in the opposite
case, we extend the existing power track. However, if the length after the
extension exceeds the upper bound, the remainder of the additional power track
is converted to battery capacity, which is then added to the existing
capacity. When the extended length would fall below the lower bound, we choose
additional battery capacity instead of the power track.
Next, we update the energy level E_{r,k}, move to the (k+1)-th station, and
calculate the remaining energy there again. The mechanism starts from the
first station of the first route and finishes after processing the last
station of the last route. The new chromosome created by the repair mechanism
satisfies all constraints of the MIP model.
the lower bound of the energy level, because station N(r)+1 represents the
base station of route r. When the bus departs from the N(r)-th station on
route r, it needs at least d_{r,N(r)} + W_{r,N(r)+1}. If a power track of
length X_{I(r,N(r))} is installed at the N(r)-th station, W_{r,N(r)} is
max{0, d_{r,N(r)} + W_{r,N(r)+1} − p × X_{I(r,N(r))}}. The other W_{r,k} can
be derived in the same manner, so W_{r,k} is expressed as follows:

W_{r,k} = max{0, d_{r,k} + W_{r,k+1} − p × X_{I(r,k)}},   r ∈ R, k = 1, ..., N(r).   (9)

The battery capacity E_r^c is the maximum value among all bat_r(k), so that
all movements between stations are covered. Therefore, E_r^c is represented as
follows:

E_r^c = max_{k=1,...,N(r)} bat_r(k).
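Equation (9) is naturally evaluated by backward recursion from the base station, after which the minimum battery capacity follows as the largest departure requirement. A sketch with hypothetical data, assuming the bus leaves the base fully charged and taking bat_r(k) = d_{r,k} + W_{r,k+1} as the energy needed when departing station k:

```python
def min_battery_capacity(d, X, p):
    """Backward recursion of Eq. (9) for a single route.

    d[k] : consumption from station k+1 to the next stop (0-indexed),
           the last entry being the leg back to the base station.
    X[k] : power-track length at station k+1 (0-indexed).
    Returns (W, Ec): the shortfall W[k] the battery must bring into
    each station (W at the base station is 0), and the minimum battery
    capacity Ec = max_k (d[k] + W[k+1]) needed to depart any station.
    """
    n = len(d)
    W = [0.0] * (n + 1)                 # W[n] = 0 at the base station
    for k in range(n - 1, -1, -1):      # Eq. (9), evaluated backwards
        W[k] = max(0.0, d[k] + W[k + 1] - p * X[k])
    Ec = max(d[k] + W[k + 1] for k in range(n))
    return W, Ec

# Hypothetical route: 3 legs, a power track only at the second station
d = [10.0, 12.0, 8.0]
X = [0.0, 16.0, 0.0]
W, Ec = min_battery_capacity(d, X, p=0.5)
```

Lengthening the track at the second station reduces W at the earlier stations and hence the required capacity, which is exactly the battery/track trade-off the repair mechanism exploits.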
Fitness function is an indicator of the quality of the solutions in GA. The objec-
tive function of the mathematical model is the fitness function of our problem.
The purpose of the GA procedure is to find a solution that minimizes the fit-
ness value. Before starting GA procedure, we generate the initial population to
enhance the GA performance. The chromosome in the initial population is gen-
erated using the optimization results of each single-route problem. Most single
route solutions are easily obtained using the algorithm for MIP model.
Selection function finds parents for the reproduction of new chromosomes
using the fitness value. In our case, the selection function is based on the roulette
wheel selection. After that, a portion of offspring is reproduced by the crossover
function, which generates a new chromosome by combining a part of each parent.
First, the crossover function creates a random binary vector that has a length
identical to a chromosome. From the first parent, genes placed at the location
1 in the binary vector are selected. Those placed at the location 0 are selected
from the second parent. The selected genes from the two parents generate a new
chromosome.
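The selection and crossover steps described above can be sketched as follows. This is a generic illustration of roulette-wheel selection and the binary-mask (uniform) crossover, not the authors' MATLAB code; the function names are ours.

```python
import random

def roulette_select(population, fitness):
    """Roulette-wheel selection: fitness values are weights, larger is better
    (for a minimization problem, e.g. 1/cost can be used as the weight)."""
    total = sum(fitness)
    r = random.uniform(0, total)
    acc = 0.0
    for chrom, f in zip(population, fitness):
        acc += f
        if acc >= r:
            return chrom
    return population[-1]

def uniform_crossover(parent1, parent2):
    """Random binary mask of the chromosome's length: genes at mask value 1
    come from parent1, genes at mask value 0 from parent2."""
    mask = [random.randint(0, 1) for _ in parent1]
    return [g1 if m == 1 else g2
            for g1, g2, m in zip(parent1, parent2, mask)]
```

Each call produces one child; repeating the pair of steps fills the crossover fraction of the next generation.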
The mutation function prevents the solution from converging to a local optimum
and is used to explore various solutions in the search space. In our case, it
generates a random vector of the same length as the chromosome, and this random
vector is added to a selected chromosome. If the new chromosome satisfies the
bounds and constraints, it enters the next generation as a mutated chromosome.
We also use elite reproduction, which means
Charging Infrastructure Allocation 1639
Table 1. Constant values for the numerical experiments

Description                                    Values
Unit battery cost (cb)                         800
Fixed cost (cf)                                20,000
Unit cable cost (cv)                           100
Charging rate (p)                              0.5
The number of buses on each route (kr)         5
The maximum length of the power track (Xmax)   30
The minimum length of the power track (Xmin)   10
The MIP model for the DWC-EV system is solved using CPLEX 12.5 embedded in a
Java environment, and the GA, coded with the MATLAB Global Optimization Toolbox,
is used to solve the same problems. We test three GA approaches. First, we apply
the chromosome format with battery capacity without the repair mechanism; this
approach is defined as GA method 1, in which infeasible chromosomes are assigned
infinite fitness values. The second approach, GA method 2, applies the repair
mechanism to the same format. The last approach, GA method 3, uses the
chromosome format without battery capacity. All other GA procedures are applied
identically across approaches. The constant values for the numerical experiments
are summarized
1640 M.S. Lee and Y.J. Jang
Table 2. Parameter settings for the GA procedures

Description                 Values
Population size             1,000
The number of generations   1,000
Crossover fraction          0.84
Mutation fraction           0.01
Elite fraction              0.15
in Table 1. In this example, the maximum and minimum energy levels of the
battery are the battery capacity E_r^c and 0, respectively. The parameter
settings for the GA procedures are defined in Table 2; these values were
determined from a number of test experiments aimed at finding better solutions.
We select the routes in a regular sequence starting from route 1; for example,
if we select 3 routes, we choose routes 1, 2, and 3. The total costs obtained
from CPLEX and the three GA approaches are listed in Table 3 according to the
number of routes.
The GA results in Table 3 are obtained from 100 experiments for each case. The
CPLEX results are objective values calculated from optimal solutions, so no
method can achieve lower costs; they therefore serve as criteria to assess the
performance of the GA. In the one-route case, the best value of each GA method
is identical to the objective value, but the average values differ: GA method 3
gives the smallest average value of the three, while GA method 1 gives the
worst. In the two-route case, the best value of GA method 3 again equals the
objective value, while GA methods 1 and 2 do not reach it, and GA method 3 still
gives the best results among the three methods in both best and average values.
From three to six routes, no GA method reaches the optimal results, and the
solutions of GA method 3 are consistently better than the others.
GA method 2 takes second place in every case, and GA method 1 gives the worst
results.
Though methods 1 and 2 use the same chromosome format in the GA procedure, the
results differ greatly depending on whether the repair mechanism is used. In GA
method 1, many infeasible chromosomes can be created over the 1,000 generations,
so feasible chromosomes with poor fitness values relative to the optimal results
easily survive. In addition, the improvement of the chromosomes slows, as the
population generally consists of poor chromosomes. The quality of the best
chromosome in the final generation is consequently worse than in the other
methods. However, if the repair mechanism is applied to the GA, infeasible
chromosomes cannot arise, so all chromosomes have finite fitness values. Unlike
GA method 1, in which feasible chromosomes compete with many infeasible ones, GA
method 2 creates competition between comparable chromosomes, and the outstanding
chromosomes survive. Some infeasible chromosomes may also be transformed into
remarkable chromosomes by the repair mechanism. Therefore, we claim that GA
method 2 can suggest better solutions than GA method 1.
GA method 3 provides the best solutions among all GA methods because of its
chromosome format and the fact that battery capacity plays an important role in
the total cost. Figure 5 shows the total battery cost of the best chromosomes
for all GA methods. GA method 3 gives the smallest battery cost in all cases
except the one-route case. GA method 3 has an advantage in battery cost because
the battery capacities in its chromosome format are calculated as the minimum
values that avoid depletion while the bus is driving. In contrast, the other
methods allocate battery capacities randomly, regardless of the gene values in
the power track part.
The power track costs of the best chromosomes for all GA methods are shown in
Fig. 6. In contrast with the battery cost, the power track cost is largest in GA
method 3. The power track costs are identical across methods in the one-route
case, and in the four-route case GA method 3 has a slightly smaller value, but
in the other cases method 3 has the largest. From the perspective of a trade-off
relationship, these results are reasonable: little power track is allocated to
solutions with large battery capacity, and much power track to solutions with
small battery capacity. Comparing battery and power track costs, we confirm that
the difference in battery cost has a huge effect on the trend of the total cost.
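As an illustration of this trade-off only, the following sketch combines the unit costs of Table 1 (cb = 800, cf = 20,000, cv = 100) for two hypothetical allocations. The exact objective function of the MIP model is not reproduced here, and the capacity and track-length values below are invented for illustration.

```python
# Unit costs from Table 1: battery (per unit capacity), fixed cost per
# station with a power track, and cable cost per unit track length.
CB, CF, CV = 800, 20_000, 100

def infrastructure_cost(battery_capacities, track_lengths):
    """Hypothetical cost breakdown: a battery term plus, for each station
    with a positive track length, a fixed cost and a length-dependent cost."""
    battery = CB * sum(battery_capacities)
    track = sum(CF + CV * x for x in track_lengths if x > 0)
    return battery + track

# Invented allocations: large batteries with no track vs. small batteries
# with three installed tracks. More track lowers the battery term but
# raises the infrastructure term.
heavy_battery = infrastructure_cost([60, 55], [])            # 92,000
heavy_track = infrastructure_cost([20, 18], [30, 30, 20])    # 98,400
```

The two totals are close, which mirrors the observation above that the battery-cost difference drives the trend of the total cost.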
takes over 11 hours. This trend shows the characteristics of an exact algorithm
such as branch and bound, which is more sensitive to problem size than the GA.
From the numerical experiments, we identify that GA methods 1 and 2 have an
advantage in calculation time over GA method 3, but the quality of the solutions
from method 3 outperforms the other two. The GA can suggest reasonable solutions
in a timely manner but does not always guarantee optimal solutions; it can
therefore be useful for large-scale problems.
6 Conclusion
This study has introduced a new kind of dynamic wireless charging electric
vehicle (DWC-EV) system. A mixed integer programming (MIP) model, which
reflects multiple-route public transportation systems, was suggested to identify
the economic allocations of the charging infrastructure and the battery capacity
of the vehicle. A GA was introduced as a solution approach, and two approaches
for chromosome design were suggested. Numerical experiments with an exact
algorithm and a GA were conducted, and we analyzed the performance of the three
GA methods by comparing them with the optimal results from CPLEX. We confirmed
that the repair mechanism for infeasible chromosomes improves the algorithm's
performance. A chromosome design without battery capacity was found to provide
better solutions than the other GA methods, though its calculation time was
longer; its solutions are nearly optimal.
In this study, several assumptions were applied to develop the mathematical
model. The assumption that the energy consumption between two stations is
linearly proportional to the distance is particularly impractical, as it does
not reflect the uncertainties of traffic or unpredictable situations. The
assumption that the amount of charging is linearly proportional to the length of
the power track was used to simplify the model. These assumptions should be
validated quantitatively, and robust optimization can be used to account for
such uncertainties. The GA procedure should also be refined to solve the problem
efficiently and effectively. After addressing these limitations, we plan to
apply our approach to broader multiple-route systems on a city or national
scale. The GA is
a simple approach and can be useful for discovering solution properties, and the
solutions obtained can be reference points for heuristics and other algorithms,
which will be examined in future research.
The Research of Incentive Model Based on
Principal-Agent for R&D Personnel in
System-Transformed Institutes
1 Introduction
Over more than ten years of reform, some scientific research institutes have
been transformed into enterprises. In this process, the system-transformed
institutes have actively cultivated technological innovation capability with
excellent talent teams and have continually produced advanced scientific and
technological innovations, achievements inseparable from the creativity and
contributions of scientific and technological talents. R&D personnel, as the key
force of the system-transformed institutes, are irreplaceable in the development
and growth of these enterprises. Therefore, how to maximize their innovation
space and creative enthusiasm has become the most important issue for the
institutes. However, owing to historical and practical reasons, problems such as
inflexible incentive mechanisms and limited incentive means remain, and these
seriously restrict innovation ability and enthusiasm. Consequently, whether for
the development of the institutes themselves or for the realization of personnel
value, it is necessary to study the motivation of R&D personnel in
system-transformed institutes.
Although some institutes have already developed incentive policies, most of
them are one-time awards that pay attention to the number of achievements while
ignoring their quality and efficiency, which is not conducive to
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 138
1646 Y. Wei and J. Gang
long-term enthusiasm among R&D personnel. Many scholars at home and abroad have
carried out research on motivation. Ou [5] studied the incentives of science and
technology plan projects using principal-agent theory, analyzing the incentive
factors and expounding the design of a project incentive mechanism. Dikolli and
Kulp [1] studied the interaction between the performance indicators of each task
with a multitask principal-agent model. From the perspective of creating core
competence, Gao [3] focused on the human resources management problems and
disadvantages of system-transformed institutes. Guo and Zhang [4] summarized the
policies and regulations on the income distribution of personnel achievements
and proposed suggestions to improve the incentive mechanism. Ding and Chen [2]
analyzed Chinese laws and regulations on S&T personnel motivation and put
forward policy recommendations to break the barriers and strengthen the
motivation for achievement transformation.
In summary, current research mainly concentrates on the demand
characteristics, incentive factors, and other aspects of R&D personnel, and has
certain referential significance for solving the motivation problem. On this
basis, combined with the actual situation of system-transformed institutes, this
paper applies principal-agent theory to optimize the design of the incentive
mechanism, discussing an income distribution incentive model and a bonus
incentive model respectively through the balance of benefits and risks among the
R&D personnel.
2 Background
2.1 The Connotation of the Collaborative Governance
A questionnaire survey of 18 system-transformed institutes in Sichuan was
carried out. As Fig. 1 shows, R&D personnel generally hold advanced degrees and
various professional titles, indicating that knowledge talents are the backbone
of the personnel team. Regarding the salary system and incentive mechanism, we
find two main problems in current R&D personnel innovation: how to link
performance appraisal with income, and the relationship between achievements and
income. Each institute is actively exploring ways to solve these problems. About
63.6% of the institutes have made a preliminary study of how to establish
performance evaluation and incentive mechanisms, while the others have not
produced complete management advice on scientific and technological performance
appraisal and remain in a wait-and-see state on the income distribution of
achievements.
The composition of R&D personnel income includes intrinsic and extrinsic
compensation, as shown in Fig. 2. The intrinsic compensation is closely related
to incentives, containing basic salary, merit pay, research prizes, and others.
R&D personnel income depends mainly on basic salary and merit pay, which take
81.4% of the whole income, leaving little incentive income, as shown in Fig. 3.
Most system-transformed institutes have not established a performance evaluation
system distinct from that of business production personnel, and the performance
appraisal system in particular is not well developed.
Incentive Model Based on Principal-Agent for R&D Personnel 1647
Even when some institutes set up merit pay, it is not linked to scientific
research performance, so the income systems remain egalitarian with little
incentive income and, in particular, lack long-term incentive mechanisms.
Fig. 2. The composition of R&D personnel income (labels include variable pay,
research prize, welfare treatment, extrinsic compensation, position promotion,
and others)
3 Incentive Modeling
3.1 Income Distribution Incentive Model
From the investigation and analysis above, it is not difficult to find that
although R&D personnel income has four parts, the first two are relatively fixed
and can be seen as fixed income, while the other two fluctuate with the degree
of effort and can be regarded as incentive income. Assuming that the degree of
effort is proportional to the output, the output of R&D personnel can be written
as:

X = W_0 + a + ε, ε ∼ N(0, σ²). (1)
At the same degree of effort, the output is embodied at two levels: the single
value v of a scientific and technological achievement and the number q of
achievements. Because the labor of R&D personnel is knowledge labor, working
long term on a small number of achievements yields more value than focusing on
the number of achievements. For example, one patent may be perfected in a year,
or ten patents may be produced; the ten patents tend not to carry much real
value. We therefore hypothesize that v and q satisfy the following relation:

a = q v^{1/n}. (2)
So, the actual benefit the institutes obtain from R&D personnel can be expressed
as:

S = vq = v^{(n−1)/n} a. (3)
Assume that R&D personnel share the resulting income in a certain proportion
k_1; then personal income can be expressed as:

I = W_0 + k_1 v^{(n−1)/n} a. (4)
In addition, R&D personnel pay a certain cost for their effort. Using the common
quadratic model in principal-agent theory, let c_1 represent the effort cost,
where b is the coefficient relating effort to cost; the cost can be expressed
as:

c_1 = (1/2) b a². (5)
Otherwise, compared with the scientific research institutes, R&D personnel are
usually risk averse and must bear a certain risk cost. Let c_2 represent the
risk cost; using the Arrow-Pratt measure of absolute risk aversion ρ, the risk
cost function is constructed as follows:

c_2 = (1/2) ρ k_1² v^{(2n−2)/n} σ². (6)
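Equations (7)-(11) are not reproduced in this excerpt, but the optimal-state conclusions that follow rest on the standard principal-agent argument. As a hedged sketch, maximizing the agent's certainty equivalent implied by Eqs. (4)-(6) over the effort level a gives:

```latex
% Sketch only: the agent's first-order condition implied by Eqs. (4)-(6).
\begin{aligned}
\max_{a}\;\mathrm{CE}(a) &= W_0 + k_1 v^{\frac{n-1}{n}}a
  - \tfrac{1}{2}\,b\,a^{2}
  - \tfrac{1}{2}\,\rho\, k_1^{2}\, v^{\frac{2n-2}{n}}\,\sigma^{2},\\
\frac{\partial\,\mathrm{CE}}{\partial a} = k_1 v^{\frac{n-1}{n}} - b\,a = 0
&\;\Longrightarrow\; a^{*} = \frac{k_1}{b}\,v^{\frac{n-1}{n}}.
\end{aligned}
```

The optimal effort thus rises with the sharing coefficient k_1 and falls as the effort-cost coefficient b rises, which is consistent with the stated negative correlation between the effort level and the effort cost.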
It can thus be seen that, in the optimal state, the income coefficient, the
effort level, and the income of the institutes are negatively correlated with
the risk aversion, the effort cost, and the minimum required number of
achievements. This suggests that, under the income distribution incentive model,
reducing the required number of achievements can not only encourage R&D
personnel to a greater extent but also improve the actual income of the
institutes.
3.2 Bonus Incentive Model

This part discusses the incentive problem of R&D personnel under a bonus
incentive model. We hypothesize that the bonus is positively related to the
number of achievements, with the bonus coefficient k_2 set by the institute. As
before, the single value v and the number q satisfy the following relation at
the same level of effort:

a = q v^{1/n}. (12)
Because the bonus is positively related to the number of achievements, under the
same level of effort R&D personnel will tend to ignore the quality of the
results and pay more attention to their number. Let v_min be the lower limit of
the single value of achievements, with upper limit a/q when the minimum number
of results is met. Taking the cost and risk into account, the expected return
model of R&D personnel is:

max_{a,v} E[I] = W_0 + k_2 v^{−1/n} a − (1/2) b a² − (1/2) ρ k_2² v^{−2/n} σ² (14)
s.t. v_min ≤ v ≤ a/q.
So, under the bonus incentive model, the institute's problem can be shown as
follows:

max_{k_2} E[V] = v^{(n−1)/n} a − k_2 v^{−1/n} a − W_0 (16)
s.t. W_0 + k_2 v^{−1/n} a − (1/2) b a² − (1/2) ρ k_2² v^{−2/n} σ² ≥ I_min,
     (a, v) solves max_{a,v} E[I] = W_0 + k_2 v^{−1/n} a − (1/2) b a² − (1/2) ρ k_2² v^{−2/n} σ²,
     s.t. v_min ≤ v ≤ a/q.
From this we can draw the following conclusions. (1) Under the bonus incentive
model, to maximize personal interests, R&D personnel will ignore the actual
value of the research results and blindly pursue their number. As principal, the
institute can set an appropriate technology bonus by assessing the lowest value
of achievements. (2) Because the bonus amount is proportional to the lowest
value, the lower the value of achievements, the more achievements R&D personnel
create at the same level of effort, and thus the more revenue they obtain. (3)
From the perspective of the institutes, to avoid the emergence of low-value
achievements, they should set a value threshold for achievements and evaluate
them strictly.
4 Comparison Analysis
Comparing the income distribution incentive model with the bonus incentive model
(see Table 1), we find the following. (1) R&D personnel pursue different goals
under the two models: they pay more attention to the value of achievements under
the income distribution incentive model, while they pursue a great number of
achievements under the bonus incentive model. Because system-transformed
institutes take profit as their goal, they use the income distribution incentive
model more, whereas colleges, facing a large number of assessment indicators,
use the bonus incentive model more frequently. (2) In the income distribution
incentive model, the revenue sharing factor of R&D personnel is related to the
effort cost, the risk aversion coefficient, and the random output error, but has
nothing to do with the result itself. The bonus factor, in addition to the
influences above, is also related to the lowest value of scientific and
technological achievements.
5 Conclusion
Scientific incentives can attract and retain excellent researchers for
scientific research institutes, continually meeting the ultimate needs of
beneficiaries. Salary is the prerequisite for ensuring the material needs of R&D
personnel, and also the precondition and basis for scientific and technological
talents to pursue higher-level needs. Even though China's policies and
regulations put forward incentive requirements for allocation to R&D personnel,
most institutes pay only fixed income, which is appropriate only when the effort
level of personnel can be observed. Normally, it is difficult to observe the
effort level, so researchers should share part of the output; otherwise, R&D
personnel will not work hard on basic salaries alone, which is detrimental to
the development of the enterprises and of the country. From the analysis, a
focus on the value of achievements suits the system-transformed institutes,
while pursuing a great number of achievements suits the colleges. The revenue
sharing factor of R&D personnel is related to the effort cost, the risk aversion
coefficient, and the random output error, but has nothing to do with the result
itself. Therefore, the greater the effort cost, the smaller the share of output,
and the smaller the risk for those who fear hard work; conversely, the stronger
the ability, the lower the effort cost.
Acknowledgements. This research was conducted with the support of science plan
program of Chengdu in 2016 (No. 2015-RK00-00062-ZF).
References
1. Dikolli SS, Hofmann C, Kulp SL et al (2009) Interrelated performance measures,
interactive effort, and incentive weights. J Manag Acc Res 21(1):125–149
2. Ding M, Chen B (2016) Research on strengthening the S&T personnel's trans-
formation achievements motivation based on the improvement of the reward mech-
anism determined by knowledge, technology, management, skills and other factors
market. Sci Manag Res 142(3):96–100
3. Gao J (2013) The incentive mechanism of human resources management of sci-
entific research institution after ownership transformation. Municipal Eng Technol
61(2):46–54
4. Guo Y, Zhang S (2015) The incentive mechanism of the income distribution of
scientific and technological achievements of technical personnel. Sci Sci Manag S&T
13:183–190
5. Ouyang J, Yan H, Huang C et al (2008) Research on incentive of science and tech-
nology plan project based on principal agent. Sci Technol Manag Res 28(8):246–248
Research on the Influencing Factors of Public
Sense of Security and Countermeasures in Major
Emergencies–An Empirical Analysis on the
Public Sense of Security in View Of the
Explosion in Binhai New Area in Tianjin
1 Introduction
A crisis is often caused by both the "objective crisis of the event" and the
"subjective crisis of the individual", but the negative impact of the
individual's subjective crisis is often greater than that of the event itself.
Although the September 11 attacks took place 15 years ago, their impact and the
psychological fears they left behind still exert a long-term influence on
American society. In 2011, the Fukushima nuclear accident caused a "salt-buying
panic" in China, the reason for which was
1654 J. Yang and W. Xu
people's fear of the crisis. When a crisis comes, the biggest threat to the
public is the loss of their sense of security, which is a prominent feature of
public emergency psychology after a crisis. But why are some events that are
actually less harmful more influential on the public than those with fatal
consequences? Why do crises of the same type have different effects on the
public sense of security at different times and places? Such events show that
the formation of the public sense of security requires both objective conditions
and subjective factors.
Based on the existing research on the public sense of security, this paper
designs a model of the influencing factors of the public sense of security,
analyzes questionnaire data and the relationships among the data and their
characteristics using SPSS, and examines the influencing factors of the public
sense of security with structural equation modeling in AMOS. It also measures
how these factors influence the public sense of security, which contributes to
the design of an emergency psychological governance mechanism.
Foreign research has gone through two stages. The first stage examined the
public sense of security from the perspective of "crisis management". Simon's
"bounded rationality" [7] holds that, owing to individuals' limitations in
memory, thinking, computing power, and other respects, individual rationality is
only bounded rationality under constrained conditions. The second stage
introduced the theory of "risk perception", examining the public sense of
security through a "crisis communication" model. The 1984 industrial disaster at
the Union Carbide Corporation triggered panic because of the slow release of
information, which then
Research on the Influencing Factors of Public Sense of Security 1655
urged scholars to reflect. Covello finds that the risk assessment of the
authorities differs greatly from the risk perception of the public, and that the
public shows great distrust of the risk-management authorities, which weakens
their sense of security and can even lead to fear [1]. The American scholars Sun
Ding and Matthew regard the public sense of security as "the degree of anxiety
and concern of those who are becoming victims", so potential victims are
selected as the objects of their research; they also agree that the public sense
of security is a psychological phenomenon characterized by anxiety or concern.
Lois Mok, an American scholar, puts more focus on the power of control in the
sense of security, regarding it as the public's reflection on the normal order
and on weakened social control [9].
In recent years, deeper research has been conducted on the public sense of
security in crises, and the theoretical circle has shown a basic trend of
multidisciplinary research and innovation. Regarding the social and
psychological impact of critical events, Zhang Yan divided the psychological
impact of critical events on victims into three stages: the formation of risk
experience; the psychological cognition of risk; and the implementation of
mental decisions and the spread of psychological influences [10]. The most
important empirical investigation of the public sense of security was the sample
survey conducted in 1988 by the Institute of Public Security under the Ministry
of Public Security in 15 provinces and urban areas, with its results published
in the book "Do you feel safe?". The group defined the public sense of security
as the subjective feelings and evaluations of the public regarding social
security: the comprehensive psychological reflection of the infringements that
citizens' personal rights, property rights, and other lawful rights and
interests have suffered or may suffer, and of the protection they expect, in a
certain period of social life [4]. Based on prospect theory, Sun Doyong
constructed a research model to study individual perceptions of fear in crises
from three aspects: the features of the event itself, the characteristics of the
individual, and social factors [8].
Fig. 1. Model of the influencing factors of the public sense of security in
emergencies
According to the widely verified theory of "herd behavior", individuals in a
crisis, affected by others' behavioral strategies, often adopt the same
strategies. That is, instead of basing their choices on their own information,
they imitate others or over-rely on public opinion, which tends to lead to herd
behavior. Thus, we assume:
Hypothesis 4: In emergencies, the psychology and behavior of the surrounding
people are positively related to the public sense of security; the more stable
the group's emergency psychological behavior, the safer the public will feel
[8].
5 Results
5.1 Analysis of the Impact of Gender on the Public Sense of Security

The results of the t test show that the public sense of security after the
explosion was significantly affected by gender, with men's sense of security
distinctly higher than women's. This may be explained by the environments in
which men and women grew up or by their physical differences, and it indicates
that post-disaster relief needs to put more focus on rescuing women.
SPSS was used to carry out an exploratory factor analysis of the samples, with
the results shown in Table 4. Items v13 to v32 correspond to the four
influencing factors in the questionnaire: v13 to v15 to the critical events, v16
to v23 to the countermeasures of the government and the media, v24 to v27 to the
emergency response capabilities of the individual, and v28 to v32 to the
emergency psychological behavior of the group.
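As a hedged illustration of the kind of analysis behind these results, the sketch below runs a varimax-rotated factor analysis in Python with scikit-learn on synthetic data. The original study used SPSS and the real questionnaire items v13-v32; this only mirrors the shape of the procedure (18 retained items loading on 4 factors).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for the questionnaire data: 300 respondents,
# 18 items generated from 4 latent factors plus noise.
rng = np.random.default_rng(0)
n_items, n_factors = 18, 4
latent = rng.normal(size=(300, n_factors))
loading = rng.normal(size=(n_factors, n_items))
items = latent @ loading + 0.5 * rng.normal(size=(300, n_items))

# Varimax rotation, as in the SPSS analysis; the extraction method differs.
fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(items)
print(fa.components_.shape)  # loadings of each item on each factor: (4, 18)
```

Each row of `fa.components_` corresponds to one factor, and large absolute loadings identify which items cluster on it, analogous to the per-factor columns of the reported table.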
The results of the exploratory factor analysis show that the influencing factors
concentrate mainly on four aspects, which is consistent with the hypotheses of
this paper. Because the loadings of the group's emergency psychological ability
were dispersed in the exploratory analysis, after analyzing and
Element   1       2       3       4
v13       -       -       -       0.852
v14       -       -       -       0.777
v15       -       -       -       0.619
v16       0.778   -       -       -
v17       0.822   -       -       -
v18       0.775   -       -       -
v19       0.747   -       -       -
v20       0.674   -       -       -
v21       0.725   -       -       -
v22       0.687   -       -       -
v24       -       -       0.646   -
v25       -       -       0.479   -
v26       -       -       0.630   -
v27       -       -       0.781   -
v28       -       0.641   -       -
v29       -       0.508   -       -
v30       -       0.680   -       -
v32       -       0.730   -       -
Extraction method: principal component analysis. Rotation method: varimax with Kaiser normalization.
The countermeasures of the government and the countermeasures of the media were
combined into one influencing factor. At the same time, judging from the
magnitude of these factors' influence on the public sense of security, the
emergency psychological behavior of the group has the greatest impact, followed
by the emergency response capacity of the individual, the countermeasures of
the government and the media, and the severity of the critical events.
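The extraction-and-rotation step behind loadings like those in Table 4 can be sketched in a few lines; the 20×4 loading matrix below is a random placeholder (not the questionnaire data), and `varimax` is the textbook rotation algorithm rather than SPSS's exact implementation:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=50, tol=1e-6):
    """Standard varimax rotation of a p x k factor-loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R = u @ vt  # best orthogonal rotation for this iteration
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

rng = np.random.default_rng(1)
raw = rng.normal(size=(20, 4))      # placeholder for 20 items x 4 factors
rotated = varimax(raw)
# An orthogonal rotation preserves the total (Frobenius) size of the loadings
print(np.allclose(np.linalg.norm(raw), np.linalg.norm(rotated)))  # → True
```

The rotation redistributes variance among factors to make each item load strongly on one factor, which is why the rotated table shows one dominant loading per row.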
Fig. 2. Model of the influencing factors of the public sense of security. (ERCOG-
Emergency response capacity of the group, COGM-Countermeasures of the government
and the media, ERCOI-Emergency response capacity of the individual)
Table 4. Exploratory factor analysis of the structure dimension of the public sense of
security
Fig. 3. Model of how the influencing factors of the public sense of security affect
the sense of belonging. (ERCOG-Emergency response capacity of the group, COGM-
Countermeasures of the government and the media, SOB-Sense of belonging, ISOB-
Individual’s sense of belonging, ERCOI-Emergency response capacity of the individual)
First of all, we tested the structural equation model of how the influencing
factors of the public sense of security affect its second structural factor,
“security needs”, with the results shown in Fig. 4. The results showed that the
emergency response capacity of the individual was the most important factor
affecting the sense of security of life, followed by the emergency response
capability of the group and the countermeasures of the government and the
media. Although personal characteristics, knowledge of emergency response and
on-the-spot reactions vary greatly between individuals, the security need is
the most essential experience of a safe life, and one's emergency response
capacity directly affects one's sense of security. Although personal
characteristics are difficult to control, it is possible, in the face of major
emergencies, to maximize rationality by improving personal crisis knowledge and
emergency response capacity, and to minimize casualties through an effective
emergency response. Once individuals clearly understand that they should take
immediate and effective emergency measures, their sense of security will be
maximized. At the same time, the results also showed that in a crisis the
emergency psychological behavior of the surrounding people greatly affects the
individual's experience of security.
(3) How the influencing factors of the public sense of security affect the certain
sense of control
We then tested the structural equation model of how the influencing factors of
the public sense of security affect its third structural factor, the
Fig. 4. Model of how the influencing factors of the public sense of security affect
the security needs. (ERCOG-Emergency response capacity of the group, COGM-
Countermeasures of the government and the media, SOS-Sense of security, SOSOL-
Sense of security of life, ERCOI-Emergency response capacity of the individual)
“certain sense of control”, with the results shown in Fig. 5. The results
showed that the emergency psychological behavior of the group is the most
important factor affecting the public's certain sense of control, with the
countermeasures of the government and the media coming second and the emergency
response capacity of the individual third. The emergency behavior of the group
offers demonstration and guidance for personal behavior; therefore, in security
incidents, rational and effective group emergency behavior can enhance the
sense of control of the individual. In this mutual-action model, the
countermeasures of the government and the media also have considerable
influence, because obtaining adequate, accurate and timely information is the
key to enhancing the sense of control. In China, the government and the media
have the ability to grasp information about emergencies and deliver it to the
public adequately, promptly and accurately, helping people obtain the full
picture and improve their certain sense of control.
6 Countermeasures
In major emergencies, the biggest problem the public faces is the loss of their
sense of security, which, as a concept of subjective cognition, is to a large
extent affected by one's own psychology. Thus, we can influence the sense of
security by certain means and ways so as to enhance the effectiveness of
disaster aid.
1664 J. Yang and W. Xu
Fig. 5. Model of how the influencing factors of the public sense of security affect the
certain sense of control. (ERCOG-Emergency response capacity of the group, COGM-
Countermeasures of the government and the media, SOC-Sense of control, CSOC-
Certain sense of control, ERCOI-Emergency response capacity of the individual)
Given that the countermeasures of the government and the media are key
influencing factors of the public sense of security, unscientific media reports
on critical events and delays in government emergency measures will lead to a
serious decrease in the public sense of security and will also fuel public
fears. In the case of asymmetric information, the government and the media
should play their role of “delivering information in a timely manner” to
enhance the public's sense of certainty. The emergency measures taken by
government departments should be combined with the information distributed by
the media and the media's viewpoint, keeping the information timely, adequate
and accurate, so as to enhance the public's certain sense of control.
References
1. Covello VT, Winterfeldt DV, Slovic P (1987) Communicating scientific information
about health and environmental risks: problems and opportunities from a social
and behavioral perspective. In: Covello VT, Lave LB, Moghissi A, Uppuluri VRR
(eds) Uncertainties in risk assessment and risk management, pp 221–239
2. Heath RL, Nathan K (1990) Public relations’ role in risk communication: informa-
tion, rhetoric and power. Public Relat Q 68(3):15
3. Lin Y (2007) Public sense of security and the construction of index system. Soc
Sci 7:67–68
4. Ren Y, Wei J (2015) Analysis on the influencing factors of public concern in public
crisis. Stat Decis 1:67–70
5. Heath R (2004) Crisis management. Zhongxin Press, Beijing
6. Rocky B, Zheng Z, Wang B (2004) From “dependence on the media system” to
“communication infrastructure”-looking back to the development of “media system
dependency theory” and its new concept. Int Journalism 2:9–12
1 Introduction
Radical surgery for malignant tumors of the oral and maxillofacial region
involves a wide range of injury, resulting in soft and hard tissue defects that
seriously affect the appearance and function of patients. Free grafting of the
patient's own tissue flap into the mouth to repair large tissue defects has
become a common method of improving patients' quality of life [11]. Timely
detecting and actively dealing with blood-circulation crises, and thereby
promoting flap survival, is the most important nursing task after free flap
transplantation surgery. In order to improve the success rate of free tissue
flaps, reduce the occurrence of blood-circulation crises and improve the rescue
rate of flaps in crisis, our department designed a colorimetric card for free
tissue flaps in 2011 and obtained a national utility model patent (Certificate
No. 3767860). What's more, it has been
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 140
1668 X. Yang et al.
in clinical application for more than 3 years, and the feedback has been very
good. In this paper, the effect of the free tissue flap colorimetric card in
postoperative flap management for tumors of the oral and maxillofacial region
is summarized.
Parameter   Pale 1  Pale 2  Pale 3  Rosy 1  Rosy 2  Rosy 3  Violaceous 1  Violaceous 2  Violaceous 3  Atropurpureus
Hue         20      20      20      20      9       9       234           235           225           225
Saturation  240     240     240     240     231     232     80            79            75            75
Brightness  226     216     204     195     166     152     169           146           126           45
Red (R)     225     225     205     255     252     252     205           188           172           63
Green (G)   240     230     217     207     136     113     155           122           96            33
Blue (B)    225     205     179     159     100     70      162           131           125           44
Note: the parameters are computer-palette (HSB and RGB) values.
The patient opened his mouth as fully as possible to expose the flap under
natural light or white fluorescent light. First, the colorimetric cards were
fanned out, and the flap color was compared against them to estimate which
color area (3) of which card (1) it belonged to. Then, to improve accuracy, the
card (1) carrying that color area (3) was inserted into the patient's oral
cavity and the match was confirmed at close range. If a colorimetric card was
contaminated by the patient's oral secretions, blood, etc. during use, it had
to be cleaned with water and then disinfected with alcohol or chlorine
preparations, and a dedicated set of colorimetric cards had to be prepared for
each patient with a specific infection. Colorimetric cards can also be placed
in a PVC glove that is discarded after each use. Generally, each card (1) is
provided with one color area (3), and the number of cards (1) can be determined
according to
the circumstances. When higher accuracy is required, more cards (1) are used so
that the whole set contains more color areas (3), in order to identify flap
color changes more accurately.
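One way to make the card comparison objective is nearest-color matching against the card's RGB values from the parameter table; the matching rule below (Euclidean distance in RGB space) is only an illustrative assumption, not part of the patented card:

```python
import math

# RGB values taken from the colorimetric card's parameter table
CARD = {
    "Pale 1": (225, 240, 225), "Pale 2": (225, 230, 205), "Pale 3": (205, 217, 179),
    "Rosy 1": (255, 207, 159), "Rosy 2": (252, 136, 100), "Rosy 3": (252, 113, 70),
    "Violaceous 1": (205, 155, 162), "Violaceous 2": (188, 122, 131),
    "Violaceous 3": (172, 96, 125), "Atropurpureus": (63, 33, 44),
}

def closest_card_color(rgb):
    """Return the card color area nearest to a measured flap RGB value."""
    return min(CARD, key=lambda name: math.dist(CARD[name], rgb))

print(closest_card_color((250, 130, 95)))  # → "Rosy 2"
```

A dark, congested reading such as `(60, 35, 45)` would map to "Atropurpureus", the color the card associates with the most severe discoloration.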
3 Results
The 42 flaps all survived. Twelve patients developed a blood-circulation
crisis: three with arterial insufficiency and nine with venous return
obstruction. Among them, 2 cases of mild arterial insufficiency were relieved
after the correct position was maintained, the pressure around the wound was
released and local negative-pressure drainage was strengthened. Six cases of
mild venous return disorder were treated with warming lamps; 2 of these were
relieved after additional acupuncture bleeding of the flap with heparin sodium
applied. One case of severe arterial hypoperfusion and 2 cases of severe venous
return obstruction underwent vascular re-anastomosis and debridement surgery.
In 1 case of moderate-to-severe venous return obstruction, the skin flap
necrosed. All the patients recovered well after treatment, with good phonation,
chewing and swallowing (see Table 2).
4 Discussion
Large soft and hard tissue defects after radical surgery for malignant tumors
of the oral and maxillofacial region often result in severe postoperative
facial deformity and dysfunction. Although the survival rate of various free
tissue flaps has reached more than 90% [1,2,7,9], there are still a small
number of patients whose tissue flaps die for various reasons. The rate of
blood-circulation crisis is about 3.5%–9.1% [8,10,14,15] according to the
domestic and foreign literature. Once a free tissue flap dies, it deals the
patient a disastrous blow: it not only extends the length of stay and increases
medical costs, but is also likely to increase mortality. The success of flap
repair surgery depends not only on surgical techniques and equipment, but also
on good observation and nursing measures.
and report to the doctor in a timely manner. If the drainage falls below
10–15 ml/d, removal of the drain can be considered after 48–72 h.
(7) Oral care
If there was oral bleeding or heavy exudate after surgery, oral management was
necessary to keep the flap area clean. Routine oral care should be performed
three times a day. When scrubbing, cotton balls should move along the line of
the blood vessels, and attention should be paid to the intensity applied. If
necessary, oral irrigation was performed: a mixture of 3% hydrogen peroxide,
saline and compound chlorhexidine was drawn up with a 20 ml syringe and
repeatedly flushed, accompanied by negative-pressure suction. Note that suction
should be applied on the contralateral side, so as not to damage the flap and
incision.
4.2 The Use of the Free Tissue Flap Colorimetric Card in Postoperative
Flap Management for Tumors of the Oral and Maxillofacial Region
After free flap anastomosis, a common complication is blood-circulation crisis,
divided into venous crisis and arterial crisis and usually occurring after
24–48 h [4]. The main monitoring indicators are the flap's color, acupressure
response and temperature difference [5]. The change in flap color is the main
basis for recognition and treatment. In the early period of flap
reconstruction, targeted nursing monitoring can help detect vascular
complications, providing first-hand information for clinical treatment.
Weng [13] reported that flaps generally tolerate up to 6 h of ischemia without
severe necrosis, that 75% of flaps become necrotic if venous thrombosis
continues for 6 h, and that the flap necrosis rate reaches 100% beyond 8 h.
Therefore, some studies [6,13] have pointed out that timely monitoring and care
for flap ischemia is the key to the success of flap transplantation.
(1) Content of flap observation
Skin color change and capillary refill are the two most direct and rapid early
indicators of blood circulation status; they are little affected by outside
interference and can be verified repeatedly. When both skin color and capillary
refill are abnormal, or more than two monitoring indicators are abnormal, a
blood-circulation crisis is present. Venous crisis usually manifests as a red
flap color with a brisk capillary response that gradually becomes dark red and,
in the late stage, black; acupuncture yields dark red or black liquid when the
flap is swollen and hardened. Arterial crisis manifests as a pale flap that
lacks elasticity, with no blood flow on acupuncture. However, clinical
descriptions of flap color, elasticity and temperature difference are often
subjective, and the evaluation is limited by professional skill and nursing
conditions, so first-line duty personnel's judgment of flap blood flow status
may be insufficient. If the flap color changes but it is unclear whether this
is abnormal, and the observer lacks judgment experience, rescue from a flap
vascular crisis may be delayed.
We set up a free tissue flap management team, with the department director as
team head and the deputy director and head nurse as deputy heads.
lymph node dissection and thoracic duct ligation, as well as resection of the
common carotid artery epineurium and the sublingual gland, free forearm flap
repair with small-vessel anastomosis, and abdominal free skin graft repair,
complained of neck pressure and pain 17 h after surgery, and physical
examination showed only bruising at the flap suture line and tongue. The staff
on shift did not pay enough attention, so that by the handover 6 h later the
pale area of the flap had deepened in color and expanded to 1/2 of the flap or
more. When the patient was sent to the operating room for exploration,
displacement of the negative-pressure drainage tube, obvious subcutaneous
hemorrhage, pressure on the vascular anastomosis and blocked venous return were
found. However, there was no obvious improvement in the flap at 6 h after the
operation: the capillary reflex was not obvious, elasticity was poor and the
skin temperature was low. The doctor then operated to trim the skin flap and
strengthen drainage, with local iodoform gauze dressing, intravenous
anti-infective treatment and circulation-promoting therapy, and the flap
survived.
5 Summary
Blood-circulation crisis is one of the main complications of vascular
anastomosis after vascularized tissue transplantation, and finding problems in
time and taking appropriate measures is an important factor in ensuring the
survival of the flap. The free tissue flap colorimetric card describes the flap
color, capillary filling and the extent of flap color change with objective
data and text, which makes the observation of free tissue flaps simpler and
more accurate. Therefore, it is worth promoting in the clinic.
References
1. Corbitt C, Skoracki RJ et al (2014) Free flap failure in head and neck reconstruc-
tion. Head Neck 36(10):1440–1445
2. Eckardt A, Meyer A et al (2007) Reconstruction of defects in the head and neck
with free flaps: 20 years experience. Br J Oral Maxillofac Surg 45(1):11–15
3. Guo Q, Lu L, Xu M (2012) Observation of vascular crisis after transplantation of
12 cases of anterolateral flap. Chin J Nurs 47(3):215–217 (in Chinese)
4. Guo QY, Lu L et al (2012) Observation and nursing care of vascular crisis in 12
patients after anterolateral thigh flap transplantation. Chin J Nurs 47(3):215–216
(in Chinese)
5. Huang Z, Tao S, Hu L (2012) Nursing of 12 cases of complex tissue flap for large
area soft tissue defect. Chin J Nurs 47(3):213–214 (in Chinese)
6. Jallali N, Ridha H, Butler P (2005) Postoperative monitoring of free flaps in UK
plastic surgery units. Microsurgery 25(6):469–472
7. Kesting MR, Hölzle F et al (2011) Microsurgical reconstruction of the oral cavity
with free flaps from the anterolateral thigh and the radial forearm: a comparison
of perioperative data from 161 cases. Ann Surg Oncol 18(7):1988–1994
1676 X. Yang et al.
1 Introduction
In the industrial age, human capital was the driving force of economic growth,
but in the age of the knowledge economy intellectual capital has become the
power source of economic growth, changing the rules of business and national
competitiveness. Nowadays, intangibles are almost universally considered the
main value drivers for companies [12].
China, as a rapidly developing country, has strongly encouraged entrepreneurial
activities, especially in high-tech fields. Since intellectual capital seems
more important than any other factor in high-tech fields, so much so that
intellectual capital management has become the domain of the so-called Chief
Knowledge Officer [1], there is a great need to study the impact of
intellectual capital in Chinese high-tech enterprises. The results will not
only aid in managing intangibles but also accelerate the growth of high-tech
companies.
2 Literature Review
2.1 Intellectual Capital and Sub-Components of Intellectual Capital
The term “intellectual capital” was first proposed by Senior in 1836 as a
synonym for human capital, which he believed to be a combination of the
knowledge and skills of human beings. John Galbraith extended the concept,
noting that intellectual capital is not just knowledge or pure intellect but
also the process of making effective use of knowledge. Guenther and Beyer [11]
argued that intellectual capital represents the sum of all the intangible
assets of a company, where intangible assets are defined as the company's
non-material and non-financial resources, including technology, customer
information, brand name, reputation and corporate culture. The intellectual
capital of an enterprise can improve enterprise performance through the dynamic
interaction of IC components [4].
IC is also considered the sum of the following three categories: (1) human
capital, which refers to the people in an organization and their cumulative
tacit knowledge and skills; (2) structural capital, which consists of the
supportive infrastructure, processes and databases of the corporation that
enable human capital to function [8]; and (3) relational capital.
Another definition is that IC comprises derived insights about a company's
value and future earnings capability, based on human capital as well as
organizational, structural and relational capital [5]. Some scholars hold the
view that intellectual capital can be classified into human capital and
structural capital, a classification accepted by the authors of this article.
the relationship between corporate performance and human capital, with more
attention to human capital. They also mentioned that the performance of
enterprises should be considered in combination with the characteristics and
the degree of competition of the specific industry. Zhu et al. [13] assumed
that human capital is the key factor determining enterprise performance.
(1) The difference between market value and book value, which assumes that the
intellectual capital of an enterprise equals the difference between its
market value and its book value.
(2) Tobin's q, the ratio between a physical asset's market value and its
replacement value, proposed in 1968 by James Tobin and William
Brainard [3]. If the replacement cost of the company's assets is lower than
the company's market value, the company's investments have earned excess
profits, which are generated from IC [10].
(3) VAIC. Ante Pulic proposed the Value Added Intellectual Coefficient (VAIC)
in 1998 to measure the intellectual capital value of an enterprise.
According to this method, corporate capital consists of physical capital
and intellectual capital, and corporate performance depends on the
efficiency of the value added by a firm from its physical and intellectual
capital resources. The method takes the enterprise's intellectual
value-added efficiency as representing its performance.
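Pulic's VAIC components follow simple ratios (HCE = VA/HC, SCE = (VA − HC)/VA, CEE = VA/CE, with VAIC their sum); a minimal sketch with hypothetical figures:

```python
def vaic(value_added, human_capital, capital_employed):
    """Pulic's VAIC and its components (standard formulas)."""
    hce = value_added / human_capital                   # human capital efficiency
    sce = (value_added - human_capital) / value_added   # structural capital efficiency
    cee = value_added / capital_employed                # capital employed efficiency
    return hce, sce, cee, hce + sce + cee

# Hypothetical figures (million CNY), for illustration only
hce, sce, cee, total = vaic(120.0, 40.0, 300.0)
print(f"HCE={hce:.2f} SCE={sce:.2f} CEE={cee:.2f} VAIC={total:.2f}")
```

These are the three efficiency scores (HCE, SCE, CEE) used later as explanatory variables in the regressions.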
The sample of this article comes from the financial statements of 80 companies
listed on the GEM from 2013 to 2015. Research samples in this paper are all
certified as high-tech by related government agency and listed on China’s second-
board market in Shenzhen stock exchange from 2013 to 2015. In order to ensure
the validity of the data, the criteria for screening the original samples are as
follows:
(1) Selecting the samples of the listed GEM companies, which are in continuous
operation during this period to ensure the necessary continuity of the sample
set.
(2) Financial-services listed companies are eliminated because of their large
total share capital, which may bias the regression results.
(3) Listed companies whose annual financial statement data are incomplete, or
which did not operate continuously in the time-span 2013–2015, are
excluded.
(4) Listed companies with abnormal operations (e.g. a debt ratio surpassing
100%) or a deteriorating financial condition are eliminated to ensure the
validity of the sample data.
The Impact of Intellectual Capital on High-Tech Enterprise Performance 1681
(5) Listed companies that changed their main business because of mergers and
acquisitions or other acts are not included in the sample set. The final
sample set includes 80 high-tech enterprises. Excel and EViews 9.0 were
used to process the sample data.
ROA expresses, as a percentage, how profitable a company's assets are in
generating revenue. In the following analysis we use ROA to express the
performance of a company (Table 1).
(2) Selection of independent variables
(See Table 2)
level of the t-test. At the same time, there are positive correlations between
HCE, SCE and CEE, all of which passed the t-test at the 1% confidence level.
This result shows that the influencing factors of business performance are not
independent of each other but interact with one another.
(3) Multiple regression
The regression results are not very ideal. Only the intercept term in the
regression equations passed the t-test for linear regression (here at the 5%
level); the CEE, HCE and SCE coefficients did not, so these coefficients are
not very reliable. However, the F-statistics of the models, 6.107650, 9.607723
and 5.317085, all passed the F-test for linear regression (at the 5% level),
which indicates that the explained variable in each model can be explained by
the explanatory variables (Table 6).
Besides, the coefficients of CEE are −0.0602 in 2013 and −0.237977 in 2014,
against 0.040470 in 2015, which suggests that CEE tended to have a negative
effect on ROA. The coefficients of the other explanatory variables on ROA are
positive, which shows that human capital and structural capital drive the
performance of high-tech enterprises. The DW statistics of 1.626685, 2.100722
and 1.851820 over the time-span are all close to 2, indicating that the
regression equations do not suffer from strong autocorrelation.
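The yearly cross-sectional regression ROA = b0 + b1·CEE + b2·HCE + b3·SCE can be sketched with ordinary least squares; the data below are synthetic, and the coefficients are illustrative assumptions rather than the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 80  # one cross-section of 80 sample firms

# Synthetic efficiency scores -- illustrative assumptions, not the paper's data
cee = rng.normal(0.30, 0.10, n)
hce = rng.normal(2.50, 0.80, n)
sce = rng.normal(0.60, 0.10, n)
roa = 0.01 - 0.05 * cee + 0.02 * hce + 0.03 * sce + rng.normal(0, 0.01, n)

# OLS: ROA = b0 + b1*CEE + b2*HCE + b3*SCE
X = np.column_stack([np.ones(n), cee, hce, sce])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
print(np.round(beta, 3))
```

With 80 observations and modest noise, the fitted coefficients recover the assumed signs, mirroring the kind of yearly cross-section the paper estimates in EViews.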
1686 H. Mei and K. Wang
5 Conclusion
5.1 Limitations
Some results have been achieved, but the research has limitations. The first
concerns the fact that the sample set is limited to listed high-tech companies,
because their financial data are available while those of non-listed high-tech
companies are not. Accordingly, further research linking IC and non-listed
high-tech enterprises could develop this field. Another limitation is that some
indicators used substitutes for the basic data, owing to the limited
availability of fundamental data in each company's financial report, so a more
in-depth, exhaustive analysis remains to be carried out. A further limitation
is that the analysis considers only the time-span 2013–2015, without accounting
for environmental differences.
References
1. Bontis N (2001) CKO wanted-evangelical skills necessary: a review of the chief
knowledge officer position. Knowl Process Manage 8(1):29–38
2. Bontis N, Keow W, Richardson S (2002) Intellectual capital and business
performance in Malaysian industries. J Intell Capital 1(1):85–100
3. Brainard WC, Tobin J (1968) Pitfalls in financial model building. Cowles Found
Discuss Pap 58(2):99–122
1 Introduction
Electric vehicles (EVs) are considered a significant means of reducing
dependency on crude oil and minimizing transportation-related carbon dioxide
emissions along with other pollutants [7]. As the energy provider for EVs, the
electric vehicle charging station (EVCS) is the foundation of electric vehicle
industry development [4]. Efficient, convenient and economical EVCSs promote
both consumers' willingness to adopt EVs and the development of the industry.
As the preliminary work of EVCS construction, EVCS site selection is quite
important in the whole life cycle and has significant impacts on the service
quality and operational efficiency of the EVCS. Therefore, it is necessary to
employ proper methods to determine the optimal EVCS site.
Li et al. [6] considered dynamic origin-destination trip satisfaction and
public electric vehicle (EV) charging network expansion to develop a
multi-period optimization model for deploying public EV charging stations on a
network.
Site Selection of Public Fast Electric Vehicle Charging Station 1689
2 Preliminaries
In this section, we briefly review the basic concepts of IFSs.
where μA (x) denotes the membership degree of the element x to the set A.
A = {⟨x, μ_A(x), ν_A(x)⟩ | x ∈ X},  μ_A(x) ∈ [0, 1],  ν_A(x) ∈ [0, 1],  (2)
where μ_A(x) and ν_A(x) respectively represent the membership degree and the
non-membership degree of the element x to the set A, with the condition
0 ≤ μ_A(x) + ν_A(x) ≤ 1.
Letting π_A(x) = 1 − μ_A(x) − ν_A(x), π_A(x) is called the indeterminacy degree
or hesitancy degree of x to the set A. The bigger π_A(x) is, the greater the
indeterminacy of the knowledge about x [15]. In particular, when
μ_A(x) + ν_A(x) = 1 for all x ∈ X, the intuitionistic fuzzy set A reduces to an
ordinary fuzzy set [9].
α + β = (μα + μβ − μα μβ , να νβ ); (4)
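The pair (μ, ν) with hesitancy π = 1 − μ − ν, together with the addition rule of Eq. (4), can be captured in a small class; a sketch (the class name `IFN` is our own, not the paper's notation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFN:
    """An intuitionistic fuzzy number: membership mu and non-membership nu."""
    mu: float
    nu: float

    def __post_init__(self):
        # Constraint from Eq. (2): mu, nu in [0, 1] and mu + nu <= 1
        assert 0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0
        assert self.mu + self.nu <= 1.0

    @property
    def pi(self):
        """Indeterminacy (hesitancy) degree."""
        return 1.0 - self.mu - self.nu

    def __add__(self, other):
        # Addition rule of Eq. (4)
        return IFN(self.mu + other.mu - self.mu * other.mu, self.nu * other.nu)

a, b = IFN(0.6, 0.3), IFN(0.5, 0.4)
print(round(a.pi, 6), round((a + b).mu, 6))  # → 0.1 0.8
```

The addition rule always yields a valid pair, since the new membership is 1 − (1 − μ_α)(1 − μ_β) and ν_α ν_β ≤ (1 − μ_α)(1 − μ_β).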
In general, the larger the value of Proj_β(α), the more closely α approaches β.
Definition 6. [15] Let A = (α_ij)_{m×n} and B = (β_ij)_{m×n} be two
intuitionistic fuzzy matrices. Then the projection of A on B is defined as
Proj_B(A) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(\mu_{ij}^{\alpha}\mu_{ij}^{\beta}+\nu_{ij}^{\alpha}\nu_{ij}^{\beta}+\pi_{ij}^{\alpha}\pi_{ij}^{\beta}\bigr)}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl[(\mu_{ij}^{\beta})^{2}+(\nu_{ij}^{\beta})^{2}+(\pi_{ij}^{\beta})^{2}\bigr]}}.  (7)
[Figure: decision framework: formulate the alternatives (A1, A2, …, Am);
select the criteria (C1, C2, …, Cn); identify the decision makers
(D1, D2, …, Dt); construct the decision makers' evaluation matrices.]
TOPSIS, which was first developed by Hwang and Yoon [5], is a technique for
establishing the order of preference by its similarity to the ideal point [13].
Inspired by [13,16], we present an extended TOPSIS method for the decision
makers’ weight determination under an intuitionistic fuzzy environment.
First, we construct the positive ideal decision (PID) matrix D+ based on the
average of all decision matrices, which is treated as a cursory group consensus
of decision-making group. The equation is as follows:
D^{+} = \begin{pmatrix} r_{11}^{+} & r_{12}^{+} & \cdots & r_{1n}^{+} \\ r_{21}^{+} & r_{22}^{+} & \cdots & r_{2n}^{+} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1}^{+} & r_{m2}^{+} & \cdots & r_{mn}^{+} \end{pmatrix},  (9)
where
r_{ij}^{+} = \frac{1}{t}\sum_{k=1}^{t} r_{ij}^{k}.  (10)
of the worst decision matrix is the highest among all decision matrices. The
equation is as follows:
D^{-} = \begin{pmatrix} r_{11}^{-} & r_{12}^{-} & \cdots & r_{1n}^{-} \\ r_{21}^{-} & r_{22}^{-} & \cdots & r_{2n}^{-} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1}^{-} & r_{m2}^{-} & \cdots & r_{mn}^{-} \end{pmatrix},  (11)
where
r_{ij}^{-} = \bigl(\min_{k}\mu_{ij}^{k}, \max_{k}\nu_{ij}^{k}\bigr), \quad k = 1, 2, \ldots, t.  (12)
Since the PID and NID have been determined, we consider the projection of each
decision matrix on the ideal decision matrices. According to Eq. (7), the
projection of D^k on D^+ can be calculated as follows:
Proj_{D^{+}}(D^{k}) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(\mu_{ij}^{D^{k}}\mu_{ij}^{D^{+}}+\nu_{ij}^{D^{k}}\nu_{ij}^{D^{+}}+\pi_{ij}^{D^{k}}\pi_{ij}^{D^{+}}\bigr)}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl[(\mu_{ij}^{D^{+}})^{2}+(\nu_{ij}^{D^{+}})^{2}+(\pi_{ij}^{D^{+}})^{2}\bigr]}},  (13)
and the projection Proj_{D^{-}}(D^{k}) of D^k on D^- (Eq. (14)) is defined
analogously.
After that, the relative closeness of each individual decision matrix is
determined as follows:
RC^{k} = \frac{Proj_{D^{-}}(D^{k})}{Proj_{D^{-}}(D^{k}) + Proj_{D^{+}}(D^{k})}.  (15)
The relative closeness is used to determine the rank order of all decision
makers: the closer a decision matrix is to the best decision result, the larger
the value of RC^{k}. Therefore, based on the relative closeness, the weight of
each decision maker is obtained as follows:
\omega_{k} = \frac{RC^{k}}{\sum_{k=1}^{t} RC^{k}}.  (16)
D = (D^{1}, D^{2}, \cdots, D^{t}) \cdot (\omega_{1}, \omega_{2}, \cdots, \omega_{t})^{T} = \begin{pmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ r_{21} & r_{22} & \cdots & r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \cdots & r_{mn} \end{pmatrix},  (17)
A^{+} = (r_{11}^{+}, r_{12}^{+}, \cdots, r_{1n}^{+}),  (19)
where
r_{1j}^{+} = \bigl(\max_{i}\mu_{ij}, \min_{i}\nu_{ij}\bigr), \quad i = 1, 2, \ldots, m.  (20)
The negative ideal alternative (NIA) should have the maximal separation from
the PIA A^{+}. It is natural to consider the following:
A^{-} = (r_{11}^{-}, r_{12}^{-}, \cdots, r_{1n}^{-}),  (21)
where
r_{1j}^{-} = \bigl(\min_{i}\mu_{ij}, \max_{i}\nu_{ij}\bigr).  (22)
Then, the projections of each alternative on the ideal alternatives are
calculated as follows:
Proj_{A^{+}}(A_{i}) = \frac{\sum_{j=1}^{n}\bigl(\mu_{ij}^{A_{i}}\mu_{ij}^{A^{+}}+\nu_{ij}^{A_{i}}\nu_{ij}^{A^{+}}+\pi_{ij}^{A_{i}}\pi_{ij}^{A^{+}}\bigr)}{\sqrt{\sum_{j=1}^{n}\bigl[(\mu_{ij}^{A^{+}})^{2}+(\nu_{ij}^{A^{+}})^{2}+(\pi_{ij}^{A^{+}})^{2}\bigr]}},  (23)
Proj_{A^{-}}(A_{i}) = \frac{\sum_{j=1}^{n}\bigl(\mu_{ij}^{A_{i}}\mu_{ij}^{A^{-}}+\nu_{ij}^{A_{i}}\nu_{ij}^{A^{-}}+\pi_{ij}^{A_{i}}\pi_{ij}^{A^{-}}\bigr)}{\sqrt{\sum_{j=1}^{n}\bigl[(\mu_{ij}^{A^{-}})^{2}+(\nu_{ij}^{A^{-}})^{2}+(\pi_{ij}^{A^{-}})^{2}\bigr]}}.  (24)
Finally, the relative closeness is defined to determine the rank order of all
alternatives:
RV_{i} = \frac{Proj_{A^{-}}(A_{i})}{Proj_{A^{+}}(A_{i}) + Proj_{A^{-}}(A_{i})}.  (25)
Step 1. We transform the decision matrices into intuitionistic fuzzy sets, and
then determine the PID and NID, respectively, based on Eqs. (9)–(12).
The projections of each decision matrix on the PID and NID are
calculated using Eqs. (13) and (14). Then, according to Eqs. (15) and
(16), the weights of the decision makers are calculated as
ω = (0.3351, 0.3317, 0.3332).
Step 2. Based on the weights for decision makers and Eq. (17), the aggregated
decision matrix is calculated. Besides, the PIA and NIA are determined,
respectively, using Eqs. (19)–(22).
Step 3. The projection of each alternative on PIA and NIA are calculated based
on Eqs. (23), (24). Then the ranking value for each alternative is cal-
culated as RV = (0.9583, 0.9445, 0.9344, 0.8973). Therefore, the ranking
order for alternatives is A1 > A2 > A3 > A4 .
As a result, site A1 is the best site for public fast electric vehicle charging
station.
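The full projection-and-closeness procedure of Eqs. (19)–(25) can be sketched in a few lines of code. The decision matrix below is a made-up illustration, not the paper's site-selection data, and Eq. (25) is implemented exactly as printed, with the projection on A− in the numerator.

```python
import numpy as np

def rank_values(mu, nu):
    """Compute the ranking value RV_i of Eq. (25) for each alternative,
    given the aggregated membership (mu) and non-membership (nu) matrices."""
    pi = 1.0 - mu - nu                               # hesitancy degrees
    # PIA, Eqs. (19)-(20): column-wise max membership, min non-membership
    mu_p, nu_p = mu.max(axis=0), nu.min(axis=0)
    # NIA, Eqs. (21)-(22): column-wise min membership, max non-membership
    mu_n, nu_n = mu.min(axis=0), nu.max(axis=0)
    pi_p, pi_n = 1.0 - mu_p - nu_p, 1.0 - mu_n - nu_n

    def proj(i, mu_r, nu_r, pi_r):
        # projection of alternative i on an ideal alternative, Eqs. (23)-(24)
        num = np.sum(mu[i] * mu_r + nu[i] * nu_r + pi[i] * pi_r)
        return num / np.sqrt(np.sum(mu_r**2 + nu_r**2 + pi_r**2))

    return np.array([proj(i, mu_n, nu_n, pi_n)
                     / (proj(i, mu_p, nu_p, pi_p) + proj(i, mu_n, nu_n, pi_n))
                     for i in range(mu.shape[0])])

# illustrative 3-alternative x 2-criterion aggregated matrix
mu = np.array([[0.7, 0.6], [0.5, 0.5], [0.4, 0.3]])
nu = np.array([[0.2, 0.3], [0.4, 0.4], [0.5, 0.6]])
rv = rank_values(mu, nu)
```

Step 3 then orders the alternatives by their RV values.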
5 Conclusion
If the Medical Expenses Do Effect the Rural
Residents’ Consume
1 Introduction
As the uncertainty of global economic recovery increases, the major economic growth engines of investment and export face the constraints of environmental protection and trade barriers. In order to keep economic growth at a reasonable rate, China must pay more attention to the role of consumption in stimulating the economy.
Figure 1 illustrates the changes in the consumption rate and the investment rate in the gross domestic product by the expenditure approach between 1978 and 2013, where S represents the consumption rate and I represents the investment rate.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_143
1698 S. Jiang and Y. Ji
[Fig. 1. Consumption rate (S) and investment rate (I), 1978–2013]
It is clear that from 1978 to 2013 China's consumption rate shows a general downward trend; the decline became especially steep after 2000 and did not begin to stabilize, and then rise slightly, until 2010. In sharp contrast, China's economy has been in a fast-growth channel since 1978, which indicates that China's consumption rate has not risen along with China's rapid economic growth.
[Fig. 2. Final consumption rates in GDP: China, India, Brazil, Russia, USA, Japan, Germany]
From the point of view of international comparison, we reach the same conclusion. Figure 2 shows the trend of the final consumption rate in GDP among the main countries. As can be seen from Fig. 2, China's consumption rate is not only lower than that of developed economies such as the European countries and the United States, but also significantly lower than that of emerging economies such as India and Brazil. The consumption rates of the developed countries of Europe and the United States are generally high and have been stable over the long term. The US consumption rate has remained high and smooth (basically above 64%), with a slight upward trend; Germany and Japan are similar in that, although their final consumption rates are not as high as that of the United States, they have also remained at a high level (basically around 55%) with a more stable trend. Among the emerging market countries, by contrast, the residents' final consumption rate generally fluctuates widely, and this includes China. Whether in China, India, Brazil or Russia, the residents' consumption rate experienced considerable fluctuation over 1990–2013. It is obvious that the final consumption rate of Chinese residents is still very low even compared with that of other emerging market countries.
Looking at household consumption, which plays a decisive role in total consumption, we find that rural residents' consumption as a share of household consumption has not increased along with that of urban residents, which makes rural consumption the weak link in China's strategy of expanding domestic demand.
[Fig. 3. Shares of urban ('urbcomsu') and rural ('ruralconsu') residents' consumption in China's household consumption, 1978–2013]
Figure 3 shows the proportions of urban residents' consumption and rural residents' consumption in China's household consumption during 1978–2013, where 'urbcomsu' represents the share of urban residents' consumption and 'ruralconsu' the share of rural residents' consumption. As can be seen from the figure, before 1992 the consumption of rural residents held the dominant position in China's household consumption, but in the early 1990s the rural share began to decline sharply and urban residents came to dominate consumer demand. The consumption gap between urban and rural residents then widened year by year, and this widening trend did not slow down until 2010.
Generally, there are many factors that constrain China's rural consumption, but the imperfect medical security system and the uncertainty of rural residents' medical expenditure cannot be ignored. Since medical consumption is characterized by uncertainty and urgency, its impact on rural residents' overall consumption may be significant. This paper uses provincial-level panel data from 2000–2013 and a fixed-effects model to test whether the uncertain medical expenses of rural residents reduce their total consumption. Through this empirical research we expect to identify the relationship between rural residents' medical expenses and their total consumption, and on that basis we may provide some reference for policies on rural residents' consumption.
2 Literature Review
Scholars have studied the reasons for the downturn in rural consumption and how to stimulate it from different angles. Wang's research [14] indicated that rural residents' uncertain expectations are one important reason restricting consumption in rural China; Tan [11] holds a similar view. Qu [13] found that rural residents with pessimistic expectations about future expenditure place greater emphasis on saving rather than spending, and that, to start rural consumption, China's policy makers should therefore improve the rural consumption environment and reduce the burden on farmers. Sun and Zhu's research [9] also emphasized the importance of optimizing the rural consumption environment to improve rural residents' consumption. Cai [1] pointed out that, in addition to income levels, the construction of the rural social security system is also an important factor affecting rural consumption. Clearly, as an important part of the social security system, the medical security system is closely related to medical expenditure, which indicates that rural residents' medical expenditure may play an important role in their consumption decisions.
Other studies have focused on the consumption environment and rural residents' expectations. Liu [7] found that insufficient financial support limits the development of rural residents' consumption. Yang and Chen's research [16], from the perspective of rural public goods supply, proposed that increasing the supply of rural public goods can reduce the uncertainty of rural residents' expenditure expectations, which may benefit rural consumption. Zhou and Yang analyzed consumption in both urban and rural areas and pointed out that the precautionary saving motive may affect rural residents just as it does urban residents.
Furthermore, some scholars have studied rural medical expenditure; the existing literature mainly focuses on two aspects. The first is the factors influencing rural residents' medical expenditure. Tan, Zhang et al. [10] explored the factors influencing rural households' medical consumption spending and its demand elasticity based on a QUAIDS model and cross-sectional data, and Huang and Liu [6] provided a similar study, which states that income level, government health investment and other factors do affect the medical consumption of rural residents. Zhao [17] used 1993–2012 panel data to analyze regional differences in rural residents' health care consumption, finding that the impacts of income, education, economic development and other factors on rural residents' medical spending show significant regional differences. The second is the function of the rural medical security system. Bai and Li et al. [3], exploiting a quasi-natural experiment, found that insurance coverage increases non-health-care consumption by more than 5% and that the insurance effect exists even for households with medical spending. Ding, Ying et al. [2] also stated that the propensity to consume of rural households is universally low in China and that medical insurance did significantly increase rural durables consumption, based on an analysis of China Health and Nutrition Survey panel data from 1991 to 2009. Tan and Qin [12] used an IV-TOBIT model to test the crowding-out effect of Chinese households' medical expenses and concluded that medical expenses do reduce households' spending on food and clothing; their research also used the China Health and Nutrition Survey data (2013). Although these studies of rural medical expenditure focus on different aspects, they provide us with methodological references.
In general, although many studies of expanding rural residents' consumption directly or indirectly mention factors such as rural residents' expectations, the consumption environment, and the precautionary saving motive, few treat rural residents' medical expenses as an independent factor affecting their consumption behavior. For this reason, we develop an empirical model that includes both residents' medical expenses and a set of factors that may affect them; the model also contains standard economic variables. Our study adds to the current literature by utilizing pooled time-series and provincial cross-sectional data over a longer period than household survey data cover. In addition, by utilizing the medical expenses variable, we examine whether the medical expenses of rural residents do affect rural consumption.
The empirical model is specified as

$$Y_{it} = \beta_{0} + \beta_{1}\, medratio_{it} + \beta_{2}\, goeratio_{it} + \beta_{3}\, loandep_{it} + \gamma' Z_{it} + P_{i} + D_{t} + \varepsilon_{it},$$

where

Y_it : rural residents' consumption as a share of provincial domestic product;
medratio_it : the share of rural household medical expense in rural household consumption per capita;
goeratio_it : local government expenditure on health as a share of total local government expenditure;
loandep_it : the share of loans plus deposits in provincial domestic product;
P_i, D_t : provincial and time dummies;
Z_it : vector of economic system variables, comprising:
growth_it : the annual growth in provincial domestic product;
dinco_it : provincial net income of rural residents per capita;
depratio_it : dependency ratio;
eduratio_it : the share of rural household education expense in rural household consumption per capita.
We include the education expense variable (eduratio_it) because the literature has pointed out that education expenses are an important factor affecting residents' saving motives; adopting the education expenses variable also allows a comparison with the medical expenses variable.
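The provincial fixed effects P_i in the model above can be estimated with the within transformation: demean every variable by province, then run OLS on the demeaned data. The sketch below is a minimal illustration on synthetic data; the regressors merely stand in for medratio, goeratio and loandep, and nothing here is the paper's data set.

```python
import numpy as np

rng = np.random.default_rng(0)
n_prov, n_years = 31, 14                    # 31 provinces, years 2000-2013
prov = np.repeat(np.arange(n_prov), n_years)

# synthetic regressors standing in for medratio, goeratio, loandep
X = rng.normal(size=(n_prov * n_years, 3))
alpha = rng.normal(size=n_prov)             # unobserved province effects P_i
beta = np.array([-0.5, 0.3, -0.1])          # illustrative "true" coefficients
y = X @ beta + alpha[prov] + 0.01 * rng.normal(size=len(prov))

def within(v, groups):
    """Within transformation: subtract each group's mean from its rows."""
    means = np.zeros((groups.max() + 1,) + v.shape[1:])
    np.add.at(means, groups, v)
    counts = np.bincount(groups).reshape((-1,) + (1,) * (v.ndim - 1))
    return v - (means / counts)[groups]

# OLS on demeaned data is the provincial fixed-effects estimator
Xw = within(X, prov)
yw = within(y[:, None], prov)[:, 0]
beta_hat = np.linalg.lstsq(Xw, yw, rcond=None)[0]
```

The demeaning removes anything constant within a province (here, alpha), so beta_hat recovers the slope coefficients without estimating 31 intercepts explicitly.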
The baseline model (Model 1) is estimated with ordinary least squares on the full data set from 2000–2013. The share of rural household medical expense variable (medratio_it) is significant and negative, which meets our hypothesis that an increase in the share of medical expenses in rural household consumption leads rural residents to reduce consumption. The government expenditure on health variable (rugeoratio_it) is significant and positive, which also meets our hypothesis that government health expenditure is positively correlated with the dependent variable.
Thereafter we add cross-provincial fixed effects to control for omitted variables that vary across provinces but do not change over time. Vast differences in economic, industrial and fiscal structures across provinces are likely to influence household consumption, so the intercepts generated for each province should absorb the influences of these omitted variables that differ across provinces but are constant over time. Model 2 is also estimated with ordinary least squares, but with the provincial fixed effects reported in Table 2. The F-statistic of the test on the significance of the provincial fixed effects shows that there are significant variations among provinces (F(30, 396) = 11.18, Prob > F = 0.000). In Model 2 the government expenditure on health variable (rugeoratio_it) is significant but negative, suggesting that increasing government expenditure on health would not stimulate rural residents' consumption, while the share of rural household medical expense variable (medratio_it) remains significant and negative.
Table 3 presents a series of robustness checks on Model 2, covering heteroskedasticity, cross-section correlation, and serial correlation. As the test results in Table 3 show, the Model 2 estimates are not robust, so we re-estimate Model 2 using Daniel Hoechle's [5] method (Driscoll–Kraay standard errors) to obtain stronger results in Model 3. Compared with Model 2, the estimates in Model 3 are fairly similar; only the significance levels of the coefficients change, and our main independent variables remain significant.
Based on Model 3, we then add time fixed effects to control for variables that are constant across provinces but evolve over time, such as the introduction of the new healthcare regime "Xinnonghe" (the New Cooperative Medical Scheme) and the deepening of nationwide economic reforms in rural China. Model 4 replicates Model 3 except that its estimates include time fixed effects as well as provincial fixed effects. We include dummy variables for several time periods to capture the rollout of the new healthcare regime. As the New Cooperative Medical Scheme started its pilot in 2003 and achieved full coverage in 2009, and considering the lag of policy, our first time dummy runs from 2000 to 2004, the next covers 2005 to 2009, and the last covers 2010 to 2013. In Model 4 at least one of the newly generated time dummy variables is statistically significant, and the coefficients show a trend: they increase over time. The basic findings of Model 3 remain robust in Model 4, although some coefficients change in magnitude, so we can draw our conclusions based on Model 4.
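The three period dummies described above can be built mechanically from the year index; the cut points follow the pilot (2003) and full-coverage (2009) dates of the New Cooperative Medical Scheme given in the text, with the lag-adjusted periods 2000–2004, 2005–2009 and 2010–2013.

```python
import pandas as pd

years = pd.Series(range(2000, 2014))        # sample period 2000-2013
# map each year to its policy period around the NCMS rollout
period = pd.cut(years, bins=[1999, 2004, 2009, 2013],
                labels=["2000-2004", "2005-2009", "2010-2013"])
dummies = pd.get_dummies(period)            # one 0/1 indicator per period
```

In estimation, one of the three indicators is dropped (or the constant omitted) to avoid perfect collinearity.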
In Model 4 our main explanatory variables are all significant even after the time fixed effects are added. The share of rural household medical expense is negative, which is consistent with our assumption. The government expenditure on health is negative, implying that government expenditure on health does not release rural residents' saving motive, which does not meet our hypothesis. Financial development is negative in Models 1–4, indicating that financial development has not lessened rural households' liquidity constraints. In Table 2 we also find that the share of rural household education expense in rural household consumption, which we adopted as a control variable for comparison with the medical expenses, is significant and positive in Models 1–4. We can therefore conclude that the impacts of medical expenditure and education expenditure on rural residents' consumption are not consistent.
As the estimated results for government expenditure on health and financial development were not consistent with our expectations or with general theory, we re-estimated Model 4 with alternative measures of these two variables as a further robustness check, presented in Model 5. In Model 5 we used the number of beds in township health institutions to replace government expenditure on health, and we adopted the per-capita net saving rate of rural residents [15] to replace financial development as the proxy variable for rural residents' liquidity constraint. Since the statistical agency no longer reports the number of beds in township health institutions for Shanghai and Beijing after 2009, our estimation in Model 5 excludes these two sections; because the panel contains 31 sections, eliminating two of them has no substantial impact on the estimation.
Although there are some changes in the Model 5 estimates compared with Model 4, the share of rural household medical expense remains significant and negative, and the share of rural household education expense remains significant and positive, corresponding with Model 4. Furthermore, the number of beds in township health institutions, as a component of government expenditure on health, is also significant and negative, indicating that the Model 4 estimate for government expenditure on health is quite robust. As the net saving rate of rural residents is significant and negative, we can conclude that rural residents face obvious liquidity constraints. In addition, because Model 5 adopts the net saving rate of rural residents, the coefficient on the share of rural household medical expense becomes smaller, indicating that medical expense is always accompanied by liquidity constraint, which is associated with the relatively large amount of medical expenditure.
We use various models and estimation techniques to examine whether and how medical expenses affect the variation in rural residents' consumption as a share of GDP. As we expected, medical expenses do lessen rural residents' consumption as a share of GDP. Our most significant and robust finding is that local government health expenditure, which should reduce rural residents' precautionary saving motives, has the opposite effect on rural residents' consumption. While this result is contrary to our expectation and to general theory, the alternative measure that captures the role of government expenditure through the number of beds in township health institutions is also significant. Taken directly, this estimation result suggests that increasing government spending on health has stimulated the rural household saving motive. But this is by no means a justification for reducing the fiscal responsibility of the government. On the contrary, the result implies, to some extent, that government expenditure on rural health is not yet sufficient, and that the government bears a heavy responsibility for improving the rural medical and health environment, because at the current poor level of that environment, rural households' medical consumption demand has been repressed.
Financial development and its alternative measure are robust regressors in all the models, and their coefficients are small and negative, which means that liquidity constraints are a factor that does not bode well for rural consumption. The share of rural household education expense is always significant and positive, with a large absolute coefficient. As demonstrated in this study, the impacts of medical expenditure and education expenditure on rural residents' consumption are therefore not consistent, so different options are needed to develop medical care and education in rural areas. As for the control variables in the Z vector, although the annual growth in provincial domestic product significantly influences the dependent variable, its coefficient is very small in absolute value; the provincial net income of rural residents and the provincial dependency ratios suffer from the same problem.
These results point to several suggestions. Rising medical expenses are an obstacle to boosting domestic consumption, especially in rural areas, so policy makers need to formulate policies that optimize the rural consumption environment, such as increasing investment in rural health care, even though the estimated effect of government expenditure on health is negative in this paper. The cautious motive that leads rural households to increase their savings is indirectly confirmed by our estimates, since the share of rural household medical expense in rural household consumption is significant and negative. In addition, liquidity constraints were also found in rural areas in the estimation results. To enhance rural consumption, the government needs not only to lift the burden of rural households' medical expenditure but also to lessen the liquidity constraint. In our opinion, increasing the coverage and funding levels of the NRCMS is an effective method; increasing the accessibility and convenience of rural health care facilities may also be effective.
References
1. Cai YZ (2009) Economic stimulus package and start-up of rural consumption in
China: empirical study based on the decomposition of rural household income. J
Financ Econ 9:4–12
2. Ding J, Ying M, Du Z (2013) China’s rural household consumption behavior
research: based on the perspective of health risk and medical insurance. J Financ
Res 10:154–166
3. Ding JH, Ying ML, Du ZC (2013) Health insurance and consumption: evidence
from China’s new cooperative medical scheme. Consum Econ 10:154–166
4. Goldsmith RW (1969) Financial structure and development. Stud Comp Econ
70(4):31–45
5. Hoechle D (2007) Robust standard errors for panel regressions with cross-sectional
dependence. Stata J 7(3):281–312
6. Huang XP, Liu H (2011) An analysis on the influencing factors of medical con-
sumption of Chinese rural residents. Consum Econ 11:77–80
7. Liu GM (2011) Discussion and analysis about financial support to rural consumer
market’s exploitation. J Central Univ Financ Econ 6:35–40
8. McKinnon RI (1973) Money and capital in economic development. Brookings Institution, Washington, DC
9. Sun HQ, Zhu C (2012) An empirical analysis of urbanization and rural consumption
growth in China. Stat Decis 5:90–93
10. Tan T, Zhang YY, Jun HE (2014) An analysis of influencing factors and elasticity
of rural households’ medical consumption spending in China. J Shanghai Univ
Financ Econ 3:63–69
11. Tan YH (2009) On fiscal policies of stimulating consumer demand in rural areas.
J Central Univ Financ Econ 8:5–9
12. Tang Q, Qin XZ (2016) An empirical study on the crowding-out effect of family
medical expenditure in China. Econ Sci 3:61–75
13. Tao Q (2009) Analysis on the consumption behavior and its constraint factors in
rural China. Economist 9:54–60
14. Wang B (2012) The deep reason and improving ways of rural consumption demand
in China. Consum Econ 1:29–32
15. Wang P (2014) Rural institutional change, liquidity constraint and consumption
growth of rural residents in China. J Shanxi Univ Financ Econ 10:1–10
16. Yang L, Chen C (2013) An analysis of the effect of education and health pub-
lic goods on rural residents’ consumption: from the perspective of human capital
promotion. J Agrotechnical Econ 9:4–12
17. Zhao ZR (2014) A study on regional differences of influencing factors of rural residents' health consumption in China. Consum Econ 3:24–29
18. Zhou L, Wang Z (2008) An empirical analysis of financial development and eco-
nomic growth in China. J Financ 10:1–13
A Discrete Time-Cost-Environment Trade-Off
Problem with Multiple Projects: The Jinping-I
Hydroelectric Station Project
Huan Zheng 1,2
1 Management School, Chongqing Technology and Business University, Chongqing 400067, People's Republic of China
290871340@qq.com
2 Engineering School, Widener University, Philadelphia 19013, USA
1 Introduction
Two of the most crucial aspects of any such construction project are time and
project cost, both of which have received considerable research attention [9,12].
One particularly important element of effective project scheduling theory and
applications is the discrete time-cost trade-off problem (DTCTP) introduced
by Harvey and Patterson [6]. Of more recent concern, however, are accusations that the construction industry is causing environmental problems that range from excessive consumption of global resources, in both construction and operation, to pollution of the surrounding environment. Hydroelectric
projects, particularly, contribute significantly to changes in river environments
[2,17]. Yet previous studies have paid more attention to evaluating the environmental impact during the later operational stages of reservoirs and hydroelectric plants than to considering it from the construction stage. Recently, however, a tightening of environmental protection regulations, especially for hydroelectric projects,
has increased pressure on project managers to reduce the environmental impact
through the selection of optimal construction modes. There is an imminent need,
therefore, to study management decisions on selection methods to ensure more
environmentally friendly means of construction during the project planning stage
when the environmental impact can best be incorporated into other project
objectives. For hydroelectric projects such as the JHS-I, this planning stage
should include the optimization of construction and subsequent operations for
environmental impact as well as time and cost.
The expanding scale of such construction projects worldwide, however, makes
effective project management extremely complex. To deal with this complexity
while still achieving management objectives, construction managers must employ
a project management decision system that effectively controls total project
duration, penalty costs, and environmental impact. One effective method for such
control is the discrete time-cost-environment trade-off problem with multiple
projects (DTCETP-mP), which is an extension of the original DTCTP. This
analysis thus applies DTCETP-mP to solve the three main objectives of the
JHS-I problem: (1) minimization of the total project duration, (2) minimization
of the total project penalty cost, and (3) minimization of the environmental
impact.
In non-routine projects such as the new construction at JHS-I [10], the dura-
tion of each activity and completion time may be uncertain, so the project man-
ager must handle multiple conflicting goals in an uncertain environment in which
information may be incomplete or unavailable. In this context, activity duration
uncertainty can be modeled using either probability-based methods [12] or fuzzy
set-based methods [18,19] depending on the situation and the project manager’s
preference. When a project manager has difficulty characterizing the random
variables, as in the current scenario of a new construction project with unique
activities and a lack of historical data, the fuzzy method is the most effective
approach.
Because the DTCETP-mP is an extension of the DTCTP, it is an NP-hard
problem that is difficult to solve [15]. Yet even the most exact of currently avail-
able methods can only solve small projects with under 60 activities, a far cry
from the numerous activities and modes per activity involved in the large-scale,
complex JHS-I project, whose optimal solutions are beyond the capabilities of
traditional production scheduling methods like PERT (Program Evaluation and
Review Technique) and CPM (Critical Path Method) [3]. More suitable heuris-
tic solution procedures for solving the DTCETP-mP are thus needed, several
of which have been suggested in the literature. For example, Franck et al. [4]
demonstrated that a genetic algorithm (GA) performed slightly better than a
tabu search (TS) procedure but required more computing. In earlier work, Wang
et al. [16] addressed larger, more complex problems by introducing an improved hybrid genetic algorithm (hGA) that uses a fuzzy logic controller (flc) to adaptively regulate the crossover and mutation rates.
2.2 Assumptions
The DTCETP-mP model for the JHS-I makes the following assumptions:
(1) The DTCETP-mP comprises multiple projects, each containing several
activities;
(2) The start time of each activity is dependent upon the completion of its
predecessor;
(3) The capital used by all activities does not exceed the limited quantities in
any time period, and the total project budget is within a predetermined
limit;
(4) The environmental impact caused by the activities does not exceed the
limited quantities in any time period, and the total project environmen-
tal impact is within a predetermined limit;
(5) When an activity begins, it cannot be interrupted;
(6) The managerial objective is to minimize the total project time, total tardi-
ness penalty, and total environmental impact for all subprojects.
Decision variables:
$\tilde{t}_{ij}^{S}$ : start time for activity j in subproject i
Functions:
z1 : total duration of the project;
z2 : total penalty costs of the project;
z3 : total environmental impact of the project.
The second objective is to measure and minimize the total cost by minimizing the total penalty costs of the multiple projects:

$$\min z_{2} = \sum_{i=1}^{I} cp_{i}\, \big( E(\tilde{t}_{iJ}^{F}) - E(\tilde{t}_{i}^{D}) \big). \qquad (2)$$
(2) Constraints
Because a specific subproject must be completed before another subproject
can be initiated (the precedence constraint of multiple projects), the model
includes the following constraint:
E(t̃SeJ ) ≤ E(t̃Si,1 ) − E(d˜eJ ), e ∈ Pr e(i). (4)
Likewise, because the start time of each activity is dependent upon the com-
pletion of some other activities (the precedence constraint of activities), the next
activity must be started after a specific activity is completed:
$$E(\tilde{t}^{S}_{il}) \le E(\tilde{t}^{S}_{ij}) - E(\tilde{d}_{il}), \quad l \in \mathrm{Pre}(j). \qquad (5)$$
The project is also subject to a limitation on the total capital and on the
capital per time period,

$$\sum_{j \in S_p} l^{c}_{ij} \le b^{c}, \quad i = 1, 2, \cdots, I, \qquad (6)$$
1714 H. Zheng
as well as on the total environmental impact and the environmental impact per
time period:

$$\sum_{i \in S_p} \sum_{j \in S_p} l^{c}_{ij} \le B^{c}, \qquad (7)$$

$$\sum_{j \in S_p} l^{e}_{ij} \times w_{ij} \le b^{e}, \quad i = 1, 2, \cdots, I, \qquad (8)$$

$$\sum_{i \in S_p} \sum_{j \in S_p} l^{e}_{ij} \times w_{ij} \le B^{e}, \quad l^{e}_{ij} = \frac{V_{ij}}{d_{ij}}. \qquad (9)$$
The nonnegativity of the variables is described in the model by the following
equation:

$$E(\tilde{t}^{S}_{ij}),\; E(\tilde{t}^{F}_{ij}),\; E(\tilde{d}_{ij}) \ge 0, \quad \forall i \in I,\; \forall j \in J. \qquad (10)$$
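As a concrete illustration, constraints (6)–(9) can be checked mechanically for any candidate schedule. The sketch below is illustrative only: the helper name and the two-subproject data are hypothetical, not values from the JHS-I case.

```python
# Sketch: feasibility check for the capital and environmental constraints
# (6)-(9). All data below are hypothetical placeholders, not JHS-I values.

def check_limits(l_c, l_e, w, b_c, B_c, b_e, B_e):
    """l_c[i][j]: capital used by activity j of subproject i per period;
    l_e[i][j]: environmental impact rate (V_ij / d_ij); w[i][j]: weights."""
    for i, row in enumerate(l_c):
        if sum(row) > b_c:                      # per-subproject capital, Eq. (6)
            return False, f"capital limit b_c violated in subproject {i}"
    if sum(sum(row) for row in l_c) > B_c:      # total capital, Eq. (7)
        return False, "total capital limit B_c violated"
    for i, row in enumerate(l_e):
        if sum(le * wij for le, wij in zip(row, w[i])) > b_e:   # Eq. (8)
            return False, f"environmental limit b_e violated in subproject {i}"
    total_e = sum(le * wij
                  for row, wr in zip(l_e, w)
                  for le, wij in zip(row, wr))
    if total_e > B_e:                           # total environmental impact, Eq. (9)
        return False, "total environmental limit B_e violated"
    return True, "feasible"

# Hypothetical two-subproject example
l_c = [[2.0, 3.0], [1.5, 2.5]]
l_e = [[0.5, 0.8], [0.4, 0.6]]
w   = [[1.0, 1.0], [1.0, 1.0]]
ok, msg = check_limits(l_c, l_e, w, b_c=6.0, B_c=12.0, b_e=2.0, B_e=4.0)
```

A schedule that fails any of the four checks is discarded or penalized before fitness evaluation.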
Although, mathematically, several Pareto optimal solutions are possible for the
multiobjective model formulated above, in real-world construction only one opti-
mized solution is needed in each time-constrained decision-making situation.
Hence, the multiobjective model is transformed into a single-objective model
using a weighting method. Additionally, because accurately setting the GA
parameters is especially important when solving large-scale problems like the JHS-I
project, GA effectiveness is improved by adaptively regulating the crossover and
mutation rates during the genetic search using the fuzzy logic controller
(flc) [16]. This regulation reduces CPU time and enhances optimization quality
and stability by controlling the ranges within which the crossover and mutation
rates increase and decrease [1,5,13,14,20].
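The weighted-sum transformation and the adaptive rate regulation just described can be sketched as follows. The weights are the η values used in the case study, but the fitness numbers and the simple threshold rule standing in for the fuzzy logic controller of [16] are assumptions for illustration.

```python
# Sketch: collapse the three objectives into one fitness value with
# weights eta1 + eta2 + eta3 = 1, and adaptively nudge the crossover and
# mutation rates. The threshold rule below is a simplified stand-in for
# the fuzzy logic controller (flc); a real flc uses fuzzy membership
# functions over successive changes in average fitness.

def weighted_objective(z1, z2, z3, eta=(0.5, 0.2, 0.3)):
    """Scalarize (total duration, penalty cost, environmental impact)."""
    return eta[0] * z1 + eta[1] * z2 + eta[2] * z3

def adapt_rates(p_c, p_m, delta_fitness, step=0.01):
    """Raise rates when average fitness stagnates, lower them when it
    improves, keeping both within bounds (bounds assumed here)."""
    if delta_fitness <= 0:            # no improvement: explore more
        p_c, p_m = p_c + step, p_m + step
    else:                             # improvement: exploit more
        p_c, p_m = p_c - step, p_m - step
    return min(max(p_c, 0.5), 0.95), min(max(p_m, 0.01), 0.2)

fitness = weighted_objective(z1=40.0, z2=24.0, z3=10.0)  # 0.5*40 + 0.2*24 + 0.3*10
```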
Solving the problem with the flc-hGA involves the following steps:
Step 7. Termination check: if one individual has achieved the predefined
fitness value, the process stops; otherwise, it goes on to Step 8.
Step 8. Regulation of the crossover and mutation rates through adaptive use of
the fuzzy logic controller; the process then returns to Step 4.
The model uses two hybrid genetic operators: a position-based crossover and
a swap mutation (SM) operator. The crossover operator randomly takes some
genes from one parent and fills the remaining positions with genes from the
other parent by scanning it from left to right, while the SM operator selects
two projects at random and swaps their contents.
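These two operators can be sketched on a permutation chromosome of project indices. This is a generic implementation of position-based crossover and swap mutation as commonly defined in the GA literature; the encoding details are assumptions, not the authors' exact implementation.

```python
import random

# Sketch of the two genetic operators on a permutation chromosome of
# project indices (the encoding details are assumptions).

def position_based_crossover(p1, p2, positions):
    """Keep p1's genes at the chosen positions; fill the remaining slots
    with the leftover genes in the order they appear in p2 (left to right)."""
    child = [None] * len(p1)
    kept = set()
    for pos in positions:
        child[pos] = p1[pos]
        kept.add(p1[pos])
    fill = (g for g in p2 if g not in kept)
    for i in range(len(child)):
        if child[i] is None:
            child[i] = next(fill)
    return child

def swap_mutation(chrom):
    """Pick two positions at random and swap their contents."""
    c = chrom[:]
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

child = position_based_crossover([1, 2, 3, 4, 5], [5, 4, 3, 2, 1], positions=[1, 3])
# -> [5, 2, 3, 4, 1]: genes 2 and 4 stay in place, the rest follow p2's order
```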
A1 Spillway project: A11 Earth-rock excavation; A12 Concreting; A13 Gates hoist equipment installation; A14 Clearing up and finishing work
A2 River diversion during construction: A21 Import and export hole dug; A22 Concrete lining and check-gate installation; A23 Lockup; A24 Gen set installation; A25 Substation construction and equipment installation
A3 Dam construction: A31 Concrete cut-off wall; A32 Dam foundation lock cut; A33 Dam filled; A34 Asphalt concrete watertight diaphragm
A4 Power capacity of stream: A41 Diversion opening; A42 Air pressure system; A43 Second-stage cofferdam; A44 Bore-hole; A45 Concrete lining
A5 Transport and power system: A51 Road clear; A52 Warehouse and factory construction; A53 Water supply; A54 Power transmission project
The data for the JHS-I project, obtained primarily from the Ertan Hydropower
Development Company, include observations of managerial practice and inter-
views with designers, consultants, contractors, subcontractors, and a city govern-
ment officer at the station; this data set is supplemented by information from prior
research. The construction manager's project experience, in particular, was invalu-
able for the researchers' comprehension of the projects' specific nature and configura-
tion. In addition to two dummy (start and end) projects, JHS-I has five subprojects:
a transport and power system, river diversion during construction, dam construc-
tion, a stream power capacity project, and a spillway project (see Table 2).
| Subproject | Activity | DA | C | EI and W | PA | DP | PP |
| S | Dummy project | | | | | | S < 1,2,3 |
| 1 | s (dummy activity) | | | | s < 1,2 | 11 | 1 < 4,5 |
| | 1 | 2 | 2 | 20.0, 0.05 | 1 < 3,4 | | |
| | 2 | 5 | 2 | 32.6, 0.055 | 2 < 4,t | | |
| | 3 | 5 | 2 | 16.7, 0.03 | 3 < t | | |
| | 4 | 3 | 4 | 37.5, 0.08 | 4 < t | | |
| 2 | s (dummy activity) | | | | s < 1,2 | 11 | 2 < 4,5 |
| | 1 | 4 | 2 | 23.5, 0.037 | 1 < 3 | | |
| | 2 | 2 | 3 | 24.2, 0.048 | 2 < 3 | | |
| | 3 | 4 | 2 | 23.8, 0.038 | 3 < 4,5 | | |
| | 4 | 3 | 1 | 19.8, 0.053 | 4 < t | | |
| | 5 | 2 | 3 | 15.7, 0.027 | 5 < t | | |
| 3 | s (dummy activity) | | | | s < 1 | 11 | 3 < 4,5 |
| | 1 | 2 | 3 | 25.0, 0.027 | 1 < 2,3 | | |
| | 2 | 5 | 3 | 23.6, 0.019 | 2 < 4 | | |
| | 3 | 3 | 3 | 21.3, 0.033 | 3 < 4 | | |
| | 4 | 3 | 1 | 21.6, 0.041 | 4 < t | | |
| 4 | s (dummy activity) | | | | s < 1,2 | 11 | 4 < T |
| | 1 | 2 | 1 | 11.6, 0.027 | 1 < 3,4 | | |
| | 2 | 5 | 2 | 16.3, 0.047 | 2 < 4,5 | | |
| | 3 | 4 | 3 | 11.4, 0.028 | 3 < t | | |
| | 4 | 2 | 3 | 13.5, 0.059 | 4 < t | | |
| | 5 | 4 | 1 | 11.5, 0.037 | 5 < t | | |
| 5 | s (dummy activity) | | | | s < 1,2,3 | 11 | 5 < T |
| | 1 | 1 | 1 | 21.6, 0.031 | 1 < 4 | | |
| | 2 | 2 | 2 | 26.7, 0.0122 | 2 < 4 | | |
| | 3 | 5 | 1 | 17.2, 0.018 | 3 < 4 | | |
| | 4 | 3 | 2 | 22.0, 0.03 | 4 < t | | |
| T | Dummy project | | | | | | |

Note: DA = expected value for activity duration (month); C = cost (million
RMB); EI and W = environmental impact and weight; PA = activity predecessors;
DP = expected value for project duration; PP = project predecessors.
The parameters are as follows: the maximum capital (units: million RMB) and
environmental impact for each time period are 14 and 10 units, with
(Bc, Be) = (12, 10), respectively, while the maximum capital and environmental
impact for each subproject time period are 6 and 6 units, with (bc, be) = (8, 6).
The penalty cost for each subproject in each time period is cpi = 12 (units:
million RMB). The evolutionary parameters are a population size of 20, a maximum
generation of 200, an optimistic-pessimistic index of λ = 0.5, and objective
weights of η1 = 0.5, η2 = 0.2, η3 = 0.3. The remaining inputs are the activity
durations, costs, activity predecessors, project durations, and project
predecessors listed above.
References
1. Afruzi EN, Najafi AA et al (2014) A multi-objective imperialist competitive algo-
rithm for solving discrete time, cost and quality trade-off problems with mode-
identity and resource-constrained situations. Comput Oper Res 50(10):80–96
2. Chen S, Chen B, Su M (2011) The cumulative effects of dam project on river
ecosystem based on multi-scale ecological network analysis. Procedia Environ Sci
5:12–17
3. Eshtehardian E, Afshar A, Abbasnia R (2009) Fuzzy-based MOGA approach to
stochastic time-cost trade-off problem. Autom Constr 18(5):692–701
4. Franck B, Neumann K, Schwindt C (2001) Truncated branch-and-bound, schedule-
construction, and schedule-improvement procedures for resource-constrained
project scheduling. OR Spectr 23(3):297–324
5. Gen M, Cheng R, Lin L (2008) Network models and optimization: multiobjective
genetic algorithm approach
6. Harvey RT, Patterson JH (1979) An implicit enumeration algorithm for the
time/cost tradeoff problem in project network analysis. Found Control Eng
4(3):107–117
7. Holland JH (1992) Adaptation in natural and artificial systems. MIT Press, Cam-
bridge
8. Jeang A (2015) Project management for uncertainty with multiple objectives opti-
misation of time, cost and reliability. Int J Prod Res 53(5):1503–1526
9. Ke H, Ma J (2014) Modeling project time-cost trade-off in fuzzy random environ-
ment. Appl Soft Comput 19(2):80–85
10. Long LD, Ohsato A (2008) Fuzzy critical chain method for project scheduling
under resource constraints and uncertainty. Int J Project Manage 26(6):688–698
11. Meier C, Yassine AA et al (2016) Optimizing time-cost trade-offs in product
development projects with a multi-objective evolutionary algorithm. Res Eng Des
27(4):1–20
12. Monghasemi S, Nikoo MR et al (2014) A novel multi criteria decision making
model for optimizing time-cost-quality trade-off problems in construction projects.
Expert Syst Appl 42(6):3089–3104
13. Mungle S, Benyoucef L et al (2013) A fuzzy clustering-based genetic algorithm
approach for time-cost-quality trade-off problems: a case study of highway con-
struction project. Eng Appl Artif Intell 26(8):1953–1966
14. Said SS, Haouari M (2015) A hybrid simulation-optimization approach for the
robust discrete time/cost trade-off problem. Appl Math Comput 259:628–636
15. Tareghian HR, Taheri SH (2007) A solution procedure for the discrete time, cost
and quality tradeoff problem using electromagnetic scatter search. Appl Math
Comput 190(2):1136–1145
16. Wang PT (1997) Speeding up the search process of genetic algorithm by fuzzy
logic. In: Proceedings of 5th European congress on intelligent techniques and soft
computing
17. Xu J, Zheng H et al (2012) Discrete time-cost-environment trade-off problem for
large-scale construction systems with multiple modes under fuzzy uncertainty and
its application to Jinping-II hydroelectric project. Int J Project Manage 30(8):950–
966
18. Zheng H (2014) The fuzzy time-cost-quality-environment trade-off analysis of
resource-constrained multi-mode construction systems for large-scale hydroelectric
projects
19. Zheng H (2014) The fuzzy time-cost-quality-environment trade-off analysis of
resource-constrained multi-mode construction systems for large-scale hydroelectric
projects. Lect Notes Electr Eng 242:1403–1415
20. Zheng H, Zhong L (2017) Discrete time-cost-environment trade-off problem and
its application to a large-scale construction project. Springer Singapore
Research on Equalization of Public Services
Based on Changes of Urban and Rural
Population Structure
Abstract. This paper discusses changes in the urban and rural population
of China in the new period from five aspects: population distribution, age
structure, employment characteristics, family characteristics and floating
population. It conducts qualitative and quantitative analysis of these
changes so as to grasp the basic situation and development trend of the
urban and rural population. On this basis, the paper proposes suggestions
and countermeasures for the equalization of public services in urban and
rural areas in our country, organized around three aspects: "increasing the
flexibility of the urban and rural public service supply mode", "promoting
the balance and accessibility of urban and rural service supply" and
"strengthening the effective supply to important groups and regions".
1 Research Background
The relationship between urban and rural areas has always been a hot issue in
the process of China's development. In November 2013, the "Decision on Several
Significant Problems on Comprehensively Deepening Reform by the Central
Committee of the Communist Party of China", issued by the Third Plenary Session
of the 18th CPC Central Committee, clearly proposed to "propel the equalization
of basic public services in urban and rural areas." There is no doubt that, in
the new period, the government has taken the perfection of the urban and rural
development integration mechanism as the main path for solving the unbalanced
development between urban and rural areas, and the equalization of public
services as the main measure for relieving the conflict in development between
them.
On the premise of this given orientation, it is urgent to allocate the public
service resources of urban and rural areas reasonably and to ensure balance.
To solve this problem, three questions must be considered: Firstly, what is
the current situation of the urban and rural population
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 145
Research on Equalization of Public Services 1723
structure? Secondly, how should the equalization of urban and rural public ser-
vices be defined? Thirdly, how can the equalization of urban and rural public
services be realized?
Domestic and foreign scholars have given some answers to the above questions,
but these answers are not sufficient. Take the urban and rural population
structure as an example. In 2011, the proportion of urban residents in our
country exceeded that of rural residents for the first time, which the media
called "the first reversal of the urban and rural population structure in
thousands of years in China" [3]; China had turned from a giant agricultural
country and agricultural civilization to an urban civilization stage
characterized by industry and the service industry. The "Social Blue Book" of
the Chinese Academy of Social Sciences notes that "this is not only a change in
the urban population percentage, but signifies an extremely profound change in
people's production methods, occupational structure, consumption behavior, way
of living and value system" [12]. Is 2011 really the time-point of a qualitative
change in the urban and rural structure of China? If we narrow the scope to the
four major economic regions (the east, middle, west and northeast regions), we
find that such a so-called reversal had already appeared in Jiangsu, Zhejiang
and Fujian as early as 2005, while in many parts of the west region (such as
Guangxi, Sichuan, Guizhou and Yunnan) the rural population still exceeds the
urban population and no reversal has happened. A conclusion drawn from the
average value of statistical data, though scientific, carries some bias and
should not be given too much weight. Besides, the urban and rural population
structure often discussed by scholars is actually the registered population
structure, i.e., the household registration status of urban and rural
residents, which differs greatly from the actual distribution of the urban and
rural population. In 2014, the number of people in our country whose registered
and actual residences were separated was 298 million, and the floating
population was 253 million; the era of judging population distribution by
household register has passed. In brief, the urban and rural population
structure of our country has entered a complex stage, and the equalization of
public services will now step into a period of overall promotion, accurate
definition and courageous attempts. On the basis of an accurate grasp of the
changes in the urban and rural population structure, we propose reform ideas
and suggestions aimed at the new problems of urban and rural public services;
this is an interesting topic, and also the original intention of this research.
2 Research Review
Searching CNKI with "public service" and "public product" as title and keyword
terms and "urban and rural" in the title returns more than 1,800 results,
including 189 master's and doctoral theses, 149 conference papers, more than
570 newspaper articles and 940 journal articles.
1724 Q. Fang and Y. Sheng
Among these, 344 high-quality publications appear in SCI and EI sources,
Chinese core periodicals and CSSCI source periodicals. The following
conclusions were obtained by analyzing this literature with the CiteSpace
software.
Firstly, the high-quality journal output is concentrated in 2003–2016, mainly
after 2010. Secondly, the top three authors by number of publications are Wu
Yemiao, Liu Chengkui and Yu Yaguai; little cooperation has formed among
authors, and most studies were conducted independently. Thirdly, the top three
research institutions are the Institute of Public Administration of Sichuan
University, the School of Government of Nanjing University and the Financial
and Tax Institute of Shandong University of Finance. Fourthly, research on the
urban and rural public service field in our country is concentrated on basic
public services, equalization, urban and rural overall development, the
urban-rural gap, rural public products, sports public services and public
finance. The research emphasis has gradually shifted from public products
(2004), urban and rural overall development (2005) and urban and rural public
services (2006–2007) in the early stage to the equalization of basic public
services (2008–2010) and urban and rural basic public services (2010). Since
"the 11th five-year plan", domestic research has tended toward specific
contents such as social insurance, sports public services and infrastructure,
paying more attention to the evaluation of public services and discussing
public services in combination with the theme of urbanization. Fifthly, there
are two burst nodes over the whole period: public products and urban and rural
overall development. The small number of such nodes indicates that research in
this field is not yet very active, with no significant emerging trend; of
course, this may be due to the short history of research on this field in our
country.
These studies mainly explore the equalization of public services from four
aspects. Firstly, research on the specific contents of basic public services.
Secondly, arguments about the unbalanced situation of urban and rural public
services: Wu [14] proposes that the unbalanced allocation of urban and rural
public resources in our country is mainly expressed in infrastructure, basic
education, social insurance and public medical health, while according to Li
[6] it is embodied in social insurance resources, social welfare resources,
public health resources, basic education resources and infrastructure
construction. Thirdly, exploration of the reasons for the unbalanced urban and
rural public services. Some scholars regard the early-stage policies of
agriculture subsidizing industry and the village subsidizing the city as the
fundamental reason for the unbalanced allocation of urban and rural public
resources [2]; some consider that the long-term imbalance is related to social
thoughts of "inevitable primary accumulation", "insufficient national financial
resources" and "justice obstructing efficiency" [15]; some emphasize the level
of economic development [5], and some regard public finance as the root of the
problem [7]. Fourthly, exploration of the
practice of promoting the equalization of urban and rural public services. On
the ideological side, some scholars propose formulating development strategies
in accordance with integrated urban and rural development, increasing
investment in rural areas [18], breaking through the functional restraint of
the "economy-oriented government" and creating a public-service-oriented
government [16]; on the financial side, some propose taking central government
investment as the leading force and improving the efficiency of policy
implementation [11]. Some scholars think it necessary to standardize government
expenditure behavior, clarify the functions of the various levels of
government, and increase the financial investment of central and local
governments [2]. Some advocate promoting the balanced allocation of urban and
rural public resources through marketization [1,19], though the corruption this
may generate needs to be avoided, and some suggest that public resources can be
allocated by non-profit organizations [13]. Some scholars think that the
balanced allocation of public resources is essentially a problem of
institutional arrangement: changing the administrative philosophy of local
governments, establishing a sustainable financial expenditure mechanism for
rural public resources, establishing a decision mechanism for public resource
allocation in which peasants participate, and perfecting the supervision and
restraint mechanism for the balanced allocation of social public resources are
four path choices for promoting balanced allocation [6]; some propose realizing
integration and association within systems, between systems, between government
and society, and among the diversified subjects of society [4]. Besides, some
scholars propose methods to improve the dual development of urban and rural
areas in specific fields of public resources such as infrastructure, public
education and medical health.
To sum up, existing research has laid a firm foundation for us to continue
exploring the urban and rural public service field, but deficiencies remain.
Firstly, compared with the discussion covering the whole country (for example,
all 31 provinces and cities), the discussion of the equalization of urban and
rural public services is obviously insufficient. Secondly, most studies treat
the unbalanced allocation of public services only superficially, resulting in
superficial and impractical reform paths. Thirdly, the existing research is
static, taking neither population mobility nor demand variability into
consideration.
In the past ten years there have been great changes in the urban and rural
population structure of China, embodied in aspects such as space, age, family,
education, income and employment. In terms of population, the urban population
of our country exceeded 50% of the total in 2011, forming a "neck and neck"
situation between urban and rural populations. In spatial distribution, the
migration from rural to urban areas continues, and the shifting weight of urban
and rural population and the adjustment of population density have become the
norm. In terms of age groups, the rapid aging of the urban and rural population
continues, and the era of economic growth depending on the "demographic
dividend" has passed. In employment, the urban and rural population is rushing
into the tertiary (service) industry, transforming employment from
"industry-oriented" to "service-oriented". Under the combined force of
shrinking families and rising incomes, it has become the general situation for
public services to lean toward fields such as medical health and pensions.
Since the founding of the nation, the population of China has grown
continuously, from 0.54 billion in 1949 to 1.36 billion in 2013. The urban
population has been on the rise, increasing from 57.7 million in the early
years of the new nation to 0.73 billion in 2013. In contrast to the rise of
the urban population, the proportion of the rural population has been on a
slow and continuous downtrend: in the early years of the new nation the rural
population was nearly 90% of the total, falling to 46.27% in 2013.
Extending the analytical field of vision to the four major economic regions,
i.e., the east, middle, west and northeast regions, the changes of urban and
rural population show both commonality and difference. On one hand, the trends
of the urban population proportion in the east, middle, west and northeast
regions are almost the same, i.e., the proportion of the urban population
increases year by year, as shown in Table 1. On the other hand, there are
significant differences in the proportion of the urban population among the
four economic regions. At the end of "the 10th five-year plan" (2005), the
proportion of the urban population in the east region was near 60% and that in
the northeast near 55%, while the proportion of the urban population
Table 1. Proportion of urban population in the four major economic regions, 2005–2013 (%)

| Region | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 |
| China | 42.99 | 44.34 | 45.89 | 46.99 | 48.34 | 49.95 | 51.27 | 52.57 | 53.73 |
| The east region | 59.23 | 60.15 | 60.87 | 61.65 | 62.48 | 64.43 | 65.19 | 66.11 | 66.92 |
| The middle region | 37.58 | 38.96 | 40.27 | 41.73 | 43.03 | 44.44 | 46.28 | 47.98 | 49.26 |
| The west region | 35.18 | 36.15 | 37.31 | 38.53 | 39.61 | 41.45 | 42.81 | 44.26 | 45.43 |
| The northeast region | 54.77 | 55.15 | 55.42 | 56.22 | 56.39 | 57.04 | 57.98 | 58.75 | 59.35 |

Data source: obtained by arrangement of the "China Statistical Yearbook 2014"
of the middle region and the west region was between 35% and 38%. During
2006–2013, the urbanization of population in the east region slowed, with only
6.77 percentage points of growth over the 8 years; the middle region grew
fastest, by 10.3 percentage points; the west region grew by 9.28 percentage
points; and the northeast region grew slowest, by only 4.2 percentage points
over the 8 years. The driving force of urbanization in China over the latest
ten years has thus been in the middle and west regions, which are also the
regions with the largest development potential for the future.
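The growth figures above can be reproduced directly from Table 1 by differencing the 2006 and 2013 urban population shares; the snippet below is a simple check of that arithmetic (in percentage points).

```python
# Reproduce the 2006-2013 urbanization growth (percentage points) from Table 1.
urban_share = {                       # region: (2006 share, 2013 share), %
    "east":      (60.15, 66.92),
    "middle":    (38.96, 49.26),
    "west":      (36.15, 45.43),
    "northeast": (55.15, 59.35),
}
growth = {region: round(s2013 - s2006, 2)
          for region, (s2006, s2013) in urban_share.items()}
# growth == {'east': 6.77, 'middle': 10.3, 'west': 9.28, 'northeast': 4.2}
```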
Table 2. Comparison on population age structure of the four major economic regions
(2013)
The dependency ratio has a considerable influence on families and society.
Among the four regions, the aged dependency ratio of the middle region is on
the high side, while those of the west and northeast regions are on the low
side. The child dependency ratio of the west region is the highest, and those
of the northeast and east regions are on the low side. As a whole, the total
dependency ratio of families in the west region is the highest (38.29%) and
that of the northeast region the lowest (26.57%), with the middle and east
regions between the two.
In the 30 years since reform and opening-up, the changes in the employment
structure of the national urban and rural population are firstly embodied in
the distribution of the employed population across the three industries: the
proportion employed in the primary industry has decreased, that in the
secondary industry has increased with frequent fluctuation, and that in the
tertiary industry has increased continuously. The national total employed
population in 2013 was 769.77 million, of which 382.4 million were urban and
387.37 million rural, a roughly "fifty-fifty" split between urban and rural
areas.
The employment structures of urban and rural areas are totally different. The
urban employed population includes people working in state-owned units,
collective units, joint-stock partnership units and joint-ownership units, as
well as in foreign-funded enterprises, private enterprises and individual
enterprises. The rural employed population mainly comprises three types:
employment in private enterprises, individual employment and farming. Since
2004, the proportion of the rural employed population working in private
enterprises has exceeded that in individual enterprises, showing a trend toward
agricultural industrialization and enterprise-oriented development, as shown in
Fig. 1.
During 1949–2010, the family scale in our country fluctuated only mildly,
decreasing from a maximum of 4.43 persons per household to 3.10 persons per
household. In 2013, the national average fell further to 2.98 persons per
household. Comparing the families of the east, middle, west and northeast
regions, the middle region has the largest household scale, at 3.19 persons
per household, and the northeast region the smallest, at only 2.76 persons per
household; the east region and the west region have household scales of 2.98
and 3.04 persons respectively.
Comparing the incomes of urban and rural residents among the east, middle,
west and northeast regions, the per capita income of urban residents in the
east region is the highest, that in the west region the lowest, and those in
the middle and northeast regions between the two. Urban resident income in the
east region is 1.44 times that in the west region, and the per capita net
income of rural residents in the east is 1.76 times that in the west. Over the
recent ten years, the ratio of urban resident income between the east and west
regions has gradually decreased, from 1.52 in 2005 to 1.43 in 2013, and the
ratio of rural resident income has decreased from 1.98 in 2005 to 1.76 in
2013.
According to the data, the top four expenditures of urban residents are
"food", "transportation and communication", "culture, education and
entertainment" and "clothing", while the top four expenditures of rural
residents are "food", "residence", "transportation and communication" and
"medical health". There are thus great differences in public service demand
between rural and urban residents: "transportation and communication" is
common to both, but rural residents need more improvement in "residence" and
"basic medical services", while urban residents pay more attention to
"culture, education and entertainment".
A comparison of urban residents' expenditure among the east, middle, northeast
and west regions (as shown in Table 3) shows that the top four expenditures of
urban residents in all four regions are "food", "transportation and
communication", "culture, education and entertainment" and "residence".
Consumption of "food" (37.07%) is largest in the west region; consumption of
"transportation and communication" (16.36%) and of "culture, education and
entertainment" (13.15%) is largest in the east region; and consumption of
"residence" is largest in the northeast region. From the expenditure of rural
residents among the east, middle, northeast and west regions (as shown in
Table 4), the top four expenditures of rural residents in all four regions are
"food", "residence", "transportation and communication" and "medical health".
Consumption of "food" occupies the largest proportion (38.17%) in the west
region; "residence" (20.74%) in the northeast region; "transportation and
communication" (13.22%) in the east region; and "medical health" (12.16%) in
the middle region. The urban and rural residents of the four regions therefore
place different emphases on public service supply, and the quality and
quantity of public service supply should be improved according to the emphases
of residents' expenditure.
Table 3. Per capita consumption expenditure of urban residents in the four major economic regions (2013)

| Region | Unit | Food | Clothing | Residence | Household articles and equipment | Transportation and communication | Culture, education and entertainment | Medical health | Others |
| National | % | 35.02 | 10.55 | 9.68 | 6.74 | 15.19 | 12.73 | 6.2 | 3.88 |
| The east region | % | 35.33 | 9.04 | 9.45 | 6.61 | 16.36 | 13.15 | 5.78 | 4.29 |
| The middle region | % | 35.68 | 11.36 | 10.02 | 6.97 | 13.28 | 12.87 | 6.38 | 3.41 |
| The northeast region | % | 32.28 | 12.19 | 11.25 | 5.92 | 13.44 | 11.62 | 9.08 | 4.23 |
| The west region | % | 37.07 | 11.51 | 9.57 | 6.67 | 12.99 | 11.93 | 6.69 | 3.57 |

Data source: obtained by calculation and arrangement of the "China Statistical Yearbook 2014"
Table 4. Per capita consumption expenditure of rural residents in the four major economic regions (2013)

| Region | Unit | Food | Clothing | Residence | Household articles and equipment | Transportation and communication | Culture, education and entertainment | Medical health | Others |
| National | % | 37.67 | 6.62 | 18.62 | 5.84 | 12.01 | 7.33 | 9.27 | 2.64 |
| The east region | % | 37.35 | 6.66 | 16.98 | 5.66 | 13.22 | 8.12 | 8.86 | 3.16 |
| The middle region | % | 34.45 | 7.82 | 17.28 | 4.03 | 12.27 | 9.02 | 12.16 | 2.96 |
| The northeast region | % | 37.42 | 6.1 | 20.74 | 6.34 | 10.33 | 6.94 | 9.46 | 2.36 |
| The west region | % | 38.17 | 6.84 | 18.98 | 5.85 | 12.09 | 6.12 | 9.46 | 2.49 |

Data source: obtained by calculation and arrangement of the "China Statistical Yearbook 2014"
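The "top four" expenditure rankings discussed in the text can be checked against the national rows of Tables 3 and 4; the snippet below simply copies those rows and sorts them.

```python
# National expenditure shares (%) copied from Tables 3 (urban) and 4 (rural).
urban = {"food": 35.02, "clothing": 10.55, "residence": 9.68,
         "household articles and equipment": 6.74,
         "transportation and communication": 15.19,
         "culture, education and entertainment": 12.73,
         "medical health": 6.2, "others": 3.88}
rural = {"food": 37.67, "clothing": 6.62, "residence": 18.62,
         "household articles and equipment": 5.84,
         "transportation and communication": 12.01,
         "culture, education and entertainment": 7.33,
         "medical health": 9.27, "others": 2.64}

def top4(shares):
    """Return the four largest expenditure categories, descending."""
    return [k for k, _ in sorted(shares.items(), key=lambda kv: -kv[1])[:4]]

# top4(urban): food, transportation and communication,
#              culture/education/entertainment, clothing
# top4(rural): food, residence, transportation and communication, medical health
```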
The floating population of China in 2014 was 0.253 billion, accounting for
18.5% of the total national population. Liu et al. [8] measured the directivity
and activeness of the regional floating population in China and divided the
floating-population regions into four types. Urban agglomeration regions such
as the Pearl River Delta, the Yangtze River Delta and Beijing are active
net-inflow regions. Rural areas in the east region and the densely populated
middle and west regions are active net-outflow regions. The peripheries and
rural areas of the developed urban agglomerations in Zhejiang, Fujian and
Guangdong in the east region are active balanced regions. The vast middle
region, the west region, Shandong, Hebei and north Jiangsu in the east region,
and Jilin and Liaoning in the northeast
1732 Q. Fang and Y. Sheng
region are the non-active regions. The distribution proportion of the floating
population across the east, middle and west regions was relatively stable
during 2000–2010, with growth rates of 115%–120%. The floating population in
the east region accounts for two thirds of the national total and is growing
rapidly. The floating-population concentration areas in coastal regions are
gradually diffusing, in a continuous trend [9]. This trend is most significant
in the Yangtze River Delta, while diffusion in the other two concentration
areas, the Pearl River Delta and the Beijing-Tianjin-Hebei region, is
relatively limited. Megalopolises such as the provincial capitals of inland
regions attract large numbers of floating population; at the same time, the
distribution center of the floating population is moving significantly
northward (Table 5).
Index                                            Year  The east region  The middle region  The west region
Total floating population (10 thousand persons)  2000  5110.4           1237.8             1552.6
                                                 2010  10987.1          2661.9             3407.1
Growth rate during the 10 years                        115.00%          115.10%            119.40%
Proportion of floating population                2000  64.7             15.7               19.7
in the whole country (%)                         2010  64.4             15.6               20
Proportion of floating family register (%)       2000  11.1             3                  4.4
                                                 2010  22.1             5.8                8.8
developed regions is relatively high, and underdeveloped regions fail to
attract floating-population families to reunite because of the lack of
high-quality resources” [17].
vice through urban and rural human resource integration. Establish a unified
urban-rural employment standard, weaken the localization and household
registration (hukou) requirements in the employment field, establish the
dominant position of the labor market, and let the market allocate labor
resources. Build a labor market with dual “online + offline” platforms and
improve the information network, to establish an information system jointly
involving enterprises, job seekers and employment agencies. Improve the
employee training and promotion system and the incentive mechanism, encourage
enterprises to safeguard employees’ training rights, and enlarge the promotion
space for peasant workers, so that they can effectively acquire the means of
living in cities.
(2) Promote the Balance and Accessibility of Urban and Rural Service Supply
Relieve the unbalanced urban-rural structure by giving directional preference
to rural areas. Further improve rural transportation infrastructure, ensure
smooth roads, and complete the construction of outward roads for poor
villages. Improve the management and maintenance of rural roads, stabilize
road conditions, extend their service life, and consolidate the achievements
of rural road construction. Promote urban-rural integration, and improve
urban system planning and village and town construction planning for all
urban and rural areas. In allocating rural public service resources, pay more
attention to the aging population and school-age children.
Promote the development of small and medium-sized cities, and find supporting
points for cities to nurture the countryside in return. It is suggested to
comprehensively consider terrain and economic conditions when building central
cities in the middle and west regions, to select a batch of type I and type II
small cities and cultivate and encourage them to grow larger and stronger,
forming radiation circles at different levels that provide substantial support
for the development of rural areas.
Formulate differentiated equalization strategies according to regional
characteristics. Link infrastructure and the accessibility of public services
with residents’ spatial behaviors, to strengthen the accessibility of public
space, employment, administrative centers and other comprehensive public
services. Adopt different environmental governance measures in urban and
rural areas: in rural areas, emphasize environmental improvement (drinking
water, toilets, rubbish and waste water treatment); in urban areas, emphasize
air quality and traffic congestion. Formulate different propelling schemes
according to the public service levels in the east, middle, west and
northeast regions. Highlight the characteristics of urban and rural areas and
strengthen their respective landscape advantages. Avoid rural areas copying
the urban community mode, substituting rural landscapes with urban elements,
or replacing greenbelts with hard pavement. A unified allocation standard
between urban and rural areas is not advocated, so as to avoid the loss of
rural characteristics and folk culture.
(3) Strengthen the Effective Supply to Important Groups and Key Areas
Improve the level of public service supply and guide peasant workers to
integrate into cities in an orderly way. Improve labor market infrastructure,
and set up an integrated “urban and rural labor force market information
platform” combining enterprise recruitment information, job-seeking
information, industry employee information and employment policy. Increase
the employment stability of peasant workers, and improve their vocational
skills by means such as issuing free training vouchers and granting
enterprises tax deductions for training fees, so as to increase employment
opportunities. Increase the participation of peasant workers in urban public
affairs, and set up peasant worker service platforms in communities to help
them adapt to city life as soon as possible through services such as
formality agency, living guidance, employment consultation and information on
children’s school enrollment.
Construct service networks to improve medical and pension services for senior
citizens in urban and rural areas. Conduct a general survey of the data,
residence and family conditions of senior citizens in urban and rural areas
nationwide, and establish archives for them. Arrange medical health
institutions and nursing institutions for the aged in urban and rural areas,
build pension communities that integrate medical and pension services, and
form medical and pension service networks that cover urban and rural areas,
function reasonably, and share resources between pension services and medical
health services. Improve the capacity of grassroots medical health
institutions to provide door-to-door services for senior citizens, encourage
medical institutions to provide a green channel for senior citizens seeking
medical service, and encourage nursing institutions for the aged to provide
medical services for senior citizens. Guide enterprises to develop integrated
pension communities combining residence, health care, pension, recovery and
entertainment; guide community health service institutions to transform and
upgrade, and encourage comprehensive medical institutions to cooperate with
nursing institutions for the aged, giving full play to the radiating and
leading functions of medical alliances.
References
1. Chen J (2008) Market-oriented allocation of public resources: theoretical basis, risk
analysis and route selection. J Municipal Party Committee Party Sch Yinchuan
1:59–62 (in Chinese)
2. Chen J (2010) Empirical analysis on different influences of fiscal expenditure behav-
ior on infrastructures of urban and rural areas. Xinjiang Acad Soc Sci 3:24–29 (in
Chinese)
3. Hubei Daily (2011) Reversal of urban and rural population structure. Hubei Daily (in Chinese)
4. Jing T (2015) Base line justice: welfare basis point of balanced development
between justice and development. J Beijing Technol Univ (Soc Sci Ed) 1:1–7 (in
Chinese)
5. Lei X, Zhang N (2012) Research overview on urban and rural overall development
of social insurance. Stud Soc Secur 1:91–96 (in Chinese)
1 Introduction
Financial constraints can affect corporate innovation [9] and investment effi-
ciency [10], which seriously restricts the further development of corporates and
ultimately affects economic growth. A World Bank report [7] shows that 75% of
non-financial listed companies in China cite financial constraints as a major
obstacle to business development, the highest percentage among 80 surveyed
countries. Chinese corporates are facing the test of financial constraints.
The characteristics of China’s economy have become markedly different from
those of the past 30 years, mainly in the following aspects: (i) economic
growth has tailed off to around 7% or lower; (ii) a large amount of money
flows into real estate and infrastructure construction instead of the real
economy; (iii) the real economy, such as manufacturing, performs consistently
poorly. The lack of capital in the real economy is detrimental to long-term
sustainable economic growth, and capital flowing into the real estate sector
may fail to promote economic growth or add to economic stability; instead, it
may boost housing prices and make conditions in the real economy more
difficult. Against this background, it is of practical significance to study
the causes of and solutions to the financial constraints of traditional
enterprises.
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 146
1738 H. Wang et al.
Enterprises with poor financial status may carry out earnings management to
meet financing requirements and obtain funds. Creditors and shareholders are
in a weak position with less information and cannot effectively supervise the
use of funds, so enterprises are more prone to moral hazard and adverse
selection. Creditors and shareholders will require a higher risk premium for
bearing greater default risk. Therefore, under the same conditions, the higher
the degree of information asymmetry, the more serious the financial
constraints enterprises suffer. Some studies show that accounting conservatism
can help to ease agency conflicts. Beaver and Ryan [5] find that accounting
conservatism mitigates agency conflicts between management and creditors by
undervaluing net assets and discouraging management from transferring creditor
interests to shareholders through asset substitution. Lafond and Roychowdhury
[14] argue that the management shareholding ratio is negatively correlated
with accounting conservatism because accounting conservatism eases agency
conflicts between management and shareholders by suppressing earnings
management and over-investment. As a signal transmission mechanism, accounting
conservatism can effectively reduce agency costs and avoid the moral hazard
and adverse selection caused by information asymmetry. This shows that
accounting conservatism plays a positive role in easing financial constraints.
Based on this, Hypothesis 3 is proposed.
Hypothesis 3: Under the same conditions, the improvement of accounting con-
servatism can alleviate the corporate financial constraints.
Due to China’s special national conditions, SOEs still occupy the dominant
position in the market economy. SOEs not only have more resources and policy
advantages, but also enjoy implicit guarantees from the government. When SOEs
fall into financial trouble, the government is more willing to provide
financial assistance and banks are more willing to provide loans. This reduces
investors’ concern about accounting conservatism; that is, accounting
conservatism has little effect on alleviating the financial constraints of
SOEs. On the contrary, the accounting information quality of non-SOEs is one
of the most important factors considered by external capital suppliers, and
conservative accounting policies can help such enterprises obtain financing
from banks and investors. In a downturn, non-SOEs suffer more serious
financial constraints, so it is extremely important for them to obtain
financing from external markets; using accounting conservatism to demonstrate
stable profitability and good operating conditions can alleviate the
financial constraints. From the banks’ point of view, SOEs also have credit
quotas granted by banks, so the financial constraints suffered by SOEs are
not serious even in a downturn, and the financial constraints caused by an
economic downturn will be more serious for non-SOEs. Based on the above
analysis, Hypothesis 4 is proposed.
Hypothesis 4: Under the same conditions, the increase of accounting conservatism
weakens the negative influence of economic recession on corporate financial con-
straints, and the interaction is more significant in non-SOEs.
Economic, Accounting and Financial Constraints 1741
where i indicates the firm, EPS_i,t is earnings, RET_i,t is returns, and
DR_i,t is a dummy variable that equals 1 if RET_i,t is less than 0 and 0
otherwise. Thus α2 is the good-news timeliness measure and α3 is the
incremental timeliness of bad news over good news, i.e. conservatism.
Khan and Watts [13] improved the Basu model. They consider that the firm-year
specific coefficients α2 (timeliness of good news) and α3 (conservatism) can
be expressed as linear functions of firm-year characteristics that are
correlated with the timeliness of good news and conservatism:
where SIZE_it is the natural log of the market value, MB_it is the
market-to-book ratio, and LEV_it is the debt-to-equity ratio. Replacing α2 and
α3 in Eq. (1) by Eqs. (2) and (3), respectively, yields the following
empirical regression model:
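A sketch of the substituted model, reconstructed from the definitions above
(Khan and Watts additionally include the firm characteristics themselves and
their interactions with DR_i,t as separate controls, which are omitted here):

```latex
EPS_{i,t} = \alpha_0 + \alpha_1 DR_{i,t}
          + RET_{i,t}\,(\mu_1 + \mu_2\,SIZE_{i,t} + \mu_3\,MB_{i,t} + \mu_4\,LEV_{i,t})
          + DR_{i,t}\,RET_{i,t}\,(\lambda_1 + \lambda_2\,SIZE_{i,t} + \lambda_3\,MB_{i,t} + \lambda_4\,LEV_{i,t})
          + \varepsilon_{i,t}
```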
As for the research model of financial constraints, Almeida’s cash flow
sensitivity of cash model [1] has been recognized by many scholars. The model
suggests that if enterprises suffer tighter financial constraints, they will
hold more cash out of their cash flows to prepare for investment
opportunities, so their cash flow sensitivity of cash will be higher. In order
to verify the four
hypotheses, this paper builds extended models based on Almeida’s cash flow
sensitivity of cash model.
(1) Test the influence of the economic cycle on corporate financial constraints
To test the influence of the economic cycle on the financial constraints
suffered by enterprises, this paper adds a cross-term between operating cash
flow and the economic cycle to the cash flow sensitivity of cash model, and
verifies whether the economic cycle affects financial constraints by checking
the coefficient of the cross-term. As for Hypothesis 2, grouped multiple
regression analysis is performed according to the different property rights.
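The role of the cross-term can be sketched as follows. In the augmented model,
the cash flow sensitivity of cash is the partial derivative of cash holdings
with respect to operating cash flow, so the sign of the cross-term coefficient
shows how the economic cycle shifts the degree of financial constraints. The
coefficient values below are hypothetical placeholders, not the paper's
estimates.

```python
# Illustrative sketch: in the augmented model
#   Cash = b0 + b1*Cfo + b2*Cycle + b3*(Cfo * Cycle) + controls,
# the cash flow sensitivity of cash is dCash/dCfo = b1 + b3*Cycle.

def cash_flow_sensitivity(b1: float, b3: float, cycle: float) -> float:
    """Marginal propensity to save cash out of operating cash flow."""
    return b1 + b3 * cycle

b1, b3 = 0.15, -1.9                                  # hypothetical estimates
boom = cash_flow_sensitivity(b1, b3, cycle=0.02)     # expansion: GDP gap > 0
bust = cash_flow_sensitivity(b1, b3, cycle=-0.02)    # recession: GDP gap < 0
# With b3 < 0, the sensitivity, and hence the constraint, rises in recession.
```

A negative cross-term coefficient therefore means that a recession (negative
cycle value) raises the cash flow sensitivity, i.e. tightens constraints,
which is what Hypothesis 1 predicts.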
(3) Test the interaction of the economic cycle and accounting conservatism on
corporate financial constraints
To test the interaction of the economic cycle and accounting conservatism on
the degree of financial constraints, this paper adds cross-terms of the
economic cycle, operating cash flow and accounting conservatism to the cash
flow sensitivity of cash model. We observe the coefficients of the three
cross-terms to verify how the economic cycle and accounting conservatism
jointly affect corporate financial constraints. Grouped multiple regression
analysis is again performed by property rights.
assets; (3) excluding enterprises with uncertain property rights. Finally,
6933 firm-year observations are obtained, including 3975 SOEs and 2958
non-SOEs. In addition, the relevant variables are winsorized at the 1% and
99% percentiles to eliminate the effects of extreme values. All financial
data come from the CSMAR database, and Stata 11.0 is used to analyze the data
(Table 1).
Year 2008 2009 2010 2011 2012 2013 2014 2015 Total
Number 593 626 675 853 989 1066 1066 1065 6933
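The 1%/99% winsorization step can be sketched as follows: values below the
1st percentile are clipped up to it and values above the 99th clipped down.
The nearest-rank percentile rule used here is an assumption for illustration;
Stata's implementation differs in detail.

```python
import math

def winsorize(values, lower=0.01, upper=0.99):
    """Clip each value into [lower percentile, upper percentile]."""
    s = sorted(values)
    n = len(s)
    lo = s[max(math.ceil(lower * n) - 1, 0)]        # nearest-rank percentile
    hi = s[min(math.ceil(upper * n) - 1, n - 1)]
    return [min(max(v, lo), hi) for v in values]

data = list(range(1, 101)) + [10_000]   # one extreme outlier at the top
clean = winsorize(data)                 # outlier pulled back to the 99th pct
```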
4 Empirical Results
4.1 Descriptive Statistics
Table 2 provides descriptive statistics for the variables for the whole
sample, while Table 3 reports the descriptive statistics grouped by property
rights for SOEs and non-SOEs. The mean value of Cash is −0.00048, the minimum
is −0.759 and the maximum is 0.636, which indicates that enterprises differ
widely in cash holdings. The mean Cash of non-SOEs is significantly larger
than that of SOEs, indicating that non-SOEs are more cautious than SOEs. The
mean of Cfo is 0.0472, the minimum is −4.27 and the maximum is 0.549,
indicating that cash flows also vary widely across enterprises. The mean of
Conserv is 0.0397, the minimum is −0.00223 and the maximum is 0.276, which
shows that accounting conservatism is widely applied in Chinese enterprises
but differs greatly among them. The mean of Conserv for SOEs is 0.0397 and
for non-SOEs 0.0476, which indicates that non-SOEs have higher accounting
conservatism than SOEs. The mean of Lev is 0.432, indicating that debt
financing is one of the important financing channels. The mean of Growth is
0.376, the minimum is −1, and the maximum is 665.5. The asset scale and debt
level of SOEs are higher than those of non-SOEs, but non-SOEs have higher
growth and profitability.
Table 4 reports the results of the correlation analysis. The relationships
between the dependent variable and the independent variables are consistent
with the hypotheses. Cfo is positively correlated with Cash, indicating that
A-share listed manufacturing enterprises generally face financial
constraints. Cfo is negatively correlated with Cycle GDP, which indicates
that the economic cycle has an obvious inhibitory effect on the external
financing ability of enterprises. Conserv is negatively correlated with Cfo,
which indicates that the improvement of accounting conservatism can alleviate
the financial constraints suffered by enterprises. A more comprehensive
summary will be given in the empirical regression analysis. In addition,
nearly all the correlation coefficients are less than 0.6, indicating that
multicollinearity is not a serious concern.
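The multicollinearity screen described above can be sketched as pairwise
Pearson correlations with a 0.6 cutoff. The data below are illustrative, not
from the paper's sample.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson([1.0, 2.0, 3.0, 4.0], [2.0, 4.1, 5.9, 8.0])  # near-collinear pair
flagged = abs(r) >= 0.6    # would be flagged as a multicollinearity risk
```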
Variable   Cash       Cfo        Cycle GDP  Conserv    Size       Lev        Roa        Growth    Debt      State
Cash       1.000
Cfo        0.135∗∗∗   1.000
Cycle GDP  0.021∗∗∗   −0.007∗∗∗  1.000
Conserv    0.164∗∗∗   −0.127∗∗∗  0.098∗∗∗   1.000
Size       0.161∗∗∗   0.081∗∗∗   −0.107∗∗∗  0.415∗∗∗   1.000
Lev        0.165∗∗∗   −0.129∗∗∗  0.096∗∗∗   0.999∗∗∗   0.402∗∗∗   1.000
Roa        0.117∗∗∗   −0.140∗∗∗  0.053∗∗∗   −0.323∗∗∗  0.010      −0.319∗∗∗  1.000
Growth     0.036∗∗∗   −0.006     −0.005     0.024∗     0.021∗     0.024      0.010      1.000
Debt       0.070∗∗∗   0.053∗∗∗   0.056∗∗∗   0.124∗∗∗   0.082∗∗∗   0.121∗∗∗   −0.330∗∗∗  0.059∗∗∗  1.000
State      −0.127∗∗∗  0.036∗∗∗   −0.136∗∗∗  −0.336∗∗∗  −0.295∗∗∗  −0.333∗∗∗  0.106∗∗∗   0.009     0.041∗∗∗  1.000
t statistics in parentheses, ∗ p < 0.1, ∗∗ p < 0.05, ∗∗∗ p < 0.01
financial constraints. Hypothesis 1 is thus confirmed. The coefficient of
Cycle-GDP ∗ Cfo for SOEs is −1.910, while that for non-SOEs is −2.254, which
shows that the financial constraints of non-SOEs are more affected by changes
in the economic cycle. Hypothesis 2 is thus confirmed.
5 Conclusion
This paper examines the interaction of the economic cycle and accounting
conservatism on corporate financial constraints from the property rights
perspective, using data on A-share listed manufacturing companies from 2008
to 2015. It is found that macroeconomic recession aggravates the financing
difficulty of enterprises and causes more serious corporate financial
constraints, and this negative influence is more significant for non-SOEs.
Besides, the improvement of corporate accounting conservatism helps to
alleviate the financial constraints suffered by enterprises, and this
positive influence is also more significant for non-SOEs. We test the dynamic
response of enterprises facing economic recession by combining the economic
cycle with micro-enterprise behavior: micro-enterprises can weaken the
negative influence of the economic recession by increasing accounting
conservatism, which not only helps to understand the micro-transmission
mechanism of the economic cycle, but also helps enterprises to improve the
efficiency of cash use.
The shortcomings of the study: since national macro data are announced
annually, only annual data of the sample companies can be chosen to match the
macro data. This shortens the research period and may not accurately reflect
the influence of monthly macroeconomic fluctuations on corporate financial
constraints. In addition, an accurate measurement model is the basis of
empirical research; although there are many measurement methods abroad, there
is no conclusive conclusion, and domestic research in this area is
References
1. Almeida H, Campello M, Weisbach MS (2004) The cash flow sensitivity of cash. J
Finance 59(4):1777–1804
2. Balakrishnan K, Watts R, Zuo L (2016) The effect of accounting conservatism
on corporate investment during the global financial crisis. J Bus Finance Account
43(5–6):513–542
3. Ball R, Shivakumar L (2005) Earnings quality in UK private firms: compara-
tive loss recognition timeliness. J Account Econ 39(1):83–128
4. Basu S (1997) The conservatism principle and the asymmetric timeliness of
earnings. J Account Econ 24(1):3–37
5. Beaver WH, Ryan SG (2005) Conditional and unconditional conservatism: concepts
and modeling. Rev Acc Stud 10(2):269–309
6. Bernanke B, Gertler M (1989) Agency costs, net worth, and business fluctuations.
Am Econ Rev 79(1):14–31
7. Claessens S, Tzioumis K (2006) Measuring firms’ access to finance. World Bank
8. Dong J (2006) Measuring China’s business cycles. Econ Res 7:41–48 (in Chinese)
9. Efthyvoulou G, Vahter P (2016) Financial constraints, innovation performance and
sectoral disaggregation. Manchester Sch 84(2):125–158
10. Guariglia A, Yang J (2015) A balancing act: managing financial constraints and
agency costs to minimize investment inefficiency in the Chinese market. J Corp
Finance 36:111–130
11. Jiang GA, Rao P (2011) Macroeconomic policies and corporate behavior:
broadening the horizon of accounting and corporate finance research.
Account Res 3:9–18 (in Chinese)
12. Jiang L, Liu X (2011) Economic cycle fluctuations and the cash holding behavior
of listed companies. Account Res 9:40–46 (in Chinese)
13. Khan M, Watts RL (2009) Estimation and empirical properties of a firm-year
measure of accounting conservatism. J Account Econ 48(2–3):132–150
14. Lafond R, Roychowdhury S (2008) Managerial ownership and accounting conser-
vatism. J Account Res 46(1):101–135
15. Lara JMG, Osma BG, Penalva F (2014) Information consequences of accounting
conservatism. Eur Account Rev 23(2):173–198
16. Lucas RE (1978) Asset prices in an exchange economy. Econometrica
46(6):1429–1445
17. Macve RH (2015) Fair value vs conservatism? Aspects of the history of
accounting, auditing, business and finance from ancient Mesopotamia to
modern China. SSRN Electron J 47(2):124–141
18. Mankiw NG (2009) Principles of economics. Tsinghua University Press, Beijing
19. Shen CH, Lin CY (2015) Political connections, financial constraints, and corporate
investment. Rev Quant Finance Account 47(2):1–26
20. Shi X, Zhang S (2010) Behavior of substitution between trade and bank borrowing
through economics cycles: evidence from China. J Manage Sci China 13(12):100–
122 (in Chinese)
21. Sun ZA, Liu F, Li ZC (2005) Market development, government influence and cor-
porate debt maturity structure. Econ Res J 5:52–63
22. Watts RL (2003) Conservatism in accounting part I: explanations and implications.
Soc Sci Electron Publ 17(3):207–221
23. Weiss A (1984) Information imperfection in the capital market and macroeconomic
fluctuation. Am Econ Rev 74(2):194–199
24. Wier HA (2009) Fair value or conservatism: the case of the gold industry. Contemp
Account Res 26(4):1207–1233
Traffic Lights Dynamic Timing Algorithm
Based on Reinforcement Learning
1 Introduction
Traffic congestion has become an urgent problem in metropolitan cities around
the world. It is generally recognized that traffic signal improvements offer
the biggest payoff for reducing congestion and increasing the effective
capacity of existing road networks, and that adaptive traffic signal control
systems hold the most promise for improvement. The reinforcement learning
(RL) approach implicitly models the dynamics of complex systems by learning
control actions and the resulting changes in traffic flow, and seeks a
(sub)optimal signal plan from the learned input-output pairs. RL, usually
formalized in the framework of a Markov decision process (MDP), assumes that
an intersection behaves like an intelligent agent learning to plan green
times in each cycle using current traffic information.
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 147
Traffic Lights Dynamic Timing Algorithm 1753
In Fig. 1, at a three-arm intersection, phase-one denotes that the light in
the east-west direction turns green at a given time and vehicles get
permission to pass, while the light in the south direction is red; phase-two
is the reverse at that junction: green for the south direction and red for
the east-west direction. In Fig. 2, at a four-arm intersection, phase-one
denotes that the light turns green in the east-west direction and vehicles
get permission to pass, while the light in the south-north direction is red;
phase-two again denotes the opposite at that junction. In an FST strategy,
the phases are set in a round-robin (RR) manner and the duration of green
time for each phase is assigned in advance.
Due to the dynamic and uncertain nature of the traffic environment, we
suppose that the release order of the phases changes frequently in light
timing. This study sets the phase selection strategy according to vehicle
density: if the vehicle density in the phase-one direction is higher, then
choose phase-one; otherwise
choose phase-two. In this way, the phase that obtains the evacuation right
can be easily determined.
phase(t) = phase-one, if ρ_phase-one > ρ_phase-two; phase-two, if ρ_phase-one ≤ ρ_phase-two,   (1)

where phase(t) is the phase chosen by the phase selection strategy,
ρ_phase-one is the vehicle density in the phase-one direction, and
ρ_phase-two is the vehicle density in the phase-two direction.
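A minimal sketch of this density-based phase selection rule, with ties going
to phase-two to match the "≤" case:

```python
def select_phase(rho_phase_one: float, rho_phase_two: float) -> str:
    """Give the green to the phase whose approach has the higher density."""
    return "phase-one" if rho_phase_one > rho_phase_two else "phase-two"

chosen = select_phase(0.6, 0.3)   # denser phase-one direction wins
```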
In Eq. (2), V^π(s) is the value function accumulated under policy π, defined
as follows:

V^π(s) = Σ_{s′∈S} P(s, a, s′)[R(s, a, s′) + γ V^π(s′)],   (3)

where γ ∈ (0, 1) is the discount factor; the value function denotes the
discounted sum of the rewards.
In reinforcement learning, we can easily formulate the light timing strategy
within the MDP framework described, treating the light at a signalized
intersection as a learning agent. The light agent observes the congestion
situation of the surrounding environment, takes an action based on the
exploration strategy, acts on the environment, and obtains a reward to
evaluate the action. The light agent learns through trial-and-error
experience to achieve the optimal policy.
approach, with the traffic light at a junction as a learning agent. The light
agent does not need to know how the environment works. It works by estimating
the values of state-action pairs, called Q-values, each of which represents
the maximum discounted sum of future rewards an agent expects to receive if
it starts in state s, chooses action a, and then continues to follow an
optimal policy.
This paper designs a distributed traffic light control strategy based on
Q-learning. In this traffic light timing control model, each traffic light is
a Q-learning agent: the agent chooses the green time of the traffic light as
its action, the density of vehicles in the traffic lanes at the intersection
as the state, and the vehicles’ average traveling time in the lane as the
reward. The learning system interacts with the environment constantly to get
feedback and adjust the mapping from states to actions.
The Q-learning update formula is as follows:

Q(s, a) ← Q(s, a) + α[R(s, a) + γ max_{a′∈A} Q(s′, a′) − Q(s, a)],   (4)

where α is the learning rate and γ is the discount factor governing the
convergence rate; R(s, a) is the reward the light agent receives for taking
action a in state s and ending up in state s′; and Q(s, a) is the estimated
sum of future rewards used to evaluate the action taken. Q-learning is an
iterative method that refines the estimates of the Q-values. Under this
update rule, the estimates converge to the optimal Q-values and the light
agent’s knowledge becomes increasingly precise.
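The tabular update can be sketched as follows. The state and action here are
illustrative (a discretized lane density and a green time in seconds); the
values α = 0.7 and γ = 0.9 match those used in the experiments.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.7, gamma=0.9):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

Q = {}
actions = [10, 20, 30]            # candidate green times (seconds)
new_q = q_update(Q, s="high", a=20, r=-1.0, s_next="medium", actions=actions)
```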
In this paper, Q-values are updated by the Q-learning update rule with a
Boltzmann-based exploration strategy. The light agent decides its action
based on the currently learnt Q-values; the Boltzmann exploration strategy
used by the light agent to choose actions is defined as follows:

p[a|s] = e^{Q(s,a)/t} / Σ_{a′∈A} e^{Q(s,a′)/t},   (5)

where t is the temperature parameter controlling the randomness of
exploration.
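Eq. (5) can be sketched as a softmax over the Q-values: each action's
probability is proportional to exp(Q(s, a)/t), with the temperature t trading
off exploration (large t) against exploitation (small t).

```python
import math

def boltzmann_probs(q_values, t=1.0):
    """Softmax action probabilities: p_i proportional to exp(q_i / t)."""
    weights = [math.exp(q / t) for q in q_values]
    total = sum(weights)
    return [w / total for w in weights]

probs = boltzmann_probs([1.0, 2.0, 3.0], t=1.0)  # higher Q -> higher probability
```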
In this method, fuzzy descriptions of the traffic density during each phase
at the intersection and at its joint intersection, and of the reward, are
implemented respectively; that is, they are fuzzified. After that, we develop
a fuzzy rule base and initialize the reward values by if-then relations.
(1) Fuzzy sets
A fuzzy set is a pair (U, A), where U is a set and A : U → [0, 1] is a
membership function; A(x) is the degree of membership of x in the fuzzy set.
The closer A(x) is to 1, the more fully x belongs to the set; the closer A(x)
is to 0, the less it belongs. We use triangular membership functions in this
paper. The domain of traffic density during a phase at the intersection is
[0, 1], and the density level is divided into four categories: below 0.25,
between 0.25 and 0.5, between 0.5 and 0.75, and above 0.75. Correspondingly,
the fuzzy subsets of traffic density are “zero” (Z), “low” (L), “medium” (M)
and “high” (H); the membership function of each subset is shown in Fig. 4:
The domain of the action reward is the interval [−2, 2], and the five fuzzy
subsets of reward are “zero” (Z), “small” (S), “medium” (M), “medium big”
(MB) and “big” (B); the membership functions of the subsets are shown in Fig. 5:
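A triangular membership function and the four density subsets can be sketched
as follows. The breakpoints follow the quartile levels in the text; the exact
peak positions are an assumption, since Fig. 4 is not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 outside (a, c), rising to 1 at x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed (a, b, c) breakpoints for the Z/L/M/H density subsets on [0, 1]
density_sets = {
    "Z": (-0.25, 0.00, 0.25),
    "L": (0.00, 0.25, 0.50),
    "M": (0.25, 0.50, 0.75),
    "H": (0.50, 0.75, 1.25),
}
mu = {name: tri(0.5, *abc) for name, abc in density_sets.items()}
```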
The traffic light optimizes its action selection strategy according to the
density situation at the current intersection (CI) and its joint intersection
(JI), to maintain synergy between traffic lights on a small scale and improve
the traffic capacity of the whole road network efficiently.
3 Experiments
In order to verify the validity and correctness of the traffic light control
strategy, experiments are conducted with a vehicle induction system (SVIS)
based on the shortest path
algorithm. Simulations and experiments are carried out on a road network in
the open-source SUMO simulator; the network is part of the U.S. State of
Vermont. In the Q-learning-based traffic light control strategy, we set
α = 0.7 and γ = 0.9. The road network is shown in Fig. 6 (Table 2):
Attribute                                                  Quantity
Number of traffic lights (3-way or 4-way intersections)    51
Number of road sections                                    206
Starting points                                            8
Terminal points                                            8
Departure rate (v/h)                                       1440
Total simulation time                                      15000
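The tabular Q-learning update behind this control strategy, with the α = 0.7 and γ = 0.9 set above, can be sketched as follows; the state and action names are hypothetical illustrations, not the paper's encoding:

```python
from collections import defaultdict

ALPHA, GAMMA = 0.7, 0.9  # learning rate and discount factor used in the experiments

def q_update(Q, state, action, reward, next_state, actions):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Hypothetical usage: one intersection with two signal phases.
Q = defaultdict(float)
q_update(Q, state="low_density", action="green_NS", reward=1.0,
         next_state="medium_density", actions=["green_NS", "green_EW"])
```

Starting from zero-initialized Q-values, this first update moves Q(low_density, green_NS) to α · r = 0.7.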
From Figs. 7 and 8, we can observe that the evaluation data obtained with the SVIS and QTGCS combination are better than those with SVIS and FTGCS. The experimental results show that, compared with the traditional fixed-time strategy (FTGCS), the QTGCS can improve the efficiency of the traffic system and reduce vehicle travel time: it uses real-time information of the road network to deploy green time reasonably, shortening vehicle travel time and delay time.
In SVIS, vehicle path selection is based on induction information, and fuzzy logic control is applied to the vehicle information to optimize the timing scheme (FQTGCS); that is, the light timing is based on fuzzy-optimized actions. The evaluation data of SVIS with FQTGCS were compared with those of SVIS with QTGCS. The quantity of vehicles in the traffic system is shown in Fig. 9, and the average vehicle travel time in the lane in Fig. 10:
Fig. 9. The quantity of vehicles in the traffic system
Fig. 10. Vehicle average travel time in the lane
From Figs. 9 and 10, we can observe that the FQTGCS performs better than the QTGCS. The experimental results show that, compared with the QTGCS, the FQTGCS can improve the efficiency of the traffic system and reduce vehicle travel time: it uses real-time information of the road network to deploy green time reasonably, shortening vehicle travel time and delay time.
4 Conclusion
In this paper, we formulated the traffic light timing strategy as an MDP and applied a Q-learning algorithm with a Boltzmann-based exploration strategy and a fuzzy logic control strategy. A learning agent represents an intersection and can control the traffic lights of all roads connected to the junction. The state represents the congestion situation around the junction, as seen from the traffic density of each phase at the intersection, and the action determines which phase receives the green time. Our algorithms are adaptive in nature and do not require a road network model; they can control traffic lights in real time from traffic information alone. Simulation results show that the performance of the traffic system is improved by the proposed traffic light timing strategy. In the current work, we control a traffic light only with information from its joint junction, without the other lights around it. In the future, therefore, we intend to extend the approach to coordinate the connected lights within a small area.
References
1. Abdulhai B, Pringle R, Karakoulas GJ (2003) Reinforcement learning for true adap-
tive traffic signal control. J Transp Eng 129(3):278–285
2. Arel I, Liu C et al (2010) Reinforcement learning-based multi-agent system for
network traffic signal control. IET Intell Transp Syst 4(2):128–135
3. Azimirad E, Pariz N, Sistani MBN (2010) A novel fuzzy model and control of single
intersection at urban traffic network. IEEE Syst J 4(1):107–111
4. Bi Y, Srinivasan D et al (2014) Type-2 fuzzy multi-intersection traffic signal control
with differential evolution optimization. Expert Syst Appl 41(16):7338–7349
5. Chin YK, Wei YK et al. (2012) Q-learning traffic signal optimization within multiple
intersections traffic network, pp 343–348
6. Chu T, Wang J, Cao J (2015) Kernel-based reinforcement learning for traffic signal
control with adaptive feature selection, pp 1277–1282
7. Khamis MA, Gomaa W (2014) Adaptive multi-objective reinforcement learning with
hybrid exploration for traffic signal control based on cooperative multi-agent frame-
work. Eng Appl Artif Intell 29(3):134–151
8. Lu W, Zhang Y, Xie Y (2011) A multi-agent adaptive traffic signal control system
using swarm intelligence and neuro-fuzzy reinforcement learning. In: Integrated and
sustainable transportation system, pp 233–238
9. Moghaddam MJ, Hosseini M, Safabakhsh R (2015) Traffic light control based on
fuzzy q-learning. In: Artificial intelligence and signal processing, pp 124–128
Research on the Collaborative Governance
of Innovation Network Based on the Extended
JM Model
1 Introduction
With the development of the economy, the model of single-enterprise innovation has gradually evolved into cooperative innovation, further promoting the development of innovation networks. Collaborative innovation is a key impetus for economic development, and it is a complicated form of innovative organization. Operating an innovation network is a process in which the main bodies, taking the flow of knowledge as the carrier, move from communication, coordination, and cooperation to collaboration [15]. The development of the innovation network is therefore the external form of collaborative innovation, and realizing collaborative innovation is the ultimate purpose of forming the innovation network.
Through various innovative elements (micro and macro), the innovation network uses different collaborative mechanisms to achieve nonlinear interaction
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 148
and coupling, bringing about effective knowledge flow and optimized allocation through the self-organizing mechanism, the coupling mechanism, the network drive mechanism, and the coordination mechanism, so that the overall effect is greater than the sum of the parts. In an innovation network, the resources, knowledge, and behavioral characteristics of the subjects differ greatly, and phenomena such as fuzzy relations and negative interaction often appear in the operation of the network. Perrow [8] proposed that in a tight and complex system, the complex and unpredictable interactions between non-standard parts (namely individuals of different standards, specifications, or character) can bring disastrous threats. Some studies integrate both parts of the performance equation arising from collaboration: process and productivity [6,7]. Domestic and international research has mostly focused on collaborative innovation performance, the operating carrier (mainly knowledge), the running mechanism, and the roles of the subjects. Although some scholars have proposed attaching great importance to the network of relationships and interactions between subjects, they have not studied it in depth, and research on controlling the operation of innovation networks and realizing synergies remains insufficient. To achieve the goals of an innovation network, the orderly and efficient operation of network organizations is necessary, however complicated and challenging collaboration may be, even though it may be needed now more than ever [2]. So we need to do more. Collaborative governance can effectively safeguard network relationships, benign interaction, and effective synergy [21], and is thus more suitable for controlling the process of innovation network operation.
General governance focuses on network governance at a single point, line, or dimension, ignoring the design and planning of overall network governance. Collaborative governance, by contrast, emphasizes the integration of the network structure, relationships, and interaction [29]. “Synergy” involves multiple subjects participating and cooperating, together with integrity, interactivity, and sustainability; “governance” is aimed at the network nodes, the relationships in the network, and the overall network, and also includes control of the governance targets and mechanisms. Collaborative governance can therefore guarantee the efficient operation of the network. In view of this, this research draws on the idea of collaborative governance and tries to build a collaborative governance framework for innovation networks, to better solve the problems in innovation network operation and provide a reference for the management and development of such networks.
flowing, and weak relationships promote explicit knowledge flow. The low-cost character of weak ties allows more new knowledge and resources to be absorbed, but generally yields a low-stability dominant network with insufficient knowledge flow. Strong ties guarantee a stable network, high willingness to cooperate, and sufficient knowledge flow; of course, too many strong ties will restrict the development of the network and the spread of knowledge. Different priorities are therefore needed. For most of China’s national high-tech industry bases, for example, where innovation networks are dense, weak relationships should be properly promoted to evolve into strong ones, designing and building complete industry and technology chains to achieve higher conversion rates and win-win outcomes among the members. Meanwhile, attention should be paid to developing weak relationships to obtain more knowledge, especially heterogeneous knowledge. In this way, in the innovation network, weak relationships strengthen the acquisition of heterogeneous knowledge, while strong relationships improve the efficiency of knowledge sharing.
asymmetry, low knowledge matching degree and the bridging degree for unfa-
miliar subjects, especially in knowledge.
The interaction logic of collaborative governance stresses that the main body of governance is not a single subject but all the members of the network, who work together to promote its development. Based on stakeholder theory [5], multiple participation not only promotes interaction but also controls and guides malignant interaction. In this way, it deepens understanding and trust among subjects and improves the efficiency of knowledge and resource flows. For example, regional economic competition forces the Yangtze River Delta to seek deep interactions with surrounding areas and other economic belts, and to set up different levels of city circles according to geographical relations and economic level, thereby improving cooperative interaction within city circles and promoting the development of the regional economy.
and the heterogeneity of knowledge, to develop the network’s value. Trust can exist only when it is embedded in the network. Producing and strengthening relationships can establish and strengthen trust; conversely, trust is a necessary condition for building strong relationships. On the one hand, constraint and incentive mechanisms help reduce the cost of strong relationships and the contradictions caused by complicated relationships; on the other hand, they encourage network extension and improve network activity.
Second, establish a sharing mechanism for knowledge and other resources, together with an interest distribution mechanism and a risk mechanism, to avoid opportunism. Satisfying subjects’ interests promotes cooperation, benign interaction, and the efficient flow of knowledge in the network. As well as being principal partners in the collaborative innovation network, the subjects are also competitors, and benefit disputes and conflicts among them will affect the overall effect of collaborative governance; some of their characteristics can likewise affect the formation and development of collaborative governance in the innovation network. As a result, a reasonable interest distribution mechanism and risk mechanism guarantee that the allocation of interests is effective and fair. According to the conclusion of the agricultural model, when the knowledge gap is very large, the strong party’s willingness to interact is insufficient and the weak party’s absorption ability is limited, leading to lower efficiency of interaction; conversely, when the gap is small, the effect is better. A sharing mechanism can reduce the knowledge gap and avoid information asymmetry. The process of knowledge flow involves the subjects learning and internalizing knowledge; therefore, the organizational learning mechanism needs to be further strengthened, to reduce the knowledge differences between subjects and so promote knowledge flow.
Third, establish supervision and evaluation mechanisms to assess subjects’ execution and the completion of targets. Owing to the diversification of the innovation network, a supervision mechanism is hard to operate effectively. To supervise the whole process of collaborative governance effectively, first institutionalize problems such as the organization’s powers, its constitution, and profit distribution, and implement them across the innovation network, increasing default costs and effectively reducing coordination costs so as to constrain subjects’ behavior institutionally. Second, strengthen moral construction, forming a public opinion and supervision mechanism to supervise subjects’ behavior morally in the process of collaborative governance. Through the evaluation mechanism, clarify the effect of synergy governance and identify and control the source of each problem at every stage and node.
Of course, the cooperative governance mechanism is not aimed at only one link among relationship, interaction, and synergy, but at whole-process governance. The constraint mechanism, for example, can also constrain subjects’ behavior in collaborative governance and control interaction, and the supervision and evaluation mechanisms are likewise valid for each link and each movement. As the characteristics of the innovation network and of collaborative innovation change across phases, the cooperative governance mechanism should also be adjusted. In the process of collaborative governance, collaborative
1770 C. Zou et al.
5 Conclusion
This paper discusses how to solve the problems in the operation of innovation networks through cooperative governance. Based on the extended JM model, it further analyzes the logic of collaborative governance, on this basis analyzes the cooperative governance of innovation networks, and puts forward safeguard measures for innovation network collaborative governance, so as to support and guarantee its operation and promote synergy in the innovation network. An effective cooperative governance mechanism can efficiently govern the relationships, interaction, and synergy in an innovation network, and ultimately solve problems such as unclear and soured relationships among the main bodies, the network running off track, unsustainable collaborative innovation, and inefficient coordination. Future research will address the collaborative governance mechanism and the strength of control over network relations, as well as the definition and control of relationship intensity in the governance model.
References
1. Ansell C, Torfing J (2015) How does collaborative governance scale? Policy Polit
43(3):315–329
2. Bryson JM, Crosby BC, Stone MM (2015) Designing and implementing cross-sector
collaborations: needed and challenging. Public Adm Rev 75(5):647–663
3. Charron N, Dijkstra L, Lapuente V (2014) Regional governance matters: quality of government within European Union member states. Reg Stud 48(1):68–90
4. Chen J, Yang YJ (2012) Theoretical basis and content for collaborative innovation.
Stud Sci Sci 62:270–277
5. Dugger WM (1996) The mechanisms of governance. J Econ Issues 44(4):261–281
6. Emerson K, Nabatchi T (2015) Collaborative governance regimes
7. Emerson K, Nabatchi T, Balogh S (2012) An integrative framework for collabora-
tive governance. J Public Adm Res Theor 22(1):1–29
Chang Liu1 , Yixiao Zhou2 , Wei Zhao3 , Qiang Jiang3(B) , Xuedong Liang3 ,
Hao Li3 , Hua Huang3 , and Shucen Fan4
1 School of Securities and Futures, Southwestern University of Finance and Economics, Chengdu 610052, People’s Republic of China
2 School of Economics and Finance, Deakin University, Perth, Australia
3 Business School, Sichuan University, Chengdu 610065, People’s Republic of China
jiang.qiang@outlook.com
4 Department of Economics and Management, Sichuan Technology and Business University, Chengdu 611745, People’s Republic of China
1 Introduction
Over the 30 years since reform and opening up, China’s real estate industry has gradually changed from a planned economy system to a market economy system, and real estate and its related industries have become among the most important industries in China’s regional social economy, playing an important role in national macroeconomic development. However, along with the gradual progress of the marketization of the housing market, we are facing some unexpected
c Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference
on Management Science and Engineering Management, Lecture Notes
on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0 149
negative effects in China’s housing market, including high housing prices and low housing affordability, housing prices that increased too fast, and a fever of land speculation. These negative effects demonstrate that the housing market’s operating mechanism has limits. The direct consequence is that houses have become unaffordable for low-income groups, whose needs go unsatisfied, and the housing structure has become unbalanced.
To solve the current problems in China’s housing market, the government introduced a series of monetary and fiscal policies. Policy makers tried to change the behavior of banks and real estate developers in the market and to cool the overheated housing market by standardizing housing market laws and regulations. Especially after 2008, rapidly rising housing prices became the focus of public opinion, and keeping housing prices from rapid increases became a vital problem that the government urgently needed to solve. Against this background, a series of price regulation policies were introduced.
Since 2005, the housing price regulation policies issued by the Chinese government can be roughly divided into two categories. First, policy instruments designed to suppress housing prices, such as issuing purchase-restriction orders, increasing bank interest rates, raising housing taxes, and raising the down-payment ratio for real estate. Second, policy instruments designed to stimulate housing prices, such as tax reductions, lower down-payments, and lower interest rates.
Since 2005, little evidence has been found of the efficiency of the existing housing policy. After the housing market policy failure of 2005, China’s housing market continued to deteriorate, and housing prices in many cities increased sharply in 2006. From 2005 to 2007, national housing policy focused on increasing bank interest rates and raising tax rates. For example, in May 2006 the State Administration of Taxation issued a notice on strengthening the administration of the housing business tax levy. This new tax regulation required that, after June 1st, individuals selling housing held for less than 5 years pay the business tax at the full rate. In 2007, the People’s Bank of China raised interest rates six times and the deposit reserve ratio ten times, yet it still failed to control rapidly increasing property prices.
In 2008, to cope with the financial crisis, the Chinese government decided to stimulate China’s housing market rather than suppress it, and launched a series of new policies to stimulate housing prices. These included the temporary exemption of stamp duty on personal sales or purchases of housing; the temporary exemption of land value-added tax on personal sales of housing; and allowing local governments to introduce their own policies, such as tax relief, to encourage housing consumption. In 2009, housing price regulation policy mainly inherited, strengthened, and deepened the 2008 real estate regulation policy, while highlighting the strengthening of housing security. The early-2009 housing price regulation policies therefore mainly covered aspects such as taxation, the financial market, and housing security. In 2009, while most industries were facing the financial crisis, China’s real
Housing Price Regulation Policy in China 1775
2 Literature Review
There is a rich body of economic research on the effect of national regulation policy on housing prices. The existing studies mainly focus on four aspects.
First, analyses of the factors affecting housing prices as a way to discuss the effect of regulation policies. For example, taking Wuhan as a case, Wang and Chen [14] found that four indicators had a significant impact on Wuhan housing prices: the share of sales of units below 90 square meters, the newly supplied floor area, the money supply (M2), and the lending rate. Such studies focus on qualitative analysis of regulatory policy and its effect, but lack an accurate interpretation of the effect of regulation policy.
Second, studies of the effect of a single regulation policy on real estate prices, for example monetary policy, land policy, or tax policy. Analyzing monetary policy from 1994 to 2005, Nie and Liu [11] argued that China’s monetary policy can affect real estate prices, and that the money supply played a more significant regulatory role than interest rate policy. At present, most of the domestic literature uses qualitative analysis to study the effect of policy on the regulation of the real estate market, and most results show that monetary policy has a significant impact on housing prices. These studies did not comprehensively consider the regulation effects of several policies acting jointly, and lack a comprehensive treatment of overall coordination and interaction.
et al. [13] applied neural networks to housing price predictions. Other studies
included index theories [6,12,18,20].
In summary, current studies have mainly focused on qualitative study of single and comprehensive regulation policy effects, and little of the literature uses quantitative methods to study the superposition and neutralization effects between regulation policies. This article therefore adopts the monthly price indexes between January 2006 and February 2015 to establish a housing price regulation policy effectiveness evaluation system based on the ARIMA model and the intervention analysis model, and empirically analyzes the influence of policy factors on real estate prices. Based on the model results, we conduct a quantitative performance evaluation of house price regulation policy by year. The quantitative study focuses on three key themes: the anticipation and conductivity of the implementation effects of recent housing price regulation policies; the mutual influence between the implementation effects of policies introduced at different times, including their hedging and superposition; and reasonable prediction of policy effects. These results can provide a scientific basis for the formulation of new regulation policies.
3 Theoretical Basis
Time series are often influenced by special events and situations, such as external interventions. The fit here refers to the goodness of fit of the forecasting model, that is, the agreement between the simulated values generated by the prediction model and the historical actual values.
(1) The form of intervention variables
The basic variables of the intervention analysis model are the intervening variables. There are two common kinds: the first is the sustained intervening variable, which indicates that the effect persists once the event has happened at time T, and can be represented by a step function; the second is the transient intervening variable, which expresses an effect only at the moment the event occurs, represented by a unit impulse function. This article not only studies the regulation effect of housing policy at a certain time but also analyzes the impact of these policies on future housing prices, so we chose the first kind of intervention variable, as in Eq. (1):

$$S_t^T = \begin{cases} 0, & \text{before the intervention event } (t < T) \\ 1, & \text{after the intervention event } (t \ge T). \end{cases} \qquad (1)$$
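The two kinds of intervention variable can be constructed directly; a small sketch (function and variable names are ours):

```python
def step_intervention(T, n):
    """Sustained intervention S_t^T of Eq. (1): 0 before time T, 1 from T onwards."""
    return [0 if t < T else 1 for t in range(n)]

def pulse_intervention(T, n):
    """Transient intervention: a unit impulse, 1 only at time T."""
    return [1 if t == T else 0 for t in range(n)]
```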
$$Z_t = \omega S_t^T, \qquad (2)$$

$$Z_t = \omega B^b S_t^T, \qquad (4)$$

where $S_t^T = 1$,

$$Z_t(1 - \delta B) = \omega, \qquad (5)$$

$$Z_t - \delta Z_{t-1} = \omega. \qquad (6)$$
Rearranging,

$$Z_t = \frac{\omega}{1 - \delta B}, \qquad (7)$$
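Because B is the backshift operator, Eq. (7) is equivalent to the recursion Z_t = δZ_{t−1} + ωS_t^T: under a step intervention, the effect builds up geometrically toward ω/(1−δ). A sketch (an illustrative helper, not the authors' code):

```python
def intervention_effect(omega, delta, s):
    """Iterate Z_t = delta * Z_{t-1} + omega * S_t, the recursive form of Eq. (7)."""
    z, out = 0.0, []
    for s_t in s:
        z = delta * z + omega * s_t
        out.append(z)
    return out
```

For example, with ω = 1 and δ = 0.5, a step intervention starting at t = 2 produces the sequence 0, 0, 1, 1.5, … approaching ω/(1−δ) = 2.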
where B is the backshift operator. If the forecasting model fits well and the forecast values accord with the actual situation, then the prediction model has application value; otherwise the prediction model is invalid. On the basis of the ARIMA model, it is therefore necessary to carry out intervention analysis. The purpose of our intervention analysis is to assess quantitatively the specific impact of a policy intervention or emergency on the economic environment and economic process.
4 Empirical Study
4.1 Evaluation Index Selection
This paper investigates how the housing market responds to changes in housing price regulation, and the housing price index is the most direct indicator of housing price changes. The advantage of the housing price index is its “homogeneity”: after controlling for factors such as housing quality, construction structure, and geographical location, it can exclude price fluctuations due to market supply and demand and other causes. The monthly housing price indexes for 2006 to 2010 can be obtained directly from the iFinD software, while the indexes from 2011 to February 2015 can be calculated by averaging the monthly housing price indexes of 70 large and medium-sized cities nationwide. The random time series of the housing price index is thus obtained.
Observing the housing price index curve in Fig. 1, we find that the data are not stationary and contain a clear trend. The data are transformed into a stationary random sequence by first-order differencing, as shown in Fig. 2:
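First-order differencing is the standard way to remove such a trend; a minimal sketch:

```python
def first_difference(series):
    """Return x_t - x_{t-1}, turning a trending index into (ideally) a stationary one."""
    return [b - a for a, b in zip(series, series[1:])]
```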
where $Z_t$ is the difference between the actual and predicted values. Namely, in Eq. (9),

$$Z_t = \frac{0.0044}{1 - 0.8357B}. \qquad (9)$$
Through the t-test of the regression value of δ, the estimated parameter δ is found to be significant. After the intervention analysis, the ARIMA model is therefore as in Eq. (10):

$$X_t = 0.0190 + 0.8370X_{t-1} + \varepsilon_t - 0.2787\varepsilon_{t-1} - \frac{0.0044}{1 - 0.8357B}. \qquad (10)$$
Analyzing the housing price regulation policies since 2006, we find three main kinds of policy that take effect promptly. The first involves the adjustment of relevant taxes. For example, in May 2006 the state promulgated the “Six National Measures”, and the State Administration of Taxation issued its notice on issues in strengthening the administration of housing business tax collection. This policy promulgation is denoted St2. Applying the validity test, we see that the policy effect in the following month is significant, indicating that China’s commercial housing price index reacted within that month.
The second is to adjust the bank loan interest rate and the down-payment ratio for first loans. In September 2006, the People’s Bank of China issued a notice on strengthening the management of commercial real estate credit, which clearly stipulates that the down-payment ratio for loans on a second house shall not be less than 40%, that lending rates may not be lower than 1.1 times the benchmark interest rate announced by the People’s Bank of China for the same period and grade, and that the down-payment ratio and interest rate rise significantly with volume, the purpose being to regulate trading order and rectify the real estate market.
The third concerns investigating and punishing violations of law and discipline and ownership disputes. However, policies that take effect immediately are still relatively few, and most policies are lagging. For example, the state issued regulations to strengthen land supply regulation and shorten the period of land development in October 2007, and in the same month the property tax “idling” pilot was extended to 10 cities; the housing price index only began to react in December.
6 Conclusion
policy also differs: the policies of September 2007 and February 2013 had a certain impact on housing prices over the following two years. Policies launched by the government may overlap or neutralize each other’s effects; these policies interact.
Using the ARIMA model and intervention analysis to analyze the effect of policy interventions, predicting the effect of each policy, and analyzing superposition or neutralization by linking a policy with previous policies, will provide a reference for the relevant functional departments in formulating effective housing price regulation, so that the policies introduced together achieve the best regulation effect.
References
1. Agarwal S, Rengarajan S et al (2016) School allocation rules and housing prices: a quasi-experiment with school relocation events in Singapore. Reg Sci Urban Econ 58:42–56
2. Chen WD, Hall S, Pauly P (2016) Policy failure or success? Detecting market
failure in China’s housing market. Econ Model 56:109–121
3. Chen X, Fang Y (2016) Research on the implementation and exit effect of the
control policies on real estate based on VECM and DSGE model. Mod Econ Sci
38:31–43
4. Crawford GW, Fratantoni MC (2003) Assessing the forecasting performance of
regime-switching, arima [auto regressive integrated moving average] and garch [gen-
eralized autoregressive conditional heteroskedasticity] models of house prices. Real
Estate Econ 31:223–243
5. Dang Y, Liu Z, Zhang W (2014) Land-based interests and the spatial distribution
of affordable housing development: the case of Beijing, China. Habitat Int 44:137–
145
6. Dezhi L, Yanchao C et al (2016) Assessing the integrated sustainability of a public
rental housing project from the perspective of complex eco-system. M.E. Sharpe
7. Feng H, Lu M (2010) School quality and housing prices: empirical evidence from
a natural experiment in Shanghai, China. J Hous Econ 22(4):291–307
8. Feng Q, Wu GL (2015) Bubble or riddle? An asset-pricing approach evaluation on
China’s housing market. Econ Model 46:376–383
9. Ge J (2017) Endogenous rise and collapse of housing price: an agent-based model
of the housing market. Comput Environ Urban Syst 62:182–198
10. Hui ECM, Zhong J, Yu K (2016) Land use, housing preferences and income poverty:
in the context of a fast rising market. Land Use Policy 58:289–301
11. Nie X, Liu C (2005) An empirical study on the transmission of Chinese monetary
policy toward investment. Manage Rev 17(2):51–54
12. Shi J, Zou W (2009) Forecasting house price index for USA with autoregressive
integrated moving average models. In: The international conference on manage-
ment of technology, Taiyuan, pp 590–594
13. Wan TL, Wang L et al (2016) Housing price prediction using neural networks. In:
International conference on natural computation and fuzzy systems and knowledge
discovery, pp 518–522
14. Wang L (2013) The effect of recent real estate regulation policy on house price: a
case study of Wuhan city. Mod Prop Manage 221(2):306–315
15. Wen H, Zhang Y, Zhang L (2014) Do educational facilities affect housing price?
An empirical study in Hangzhou, China. Habitat Int 42(42):155–163
16. Woo K, Sung-Suk R (2015) Time series modeling for forecasting land price change
rate-focusing on the intervention ARIMA model. Korea Real Estate Acad Rev
60:142–154
17. Wu W (2014) Effectiveness evaluation of housing price regulation policy based on
PSR model.business information. Bus Inf 20:63–63
18. Xie X, Hu G (2007) A comparison of Shanghai housing price index forecasting. In:
International conference on natural computation, pp 221–225
19. Zhang H, Li Y, Li H (2013) Multi-agent simulation of the dynamic evolutionary
process in Chinese urban housing market based on the GIS: the case of Beijing.
Autom Constr 35(11):190–198
20. Zhang Z, Tang W (2016) Analysis of spatial patterns of public attention on housing
prices in Chinese cities: a web search engine approach. Appl Geogr 70:68–81
21. Zhou Z (2016) Overreaction to policy changes in the housing market: evidence
from Shanghai. Reg Sci Urban Econ 58:26–41
A Descriptive Analysis of the Impact of Air
Pollution on the Mortality of Urban and Rural
Residents in Mianyang
Jianping Xu1 , John Thomas Delaney2 , Xudong Chen3(B) , and Liming Yao4
1 Regional Economic Development and Enterprises Administration Center, Sichuan University, Chengdu 610062, People's Republic of China
2 Kogod School of Business, American University, 4400 Massachusetts Avenue, Washington, DC 20016-8044, USA
3 School of Management Science, Chengdu University of Technology, Chengdu 610062, People's Republic of China
chengxudong198401@163.com
4 Uncertainty Decision-Making Laboratory, Sichuan University, Chengdu 610064, People's Republic of China
Abstract. The paper uses air pollution data for Mianyang, a city in Sichuan Province, and death data for its urban and rural residents from 2008 to 2014 to investigate the relationship between air pollution and human mortality. SPSS 19.0 is used to conduct a descriptive analysis of the correlation between the principal air pollutants (PM10, NO2, and SO2) and mortality (broken down by gender, age, education, respiratory diseases, non-respiratory diseases, and chronic obstructive pulmonary disease). The paper thus shows to what extent air pollution affects the mortality rate of the people of Mianyang, thereby further evaluating the influence of air pollution on residents' health. The results indicate that the rising trend of air pollution is consistent with the expansion of industrial production, and that the mortality rate is correlated with air pollution during the normal development of society. The government should emphasize air pollution control and take measures to reduce air pollution, so as to improve the quality of life of urban and rural residents and to reduce the mortality caused directly or indirectly by air pollution.
© Springer International Publishing AG 2018
J. Xu et al. (eds.), Proceedings of the Eleventh International Conference on Management Science and Engineering Management, Lecture Notes on Multidisciplinary Industrial Engineering, DOI 10.1007/978-3-319-59280-0_150

1 Introduction
Haze-related pollution has been shown to cause lung cancer and other diseases and poses a serious threat to future health across the country [9]. Numerous Chinese and foreign studies have shown that PM10 and gaseous pollutants, such as sulfur dioxide (SO2) and nitrogen dioxide (NO2), have a significantly negative impact on human health. Moreover, a positive correlation between mortality and air pollution has been demonstrated in multiple studies [1,4,8,16]. The 2002 WHO report estimated that global urban air pollution caused at least 1 million deaths per year and 7.4 million disability-adjusted life years (DALYs) [3].
China is one of the countries with the most severe environmental problems. The concentration of particulate air pollution far exceeds that of the developed countries in Europe and North America. Over the past 30 years, the economy and industry have developed continuously, and the rapid growth of energy consumption has aggravated air pollution, which has become one of the major risk factors damaging human health. The irrationality of the energy structure lies in the fact that coal accounts for nearly 70% of total energy consumption in China, and the particulate matter and gaseous pollutants emitted from coal combustion are the main sources of air pollution. Chinese cities face severe air pollution, and PM10 concentrations are much higher than 20 µg/m3 (annual average) and 50 µg/m3 (24-h average), the air quality standards recommended by the WHO.
Mianyang City, located in Southwest China, is the second largest city in Sichuan Province in terms of economic output. It has a dry climate and is densely populated. Agriculture, industry, and services (transportation) are well developed there, with a ratio of 15.3:50.5:34.2, indicating that industry accounts for half of the economic output. It is therefore of significance to examine the impacts of air pollution on human health. In order to study the impact of air pollution on the mortality rate of urban and rural residents in Mianyang, the paper conducts a descriptive analysis, investigating the correlation between the principal air pollutant data (PM10, NO2, and SO2) and human mortality.
3 Results
and damage of residential buildings and industrial plants caused by the earthquake, to a certain extent, led to an increase in airborne dust particles and toxic gas content and exacerbated air pollution, resulting in the Mianyang air pollution peak between 2008 and 2009.
(2) During the restoration and reconstruction process starting in June 2008, because of the industrial and agricultural losses and the impacts of the earthquake, local workers tended to seek employment outside Mianyang, while the willingness of non-local job seekers, especially skilled workers and high-tech talent, to come was seriously dampened for a certain period of time. Second, because of the severe capital losses caused by the damage to plants and equipment, some enterprises needed relocation and reconstruction, which increased costs and slowed the industrial and agricultural recovery in Mianyang. Moreover, most enterprises needed a longer recovery period, since the major coal companies stopped production because of the disaster, other electricity supplies were affected, and the main roads and power facilities were severely damaged [7]. During the recovery period, lower industrial and agricultural emissions, caused by the decreased productivity of the enterprises, together with lower traffic emissions, meant that air pollution generally showed a downward trend from 2009 to 2012. After three to four years of recovery, industry and agriculture had largely recovered by 2012 and the infrastructure had gradually improved, resulting in a rise in air pollution.
(3) The reasons why air pollution in Mianyang showed a decreasing trend between 2013 and 2014 are twofold. First, according to the Sichuan Air Pollution Prevention and Control plan, the annual average concentration of PM10 was required to decrease by 10% relative to 2012. Mianyang, as one of the key cities in Sichuan Province, took strict emission reduction measures to control coal-fired elevated sources, industrial point sources, urban non-point sources, and mobile sources (vehicle pollution) through strict assessment and accountability. Therefore, air pollution emissions were reduced from a number of sources. Second, because of the overall improvement in the environmental awareness of Mianyang residents, the development of tourism, and the promotion of energy conservation and green travel, air pollution emissions fell.
Year | Total population of Mianyang | Male death toll | Female death toll | Unknown degree of education | Illiterate/semi-literate | Elementary school education | Middle school education | University and above
2008 | 5407100 | 10360 | 6504 | 1867 | 6912 | 5245 | 2569 | 271
2009 | 5446500 | 10557 | 6698 | 2047 | 6781 | 5620 | 2625 | 280
2010 | 5418700 | 12129 | 7553 | 2543 | 7648 | 6194 | 3025 | 272
2011 | 5433600 | 14387 | 8999 | 2706 | 9449 | 7490 | 3453 | 284
2012 | 5454000 | 15395 | 10044 | 2980 | 10433 | 8197 | 3520 | 306
2013 | 5474000 | 15942 | 10382 | 3243 | 10605 | 8458 | 3725 | 290
2014 | 5488000 | 20212 | 13252 | 0 | 0 | 31275 | 1827 | 386
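The table's male and female death tolls and total population also yield the crude mortality rate for each year. A minimal Python sketch, using only the figures reported in the table above (the per-1,000 scaling is a standard convention, not something the paper specifies):

```python
# Crude mortality rate per 1,000 residents, computed from the Mianyang
# table above: (male deaths + female deaths) / total population * 1000.
data = {
    # year: (total population, male deaths, female deaths)
    2008: (5407100, 10360, 6504),
    2009: (5446500, 10557, 6698),
    2010: (5418700, 12129, 7553),
    2011: (5433600, 14387, 8999),
    2012: (5454000, 15395, 10044),
    2013: (5474000, 15942, 10382),
    2014: (5488000, 20212, 13252),
}

rates = {
    year: (male + female) / pop * 1000
    for year, (pop, male, female) in data.items()
}

for year, rate in sorted(rates.items()):
    print(year, round(rate, 3))
# The rate climbs from roughly 3.1 per 1,000 in 2008 to roughly 6.1 in 2014,
# consistent with the rising death tolls discussed in the text.
```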
that Mianyang faced a severe aging problem from 2012 to 2014. More precisely, Mianyang has a large and fast-growing aged population. By the end of 2013, residents aged 60 and above numbered 1.07 million, accounting for 19.8% of the total population, and 22,939 individuals died of old age in 2014. Therefore, the 2013–2014 mortality rate was influenced by the aging population [10,14]. Because of these changes in the social demographic structure, the measured impact of air pollution on mortality is reduced.
It can be seen clearly from Fig. 2 that there was a dramatic change in the death rate of residents with primary and secondary education. The reason is that the Mianyang Statistics Bureau changed the classification standard for education-specific mortality: the categories of unknown education, illiterate or semi-literate, primary education, secondary education, and university and above were replaced by junior high school and below, technical secondary education, high school, technical school, and universities and colleges. Hence, the dramatic changes in mortality between 2013 and 2014 were due to the classification criteria of the Mianyang Statistics Bureau and were not related to air pollution.
The mortality rate of people with primary school education or who were illiterate or semi-literate grew steadily from 2008 to 2013, whereas that of residents with secondary education or university and above remained stable at the same level. This indicates that, at the same air pollution level, the higher the education level, the stronger the awareness of environmental protection. Such residents tend to take preventive measures in a more timely and effective manner when exposed to air pollution, and they are therefore less affected [5,11]. The impact of air pollution on mortality thus varies with the educational level of residents. The more serious the air pollution, the higher the mortality rate; however, because of differences in education level, awareness of air pollution prevention varies considerably. People with lower education levels may have weaker awareness of air pollution prevention, which results in a higher chance of being affected by air pollution.
According to Fig. 3, between 2008 and 2014, the mortality rate of residents aged 0–44 years and 45–64 years showed a stable trend, while that of people aged 65 years and over was consistent with the trend of air pollution between 2008 and 2013, indicating that air pollution affected individuals aged 65 years and above the most. For residents aged 44 and over, physical function declines with age, and they are more easily affected by the external environment. Air pollution therefore affects individuals aged 44 and over to a larger extent, and their death rate may increase as the air pollution concentration rises [13]. However, the mortality rate of people aged 65 years and over in 2013–2014 showed a reverse trend with air pollution, which contradicts the argument that senior residents are susceptible to air pollution. A review of the literature shows that Mianyang faced a severe aging problem from 2012 to 2014: by the end of 2013, residents aged 60 and above numbered 1.07 million, accounting for 19.8% of the total population. Hence, the large aging population affects the mortality figures.
It can be seen from Fig. 4 that the mortality rate caused by respiratory diseases showed a steady upward trend from 2008 to 2013 but grew rapidly from 2013 to 2014. Because of the 2008 Wenchuan earthquake, Mianyang experienced a recovery period for agriculture, industry, and transport infrastructure. As a result, air pollution declined from 2009 to 2011. Nevertheless, the concentration of air pollution increased from 2011 to 2013, a period that correspondingly saw a rising trend in respiratory-disease-related deaths. This indicates that air pollution has a significant impact on residents who died of respiratory diseases. Between 2013 and 2014, the mortality rate from respiratory diseases showed a reverse trend with air pollution, contradicting the positive correlation between respiratory-disease-related deaths and air pollution reported in existing studies [6,12,15]. From the Mianyang 2013–2014 government and other reports, the paper finds that the city faced a severe aging problem. Because of the prevalence of respiratory diseases among senior residents, the 2013–2014 death rate was correlated considerably with the aging population and less with air pollution.
4 Discussion
The paper uses descriptive statistical analysis to investigate the correlation between major air pollutants and residents' mortality in Mianyang City. Given the impacts of natural disasters, national policies, and socio-demographic changes, the paper concludes that urban and rural residents who are male, illiterate or semi-literate or with only primary education, middle-aged or elderly, or suffering from respiratory diseases are more susceptible to increases in PM10, NO2, and SO2 concentrations.
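The descriptive analysis was run in SPSS 19.0; at its core it is a Pearson correlation between annual series. The sketch below implements the coefficient in plain Python. The PM10 series is a hypothetical placeholder (the paper's measured concentrations appear only in its figures), while the mortality rates follow from the table's death tolls and population:

```python
# Pearson correlation between an annual pollutant series and a mortality
# series, as a plain-Python sketch of the paper's descriptive analysis.
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical annual mean PM10 (ug/m3) for 2008-2014: placeholder values,
# NOT the paper's measurements.
pm10 = [95, 102, 88, 80, 86, 92, 84]
# Crude mortality per 1,000, derived from the table's death tolls/population.
mortality = [3.12, 3.17, 3.63, 4.30, 4.66, 4.81, 6.10]

r = pearson(pm10, mortality)
print(f"Pearson r = {r:.3f}")  # always a value in [-1, 1]
```

With real pollutant measurements substituted for the placeholder series, the same helper reproduces the pollutant-mortality correlations the paper reports from SPSS.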
The accumulation of pollutants and photochemical reactions in various cities lead to complex regional air pollution issues. Mianyang, located in the northwest of the Sichuan Basin, has a subtropical humid monsoon climate. The hot, rainy, and wet weather is not conducive to the diffusion of particulate matter, aggravating Mianyang's air pollution. Furthermore, factors such as the high-level exploitation of Mianyang's rich mineral resources and its emphasis on industry
References
1. Fischer PH, Marra M et al (2015) Air pollution and mortality in seven million adults: the Dutch Environmental Longitudinal Study (DUELS). Environ Health Perspect 123(7):697–704
2. Gotoh T, Nishimura T et al (2002) Air pollution by concrete dust from the Great
Hanshin Earthquake. J Environ Qual 31(31):718–723
3. Gulland A (2002) Air pollution responsible for 600 000 premature deaths world-
wide. BMJ (Clinical Res Ed) 325(7377):1380
4. Jerrett M, Burnett RT et al (2013) Spatial analysis of air pollution and mortality
in Los Angeles. Am J Respir Critical Care Med 188(5):727–736
5. Kan H, Chen B (2008) Season, sex, age, and education as modifiers of the effects of outdoor air pollution on daily mortality in Shanghai, China: the Public Health and Air Pollution in Asia (PAPA) study. Environ Health Perspect 116(9):1183–1188
6. Katanoda K, Sobue T, Satoh H (2011) An association between long-term exposure
to ambient air pollution and mortality from lung cancer and respiratory diseases
in Japan. J Epidemiol 21(2):132–143
7. Kirby E (2008) Stress changes from the 2008 Wenchuan earthquake and increased
hazard in the Sichuan basin. Nature 454(7203):509–510
8. Lelieveld J, Evans JS et al (2015) The contribution of outdoor air pollution sources
to premature mortality on a global scale. Nature 525(7569):367–371
9. Li Y, Xu Y et al (2016) Haze-related air pollution and the tourism industry in
China. In: GAI international academic conferences proceedings, pp 122–131
10. Markides KS, Eschbach K (2005) Aging, migration, and mortality: current status
of research on the Hispanic paradox. J Gerontol 60(Spec No 2):68
11. O’Neill MS, Bell ML et al (2008) Air pollution and mortality in Latin America:
the role of education. Epidemiology 19(6):810–819
12. Saldiva PH, Lichtenfels AJ et al (1994) Association between air pollution and
mortality due to respiratory diseases in children in São Paulo, Brazil: a preliminary
report. Environ Res 65(2):218–225
13. Schwartz J, Dockery DW (1992) Increased mortality in Philadelphia associated
with daily air pollution concentrations. Am Rev Respir Dis 145(3):600–604
14. Strehler BL, Mildvan AS (1960) General theory of mortality and aging. Science
132(3418):14
15. Wong TW, Tam WS et al (2002) Associations between daily mortalities from
respiratory and cardiovascular diseases and air pollution in Hong Kong, China.
Occup Environ Med 59(1):30–35
16. Xie W, Li G et al (2015) Relationship between fine particulate air pollution and
ischaemic heart disease morbidity and mortality. Heart 101(4):257–263
Author Index
W
Wang, Aixin, 878
Wang, Chunxiao, 106, 212
Wang, Fuzheng, 180
Wang, Hong, 1737
Wang, Hongchun, 998
Wang, Hui, 600
Wang, Kunling, 1677
Wang, Lei, 490
Wang, Minxi, 986
Wang, Rui, 180
Wang, Tao, 1762
Wang, Tianjin, 1522
Wang, Xian, 311
Wang, Xueying, 1667
Wang, Yahong, 1019
Wang, Yinhai, 286
Wang, Yu, 106
Wang, Yuanyuan, 847
Wang, Yusheng, 1307
Wang, Zeming, 577
Wang, Zhong, 1019
Wei, Qifeng, 166
Wei, Ying, 1296, 1645
Wen, Feng, 1752
Wu, Ke, 634
Wu, Mingcong, 421, 1089
Wu, Pingwen, 721
Wu, Qiong, 791
Wu, Yilun, 791
Wu, Zhibin, 1688
X
Xiao, Le, 106, 212
Xiao, Min, 547
Xie, Jiming, 1019
Xie, Tao, 825
Xie, Zongtang, 522, 1457
Xing, Jiankai, 452
Xiong, Guoqiang, 311
Xiu, Hongxia, 1457
Xu, Caiyang, 129, 1007
Xu, Dirong, 1296
Xu, Jianping, 1786
Xu, Jing, 1067, 1573
Xu, Jinhua, 600
Xu, Jiuping, 3, 923
Y
Yamamoto, Masahide, 411
Yan, Fang, 204, 847
Yan, Jinjiang, 452
Yan, Sicheng, 670
Yan, Tengteng, 1559
Yang, Dan, 1476
Yang, Jianchao, 1136
Yang, Jing, 1653
Yang, Mengjia, 1253
Yang, Qian, 438
Yang, Qing, 1667
Yang, Tian, 1497
Yang, Xiongtao, 1667
Yang, Ying, 311
Yang, Zhen, 499
Yao, Chenglin, 63
Yao, Liming, 1351, 1786
Ying, He, 1535
Ying, Qianwei, 1497
Yonezawa, Yuji, 804
Yoshida, Taketoshi, 644
You, Xiaoling, 986
Yu, Dongjing, 1122
Yu, Lei, 1548
Yu, Weiping, 233
Yuan, Xiaoyue, 1535
Yuan, Yuan, 274, 538, 1573
Yun, YoungSu, 962, 1030
Z
Zakhidov, Romen, 1198
Zeng, Jianqiu, 490
Zeng, Ziqiang, 286, 708
Zhan, Qinglong, 1253
Zhang, Dan, 878
Zhang, Huanmei, 814
Zhang, Jing, 204
Zhang, Liangqing, 398
Zhang, Liming, 1573
Zhang, Linling, 91
Zhang, Wenqiang, 106, 212
Zhang, Xinli, 1522
Zhang, Ye, 1043
Zhang, Yi, 868
Zhang, Ying, 117, 680
Zhang, Yong, 814