Exploratory Study of Performance Evaluation Models for Distributed Software Architecture

S.O. Olabiyisi; E.O. Omidiora
Department of Computer Science & Engineering, Ladoke Akintola University of Technology, Ogbomoso, Oyo State, Nigeria

Faith-Michael Uzoka
Department of Computer Science & Information Systems, Mount Royal University, Calgary, Canada

Boluwaji A. Akinnuwesi
Department of Information Technology, Bells University of Technology, Ota, Ogun State, Nigeria

Victor W. Mbarika
International Centre for Information Technology and Development, Southern University, Baton Rouge, Louisiana, USA

Mathieu Kourouma
Department of Computer Science, College of Sciences, Southern University, Baton Rouge, Louisiana, USA

Hyacinthe Aboudja
Department of Computer Science, School of Business, Oklahoma City University
Several models have been developed to evaluate the performance of Distributed Software Architecture (DSA) in order to avoid problems that may arise during system implementation. This paper presents a review of DSA performance evaluation models with a view to identifying the common properties of the models. The study establishes that the existing models evaluate DSA performance using machine parameters such as processor speed, buffer size, cache size, server response time, server execution time, bus and network bandwidth, among others. The models are thus classified as machine-centric. Moreover, the involvement of end users in the evaluation process is not emphasized. Software is developed to satisfy specific requirements of the client organization (the end users); therefore, the value of involving users in evaluating DSA performance should not be underestimated. This study suggests future work on establishing contextual organizational variables that can be used to evaluate DSA. In addition, to complement the existing models, work should be done on developing a user-centric performance evaluation model that directly involves the end users in the evaluation of DSA, using the identified contextual organizational variables as evaluation parameters.
Keywords: Distributed software, Performance, Performance evaluation model, Software system architecture, Client organization, Machine-centric, User-centric
INTRODUCTION
Today, distributed computing applications are used by many people in real-time operations such as electronic commerce, electronic banking, online payment, et cetera [22]. Distributed computing is an enabling technology for modern enterprise applications; thus, in the face of globalization and ever-increasing competition, Quality of Service (QoS) attributes like performance, security, reliability, scalability, and robustness are of crucial importance [29]. Companies must ensure that the distributed software (DS) they operate not only provides all relevant functional services but also meets the performance expectations of their customers. It is therefore imperative to analyze and predict the expected performance of distributed software systems at the level of the architectural design, in order to avoid the pitfalls of poor QoS during system implementation.
Software architecture (SA) is a phase of software design which describes how a system is decomposed into components, how these components are interconnected, and how they communicate and interact with each other. This phase of software design is a major source of errors if the organizational structure of the different components is not carefully defined and designed. There are two parts to SA [6, 33]. The first part is the micro-architecture, which covers the internal structure of the software system, such as the conceptual architecture, module interconnection architecture, execution architecture, and code architecture. The second part of SA is the macro-architecture, which focuses on external factors that could influence the design and implementation of the software system. Examples of such external factors are the culture and beliefs of people (users), government policies and regulations, and the disposition of people towards the use of computers.
SA is an important phase in the software life cycle as
it is the earliest point and highest level of abstraction
at which useful analysis of a software system is
possible [35]. Hence, performance analysis at this
level can be useful to establish whether a proposed
architecture satisfies the end users’ requirements
and also meets the desired performance
specifications. It also helps to identify potential errors and to verify that the quality requirements have been addressed in the design, thus avoiding major modifications later in the software development life cycle or tuning of the system after deployment. SA is considered the first product in an architecture-based development process, and evaluation at this level should reveal requirement conflicts and incomplete design descriptions from the stakeholders’ perspective [6].
Performance of software is a quality attribute that is
measured in any of the following metrics: system
throughput, responsiveness, resource utilization,
turnaround time, latency, failure rate, and fault
tolerance. Thus, assessing and optimizing system
performance is essential for the smooth and efficient
operation of the software system. There are several approaches to evaluating the performance of a system architecture. One of the earliest is the fix-it-later approach [3], which advocates concentrating on software correctness and deferring performance considerations to the integration testing phase. If performance problems are then detected, additional hardware may be acquired; otherwise, the software is tuned to correct the problems. This approach has several limitations: acquiring and installing new hardware takes time; tuning the software also takes time and can be costly; tuning may distort the original software design; and testing must be repeated after code changes. Even after the problems are corrected, users may retain a negative impression of the system. The rationale for the fix-it-later approach is to save development time and cost; however, this saving is not realized if initial performance is unsatisfactory, because of the additional time and cost of tuning and maintenance. Connie [3] also proposed a Design-based Evaluation and Prediction Technique (ADEPT), an analysis technique used in conjunction with the performance engineering discipline. ADEPT was a strategy devised to combat the fix-it-later principle and to support the performance engineering process. ADEPT evaluates the performance of an information system early in the life cycle, using specifications of both expected resource requirements and their upper bounds. The system design is likely to be stable if the performance goal is satisfied for the upper bound. ADEPT had the following shortcomings: lack of an automatic feedback component, insufficient robustness to evaluate large and complex systems, inability to eliminate unwanted arguments in the course of evaluation, and inability to work in concurrent processing environments.
In recent years, several models have been developed to evaluate the performance of DSA. The survey in this paper covers developments over roughly a decade (1999 – 2010), with the aim of identifying the parameters used by each model for evaluating DSA performance and deducing the properties that are common to the models. A direction for further research is proposed as a consequence.
RELATED WORKS
Many studies have been carried out on the survey of
system performance evaluation models with the
ultimate goal of providing recommendations for
future research activities. Those activities could
significantly improve the performance evaluation and
prediction of software system architecture. A survey
of the approaches to evaluate software performance
from 1960 to 1986 was done in [4]. The study
pointed out the breakthroughs leading to the
software performance engineering approach (SPE)
and a comprehensive methodology for constructing
software to meet performance goals. The concepts,
methods, tools, and use of SPE were summarized
and future trends in each area were suggested.
In [6], eight architecture analysis methods were reviewed with the aim of discovering similarities and differences between them through classification, comparison, and appropriateness studies. The eight methods considered are: SAAM (Scenario-Based Architecture Analysis Method), SAAMCS (SAAM Founded on Complex Scenarios), ESAAMI (Extended SAAM by Integration in the Domain), SAAMER (Software Architecture Analysis Method for Evolution and Reusability), ATAM (Architecture Trade-Off Analysis Method), SBAR (Scenario-Based Architecture Reengineering), ALPSM (Architecture Level Prediction of Software Maintenance), and SAEM (Software Architecture Evaluation Model). The authors found that, at the time, SAAM had been used for different quality attributes such as modifiability, performance, availability, and security, and had been applied in several domains, unlike the other methods, which were still undergoing refinement and improvement. As a result, future work was proposed to evaluate the effects of their various usages and to create a repeatable method based on repositories of scenarios, screening, and elicitation questions.
Three concerns relating to software design specifications, performance models, and the analysis process were highlighted in [31]. The paper made the following recommendations: use standard software artifacts such as Unified Modeling Language (UML) diagrams for software design specifications; maintain a strong semantic mapping between software artifacts and performance models as a strategy for reducing performance model complexity while preserving a meaningful semantic correspondence; use simulation in addition to analytical techniques to address performance model complexity; and provide feedback, which is a key success factor for the widespread use of performance analysis models.
In [1], a review of performance prediction techniques for component-based software systems was carried out and the following recommendations were made: (1) integration of quantitative prediction techniques into the software development process; (2) design of component models that allow quality prediction, and building of component technologies that support quality prediction; (3) inclusion of quality attributes such as reliability, safety, or security in the software development process; and (4) study of the interdependencies among the different quality attributes to determine, for example, how the introduction of performance predictability can affect other attributes such as reliability or maintainability.
In [7], three foundational formal software analyses were described. The authors reviewed emerging trends in software model checking and identified future directions that promise to significantly improve its cost-effectiveness.
CLASSIFICATION OF DSA PERFORMANCE
EVALUATION MODELS
This paper classifies existing performance models based on the technique used to develop them. The techniques are: (1) Factor Analysis; (2) Queuing Network; (3) Petri Net; (4) Pattern-Based; (5) Hierarchical Modelling; (6) Performance Analysis and Characterization Environment (PACE) Based; (7) Component-Based Modelling; (8) Scenario-Based; (9) Soft Computing; (10) Relational Approach; (11) Software Architecture Analysis Method (SAAM); (12) Aspectual Software Architecture Analysis Method (ASAAM); and (13) Hybrid Approaches, such as UML-Petri Net, UML-Stochastic Petri Net, Queuing Petri Net, and Soft Computing combinations. The models are reviewed in order to establish the kinds of parameters they use to evaluate DSA.
Factor Analysis (FA) Based Approach
The FA approach was used in [2] to develop a model for analysing Information Technology (IT) software projects, with the aim of establishing the success or failure of a project before it takes off. FA as implemented in the SPSS and Statview software packages was used. Fifty performance indices of IT project planning, execution, management, and control were formulated. Eleven factors were extracted and subjected to further analysis with a view to estimating and ranking their contributions to the success of IT projects. The model was tested using real-life sample data gathered through questionnaires administered to the principal actors of popular IT software projects in Nigeria. The significant contribution of the research is the provision of a working model that utilizes both quantitative and qualitative decision variables in assessing the success or failure of IT projects; this serves as a template for evaluating IT projects prior to implementation. The model was, however, not used to evaluate the performance of software system architecture.
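To make the technique concrete, the sketch below applies factor analysis to a synthetic matrix of project performance indices using scikit-learn. The shapes (50 indices, 11 extracted factors) mirror the counts reported above, but the data, library choice, and variable names are our own illustration, not the model of [2].

```python
# Illustrative sketch only: factor analysis on synthetic questionnaire data,
# mirroring the 50-index / 11-factor setup reported for [2].
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))        # 120 synthetic respondents x 50 indices

fa = FactorAnalysis(n_components=11, random_state=0)
scores = fa.fit_transform(X)          # respondent scores on 11 latent factors
loadings = fa.components_             # 11 factors x 50 indices

# Rank factors by their squared-loading mass, as a crude analogue of
# ranking each factor's contribution to project success.
contribution = (loadings ** 2).sum(axis=1)
for rank, idx in enumerate(np.argsort(contribution)[::-1], start=1):
    print(f"factor {idx:2d}: squared-loading mass = {contribution[idx]:.2f} (rank {rank})")
```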
Queuing Network Based Models
This is a conventional modelling paradigm which
consists of a set of interconnected queues [28]. The
models based on Queuing Networks are categorized
in Table 1.
Table 1: Queuing Network Based Performance Models

Description of model: [30] designed and implemented an object-oriented queuing network model – reusable performance models for software artifacts.
Parameters considered: buffer size, processor speed of server, queue size, number of incoming requests, request arrival time, request departure time.
Class of parameters: machine-centric.

Description of model: [31] integrated a performance and specification model to provide a tool for quantitative evaluation of software architecture at the design phase.
Parameters considered: number of service centers, service rate of each service center, arrival rate of requests at each service center, number of servers in service centers, routing procedure of requests, number of requests circulating in the system, physical resources available, system workloads, network topology.
Class of parameters: software-process-centric and machine-centric.

Description of model: [35] modeled a layered software system as a closed Product Form Queuing Network (PFQN) and solved it to find performance attributes of the system.
Parameters considered: range of the number of clients accessing the system, average think time of each client, number of layers in the software system, relationship between the machines and software components, number of CPUs and disks on each machine and thread limitation (if any), uplink and downlink capacities of the connectors connecting machines running adjacent layers of the system, size of packets on the links, service time required to service one request by a software layer, forward transition probability, rating factors of the CPU and the disks of each machine in the system.
Class of parameters: software- and machine-centric.

Description of model: [31] proposed an approach based on queuing network models for performance prediction of software systems at the software architecture level, specified in UML.
Parameters considered: same as in [35].
Class of parameters: software- and machine-centric.

Description of model: [12] developed the Software Architecture and Model Extraction (SAME) technique, which extracts communication patterns from executable designs or prototypes that use message passing, to build a Layered Queuing Network performance model in an automated fashion.
Parameters considered: same as in [35].
Class of parameters: software- and machine-centric.
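As an illustration of how such closed queuing network models are evaluated, the sketch below implements exact Mean Value Analysis (MVA) for a single-class closed product-form network of the kind used in [35]. The service demands, think time, and client counts are assumed values for illustration, not figures from any of the surveyed papers.

```python
# A sketch of exact Mean Value Analysis (MVA) for a closed, single-class
# product-form queuing network (PFQN). All numeric inputs are assumed.

def mva(service_demands, think_time, max_clients):
    """Exact MVA: yields (population, throughput, response time)."""
    queue_len = [0.0] * len(service_demands)   # mean queue length per center
    for n in range(1, max_clients + 1):
        # Arrival theorem: an arriving client sees the steady-state
        # queue lengths of the network with n - 1 clients.
        residence = [d * (1.0 + q) for d, q in zip(service_demands, queue_len)]
        response = sum(residence)               # system response time
        throughput = n / (response + think_time)
        queue_len = [throughput * r for r in residence]   # Little's law
        yield n, throughput, response

# Assumed demands: 50 ms at the CPU, 80 ms at the disk; 5 s think time.
for n, x, r in mva([0.05, 0.08], think_time=5.0, max_clients=20):
    print(f"clients={n:2d}  throughput={x:5.2f} req/s  response={r:6.3f} s")
```

As the client population grows, the throughput saturates at the reciprocal of the largest service demand, which is exactly the kind of architectural bottleneck these models are used to expose.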
Petri Net Based Approach
Petri nets were introduced in 1962 by Dr. Carl Adam Petri [27]. A Petri net is a graphical and mathematical modelling tool [26]. It is a directed bipartite graph with an initial state called the initial marking. Petri nets consist of four basic elements: places, transitions, tokens, and arcs. System performance models based on the Petri net approach are categorized in Table 2.
Table 2: Petri Net Based Performance Models

Description of model: [18] developed a performance evaluation model for agent-based systems using the Petri net approach.
Parameters considered: system load, system delays, system routing rate, latency of process, CPU time.
Class of parameters: machine-centric.

Description of model: [20] performed a performance analysis of Internet-based software retrieval systems using Petri nets.
Parameters considered: network time.
Class of parameters: machine-centric.

Description of model: [13] developed a stochastic Petri net model from UML activity diagrams.
Parameters considered: routing rate, action duration, system response time.
Class of parameters: machine-centric.

Description of model: [14] translated UML activity diagrams into a stochastic Petri net model that allows performance indices to be computed.
Parameters considered: routing rate, action duration, system response time.
Class of parameters: machine-centric.

Description of model: [23] derived performance parameters from a Generalized Stochastic Petri Net (GSPN) using Markov chain theory.
Parameters considered: routing rate, action duration, system response time.
Class of parameters: machine-centric.
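The following minimal sketch shows the mechanics these models build on: places holding tokens, and transitions that fire when their input places are sufficiently marked. The request-processing net, its place names, and the random firing policy are our own illustration (untimed, so no performance indices are computed), not the models of [13], [14], [18], [20], or [23].

```python
import random

# A minimal untimed Petri net: places hold tokens; a transition is enabled
# when every input place holds at least the required number of tokens.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)            # place -> token count
        self.transitions = {}                   # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self):
        return [t for t, (ins, _) in self.transitions.items()
                if all(self.marking.get(p, 0) >= w for p, w in ins.items())]

    def fire(self, name):
        ins, outs = self.transitions[name]
        for p, w in ins.items():                # consume input tokens
            self.marking[p] -= w
        for p, w in outs.items():               # produce output tokens
            self.marking[p] = self.marking.get(p, 0) + w

# Hypothetical request-processing net: three queued requests are served
# one at a time by a single server, then depart.
net = PetriNet({"queued": 3, "server_idle": 1})
net.add_transition("start", {"queued": 1, "server_idle": 1}, {"in_service": 1})
net.add_transition("finish", {"in_service": 1}, {"done": 1, "server_idle": 1})

while net.enabled():
    net.fire(random.choice(net.enabled()))
print(net.marking)   # e.g. {'queued': 0, 'server_idle': 1, 'in_service': 0, 'done': 3}
```

Stochastic Petri nets, as used in [13], [14], and [23], additionally attach firing rates or delays to transitions, which is what turns the reachable markings into a Markov chain from which performance indices are derived.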
Queuing Petri Net (QPN) Based Models
Queuing Petri Nets (QPNs) are a hybrid of Petri nets and queuing networks that facilitates the integration of hardware and software aspects of system behaviour into the same model. In addition to hardware contention and scheduling strategies, QPNs make it easy to model simultaneous resource possession, synchronization, blocking, and contention for software resources. QPNs thus combine queuing networks and Petri nets into a single formalism in order to eliminate their respective disadvantages. QPNs allow queues to be integrated into the places of a Petri net, which enables the modeller to represent scheduling strategies easily and brings the benefits of queuing networks into the world of Petri nets [28]. System performance models based on the QPN approach are categorized in Table 3.

Table 3: Queuing Petri Net Based Performance Models

Description of model: [28] applied the QPN formalism to analyse the performance of a distributed e-business system.
Parameters considered: service demand of queue, service rate of queue, token population of queue, queue size, buffer size, processor speed of server, routing rate.
Class of parameters: machine-centric.

Description of model: [29] presented a novel case study of a realistic, state-of-the-art distributed component-based system, showing how the QPN modelling formalism can be exploited as a software system performance prediction tool.
Parameters considered: same as in [28].
Class of parameters: machine-centric.
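A defining feature of QPNs is the queueing place: tokens entering it must queue and receive service before they become available to output transitions. The toy discrete-event sketch below illustrates just that feature for a single-server FIFO queueing place; the arrival pattern and service rate are assumed values, and this is not the formalism or tooling of [28] and [29].

```python
import heapq, random

# Toy queueing place: tokens arrive, wait in FIFO order for a single
# exponential server, and only after service completion do they become
# available to the place's output transition. All values are assumed.
random.seed(1)
SERVICE_RATE = 2.0                        # assumed mean of 2 services/second

events = [(float(t), "arrive") for t in range(5)]   # 5 tokens, one per second
heapq.heapify(events)

busy_until = 0.0
available = []                            # tokens that may now fire onward
while events:
    now, kind = heapq.heappop(events)
    if kind == "arrive":
        start = max(now, busy_until)      # wait for the embedded server
        busy_until = start + random.expovariate(SERVICE_RATE)
        heapq.heappush(events, (busy_until, "serviced"))
    else:
        available.append(now)             # token enables output transition

print("service completion times:", [round(t, 2) for t in available])
```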
Performance Analysis and Characterization Environment Based Approach
The motivation for developing the Performance Analysis and Characterization Environment (PACE) based approach in [15] was to provide quantitative data concerning the performance of sophisticated applications running on high-performance systems. The PACE framework is a methodology based on a layered approach that separates the software and hardware system components through the use of a parallelization template. This is a modular approach that leads to readily reusable models, which can be interchanged for experimental analysis. Each of the modules in PACE can be described at multiple levels of detail, thus providing a range of result accuracies, but at varying costs in terms of prediction evaluation time. PACE is intended for pre-implementation analysis, such as design or code-porting activities, as well as for on-the-fly use in scheduling systems. The core component of PACE is a performance specification language, CHIP³S (Characterization Instrumentation for Performance Prediction of Parallel Systems). CHIP³S provides a syntax that allows the performance aspects of an application and its parallelization to be expressed, including control flow information, resource usage information (for example, the number of operations), communication structures, and mapping information for a parallel or distributed system. The software objects in the PACE system are created using the Application Characterization Tool (ACT). ACT aids the conversion of sequential or parallel source code into the CHIP³S language via the Stanford University Intermediate Format (SUIF). ACT performs a static analysis of the code to produce the control flow of the application, a count of the number of operations in terms of the high-level language used, and the communication structure. The hardware objects of the model are created using a Hardware Model Configuration Language (HMCL) by specifying system-dependent parameters. On evaluation, the relevant sets of parameters are supplied to the evaluation methods of each component model.
Hierarchical Performance Modeling Approach
In [32], a Hierarchical Performance Modelling (HPM) technique for distributed systems, which incorporates different levels of modelling abstraction, was presented. HPM models performance at different layers of abstraction: it includes several layers of organization, from primitive operations up to the software architecture, and therefore provides a degree of accuracy that cannot be achieved with single-layer models. The application is developed top-down, from the general to the more specific, but performance information is generated bottom-up, linking the different levels of analytic models into a composite model. This approach supports specification and performance model generation that incorporates computation and communication delays along with hardware profile characteristics, to assist in the evaluation of performance alternatives. HPM models provide a quantitative performance assessment of an entire system comprising hardware, software, and communication. HPM provides a well-defined methodology that allows system designers to evaluate an application against its system requirements and to fine-tune the values of performance parameters.
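A minimal sketch of the bottom-up composition idea is given below: performance numbers estimated for primitive operations are rolled up into task-level and then scenario-level estimates. All operation costs, task profiles, and the scenario itself are invented for illustration and are not taken from [32].

```python
# Bottom-up composition in the spirit of HPM: primitive-operation costs
# (layer 0) roll up into task estimates (layer 1), which roll up into an
# architecture-level scenario estimate (layer 2). All numbers are invented.

primitive_cost = {"cpu_op": 2e-9, "msg_send": 1e-4, "disk_io": 5e-3}  # seconds

task_profile = {                      # tasks as counts of primitive operations
    "parse_request": {"cpu_op": 2e6},
    "query_storage": {"cpu_op": 5e5, "disk_io": 3},
    "send_response": {"cpu_op": 1e5, "msg_send": 1},
}
task_time = {task: sum(count * primitive_cost[op] for op, count in ops.items())
             for task, ops in task_profile.items()}

scenario = ["parse_request", "query_storage", "send_response"]  # layer 2
for t in scenario:
    print(f"{t}: {task_time[t]:.4f} s")
print(f"estimated scenario time: {sum(task_time[t] for t in scenario):.4f} s")
```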
Pattern Based Approach
Design patterns are descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context. The components of a design pattern are: pattern name, intent, motivation, applicability, structure, participants, collaborations, consequences, implementation, sample code, known uses, and related patterns. Performance models based on the pattern-based approach are presented in Table 4.
Table 4: Pattern Based Performance Models

Description of model: [19] presented an approach based on patterns for developing performance models of object-oriented software systems in the early stages of the software development process, complementing the approach given in [18].
Parameters considered: event load, time to perform an action, request arrival time, request service time, number of concurrent users.
Class of parameters: software-process-centric.

Description of model: [21] presented a pattern-based approach to modelling the performance of software systems and used it to evaluate the performance of a mobile agent system.
Parameters considered: same as in [19].
Class of parameters: software-process-centric.

Description of model: [9] presented a pattern-based performance completion for message-oriented middleware.
Parameters considered: system configuration (hardware and network components), message size (incoming and outgoing), delivery time for messages, number of messages sent, size of messages sent, number of messages delivered, size of messages delivered, transaction/request size, buffer/pool size.
Class of parameters: software-process-centric and machine-centric.
Soft Computing Approach
Soft computing is an approach to computing which parallels the remarkable ability of the human mind to reason and learn in an environment of uncertainty and imprecision [8]. It is a consortium of methodologies centering on fuzzy logic (FL), artificial neural networks (ANN), and evolutionary computation (EC). These methodologies are complementary and synergistic rather than competitive, and in one form or another they provide flexible information processing capability for handling ambiguous real-life situations. Soft computing aims to exploit the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth in order to achieve tractability, robustness, and low-cost solutions. The attributes of these models are often measured in terms of linguistic values, such as very low, low, high, and very high. The imprecise nature of the attributes introduces uncertainty and vagueness into their (subsequent) interpretation. Performance models based on the soft computing approach are presented in Table 5. The advantages of soft computing models, particularly fuzzy logic and ANN, are [10]: they are more general; they mimic the way in which humans interpret linguistic values; and the transition from one linguistic value to a contiguous linguistic value is gradual rather than abrupt.
Table 5: Performance Models Based on the Soft Computing Approach

Description of model: [10] applied fuzzy logic to measure the similarity of software projects when their attributes are described by categorical values (linguistic values in fuzzy logic).
Parameters considered: seventeen parameters: software size, project mode, plus 15 cost drivers.
Class of parameters: software-process-centric and machine-centric.

Description of model: [11] presented a new technique based on fuzzy logic, linguistic quantifiers, and analogy-based reasoning to estimate the cost or effort of software projects when they are described by either numerical data or linguistic values.
Parameters considered: same as in [10].
Class of parameters: software-process-centric and machine-centric.

Description of model: [17] showed how fuzzy logic can be applied to computer performance work to simplify and speed up analysis and reporting.
Parameters considered: CPU queue length, memory (RAM) available, pages input per second, read time, write time, I/Os per second.
Class of parameters: machine-centric.

Description of model: [25] developed a fuzzy model for evaluating information system projects based on their present value, using a fuzzy modelling technique.
Parameters considered: three parameters representing the three possible values of project costs, benefits, evaluation periods, and discount rate.
Class of parameters: software-process-centric.
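To illustrate the gradual transitions mentioned above, the sketch below maps a crisp response-time measurement to overlapping linguistic values using triangular membership functions. The breakpoints and the choice of metric are assumed for illustration and are not drawn from the models in Table 5.

```python
# Mapping a crisp measurement to linguistic values with triangular
# membership functions; all breakpoints are assumed for illustration.

def tri(x, a, b, c):
    """Triangular membership: support (a, c), peak 1.0 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed fuzzy sets over server response time, in seconds.
sets = {"low": (0.0, 0.1, 0.4), "medium": (0.2, 0.5, 0.8), "high": (0.6, 0.9, 1.2)}

rt = 0.45                                     # a measured response time
memberships = {name: round(tri(rt, *abc), 2) for name, abc in sets.items()}
print(memberships)   # {'low': 0.0, 'medium': 0.83, 'high': 0.0}
```

Because the sets overlap, a measurement near a boundary belongs partially to two linguistic values at once, which is precisely the gradual transition that distinguishes fuzzy models from crisp thresholds.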
Other Performance Models
In [5], Multivariate Adaptive Regression Splines (MARS) were used for software performance analysis. A resource function was designed and automated with the following parameters: size of data objects, number of disk blocks to be read, size of messages to be processed, memory and cache size, processor speed, and bus and network bandwidth.

In [16], PASA, a scenario-based method for the performance assessment of software architectures, was developed. It identifies potential areas of risk within the architecture with respect to performance and other quality objectives, and identifies strategies for reducing or eliminating the risks if a problem is found. Scenarios for important workloads are identified and documented; the scenarios provide a means of reasoning about the performance of the software as well as other qualities, and they serve as a starting point for constructing performance models of the architecture.

ASAAM (Aspectual Software Architecture Analysis Method), proposed in [34], is scenario-based. It introduces a set of heuristic rules that help to derive architectural aspects and the corresponding tangled architectural components from scenarios. It takes the architecture design as input and measures the impact of predefined scenarios on it in order to identify the potential risks and the sensitive points of the architecture. This helps to predict the quality of the system before it is built, thereby reducing unnecessary maintenance costs.
In [36], performance analysis based on requirements traceability was presented. Requirements traceability is critical to providing a complete approach that leads to an executable model for performance evaluation. The paper investigated software architectures that are extended, based on performance requirements traceability, to represent performance properties. The extended architectures are then transformed into a simulation model, a coloured Generalized Stochastic Petri Net (GSPN), and the simulation results are used to validate performance requirements and evaluate the system design. The parameters considered are queue length, number of requests to be serviced, server response time, server execution time, and processor speed.
GENERAL PROPERTIES OF THE EXISTING DSA
PERFORMANCE EVALUATION MODELS
From the survey of the existing DSA performance evaluation models, the following common attributes are identified:
i. The models are algorithmic, using hard computing principles.
ii. The parameters for evaluation are machine-centered and objective, for example, processor speed, bus and network bandwidth, RAM size, cache size, server response time, server execution time, number of disk blocks to be read, and message size. The models are therefore machine-centric.
iii. The models are applied at the architectural stage of the software life cycle.
iv. Although the existing models acknowledge the contributions of the client organization (end users) during the software development process, none of them draws its evaluation parameters from contextual organizational decision variables.
v. The models are reusable and scalable.
vi. The performance metrics considered are mostly throughput, response time, and resource utilization.
vii. The models are limited by their inability to cope with the uncertainty and imprecision of the data or information surrounding software projects in the early stages of the development life cycle.
viii. The conceptual structures of some models (for example, probabilistic models) that can represent vague information are inadequate for dealing with problems in which the information is perception-based and expressed in linguistic form.
ix. The models are computationally intensive and intolerant of noise; they cannot handle categorical data other than binary-valued variables.
CONCLUSION AND FUTURE WORK
Conclusion
In this paper, a review of research on performance evaluation models from 1999 to 2010 is presented in order to establish the properties common to these models. It was deduced that most models for evaluating DSA performance are machine-centric. The following are some of the evaluation parameters identified: buffer size, processor speed, cache size, server response time, server execution time, number of disk blocks to be read, queue size, request arrival time, request departure time, bus size, network bandwidth (uplink and downlink), number of Central Processing Units (CPUs), number of requests circulating in the system, system routing rate, system latency, network time, system RAM (Random Access Memory) size, size of data objects, and size of messages to be processed. The performance evaluation models are, therefore, classified as machine-centric models. They are established and used to evaluate DSA performance with respect to satisfying machine and system process requirements. However, the subjective decision variables of users are not considered in the machine-centric models; moreover, the models cannot cope with the uncertainty and imprecision of the data or information surrounding software projects in the development life cycle. Users are involved in DSA development in order to feed the software developers with the necessary organizational information. This helps the developers to build a software system that will be accepted by the end users and that satisfies the organization’s requirements using the available machine infrastructure. The question is: “How do we measure the performance of the DSA from the users’ perspective in order to establish the extent of the DSA’s responsiveness to the requirements of the client organization?” It is hoped that future research will address this question.
Future Work
The management of the client organization and the end users are key players in the software development process. Therefore, contextual organizational decision variables (for example: organizational goals and tasks; the level of users’ competence and experience in Information Technology; the information requirements of users and their format; the internal services of the organization and their relationships; the organization’s defined functions required in the user interface; and the organization’s policies, rules, or procedures for transaction process flow) should not be underestimated when establishing the variables for evaluating the performance of software architecture. We therefore propose that future work should identify, and verify with empirical analysis, both objective and subjective contextual organizational decision variables that could influence the software developer’s choice of architectural style and design pattern. We are of the view that if such organizational variables can be established as parameters for evaluating DSA performance, it will be possible to build DSA performance evaluation models that are user-centric, or hybrid models having both organizational decision variables and machine/system variables as evaluation parameters.
REFERENCES
[1] Bailey, H.D., Snavely, A. “Performance Modeling: Understanding the Present and Predicting the Future”, Proceedings of Euro-Par, Lisbon, Portugal: 2005.
[2] Chiemeke, S.C. “Computer Aided System for Evaluating Information Technology Projects”, PhD thesis, School of Postgraduate Studies, Federal University of Technology, Akure, Ondo State, Nigeria: 2003.
[3] Connie, U.S. “Increasing Information System Productivity”, Proceedings of the Computer Measurement Group’s International Conference, The Computer Measurement Group Inc: 1981.
[4] Connie, U.S. “The Evolution of Software Performance Engineering: A Survey”, Proceedings of the ACM Fall Joint Computer Conference: 1986, pp 778-783.
[5] Courtois, M., Woodside, M. “Using Regression Splines for Software Performance Analysis”, Proceedings of WOSP, Ontario, Canada: 2000.
[6] Dobrica, L., Niemela, E. “A Survey on Software Architecture Analysis Methods”, IEEE Transactions on Software Engineering, (28:7), 2002.
[7] Dwyer, B.M., Hatcliff, J., Pasareanu, S.C., Visser, W. “Formal Software Analysis: Emerging Trends in Software Model Checking”, Proceedings of Future of Software Engineering (FOSE’07): 2007.
[8] Gary, R.G., Frank, C. “Application of Neuro-Fuzzy Systems to Behavioral Representation in Computer Generated Forces”, Proceedings of the 8th Conference on Computer Generated Forces and Behavioural Representation, Orlando, FL: 1999.
[9] Happe, J., Friedrich, H., Becker, S., Reussner, H.R. “A Pattern-Based Performance Completion for Message-Oriented Middleware”, Proceedings of WOSP’08, Princeton, New Jersey: 2008.
[10] Idris, A., Abran, A. “A Fuzzy Based Set of Measures for Software Project Similarity: Validation and Possible Improvements”, Proceedings of METRICS 2001, London, England: 2001, pp 85-96.
[11] Idris, A., Alain, A., Khoshgoftaar. “Fuzzy Case-Based Reasoning Models for Software Cost Estimation”, 2004. Available at http://www.gelog.etsmtl.ca/publications/pdf/803.pdf
[12] Israr, A., Tauseef, L.H.D., Franks, G., Woodside, M. “Automatic Generation of Layered Queuing Software Performance Models from Commonly Available Traces”, Proceedings of WOSP’05, Palma de Mallorca, Spain: 2005.
[13] Juan, P.L., Jose, M., Javier, C. “From UML Activity Diagrams to Stochastic Petri Nets: Application to Software Performance Engineering”, Proceedings of WOSP’04, Redwood City, California: 2004.
[14] Juan, P.L., Jose, M., Javier, C. “On the use of Formal Models in Software Performance Evaluation”, News in the Petri Nets World, Dec. 27, 2008. Available at <http://webdiis.univzar.es.crpetri/paper/jcampos/02_LGMC_JJCC.pdf>
[15] Junwei, C., Darren, J.K., Efstathios, P., Graham, R.N. “Performance Modeling of Parallel and Distributed Computing Using PACE”, Proceedings of the IEEE International Performance Computing and Communications Conference, IPCCC-2000, Phoenix: 2000, pp 485-492.
[16] Lloyd, G.W., Connie, U.S. “PASA: An Architectural Approach to Fixing Software Performance Problems”, Software Engineering Research and Performance Engineering Services: 2002.
[17] Maddox, M. “Using Fuzzy Logic to Automate Performance Analyses”, Proceedings of the Computer Measurement Group’s 2005 International Conference, The Computer Measurement Group Inc: 2005.
[18] Merseguer, J., Javier, C., Eduardo, M. “Performance Evaluation for the Design of Agent-Based Systems: A Petri Net Approach”, Proceedings of the Workshop on Software Engineering and Petri Nets within the 21st International Conference on Application and Theory of Petri Nets, University of Aarhus: 2000a, pp 1-20.
[19] Merseguer, J., Javier, C., Eduardo, M. “A Pattern-Based Approach to Model Software Performance”, Proceedings of the 2nd International Workshop on Software and Performance, Ottawa, Ontario: 2000b, pp 137-142.
[20] Merseguer, J., Campos, J., Mena, E. “Performance Analysis of Internet Based Software Retrieval Systems Using Petri Nets”, Proceedings of the 4th ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Rome, Italy: 2001.
[21] Merseguer, J., Javier, C., Eduardo, M. “A Pattern-based Approach to Model Software Performance Using UML and Petri Nets: Application to Agent-based Systems”, Proceedings of the 7th World Multiconference on Systemic Cybernetics and Informatics, Orlando, Florida: 2003, (9), pp 307-313.
[22] Merseguer, J., Javier, C. “Software Performance Modeling Using UML and Petri Nets”, LNCS 2965, Springer Verlag: 2004, pp 265-289.
[23] Motameni, H., Movaghar, A., Siasifar, M., Montazeri, H., Rezaei, A. “Analytic Evaluation on Petri Net by Using Markov Chain Theory to Achieve Optimal Models”, World Applied Sciences Journal (3:3), 2008, pp 504-513.
[24] Olabiyisi, S.O., Omidiora, E.O., Uzoka, F.M.E., Mbarika, V., Akinnuwesi, B.A. “A Survey of Performance Evaluation Models for Distributed Software System Architecture”, Proceedings of the International Conference on Computer Science and Application, World Congress on Engineering and Computer Science (WCECS 2010), San Francisco: 2010, Vol. 1, pp 35-43.
[25] Omitaomu, A.O., Adedeji, B. “Fuzzy Present Value Analysis Model for Evaluating Information System Projects”, Engineering Economist (52:2), 2007, pp 157-178.
[26] Peterson, J.L. Petri Net Theory and the Modeling of Systems, Prentice Hall, 1981.
[27] Petri, C.A. “Communication with Automata”, Technical Report RADC-TR-65-377, Rome Air Dev. Centre, New York: 1962.
[28] Samuel, K., Alejandro, B. “Performance Modeling of Distributed E-Business Applications Using Queuing Petri Nets”, Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software: 2003, pp 145-153.
[29] Samuel, K. “Performance Modeling and Evaluation of Distributed Component-Based Systems Using Queuing Petri Nets”, IEEE Transactions on Software Engineering (32:7), 2006, pp 487-502.
[30] Savino-Vazquez, N., Puigjaner, R. “A Component Model for Object-Oriented Queuing Networks and its Integration in a Design Technique for Performance Models”, Proceedings of the 2001 Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS 2001), Orlando, Florida: 2001.
[31] Simonetta, B., Roberto, M., Moreno, M. “Performance Evaluation of Software Architecture with Queuing Networking Model”, Proceedings of ESMc’04, Paris, France: 2004.
[32] Smarkusky, D., Ammar, I.A., Sholi, H. “Hierarchical Performance Modeling for Distributed System Architecture”, 2000. Available at <http://www.cs.sfu.ca/~mhefeeda/papers/ISC2000-HPM.pdf>
[33] Soni, D., Nord, R., Hofmeister, C. “Software Architecture in Industrial Applications”, Proceedings of the 17th International Conference on Software Engineering (ICSE 17): 1995, pp 196-207.
[34] Tekinerdogan, B. “ASAAM: Aspectual Software Architecture Analysis Method”, Early Aspects: Aspect-Oriented Requirements Engineering and Architecture Design Workshop, Boston, USA: 2003.
[35] Vibhu, S.S., Pankaj, J., Kishor, S.T. “Evaluating Performance Attributes of Layered Software Architecture”, CBSE 2005, Vol. 3489 of LNCS, pp 66-81.
[36] Wise, J.C., Chang, C.K., Xia, J., Cleland-Huang, J. “Performance Analysis Based on Requirements Traceability”, Technical Report, Dept. of Computer Science, Iowa State University, Iowa: 2005.
Note: This work is a revised version. The first version is [24], which was presented at the International Conference on Computer Science and Application, World Congress on Engineering and Computer Science (WCECS 2010), San Francisco, USA, October 20-22, 2010. It is one of the preliminary results of an ongoing research effort focused on developing a user-centric model to evaluate the performance of Distributed Software Architecture.

Acknowledgement: This research is partly sponsored by the National Science Foundation (NSF) under Grant Nos. 1036324 and 0811453 and by UNCFSP NSTI, under the supervision of Dr Victor Mbarika at the International Centre for Information Technology and Development, Southern University and A & M College, Baton Rouge, Louisiana, USA. Bells University of Technology, Ota, Ogun State, Nigeria is also acknowledged for providing partial support.
About the Authors
S.O. Olabiyisi, Ph.D. is a Senior Lecturer in the
Department of Computer Science and Engineering,
LAUTECH, Ogbomosho, Nigeria. His Research
interests are Software Performance Evaluation,
Computational Mathematics, Discrete Structures and
Softcomputing. (e-mail:tundeolabiyisi@hotmail.com)
E.O Omidiora, Ph.D. is a Senior Lecturer in the
Department of Computer Science and Engineering,
LAUTECH, Ogbomosho, Nigeria. His research
interests are Computer Architecture, Softcomputing
and e-Learning system. (e-mail:
omidiorasayo@yahoo.co.uk)
F.M.E. Uzoka, Ph.D. is a faculty member in the Department of Computer Science and Information Systems, Mount Royal University, Calgary, Canada. He was a Senior Lecturer in Information Systems at the University of Botswana and conducted two years of postdoctoral research at the University of Calgary (2004-2005). His research interests are Organizational Computing, Decision Support Systems, Technology Adoption and Innovation, and Medical Informatics. He serves on the editorial/review boards of a number of Information Systems journals and conferences. (e-mail: uzokafm@yahoo.com)
Boluwaji A. Akinnuwesi, Ph.D., is a Lecturer with
Department of Information Technology, Bells
University of Technology, Ota, Ogun State, Nigeria.
He is also the Director of the Computer Centre in
Bells University of Technology. He was a Research
Scholar in International Centre of Information
Technology and
Development in Southern
University, Baton Rouge, Louisiana, USA. His
research area is Software Performance Engineering.
His other research interest areas are Medical
Informatics, Soft-computing, Expert System, and
Software Engineering. He is a professional member
of ACM, CPN (Computer Professional Registration
Council of Nigeria) and NCS (Nigeria Computer
Society). e-mail: akinboluade@yahoo.com.
Victor Wacham A. Mbarika, Ph.D. is the Executive Director of the International Center for IT and Development (ICITD), Southern University, T.T. Allain #321, Baton Rouge, LA 70813, USA. He is Editor-in-Chief of The African Journal of Information Systems (AJIS). Phone: +1 225 715 4621 or +1 225 572 1042; Fax: +1 225 208 1046. (Email: victor@mbarika.com)
Mathieu Kokoly Kourouma, Ph.D., is a professor in the Department of Computer Science, College of Sciences, at Southern University and A&M College. He has a Bachelor in Electrical and Computer Engineering from the Polytechnic Institute of the University of Conakry, Guinea, and a Master and a Ph.D. in Telecommunications and Computer Engineering, respectively, from the University of Louisiana at Lafayette, USA. His research areas of interest are wireless communications, sensor networks, cognitive radio networks, telecommunications, network performance analysis, software engineering and development, and database design. He is a professional member of ACM, NSTA, and AAC&U. Emails: mkkourouma@cmps.subr.edu and mkourouma@gmail.com. Web site: www.cmps.subr.edu. Office number: (225) 771-3652.
Hyacinthe Aboudja, Ph.D. is currently a visiting Assistant Professor in the Computer Science Department of the School of Business at Oklahoma City University. His research interests range across computer architecture, real-time systems design, theory of computing, system performance analysis, software engineering, and computer simulation of biological systems. He is a professional member of ACM and IEEE. (email: haboudja@okcu.edu)