Acta Electrical 2010 2
K. György, L. Dávid
Comparative Analysis of Model Predictive Control Structures .................... 5
Cs. Szabó, M. Imecs, I. I. Incze
Synchronous Motor Drive at Maximum Power Factor with Double
Field-Orientation.............................................................................................. 16
I. I. Incze, A. Negrea, M. Imecs, Cs. Szabó
Incremental Encoder Based Position and Speed Identification:
Modeling and Simulation ................................................................................ 27
D. Fodor
Aluminium Electrolytic Capacitor Research and Development
Time Optimization Based on a Measurement Automation System ............. 40
Computer Science
L. Haţegan, P. Haller
Framework for Modeling, Verification and Implementation of
Real-Time Applications ................................................................................... 51
M. Muji
Application Development in Database-Driven Information Systems ......... 63
A. Aszalos, J. Domokos, T. Vajda, S. T. Brassai, L. Dávid
Exambrev - Integrated System for Patent Application ............................... 73
Telecommunications
Signal Processing
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 5-15
1. Introduction
Model predictive control (MPC), also called receding horizon control (RHC), is one of the most successful advanced control techniques in practical applications. The control signal is obtained by solving a discrete-time optimal control problem over a finite horizon. The most important advantage of MPC algorithms is their unique ability to take into account constraints imposed on the process inputs, state variables and outputs, which usually determine the quality, efficiency and safety of production. The implementation of centralized state-space (SS) MPC algorithms is becoming an important issue for various multivariable industrial processes. The main idea of our work is to develop multi-agent software that can be implemented in low-cost embedded systems with parallel computational facilities. These software agents are valid for a default model and can be multiplied and customized according to the control horizon. Each agent solves the problem of finding one of the control actions. This procedure is repeated several times before the control action values are delivered to the final control elements.
An agent, like an executive, has to know general information about the system as well as information specific to its own subproblem. It is important to notice that the algorithm solved by each agent while computing its control action is much simpler than the one solved by the centralized approach.
Previous works on distributed MPC [2], [3], [4], [6] use a wide variety of
approaches, including multi-loop ideas, decentralized computation using
standard coordination techniques, robustness to the actions of others, penalty
functions, and partial grouping of computations. The key point is that, when
decisions are made in a decentralized fashion, the actions of each subsystem
must be consistent with those of the other subsystems, so that decisions taken
independently do not lead to a violation of the coupling constraints. The
decentralization of the control becomes more complex when disturbances act on
the subsystems making the prediction of future behavior uncertain.
We will analyze how the overall performance of a distributed system is influenced if one or more agents, except the coordinating agent, fail or clearly underperform for some reason. The objective is to solve SS-MPC problems with locally relevant variables, costs and constraints, but without solving a centralized SS-MPC problem. The coordinated distributed computations solve an equivalent centralized SS-MPC problem. This means that properties that can be proven for the equivalent centralized MPC problem (e.g., stability, robustness) are also valid for the distributed SS-MPC
implementation. The significance of the proposed distributed control scheme is
that it reduces the computational requirements in complex large-scale systems
and it makes possible the development of fault tolerant control systems.
Comparative Analysis of Model Predictive Control Structures 7
where x_k is the state vector (n × 1), u_k is the input vector (m × 1), y_k is the output vector (p × 1), and Φ, Γ and C are the matrices of the system. If these matrices (parameters) are unknown, a system identification module has to be implemented in the control algorithm.
The centralized model predictive algorithm looks for the vector ∆Uk that
minimizes a cost function represented by the scalar J, defined as:
$$ J(\Delta U_k) = \left( Y_k - Y_k^{ref} \right)^T \cdot Q \cdot \left( Y_k - Y_k^{ref} \right) + \Delta U_k^T \cdot R \cdot \Delta U_k, \qquad (2) $$

where $Y_k^{ref}$ is the vector of the future references, $Y_k$ is the vector of the predictions of the controlled variables (output signals), $\Delta U_k$ is the vector of future variations of the control signal, $Q$ is a diagonal matrix with weights enforcing set-point following, and $R$ is a diagonal matrix with weights on control-action changes. If the prediction horizon is $N$ and the control horizon is $N_c$, these vectors and matrices are [1]:

$$ Y_k = \begin{bmatrix} y_{k+1|k} \\ \vdots \\ y_{k+N|k} \end{bmatrix}, \quad Y_k^{ref} = \begin{bmatrix} y^{ref}_{k+1|k} \\ \vdots \\ y^{ref}_{k+N|k} \end{bmatrix}, \quad \Delta U_k = \begin{bmatrix} \Delta u_{k|k} \\ \vdots \\ \Delta u_{k+N_c-1|k} \end{bmatrix}, \qquad (3) $$

$$ Q = \begin{bmatrix} Q_1 & 0 & \cdots & 0 \\ 0 & Q_2 & \cdots & 0 \\ \vdots & & \ddots & \\ 0 & 0 & & Q_N \end{bmatrix}, \quad R = \begin{bmatrix} R_0 & 0 & \cdots & 0 \\ 0 & R_1 & \cdots & 0 \\ \vdots & & \ddots & \\ 0 & 0 & & R_{N_c-1} \end{bmatrix}. \qquad (4) $$
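The weighting structure of (2)-(4) can be illustrated with a short numerical sketch (a minimal illustration; it reuses the tuning values N = 4, Nc = 3, Q_i = 10·I2, R_i = 0.1·I2 adopted later in the simulation section):

```python
import numpy as np

def mpc_cost(dU, Y, Y_ref, Q, R):
    """Quadratic MPC cost of Eq. (2): weighted tracking error plus
    weighted control-move effort."""
    e = Y - Y_ref
    return float(e.T @ Q @ e + dU.T @ R @ dU)

# Dimensions and weights as in the simulation section (Eq. (20)):
# p = 2 outputs, m = 2 inputs, N = 4, Nc = 3.
p, m, N, Nc = 2, 2, 4, 3
Q = np.kron(np.eye(N), 10 * np.eye(p))    # block-diagonal Q of Eq. (4)
R = np.kron(np.eye(Nc), 0.1 * np.eye(m))  # block-diagonal R of Eq. (4)
Y = np.zeros((N * p, 1))
Y_ref = np.ones((N * p, 1))
dU = np.zeros((Nc * m, 1))
print(mpc_cost(dU, Y, Y_ref, Q, R))  # 8 * 10 * 1^2 = 80.0
```

With zero control moves the cost reduces to the pure tracking term, which makes the block-diagonal weighting easy to verify by hand.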
An incremental state-space model can be used if the model input is the control increment $\Delta u_k = u_k - u_{k-1}$. The following representation is obtained for the predictions:
$$ Y_k = \Phi^* \cdot x_k + \Gamma^* \cdot u_{k-1} + G_y \cdot \Delta U_k, \qquad (5) $$

where

$$ \Phi^* = \begin{bmatrix} C\Phi \\ C\Phi^2 \\ \vdots \\ C\Phi^N \end{bmatrix}, \quad \Gamma^* = \begin{bmatrix} C\Gamma \\ C(\Phi + I)\Gamma \\ \vdots \\ \sum_{i=0}^{N-1} C\Phi^i \Gamma \end{bmatrix}, \quad G_y = \begin{bmatrix} C\Gamma & 0 & \cdots & 0 \\ C(\Phi + I)\Gamma & C\Gamma & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=0}^{N-1} C\Phi^i \Gamma & \sum_{i=0}^{N-2} C\Phi^i \Gamma & \cdots & \sum_{i=0}^{N-N_c} C\Phi^i \Gamma \end{bmatrix}, \qquad (6) $$

and the prediction error with respect to the reference is

$$ E_k = Y_k^{ref} - \Phi^* \cdot x_k - \Gamma^* \cdot u_{k-1}. $$
It is to be mentioned that only the first control action is taken at each instant,
and the procedure is repeated for the next control decision in a receding horizon
fashion.
$$ J(\Delta U_k) = \Delta U_k^T \cdot \left( G_y^T Q G_y + R \right) \cdot \Delta U_k - 2 \cdot E_k^T Q G_y \cdot \Delta U_k + E_k^T Q E_k, \qquad (11) $$

then the first-order optimality condition with respect to the control move $\Delta u_{g|k}$ computed by agent $g$ can be written as

$$ \frac{\partial J(\Delta U_k)}{\partial (\Delta u_{g|k})} = 2 \left[ G_y^T Q G_y + R \right]_{g-k+1,\, g-k+1} \cdot \Delta u_{g|k} - \cdots = 0, $$

from which

$$ \Delta u_{g|k} = \left( 2 \left[ G_y^T Q G_y + R \right]_{g-k+1,\, g-k+1} \right)^{-1} \left( 2 \sum_{i=1}^{N} \left[ Q G_y \right]^T_{i,\, g-k+1} \left[ E_k \right]_i - \sum_{\substack{i=0 \\ i \neq g-k}}^{N_c-1} \left( \left[ G_y^T Q G_y + R \right]^T_{i+1,\, g-k+1} + \left[ G_y^T Q G_y + R \right]_{g-k+1,\, i+1} \right) \Delta u_{k+i|k} \right). \qquad (13) $$
The first value of every $\Delta u_{g|k}$ is only an approximation, since it depends on the other $\Delta u_{k+i|k}$ values ($i \neq g-k$). It should be noticed that the computational burden of obtaining $\Delta u_{g|k}$ is much smaller than that of computing the whole vector $\Delta U_k$. As already discussed, in this distributed approach the vector $\Delta U_k$ is determined by software agents using a combination of repeated computation of $\Delta u_{g|k}$ and exchange of information.
Equation (13) can be written in the following general form:

$$ \Delta u^n_{k+j|k} = \sum_{\substack{i=0 \\ i \neq j}}^{N_c-1} A_{j+1,\, i+1} \cdot \Delta u^{n-1}_{k+i|k} + B_{j+1}, \qquad (14) $$
where $0 \leq j \leq N_c - 1$, the matrices $A_{i,j}$ have dimension $m \times m$ and the vectors $B_i$ have dimension $m \times 1$, where $m$ is the number of inputs. The diagonal blocks $A_{i,i}$ are zero. A centralized expression for $\Delta U_k$ using equation (14) can be written as:
$$ \begin{bmatrix} \Delta u^n_{k|k} \\ \Delta u^n_{k+1|k} \\ \vdots \\ \Delta u^n_{k+N_c-1|k} \end{bmatrix} = \begin{bmatrix} 0 & A_{1,2} & \cdots & A_{1,N_c} \\ A_{2,1} & 0 & \cdots & A_{2,N_c} \\ \vdots & \vdots & \ddots & \vdots \\ A_{N_c,1} & A_{N_c,2} & \cdots & 0 \end{bmatrix} \cdot \begin{bmatrix} \Delta u^{n-1}_{k|k} \\ \Delta u^{n-1}_{k+1|k} \\ \vdots \\ \Delta u^{n-1}_{k+N_c-1|k} \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_{N_c} \end{bmatrix}, \qquad (15) $$
which, in a compact form becomes
$$ \Delta U_k^n = A \cdot \Delta U_k^{n-1} + B. \qquad (16) $$
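The iteration of Eq. (16) can be sketched numerically (the coupling matrix A and offset vector B below are hypothetical, chosen only so that the iteration converges):

```python
import numpy as np

# Hypothetical coupling blocks A_{j,i} (zero diagonal, as in Eq. (15)) and
# offsets B_j, chosen small enough for the iteration to converge; m = 1, Nc = 3.
A = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
B = np.array([1.0, 2.0, 0.5])

dU = np.zeros(3)          # initial guess for Delta U_k
for _ in range(50):       # each pass: every agent updates its own component
    dU = A @ dU + B       # Eq. (16)

# the iteration settles at the fixed point dU = A dU + B
print(bool(np.allclose(dU, np.linalg.solve(np.eye(3) - A, B))))  # True
```

Each row of the update corresponds to one agent, so the same fixed point is reached whether the rows are evaluated on one processor or on several.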
The convergence of the ∆Uk vectors to their true values has to be assured for
a reliable application. For unconstrained applications the results obtained in the
field of distributed computation can be used [5]. The Jacobi over-relaxation
approach is adopted here by recomputing ∆Uk as a linear combination of the
value computed using equation (16) and the value obtained in the previous
iteration,
$$ \Delta U_{k,filtered}^n = (I - \mathrm{diag}(\alpha)) \cdot \Delta U_k^n + \mathrm{diag}(\alpha) \cdot \Delta U_k^{n-1}, \qquad (17) $$

where $\alpha$ is the vector of filter parameters. Applying the filter to equation (16) results in:

$$ \Delta U_k^n = \left( (I - \mathrm{diag}(\alpha)) \cdot A + \mathrm{diag}(\alpha) \right) \cdot \Delta U_k^{n-1} + (I - \mathrm{diag}(\alpha)) \cdot B = A(\alpha) \cdot \Delta U_k^{n-1} + (I - \mathrm{diag}(\alpha)) \cdot B. \qquad (18) $$
A sufficient condition for convergence of the iterative process is $\| A(\alpha) \| < 1$ for $\alpha \in (0,1)$. The search for a filter vector $\alpha$ which minimizes $\| A(\alpha) \|$ can be reduced to a linearly constrained optimization problem.
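The effect of the filter on convergence can be sketched numerically (the iteration matrix A below is hypothetical, chosen so that the unfiltered iteration diverges; a simple grid search over a common filter value stands in for the constrained optimization mentioned above):

```python
import numpy as np

def filtered_matrix(A, alpha):
    """A(alpha) = (I - diag(alpha)) A + diag(alpha), as in Eq. (18)."""
    D = np.diag(alpha)
    return (np.eye(A.shape[0]) - D) @ A + D

rho = lambda M: max(abs(np.linalg.eigvals(M)))  # spectral radius

# Hypothetical iteration matrix with rho(A) = sqrt(1.3) > 1: diverges unfiltered.
A = np.array([[0.0, -1.3],
              [1.0,  0.0]])

# Crude grid search for a common filter value alpha minimizing rho(A(alpha)).
alphas = np.linspace(0.01, 0.99, 99)
best = min(alphas, key=lambda a: rho(filtered_matrix(A, np.full(2, a))))

print(bool(rho(A) > 1.0))                                    # True
print(bool(rho(filtered_matrix(A, np.full(2, best))) < 1.0)) # True
```

The eigenvalues of A here are purely imaginary, so averaging with the previous iterate pulls them inside the unit circle; a filter cannot stabilize an iteration whose matrix has a real eigenvalue greater than 1.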
4. Numerical simulation
This section presents the application of the centralized and of the distributed model predictive algorithm to a multiple-input, multiple-output theoretical system, characterized by the following state-space model:
$$ x_{k+1} = \begin{bmatrix} 0.7 & 0 & 0.1 & 0 \\ 0 & -0.5 & 0.2 & 0 \\ 0 & 0.01 & 0.1 & 0 \\ 0.01 & 0 & 0 & -0.5 \end{bmatrix} \cdot x_k + \begin{bmatrix} 4 & 0 \\ 3 & 9 \\ -10 & 1 \\ 0 & 2 \end{bmatrix} \cdot u_k, \quad y_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \cdot x_k, \qquad (19) $$

where $x_k = [x_{1,k} \; x_{2,k} \; x_{3,k} \; x_{4,k}]^T$, $u_k = [u_{1,k} \; u_{2,k}]^T$ and $y_k = [y_{1,k} \; y_{2,k}]^T$.
For both algorithms Simulink models have been built, and the following parameters were used in both simulations:

$$ N = 4, \quad N_c = 3, \quad R = 0.1 \cdot I_2, \quad Q = 10 \cdot I_2. \qquad (20) $$
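Under these settings, one centralized control step can be sketched as follows (a minimal numerical sketch of Eqs. (5), (6) and (11) for the system of Eq. (19); the initial state, previous input and references are illustrative):

```python
import numpy as np

# System matrices of Eq. (19) and tuning of Eq. (20).
Phi = np.array([[0.7, 0, 0.1, 0],
                [0, -0.5, 0.2, 0],
                [0, 0.01, 0.1, 0],
                [0.01, 0, 0, -0.5]])
Gam = np.array([[4, 0], [3, 9], [-10, 1], [0, 2]], dtype=float)
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], dtype=float)
N, Nc = 4, 3
Q = np.kron(np.eye(N), 10 * np.eye(2))
R = np.kron(np.eye(Nc), 0.1 * np.eye(2))

# Prediction matrices of Eq. (6) for the incremental model.
S = lambda j: sum(C @ np.linalg.matrix_power(Phi, i) @ Gam for i in range(j))
Phi_star = np.vstack([C @ np.linalg.matrix_power(Phi, j) for j in range(1, N + 1)])
Gam_star = np.vstack([S(j) for j in range(1, N + 1)])
Gy = np.vstack([np.hstack([S(j - l) if j > l else np.zeros((2, 2))
                           for l in range(Nc)]) for j in range(1, N + 1)])

# One centralized step: minimize Eq. (11) for a given state and reference.
x = np.zeros((4, 1))
u_prev = np.zeros((2, 1))
Y_ref = np.tile(np.array([[1.0], [1.0]]), (N, 1))
E = Y_ref - Phi_star @ x - Gam_star @ u_prev
dU = np.linalg.solve(Gy.T @ Q @ Gy + R, Gy.T @ Q @ E)
u = u_prev + dU[:2]   # only the first control move is applied (receding horizon)
print(u.shape)        # (2, 1)
```

The solve step is exactly the unconstrained minimizer of (11); the distributed agents reproduce the same vector iteratively instead of inverting the full matrix.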
The Simulink diagram of the centralized predictive control is shown in
Fig. 1, where the “Centralized_MPC_control” subsystem contains one complex
S-function module for centralized control algorithm. The Simulink model of the
distributed predictive algorithm is shown in Fig. 2.
The “Distributed_MPC_control” subsystem is presented separately in Fig. 3,
where the three interdependent modules for calculating ∆u1 / k , ∆u 2 / k , ∆u 3 / k
can be observed. The structure of these modules is identical; only the input signals and parameters differ.
Figure 1: Simulink diagram of the centralized predictive control, with the "Centralized_MPC_control" subsystem connected to the process model.

Figure 2: Simulink model of the distributed predictive algorithm, with the "Distributed_MPC_control" subsystem, the process and the reference-signal source.

Figure 3: Structure of the "Distributed_MPC_control" subsystem, with the three interdependent modules (Modul_u_1, Modul_u_2, Modul_u_3) computing Δu1/k, Δu2/k and Δu3/k inside a For Iterator loop.
The filter vector α was chosen such that all eigenvalues of the matrix A(α) of equation (18) are smaller than 1 in absolute value, which is sufficient to ensure that the iterative method converges. These values were determined before the numerical simulation, and the constrained optimization problem was solved in the Matlab environment. It appears that the parameter tuning for the distributed algorithm does not need to be exactly the same as that used for the centralized version. For the same amount of information exchange among agents, a faster reference filter improves the response.
The results obtained by numerical simulation for the centralized control algorithm using a variable reference signal are shown in Fig. 4, and the results of the numerical simulation of the distributed algorithm after 500 iterations are shown in Fig. 5.
Figure 4: Time variation of the control signals (a) and output signals (b) in the case of the centralized algorithm.
Figure 5: Time variation of the control signals (a) and output signals (b) in the case of the distributed algorithm after 500 iterations.
Figure 6: Variation of the reference tracking error (a) and of the simulation time (b) versus the number of iterations in the case of the distributed algorithm.
5. Conclusion
The performance of the distributed control applied to the example discussed in the paper is comparable to that obtained with centralized model predictive control. The computational power needed to solve the distributed problem is smaller than that needed in the centralized case. This may allow model predictive control to be executed on distributed hardware with low computational power. The size of the centralized problem grows considerably with the number of inputs/outputs, while the size of the distributed problem remains the same for the same control horizon. One point to mention is that, unlike the presented example, most multivariable problems do not exhibit complete interaction. In the case of the distributed algorithms the problem is to choose a suitable sample time and the correct vector of filter parameters. The choice of the filter should be done off-line, and the condition presented is sufficient to ensure the convergence of the algorithm. Future developments are needed to provide the best filter option (assuring the fastest convergence with robustness) and to introduce constraints into the model predictive applications. The main benefits expected from distributed MPC are the improvement of the system's maintainability and its 'apparent' simplicity to the user.
References
[1] Camacho, E. F., “Model Predictive Control”, Springer Verlag, 2004.
[2] Camponogara, E., Jia, D., Krogh, B. H. and Talukdar, S. N., “Distributed model predictive
control”, IEEE Control Systems Magazine, vol. 22, no. 1, pp. 44–52, February 2002.
[3] Venkat, A. N., Rawlings, J. B. and Wright, S. J., “Implementable distributed model
predictive control with guaranteed performance properties”, American Control Conference
Minneapolis, Minnesota, USA, June 14-16, 2006, pp. 613-618.
[4] Mercangoz, M. and Doyle, F. J, “Distributed model predictive control of an experimental
four tank system”, Journal of Process Control, vol. 17, no. 3, pp. 297–308, 2007.
[5] Plucenio, A., Pagano, D. J., Camponogara, E., Sherer, H. F. and Lima, M., “A simple distributed MPC algorithm”, Rio de Janeiro, Brazil.
[6] Maestre, J. M., Munoz de la Pena, D. and Camacho, E. F., “Distributed MPC: a supply
chain case study”, IEEE Conference on Decision and Control, Shanghai, China, December
16-18, 2009, pp. 7099 – 7104.
[7] Venkat, A. N., Hiskens, I. A., Rawlings, J. B. and Wright, S. J. “Distributed MPC
Strategies With Application to Power System Automatic Generation Control”, IEEE
Transactions on Control Systems Technology, vol. 16, no. 6, pp. 1192-1206, November,
2008.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 16-26
Abstract: The paper presents a vector control structure for a wound-excited salient-
pole synchronous motor, fed by a voltage-source converter, working at unity power
factor. The variable exciting current is ensured by a DC chopper. Due to this additional
intervention possibility the motor may have three degrees of freedom from the control
point of view, and three control loops will be formed instead of two: one for the control
of the mechanical quantities, and two for the magnetic ones. The three prescribed
references are the rotor angular speed, the stator flux (both directly controlled by using
PI regulators) and the power factor that is only imposed at its maximum value. In the
control structure two types of orientation procedure are used: stator-field-orientation for
power factor control, and rotor-orientation for computation of the voltage-control
variables and self-commutation. A speed-computation procedure used in the practical implementation, concerning the signal processing of the incremental-encoder position, is also presented. The method is based on differentiating with respect to time both the sine and the cosine functions of the rotor position. The angular speed is then obtained by computing the modulus of the two resulting sinusoidal signals. This method avoids the division-by-zero issue that occurs at every zero crossing if the angular speed were computed by dividing the time derivative of one signal by the other.
validation of the presented control strategy simulations were carried out in
Matlab/Simulink® environment.
1. Introduction
For high performance dynamic applications the most suitable solution is the
vector controlled AC drive fed by a static frequency converter (SFC). The
wound-excited synchronous motor (Ex-SyM) is the only machine capable to
operate at unity or leading power factor (PF). The structure of the vector control
system is determined by the combination between the type of the SFC used
including the pulse width modulation (PWM) procedure, the orientation field
and its identification method [2], [8], [9].
Rigorous control of the PF can be achieved only with resultant stator-field orientation. If the PF is at its maximum, there is no reactive energy transfer between the armature and the three-phase power source.
Some motor-control-oriented digital signal processing (DSP) equipment available on the market does not offer the possibility of implementing current-feedback PWM, suitable for current-controlled VSIs; consequently, in the control structure it is necessary to compute the voltage control variables from the current ones, which are imposed or directly generated by the controllers.
The proposed control structure is based on both types of orientation. The
stator-field orientation is used for control of the unity power factor and stator-
flux, and also for generation of the armature-current control variables. The
orientation according to the rotor position (i.e. exciting-field orientation) is
applied for self-commutation and for generation of the armature-voltage control
variables for the inverter control. The transition between the two orientations is performed by a coordinate transformation block (CooT), which rotates the stator-field-oriented reference frame by the value of the load angle (δ = λ_s − θ).
$$ \begin{aligned} \frac{d\Psi_{sd\theta}}{dt} &= u_{sd\theta} - R_s \cdot i_{sd\theta} + \omega \cdot \Psi_{sq\theta}; \\ \frac{d\Psi_{sq\theta}}{dt} &= u_{sq\theta} - R_s \cdot i_{sq\theta} - \omega \cdot \Psi_{sd\theta}; \\ \frac{d\Psi_e}{dt} &= u_e - R_e \cdot i_e; \\ \frac{d\Psi_{Ad}}{dt} &= u_{Ad} - R_{Ad} \cdot i_{Ad}; \\ \frac{d\Psi_{Aq}}{dt} &= u_{Aq} - R_{Aq} \cdot i_{Aq}; \\ \frac{d\omega}{dt} &= \frac{z_p}{J_{tot}} \left( \frac{3}{2} z_p \left( \Psi_{sd\theta} \cdot i_{sq\theta} - \Psi_{sq\theta} \cdot i_{sd\theta} \right) - m_L \right), \end{aligned} \qquad (1) $$
The integration of the state equations is made directly from the derivatives of the angular rotor speed and of the fluxes; the currents are then computed from the fluxes, expressed according to the longitudinal dθ rotor axis:

$$ \begin{aligned} i_{sd\theta} &= \frac{1}{L''_{sd}} \Psi_{sd\theta} - \frac{1}{L''_{m(sd\theta-Ad)}} \Psi_{Ad} - \frac{1}{L''_{m(sd\theta-e)}} \Psi_e \\ i_{Ad} &= -\frac{1}{L''_{m(sd\theta-Ad)}} \Psi_{sd\theta} + \frac{1}{L''_{Ad}} \Psi_{Ad} - \frac{1}{L''_{m(e-Ad)}} \Psi_e \\ i_e &= -\frac{1}{L''_{m(sd\theta-e)}} \Psi_{sd\theta} - \frac{1}{L''_{m(e-Ad)}} \Psi_{Ad} + \frac{1}{L''_e} \Psi_e \end{aligned} \qquad (2) $$

and according to the quadrature qθ rotor axis:

$$ i_{sq\theta} = \frac{1}{L'_{sq}} \left( \Psi_{sq\theta} - \frac{L_{mq}}{L_{Aq}} \Psi_{Aq} \right), \qquad i_{Aq} = \frac{1}{L''_{Aq}} \left( \Psi_{Aq} - \frac{L_{mq}}{L_{sq}} \Psi_{sq\theta} \right). \qquad (3) $$

The fluxes were chosen as state variables, i.e. the direct- and quadrature-axis components of the stator flux ($\Psi_{sd\theta}$ and $\Psi_{sq\theta}$) and of the damper-winding flux ($\Psi_{Ad}$ and $\Psi_{Aq}$), the exciting-winding flux ($\Psi_e$), together with the rotor electrical angular speed ($\omega$).
Cs. Szabó, M. Imecs, I. I. Incze 19
Figure 1: Vector control system of the adjustable excited synchronous motor fed by a static frequency converter with feed-forward voltage-PWM and double field orientation, operating with controlled stator flux and imposed unity power factor.
In the third control loop the resultant stator-flux is directly controlled with a
PI controller, which outputs the ims magnetizing current, necessary for the
computation of the excitation current in the IeC block [2], [8].
With $S = \sin\theta$ and $C = \cos\theta$, $dS/dt = \cos\theta \cdot d\theta/dt$ and

$$ \frac{dC}{dt} = -\sin\theta \cdot \frac{d\theta}{dt}, \qquad (9) $$

so that

$$ \omega = \sqrt{\left( \frac{dS}{dt} \right)^2 + \left( \frac{dC}{dt} \right)^2} = \sqrt{\cos^2\theta \left( \frac{d\theta}{dt} \right)^2 + \sin^2\theta \left( \frac{d\theta}{dt} \right)^2} = \left| \frac{d\theta}{dt} \right|. \qquad (10) $$
Using this procedure the zero crossing is avoided, and an accurate result is obtained, as shown in Fig. 3.
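A numerical sketch of the procedure (with an assumed constant-speed position signal; np.gradient stands in for the hardware differentiators):

```python
import numpy as np

def speed_from_sincos(t, theta):
    """Eq. (10): w = sqrt((dS/dt)^2 + (dC/dt)^2) with S = sin(theta),
    C = cos(theta) -- no division, hence no zero-crossing problem."""
    dS = np.gradient(np.sin(theta), t)
    dC = np.gradient(np.cos(theta), t)
    return np.sqrt(dS**2 + dC**2)

t = np.linspace(0.0, 1.0, 10001)
w_true = 50.0                        # assumed constant speed in rad/s
w = speed_from_sincos(t, w_true * t)
# interior samples match the true speed up to discretization error
print(bool(np.allclose(w[1:-1], w_true, rtol=1e-3)))  # True
```

Dividing dS/dt by C instead would blow up at every zero crossing of the cosine; the quadratic combination above stays bounded everywhere.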
Figure 2: The rotor angular speed computed with the classical method, leading to an inaccurate result.

Figure 3: The rotor angular speed computed using the proposed method, avoiding the division-by-zero issue.
The sign of the position signal θ gives the direction of the rotation. The computation of the angular speed is based on the following expression:

$$ \omega = |\omega| \cdot \mathrm{sign}(\theta) = \sqrt{\left( \frac{d(\sin\theta)}{dt} \right)^2 + \left( \frac{d(\cos\theta)}{dt} \right)^2} \cdot \mathrm{sign}(\theta), \qquad (11) $$
and its computation may be processed with the structure presented in Fig. 4 [4].
Figure 4: Block symbol and structure for computation of the rotor angular speed based
on the encoder position signals.
5. Simulation results
Based on the structure from Fig. 1 simulations were performed in Matlab-
Simulink® environment. The rated data of the simulated salient pole Ex-SyM
are: UsN = 380 V, IsN = 1.52 A, PN = 800 W, fN=50 Hz, nN = 1500 [rpm], cosφ=
0.8 (capacitive).
Figure 5: Electrical angular speed (ω), electromagnetic torque (m_e) and load torque (m_L) versus time.

Figure 6: The power factor and the stator-flux amplitude versus time.
Figure 9: The armature-current two-phase components (i_sdλs and i_sqλs) in the stator-flux-oriented coordinate frame.

Figure 10: The trajectory of the stator-flux space-phasor in the natural stator-fixed coordinate frame.
After the starting process the motor runs at the rated speed value, corresponding to a frequency of 50 Hz. At t = 1 s, under the full rated load, a speed reversal is applied. The mechanical load has a reactive character and is linearly speed-dependent. The simulation results show that the proposed control structure of Fig. 1 is viable, with improved performance with respect to conventional vector control systems [2].
The results show good performance of the drive also in transient operation, both at starting and at speed reversal (Fig. 5). The power factor remains at its maximum also during the speed reversal, when the drive operates in regenerative mode for a short period of time, as shown in Fig. 6. Unity power factor is achieved by canceling the stator-field-oriented longitudinal armature reaction, as in Fig. 9.
6. Conclusion
The presented control structure uses two types of orientations: resultant
stator-field and exciting-field, i.e. rotor-position orientation.
For a rigorous control of the power factor, stator-field orientation was
applied, and in order to achieve unity power factor operation in the reactive
control loop the stator-flux oriented longitudinal armature reaction was
cancelled.
In order to obtain improved control performance, the computation of the control variables was made in the rotor-oriented reference frame, so the self-control of the motor and the synchronization of the inverter trigger signals are based on a directly measured value of the rotor position.
In the control structure of voltage-controlled Ex-SyM drives, the dual field orientation combines the advantages of the two types of field-orientation procedure: on the one hand, the stator-field orientation suitable for power-factor control, and on the other hand, the exciting-field orientation for computing feedback and control variables based on the rotor-position-oriented classical mathematical model, which handles the calculus required by the geometry of the machine, i.e. the two-axis symmetry of the salient-pole rotor.
The applied computation procedure of the rotor angular speed avoids the division by zero and gives accurate results; it is equally suitable for computing the synchronous angular speed of the rotating resultant orientation flux in any field-oriented vector control structure, including induction motor drives.
The presented control structure was validated by simulation in Matlab/Simulink®, and the obtained results show the reliability of the method.
The practical implementation was realized on an experimental rig based on
the dSpace DS1104 controller board. The results were published in [8].
References
[1] Kelemen, Á., and Imecs, M., “Vector Control of AC Drives, Volume 1: Vector Control of
Induction Machine Drives”, OMIKK-Publisher, Budapest, Hungary, 1991.
[2] Kelemen, Á., Imecs, M., “Vector Control of AC Drives, Volume 2: Vector Control of
Synchronous Machine Drives” Ecriture Budapest, Hungary, 1993.
[3] W. Leonhard, “Control of Electrical Drives”, Springer Verlag. Berlin, Heidelberg, New
York, Tokyo, 1985.
[4] Szabó, Cs., “Implementation of Scalar and Vector Control Structures for Synchronous
Motors (in Romanian)”, PhD Thesis, Technical University of Cluj-Napoca, Romania,
2006.
26 Synchronous Motor Drive at Maximum Power Factor with Double Field-Orientation
[5] Kazmierkowski, M. P., Tunia, H., “Automatic Control of Converter-Fed Drives”, Elsevier,
Amsterdam, 1993.
[6] Vauhkonen, V., “A cycloconverter-fed synchronous motor drive having isolated output
phases”, in Proc. International Conference on Electrical Machines, ICEM ’84, Lausanne,
Switzerland, 1984.
[7] Imecs, M., Szabó, Cs., Incze, I. I., “Stator-field-oriented control of the variable-excited
synchronous motor: numerical simulation”, in Proc. 7th International Symposium of
Hungarian Researchers on Computational Intelligence HUCI 2006, Budapest, Hungary,
2006, pp. 95-106.
[8] Szabo, C., Imecs, M., Incze, I. I., “Vector control of the synchronous motor operating at
unity power factor”, in Proc. 11th International Conference on Optimization of Electrical
and Electronic Equipment, OPTIM 2008, Brasov, Romania, 2008, vol. II-A pp. 15-20.
[9] Szabo, C., Imecs, M., Incze, I. I., “Synchronous motor drive with controlled stator-field-
oriented longitudinal armature reaction”, in Proc The 33rd International Conference of the
IEEE Industrial Electronics Society, IECON 2007, Taipei, Taiwan, 2007, CD-ROM
[10] Davoine, J., Perret, R., Le-Huy H., “Operation of a self-controlled synchronous motor
without a shaft position sensor”, Trans. Ind Applications, IA-19, no. 2, pp. 217-222,
March/April 1983.
[11] Shinnaka, S., Sagawa, T., “New optimal current control methods for energy-efficient and
wide speed-range operation of hybrid-field synchronous motor,” IEEE Trans. Ind
Electronics, vol. 54, no. 5, pp. 2443-2450, Oct. 2007.
[12] Imecs, M., Incze, I. I., Szabó, Cs., “Double field orientated vector control structure for
cage induction motor drive”, Scientific Bulletin of the „Politehnica” University of
Timisoara, Romania, Transaction of Power Engineering, Tom 53(67), Special Issue, pp.
135-140, 2008.
[13] Imecs, M., Incze, I. I., Szabó, Cs., “Dual field orientation for vector controlled cage
induction motors”, in Proc. of the 11th IEEE Internat. Conference on Intelligent
Engineering Systems, INES 2009, Barbados, 2009, pp 143-148.
[14] Imecs, M., Szabó, Cs., Incze, I. I., “Stator-field-oriented vector control for VSI-fed
wound-excited synchronous motor”, in Proc. International Aegean Conference on
Electrical Machines and Power Electronics, ACEMP-ELECTROMOTION Joint
Conference, Bodrum, Turkey, 2007, pp. 303-308.
[15] Wallfaren, W, “Method and apparatus for determining angular velocity from two signals
which are function of the angle of rotation”, US Patent No.4814701, Mar. 21 1989.
[16] Bose, B. K., “Modern Power Electronics and AC Drives”, Prentice-Hall PTR, Prentice-Hall Inc., Englewood Cliffs, New Jersey, USA, 2002.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 27-39
1. Introduction
The incremental encoder is a device which provides electrical pulses when its shaft rotates [1], [2], [4]. The number of generated pulses is proportional to the angular position of the shaft. The incremental encoder is one of the most frequently used position transducers. The principle of the optical incremental encoder is presented in Fig. 1. Together with the shaft rotates a transparent (usually glass) rotor disc with a circular graduation track, realized as a periodic sequence of transparent and non-transparent radial zones, which modulates the light beams emitted by a light source placed on one side of the disc, on the fixed part (stator) of the encoder. On the opposite side, the modulated light beams are sensed by two groups of optical sensors and processed by electronic circuits. Each of the two outputs of the encoder (denoted A and B) generates one pulse when the shaft rotates by an angle equal to the angular step of graduation θ_p, i.e. the angle corresponding to one successive transparent and non-transparent zone. The number of pulses (counted usually by external electronic counters) is proportional to the angular position of the shaft. Because the two light beams are shifted relative to each other by an angle equal to a quarter of the angular step of graduation, θ_p/4, the pulses of the two outputs are also shifted, making possible the determination of the sense of rotation. A third light beam is modulated by another track with a single graduation. The output signal (denoted Z) associated with this third beam provides a single pulse in the course of a complete (360°) rotation. The shaft position corresponding to this pulse may be considered the reference position. Fig. 2 shows the output pulses of the encoder.
Usually for counter-clockwise (CCW) direction θ is considered as positive,
and for clockwise (CW) direction it is considered negative.
$$ Z(\theta) = \begin{cases} 1 & \text{if } \theta \bmod 2\pi = 0; \\ 0 & \text{if } \theta \bmod 2\pi \neq 0. \end{cases} $$
During a rotation of the shaft by an angle equal to the angular step of graduation θ_p, there are four switching events in the output pulses; therefore the minimal rotation-angle increment detectable by the encoder is θ_p/4 [3]. The number of pulses generated by the encoder in the course of one rotation is equal to the number of angular steps of the graduation on the circular track of the rotor:
$$ N_r = \frac{2\pi}{\theta_p} \qquad (2) $$
Based on (1), a Matlab/Simulink® simulation structure, shown in Fig. 3, was built. The outputs A, B and Z are computed by Simulink® function blocks, and θ_p is defined by a constant block. The structure is saved as a subsystem, so the simulation structure of the incremental encoder may be integrated into any other Simulink® structure.
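The encoder model can also be sketched numerically as follows (a minimal sketch: the half-step duty cycle, the narrow Z pulse and the 1024-step graduation are modeling assumptions, not values from the paper):

```python
import numpy as np

def encoder_outputs(theta, theta_p):
    """Simple incremental-encoder model: quadrature outputs A and B
    (B shifted by a quarter step) and the once-per-revolution marker Z."""
    A = (np.mod(theta, theta_p) < theta_p / 2).astype(int)
    B = (np.mod(theta - theta_p / 4, theta_p) < theta_p / 2).astype(int)
    Z = (np.mod(theta, 2 * np.pi) < theta_p / 4).astype(int)
    return A, B, Z

theta_p = 2 * np.pi / 1024               # assumed 1024-step graduation
theta = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
A, B, Z = encoder_outputs(theta, theta_p)
rising_edges = int(np.sum((A[1:] == 1) & (A[:-1] == 0)))
print(1020 <= rising_edges <= 1024)      # about one rising edge per step: True
```

Counting the rising edges of A over one revolution recovers (approximately) the number of graduation steps, which is the basis of the position counting described next.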
The shaft angle results from the algebraic sum of the pulses counted in the two directions:

$$ \theta = \theta_p \left( \sum_i N_i^{CCW} - \sum_j N_j^{CW} \right) = \theta_p N. \qquad (3) $$
In order to compute the algebraic number of pulses it is necessary to know
the direction of the rotation.
Figure 3: Simulink® simulation structure of the incremental encoder (IE subsystem), built from modulo, relational-operator and logic blocks producing the outputs A, B and Z.
A simple solution to this problem may be sampling the logic value of output A at every rising edge of output B. The resulting logic value is 1 for the counterclockwise (CCW) direction of rotation and 0 for the clockwise (CW) direction. The method detects a change of direction only after a time interval corresponding to a rotation of 3θ_p/4 to 5θ_p/4.
Figure 4: The position-computing structure: a counter accumulates ΣN from the
decoded pulses, with the direction signal S and a reset by the first Z pulse.
ω = dθ/dt ≅ Δθ/Ts ≅ 2π ΔN / (Nr Ts) = θp ΔN / Ts   (4)
Due to the lack of synchronization between the sampling period and encoder
pulses a quantization error occurs. The relative error of the procedure is given
by
εf = (1/ω) · 2π / (Nr Ts)   (5)
and depends on the reciprocal of the speed, the measuring interval and the
resolution of the encoder [5].
Figure 5: The principle of the speed identification based on the frequency measurement.
The speed calculation structure based on frequency measurement is
presented in Fig. 6. In order to enhance the precision, the “Logic x4” block
multiplies the frequency of the encoder signals by 4. Two alternately reset
and enabled counters count the number of pulses; the content of the
just-disabled counter is used for the speed computation.
The speed identification based on frequency measurement produces
relatively small errors at high speed, because the number of pulses from the
encoder in the measurement-time interval is high.
Figure 6: The speed-computing structure based on frequency measurement
(“Speed-f” subsystem): the “Logic x4” block, two alternately enabled counters
and the speed- and error-computing blocks.
Figure 7: The principle of the speed identification based on the period measurement.
In this case the expression of the angular speed is
ω = dθ/dt ≅ Δθ/(n Thf) ≅ 2π / (Nr n Thf)   (6)
where n represents the counted number of high-frequency pulses. The
relative error increases with the rotation frequency and is given by [5]
εp = Nr ω Thf / (2π)   (7)
The speed calculation structure based on period measurement is presented in
Fig. 8.
Figure 8: The speed-computing structure based on period measurement
(“Speed-p” subsystem): a counter driven by the HF clock and gated by the
encoder signals, with the speed- and error-computing blocks.
Figure 9: Selection between the two speed-identification methods by comparing
the errors εp and εf.
5. Simulation results
The structure of the interconnected functional units for simulation of
position computing is shown in Fig. 10. The reference angular position θref (the
input signal of the encoder block “IE”) is generated by a user programmable
“Function generator” block. The encoder generates the A, B and Z signals.
Based on these, the block “Poz” computes the position θ and the block “Speed”
provides the computed angular speed. In order to test the structure, the
function generator was programmed to generate, at the start of the simulation,
a positive ramp-reference angle, which is the input signal for the “IE”
encoder block. At 0.2 s the ramp is switched to negative (equivalent to a
reversal from CCW to CW), decreasing in time until 0.8 s, when it is again
switched to positive.
Incremental Encoder Based Position and Speed Identification: Modeling and Simulation 35
Figure 10: The structure of the interconnected functional units for simulation
of the position and speed computation:
IE – incremental encoder, Poz – position computing block,
Speed – speed computing block.
The time profile of the generated reference angle is presented in Fig. 11 a)
(top trace). The block “Poz”, using the encoder output signals, determines the
direction of the rotation (in Fig. 11 a) bottom trace) and computes the position
(shown in Fig. 11 a) middle trace). The computed position follows the
reference very well. Fig. 11 b) presents an enlarged detail of the superposed
reference and computed angles before and after the reversal at 0.2 s. The
incremental character of the computed position is evident.
Figure 11: The simulation results representing the reference angle and computed
angle during reversal.
a) From top to bottom: reference angle θref, computed angle θ, direction signal S.
b) Detail of the reference angle θref and computed angle θ versus time.
The reversal process was also analyzed. The simulated results are presented
in Fig. 12 a)–d). The parameters of the function generator were selected such
that all possible combinations of signals A and B at reversal (presented in
Table 1) were captured. In all cases the reversal is sensed within a quarter
of the angular step, as shown in Fig. 12.
36 I. I. Incze, A. Negrea, M. Imecs, Cs. Szabó
Figure 12: The simulation results showing all combinations of the reversal process.
Left column: Reversal from CCW to CW, Right column: Reversal from CW to
CCW, top trace: output A, middle trace: output B, bottom trace: direction signal.
Reversal occurs at: a) A=0, B=0; b) A=0, B=1; c) A=1, B=0; d) A=1, B=1.
Figure 13: The A, B and Z signals of the encoder at crossing the reference position:
a) in CCW direction, b) in CW direction.
In order to test the speed identification, the function generator was
programmed for a linearly increasing and decreasing speed profile. Fig. 14
shows the theoretical speed profile and the computed speed. Fig. 15 presents
the variation of the errors εf and εp. The switching between the two methods
occurs at 0.2 s and 0.8 s, respectively.
Figure 14: The simulation results showing the reference and calculated speed.
Figure 15: Variation of the error versus time for the two speed-calculation
methods.
The structure presented in Fig. 10 may be integrated in the simulation
structures of electrical drives [2], [5]. In this case the input signal of the
encoder (i.e. the angular position) is provided by the mathematical model of
the electrical machine, and the computed position and speed are used as
feedback signals by the control system of the drive.
The conditions used in the simulations are: Nr = 500, Thf = 4 μs, Ts = 6 ms;
the simulation step was 1 μs.
6. Experimental results
In order to investigate different position and speed determination algorithms
an experimental set-up is under construction (see Fig. 16). The incremental
encoder (type 1XP8001-1) is mounted on the shaft of an induction motor driven
by a static frequency converter (Micromaster, Siemens). The encoder signals
are processed by an experimental board built around a DSP-based development
board from Spectrum Digital.
Figure 16: The experimental set-up: PC, storage scope (TDS3014), Micromaster
frequency converter, experimental board, incremental encoder (IE) mounted on
the shaft of the induction motor (IM).
Figure 17: Captured encoder output signals
a) for CCW direction versus time;
Top: Signal A, Middle: Signal B, Bottom: Marker signal Z;
b) for direction reversal from CW to CCW direction versus time;
Top: Signal A, Bottom: Signal B.
A comparison of the above figures to Fig. 13 a) and Fig. 12 a) shows that
the captures are very close to the simulated results.
7. Conclusion
The information provided by incremental encoders is inherently digital.
The angular position of the encoder shaft is obtained by algebraically
summing the number of pulses provided by the encoder for CCW and CW
rotation.
The direction of the rotation may be determined by a digital decoding
scheme using the two quadrature signals. The direction changes are detected in
an angular interval equal to a quarter of the angular step of the graduation.
The frequency of the pulses generated by the encoder is proportional to the
speed of rotation. The error of the measurement is inversely proportional to
the speed; therefore the procedure is appropriate for the high-speed region.
At low speeds the measurement of the period of the encoder pulses is
recommended, since the measurement error decreases as the speed decreases.
In the case of large speed variations, switching between the two described
methods minimizes the errors.
The presented simulation structure of the incremental encoder, position and
speed computation may be integrated in any Matlab-Simulink® structure.
References
[1] Incze, J. J., Szabó, Cs., and Imecs, M., “Modeling and simulation of an incremental encoder
used in electrical drives”, in Proc. of 10th International Symposium of Hungarian
Researchers on Computational Intelligence and Informatics CINTI 2009, Budapest,
Hungary, 2009, pp. 97-109.
[2] Incze, I. I., Szabó, Cs., and Imecs, M., “Incremental encoder in electrical drives: modeling
and simulation”, in Studies in Computational Intelligence, I. J. Rudas, J. Fodor, and J.
Kacprzyk, Eds., Springer Verlag, Germany, in press.
[3] Koci, P., and Tuma, J., “Incremental rotary encoders accuracy”, in Proc. of International
Carpathian Control Conference ICCC 2006, Roznov pod Rashosten, Czech Republic, 2006,
pp. 257-260.
[4] Lehoczky, J., Márkus, M., and Mucsi, S., „Szervorendszerek, követő szabályozások”,
Műszaki Kiadó, Budapest, Hungary, 1977.
[5] Petrella, R., Tursini, M., Peretti, L., and Zigliotto, M., “Speed measurement algorithms for
low resolution incremental encoder equipped drives: comparative analysis”, in Proc. of
International Aegean Conference on Electrical Machines and Power Electronics, ACEMP-
ELECTROMOTION Joint Conference, Bodrum, Turkey, 2007, pp. 780-787.
[6] Miyashita, I., and Ohmori, Y., “A new speed observer for an induction motor using the
speed estimation technique”, in Proc. of European Power Electronics Conference EPE΄93,
Brighton, United Kingdom, 1993, vol. 5, pp. 349-353.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 40-50
Abstract: The aim of this paper is to present partly the Measurement Automation
System (MAS) of an Electrolytic Capacitor Development Laboratory at EPCOS
Hungary. The main role of the MAS is to facilitate the electrolyte and capacitor research
and development, through the automation of the related measurement tasks, and to
provide a powerful database system background for data retrieval and research decision
support. The paper introduces only a few applications of the entire system. More than
27 different electrolyte and capacitor measurements were automated. All the
measurements have been implemented in a similar manner. During the process the user
initializes the measurement, sets the measurement environmental parameters, and
launches the execution. The program runs on its own, sending automatically the results
of the measurement to a database system, from where the data can be retrieved in a
predefined or a non-predefined way. For the realization of the above requirements the
National Instruments measurement, data acquisition and LabVIEW software
development tools were chosen as the implementation and development platform. After
validation of the system, the measurements are more precise, more reliable, fault
tolerant and can run in parallel, all of which contributes to speeding up the research
and development of new components and devices.
The developed measurement system controls and harmonizes the different devices
and supervises their work; the developers do not have to intervene. The user can
simply check the measurement phase with a glance at the screen. The programs
estimate and display the running time of the experiments, allowing the
researchers working in the laboratory to manage the instrument resources and to
schedule new measurements in advance (for hours, weeks and months). Another big
advantage is the underlying database system, which stores the result of each
measurement in an easily searchable way. Different measurement reports and
statistical diagrams can be generated automatically and the results can be
reused in later research. The effectiveness of the
D. Fodor 41
system was also tested via the inner gas-pressure measurement of electrolytic
capacitors, used to estimate their life-span. According to the experiments,
since the introduction of the MAS in the laboratory the research and
development time for new electrolytes and capacitors has decreased considerably.
1. Introduction
Capacitors play a very important role in our world [1], [2]. They can be
found in every electronic device around us, they are widespread all over the
world used as energy storage elements, filters and decouplers.
The main features of capacitors are: capacitance (1 pF – 1 F), operational
voltage (from 1.5 V up to some kV), operational temperature (from −55 °C to
125 °C), loss factor, size and shape.
The most frequently used capacitor types in the industry nowadays are:
ceramic, foil, aluminium and tantalum capacitors. The four most important
application fields for capacitor technologies are radio techniques, electrical
power processing, energy storage and power electronics. Electrolytic
capacitors can be used in all of these fields except the first, so this type
of capacitor is prevalent.
The main advantage of electrolytic capacitors is their high capacitance and
voltage rating, which can be attributed to the very thin but very
large-surface dielectric layer. Their disadvantage is overvoltage sensitivity.
The main characteristics of the electrolytic capacitors are determined by the
electrolyte, the anode foil and the paper separator.
The electrolyte generally consists of the following components:
• solvent: e.g.: ethylene glycol,
• acids and bases: usually organic,
• different additives.
The electrolytes are characterized by two major features: conductivity and
breakdown potential, both of them dependent on the temperature. The change of
conductivity as a function of temperature decisively affects the electric
parameters of the capacitor. The chemical reactions, which take place inside the
electrolyte, are in direct relationship with the conductivity value at different
temperatures and the quantification of this relationship is important.
The conductivity and breakdown potential of the electrolyte influence the
maximum operating conditions of the capacitors. Electrolytes with high
42 Al. Electrolytic Cap. Research and Dev. Time Optim. Based on a Meas. Automation System
conductivity are used in the low voltage capacitors, while the electrolytes with
low conductivity are used in the high voltage capacitors.
The paper is organized as follows. The short descriptions of the
measurement types, which must be automated, are given in Section 2. The
architecture of the proposed measurement system is presented in Section 3. In
Section 4 a characteristic measurement “Conductivity (T)” is presented in detail
in order to demonstrate the program structure and some implementation issues.
Some characteristic measurement results are given in Section 5, and the
conclusions are presented in Section 6.
2. Measurement types
There are two groups of measurements used in electrolyte-capacitor research
and development. The first main measurement group is related to electrolytes,
while the second main measurement group is related to capacitors. The
electrolyte measurement consists of six measurement programs as follows:
• “Conductivity (T)”: measurement of the temperature dependence of
conductivity. This is one of the most important measurements and is
presented in detail in Section 4.
• “pH (T)”: measurement of the pH value as a function of temperature.
The structure of the program is similar to the above-mentioned one,
except that a pH meter is used instead of a conductivity meter. The
measurement is important because the pH value of the electrolyte used
in the electrolytic capacitor must be in a specified range.
• “Mixing (pH with single temperature)”: measurement of the pH value
as a function of the concentration of an electrolyte composition at a
specified temperature. This measurement is in fact used to adjust the
pH value of the electrolyte.
• “Mixing (conductivity with single temperature)”: measurement of the
conductivity as a function of the concentration of an electrolyte
composition at a specified temperature.
• “Mixing (conductivity with multi temperature)”: measurement of the
conductivity as a function of the concentration of an electrolyte
composition at several temperatures. This and the former measurement
are used to adjust the conductivity value of the electrolyte.
• “Spark detector”: measurement of the breakdown potential of the
electrolyte.
4. “Conductivity(T)” measurement
The base of the whole software system is a framework which was originally
designed to provide a common user interface for the different measurement
programs. In spite of the fact that all the measurements have an individual
character, they were integrated into the above mentioned framework, in order to
manage the communication ports and instruments, and to provide the parallel
run of the programs.
The measurement system includes at least 27 different measurements, which
cannot all be presented here. All have been implemented in a similar manner:
the user initializes the measurement, sets the measurement parameters,
launches the execution and leaves the program to run on its own, sending the
results of the measurement to a database system. This process is presented
below through the “Conductivity (T)” measurement.
Two instruments are involved in this measurement: the thermostat and the
conductivity meter.
Before launching the program the user initializes the measurement. During
the measurement the user can choose between the two main tab controls shown
in Fig. 3. The “Set parameters” tab contains the parameter set of the
measurement, while the “Measurement” tab shows the state flow of the
measurement, graphs and displays. The program executes the same steps
cyclically. The state diagram shows the current state of the measurement and
the remaining time before the next phase. Firstly the program sets up the
temperature. On the temperature graphs the temperature of the measuring
probe and of the thermostat can be followed. During the stabilization time
the temperature of the electrolyte becomes the same as the thermostat’s.
During the measuring & saving phase the conductivity values are measured and
stored locally. The remaining time before the next measurement is indicated
with a progress bar. After the measuring & saving phase the program
calculates the mean value of the stored conductivity data and only the
result is migrated into the database. On the corresponding graphs the
conductivity and the temperature can be seen as a function of the
measurement number. These are the most important graphs, because the forming
of the electrolyte can be directly correlated with the temperature. The
measurement ends after the conductivity has been measured at all of the set
temperatures. The remaining time before the end of the experiment is shown
during the measurements, and the program can be stopped with the STOP button.
5. Results
Fig. 4 demonstrates how decisions are supported by the automated
measurement system. Each dot represents a separate “Conductivity (T)” and
“Sparkling voltage” measurement result at 85 °C obtained by the automated
system. By graphical evaluation, electrolytes with a specific conductivity or
sparkling voltage can be selected for further research.
With the help of the automated system, the forming of an electrolyte as a
function of the temperature can be followed. Fig. 5 demonstrates measurement
results for Electrolyte 1 and Electrolyte 2, which have been obtained using
the “Conductivity (T)” measurement program. The conductivity of the
electrolyte is measured more than once (usually 10 times) at the specified
temperatures, and a mathematical mean is calculated from the stored values.
6. Conclusions
More than 27 different electrolyte and capacitor measurements have been
automated, all implemented in a similar manner: the user initializes the
measurement, sets the measurement parameters, launches the execution and
leaves the program to run on its own, sending the results of the measurement
to a database system, from where the data can be retrieved in a predefined or
a non-predefined way. After validation of the MAS, the measurements are more
precise, more reliable and fault tolerant (i.e. monitoring functions are
implemented, such as detection
Acknowledgements
The author fully acknowledges the support of the National Office for
Research and Technology via the Agency for Research Fund Management and
Research Exploitation (KPI), under the research grants GVOP-3.2.2.-2004-07-
0022/3.0 (KKK), GVOP-3.1.1.-2004-05-0029/3.0 (AKF) and more recently for
TAMOP-4.2.1/B-09/1/KONV-2010-0003: Mobility and Environment: Researches
in the fields of motor vehicle industry, energetics and environment in
the Middle- and West-Transdanubian Regions of Hungary. The Project is
supported by the European Union and co-financed by the European Regional
Development Fund.
References
[1] Theisbürger, K. H., “Der Elektrolyt-Kondensator”, FRAKO Kondensatoren- und Apparatebau
G.m.b.H., Teningen, third edition, EPCOS internal document.
[2] Per-Olof Fägerholt, “Passive components” (internal document), 1999.
[3] National Instruments, “Product guide”, 2004.
[4] ***, National Instruments Homepage,
http://zone.ni.com/devzone/conceptd.nsf/webmain/5D1A4BAB15C82CC986256D3A0058
C66C?OpenDocument&node =1525_us.
[5] ***, National Instruments Homepage, http://www.ni.com/labview/whatis/.
[6] ***, “Haake DC30 Circulator- Technical Manual”, Thermo Electron Corporation, 2002.
[7] ***, Metrohm AG Homepage, www.metrohm.com.
[8] ***, “780 pH Meter Instruction Manual”, Metrohm Ltd., Switzerland, 2002.
[9] ***, “712 Conductometer Instruction Manual”, Metrohm Ltd., Switzerland, 2002.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 51-62
Abstract: This paper proposes a framework for modeling, simulation and formal
verification of embedded real-time applications running over a real-time multitasking
kernel. We extend a simple real-time kernel (RTOS) with a synchronous and
asynchronous message-passing interface for communication between tasks and
drivers. At the same time, some embedded-system-specific drivers have been
added, allowing unified resource access through these interfaces.
control system as a set of tasks interacting with events occurring irregularly in time
(alarms, user commands, communication) and regularly in time (sampled sensor data
and actuator control signals). Taking into consideration both non-preemptive and
preemptive scheduling, we propose two models consisting of networks of timed
automata. Using a model-checker tool (UPPAAL), one can verify the timing and
logical properties of an application while changing the time constraints and
priorities. In a priority-based scheduling scheme, tasks interact both through
the scheduler and through the mutual-exclusion mechanism, but these are
hidden from the engineer by the framework.
The framework also offers a solution for generating the source code skeleton
of the modeled application. This reduces the risk of errors due to error-prone
human coding and, most importantly, ensures that the tasks will have the same
behavior as described in the model.
1. Introduction
Real-time embedded systems have become widely used in a large number of
fields, especially in the industrial environment; they play an increasing role
in modern society and are rapidly evolving, growing in complexity. Moreover,
they are often used not only by themselves, but in clusters and networks.
52 L. Hategan, P. Haller
INFINITE_LOOP
- Request access to a resource (blocking call)
- Perform a read/write operation
- Perform a computation
- Request access to another resource (blocking call)
- Perform a read/write operation
……
END_INFINITE_LOOP
Resources are accessed by tasks through blocking request calls. The desired
resource is explicitly specified through its RID (resource ID) [7]. After making
a request call, the task enters a blocked state, where it waits for the
resource to become available.
The FreeRTOS tasks are prioritized.
The computations performed by the tasks are characterized by a worst-case
execution time (WCET) and a best-case execution time (BCET).
3. The resources
In constructing the general resource model, we considered the following:
resources are reusable and can be shared (but only one task can access a
resource at any given time); a task can request a single resource at a time,
and in the request call the resource is explicitly specified through its
resource ID. Every resource has a minimal inter-arrival time (MIAT). A
resource can unblock a waiting task and provide data at any time after its
MIAT expires.
Task1 {
    Loop {
        Request(RID);
        Read(RID, var_rid, nr);
        computation();
    }
}
If there is no task running or ready, the FreeRTOS kernel schedules the Idle
Task (Fig. 2), which is always available for scheduling. Following the general
task form, the Idle Task periodically requests the NULL resource, yielding
processor control.
Figure 2: The timed-automaton model of the Idle Task (states READY and RUN;
it periodically requests the NULL resource).
Figure 3: The timed-automaton model of the cooperative scheduler (states
INIT, SELECT and IDLE).
Rotate(pid) x<=5
INIT: the necessary hardware settings and initialization of task priorities and
data structures take place. Because the state is committed, the scheduler will
leave this state immediately at startup.
SELECT: the ready task with the highest priority is chosen for scheduling
(the GetNextTask() function). The invariant x<=5 specifies the time needed to
select the next task; the value can be changed to match the actual physical
time, which is hardware-dependent.
Framework for Modeling, Verification and Implementation of Real-Time Applications 57
Figure 4: The timed-automaton model of a resource (state RESOURCE; the guard
x>all_period[rid] models the MIAT).
The MIAT values for all of the system's resources are stored in the array
all_period[NR_RESOURCES]. The MIAT is modeled by the guard
x>all_period[rid]. The waiting task is unblocked via the event_or_timer[rid]!
channel. The RES_WCET[rid] and RES_BCET[rid] constants are used to delay
the task interrupted by the resource ISR.
In case a task must execute an action periodically, at strict intervals, it
can utilize the timer resource (Fig. 5). The timer unblocks a waiting task
when the x clock has the same value as all_period[tid]. The constant tid
represents the timer's RID.
Figure 5: The timed-automaton model of the timer resource (states TIMER and
SYNC; if no task is waiting, the missed_timer[tid] counter is incremented).
The initial state TIMER is left when the predefined period expires. The
SYNC state is committed so it is left immediately, the automaton unblocking
any waiting task. In order to avoid system deadlock, the timer is allowed to
expire even if none of the tasks are blocked waiting for it.
58 L. Hategan, P. Haller
Figure 6: The timed-automaton model of a task under preemptive scheduling
(states READY1, READY2, RUN1–RUN3, SUSPENDED1–SUSPENDED3 and BLOCKED).
The Idle Task is the same as in the case of the cooperative model, except for
the fact that it doesn't suspend itself by requesting the NULL resource, but it is
preempted by the scheduler.
Figure 7: The preemptive scheduler model (a) and the tick interrupt model (b).
Liveness properties require that, in all cases, the system will eventually reach
a state where a formula p is true. Another form is that if a formula p is true,
another formula q will become true eventually:
- A<> Task1.RUN1 - Task1 will inevitably be in the RUN1 state at
some point;
- Timer(1).SYNC --> Task5.RUN2 - considering a blocked task
waiting for a timer, if Timer(1) expires then Task5 will inevitably be
scheduled.
Write operation:
- Display: RES_BCET=390 μs (1 char.); RES_WCET≈7 ms (20 char.);
- External storage: RES_BCET=54 μs (1 char.); RES_WCET≈100 μs (128 char.).
Read operation:
- Analog/digital converter: RES_BCET=RES_WCET=26.5 μs;
- Keys: RES_BCET=27.2 μs; RES_WCET=28 μs;
- Joystick: RES_BCET=27.2 μs; RES_WCET=28 μs.
The generator creates a header file containing the declarations for all the
tasks and a C source code file with their implementation. These resulting files
can be compiled in a project along with the FreeRTOS source code. Before
compilation, all that remains is to add the computational blocks containing the
algorithms that manipulate the data. The task code is identical in both cases,
cooperative and preemptive.
Alongside the models and source code generator, we constructed a source
code project that contains the FreeRTOS source, drivers including interrupt
mechanisms for the system's resources and the module for unified peripheral
access.
The source code project also includes a special task that can be used to
directly measure the time necessary for a sequence of code to execute on this
physical system (for use in the model). Also, we included functions to facilitate
the conversion of data from the type specific to a particular resource to another
resource's type. For example, to convert and copy the data from the state
variable associated with the system's analog-to-digital converter to the state
variable of the LCD display, one can use the function ADCtoLCD(res_adc,
res_lcd). The available functions and their execution times are:
- INTtoLCD(Integer, res_lcd): RES_BCET = 7.8 μs; RES_WCET = 44.1 μs;
- INTtoSD(Integer, res_sd): RES_BCET = 66.5 μs; RES_WCET = 305 μs;
- ADCtoSD(res_adc, res_sd): RES_BCET = 45.3 μs; RES_WCET = 333 μs;
- ADCgetINT(res_adc): RES_BCET = RES_WCET = 31.4 μs;
- ADCgetSTR(res_adc): RES_BCET = 17.5 μs; RES_WCET = 350 μs;
- LCDgetSTR(res_lcd): RES_BCET = RES_WCET = 18 μs;
- SDgetSTR(res_sd): RES_BCET = RES_WCET = 47.2 μs.
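The BCET/WCET figures above are observed extremes over repeated runs of each code sequence. A host-side sketch of that measurement idea is shown below; on the real board the framework's special task uses a hardware timer, so `time.perf_counter` and the measured function are stand-ins, not the framework's API.

```python
import time

def measure_bcet_wcet(func, runs=1000):
    """Estimate best- and worst-case *observed* execution time of a
    code sequence by repeated measurement, in microseconds. This only
    bounds what was seen in `runs` executions; it is not a static
    WCET analysis."""
    best, worst = float("inf"), 0.0
    for _ in range(runs):
        t0 = time.perf_counter()
        func()
        dt = (time.perf_counter() - t0) * 1e6  # seconds -> microseconds
        best, worst = min(best, dt), max(worst, dt)
    return best, worst

# Hypothetical stand-in for a conversion such as INTtoLCD().
bcet, wcet = measure_bcet_wcet(lambda: "%d" % 12345)
print(f"BCET = {bcet:.2f} us, WCET = {wcet:.2f} us")  # values vary per host
```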
62 L. Hategan, P. Haller
8. Conclusions
This paper presents a framework that can be used to model, verify and
implement real-time multitasking applications. The operating system, resources
and application tasks are modeled by timed automata. This approach allows for
the system's simulation and verification before the actual implementation,
permitting the early detection of undesirable behavior. The unified
resource access interface and the code generator make it possible to
automatically generate the modeled (and verified) application's source code,
avoiding most of the error-prone manual coding. Because the method is
susceptible to state-space explosion, the model must be abstracted as much as
possible, striking a compromise between model complexity and state-space size.
References
[1] Fersman, E., “A generic approach to schedulability analysis of real-time systems”, Ph.D.
Thesis, Faculty of Science and Technology, Uppsala University, November 2003.
[2] Waszniowski, L., and Hanzalek, Z., “Formal verification of multitasking applications based
on timed automata model”, Real-Time Systems, vol. 38, no. 1, Springer-Verlag, pp. 39-65,
2008.
[3] Zaharia, T., and Haller, P., “Formal verification and implementation of real time operating
system based applications”, in Proc. of the 4th IEEE International Conference on Intelligent
Computer Communication and Processing, Cluj-Napoca, Romania, pp. 299-302, 2008.
[4] FreeRTOS – portable, open source, mini Real Time Kernel; http://www.freertos.org
[5] UPPAAL – tool box for modeling and verification of real-time systems modeled as
networks of timed automata; http://www.uppaal.com
[6] Liu, J.W., “Real-time systems”, Prentice-Hall, Inc., Upper Saddle River, New Jersey 2000.
[7] Li, P., Ravindran, B., Suhaib, S., and Feizabadi, S., “A formally verified application-level
framework for real-time scheduling on POSIX real-time operating systems”, IEEE Trans.
Software Eng. vol. 9, no. 30, pp. 613-629, 2004.
[8] Hessel, A., Larsen, K. G., Mikucionis, M., Nielsen, B., Pettersson, P., and Skou, A.,
“Testing real-time systems using UPPAAL”, Formal Methods and Testing, Springer-
Verlag, pp. 77-117, 2008.
[9] Behrmann, G., David, A., and Larsen, K. G., “A tutorial on UPPAAL”, In Proceedings of
the 4th International School on Formal Methods for the Design of Computer,
Communication, and Software Systems (SFM-RT'04), LNCS 3185, Springer-Verlag, 2004.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 63-72
Abstract: The relational model provides extensive support for data integrity
constraints (i.e. business rules) specification, as an integral part of the data model.
Current Relational Database Management Systems (RDBMS), however, cover the
various categories of data integrity constraints only partially, mostly those directly
related to the database structure (e.g. entity integrity, referential integrity). The rest
are delegated to the application languages. Consequently, they are usually defined in a
function-oriented approach (e.g. object-oriented technology), losing their direct
link with the data model – with all the negative consequences in terms of system
scalability and logical data independence. The present paper proposes a data-oriented
approach for the development of the external level of database systems. Under the
proposed model, the external data is structured only by means of ordered sets of tuples
(i.e. arrays of tuples), and the corresponding business rules (i.e. the presentation rules)
are treated as external schema integrity constraints. Consequently, the application
developer is able to define the user views of the system in a declarative fashion, similar
to the relational database design. The immediate advantage is that he or she gains a data
designer perspective, rather than one of a programmer. The essentiality (i.e. the unique
data constructor) of the model facilitates a seamless integration with the relational
model, an entity-relationship graphical representation, and the complete automation of
the user interface development.
1. Introduction
Database-driven information systems are developed around an integrated
and shared source of data. The integration is important when somebody needs a
general view of the system: for example, a manager who wants to track an item
from the supplier to the end-client, spanning the procurement, production and
sales activities of a company. This is why, regardless of how many individual
views exist of an organization's data, an integrated, general view of the entire
database is always needed. On the other hand, it is also
important for initial system development and for long-term data management
purposes to work with data representations which are not dependent on the
physical storage equipment.
These requirements led to the ANSI/SPARC three levels architecture [2, 3]
(see Fig. 1), which makes a clear distinction between the physical and the
conceptual (i.e. logical) representation of the system, and between the general,
integrated community view and the individual views of the system, respectively.
The physical-logical separation provides physical data independence, which
basically means that the applications would not be affected by changes at the
physical data representations (for hardware upgrade purposes, for example); the
community-individual views separation provides logical data independence,
which means that the system could grow (through some new user views or
modification of the existent ones) without affecting the applications
corresponding to the user views that remain unchanged.
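Logical data independence can be made concrete with a relational view: an application written against a view (an external schema) keeps working even when the underlying community schema grows. The sketch below uses Python's built-in sqlite3 module; the table, view and column names are invented for illustration.

```python
import sqlite3

# In-memory database standing in for the conceptual (community) level.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER, name TEXT, region TEXT)")
con.execute("INSERT INTO customer VALUES (1, 'Acme', 'North')")

# External level: a user view exposing only what this user needs.
con.execute("CREATE VIEW v_customer AS SELECT id, name FROM customer")

# The conceptual schema grows (a new column is added) ...
con.execute("ALTER TABLE customer ADD COLUMN credit_limit REAL")

# ... yet the application reading the view is unaffected:
rows = con.execute("SELECT * FROM v_customer").fetchall()
print(rows)  # [(1, 'Acme')]
```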
Figure 1: The ANSI/SPARC architecture: user views 1 ... N at the external
level, above the community view at the conceptual level.
The relational model provides the theoretical support for the development of
information systems in accordance with the three levels architecture. Thus,
Relational Database Management Systems are currently the technology of
choice for the development of the physical and conceptual level, sharing with
the application languages the development of the external level.
Application Development in Database-Driven Information Systems 65
Figure 2: Graphical user interfaces (GUI 1 ... GUI N) built directly on top of
the conceptual-level community view.
4. An example
The following example is inspired by the chapter on presentation rules
in reference [6]. Some details have been added to enable a better presentation
of our approach (see Fig. 3).
Suppose that we have a user view that exposes to the end user data about
customers, orders, and order details. Suppose that the user will have to be able
to see at any time all the customers which simultaneously satisfy the following
conditions:
• they have a credit limit less than a certain value;
• they are located in a specific region;
• they can be ordered by name, by credit limit, or by the total value of
their orders;
• customers whose accounts are overdue must be displayed in red.
Likewise, the user should be able to see, also, at any time, the orders which
simultaneously satisfy the following conditions:
• they belong to the current customer;
• their issuing date is in a certain period, say after a start_date and before
an end_date, specified by the user;
• they can be ordered by date, value-ascending, or value-descending;
• rush orders must be displayed before regular orders.
When the user inspects a specific order, the system should provide all the
order_details that belong to that particular order. Those details should be
displayed in their part number order.
Figure 3: Structure of the user view: the customer array depends on the credit
limit, region, and customer sequence arrays; the order array depends on the
customer, time frame, and order sequence arrays; the order details array
depends on the order array.
The array named region takes its values from the conceptual level (possibly
through a relational view), but its content does not depend on any other data
structure from the user view.
The customer data contained by the customer array depends on the current
region chosen by the user from the region array, on the current customer
sequence chosen by the user in the customer sequence array, and also on the
value provided by the user in the credit limit array (the credit limit array will be
a special case of an array with one tuple and one attribute, but still an array and
not a simple scalar variable, in order to preserve the essentiality of the external
view model). This is why the defining operator of the customer array should
have three parameters, which will automatically take their values at run time
from the current tuples in the region, customer sequence, and credit limit arrays,
respectively, at any refresh of the customer data.
The list of customer orders exposed to the user at a given moment, contained
by the order array, depends on the current elements of the customer array, the
time frame array, and the order sequence array. Consequently, the defining
operator of the order array should have at least three parameters, one for every
parent array. In fact, for the present example, we may consider four parameters:
one for the link with the customer array (e.g. customer_id), two for the link with
the time frame array (e.g. start date, and end date), and one for the link with the
order sequence array (e.g. order_sequence_no).
As required, the order details array will contain at any moment all the details
of the current order from the order array. The rule that the details should always
be ordered by their part number is specified inside the defining function of the
order details array, and will remain transparent at the user view design level.
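The dependency mechanism described above — defining operators whose parameters are read from the current tuples of the parent arrays at every refresh — can be sketched in a few lines. All class, array, and data names below are invented for the illustration; they are not part of the paper's model.

```python
# Toy model of the external level: every data structure is an array of
# tuples with a defining operator and a "current tuple" cursor.
class TupleArray:
    def __init__(self, define, parents=()):
        self.define = define      # defining operator (a query function)
        self.parents = parents    # parent arrays supplying its parameters
        self.rows, self.current = [], 0

    def refresh(self):
        # Parameters are taken from the parents' current tuples.
        args = [p.rows[p.current] for p in self.parents]
        self.rows = self.define(*args)

# Hypothetical conceptual-level data.
CUSTOMERS = [("c1", "Acme", "North"), ("c2", "Bolt", "South")]
ORDERS = [("o1", "c1", 100), ("o2", "c1", 250), ("o3", "c2", 80)]

region = TupleArray(lambda: [("North",), ("South",)])
region.refresh()
customer = TupleArray(
    lambda reg: [c for c in CUSTOMERS if c[2] == reg[0]], parents=(region,))
customer.refresh()
order = TupleArray(
    lambda cust: [o for o in ORDERS if o[1] == cust[0]], parents=(customer,))
order.refresh()
print(order.rows)      # [('o1', 'c1', 100), ('o2', 'c1', 250)]

region.current = 1     # the user picks another region ...
customer.refresh()     # ... and the dependent arrays are refreshed
order.refresh()
print(order.rows)      # [('o3', 'c2', 80)]
```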
We should also be able to provide solutions for the presentation rules that are
not related with relationship definitions:
• “customers whose accounts are overdue must be displayed in red” – for
this rule, we need to introduce an attribute in the customer array, which
would allow the distinction of the ‘red’ customers, so that, at the display
level, while defining the graphical object (e.g. the grid, or the list)
which displays the customers data, we’ll be able to incorporate this
presentation rule in a straightforward manner (i.e. declaratively, if
possible);
• “rush orders must be displayed before regular orders” – this rule is
implemented inside the defining function of the order array (which is
completely transparent for our model).
So, under the proposed model, the developer is able to design the
presentation level declaratively, just specifying:
• the declaration of all the array structures: array name, attribute names,
data types;
5. Conclusions
There is a clear need for a data-oriented approach in application engineering.
The software engineering field is now dominated by the new trend introduced
by the OMG’s Model Driven Architecture [14], which has a strong object
oriented bias. The position sustained by this paper is that the application
development should be not only model-driven, but data-model-driven [10, 11].
The paper introduces a data-oriented model for the development of the external
level of database systems, which considers the presentation level as the only
required data layer above the relational data model. Moreover, this should be a
thin layer, with the unique purpose of data presentation, which does not need to
address any business logic other than the presentation rules [6].
The standard behavior and the essentiality of our model enable the
automation of the presentation level development. At the same time, the
mapping operators (defined at the lower levels and called at the presentation
level to promote the CRUD operations to the conceptual level) are the key for
the provision of logical data independence at the presentation level. This
constitutes the major step forward from the previous attempts to automate the
interface, which failed to provide an appropriate degree of logical data
independence at the external level of the system. Because those attempts tried
to generate the interface from entity-relationship patterns existing at the
conceptual level, assuming that the user views are just sub-schemas of the
conceptual level [15, 16, 18], they become useless as soon as the external level
has multiple sublevels, i.e. when the presentation data is obtained from the
conceptual data through a series of complex operations – which is always the
case for large, integrated information systems.
The foreseen applications of the presentation model are related primarily to
the application development for database-centric systems (e.g. enterprise
72 M. Muji
References
[1] Adya, A., Blakeley, J. A., Melnik, S., and Muralidhar, S., “Anatomy of the ADO.NET
entity framework”, ACM SIGMOD International Conference On Management Of Data.
Beijing, China, 2007, pp. 877-888.
[2] ANSI/X3/SPARC Study Group on Data Base Management Systems. “Interim Report”,
ACM SIGMOD Bulletin, no. 2, 1975.
[3] Date, C. J., “An Introduction to Database Systems (8th edition)”, Addison-Wesley, 2003.
[4] Date, C. J., “Date on Database: Writings 2000-2006”, Apress, 2006.
[5] Date, C. J., “Logic and Databases: The Roots of Relational Theory”, Trafford Publishing,
2007.
[6] Date, C. J., “What Not How: The Business Rules Approach to Application Development”,
Addison-Wesley, 2000.
[7] Date, C. J., and Darwen, H., “Foundation for Future Database Systems: The Third
Manifesto (2nd Edition)”, Addison-Wesley, 2000.
[8] Halle, B., “Business Rules Applied: Building Better Systems Using the Business Rules
Approach”, Wiley, 2001.
[9] Hay, D. C., “Data Model Views”, The Data Administration Newsletter - TDAN.com, Apr.
2000.
[10] Lewis, B., “Data Lineage: The Next Generation”, The Data Administration Newsletter -
TDAN.com, Aug. 2008.
[11] Lewis, B., “Data-Oriented Application Engineering: An Idea Whose Time Has Returned”,
The Data Administration Newsletter - TDAN.com, Jan. 2007.
[12] Lewis, W. J., “Data Warehousing and E-Commerce”, Prentice Hall PTR, 2001.
[13] Lewis, W. J., “E-Commerce Vs. Data Management”, The Data Administration Newsletter –
TDAN.com, Jan. 2002.
[14] Model Driven Architecture. http://www.omg.org/mda/
[15] Pizano, A., Yukari, S., and Atsushi, I., “Automatic generation of graphical user interfaces
for interactive database applications”, Conference on Information and Knowledge
Management, Washington, D.C., 1993, pp. 344-355.
[16] Rollinson, S. R., and Roberts, S. A., “A mechanism for automating database interface
design, based on extended E-R modelling”, Advances in Databases. s.l. : Springer Berlin /
Heidelberg, 1997, pp. 133-134.
[17] Ross, R. G., “Principles of the Business Rule Approach”, Addison-Wesley Professional,
2003.
[18] Rowe, L. A., and Shoens, K. A., “A form application development system”, ACM SIGMOD
International Conference On Management Of Data., Orlando, Florida, 1982, pp. 28-38.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 73-86
1. Introduction
Worldwide patent applications are growing at an average rate of 4.7% per
year, according to the 2007 edition of the World Intellectual Property
Organization (WIPO)'s Patent Report [1]. The patent examination procedure has
74 A. Aszalos, J. Domokos, T. Vajda, S. T. Brassai, L. Dávid
two stages: formal verification, which follows all the formal procedural steps
and checks whether applications are patentable, and the evaluation stage, which
assesses the degree of novelty and innovation of the patents [2], [3]. To reduce
the patent examination time and increase the quality of the evaluation, even
though the number of patent applications keeps growing, there are two
possibilities: to increase the number of employees of the State Office for
Inventions and Trademarks (OSIM) or to reduce the amount of work required for
registration, formal verification and evaluation by using an online integrated
system. The following paragraphs of this section present similar existing systems.
OSIM [4] is a specialized government body that has exclusive authority in
Romania in the field of industrial property protection. Taking into
consideration the special economic importance of industrial property and the
need for competitive management of information in this field, OSIM has
developed a set of services through which it offers the general public useful
information concerning industrial property, processed by highly competent
specialists, so as to facilitate sound economic decisions. It pays special
attention to the promotion of industrial property.
Since 2006, OSIM has offered online registration through the epoline®
system for the following types of patents:
• patents filed according to the European Patent Convention (CBE/EPC),
through OSIM as the national office;
• patents filed according to the Patent Cooperation Treaty (PCT), through
OSIM as the receiving office.
At present it is not possible to register the Romanian national patent online.
The OSIM web page provides important information about online registration for
the above-mentioned patent applications, such as: important announcements,
details about the services, information about how to register online, software
for registering the patent request at OSIM, recommendations, and assistance for
clients who want to register an invention online.
EPO [5] provides a uniform, coherent application procedure for individual
inventors and companies from 38 European countries. It is the executive body
of the European Patent Organization and is supervised by the Administrative
Council. The main role of the EPO is to grant European patents.
The EPO carries out searches and substantive examinations on a
continuously growing number of European patent applications and international
applications filed under the Patent Cooperation Treaty. In the case of
European patent applications, the Office offers the option of an accelerated
procedure. The Office also examines oppositions against already granted
European patents.
Exambrev - Integrated System for Patent Application 75
2. Technical information
A. System overview
The architecture of our system is presented in Fig. 1. Our system has two
main modules, divided into multiple subsystems. The first module, called the
Interfaces and data preparation module, manages the patent requests,
common users (UCOM), expert users (UEX), civil servants (UFUNC),
applicants (UAPP), administrators (UADM), civil servant managers
(UFUNCM) and expert managers (UEXPM), and also prepares some initial data
for the Expert system module (SIEXP). The second module, the Expert system
module (SIEXP), assesses the worldwide novelty of a technical solution
proposed by an inventor and contains the legal and procedural database. In this
paper the Interfaces and data preparation module, and especially the search
methods for similar technical solutions in the online patent databases, are
presented.
The deployment diagram shown in Fig. 2 illustrates the connections between
the different subsystems of the Interfaces and data preparation module and
their deployment on the used servers. As we can see, all the subsystems
communicate with the system database through the JPA (Java Persistence API),
which in turn accesses the database through the JDBC (Java Database
Connectivity) API.
the username, password and the activation status of the account. If the username
and password match and the account is activated, the applicant and expert
users can log in. If the expert user's account is not confirmed, he has limited
access to the system and can only change the list of categories he is an expert
in. The civil servant users can log in only if their account is confirmed.
The patent application process consists of filling in the online application
form, which is the same one currently used at OSIM. This process is divided into
four steps, so that the amount of data required on one page is not too large
and, if there are validation errors, the user can correct them more easily.
propagation. The list of coefficients initialized in the first section and their
initialization values are shown in Table 1.
Table 1: The IPC suggestion algorithm coefficients and their initialization values.

  Section (ps)   Class (pc)   Subclass (psc)   Main group (pmg)   Subgroup (psg)
       30            25             20                15                10
The search section of the algorithm contains six steps, presented in detail
below. For a better understanding of the algorithm we introduce two result
sets, A and B, which will contain the temporary and final search results. In
the first step we search on all IPC levels for IPC categories whose description
contains at least one of the given keywords. These categories are inserted into
result set A. This step is shown in Fig. 4.
Figure 4: Keyword search mechanism on all IPC levels and building of the result set A.
In the 2nd step we take the subgroups from result set A, insert them into
result set B and calculate a suggestion value for them. The general form of the
calculation formula is given in Eq. 1:
SgVal = NoOfKwdsInIPCDesc ⋅ IPCLevCoef + OldSgVal (1)
In this step the old suggestion value is 0, the IPC level coefficient is 10, and
the number of keywords in the IPC category description is calculated for every
record. Fig. 5 shows the graphical representation of this step.
In the 3rd step we move upwards in the IPC's hierarchical organization to
the level of the main groups. We take those main groups from A which do not
have subgroups in result set A, insert them into result set B, and calculate
their suggestion values with Eq. (1). This is illustrated in Fig. 6 as
substep "i". The second substep, "ii", consists of updating the suggestion
values of those subgroups in result set B which belong to the main groups in
result set A. The updated value is calculated using the formula given in Eq. (1).
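Steps 2 and 3 can be sketched compactly. Each matched category contributes NoOfKwdsInIPCDesc · IPCLevCoef to its suggestion value (Eq. 1), with the coefficients of Table 1; main groups without matched subgroups enter result set B, while the values of their matched subgroups are updated. The category codes, descriptions, and the tiny IPC slice below are invented sample data.

```python
COEF = {"subgroup": 10, "maingroup": 15}   # IPC level coefficients (Table 1)

def kwd_count(description, keywords):
    # Number of the given keywords found in an IPC category description.
    words = description.lower().split()
    return sum(1 for k in keywords if k.lower() in words)

def suggest(result_a, keywords):
    """Steps 2-3 of the search: SgVal = NoOfKwdsInIPCDesc * IPCLevCoef
    + OldSgVal (Eq. 1), applied from subgroups up to main groups."""
    b = {}
    # Step 2: subgroups from A enter result set B (old value is 0).
    for code, level, parent, desc in result_a:
        if level == "subgroup":
            b[code] = kwd_count(desc, keywords) * COEF["subgroup"]
    # Step 3: (i) main groups with no subgroup in A enter B;
    # (ii) subgroups of main groups in A have their values updated.
    for code, level, parent, desc in result_a:
        if level == "maingroup":
            gain = kwd_count(desc, keywords) * COEF["maingroup"]
            subs = [c for c, lvl, par, _ in result_a
                    if lvl == "subgroup" and par == code and c in b]
            if not subs:
                b[code] = gain
            for c in subs:
                b[c] += gain
    return b

# Hypothetical IPC slice: (code, level, parent main group, description).
A = [("G06T7/13", "subgroup", "G06T7/00", "edge detection"),
     ("G06T7/00", "maingroup", None, "image analysis"),
     ("G06K9/00", "maingroup", None, "reading image patterns")]
vals = suggest(A, ["edge", "detection", "image", "processing", "algorithm"])
print(vals)  # {'G06T7/13': 35, 'G06K9/00': 15}
```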
E. IFS-SIEXP subsystem
The IFS-SIEXP (SS-4) subsystem is the special interface for the SIEXP
module [6], [8], [11]. It makes data transfer between the Interfaces and data
preparation module and Expert system module. It also communicates with
UEXP, UCOM and UINV via a Web interface. This is the login point to the
Web application for registered users.
3. Results
The first part of this section presents a comparison between the execution
times of the optimized and the non-optimized IPC suggestion algorithms,
followed by the qualitative results of the algorithm. The execution time of
the similar invention search mechanism and the comparison of its two versions
(single- and multi-threaded) are presented in the second part of this section.
The second and third columns of the table contain the non-optimized and
optimized algorithm’s execution times. If we have a look at the difference
between the average execution times of the two versions of the algorithm, it is
evident that there is a 67.02% decrease in the execution time, so the optimized
algorithm performs more than three times faster.
Table 3 shows the suggested IPC categories and suggestion values for an
existing invention, an algorithm suitable for edge detection in image
processing. The following keywords were given as input to the suggestion
algorithm: "edge detection image processing algorithm". The IPC main group in
which the existing invention is categorized is shown in the gray cell of the
table. As for the quality of the algorithm, judging by the suggestion values,
the correct IPC category is ranked third. Looking at the hierarchical structure
of the IPC, we can see that the algorithm correctly determined the section,
class and subclass of the invention even in the first result.
B. Results of the Similar Invention Search Mechanism
Table 4 and Table 5 contain the execution times of the single- and multi-
threaded version of the similar invention search mechanisms and the number of
results.
It is important to mention that the tables contain only the execution time of
the search mechanism, measured with the functions provided by the Java
language for execution time measurement; they do not include the search setup
time. Table 4 shows that the single-threaded version of the search mechanism
was, on average, approximately two times slower than the multi-threaded
version.
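The roughly twofold gain comes from querying the providers concurrently instead of one after another: the total time then approaches the slowest single provider rather than the sum of all of them. The host-side sketch below uses Python's thread pool as a stand-in for the Java implementation; the provider names and response delays are invented.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query(provider, delay):
    """Stand-in for one patent-database request (delay in seconds)."""
    time.sleep(delay)
    return provider, delay

# Hypothetical search providers with simulated response times.
providers = [("esp_worldwide", 0.2), ("esp_ep", 0.15),
             ("esp_wipo", 0.1), ("google_patents", 0.25)]

t0 = time.perf_counter()
single = [query(p, d) for p, d in providers]          # one after another
t_single = time.perf_counter() - t0                   # ~ sum of delays

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(providers)) as pool:
    multi = list(pool.map(lambda pd: query(*pd), providers))
t_multi = time.perf_counter() - t0                    # ~ slowest delay

print(sorted(single) == sorted(multi))  # True: same results either way
print(t_multi < t_single)               # True: concurrent run is faster
```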
The difference between the test cases shown in Table 4 and Table 5 lies in the
search providers and search languages used. In Table 4, three search providers
offered by Esp@cenet plus Google Patent Search were used with English
keywords. In Table 5 the tests were conducted on the Esp@cenet search
providers in English and Romanian. Looking at the average execution times in
Table 5, we can conclude that the multi-threaded version of the search
mechanism is faster in this case as well.
Table 5: Execution times of the two versions of the Similar Invention Search
Mechanisms with Esp@cenet using English and Romanian keywords.
4. Conclusion
We designed and developed a JEE-based integrated system for patent
examination. The system will help applicants make online patent application
registrations for all three patent types discussed (EPC, PCT and the Romanian
national patent type). The system also supports OSIM in managing patent
evaluator experts, employees and patents.
The main results obtained are the UCOM, UEXP, UINV, UFUNC and
patent application registration interfaces. The interfaces were developed
using JavaServer Faces and PrimeFaces 2.0 technologies.
We have developed an algorithm for semi-automatic IPC code assignment that
helps both applicants and evaluator experts, and a patent database search
mechanism that speeds up the search for similar technical solutions.
The focus of this paper was on presenting the results of the optimized IPC
suggestion algorithm and of the multi-threaded similar invention search
mechanism.
Acknowledgements
This project was developed under the Partnerships in Priority Domains
Programme of the National Authority for Scientific Research in Romania,
project code: 11-076/2007.
References
[1] WIPO webpage: http://www.wipo.int/
[2] Implementing regulations to the patent law no. 64/1991, as republished in Official Gazette
of Romania, Part I, No. 456/18 June 2008.
[3] Patent law No. 64/1991, as republished in Official Gazette of Romania, Part I, no. 541/8
August 2007.
[4] OSIM webpage: http://www.osim.ro/
[5] EPO webpage: http://www.epo.org/
[6] Radu, M., “Elaborarea strategiei de cercetare privind examinarea cererilor de brevet de
invenţie şi studiu critic asupra procedurilor de examinare aflate în uz”, Technical report
EXAMBREV, stage I, PNII – Parteneriate, no. 11-076/2007, Centrul de Cercetări pentru
Materiale Macromoleculare şi Membrane, Bucureşti, 2007.
[7] Domokos, J., Vajda, T., Brassai, S. T., Dávid, L., “Realizarea, implementarea în faza de
laborator şi testarea sistemului informatic de examinare a cererilor de brevet de invenţie”,
Technical report for stage III, EXAMBREV, PNII – Parteneriate, no. 11-076/2007,
Sapientia University, Tîrgu Mureş, 2009.
[8] Brassai, S. T., Dávid, L., Domokos, J., Vajda, T., “Technical report for stage II”, PNII –
Parteneriate, no. 11-076/2007, Sapientia University, Tîrgu Mureş, 2008.
[9] Vajda, T., Domokos, J., Brassai, S. T., Dávid, L., Aszalos, A., “Developement of EXAM-
BREV Integrated System for Patent Application”, in Proceedings of the 4th edition of The
INTER-ENG International Conference, Tîrgu Mures, România, 12-13 November, 2009, pp.
309-314.
[10] Aszalos, A., Domokos, J., Vajda, T., Brassai, S. T., Dávid, L., “EXAMBREV Integrated
System for Patent Application”, in Proceedings of the 2nd International Conference on
Mechatronics, Automation, Computer Science and Robotics (MACRo 2010), Tîrgu Mureş,
Romania, 14-15 May 2010, pp. 55-62.
[11] Domokos, J., Vajda, T., Brassai, S., T., Dávid, L., “Integrated System for Patent Appli-
cation Examination (EXAMBREV)”, in Proceedings of 17th International Conference on
Control Systems and Computer Science (CSCS17), Bucureşti, Romania, 26 - 29 May 2009,
pp. 135-139.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 87-98
1. Introduction
Emerging technologies like GMPLS, WDM and carrier-grade Ethernet will
replace legacy ones in future Internet domains. Combining these different
data plane technologies and different services at different layers into an
efficient interworking environment is a challenging task. The resulting system
should offer service providers a workable trade-off for operating their networks.
88 L. Huszár, Cs. Simon, M. Maliosz
2. Networking technologies
services, etc. Most of these methods are valid on intra-domain level, because
internal network information is typically limited outside of administrative
domains [17]. Such solutions consider the domain as a closed entity with a
given traffic matrix: the TE affects only the output, but does not take into
consideration the possibility to influence the input [2], [3], [18], [19].
Even when it does, it assumes that all domains – the one that provides the
traffic and the one that conveys the traffic – use the same TE method (e.g.
Path Computation Element based TE - SPF) [4], [5].
3. Network Models
networks are also well studied and in this model we define it as the national or
wider area domain. Typically they have a meshed topology. As seen in Fig. 1,
the metro network, which links the access to the core, is split into three sub-
segments. In legacy infrastructures, these sub-segments together form a
hierarchy. Access areas may be connected to any metro sub-segment by COs,
and each metro is connected to the core through a Point-of-Presence (PoP).
The roles of the sub-segments should be specified in the context of the
deployed technologies. In this paper we assume that carrier-grade Ethernet-
based L2 technologies become dominant not only in the aggregation, but also in
the access [12]. Based on the above assumptions we obtain the reference
network used in this paper, derived from the generic network model [16]. In this
specific model the metro sub-segments use L2 switching in the access and edge,
while the core deploys L2/L3 TE mechanisms. Thus, the first two sub-segments
of the metro represent successive aggregation levels of the user traffic. In the
metro-access, the first aggregation level, the traffic from multiple COs is
aggregated in Concentration Nodes (CN). In the metro-edge, the second level of
aggregation, traffic from different CNs is processed by a L3/L2 metro node, and
the PoPs at L2/L3 boundary are handling several tens of thousands users. As a
summary, we consider that the metro-access and metro-edge networks form an
aggregation domain, while the metro-core segment is a meshed distribution.
In the aggregation domain the degree of connectivity is kept as low as possible
for reasons of cost. The network has six destination nodes (sinks), represented
by the exit points of the core network on the right side. The main function of
the aggregation domain is to channel end-user traffic towards the core, thus
its nodes are connected to at most two neighboring devices.
The core domain has a meshed topology, with a three-hop shortest distance
between ingress and egress. The nodes of the core have a degree of
connectivity of 3 or 4, which is a trade-off between cost effectiveness and the
assurance of alternative paths. The aggregation domain uses Ethernet-switched
technology, and the core uses WDM extended with an electronic control layer.
Apart from investigating efficient network capacity usage and balanced load in
the core domain, we also investigated the possibility of minimizing the
operations in the electronic layer and favouring longer optical paths. These
last two parameters are characteristic of dual opto-electronic models.
Figure 2: Topologies of aggregation and meshed core (left) and dual ring core (right).
Our proposal assumes that each domain has a control plane that, apart from
running TE and other control functions, is capable of communicating and
cooperating with the control planes of the neighbouring domains. Such a control
plane model is the Knowledge Plane [23], which can use MSTP in the aggregation
domain and CSPF in the core domain.
We also investigated the behavior of the core if it deploys a dual ring
topology (see Fig. 2 - right). We have kept the edge nodes and the output nodes
from the previous topology in order to use the same aggregation network and to
be able to compare the two results. In the following we refer to the first
topology as meshed core, while to the latter one as dual ring core.
4. Inter-domain TE cooperation
Our proposal is to use shared intelligence between control planes: the
intra-network functions of the core are unchanged, and performance is enhanced
solely by the cooperation of the inter-network control planes.
Inter-Domain Traffic Engineering for Balanced Network Load 93
In Fig. 2 the traffic reaches the core network through the aggregation
domain. In case of any event (congestion on a link, link failure, etc.) the
classical TE works with the assumption that the traffic matrix remains
unchanged and it has to re-distribute the traffic volume relying on load
redistribution inside the core. Our proposal is to use the Knowledge Plane and
re-arrange the input traffic distribution outside the core edge routers. This
means that – from the point of view of the core – we change the traffic matrix,
since the load on the edges will be different.
Let us take the topology presented on the left side of Fig. 2, and consider the
situation when the aggregation domain directs all the traffic to e1_edge (the
“northern” edge), while e2_edge (the “middle” edge) and e3_edge (the
“southern” edge) do not feed any traffic into the core. This is the worst case
for overloading the core, and corresponds to using only the tree rooted in
e1_edge to collect the traffic in the aggregation domain.
Now, if we take the opposite situation, when we use each of the trees in the
aggregation domain to forward the same amount of traffic, then the aggregation
domain distributes the traffic evenly among the three ingresses. In this case all
regions of the core will be evenly loaded.
It is the task of the Knowledge Plane to map the traffic sources among the
trees. In our simulations we used small individual flow throughputs. Each tree is
collecting such individual demands and the sum of these represents the traffic
load at the edges. Practically the granularity of the traffic is small enough to
allow us to finely balance the load. In what follows we will use the term load
balancing to denote the operation of load redistribution in the aggregation domain as
described above. The goal of load balancing will be to decongest a certain area
of the core network with a minimal redistribution of the original load.
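The balancing operation described above can be sketched as a simple greedy procedure: individual flow demands are shifted from the most loaded ingress tree to the least loaded one. The function name, the target-load parameter and the greedy strategy are our own illustrative assumptions, not the paper's Knowledge Plane implementation:

```python
# Greedy sketch of the load-balancing step: fine-grained flow demands are
# shifted from the most congested ingress tree to the least loaded one
# until no ingress exceeds a target load. Illustrative only.

def balance_ingresses(loads, flow_size, target):
    """loads: dict mapping ingress tree -> offered load [Mbps].
    Returns the rebalanced loads and the total volume moved [Mbps]."""
    loads = dict(loads)
    moved = 0.0
    while max(loads.values()) > target:
        hot = max(loads, key=loads.get)    # most congested ingress
        cold = min(loads, key=loads.get)   # least loaded ingress
        if loads[hot] - flow_size < loads[cold] + flow_size:
            break                          # moving one more flow cannot help
        loads[hot] -= flow_size            # shift one individual flow demand
        loads[cold] += flow_size
        moved += flow_size
    return loads, moved
```

Starting, for instance, from the worst case where all traffic enters at e1_edge, the procedure keeps shifting individual flows to e2_edge and e3_edge until the ingress loads even out below the target.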
5. Simulation results
A. Traffic model
During the simulations all traffic flows originating from the sources have the
same bandwidth. We assumed that the traffic matrix is known and that the paths
in the core are computed by a PCE using the CSPF protocol. Additionally, we
generated background traffic, which enters the core at the edge nodes and sinks
at the rightmost destination nodes. The links of the core network had 200 Mbps
capacity, which defines a load region of 400 Mbps to 800 Mbps where the core
network is congested, but not overloaded.
In our investigations we directed all the traffic to the e2_edge node and tried
to serve it using CSPF. The resulting paths are called the main branch. If the
demand is high enough, it cannot be fully served. If
94 L. Huszár, Cs. Simon, M. Maliosz
we apply our solution to this situation, part of the traffic is shifted to the
other two edges, e1_edge and e3_edge. The paths followed by the flows entering
at these two edges are called secondary branches.
We used the background traffic to “fill” the network up to the point where
congestion might start to develop. We sent 200 Mbps background traffic on the
main branch. Then we started to add new traffic demands until the total demand,
set differently from case to case, was reached: our simulations were run with
500 Mbps, 600 Mbps, 700 Mbps and 800 Mbps total traffic demand.
These are the situations when we can test the usefulness of our proposal and
evaluate its impact on the efficiency of the opto-electronic core transport.
We used a flow-level simulator previously used for research on opto-
electronic networks [24]. We generated individual flows, and the sum of these
demands resulted in the overall traffic demand. Each link was divided into
lightpaths of 10 Mbps capacity. This results in 20 lightpaths within each link,
which offers enough flexibility for multiplexing the flows within the core.
Based on earlier work with the simulator we opted for 12 individual flows per
lightpath, resulting in a flow capacity of 0.83 Mbps.
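The granularity figures above can be double-checked with a few lines of arithmetic; the constants are the values stated in the text, and nothing here is part of the simulator itself:

```python
# Sanity check of the simulation granularity figures quoted in the text.

LINK_CAPACITY_MBPS = 200       # capacity of one core link
LIGHTPATH_CAPACITY_MBPS = 10   # each link is divided into such lightpaths
FLOWS_PER_LIGHTPATH = 12       # individual flows multiplexed per lightpath

lightpaths_per_link = LINK_CAPACITY_MBPS // LIGHTPATH_CAPACITY_MBPS   # 20
flow_capacity_mbps = LIGHTPATH_CAPACITY_MBPS / FLOWS_PER_LIGHTPATH    # ~0.83
```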
Within each scenario – that is, for each overall traffic demand – we
simulated several sub-cases, in which the load of the main branch was gradually
re-distributed among the secondary branches. We started with the
situation where 30% of the traffic entered at edges e1_edge and e3_edge
(15% at each of them). From there on, we stepwise directed more and more
traffic towards the secondary branches until the network was able to carry
the traffic without loss. To be sure of that, we also simulated the next
step following this point. The individual flow demands were scheduled
randomly. For each situation we ran ten simulations and averaged the results.
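The per-scenario sweep just described can be outlined schematically as follows. This is our own reconstruction; `simulate_served_ratio` is a hypothetical stand-in for the flow-level simulator, and the step and range values follow the text:

```python
# Schematic of the per-scenario sweep: starting at 30% redistribution, more
# traffic is shifted to the secondary branches per step until the core
# serves all demands, plus one further step past that break-even point.
# All names here are our own reconstruction.

def sweep(total_demand_mbps, simulate_served_ratio, start=30, step=10, stop=80):
    """Returns {redistributed % -> served ratio %} for one demand scenario."""
    results = {}
    for redistributed in range(start, stop + 1, step):
        ratio = simulate_served_ratio(total_demand_mbps, redistributed)
        results[redistributed] = ratio
        # stop once the break-even point and one extra step are both loss-free
        if ratio >= 100.0 and results.get(redistributed - step, 0.0) >= 100.0:
            break
    return results
```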
[Figure residue: node labels e1/e1_b/e1_edge, e2/e2_b/e2_edge, e3/e3_b/e3_edge;
plot of the rate of served demands [%] (60-100) versus redistributed traffic [%]
(30-80), with one curve for each total demand: 500, 600, 700 and 800 Mbps.]
Figure 4: The successfully served flow demands as the function of traffic re-distribution
for the meshed core (left) and the dual ring core (right).
The left-hand side of Fig. 4 presents the ratio of successfully served traffic
demands in the meshed core. It can be seen that the 500 Mbps total traffic is
fully served if we redirect 35% of the traffic to the secondary branches. The
traffic volumes that must be redirected to achieve loss-free operation in the
600 Mbps, 700 Mbps and 800 Mbps traffic scenarios are 50%, 60% and 70%,
respectively. These results confirm that by redistributing the traffic before
it hits the core edges we can balance the core load; this is therefore a viable
mechanism to actively increase the efficiency of the core traffic engineering process.
We achieved similar results for the dual ring topology (right-hand side of
Fig. 4). A congestion-free core is achieved by redistributing 35% of the
traffic in the 500 Mbps case and 70% in the 800 Mbps (worst) case.
First we explain the results for the meshed core. The first parameter is the
number of lightpaths. We can see in Fig. 5 (on the left) that as long as the
rate of successfully served traffic demands is rising but still below 100%, the
number of lightpaths is increasing. This is because more and more individual
flows are in the network, following new (alternative) routes. Thus, the
increase of this parameter is not a consequence of decreasing efficiency but of
the growth of core utilization.
This trend reverses if we keep redistributing the traffic even after all the
traffic reaches its destination. This corresponds to the situation depicted in
Fig. 4 by the dots on the 100% line. As already mentioned, for each overall
traffic load scenario we simulated two cases in which all the demands were
served by the core: the “break-even” point and one following step, where we
increased the traffic redistribution by 10%. The results obtained for these
cases are encircled in Fig. 5, and as we can see the number of lightpaths is
decreasing. The more the core is loaded, the more accentuated these trends
are; therefore the best results can be seen on the curve corresponding to the
highest loads.
[Plots: number of lightpaths (100-350) for the meshed core, with one curve per
total demand (500, 600, 700 and 800 Mbps), and number of opto-electronic
conversions (300-700) for the meshed and dual ring cores, both versus
redistributed traffic [%] (30-80).]
Figure 5: The efficiency of the lightpath management. Number of lightpaths for meshed
core (left) and number of opto-electronic conversions (right).
The right-hand side of Fig. 5 presents the number of opto-electronic
conversions, counted in the nodes that perform a grooming operation; multiple
conversions may happen in such a node. Based on our simulations there are
several hundreds of such conversions per node. The trend observed for the
number of lightpaths also holds here, for both core topologies.
6. Conclusion
This paper proposed a traffic management solution that improves the
performance of the core network. The aggregation network is supposed to
deploy L2 switched technologies, while the core network will use WDM in
combination with GMPLS. We have prepared a meshed and a dual ring
topology following the principles of the TIGER2 project’s reference network
and investigated our proposal by means of simulations.
We have shown that if congestion occurs in the core, we can eliminate the
congestion merely through proper coordination between the control planes of the
aggregation and core domains, redistributing the traffic prior to entering the
core. We deployed the MSTP protocol in the aggregation domain, and the traffic
redistribution was done using its spanning trees. This solution increases the
ratio of successfully served traffic demands, increasing the utilization of the
core. Traffic redistribution in the aggregation domain has positive effects
even if there is no congestion in the network, because in such cases it
increases the efficiency of the opto-electronic transport layer.
In conclusion, the cooperation of the control planes of the aggregation and
core domains has multiple advantages. In the future we plan to investigate the
trade-off between the cost of load balancing and opto-electronic efficiency.
Acknowledgements
This work has been partially funded in the framework of the CELTIC
TIGER2 project (CP5-024) as part of the EUREKA cluster program.
References
[1] Osborne, E., Simha, A., “Traffic Engineering with MPLS”, Cisco Press, Indianapolis,
ISBN 978-1-58705-031-2, 2003.
[2] Fortz, B., Rexford, J., Thorup, M., “Traffic engineering with traditional IP routing
protocols”, IEEE Comm. Magazine, vol. 40, no. 10, pp. 118-124, 2002.
[3] Dasgupta, S., de Oliveira, J. C., Vasseur, J.-P., “Dynamic traffic engineering for mixed
traffic on international networks: Simulation and analysis on real network and traffic
scenarios”, Computer Networks, vol. 52, no. 11, pp. 2237-2258, 2008.
[4] Casellas, R., Martinez, R., Munoz, R., Gunreben, S.,“Enhanced Backwards Recursive Path
Computation for Multi-area Wavelength Switched Optical Networks under Wavelength
Continuity Constraint”, Journal of Optical Communications and Networking (JOCN), vol.
1, no. 2, pp. A180-A193, 2009.
[5] Ho, K-H. et al, “Inter-autonomous system provisioning for end-to-end bandwidth
guarantees”, Comp. Commun., vol.30, no. 18, pp. 3757-3777, Dec. 2007.
[6] Sabella, R., Zhang, H., eds.: “Traffic Engineering in Optical Networks”, IEEE Network,
vol.17, no. 2, pp. 6-7, 2003.
[7] Cinkler, T., “Traffic- and λ-Grooming”, IEEE Network, vol. 17, no. 2., pp. 16-21, 2003.
[8] Liu, K. H., “IP Over WDM”, John Wiley & Sons Inc., ISBN: 978-0-470-84417-5, 2002.
[9] Mukherjee, B., “Optical WDM Networks”, Optical Networks Series, Springer, ISBN: 978-
0-387-29055-3, 2006.
[10] Ziegelmann, M., “Constrained Shortest Path and Related Problems: Constrained Network
Optimization”, VDM Verlag Dr. Müller, ISBN 978-3-8364-4633-4, 2007.
[11] Lee, Y., Mukherjee, B., “Traffic engineering in next-generation optical networks”, IEEE
Comm. Surveys and Tutorials, vol. 6, no. 1-4, pp. 16-33, 2004.
[12] Fang., L., Bitar, N., Zhang, R., Taylor, M., “The Evolution of Carrier Ethernet Services:
Requirements and Deployment Case Studies”, IEEE Comm. Mag., vol. 46, no. 3, pp. 69–
76, 2008.
[13] Occam Networks whitepaper, “Switching Versus Routing in Access Networks”,
http://www.occamnetworks.com/pdf/SWITCH_VS_ROUT_WP_FINAL.pdf, May, 2007.
[14] Caro, L. F., Papadimitriou, D., Marzo, J. L., “A performance analysis of carrier Ethernet
schemes based on Multiple Spanning Trees”, VIII Workshop in G/MPLS networks, Girona,
Spain, Jun. 2009.
[15] CELTIC TIGER2 project homepage, http://projects.celtic-initiative.org/tiger2/
[16] Dorgeulle, F., “Rationales and scenarios for investigations on next generation of access,
backhauling and aggregation networks”, CELTIC TIGER2 public report D20, Nov. 2009.
[17] Awduche, D. et al., “Overview and Principles of Internet Traffic Engineering”, RFC3272,
May 2002.
[18] Feamster, N., Borkenhagen, J., Rexford, J., “Guidelines for interdomain traffic
engineering”, ACM SIGCOMM Comput. Commun. Rev., vol. 33, pp. 19-30, 2003.
[19] Vigoureux, M., et al., “Multi-layer traffic engineering for GMPLS-enabled networks”,
IEEE Comm. Mag., 0163-6804 vol. 43 (7), pp. 44–50, 2005.
[20] Mukherjee, B., “Optical Communication Networks”, McGraw-Hill, ISBN 978-0-070-
44435-5, 1997.
[21] Chiu, A., et al., “Unique Features and Requirements for The Optical Layer Control Plane”,
IETF internet draft, work in progress.
[22] International Telecommunication Union, “OTN – ITU-T Recomm. on ASTN/ASON
Control Plane”, http://www.itu.int/ITU-T/2001-2004/com15/otn/astn-control.html.
[23] Clark, D., Partridge, C., Ramming, J. Ch., Wroclawski, J. T., “A knowledge plane for the
internet”, ACM SIGCOMM 2003, Karlsruhe, Germany, pp. 3-10, Aug. 2003.
[24] Hegyi, P., Cinkler, T., Sengezer, N., Karasan, E., “Traffic Engineering in Case of
Interconnected and Integrated Layers”, IEEE Networks, Budapest, Hungary, pp. 1-8, Sept. 2008.
[25] Homepage of TOTEM simulator, http://totem.run.montefiore.ulg.ac.be/features.html.
[26] Homepage of BridgeSim simulator, http://www.cs.cmu.edu/~acm/bridgesim/index.html.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 99-113
1. Introduction
With the explosive growth of communication networks, energy consumption
has become a major economic (operational expenditure) and ecological (CO2
emission) concern in the past few years. About 2 percent of the total CO2
emission is produced by the Information and Communication Technology (ICT)
sector, which is more than the contribution of the whole aviation industry
[1]. A recent study puts further emphasis on this issue by showing that the
rise in energy consumption of large communication systems follows Moore’s
law [2]. Therefore, power consumption has become a critical factor of
communication networks, IT facilities, data centers and high performance
network elements. Energy-efficient design helps cut Operating
Expenses (OPEX) as well [3], [4], and in addition it might also result in
more reliable network elements (due to the decrease in heat dissipation).
In order to save energy in communication networks, we first have to reveal
the sources of energy wastage in existing systems. Energy inefficiency might
stem from both architectural (SW-related) and physical (HW-related) design.
From the energy-efficiency point of view, the most important feature of
networking is underutilization. Networks are generally designed to handle
peak-time traffic, so most of the time their capacity remains (heavily)
unexploited. This is called over-provisioning. According to [1], typical
network utilization is 33% for switched voice, 15% for internet backbones,
3~5% for private line networks and 1% for LANs, while the energy consumption
of network equipment remains substantial even when the network is idle. A
related physical-design issue is that the energy consumption of network
elements is not proportional to their utilization, i.e. energy cost is a
function of capacity, not throughput. These facts result in high energy wastage.
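To make the non-proportionality argument concrete, the sketch below computes the fraction of consumed energy not attributable to carried traffic for an idealized device whose power draw is constant regardless of load. The utilization figures are those quoted from [1] (3~5% taken as 4%); the 1 kW device power is an arbitrary illustrative assumption:

```python
# Illustration: a load-independent power draw wastes most of the energy at
# low utilization. Constants are illustrative; utilizations are from [1].

DEVICE_POWER_W = 1000.0    # assumed constant draw, independent of throughput

def wasted_fraction(utilization):
    """Fraction of consumed energy not attributable to carried traffic,
    for a device whose power depends on capacity, not throughput."""
    return 1.0 - utilization

utilizations = {"switched voice": 0.33, "internet backbone": 0.15,
                "private line": 0.04, "LAN": 0.01}
wasted_power_w = {k: DEVICE_POWER_W * wasted_fraction(u)
                  for k, u in utilizations.items()}
```

Under this simple model, a LAN device running at 1% utilization wastes 99% of the energy it consumes.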
Today’s networks are mostly configured statically, running at full
performance all the time, which is not necessary. In order to achieve higher
energy-efficiency, network management methods need to be able to
dynamically adapt network characteristics to the actual traffic demands during
operation. Switching off underutilized (or idle) parts of the network and
dynamically adapting transmission rates (while satisfying certain QoS
constraints) are essential approaches to designing a greener
network. In order to make energy-aware management possible, network
elements should also support these features. On-demand frequency scaling of
CPUs and data storage modules, and network interfaces with adjustable
transmission rates (rate-adaptation support), are all mandatory for attaining
greener network elements.
Finding efficient ways of cooling network equipment is also a big
challenge. In data centers, roughly 50% of the electricity is consumed by the
Energy-Efficient Networking: An Overview 101
cooling infrastructure, while the other 50% is used for computing [5], [6].
Increasing cooling efficiency and using alternative cooling methods can
therefore substantially reduce the electricity bill of equipment rooms.
Employing alternative energy sources (e.g. solar, wind) for supplying network
nodes (base stations) is also a matter of interest nowadays.
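The 50/50 split quoted above can be expressed through the standard Power Usage Effectiveness metric (PUE = total facility power / IT power); the helper below merely makes the relationship explicit and is not part of the surveyed work:

```python
# With a 50/50 split between IT load and cooling, PUE = total / IT = 2.0.

def pue(it_power_w, cooling_power_w, other_power_w=0.0):
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_power_w + cooling_power_w + other_power_w) / it_power_w
```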
In this paper, we provide an overview of the latest results concerning energy-
efficient networking, discussing the different functional parts of the ICT
infrastructure separately. Fig. 1 shows the estimated share of energy
consumption of the different areas of ICT [7]. Energy-efficiency is examined
from an operator’s point of view, focusing on networking infrastructure and
data centers, while leaving PCs, monitors, printers, and other user equipment
out of consideration.
Figure 2: Relative power usage and energy-efficiency for a typical data center
scenario.
Figure 3: Relative power usage and energy-efficiency for a data center scenario
with larger dynamic range [6].
104 L. Szilágyi, T. Cinkler, Z. Csernátony
Today’s backbone networks are characterized by optical links between distant
nodes, with the processing and switching of traffic being performed mostly in
the electrical domain. While the energy consumption of electrical components
decreases every year, a large reduction in power consumption could be achieved
if all processing and switching were executed in the optical domain. Although
optical switching has become a reality (MEMS, CMOS), IP, today’s dominant
packet-switched transmission technology, requires random access memory for
buffering, which is yet to be implemented purely optically. Large fiber delay
lines are not practical, as they require power-consuming signal regenerators,
not to mention their impractical size.
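A back-of-the-envelope calculation supports the remark about the impractical size of fiber delay lines. The buffer size and line rate below are illustrative assumptions; light is taken to propagate at roughly 2×10^8 m/s in fiber (c divided by a refractive index of about 1.5):

```python
# Fiber length needed to implement an optical buffer as a delay line.

SPEED_IN_FIBER_M_S = 2.0e8   # approximate propagation speed of light in fiber

def delay_line_length_m(buffer_bits, line_rate_bps):
    """Fiber length needed to keep `buffer_bits` in flight at the given rate."""
    storage_time_s = buffer_bits / line_rate_bps   # seconds of traffic stored
    return SPEED_IN_FIBER_M_S * storage_time_s
```

Holding even a 1 MB buffer (8×10^6 bits) at 10 Gbps would require on the order of 160 km of fiber, which illustrates the size problem.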
From an energy-efficiency point of view, it is important to note that –
despite the connectionless nature of IP – above 90% of the traffic within
backbone networks is transported via the connection-oriented Transmission
Control Protocol (TCP) [14] (consider applications such as IP television, voice
over IP, video conferencing or online gaming services, all of which require a
very high quality of service).
As shown in Fig. 4, within a node of an IP network more than half of the
energy is consumed by the traffic processing and forwarding engine (TP/FE).
4. Access networks
Since most of the physical elements are located in the access network segment,
the energy saved by each type of element is multiplied by a large factor. This
makes an important contribution to the reduction of the total consumption [15].
Nowadays, the most widely deployed technology for broadband landline
connections still employs copper lines as bearers. With the continuous increase
of broadband penetration and bandwidth demands, more energy is required than
ever. Although new transmission technologies, such as VDSL2, allow higher
speeds, they bring increased complexity and power consumption [16]. With
today’s networking technologies, serving high-bandwidth demands with
sustainable energy consumption can only be achieved through progressive optical
fiber deployment in FTTCab, FTTB (and, in the longer term, FTTH) architectures,
which are expected to shorten the copper access network and to boost the
overall performance of xDSL systems. The deployment of such systems, however,
requires considerable capital expenses, which makes this technological shift a
gradual one. Dynamic Spectrum Management [17] and similar energy-aware
solutions can make copper technologies more sustainable, but they only buy the
industry a little additional time for making the required technological shift
towards optical networks.
Mobile operators with radio access networks (2G, 3G, etc.) have to provide
services over very large physical areas and for a large number of subscribers
[2]. Given the necessity of numerous base stations, operating them requires a
large amount of energy. As mobile broadband is expanding rapidly, the energy-
efficiency of radio access networks is expected to receive even more
significant attention in the future.
Cloud computing allows companies and governments not only to reduce costs
through optimized resource usage, but also to keep their business greener.
Cloud computing service providers operate large-scale data centers and server
farms and provide computing services over the Internet, so that companies
neither have to invest in their own server parks nor worry about
over-provisioning their systems. The capital expenditure of a startup company
can be restricted to investing in thin clients to access the demanded services.
Such infrastructures tend to be more energy-efficient than the legacy approach
(i.e. ordinary companies providing a PC for every employee while operating a
more or less over-provisioned server park), since large-scale data centers can
be run far more energy-efficiently.
6. Summary
In this paper, we have surveyed a number of possibilities for saving power in
different parts of the network. We started the discussion by presenting the
general observation that over-provisioning of resources results in suboptimal
energy-efficiency by making the energy consumption of the network
disproportionate to its utilization. We then reviewed the various power-saving
opportunities of underutilized network elements in core networks via sleeping
and rate adaptation. Moreover, it has been shown that deploying
circuit-switched all-optical networks can capitalize on the fact that the vast
majority of Internet traffic is transported via the connection-oriented TCP.
Afterwards, access networking was discussed: among landline connection
technologies, optical-based FTTx solutions seem promising, while for wireless
networks, horizontal and vertical handoff mechanisms were presented for
adjusting the range of radio communication. Finally, energy-efficient ways of
operating data centers and the concept of cloud computing were briefly
overviewed.
Based on the state of the art, we suspect that the biggest challenge will be
the effective control and cooperation of network components of varying energy-
awareness. Today’s large-scale communication networks are highly heterogeneous
in terms of the employed networking technologies. Consequently, network
management software has to handle different networking equipment types and
generations at the same time. For both complexity and heterogeneity reasons,
there is a fundamental need for a shift from centralized solutions towards a
distributed approach. Designing the energy-efficient network components and
architectures of the future requires strong inter-technology cooperation to
match HW capabilities with management techniques.
References
[1] Nordman, B., “Energy Use and Savings in Communications”, in Proc. of IEEE ICC, 2009,
Keynote.
[2] Bolla, R. et al., “Energy-Aware Performance Optimization for Next-Generation Green
Network Equipment”, in Proc. of ACM SIGCOMM, Workshop on Programmable routers
for extensible services of tomorrow, 2009, pp. 49-54.
[3] Qureshi, A., Weber, R., Balakrishnan, H., Guttag, J. and Maggs, B., “Cutting the Electric
Bill for Internet-Scale Systems”, in Proc. of ACM SIGCOMM, Conference on Data
communication, 2009, pp. 123-134.
[4] Odlyzko, A. M., “Data networks are lightly utilized, and will stay that way”, Review of
Network Economics, vol. 2, no. 3, pp. 210-237, 2003.
[5] Fan, X., Weber, W. D., Barroso, L.A., “Power Provisioning for a Warehouse-Sized
Computer”, in Proc. of ACM International Symposium on Computer Architecture, 2007,
pp. 13-23.
[6] Barroso, L. A. and Hölzle, U., “The Case for Energy-Proportional Computing”, IEEE
Computer, 2007, pp. 33-37.
[7] Kumar, R. and Mieritz, L., “Conceptualizing ‘Green IT’ and data centre power and cooling
issues”, Gartner Research Paper, No. G00150322, 2007.
[8] Zafer, M. A., Modiano, E., “A calculus approach to energy-efficient data transmission with
quality-of-service constraints”, IEEE/ACM Transactions on Networking, vol. 17, issue 3,
pp. 898-911, 2009.
[9] Andrews, M., Anta, A. F., Zhang, L., and Zhao, W., “Routing and scheduling for energy
and delay minimization in the powerdown model”, in Proc. of IEEE INFOCOM, 2010, pp.
21-25.
[10] Fisher, W., Suchara, M., and Rexford, J., “Greening backbone networks: reducing energy
consumption by shutting off cables in bundled links”, in Proc. of ACM SIGCOMM, Green
Networking, 2010, pp. 29-34.
[11] Nedevschi, S., Popa, L., Iannaccone, G., Ratnasamy, S., and Wetherall, D., “Reducing
Network Energy Consumption via Sleeping and Rate-Adaptation”, in Proc. of 5th USENIX
Symposium on Networked Systems Design and Implementation, 2008, pp. 323-336.
[12] Bolla, R., Bruschi, R., Davoli, F., Ranieri, A., “Performance Constrained Power Consump-
tion Optimization in Distributed Network Equipment”, in Proc. of IEEE ICC, Workshop on
Green Communications, 2009, pp. 1-6.
[13] Kant, K., “Power Control of High Speed Network Interconnects in Data Centers”, in Proc.
of IEEE ICC, 2009, pp. 145-150.
[14] Aleksic, S., “Analysis of Power Consumption in Future High-Capacity Network Nodes”,
IEEE/OSA Journal of Optical Communications and Networking, vol. 1, issue 3, 2009, pp. 245-258.
[15] Marsan, M. A., Chiaraviglio, L., Ciullo, D. and Meo, M., “Optimal Energy Savings in
Cellular Access Networks”, in Proc. of IEEE ICC, Workshop on Green Communications,
2009, pp. 1-5.
[16] Bianco, C., Cucchietti, F., Griffa, G., “Energy consumption trends in the Next Generation
Access Network - a Telco perspective”, in Proc. of INTELEC, 2007, pp. 737-742.
[17] Cioffi, J. M., Zou, H., Chowdhery, A., Lee, W., and Jagannathan, S., “Greener Copper with
Dynamic Spectrum Management”, in Proc. of IEEE GLOBECOM, 2008, pp. 1-5.
[18] Zuckerman, D., “Green Communications – Management Included”, in Proc. of IEEE ICC,
2009, Keynote.
[19] Marsan, M. A., Meo, M., “Energy Efficient Management of two Cellular Access
Networks”, in Proc. of ACM SIGMETRICS, Performance Evaluation Review archive, vol.
37, issue 4, 2010, pp. 69-73.
[20] Hasswa, A., Nasser, N., Hassanein, H., “Generic Vertical Handoff Decision Function for
Heterogeneous Wireless Networks”, in Proc. of IFIP Conference on Wireless and Optical
Communications, 2005, pp. 239–243.
[21] Choi, Y. and Choi, S., “Service Charge and Energy-Aware Vertical Handoff in Integrated
IEEE 802.16e/802.11 Networks”, in Proc. of IEEE INFOCOM, 2007, pp. 589-597.
[22] Yang, W.-H., Wang, Y.-C., Tseng, Y.-C., and Lin, B.-S. P., “An Energy-Efficient
Handover Scheme with Geographic Mobility Awareness in WiMAX-WiFi Integrated
Networks”, in Proc. of IEEE WCNC, 2009, pp. 2720-2725.
[23] Seo, S. and Song, J., “Energy-Efficient Vertical Handover Mechanism”, IEICE
Transactions on Communications, vol. E92-B, no. 9, 2009, pp. 2964-2966.
[24] Petander, H., “Energy-aware network selection using traffic estimation”, in Proc. of ACM
MICNET, 2009, pp. 55-60.
[25] Barth, U., Wong, P., Bourse, D., “Key Challenges for Green Networking”, in Proc. of
Ercim News 79, 2009, pp. 13.
[26] Gyarmati, L., Trinh, T. A., “How Can Architecture Help to Reduce Energy Consumption in
Data Center Networking?”, in Proc. of ACM SIGCOMM, 2010, pp. 183-186.
[27] Chang, V., Bacigalupo, D., Wills, G., and De Roure, D., “A Categorization of Cloud
Computing Business Models” in Proc. of IEEE/ACM CCGRID, 2010, pp. 509-512.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 114-122
Abstract: Call localization is a topic that has been of interest since the
early days of telephony. Localizing calls made from mobile phones is of even
greater interest, due to the mobility of the terminals. This paper presents a
localization solution that uses information from the mobile network: a
technical solution that acquires the localization information of calls from the
terminals in the mobile network and delivers this data to a localization
server. The presented localization solution has three major features:
receiving call information from mobile networks and obtaining the localization
information from Signaling System #7 (SS7); processing the data from the
signaling frame and transmitting this information over IP to a localization
server; and visualization of the call location on a map. Due to the client-
server architecture, users of the system can access call locations using digital maps.
1. Introduction
Localization of calls is useful not only from the legal point of view but
also in emergencies, for example calls to the short numbers 112 or 911. In
such cases, the localization of a person who may be in danger is vital.
The SS7-based localization approach has drawbacks, which are treated in the
presented solution: each mobile phone service provider supplies the
localization information within the Initial Address Message (IAM) field of the
ISDN User Part (ISUP) protocol of the SS7 frame in its own specific format [1].
Thus, the structure of the localization information has to be interpreted for
each operator; an example layout is shown below.
V. Cazacu, L. Cobârzan, D. Robu, F. Sandu 115
Network code (e.g. 72, 74 or 4072, 4074) | Services bit (reserved) | Location area code | Cell ID
2-4 digits                               | 1 digit                 | 5 digits           | 5 digits
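Based on the field layout above, the localization string can be parsed from the right, since only the network code has a variable length. The function and field names below are our own illustrative choices; the paper's actual implementation is written in Java:

```python
# Illustrative parser for the localization string with the layout shown
# above: network code (2-4 digits), services bit (1), LAC (5), cell ID (5).

def parse_localization(s):
    cell_id = s[-5:]                  # last 5 digits
    location_area_code = s[-10:-5]    # 5 digits preceding the cell ID
    services_bit = s[-11:-10]         # 1 reserved digit
    network_code = s[:-11]            # remaining 2-4 leading digits
    if not 2 <= len(network_code) <= 4:
        raise ValueError("unexpected network code length: %r" % network_code)
    return {"network_code": network_code,
            "services_bit": services_bit,
            "location_area_code": location_area_code,
            "cell_id": cell_id}
```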
A. Localization Server
This module has two major functionalities: SS7 frame parsing and the
communication protocol between the client and the Geo Database Server.
The parsing module consists of two parts: one deals with the SS7
Integrated Services Digital Network User Part (ISUP) communication, while
the other is responsible for parsing the protocol messages. Detailing the SS7
communication between our solution and the mobile network is outside the scope
of this paper. The technical approach taken is to use the JAIN ISUP API, which
makes it possible to exchange ISUP messages in the form of Java Event
Objects [2], [3].
One rule for parsing the SS7 information is that, independently of the
mobile operator, SS7 frames are in a standard format and the parameters
relevant to our solution can be found under the Initial Address heading. The
relevant parameters are presented in Table 3.
The phone numbers are received without the prefix digits, so in this
implementation the “Calling party number” parameter is taken into account. For
national calls one 0 digit and for international calls two 0 digits are inserted at the
beginning of the caller number. Also, since every national telephone number
begins, for example, with 07XY, the solution can determine from which mobile
operator the call was made: depending on the XY digits, it can extract the
provider of the call from a correspondence table and by interrogating the
portability server, if available.
118 Localiz. of the Mob. Calls Based On Ss7 Inform. and Using Web Mapping Service
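The prefix handling and provider lookup described above can be sketched as follows; the class name, the provider names and the prefix table are illustrative assumptions, not values from the paper:

```java
import java.util.Map;

// Illustrative sketch of the caller-number handling described above.
public class CallerNumber {

    // Hypothetical correspondence table from the 7X national prefix to a provider.
    private static final Map<String, String> PROVIDER_BY_PREFIX =
            Map.of("72", "Provider A", "74", "Provider B");

    /** Re-inserts the prefix digits stripped by the network: one leading 0
     *  for national calls, two for international ones. */
    public static String addPrefix(String number, boolean international) {
        return (international ? "00" : "0") + number;
    }

    /** Looks up the provider from the digits following the leading 0 of a
     *  national 07XY... number; returns "unknown" when the table has no entry. */
    public static String provider(String nationalNumber) {
        String prefix = nationalNumber.substring(1, 3); // the "7X" part of 07XY
        return PROVIDER_BY_PREFIX.getOrDefault(prefix, "unknown");
    }

    public static void main(String[] args) {
        String n = addPrefix("740123456", false);
        System.out.println(n + " -> " + provider(n));
    }
}
```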
In the implementation of the module, the extracted information is stored in
an object called SS7Object with fields like String callingNumber, String
calledNumber, String Nature, String LocalizationString, Date dateCreated,
String Provider. SS7Objects are sent to the Localization Server module for
further processing.
The communication protocol with the client uses sockets: when new
localization objects are received from the SS7 parsing module, the server
sends the object to the client to be used on the GUI. After sending the
localization object, the server waits for an answer from the client. If the
client does not confirm the reception of the object within the previously defined
time frame, the localization server resends this information. Localization
objects that are not confirmed are kept in a waiting list. When the
localization server receives a confirmation message from the GUI/Google Maps
client, it deletes the corresponding object from the “waiting list”, meaning that
it no longer waits for the confirmation of that object.
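The confirmation handling described above can be sketched as follows; the class and method names are illustrative, and the socket I/O of the real server is omitted:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the confirmation "waiting list" described above.
public class WaitingList {

    private final long timeoutMillis;
    // localization-object id -> time the object was last sent to the client
    private final Map<String, Long> pending = new LinkedHashMap<>();

    public WaitingList(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    /** Called when a localization object is sent to the GUI client. */
    public void sent(String id, long now) {
        pending.put(id, now);
    }

    /** Called when the client confirms reception: the object no longer waits. */
    public void confirmed(String id) {
        pending.remove(id);
    }

    /** True when the object is still unconfirmed past the defined time frame,
     *  i.e. the server should resend it. */
    public boolean needsResend(String id, long now) {
        Long sentAt = pending.get(id);
        return sentAt != null && now - sentAt > timeoutMillis;
    }
}
```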
The communication between the Localization Server and the Geo
Information Database Server is done by calling the getCoordinatesBy
LocalizationString (localization_string) method, which takes a string parameter
representing the localization string and returns an object containing the
coordinates of the area from which the call was made. The coordinates are the
latitude and the longitude, each containing three fields: degrees, minutes and
seconds. The call is made using Remote Method Invocation (RMI).
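The interface implied above can be sketched as follows; only the method name getCoordinatesByLocalizationString comes from the text, while the type names and the degrees-to-decimal helper are assumptions:

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Sketch of the remote interface and coordinate type implied above.
public class GeoTypes {

    /** Latitude/longitude, each as degrees, minutes and seconds. */
    public static class Coordinates implements Serializable {
        public final int latDeg, latMin; public final double latSec;
        public final int lonDeg, lonMin; public final double lonSec;

        public Coordinates(int latDeg, int latMin, double latSec,
                           int lonDeg, int lonMin, double lonSec) {
            this.latDeg = latDeg; this.latMin = latMin; this.latSec = latSec;
            this.lonDeg = lonDeg; this.lonMin = lonMin; this.lonSec = lonSec;
        }

        /** Decimal degrees are convenient when placing a map marker. */
        public static double toDecimal(int deg, int min, double sec) {
            return deg + min / 60.0 + sec / 3600.0;
        }
    }

    /** Remote interface called by the Localization Server over RMI. */
    public interface GeoInformationDatabase extends Remote {
        Coordinates getCoordinatesByLocalizationString(String localizationString)
                throws RemoteException;
    }
}
```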
The ExtractCoordinate class has a static method that returns the coordinates,
based on the provider and the localization string, as can be seen in the next capture.
Only one tab is presented in this paper, the View Calls tab, which lists
the recent calls’ information. The call information contains the exact
time of the call, the caller number, the provider and the location of the call as
received from the Location Server. Each call has its own checkbox, which
specifies whether the call has been processed. When a call is selected in the list,
the application automatically marks the location of the call in the map
frame. Several calls can be selected simultaneously, so the map can be marked in
several locations. Calls that are checked (processed) are deleted from the list and
their marks disappear from the map.
In order to integrate a digital map into the solution, Google Maps API was
chosen due to several considerations [5].
Google Static Maps API embeds a Google Maps image without requiring
JavaScript, but the problem is that it returns the map as an image (GIF, PNG or
JPEG) in response to an HTTP request via a URL. This way, the benefits of the
zoom and navigation facilities disappear.
JXMapViewer embeds mapping abilities into Java applications, but at the
solution’s development time it could not be used with Google Maps or
Yahoo because of legal restrictions.
One other strong reason why the Google Maps API was chosen for
integrating the web mapping service was the possibility to control the zoom and
navigation features from the application’s back-end.
Since the Google Maps API uses JavaScript, the JWebBrowser class from the
chrriis.dj.nativeswing.components package has been used in the development of
the Java client application; it provides a native web browser component inside
the application [4]. Because the client application has to be operating-system
independent, the web browser component was configured to use the Mozilla
engine.
The digital map from Google is loaded using the following code line [5].
web_browser.navigate(gmapfilelocation.getAbsolutePath());
The parameter gmapfilelocation points to the file containing the script which
loads the map.
In Fig. 3, the client GUI is shown together with the calls markers, each
marker descriptor containing a string defining the location to place the marker
and the visual attributes to use when displaying the mark.
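A marker descriptor of this kind can be sketched as a small JavaScript call built on the Java side; the addMarker function name is an assumption about the map-loading script, and the resulting string would be handed to the embedded browser (for instance via JWebBrowser's executeJavascript method):

```java
// Illustrative construction of a marker-descriptor JavaScript call.
// The addMarker() function is assumed to be defined in the map-loading script.
public class MarkerDescriptor {

    /** Builds the JavaScript call placing a marker at the given coordinates. */
    public static String js(double lat, double lon, String title) {
        return String.format(java.util.Locale.US,
                "addMarker(%.6f, %.6f, '%s');", lat, lon, title);
    }

    public static void main(String[] args) {
        System.out.println(js(46.5, 23.6, "call-1"));
    }
}
```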
Figure 4 shows a case when one call is selected from the list and
the application automatically zooms in on the location where the caller is
positioned. If another call is selected, so that two calls are on the map, the
application automatically zooms out just enough to display both
callers on the map.
3. Conclusion
The call localization solution presented in this paper is still under
development, since topics like a high degree of availability or the ability to work
in load-balanced and failover conditions between locations are not yet
implemented. The application’s architecture has been implemented by the
authors of the paper, and the solution allows further improvements, so that
features like accepting input traffic from a high number of simultaneous
voice calls can be implemented as easily as possible.
But the goal of the research, at least in this phase, was achieved: the
usage of a web mapping service for call localization has been demonstrated by the
solution presented in this paper.
Acknowledgements
The authors wish to thank their colleagues who contributed with their effort
to achieve the results presented in this paper, especially the colleagues located
at the Cluj-Napoca Siemens PSE site.
References
[1] Dryburgh, L., Hewett, J., “Signaling System No. 7 (SS7/C7): protocol, architecture, and
services”, Cisco Press, 2005.
[2] Jepsen, T. C., Anjum, F., “Java in telecommunications: solutions for next generation
networks”, John Wiley & Sons, 2001.
[3] *** http://jcp.org/en/jsr/summary?id=ISUP.
[4] *** http://djproject.sourceforge.net/ns/documentation/javadoc/index.html.
[5] *** http://code.google.com/apis/maps/documentation/reference.html.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 123-135
1. Introduction
We propose a novel framework to analyse electroencephalogram (EEG)
biosignals from multi-trial visually-evoked potential (VEPs) signals recorded
124 L. F. Márton, L. Szabó, M. Antal, K. György
in [5]. The test recordings are usually 1 to 1.5 minutes long. The ERO intervals
alternate with a relaxed state, with a period of usually 10 seconds. The time-
frequency method used in this application is the continuous wavelet transform
based on the Morlet, Paul and DOG (m = 2 and m = 6) wavelet base functions. The
Morlet and Paul bases provide a complex continuous transform suitable for
time-frequency component analysis of the recordings. During the recordings, the
subjects of the experiments used a deckchair, to avoid extra EMG noise caused
by body-stability problems.
Figure 1: The BrainMaster AT-1 System (Brain Master LTD company product image).
In the EEG–EMG experiments, subjects are trained for two different motor
tasks: a left-right or up-down movement of the closed or opened eyeballs, and a
right- or left-hand movement. The scenario of the performed task is recorded in
the header of the generated file. The recording technique is non-invasive.
From the experimental studies of VEPs, relative to the recorded oscillations, the
literature clearly identifies the delta (1–4 Hz) and theta (4–10 Hz) ranges as
containing the main power components of the waves in the frequency domain. We
will consider these bands for further identification of the activity patterns [1].
We now consider the important details of the wavelet transform used in
this processing.
By decomposing a time series into time–frequency space, one is able to
determine both the dominant modes of variability and how those modes vary in
time. The first tested method was the Windowed Fourier Transform (WFT). The
WFT is one analysis tool for extracting local frequency information
from a signal. It represents a method of time–frequency localization, as
it imposes a scale or ‘response interval’ T on the analysis. An inaccuracy
arises from the aliasing of high- and low-frequency components that do not fall
within the frequency range of the window T. Several window lengths must
usually be analyzed to determine the most appropriate window size, so as to be
sure that the window contains the main, but unknown, basic oscillatory
components. To avoid this difficult task, in our analysis we have finally used
wavelet transform (WT) methods.
The WT can be used to analyze time series that contain nonstationary power
at many different frequencies. The term ‘wavelet function’ is generically used to
refer to either orthogonal or nonorthogonal wavelets. The term “wavelet basis”
refers only to an orthogonal set of functions. The use of an orthogonal basis
implies the use of the discrete wavelet transform (DWT), while a nonorthogonal
wavelet function can be used with either the discrete or the continuous wavelet
transform (CWT).
A brief description of the CWT follows. Assume that the recorded time
series xn has equal time spacing δt (the sampling period), with n = 0…N-1. Also
assume that one has a wavelet function, Ψ0(η), which depends on a non-
dimensional ‘time’ parameter η.
To be ‘admissible’ as a wavelet, this function must have zero mean and must
be localized in both time and frequency space. An example is the Morlet
wavelet, consisting of a plane wave modulated by a Gaussian function [5]:

\[ \Psi_0(\eta) = \pi^{-1/4}\, e^{i\omega_0 \eta}\, e^{-\eta^2/2} \qquad (1) \]

where \(\omega_0\) is the nondimensional frequency. The continuous wavelet
transform of the discrete sequence xn is then defined as the convolution of xn
with a scaled and translated version of Ψ0(η):

\[ W_n(s) = \sum_{n'=0}^{N-1} x_{n'}\, \Psi^{*}\!\left[ \frac{(n'-n)\,\delta t}{s} \right] \qquad (2) \]

where the (*) indicates the complex conjugate. By varying the wavelet scale s
and translating along the localized time index n, one can construct a picture
showing both the amplitude of any features versus the scale and how this
amplitude varies with time. The subscript 0 on Ψ has been dropped to indicate
that this Ψ has also been normalized. Although it is possible to calculate the
wavelet transform using (2), it is considerably faster to do the calculations in
Fourier space. By choosing N points, the convolution theorem allows us to do
all N convolutions simultaneously in Fourier space using discrete Fourier
transform (DFT). To ensure that the wavelet transforms at each scale s are
directly comparable to each other and to the transforms of other time series, the
wavelet function at each scale s was normalized to have unit energy.
Normalization is an important step in time-series analysis and is used at each
scale s.
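The transform described above can be sketched directly (a slow, non-FFT illustration following the conventions of [5], with nondimensional frequency ω0 = 6; this is not the authors' code):

```java
// Direct (non-FFT) sketch of W_n(s) = sum_{n'} x_{n'} * conj(psi[(n'-n) dt / s]),
// with a Morlet wavelet of nondimensional frequency w0 = 6 normalized to unit
// energy at each scale (the sqrt(dt/s) factor).
public class MorletCwt {

    static final double W0 = 6.0;

    /** Returns the wavelet power |W_n(s)|^2 at localized time index n, scale s. */
    public static double power(double[] x, double dt, double s, int n) {
        double re = 0.0, im = 0.0;
        double norm = Math.pow(Math.PI, -0.25) * Math.sqrt(dt / s);
        for (int k = 0; k < x.length; k++) {
            double eta = (k - n) * dt / s;
            double env = norm * Math.exp(-eta * eta / 2.0);
            // conj(psi) brings a minus sign into the oscillatory part
            re += x[k] * env * Math.cos(W0 * eta);
            im -= x[k] * env * Math.sin(W0 * eta);
        }
        return re * re + im * im;
    }

    public static void main(String[] args) {
        // A 1 Hz sine sampled at 50 Hz: power should peak near the scale whose
        // Fourier period is 1 s (for w0 = 6, period ~ 1.03 * s, so s ~ 0.97).
        int n = 512;
        double dt = 0.02;
        double[] x = new double[n];
        for (int k = 0; k < n; k++) x[k] = Math.sin(2 * Math.PI * 1.0 * k * dt);
        System.out.println(power(x, dt, 0.97, n / 2) + " vs " + power(x, dt, 0.1, n / 2));
    }
}
```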
Since the Morlet wavelet function Ψ (η ) is a complex function, the wavelet transform
Wn(s) is also complex. The transform can then be divided into real and
imaginary parts, or into amplitude and phase. Finally, one can define the wavelet
power spectrum as |Wn(s)|2. The expectation value for |Wn(s)|2 is equal to N times
the expectation value for the discrete Fourier transform of the time series. For a
white-noise time series, this expectation value is σ2/N, where σ2 is the variance
of the noise. Thus, for a white-noise process, the expectation value for the
wavelet transform is |Wn(s)|2 = σ2 at all n and s. Based on this knowledge, the
same logic is used to calculate the expected value for red noise. The quantity
|Wn(s)|2 / σ2 is the measure of the normalized signal power relative to white
noise. As the biological background noise is of red-noise type, the normalization
method used in the results of this paper is made relative to red noise, as
described in [5].
An important concept of this study is the so-called cone of influence (COI).
The cone of influence is the region of the wavelet spectrum in which edge
effects become important because of the finite length of the signal and of the
window used. The extent of the edge effect is defined by the e-folding time of
the wavelet power at each scale (the point where the power drops by a factor
of e−2). The edge effects are negligible beyond the COI region. This must be
considered for an accurate analysis. In each figure, the COI is represented at the
edge of the wavelet transforms (lighter area in the figures).
Another important element we have added to this analysis is the significance
level of the correlation studies. The theoretical white/red-noise wavelet power
spectra are derived and compared to Monte Carlo simulation results. These
spectra are used to establish a null hypothesis for the significance of a peak in
the wavelet power spectrum (the question to be answered: is a power peak in
a wavelet figure the result of biological events, or is it the result of stochastic
red/white-noise effects?).
The null hypothesis is defined for the wavelet power spectrum as follows: it is
assumed that the time series has a mean background power spectrum; if a peak
in the wavelet power spectrum is significantly above this background spectrum,
then it can be assumed to be a true feature with a certain percentage of confidence
(‘significant at the 5% level’ is equivalent to ‘within the 95% confidence interval’).
Our application highlights the biological events by surrounding the
significant correlation peaks at the 95% confidence level. It is
important that in biological studies the background noise can be modeled by
red (or pink) noise. The simplest model for red noise is the lag-1 autoregressive
[AR (1), or Markov] process. In the following figures, every localization of a
biological event (VEPs) that is significant in the time-frequency domain is also
statistically significant. This is an important result. Whatever is not within a
significant area is not considered in the results. Another important result in our
analysis is the representation of the phase relationship between two recordings.
In each cross-wavelet spectrum and cross-coherence spectrum, the phase
relationship is represented with arrows. A horizontal arrow pointing to the right
means that, in that time-frequency domain, the two biological signals are in phase
(provided that the domain is significant at the 5% level and is not within the
COI). The opposite arrow orientation means an opposite-phase correlation. The
angle of the arrows relative to the horizontal shows the phase angle in that
time-frequency domain. Our application calculates and represents all
these phase values. Wavelet cross-correlation and wavelet cross-coherence are
defined in [5] and [6].
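As a minimal sketch of this red-noise background, the theoretical mean Fourier spectrum of an AR(1) process with lag-1 autocorrelation α, as given in [5], can be computed as follows (illustrative names; a wavelet peak is then compared against this background at the chosen confidence level):

```java
// Theoretical mean Fourier spectrum of a lag-1 autoregressive [AR(1)] process,
// after the red-noise background of [5]; alpha is the lag-1 autocorrelation.
public class RedNoise {

    /** Normalized theoretical spectrum at frequency index k (of N points). */
    public static double spectrum(double alpha, int k, int n) {
        double c = Math.cos(2.0 * Math.PI * k / n);
        return (1.0 - alpha * alpha) / (1.0 + alpha * alpha - 2.0 * alpha * c);
    }

    public static void main(String[] args) {
        // alpha = 0 reduces to white noise (flat spectrum equal to 1);
        // alpha > 0 puts more power at low frequencies ("red").
        System.out.println(spectrum(0.7, 1, 256) + " > " + spectrum(0.7, 128, 256));
    }
}
```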
4. Results
The following figures are selected examples from our recordings and their
analysis. Fig. 2 is the amplitude/time representation of a channel signal
recorded over the right-hand side of the motor cortex area while the left hand
was lifted two times during a 50 s recording session. This figure also shows
the application menu created for this WFT type of analysis. Fig. 3 is the
WFT representation of the Fig. 2 recording in different frequency bands. The
vertical axis is the frequency axis and the horizontal one is the time axis. The
frequency bands of the different windows are 0.1 – 50 Hz for the top left, 0.1 –
8 Hz for the bottom left, 24 – 30 Hz for the top right and finally 30 – 36 Hz for the
bottom right window. The frequency bands are representative for biological
events. A color code represents the intensity of a time-frequency domain in
that WFT decomposition (the corresponding color code is shown at the
right-hand side of each window). The shape within each window is
characteristic for a real-time motion recorded as EMG+EEG. Fig. 4 is similar
to Fig. 3 but is represented in 3D for a better visual understanding of the
Analysis of Neuroelectric Oscillations of the Scalp EEG Signals 129
Figure 3: The windowed Fourier transform (WFT) in four different frequency bands.
As mentioned, the same hand-lifting event was recorded on both cortical
sides. The left cortical side recording and its decomposition with WFT are not
represented here, but they have a similar configuration. Fig. 5 is the difference,
in the time-frequency domain, of the two side signals. It is visible that the two
side recordings are not the same, as is known from theory. This analysis is
WFT-based, and here the COI and the significance test were not used. As
mentioned, the WFT is very sensitive to the window length (T) used in the
decomposition of the time signal [7], [8]. Uncontrollable frequency interference
is present in the spectrum, but the method is powerful enough to be usable in
the detection and classification of less sensitive types of motor actions.
Figure 6: The two channels’ (Ch1, Ch2) amplitude/time representation of the recordings.
The bottom plot shows the two superimposed signals. The green highlights are
the eye-movement time sequences (sample size on the vertical axis).
Fig. 6 represents left-right eyeball movements with a relaxation time
between them. The time sequence of the left- and right-hand cortical side
EMG+EEG is visible. The whole recording is about 100 seconds long. The high-
amplitude signal in the 25-37 s interval of the Ch2 recording is extra EMG, a
noise from the experiment’s point of view. The time sequence contains three
eye-movement events, between (12, 25) s, (39, 55) s and finally (71, 85) s.
These are highlighted by green line segments. In the third (bottom) window it
is visible that the Ch1 and Ch2 recordings are in opposite phase. This will also
be obvious from the cross-correlations calculated and represented in Fig. 9.
The next two figures (Fig. 7 and Fig. 8) are the representations of the Morlet
WT of these channel recordings. The COI and the significant areas are
represented. Domains of the signal within these closed contours are significant;
those outside are not (and should not be considered biological events). We
consider the two parallel lines delimiting roughly the (0.75 – 1.5) Hz frequency
interval. Within these limits we can identify the events of the left-right-left eye
movements in the detected time intervals. The presence of significant domains
is very obvious, and they can be easily identified. In Fig. 8, in the 25 s to 37 s
interval, the EMG ‘noise’ is present, but its basic components also appear at
much higher frequency domains.
(0.75 – 1.5 Hz). This means that the correlation between the two channels in
this frequency band is in opposition. In the recordings with halfway eye
movements (left to middle, or right to middle) the phase shift is not opposite but
is around 90/270 degrees. These phase events permit the detection of the
direction of eye movements. The bottom image of Fig. 9 is the normalized
version of the same cross-correlation, the so-called cross-coherence between the
two channels. This information about the interrelation of the two recordings is
more relevant for characterizing the ERO-containing visual evoked potentials.
and are not presented in this paper, but can be considered for technical
applications based on EEG+MEG recordings. The cross-coherence matrix is
processed to extract the information (signals) for further control tasks.
5. Conclusions
References
[1] Nicolelis, M. A. L. and Lebedev, M. A., “Principles of neural ensemble physiology under-
lying the operation of brain–machine interfaces”, Nature Reviews Neuroscience, vol. 10, pp
530-540, July 2009.
[2] Hockensmith, G. B., Lowell, S. Y., and Fuglevand, A. J., “Common input across motor
nuclei mediating precision grip in humans”, J. Neuroscience, vol. 25, pp. 4560–4564, 2005.
[3] Caviness, J. N., Adler, C. H., Sabbagh, M. N., Connor, D. J., Hernandez, J. L., and
Lagerlund, T. D., “Abnormal corticomuscular coherence is associated with the small
amplitude cortical myoclonus in Parkinson’s disease”, Movement Disorders, no. 18, pp.
1157–1162, 2003.
[4] Grosse, P., Cassidy, M. J., Brown, P. “EEG–EMG, MEG–EMG and EMG–EMG frequency
analysis: physiological principles and clinical applications”, Clin Neurophysiol, no. 113, pp.
1523–1531, 2002.
[5] Torrence, C., and Compo, G. P., “A Practical Guide to Wavelet Analysis”, Bulletin of the
American Meteorological Society, vol. 79, no. 1, pp. 61-78, January 1998.
[6] Yao, B., Salenius, S., Yue, G. H., Brownc, R. W., and Liu, Z. L., “Effects of surface EMG
rectification on power and coherence analyses: An EEG and MEG study”, Journal of
Neuroscience Methods, no.159, pp. 215–223, 2007.
[7] Ermentrout, B. G., Galán, R. F., and Urban, N. N., “Reliability, synchrony and noise”,
Trends in Neurosciences, vol. 31, no. 8, pp. 428-434, 2008.
[8] Faes, L., Chon, Ki. H., and Giandomenico, N., “A Method for the Time-Varying Nonlinear
Prediction of Complex Nonstationary Biomedical Signals”, IEEE Transactions on
Biomedical Engineering, vol. 56, no. 2, pp. 205-209, February 2009.
[9] Chua, K. C., Chandran, V., Rajendra Acharya, U., and Lim C.M. “Analysis of epileptic
EEG signals using higher order spectra” Journal of Medical Engineering & Technology,
vol. 33, no. 1, 42–50, January 2009.
[10] Guo, X., Yan, G., and He, W. “A novel method of three-dimensional localization based on
a neural network algorithm”, Journal of Medical Engineering & Technology, vol. 33, no. 3,
pp. 192–198, April 2009.
[11] Ajoudani, A., and Erfanian, A., “A Neuro-Sliding-Mode Control With Adaptive Modeling
of Uncertainty for Control of Movement in Paralyzed Limbs Using Functional Electrical
Stimulation”, IEEE Transactions on Biomedical Engineering, vol. 56, no. 7, pp. 1771-1780
July 2009.
[12] Harrison, T. C., Sigler, A., and Murphy, T. H., “Simple and cost-effective hardware and
software for functional brain mapping using intrinsic optical signal imaging”, Journal of
Neuroscience Methods, no. 182, pp. 211–218, 2009.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 136-145
1. Introduction
The interest in using the wavelet transform to denoise electrocardiogram
(ECG) signals is increasing. The wavelet transform is a useful time–frequency
tool, preferred for the analysis of complex signals. Its application to ECG signal
processing has been found particularly useful due to its localization in both
the time and frequency domains. The discrete wavelet transform based approach
produces a dyadic decomposition structure of the signals. In contrast, the
wavelet packet approach is an adaptive method that optimizes the best tree
decomposition structure independently for each signal.
2. Methods
The continuous wavelet transform (CWT) of the signal x(t) is defined as a
convolution of the signal with a scaled and translated version of a base
wavelet function [1]:

\[ W_a x(b) = \int_{-\infty}^{+\infty} x(t)\,\psi_{a,b}(t)\,dt = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{+\infty} x(t)\,\psi\!\left(\frac{t-b}{a}\right) dt \qquad (1) \]
where the scale ‘a’ and translation ‘b’ parameters are nonzero real values and
the wavelet function is also real. A small value of ‘a’ gives a contracted version
of the mother wavelet function and then allows the analysis of high frequency
components. A large value of the scaling factor stretches the basic function and
provides the analysis of low-frequency components of the signal.
The discrete wavelet transform (DWT) is defined as a convolution between
the analyzed signal and discrete dilation and translation of a discrete wavelet
function. In its most common form, the DWT applies a dyadic grid (integer
powers of two for the scaling ‘s’ and translation ‘l’) and an orthonormal wavelet
basis function:

\[ \psi_{(s,l)}(x) = 2^{-s/2}\,\psi\left(2^{-s}x - l\right) \qquad (2) \]
The variables s and l are integers that scale and translate the mother function
ψ to generate wavelets (analyzing functions). The scale index s indicates the
wavelet's width, and the location index l gives its position. The mother wavelets
are rescaled, or “dilated” by powers of two, and translated by integer ‘l’ values.
In this case we have a dyadic decomposition structure. These functions define
an orthogonal basis, the so-called wavelet basis [3], [5]. The Discrete Wavelet
Transform (DWT) decomposition of the signal into different frequency bands
138 Nonlinear Filtering in ECG Signal Denoising
(Figure: the noisy ECG signal; amplitude versus sample index.)
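The dyadic decomposition structure described in Section 2 can be illustrated with one level of the orthonormal Haar filter bank (a minimal sketch, not the toolbox code used in the paper): the signal is split into an approximation (low-pass) and a detail (high-pass) half-band, each downsampled by two.

```java
// One level of the dyadic DWT decomposition, using the Haar wavelet.
public class HaarDwt {

    /** Returns {approximation, detail}, each of length x.length / 2. */
    public static double[][] forward(double[] x) {
        int half = x.length / 2;
        double[] a = new double[half];
        double[] d = new double[half];
        double s = Math.sqrt(2.0) / 2.0; // orthonormal Haar filter coefficient
        for (int k = 0; k < half; k++) {
            a[k] = s * (x[2 * k] + x[2 * k + 1]); // low-pass, downsampled
            d[k] = s * (x[2 * k] - x[2 * k + 1]); // high-pass, downsampled
        }
        return new double[][] { a, d };
    }

    public static void main(String[] args) {
        double[][] ad = forward(new double[] { 1, 1, 0, 0 });
        System.out.println(ad[0][0] + ", " + ad[0][1] + " / " + ad[1][0] + ", " + ad[1][1]);
    }
}
```

Iterating the same split on the approximation band yields the full dyadic tree; the wavelet packet approach instead splits both bands and selects the best tree per signal.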
4. Results
To assess this denoising procedure, the parameters followed were the
obtained signal-to-noise ratio and the absolute value of the error, defined as:

\[ SNR_1[\mathrm{dB}] = 10\,\lg \frac{P_{originalECG}}{P_{originalECG} - P_{denoisedECG}} \qquad (6) \]

\[ Error = \left| originalECG - denoisedECG \right| \qquad (7) \]
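The two measures can be sketched as follows (mean signal power is assumed for P, the denominator of (6) is taken literally as the difference of the two powers, and the names are illustrative):

```java
// Sketch of the quality measures of equations (6) and (7).
public class DenoisingMetrics {

    /** Mean power of a signal (assumed definition of P). */
    static double power(double[] x) {
        double p = 0.0;
        for (double v : x) p += v * v;
        return p / x.length;
    }

    /** SNR1[dB] = 10 lg( P_original / (P_original - P_denoised) ), eq. (6). */
    public static double snr1(double[] original, double[] denoised) {
        return 10.0 * Math.log10(power(original) / (power(original) - power(denoised)));
    }

    /** Error = |original - denoised| sample by sample, eq. (7). */
    public static double[] error(double[] original, double[] denoised) {
        double[] e = new double[original.length];
        for (int k = 0; k < original.length; k++) {
            e[k] = Math.abs(original[k] - denoised[k]);
        }
        return e;
    }
}
```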
Figure 7 presents the original signal, the wavelet-transform-based denoised (soft
thresholding) signal and the result of the proposed nonlinear filtering. One can see
(by visual analysis) that the new method preserves more accurate information
about the signal’s characteristic points than the DWT-based procedure. Figure 8
shows the signal-to-noise ratios obtained by the different denoising methods. The
results show that the proposed method performs better denoising when the signal
has a lower signal-to-noise ratio.
Z. Germán-Salló 143
(Figure 7: the noisy ECG signal, the DWT-denoised signal and the nonlinear-filtered
signal; amplitude versus sample index.)
(Figure 8: ‘Comparing SNRs’ — the SNR values obtained with the wavelet packet
(SNRwp), proposed (SNRnew) and DWT (SNRdwt) methods over five tests.)
(Figure: ‘denoising errors’ — ErrorWP, ErrorNEW and ErrorDWT versus input
SNR values of 10.15, 5.87, 2.72 and 2.)
The denoising errors figure presents the filtering errors for the different
methods; the proposed procedure seems to be slightly better than the wavelet
packet based denoising method.
5. Conclusions
The main idea was to estimate the correlation between the noise and the
signal. The discrete wavelet decomposition algorithm offers a good opportunity
to access different time-frequency domains in order to perform nonlinear
filtering. An extra decomposition of the noise was used to reduce this
correlation. The method was compared with ordinary wavelet decomposition
and wavelet packet decomposition based filtering techniques. The wavelet and
wavelet packet based denoising methods gave different performances, due to
the different division strategies of their signal decomposition structures.
References
[1] Donoho, D. L., “De-noising by soft-thresholding”, IEEE, Transaction on Information
Theory, vol 41, no 3, pp. 613-627, 1995.
[2] Aldroubi, A., and Unser, M.: “Wavelets in Medicine and Biology”, CRC Press New York
1996.
[3] Misiti, M., Misiti, Y., Oppenheim, and G., Poggi, J-M.: “WaveletToolbox. For Use with
Matlab. User’s Guide”, Version 2, The MathWorks Inc 2000.
[4] Coifman, R. R., and Wickerhauser, M. V., “Entropy-based algorithms for best basis selec-
tion”, IEEE Transaction on Information Theory, vol. 38, no 2, pp. 713–718, 1992.
Z. Germán-Salló 145
[5] Mallat, S. A., “A theory for multi-resolution signal decompositions: The wavelet represent-
tation”, IEEE Trans. On Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-
693, 1989.
[6] Donoho, D. L., and Johnstone, I. M., “Ideal spatial adaptation by wavelet shrinkage”,
Biometrika, vol. 81, no. 3, pp. 425-455, 1994.
[7] Chang, C. S., Jin, J., Kumar, S., Su, Q., Hoshino, T., Hanai, M., and Kobayashi, N.,
“Denoising of partial discharge signals in wavelet packets domain”, IEE Proceedings –
Science, Measurement and Technology, vol. 152, no. 3, pp. 129-140, 2005.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 146-158
Abstract: In the past few years a considerable research activity has been directed
towards understanding the structure forming phenomena of nanostructured materials. A
wide range of composition and structure of multiphase material systems have been
investigated in order to allow a fine tuning of their functional properties. We have
studied one of the most promising material systems composed of Al, Ti, Si, N, where a
significant reduction in grain growth was achieved through control of phase separation
process. Nanostructured (Al, Ti, Si)N thin film coatings were synthesized on Si(100)
and high speed steel substrates by DC reactive magnetron sputtering of a planar
rectangular Al:Ti:Si=50:25:25 alloyed target, performed in Ar/N2 gas mixture. For all
the samples we have started with deposition of a nitrogen-free TiAlSi seed layer. Cross-
sectional transmission electron microscopy investigation (XTEM) of as-deposited films
revealed distinct microstructure evolution for different samples. The metallic AlTiSi
film exhibited strong columnar growth with a textured crystalline structure. Addition of
a small amount of nitrogen to the Ar process gas caused grain refinement. Further
increase of the nitrogen concentration resulted in a fine lamellar growth morphology
consisting of very fine grains in close crystallographic orientation, showing clusters
of chain-like pearls in a dendrite-like evolution. An even higher N concentration
produced a homogeneous, compact coating with an isotropic structure, in which
nanocrystals with an average size of ~3 nm can be observed. The kinetics of the structural
transformations is explained in the paper by considering the basic mechanism of
spinodal decomposition process.
1. Introduction
In the last decade, intense research activity has been devoted to investigating
nanocomposite coating materials consisting of a nanocrystalline transition-
metal nitride and an amorphous tissue phase. These coating materials are
characterized by high hardness [1], enhanced elasticity [2] and high thermal
stability [3], which define their unusual mechanical and tribological properties.
Various studies revealed that in multiphase nanocomposite materials the
microstructure and the hardness-to-elastic-modulus ratio H/E are important
for the coating performance [4]. Recently, the most studied material is the
quaternary (Ti, Al, Si)N nitride system, which reveals the most promising results.
As it has been suggested by Veprek [5], in nanocomposite materials the
structure and size of the nanocrystalline grains embedded in the amorphous
tissue phase together with the high cohesive strength of their interface, are the
main parameters which control the mechanical behavior of the coatings. The
reported results revealed that adatom mobility may control the microstructure
evolution in multi-elemental coating systems, where the substrate temperature
and the low energy ion/atom arrival ratio have significant effect on the growth
of nanocrystalline grains.
The microstructure and growth mechanism of arc-plasma-deposited TiAlSiN
(35 at.% Ti, 42 at.% Al, 6.5 at.% Si) thin films were investigated by Parlinska et
al. [6], [7]. It was shown that compositionally graded TiAlSiN thin films with a Ti-
rich zone close to the substrate exhibited a crystalline structure with pronounced
columnar growth. The addition of Al+Si leads to a grain refinement of the coatings,
and a further increase of the Al+Si concentration results in the formation of
nanocomposites, consisting of equiaxial crystalline nanograins surrounded by a
disordered, amorphous SiNx phase.
In our study, (Al, Ti, Si)N single-layer thin film coatings were deposited on
Si(100) and high-speed steel substrates by DC reactive magnetron sputtering.
We investigated the microstructural modification of the (Ti1-xAlxSiy)N thin film
coatings as a function of nitrogen concentration by conventional transmission
electron microscopy.
Table 1: Summary of deposition parameters used for the preparation of the (Al, Ti, Si)N
coatings: Pd – DC magnetron discharge power, qN2 – nitrogen mass flow rate, Ts – substrate
temperature, Us – substrate bias voltage.
Figure 2: XTEM micrograph and SAED electron diffraction pattern of the (AlTiSi)N
coating grown at a nitrogen flow rate of qN2 = 2 sccm (sample TiS_09): a) the bright field
(BF) image indicates a weakly columnar structure evolution in the close vicinity of the
transition zone from the ternary TiAlSi sub-layer to the quaternary (AlTiSi)N
overgrown layer; b) on the enlarged micrograph a slightly curved fine lamellar growth
morphology can be identified inside the individual columns.
For a nitrogen flow rate of qN2 = 2 sccm the microstructure indicates a weak
columnar evolution (Fig. 2a). A slightly curved fine lamellar growth morphology
can be identified inside the individual columns (see the enlarged
micrograph, Fig. 2b).
The selected area electron diffraction (SAED) pattern taken in the close vicinity
of the transition zone (including also the Si(100) bulk) indicates a two-phase
mixture of fcc-TiAlN nanocrystals embedded in an amorphous tissue phase
(inset of Fig. 2b). Furthermore, the (200) preferential growth in the close vicinity of
the transition zone from the ternary TiAlSi sub-layer to the quaternary (AlTiSi)N
overgrown layer was slightly maintained. The presence of continuous reflection
rings suggests a grain refinement of the coating, with a strong tendency for
evolution from the textured polycrystalline phase to a mixture of a nanocrystalline
AlTi(Si)N phase and a possibly forming silicon nitride amorphous tissue
phase.
The chemical composition of the as-deposited (Ti, Al, Si)N thin films was
evaluated from EDS spectra and found to be 23 at.% Ti, 46 at.% Al, 26 at.%
Si, and about 5 at.% N.
152 Microstruct. Modif. of (Ti1-xAlxSiy)N Thin Film Coatings as a Function of Nitrogen Conc.
Figure 3: Bright field XTEM micrograph of the (AlTiSi)N thin film deposited with an
increased nitrogen flow rate (sample TiS_05, qN2 = 5 sccm): a) the coating's
microstructure indicates the development of a competitive columnar evolution of the
ternary TiAlSi sub-layer, followed by the growth of the quaternary (AlTiSi)N overgrown
layer developed with an isotropic morphology; b) the enlarged micrograph clearly shows
a random distribution of very fine nc-(Al1-xTix)N grains, having an average size of ~3 nm,
with disordered grain boundaries.
The chemical composition of the as-deposited thin film was evaluated from
EDS spectra analysis and found to be 12 at.% Ti, 19 at.% Al, 23 at.% Si and 46
at.% N. The oxygen impurity content decreased to about 0.2 %, which was
attributed to a prolonged outgassing process of the vacuum chamber and to thermal
degassing of the substrate by heating to 600 °C prior to the deposition process.
Veprek et al. [10, 11] emphasized in their recently published review papers
that ultra-hard nanocomposite nitride phase coatings based on the (Ti, Al, Si)N
elemental composition can be achieved under well-controlled plasma and
deposition conditions. The development in a periodic structure of the
D. Biró, S. Papp, L. Jakab-Farkas 153
$$ j_A = -D_A \cdot \frac{\partial C_A}{\partial x}, \qquad (2) $$
where $D_A$ stands for the diffusion coefficient in Fick's first empirical law.
The diffusion flux $j_A$ can also be driven by the free energy gradient of
component A:
$$ j_A = -C_A \cdot \mu_A \cdot \frac{\partial G_A}{\partial x}, \qquad (3) $$
where $\mu_A$ stands for the mobility constant of component A.
From the above equations the diffusion coefficient $D_A$ can be written as a
derivative of the Gibbs free energy with respect to the concentration:
$$ D_A = C_A \cdot \mu_A \cdot \frac{\partial G_A}{\partial C_A} \qquad (4) $$
If the diffusion coefficient is positive, $D_A > 0$, i.e. $\partial G_A / \partial C_A > 0$, the chemical
potential gradient has the same direction as the concentration gradient; therefore
the diffusion flux occurs along the concentration gradient. For a negative diffusion
coefficient, $D_A < 0$, i.e. $\partial G_A / \partial C_A < 0$, the diffusion flux occurs against the
concentration gradient.
By correlating the phase composition diagram (i.e. a diagram of phases,
where the dependence of temperature T on the molar fractions $X_A$ and $X_B$ of the
components indicates the composition ranges of the equilibrium phases) with the
diagram of the free energy change versus the molar fraction of the mixture, it
can be seen that below the spinodal (where the second derivative of the free
energy of mixing with respect to the molar fraction $X_A$ is zero,
$\partial^2 \Delta G / \partial X_A^2 = 0$) the system is unstable (Fig. 4).
For negative values of the second derivative of the free energy of mixing
with respect to the molar fraction $X_A$, i.e. $\partial^2 \Delta G / \partial X_A^2 < 0$, the
homogeneous supersaturated solution is unstable and spinodal decomposition
may proceed.
The spinodal decomposition process takes place in an unstable region, where
further instability is caused by small fluctuations in the local
concentration, while the diffusion process proceeds from lower to higher
concentration (i.e. "up-hill" diffusion).
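The stability criterion above can be illustrated with a short numerical sketch. Assuming a simple regular-solution model for the free energy of mixing, ΔG = Ω·X(1−X) + RT[X ln X + (1−X) ln(1−X)] (this model and the values of Ω and T are illustrative assumptions, not taken from the paper), the spinodal points follow from ∂²ΔG/∂X² = 0:

```python
import numpy as np

R = 8.314          # gas constant, J/(mol K)
Omega = 20e3       # regular-solution interaction parameter, J/mol (illustrative)
T = 800.0          # temperature, K (illustrative)

def d2G_dX2(X):
    """Second derivative of the regular-solution free energy of mixing:
    dG = Omega*X*(1-X) + R*T*(X ln X + (1-X) ln(1-X))
    =>  d2G/dX2 = -2*Omega + R*T*(1/X + 1/(1-X))."""
    return -2.0 * Omega + R * T * (1.0 / X + 1.0 / (1.0 - X))

X = np.linspace(0.01, 0.99, 981)
inside_spinodal = d2G_dX2(X) < 0   # unstable region: "up-hill" diffusion

# Spinodal points: d2G/dX2 = 0  =>  X*(1-X) = R*T/(2*Omega),
# symmetric about X = 0.5 for the regular-solution model
disc = 0.25 - R * T / (2.0 * Omega)
X_sp = (0.5 - np.sqrt(disc), 0.5 + np.sqrt(disc))
print("spinodal points:", X_sp)
```

Between the two spinodal compositions the second derivative is negative, so any small concentration fluctuation grows and the decomposition proceeds by up-hill diffusion, exactly as described in the text.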
Our experimental results on the fine lamellar growth morphology of (Ti, Al, Si)N
nitride coatings, consisting of chain-like pearls in a dendritic evolution with very
fine grains in close crystallographic orientation, may be explained in accordance
with Veprek's theory by partial spinodal decomposition and phase segregation
during film growth, while the percolation threshold composition is attained at
an increased nitrogen activity.
On the other hand, an increase of the deposition rate induces a decrease of the
surface mobility, related to the decrease of the ion-to-atom arrival rate ratio.
These particular deposition conditions explain the columnar structure of the TiAlSi
solid-solution crystallites, which can be clearly observed in the XTEM image of
the TiS_01 sample. Addition of minor amounts of nitrogen leads to the
encapsulation of the growing TiAl(Si)N crystallites by a process-segregated
amorphous phase.
From detailed observation of the diffuse SAED pattern of sample TiS_05,
obtained with an increased nitrogen flow rate, the amorphous phase surrounding
the Ti3AlN nanocrystallites can be attributed to a Si3N4 matrix phase (inset of
Fig. 3a). The formation of amorphous TiSi2 and AlN phases due to the partial
segregation of Al and Si atoms should also be considered, owing to the effect of
the enhanced ion bombardment provided by the focused plasma beam that is
characteristic of the present experimental conditions [8].
When the atomic surface mobility in the growing film is adequate, the
segregated atoms can nucleate and develop new phases, controlled by the
deposition temperature and by the energy transfer from an increased incident
ion-to-atom arrival rate ratio [13-15].
Further experiments are in progress to investigate the influence of the
deposition temperature on structure evolution of (TiAlSi)N coatings.
4. Conclusions
In the present work it was shown that:
a) A columnar structure of the polycrystalline AlTiSi thin film coating evolved by
non-reactive DC magnetron sputtering of an Al:Ti:Si = 50:25:25 alloyed
target (performed in pure Ar atmosphere, with the 500 W discharge power,
Ts = 400 ºC substrate temperature and Us = –75 V bias voltage held
constant).
b) Addition of a small amount of nitrogen to the process gas leads to grain
refinement of the polycrystalline (Ti, Al, Si)N thin films. Increasing the N
concentration (qN2 = 2 sccm flow rate) resulted in a fine lamellar growth morphology of
the coatings, showing chain-like pearls in a dendritic evolution, consisting of
clusters of very fine grains in close crystallographic orientation.
Acknowledgements
The authors are thankful for the financial support of this project granted by
Sapientia Foundation − Institute for Scientific Research, Sapientia University.
The EDS analyses of the investigated coatings were performed in a CM 20
Philips 200kV TEM electron microscope by Professor P. B. Barna from
RITPMS, Budapest. Professor P. B. Barna’s contribution to investigating
elemental composition and the valuable discussions are highly appreciated.
References
[1] Yoon, J. S., Lee, H. Y., Han, J., Yang, S. H., Musil, J., “The effect of Al composition on
the microstructure and mechanical properties of WC–TiAlN superhard composite
coating”, Surface and Coatings Technology, vol. 142-144, pp. 596-602, 2001.
[2] Duran-Drouhin, O., Santana, A. E., Karimi, A., “Mechanical properties and failure modes
of TiAl(Si)N single and multilayer thin films”, Surface and Coatings Technology, vol.
163-164, pp. 260-266, 2003.
[3] Musil, J. and Hruby, H., “Superhard Nanocomposite Ti1-xAlxN Films Prepared by
Magnetron Sputtering” , Thin Solid Films, vol. 365, pp. 104-109, 2000.
[4] Ribeiro, E., Malczyk, A., Carvalho, S., Rebouta, L., Fernandes, J. V., Alves, E., Miranda,
A. S., “Effect of ion bombardment on properties of d.c. sputtered superhard (Ti,Si,Al)N
nanocomposite coatings”, Surface and Coatings Technology, vol. 151-152, pp. 515-520,
2002.
[5] Veprek, S., “New development in superhard coatings: the superhard nanocrystalline-
amorphous composites”, Thin Solid Films, vol. 317, pp. 449-454, 1998.
[6] Parlinska-Wojtan, M., Karimi, A., Cselle, T., Morstein, M., “Conventional and high
resolution TEM investigation of the microstructure of compositionally graded TiAlSiN
thin films”, Surface and Coatings Technology, vol. 177-178, pp. 376-381, 2004.
[7] Parlinska-Wojtan, M., Karimi, A., Coddet, O., Cselle, T., Morstein, M., “Characterization
of thermally treated TiAlSiN coatings by TEM and nanoindentation”, Surface and
Coatings Technology, vol. 188-189, pp. 344-350, 2004.
[8] Biro, D., Barna, P. B., Szekely, L., Geszti, O., Hattori, T., Devenyi, A., “Preparation of
multilayered nanocrystalline thin films with composition-modulated interfaces”, Nuclear
Instruments and Methods in Physics Research, vol. 590, pp. 99-106, 2008.
[9] Lábár, J. L., “ProcessDiffraction: A computer program to process electron diffraction
patterns from polycrystalline or amorphous samples”, Proceedings of the XII EUREM,
Brno (L. Frank and F. Ciampor, Eds.), vol. III., pp. I 379-380, 2000.
[10] Veprek, S., Veprek-Heijman, M. G. J., Karvankova, P., Prochazka, J., “Different
approaches to superhard coatings and nanocomposites”, Thin Solid Films, vol. 476, pp. 1-29,
2005.
[11] Veprek, S., Zhang, R. F., Veprek-Heijman, M. G. J., Sheng, S. H., Argon, A. S.,
“Superhard nanocomposites: Origin of hardness enhancement, properties and applications”,
Surface and Coatings Technology, vol. 204, pp. 1898-1906, 2009.
[12] Favvas, E. P., Mitropoulos, A. Ch., “What is spinodal decomposition?”, Journal of
Engineering Science and Technology Review, vol. 1, pp. 15-27, 2008.
[13] Carvalho, S., Rebouta, L., Ribeiro, E., Vaz, F., Denanot, M. F., Pacaud, J., Riviere, J. P.,
Paumier, F., Gaboriaud, R. J., Alves, E., “Microstructure of (Ti,Si,Al)N nanocomposite
coatings”, Surface and Coatings Technology, vol. 177-178, pp. 369-375, 2004.
[14] Carvalho, S., Rebouta, L., Cavaleiro, A., Rocha, L. A., Gomes, J., Alves, E.,
“Microstructure and mechanical properties of nanocomposite (Ti,Si,Al)N coatings”, Thin
Solid Films, vol. 398-399 , pp. 391-396, 2001.
[15] Vaz, F., Rebouta, L., Goudeau, P., Pacaud, J., Garem, H., Riviere, J. P., Cavaleiro, A.,
Alves, E., “Characterization of TiSiN nanocomposite films”, Surface and Coatings
Technology, vol. 133-134 , pp. 307-313, 2000.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 159-165
Abstract: The paper discusses the generating surfaces of the paloid bevel worm hob
used for paloid gear cutting. A pitch modification is required for this type of tool in
order to ensure the optimal contact pattern in gearing. As a consequence of this
modification, the flank line of the plane gear tooth results as a paloid, a more generally
shaped curve than the theoretical involute of the base circle. The equations of the
generating surfaces result as the equations of a generalized Archimedean bevel helical
surface, presenting those modifications that arise from the variation of the tooth thickness.
The first subsection discusses the essential geometrical peculiarities of the paloid
worm hob. It should be remarked that the most important characteristic of the tool is the
variation of the tooth thickness along the rolling tape generator. The tooth thickness has its
minimum value at the middle of the generator and is maximum at the extremities. As a
consequence, the generated gear tooth presents an opposite variation of thickness. The
thickness variation is realized by moving the relieving tool on an ellipse, but this is not
the only possible trajectory.
The second subsection presents the generalized mathematical model of the tooth
thickness variation. Starting from the ellipse used in classical relieving technologies,
and writing the equation of the ellipse in the coordinate system of the paloid
hob, the radius function of the revolved surface of the reference helix results. This
function is used in its condensed form; thus, the developed mathematical model
can also be used for other forms of the relieving tool trajectory.
The next paragraphs present the matrix transformations between the coordinate
systems of the hob and the relieving tool. Finally, the parametric equations of the hob
tooth flank are obtained. These equations depend on the radius modification function.
Using relieving trajectories that differ from an ellipse, other tooth flank forms will
be obtained. Using this model, the flanks of the cut gear tooth can easily be written for
all types of trajectories.
160 D. Hollanda, M. Máté
1. General description
Paloid bevel gears are manufactured on Klingelnberg-type gear-cutting machine tools,
using paloid bevel gear worm hobs [1], [6]. These tools present a straight-shaped
edge in their axial section. Consequently, the origin surface of the
paloid worm hob is an Archimedean bevel worm having a half taper angle of 30°,
as shown in Fig. 1 [4], [5]. The chip-collecting slots are axially directed. In order
to realize the clearance angle on all edges, a helical relieving, oriented
perpendicularly to the bevel generator, is applied.
$$
r_M = \begin{bmatrix}
F_{eo} - \dfrac{\pi \cdot m_N}{4}\left[\cos 30° + \tan 30° \cdot \sin(30° - \alpha)\right] + \lambda \sin(30° - \alpha) \\
\lambda \cos \alpha \\
0 \\
1
\end{bmatrix} \qquad (2)
$$
Figure 4: The position of the right edge relative to the reference system used.
First the edge must be reported to an auxiliary coordinate system. Between
this and the system $O_M X_M Y_M Z_M$ there exist only translations, by
amounts of $p_A \cdot \varphi / 2\pi$ and $h \cdot \varphi / 2\pi$ respectively. The transfer matrix describing
the above translations is:
$$
M_{aM} = \begin{bmatrix}
1 & 0 & 0 & \dfrac{p_A}{2\pi}\varphi \\
0 & 1 & 0 & \dfrac{h}{2\pi}\varphi \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (3)
$$
The pointing vector of the bevel helical surface described by the edge,
reported to the stationary reference system, is obtained from the following
matrix equation:
$$ r = M_{Oa} \cdot M_{aM} \cdot r_M = M_{OM} \cdot r_M \qquad (5) $$
Multiplying $M_{Oa}$ by $M_{aM}$ results in
$$
M_{OM} = \begin{bmatrix}
1 & 0 & 0 & \dfrac{p_A}{2\pi}\varphi \\
0 & \cos\varphi & -\sin\varphi & \dfrac{h}{2\pi}\varphi\cos\varphi \\
0 & \sin\varphi & \cos\varphi & \dfrac{h}{2\pi}\varphi\sin\varphi \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (6)
$$
Finally, the matrix expression of the pointing vector is:
$$
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \dfrac{p_A}{2\pi}\varphi \\
0 & \cos\varphi & -\sin\varphi & \dfrac{h}{2\pi}\varphi\cos\varphi \\
0 & \sin\varphi & \cos\varphi & \dfrac{h}{2\pi}\varphi\sin\varphi \\
0 & 0 & 0 & 1
\end{bmatrix} \cdot
\begin{bmatrix} X_M \\ Y_M \\ Z_M \\ 1 \end{bmatrix} \qquad (7)
$$
Expression (7) gives the matrix form of the right flank of the bevel worm-hob
tooth, reported to the stationary system OXYZ attached to the hob. The
analytical expression is obtained after multiplying the matrices in the
above equation. The equations of the opposite flank result similarly, starting
the calculus from the equations of the opposite edge.
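The composition in (5)–(6) can be checked numerically. In the sketch below $M_{Oa}$ is assumed to be a pure rotation by φ about the X axis (consistent with the rotation block of (6)); the values of p_A, h and φ are illustrative:

```python
import numpy as np

p_A, h, phi = 12.0, 4.0, 0.7   # illustrative pitch, lead and angle values

# M_aM: translations by p_A*phi/(2*pi) along X and h*phi/(2*pi) along Y, eq. (3)
M_aM = np.eye(4)
M_aM[0, 3] = p_A * phi / (2.0 * np.pi)
M_aM[1, 3] = h * phi / (2.0 * np.pi)

# M_Oa: rotation by phi about the X axis (assumed form)
c, s = np.cos(phi), np.sin(phi)
M_Oa = np.array([[1, 0,  0, 0],
                 [0, c, -s, 0],
                 [0, s,  c, 0],
                 [0, 0,  0, 1.0]])

M_OM = M_Oa @ M_aM   # eq. (5)

# compare with the closed form of eq. (6)
M_ref = np.array([[1, 0,  0, p_A * phi / (2 * np.pi)],
                  [0, c, -s, h * phi / (2 * np.pi) * c],
                  [0, s,  c, h * phi / (2 * np.pi) * s],
                  [0, 0,  0, 1.0]])
assert np.allclose(M_OM, M_ref)
```

The rotation leaves the X translation p_A·φ/2π unchanged and turns the Y translation h·φ/2π into the (h/2π)φ·cos φ and (h/2π)φ·sin φ entries of the last column, reproducing (6).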
References
[1] Krumme, W., “Klingelnberg-Spiralkegelräder Dritte neubearbeitete Auflage”, Springer
Verlag, Berlin, 1967.
[2] *** “Masinostroienie” Encyclopedia, Vol. VII, Mashgiz, Moscow.
[3] Máté, M., Hollanda, D. “The Enveloping Surfaces of the Paloid Mill Cutter”, in Proc. 18th
International Conference on Mechanical Engineering, Baia Mare, 23-25 April 2010, pp.
291-294.
[4] Michalski, J., Skoczylas, L. “Modeling the tooth flanks of hobbed gears in the CAD
environment", The International Journal of Advanced Manufacturing Technology, vol. 36,
no. 7-8, 2008.
[5] Chen, Shu-han, Yan, Hong-zhi, Ming, Xing-zu, “Analysis and modeling of error of spiral
bevel gear grinder based on multi-body system theory”, Journal of Central South
University of Technology, vol. 15, no. 5, 2008.
[6] Klingelnberg, J. “Kegelräder- Grundlagen, Anwendungen”, Springer Verlag, 2008.
Acta Universitatis Sapientiae
Electrical and Mechanical Engineering, 2 (2010) 166-176
1. Introduction
The number of applications in industry which use parallel mechanisms
is growing, and the interest of academia in finding new solutions and
applications to implement such mechanisms is present all over the world. The
lower-degree-of-freedom mechanisms which are suited to some specific tasks
Z. Forgó 167
are preferred because of their architectural simplicity, and therefore the easy
mathematical modeling, and last but not least for economic reasons.
The 6 degrees of freedom (DOF) parallel mechanism was introduced by
Stewart and Gough [1], and since then many aspects of the mechanism and its
applications have been revealed. During the last decades much attention has been paid to
the study of 6 DOF parallel mechanisms, including synthesis and analysis of
kinematics, dynamics, singularities, error and workspace. Some milestones in
the analysis of these mechanisms were set by Earl and Rooney, using a method for
the synthesis of new kinematic structures [2]; Hunt studied the manipulators on the
basis of screw theory [3]; Tsai used a systematic methodology in [4]; and
Hervé discussed the structural synthesis of parallel robots using
mathematical group theory [5]. More recently, Shen proposed a systematic type
synthesis methodology for 6 DOF kinematic structures, enumerating 29 parallel
structures [6]. Therein Shen defines the hybrid single-open chains (HSOC),
which are able to generate three translations and three rotation angles. Using
those HSOCs, four 6 DOF manipulators are presented with a symmetrical
arrangement of the limbs (see No. 3-No. 6 architectures, Table 2 of [6]).
According to Tsai [7], the symmetry implies the use of the same number of
actuators in the same positions in each limb. Moreover, he states that a parallel
manipulator is symmetrical if it satisfies the condition that the number of limbs
is equal to the number of degrees of freedom of the moving platform. In the
case of double-actuated limbs (with two actuated joints) the latter
condition can be omitted, so the HSOCs defined by Shen can be replaced by
serial chains which also enable three translations and three rotations.
This paper presents some kinematic structures according to the above-mentioned
criteria, without aiming at a full discussion of all possible
structures. The geometrical model of one architecture is presented as well.
$$
\{L_i\} = \{D\} = \{T\}\{S(N)\} =
\{T(\mathbf{u})\}\{T(\mathbf{v})\}\{T(\mathbf{w})\}\{R(N,\mathbf{u})\}\{R(N,\mathbf{v})\}\{R(N,\mathbf{w})\}, \quad \forall N. \qquad (3)
$$
Figure 1: The {Li} displacement Lie group variants incorporating the X-motion
generator.
The X-motion (or Schoenflies motion) generator can be easily observed, due
to equation (9) and Fig. 1a. Considering primitive Schoenflies-motion
generators [10], equivalences can be applied. Extending those generator family
members with the universal joint, as seen in Fig. 1, new generators for the {D}
displacement Lie group can be introduced. However, this enumeration is outside
the topic of this paper. Because of the reduced link number and its simplicity, the
Fig. 1b variant is preferred in further investigation. Using other geometrical
constraints, the architecture is also presented in [11]. The schematic design of
such a limb for a 6 DOF manipulator is presented in Fig. 2b. The index i is
introduced because the same type of limb is used for moving the manipulator
platform.
$$
\overline{C_i B_i} = \overline{C_i D_i} + \overline{D_i B_i} =
C_i D_{ix} \cdot \mathbf{i} + C_i D_{iy} \cdot \mathbf{j} + D_i B_i \cdot \mathbf{k}, \qquad (11)
$$
$$
\overline{C_i B_i} = C_i B_{ix} \cdot \mathbf{i} + C_i B_{iy} \cdot \mathbf{j} + D_i B_i \cdot \mathbf{k}, \qquad (12)
$$
where $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are the unit vectors of the x0, y0 and z0 Cartesian axes. The setup
of the mechanism (based on the projection of the manipulator onto the Ox0y0 plane,
the top view in Fig. 2) suggests a planar Delta manipulator.
Figure 2: Schematic design of the i-th limb of the 6 DOF manipulator (a, b), and the top view
of the proposed mechanism (c). The shaded couplings are the active joints (one
prismatic and one rotational for each limb), and the white ones are passive joints.
For this reason the mathematical modelling of the proposed mechanism is
made easily and is like the well-known planar Delta manipulator modelling [7]
with some completions. These completions must be made because it
is possible to rotate the platform around the x0 and y0 axes too, so the
projections of the BiBi+1 platform length are variable. In the following paragraphs
the inverse and direct kinematics of the proposed mechanism are defined, and
issues about singular configurations are presented as well.
At the beginning the closure equation is considered for the three limbs:
$$ \overline{OA_i} + \overline{A_i C_i} + \overline{C_i B_i} = \overline{OP} + \overline{PB_i}, \quad \text{where } i = 1, 2, 3. \qquad (13) $$
In the case of inverse kinematic modelling, the right side of equation (13) is
given through the coordinates of the characteristic point (denoted by P) and
through the three rotation angles around the axes of the fixed Ox0y0z0 system:
$X = [x_P \; y_P \; z_P \; \alpha \; \beta \; \gamma]^T$. The task is to determine the robot parameters
$q = [q_1 \; q_2 \; q_3 \; q_4 \; q_5 \; q_6]^T$ from the left side of the equation. Assuming that
vector $\mathbf{a}$ has the component $a_{xy}$ parallel to the Ox0y0 plane and $a_z$ parallel to the
z0 axis, equation (13) becomes:
$$
OA_i \cos\alpha_i + A_iC_i \cos(q_{i+3} + \alpha_i - \pi) + C_iB_i \cos(q_{i+3} + \alpha_i - \pi - \beta_i) = x_P + PB_{ix},
$$
$$
OA_i \sin\alpha_i + A_iC_i \sin(q_{i+3} + \alpha_i - \pi) + C_iB_i \sin(q_{i+3} + \alpha_i - \pi - \beta_i) = y_P + PB_{iy}, \qquad (17)
$$
where $PB_{ix} = PB_{ix}(\alpha, \beta, \gamma)$ and $PB_{iy} = PB_{iy}(\alpha, \beta, \gamma)$ respectively, $i = 1, 2, 3$. To
eliminate the $\beta_i$ parameter belonging to a passive joint, the equations are
rearranged, and summing the squares of the two equations in (17) yields:
$$
e_{1i} \sin(q_{i+3} + \alpha_i - \pi) + e_{2i} \cos(q_{i+3} + \alpha_i - \pi) + e_{3i} = 0, \qquad (18)
$$
172 Kinematic Analysis of a 6 DOF 3-PRRS Parallel Manipulator
where
$$
e_{1i} = -2 \cdot A_iC_i \cdot \left( y_P + PB_{iy} - OA_i \sin\alpha_i \right);
$$
$$
e_{2i} = -2 \cdot A_iC_i \cdot \left( x_P + PB_{ix} - OA_i \cos\alpha_i \right); \qquad (19)
$$
$$
e_{3i} = \left( x_P + PB_{ix} - OA_i \cos\alpha_i \right)^2 + \left( y_P + PB_{iy} - OA_i \sin\alpha_i \right)^2 + A_iC_i^2 - C_iB_i^2.
$$
Solving equation (18) by using the substitutions:
$$
\sin(q_{i+3} + \alpha_i - \pi) = \frac{2t_i}{1 + t_i^2}, \qquad
\cos(q_{i+3} + \alpha_i - \pi) = \frac{1 - t_i^2}{1 + t_i^2}, \quad
\text{where } t_i = \tan\frac{q_{i+3} + \alpha_i - \pi}{2}, \qquad (20)
$$
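With these substitutions, equation (18) becomes a quadratic in $t_i$. A small sketch with generic coefficients (the function name is ours, and θ stands for $q_{i+3} + \alpha_i - \pi$; the two roots correspond to the two elbow configurations of a limb):

```python
import math

def solve_trig_eq(e1, e2, e3):
    """Solve e1*sin(th) + e2*cos(th) + e3 = 0 via t = tan(th/2).
    Substituting sin = 2t/(1+t^2), cos = (1-t^2)/(1+t^2) gives
    (e3 - e2)*t^2 + 2*e1*t + (e2 + e3) = 0.
    Note: the substitution misses th = pi (t -> infinity)."""
    a, b, c = e3 - e2, 2.0 * e1, e2 + e3
    if abs(a) < 1e-12:                      # degenerate case: linear in t
        roots = [-c / b] if abs(b) > 1e-12 else []
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return []                       # pose outside the reachable workspace
        sq = math.sqrt(disc)
        roots = [(-b + sq) / (2 * a), (-b - sq) / (2 * a)]
    return [2.0 * math.atan(t) for t in roots]

# quick self-check on arbitrary coefficients
for th in solve_trig_eq(1.0, 2.0, -1.5):
    assert abs(1.0 * math.sin(th) + 2.0 * math.cos(th) - 1.5) < 1e-9
```

Each root th then yields one actuator value $q_{i+3} = \theta - \alpha_i + \pi$.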
$$
\begin{bmatrix} q_i \\ q_{i+3} \end{bmatrix} =
\begin{bmatrix} \dfrac{r}{2} & -\dfrac{r}{2} \\[4pt] -\dfrac{r}{2R} & -\dfrac{r}{2R} \end{bmatrix} \cdot
\begin{bmatrix} q_i^M \\ q_{i+3}^M \end{bmatrix} \qquad (26)
$$
The inverse geometry calculus can be performed using the following
equation:
$$
\begin{bmatrix} q_i^M \\ q_{i+3}^M \end{bmatrix} =
\begin{bmatrix} \dfrac{1}{r} & -\dfrac{R}{r} \\[4pt] -\dfrac{1}{r} & -\dfrac{R}{r} \end{bmatrix} \cdot
\begin{bmatrix} q_i \\ q_{i+3} \end{bmatrix} \qquad (27)
$$
Using the above formulation and considering equations (16) and (21), the
inverse geometry is obtained in the following form:
$$
q^M =
\begin{bmatrix} q_1^M \\ q_2^M \\ q_3^M \\ q_4^M \\ q_5^M \\ q_6^M \end{bmatrix} =
\begin{bmatrix}
\frac{1}{r} & 0 & 0 & -\frac{R}{r} & 0 & 0 \\
0 & \frac{1}{r} & 0 & 0 & -\frac{R}{r} & 0 \\
0 & 0 & \frac{1}{r} & 0 & 0 & -\frac{R}{r} \\
-\frac{1}{r} & 0 & 0 & -\frac{R}{r} & 0 & 0 \\
0 & -\frac{1}{r} & 0 & 0 & -\frac{R}{r} & 0 \\
0 & 0 & -\frac{1}{r} & 0 & 0 & -\frac{R}{r}
\end{bmatrix} \cdot
\begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ q_4 \\ q_5 \\ q_6 \end{bmatrix}
= A \cdot q. \qquad (28)
$$
Due to the characteristic setup of the driving mechanism, the equations for
the kinematics are obtained in a similar way:
$$ q^M = A \cdot q \quad \text{and} \quad q = A^{-1} \cdot q^M. \qquad (29) $$
In accordance with the formulated equations, the dynamics of the manipulator
can be calculated easily, and will be presented in a further paper.
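The block structure of A in (28) and the pair of relations in (29) can be verified numerically; the pulley radii r and R below are illustrative placeholders:

```python
import numpy as np

r, R = 0.02, 0.05   # illustrative belt-pulley radii

I3 = np.eye(3)
# A assembled block-wise from the per-limb 2x2 of eq. (27): q_M = A @ q
A = np.block([[ (1.0 / r) * I3, (-R / r) * I3],
              [(-1.0 / r) * I3, (-R / r) * I3]])

q = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])   # arbitrary joint values
q_M = A @ q                                     # motor coordinates, eq. (29)
q_back = np.linalg.solve(A, q_M)                # inverse relation of eq. (29)
assert np.allclose(q_back, q)

# the per-limb 2x2 of eq. (26) matches the block pattern of A^{-1}
A_inv = np.linalg.inv(A)
assert np.isclose(A_inv[0, 0],  r / 2.0)
assert np.isclose(A_inv[0, 3], -r / 2.0)
assert np.isclose(A_inv[3, 0], -r / (2.0 * R))
assert np.isclose(A_inv[3, 3], -r / (2.0 * R))
```

The check confirms that the 2×2 matrices of (26) and (27) are mutual inverses, so (26) is consistent with the assembled form (28)–(29).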
Figure 3: Schematic design of the belt mechanism for one double-drive link.
5. Singular configurations
The singularity analysis of this mechanism can be done based on the
matrices from (24) and (25). Inverse kinematic singularities occur in the case of
$a_{ix} b_{iy} - a_{iy} b_{ix} = 0$ (i = 1, 2, 3), which defines the workspace boundaries. Another
possibility is $b_{iz} = 0$ (i = 1, 2, 3), but since it is a constant value it can be avoided
through geometrical design. Direct kinematic singularities occur when at the
same time $b_{ix} = 0$ or $b_{iy} = 0$ (i = 1, 2, 3), which means that the
$C_iB_i$ links are parallel. The same type of singularity is found for
$e_{ix} b_{iy} - e_{iy} b_{ix} = 0$ (i = 1, 2, 3), in the case of collinear $\overline{C_i B_i}$ and
$\overline{PB_i}$ vectors. Both direct kinematic singularity cases can be avoided by careful
geometrical design. The implemented parallel drive mechanisms have no
singular configurations, so this kind of calculation can be omitted.
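The inverse-singularity condition $a_{ix} b_{iy} - a_{iy} b_{ix} = 0$ simply tests whether the xy-projections of the two limb vectors are parallel. A minimal sketch (the vectors and function name here are illustrative placeholders for the limb vectors appearing in (24)-(25)):

```python
import numpy as np

def inverse_singular(a, b, tol=1e-9):
    """Limb i is at an inverse-kinematic singularity (workspace boundary)
    when the xy-projections of a_i and b_i are parallel, i.e.
    a_ix*b_iy - a_iy*b_ix = 0 (the z-component of the 2-D cross product)."""
    return abs(a[0] * b[1] - a[1] * b[0]) < tol

a = np.array([1.0, 2.0, 0.5])
assert inverse_singular(a, 3.0 * a)                    # parallel projections
assert not inverse_singular(a, np.array([2.0, 1.0, 0.0]))
```

Evaluating this test for all three limbs during trajectory planning is a cheap way to keep the platform away from the workspace boundaries.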
6. Conclusions
This paper deals with a 6 degrees of freedom manipulator architecture using
group theory. The mobile platform is connected to the base through three
PRRS limbs, each being double actuated at the first and second joint levels.
The inverse geometrical calculations are performed through equations (16) and
(21), while the direct modelling is presented through the equation system (22).
The relation between the robot and general velocities is stated by equation
(23). Some aspects of the singular configurations are introduced in the paper,
based on the equations mentioned before. As can be seen in the figures
presented in this paper, the architecture is the extension of the well-known planar
Delta robot to a 6 DOF mechanism. The mathematical model of the spatial
manipulator reflects this fact very well. The simple setup of the presented
mechanism assures good manufacturability and needs a relatively simple control
algorithm compared with some other 6 DOF manipulators.
References
[1] Stewart, D., “A platform with six degrees of freedom”, in Proceedings of the Institution of
Mechanical Engineers, 1965, vol. 180, pp. 371-386.
[2] Dasgupta, B. and Mruthyunjaya, T. S., “The Stewart platform manipulator: a review”,
Mechanism and Machine Theory, vol. 35, pp. 15-40, 2000.
[3] Hunt, K. H., “Structural kinematics of in-parallel-actuated robot arms”, ASME Journal of
Mechanical Design, vol. 105, pp. 705-712, 1983.
[4] Tsai, L. W., and Joshi, S., “Kinematics and optimization of a Spatial 3-UPU Parallel
manipulator”, ASME Journal of Mechanical Design, vol. 122, pp. 439-446, 2000.
[5] Hervé, J. M., “Design of parallel manipulators via the displacement group”, in Proceedings
of the 9th World Congress on Theory of Machine and Mechanisms, Milano, 1985, pp. 2079-
2082.
[6] Shen, H., Yang, T., and Ma, L., “Synthesis and structure analysis of kinematic structures of
6-DOF parallel robotic mechanisms”, Mechanism and Machine Theory, vol. 40, pp. 1164-
1180, 2005.
[7] Tsai, L. W., “Robot Analysis – The Mechanics of Serial and Parallel Manipulators”, John
Wiley & Sons, 1999.
[8] Hervé, J. M., “The Lie group of rigid body displacements, a fundamental tool for
mechanism design”, Mechanism and Machine Theory, vol. 34, pp. 719-730, 1999.
[9] Hervé, J. M., “The planar-spherical kinematic bond: Implementation in parallel
mechanisms”, http://www.parallemic.org/Reviews/Review013p.html.
[10] Lee, C. C. and Hervé, J. M., “Type synthesis of primitive Schoenflies-motion generators”,
Mechanism and Machine Theory, vol. 44, pp. 1980-1997, 2009.
[11] Olea, G., Plitea, N., and Takamusa, K., “Kinematical analysis and simulation of a new
parallel mechanism for robotics application”, in ARK Piran, Piran, 25-29 June 2000, pp.
403-410.
ACKNOWLEDGEMENT
László BAKÓ
Sándor Tihamér BRASSAI
József DOMOKOS
Katalin GYÖRGY
Piroska HALLER
Tünde JÁNOSI-RANCZ
Lajos KENÉZ
Nimród KUTASI
László Ferenc MÁRTON
Márton MÁTÉ
István PAPP
Sándor PAPP
László SZABÓ
László SZILÁGYI
Tamás VAJDA
Acta Universitatis Sapientiae
The scientific journal of Sapientia University publishes original papers and surveys
in several areas of sciences written in English.
Information about each series can be found at
http://www.acta.sapientia.ro.
Editor-in-Chief
Antal BEGE
abege@ms.sapientia.ro
Main Editorial Board
ISSN 2065-5916
http://www.acta.sapientia.ro
Information for authors
Acta Universitatis Sapientiae, Electrical and Mechanical Engineering
publishes only original papers and surveys in various fields of Electrical and
Mechanical Engineering. All papers are peer-reviewed.
Papers published in current and previous volumes can be found in Portable Document
Format (PDF) form at the address: http://www.acta.sapientia.ro.
The paper must be submitted both in MS Word and in PDF format. The
submitted PDF document is used as reference. The camera-ready journal is prepared
in PDF format by the editors. In order to reduce subsequent changes of aspect to a
minimum, accurate formatting is required. The paper should be prepared on A4
paper (210 × 297 mm) and must contain an abstract of 200-250 words.
The language of the journal is English. The paper must be prepared in single-column
format, not exceeding 12 pages including figures, tables and references.
One issue is offered to each author free of charge. No reprints are available.
Publication supported by