
Hindawi

International Journal of Optics


Volume 2018, Article ID 3151209, 8 pages
https://doi.org/10.1155/2018/3151209

Research Article
Depth Analysis of Greyscale Integral Images Using Continuous
Multiview Wavelet Transform

Vladimir Saveljev¹ and Irina Palchikova²,³

¹NEMO Lab., Department of Physics, Myongji University, Yongin-si, Gyeonggi-do, 17058, Republic of Korea
²Technological Design Institute of Scientific Instrument Engineering, Siberian Branch of the Russian Academy of Sciences (TDISIE SB RAS), ul. Russkaya, 41, Novosibirsk, 630058, Russia
³Department of Physics, Novosibirsk State University (NSU), ul. Pirogova, 2, Novosibirsk, 630090, Russia

Correspondence should be addressed to Vladimir Saveljev; saveljev.vv@gmail.com

Received 10 October 2018; Accepted 28 October 2018; Published 2 December 2018

Guest Editor: Xiaowei Li

Copyright © 2018 Vladimir Saveljev and Irina Palchikova. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.

The wavelet analysis of integral images can be used to extract the depth of photographed/synthesized 3D objects. The result of the analysis, however, depends on the colour/texture of the object and can therefore be ambiguous. In this paper, we propose to normalize the image before processing in order to avoid this ambiguity and to extract the depth without regard to the colour/texture. The proposed technique is verified on multiple integral/plenoptic images and can be applied to multiview and light-field displays as well.

1. Introduction

There are many technologies in 3D imaging [1–3]. In particular, wavelets are used in 3D imaging, for instance, in multiview video compression [4], in image coding [5], in image fusion [6], and as a quality metric [7]. The disparity of stereoscopic images can be estimated [8], the shape of photographed/synthesized objects can be analysed, and the depth can be extracted using the wavelet analysis of integral images [9–11]. In this paper, we propose a technique to eliminate the texture effect. Based on the similarity between the 3D images, where the 3D content is represented in a single image plane consisting of logical image cells [12], our results can be applied to integral [13–15], multiview [16–18], and plenoptic images, as well as light-field displays [19–21].

The result of the wavelet analysis of 3D images, however, depends on the colour/texture of the surface of the object. This effect was noticed before: most images in [9–11] were binary BW images, and most results were presented in a qualitative visual form. This undesirable effect must be reduced, but until now a solution has been unknown.

The intensity at any point of the image in the image plane is proportional to the brightness of the corresponding point of the object. The intensity of all separated parts of a voxel pattern [22] is equal to the brightness of the corresponding point of the object. The voxel patterns precede the wavelets; thus, this important property is kept in the wavelets. Therefore, the wavelet coefficients depend on the colour, and the result of the wavelet analysis of a multiview image is proportional to the brightness of voxels (or pieces of surface in a texture model).

Define the central view (CV) of an integral image as an image where the central pixels of all image cells of the integral image are assembled in accordance with the location of the cells; i.e., the centre pixel of the top left cell goes to the top left corner of the CV, the centre pixel of the top right cell goes to the top right corner of the CV, etc. Such an image could be seen by a hypothetical (nonexistent) camera located at the centre of the lens array.

The CV can be calculated by applying the known interlacing technique (see, e.g., [23]) along both dimensions of the integral image. The CV can be calculated for binary, greyscale, or colour images; see Figure 3. In our paper, the CV is solely


Figure 1: (a) Binary BW integral image of digits 800 x 800 pixels (credits for the original colour image to Prof. B. Lee, SNU); (b) the central
view (CV) of this image (as seen through the camera located behind the central microlens of the lens array) 40 x 40 pixels.
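The assembly of the central view can be sketched in a few lines. This is our illustrative sketch under stated assumptions, not the authors' code: we assume a square grid of identical square cells, with the cell size inferred from the reported dimensions (an 800 x 800 image with a 40 x 40 CV implies 20-pixel cells).

```python
import numpy as np

def central_view(integral_image, cell_size):
    """Assemble the central view (CV): take the centre pixel of every
    cell_size x cell_size image cell and place it at the corresponding
    position of the CV."""
    h, w = integral_image.shape[:2]
    rows, cols = h // cell_size, w // cell_size
    c = cell_size // 2                     # centre offset inside a cell
    return integral_image[c::cell_size, c::cell_size][:rows, :cols]

# An 800 x 800 integral image with 20 x 20 pixel cells yields a 40 x 40 CV,
# matching the dimensions of the digits image reported in the paper.
img = np.zeros((800, 800))
img[10, 10] = 5.0                          # centre pixel of the top left cell
cv = central_view(img, 20)
```

The same strided indexing, shifted away from the cell centre, would produce the other (off-axis) views of the interlaced image.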


Figure 2: Wavelet coefficients of the binary BW image of Figure 1: (a) the whole image; (b) each digit separately. (The depth plane is an integer number.)

used as a simple graphical picture of all objects of a 3D image, because in the original integral image, the objects may look unclear (nonsharp, out-of-focus, blurred, etc.); compare Figures 1(a) and 1(b).

What is important is that it is unnecessary to calculate the CV for the wavelet transform; not a single operation with the wavelets needs the CV. In this context, the central view is nothing but a descriptive illustration for the journal paper.

Later in this paper we will name an integral/plenoptic image by its CV (as "in Figure 1(b)"), but such a reference will actually mean the integral image (i.e., "Figure 1(a)") which corresponds to the named CV.

Previously, the colour dependence remained behind the scenes, and the depth was mixed with the colours. In this paper, knowing an answer, we try to open this door slightly. To restore the depth more correctly, we propose to use the normalized image. Firstly, we would like to illustrate the dependence on colours in the wavelet analysis of integral images.

2. Dependence of Wavelet Coefficients on Colours

Consider the colours of the three digits in the 123-image. If all colours are identical (as in the BW image in Figure 1), then the wavelet coefficients are the same for every digit. Note that it is a repainted colour image, originally provided by Prof. B. Lee.

The wavelet analysis of the binary BW image of Figure 1 shows an almost uniform shape (a flat-top curve) for the depth planes between -6 and +6, where the digits of this 3D image are presumably located in space; see Figure 2(a). The wavelet coefficients for each digit processed separately are shown in Figure 2(b), where all three maxima are close to each other. Based on Figure 2(b), one may conclude that the digit 1 is located between the +1st and +2nd planes, digit 2 in the 0th, and digit 3 between the -2nd and -3rd planes. Outside of this region (|depth| > 3), the wavelet coefficients decay monotonically. Therefore, later in the related Figures 4 and 5, we will show the wavelet coefficients within the depth region [-2, +2] only, where the expected result is a flat-top horizontal line.

Then, if the shades of grey of the digits or their colours are not the same as in Figure 3, the expected results of the wavelet analysis would be different for every digit.

The results of the wavelet analysis (wavelet coefficients) of the greyscale images, Figures 3(a) and 3(b), are shown in Figure 4(a). The wavelet coefficients of the colour digits, Figures 3(c) and 3(d), are shown in Figure 4(b).

There is an essential difference between the wavelet coefficients of digits of various colours. In all cases, instead of a flat top and decay (as in Figure 2(a) for the binary image),


Figure 3: Various grey levels and colours of digits (CVs).


Figure 4: Wavelet coefficients of the images from Figure 3: (a) greyscale, (b) colour.


Figure 5: (a) Normalized image (CV). (b) Wavelet coefficients of normalized image.

the wavelet coefficients of the grey images may rise, fall, or have a maximum or minimum in the middle; see Figure 4. These graphs can be interpreted ambiguously: either the digits in the same plane have different colours, or the digits of the same colour are located at different distances. A variety of intermediate interpretations is also possible.

This confirms a strong dependence of the results on the colours of the voxels (texture); it is an undesirable side effect in the depth (shape) extraction. Below, we describe how to eliminate it.

3. Materials and Methods

In our examples, the source images are integral/multiview/plenoptic images with a square grid of image cells (a cell is an area under a lenticular lens). The images were taken from different independent sources, either photographs or synthesized (computer-generated) images.

The multiview wavelets and the algorithm of the continuous multiview wavelet transform are exactly the same as presented in our previous papers about the multiview wavelets [10, 11].

In order to avoid the undesirable effect of texture and to restore the spatial structure without regard to the colours, we propose to process the so-called normalized image [24]. A normalized image is typically used in order to reduce nonuniform illumination in a local neighbourhood. The normalized image can be built by the algorithm [25]. The normalized image corresponding to the original image of digits


Figure 6: Source and normalized images of the house (CVs). The original colour image “house crop” is available “for free” at the website by
T. Georgiev [26].
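The local normalization of [25] (subtract a local mean, divide by a local standard deviation, with two Gaussian widths σ1 and σ2) can be sketched as follows. This is our own numpy approximation of that ImageJ plugin, not the authors' code, and the small regularizing epsilon is our addition to guard flat regions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with numpy only."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    # pad with reflection, filter rows then columns, crop the padding
    padded = np.pad(img, radius, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)
    return out[radius:-radius, radius:-radius]

def local_normalize(img, sigma1=18.0, sigma2=24.0):
    """Local normalization in the spirit of Sage's algorithm [25]:
    subtract a local mean (Gaussian of width sigma1) and divide by a
    local standard deviation (Gaussian of width sigma2)."""
    img = img.astype(float)
    centred = img - gaussian_blur(img, sigma1)
    variance = gaussian_blur(centred**2, sigma2)
    return centred / np.sqrt(variance + 1e-12)   # epsilon guards flat areas

# Small demonstration on a random image (sigmas reduced to fit 64 x 64).
rng = np.random.default_rng(0)
demo = local_normalize(rng.normal(size=(64, 64)), 6.0, 8.0)
```

The default sigmas above mirror the values σ1 = 18, σ2 = 24 reported later in the paper; the output has approximately zero mean and unit variance in every local neighbourhood.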

Figure 1 is shown in Figure 5(a). The result of the wavelet analysis of this normalized image is shown in Figure 5(b).

Despite insignificant imperfections (such as a flat top that is not 100% horizontal), the graphs in Figures 2(a) and 5(b) are more similar to each other than to any graph in Figure 4. This means that the normalized image predominantly contains the information about the 3D structure (depth) rather than about the colours (texture). For instance, the RMS difference of the wavelet coefficients between the processed RGB and BW images is 1.2 A.U. (normalized RMS error 60%); between the grey and BW images it is 0.7 A.U. (48%); however, between the BW and normalized images it is 2.5 – 4.2 times less (0.3 A.U., 19%). This is not a complete elimination of the undesirable effect, but its essential reduction.

Before processing, the colour images were transformed into greyscale images, but this is not necessary. Full-colour images can be processed as well, by processing the R, G, B colour components separately.

The dimensions of the images are as follows: the image of digits is 800 x 800 pixels (CV and normalized image are 40 x 40 pixels), the rabbit 1350 x 1350 (CV and normalized 45 x 45), the books 4031 x 4031 pixels (CV and normalized 139 x 139), and the house 4575 x 4575 pixels (CV and normalized 75 x 90).

4. Results

To illustrate the proposed technique, we applied it to plenoptic/integral images from various sources. The normalization is followed by the wavelet transform. In the examples, we will compare the results (i.e., the wavelet coefficients for the source and normalized images) by the depth planes and along the rows (horizontal lines) in the array of the coefficients.

The original and normalized images of the house are shown in Figure 6 (recall, this is a CV).

Consider, for example, the depth plane 0 and the row 70 of this image. The boundary line between the path and the lawn is at the same time the line of the change of colour of the texture. Because of that, the depth of this line is concealed (hidden) in the original image; see the wavelet coefficients and the graph along the horizontal row in Figures 7(a) and 7(b). What is important is that in the normalized image, this line can be clearly seen as a separate pulse in Figures 7(c) and 7(d).

N.B. The first image of each pair in Figure 7 displays the modulus of the wavelet coefficients (black means maximum, white means zero), while the second graph is the full profile along the selected row. The same layout will be used later in Figures 9, 11, and 12.

Also, note the difference in the average (mean) level on the body of the car between the columns 45 and 65, as indicated by the dashed line in Figures 7(b) and 7(d). The influence of the texture colour of the body of the car is clearly reduced in Figure 7(d) down to the average level of that row.

The source and normalized images of the books are shown in Figure 8.

In this example, we consider the depth plane -1, the row 95. Figures 9(c) and 9(d) clearly show a recognized 3D edge (the spine of the book) as a separate pulse, rather than a texture-induced effect (a step pulse) as in Figures 9(a) and 9(b).

The influence of the texture is reduced in this image too. The average level at the cover of the book is almost the same along the row in the processed normalized image (shown in Figures 9(b) and 9(d) by the dashed line).

N.B. The letters "EG" on the cover of the book in Figure 9(a) are not a restored 3D structure, but rather a texture of the surface.

The source and normalized images of the rabbit are shown in Figure 10. In the 3D analysis, two planes will be considered.

(1) Plane 2, row 13. The eye of the rabbit becomes clearly recognized in the normalized image, while it is completely invisible in the unnormalized one; see Figure 11.

(2) Plane 5, row 17. The same is valid for the nose. The nose is clearly recognized in the processed normalized image, while it is hidden in the source (unnormalized) image; see Figure 12.

These two examples also demonstrate that the influence of the texture is reduced in the normalized images, so that some previously unrecognized features appear.

5. Discussion

We processed the greyscale images. Before processing, the colour images (if any) were transformed into the grey-scaled


Figure 7: Wavelet coefficients of source image (picture and graph on the top) and normalized image (on the bottom): modulus of the wavelet
coefficients in (a), (c); the wavelet coefficients along the row 70 in (b), (d).
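The text quotes RMS differences between coefficient arrays in A.U. and as normalized percentages. The paper does not spell out its exact formula; the following is a common definition that we assume here for illustration, not the authors' code:

```python
import numpy as np

def rms_difference(a, b):
    """Root-mean-square difference between two coefficient arrays (A.U.)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def normalized_rms_error(a, reference):
    """RMS difference expressed as a fraction of the RMS level of the reference."""
    reference = np.asarray(reference, float)
    return rms_difference(a, reference) / float(np.sqrt(np.mean(reference**2)))

# Toy example: one of four coefficients differs from the reference by 1 A.U.
ref = np.array([1.0, 1.0, 1.0, 1.0])
probe = np.array([1.0, 1.0, 1.0, 2.0])
err = rms_difference(probe, ref)        # 0.5 A.U.
rel = normalized_rms_error(probe, ref)  # 0.5, i.e., 50%
```

Under this definition, a pair of identical coefficient arrays gives 0 A.U. (0%), in line with the qualitative reading of the comparisons in the text.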


Figure 8: Source and normalized images of books (CVs). The original colour image “input” is available “for free” at the website of T. Georgiev
[26].

images, but it is not necessary. The full-colour images can be processed as well, by processing the RGB colour components. Also, we used the BW normalized image; however, in general, each colour component can be normalized individually, and a colour normalized image could be obtained.

The numbering of the rows and columns in the array of the wavelet coefficients is slightly different from that of the rows and columns in the CV, because of the different sizes. (The size of the CV is fixed, but the array size at each depth plane varies, because we did not make any assumptions about the behaviour of the image beyond the sides of the original image.) The difference is less than the number of the current depth plane, but this small difference is out of the scope of the current paper.

The 3D images used in this paper were obtained by independent authors either from a plenoptic camera (Figures 6 and 8) or by means of computer simulation of integral imaging (Figures 1 and 10). In either case, a lens array with a square grid of microlenses, or its computational equivalent, was used. High-quality lens arrays are known for their very uniform structure (a small deviation of the lens pitch across the array). The proposed image processing procedure can be applied to 3D images built on a hexagonal grid of lenses (with properly redesigned wavelets).

Instead of a lens array, the 3D images can also be obtained from a camera array, as described in [27] and in the references in [3]; on the other hand, the layout of cameras in a camera array might be less uniform than that of a lens array. Moreover, 3D imaging with sensors at random locations has been demonstrated [28]. Therefore, we hope that wavelet processing can in principle be applied to 3D images from camera arrays; however, the wavelets would have to be radically modified in this case.


Figure 9: Wavelet coefficients (moduli on the left, graph along the row 95 on the right) of the source (top) and normalized (bottom) images. The layout is the same as in Figure 7.


Figure 10: Source and normalized images of rabbit (CVs). The original colour image was provided by Prof. Lee by a personal request.

In the normalization of most images (only a few of the many processed images are presented in this paper), we used the default values σ1 = 18, σ2 = 24; however, in a few cases, these values were reduced to σ1 = 4.5, σ2 = 6.

6. Conclusions

The influence of colours (texture) is essentially reduced by means of the normalized image used instead of the source full-colour or greyscale image. The technique is confirmed by processing integral/plenoptic images from independent sources. A numerical comparison became possible between the planes and within each plane (because the wavelet coefficients are normalized by definition, and because the integral image itself is used in the normalized form). Integral, multiview, and plenoptic (light-field) images with a square grid of cells can be processed. Colour images can be processed as well. The proposed technique can be efficiently used in 3D imaging for the depth extraction and the shape reconstruction without regard to the colour/texture.

Data Availability

The plenoptic/integral image data used to support the findings of this study were obtained from two sources. Namely, two of the four images we used in the examples are available for free on the website "http://www.tgeorgiev.net/" by Dr. T. Georgiev [26]; the other two images were kindly supplied by Prof. B. Lee upon our personal request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


Figure 11: Wavelet coefficients in the 2nd plane for the source and normalized images, row 13 on the right. The layout is the same as in Figure 7.


Figure 12: Wavelet coefficients in the 5th plane for the source and normalized images, row 17 on the right. The layout is the same as in Figure 7.

Acknowledgments

This work was partially supported for one author (Prof. I. Palchikova) by the Russian Foundation for Basic Research and by the Ministry of Education, Science, and Innovative Policy of the Novosibirsk Region within the framework of the research [project No. 17-47-540269]. We greatly appreciate Prof. B. Lee for the images provided to us personally (the digits and the rabbit). Also, we are thankful to Dr. T. Georgiev for his wonderful and very useful website (research until 2014), particularly for the images available at that site for free (the input and the house crop).

References

[1] J. Hong, Y. Kim, H. Choi et al., "Three-dimensional display technologies of recent interest: principles, status, and issues," Applied Optics, vol. 50, no. 34, pp. H87–H115, 2011.
[2] J.-Y. Son, B. Javidi, and K.-D. Kwack, "Methods for displaying three-dimensional images," Proceedings of the IEEE, vol. 94, no. 3, pp. 502–522, 2006.
[3] M. Martínez-Corral and B. Javidi, "Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems," Advances in Optics and Photonics, vol. 10, no. 3, p. 512, 2018.
[4] M. Flierl and B. Girod, "Multiview video compression," IEEE Signal Processing Magazine, vol. 24, no. 6, pp. 66–76, 2007.
[5] W. Yang, Y. Lu, F. Wu, J. Cai, K. N. Ngan, and S. Li, "4-D wavelet-based multiview video coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, no. 11, pp. 1385–1395, 2006.
[6] J. L. Rubio-Guivernau, V. Gurchenkov, M. A. Luengo-Oroz et al., "Wavelet-based image fusion in multi-view three-dimensional microscopy," Bioinformatics, vol. 28, no. 2, pp. 238–245, 2012.
[7] P. B. Zadeh and C. V. Serdean, "Stereo video disparity estimation using multi-wavelets," in Proceedings of the Seventh International Conference on Digital Telecommunications (ICDT), pp. 50–54, 2012.
[8] E. Bosc, F. Battisti, M. Carli, and P. Le Callet, "A wavelet-based image quality metric for the assessment of 3D synthesized views," in Proceedings of the 24th IS and T/SPIE Stereoscopic Displays and Applications Conference, SD and A 2013, vol. 8648, USA, February 2013.
[9] V. Saveljev, "Wavelets and continuous wavelet transform for autostereoscopic multiview images," Journal of Electrical Engineering, vol. 4, no. 1, pp. 19–23, 2016.
[10] V. Saveljev and I. Palchikova, "Analysis of autostereoscopic three-dimensional images using multiview wavelets," Applied Optics, vol. 55, no. 23, pp. 6275–6284, 2016.
[11] V. Saveljev and I. Palchikova, "Analytical model of multiview autostereoscopic 3D display with a barrier or a lenticular plate," Journal of Information Display, vol. 19, no. 2, pp. 99–110, 2018.
[12] V. V. Saveljev and S.-J. Shin, "Layouts and cells in integral photography and point light source model," Journal of the Optical Society of Korea, vol. 13, no. 1, pp. 131–138, 2009.
[13] S.-G. Park, J. Yeom, Y. Jeong, N. Chen, J.-Y. Hong, and B. Lee, "Recent issues on integral imaging and its applications," Journal of Information Display, vol. 15, no. 1, pp. 37–46, 2014.
[14] X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, "Advances in three-dimensional integral imaging: sensing, display, and applications," Applied Optics, vol. 52, no. 4, pp. 546–560, 2013.
[15] J.-H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Applied Optics, vol. 48, no. 34, pp. H77–H94, 2009.
[16] N. A. Dodgson, "Analysis of the viewing zone of multi-view autostereoscopic displays," in Proceedings of the Stereoscopic Displays and Virtual Reality Systems IX, vol. 4660, pp. 254–265, USA, January 2002.
[17] B.-R. Lee, J.-J. Hwang, and J.-Y. Son, "Characteristics of composite images in multiview imaging and integral photography," Applied Optics, vol. 51, no. 21, pp. 5236–5243, 2012.
[18] Y. Takaki, K. Tanaka, and J. Nakamura, "Super multi-view display with a lower resolution flat-panel display," Optics Express, vol. 19, no. 5, pp. 4129–4139, 2011.
[19] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Stanford Tech Report CTSR 2005-02, April 2005.
[20] T. Georgiev and A. Lumsdaine, "Focused plenoptic camera and rendering," Journal of Electronic Imaging, vol. 19, no. 2, 2010.
[21] J. Kim, S. Moon, Y. Jeong, C. Jang, Y. Kim, and B. Lee, "Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display," Journal of Biomedical Optics, vol. 23, no. 6, pp. 1–11, 2018.
[22] M.-C. Park, S. Ju Park, V. V. Saveljev, and S. Hwan Kim, "Synthesizing 3-D images with voxels," in Three-Dimensional Imaging, Visualization, and Display, pp. 207–225, Springer, 2009.
[23] P. V. Johnson, J. Kim, and M. S. Banks, "Stereoscopic 3D display technique using spatiotemporal interlacing has improved spatial and temporal properties," Optics Express, vol. 23, no. 7, pp. 9252–9275, 2015.
[24] D. Sage and M. Unser, "Teaching image-processing programming in Java," IEEE Signal Processing Magazine, vol. 20, no. 6, pp. 43–52, 2003.
[25] D. Sage, "Local Normalization," web page, 07 Sep. 2018, http://bigwww.epfl.ch/sage/soft/localnormalization/.
[26] T. Georgiev, "Research in the area of light fields until 2014," web page, 13 Sep. 2018, http://tgeorgiev.net/.
[27] Y. Xing, Z.-L. Xiong, M. Zhao, and Q.-H. Wang, "Real-time integral imaging pickup system using camera array," in Proceedings of the Advances in Display Technologies VIII, vol. 10556, p. 12, San Francisco, USA, January 2018.
[28] M. DaneshPanah, B. Javidi, and E. A. Watson, "Three dimensional imaging with randomly distributed sensors," Optics Express, vol. 16, no. 9, pp. 6368–6377, 2008.