Proceedings of the Canadian Society of Civil Engineering Annual Conference 2022
Volume 4
Lecture Notes in Civil Engineering
Volume 367
Series Editors
Marco di Prisco, Politecnico di Milano, Milano, Italy
Sheng-Hong Chen, School of Water Resources and Hydropower Engineering, Wuhan
University, Wuhan, China
Ioannis Vayas, Institute of Steel Structures, National Technical University of Athens,
Athens, Greece
Sanjay Kumar Shukla, School of Engineering, Edith Cowan University, Joondalup,
WA, Australia
Anuj Sharma, Iowa State University, Ames, IA, USA
Nagesh Kumar, Department of Civil Engineering, Indian Institute of Science
Bangalore, Bengaluru, Karnataka, India
Chien Ming Wang, School of Civil Engineering, The University of Queensland,
Brisbane, QLD, Australia
Lecture Notes in Civil Engineering (LNCE) publishes the latest developments in
Civil Engineering—quickly, informally and in top quality. Though original research
reported in proceedings and post-proceedings represents the core of LNCE, edited
volumes of exceptionally high quality and interest may also be considered for publi-
cation. Volumes published in LNCE embrace all aspects and subfields of, as well as
new challenges in, Civil Engineering. Topics in the series include:
• Construction and Structural Mechanics
• Building Materials
• Concrete, Steel and Timber Structures
• Geotechnical Engineering
• Earthquake Engineering
• Coastal Engineering
• Ocean and Offshore Engineering; Ships and Floating Structures
• Hydraulics, Hydrology and Water Resources Engineering
• Environmental Engineering and Sustainability
• Structural Health and Monitoring
• Surveying and Geographical Information Systems
• Indoor Environments
• Transportation and Traffic
• Risk Analysis
• Safety and Security
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

Volume 4
Structural
A Solution for Stacking Multiple Precast Housing Modules During Shipment on a Barge
Samira Rizaee, Alan Lloyd, Zhen Lei, and Brandon Searle

Effect of Concrete Masonry Unit Web Thickness on the Compressive Strength of Concentrically and Eccentrically Loaded Hollow Masonry Prisms
Olga V. Savkina and Lisa R. Feldman

A Wave-Propagation-Based Approach to Estimate the Depth of Bending Cracks in Steel-Fiber Reinforced Concrete
Ahmet Serhan Kırlangıç

Effect of Fastener Stiffness on Buckling Behaviour of Wooden Built-Up Beams
Robabeh Robatmili, Yang Du, and Ghasan Doudak

Application of Digital Image Correlation (DIC) Method for Concrete Masonry Prism Testing
Nitesh Chhetri and Lisa R. Feldman

Moment Redistribution Limits for Beams with High Strength Steel Reinforcement
Sohaib Akbar, F. Michael Bartlett, and Maged A. Youssef

Investigating Release-Connection Between Post-tensioned Concrete Slab and Wall
Mohammad Jonaidi, Mehrdad Sasani, Simin Nasseri, and George Williams

Experimental Investigation of Single-Story CLT Shear Walls
Md Shahnewaz, Carla Dickof, and Thomas Tannert
General
Lifting and Rigging Study for Precast Volumetric Modular Construction in Nunavut
Samira Rizaee, Alan Lloyd, Zhen Lei, and Brandon Searle

The Alcan Pioneer Road and Discovery of Permafrost
J. David Rogers

Prediction of Rework on a Construction Site Utilizing ANN Integrated into a BIM Environment
Raghda Attia, Khaled Nassar, and Elkhayam Dorra
Cold-Region
Water Model Development for Freeze-Protected Water System in Iqaluit, Nunavut
Marilyn Fanjoy, Ken Johnson, Marc Lafleur, Eric Bell, and Simon Doiron

Hydrology and Water Balance Study for the Canadian High Arctic Community of Grise Fiord
Chris Keung, Ken Johnson, Joel Gretton, and Aurangzeb Alamgir
Materials
Assessment of Key Imperatives for Enhancing Precast Adoptability in Developing Countries
Mostafa Abdelatty

Alkali-Activated Concrete Workability and Effect of Various Admixtures: A Review
Nourhan ELsayed and Ahmed Soliman

Inconsistencies and False Assumptions Related to the Determination of Design Values for FRP Systems
Scott F. Arnold and Reymundo Ortiz

Design and Experimentation of Pollution Absorbing Blocks (PABs)
Abdelrahman ElDokhmasey, Loay Hassan, Mira Nessim, Omar Rabie, Mostafa Abdel Aziz, Salah El Gamal, Farida Said, and Mohamed AbouZeid

Piezo Monitoring of Concrete—A Review Paper
Manisha Madipalli, Sakshi Aneja, Ashutosh Sharma, Rishi Gupta, and Caterina Valeo
Environmental
Estimating Lake Evaporation for the South Saskatchewan River Basin of Alberta
Zahidul Islam, Shoma Tanzeeba, Carmen de la Chevrotière, and Prabin Rokaya
About the Editors
Dr. Rishi Gupta is a Professor in the Department of Civil Engineering at the Univer-
sity of Victoria. He leads the Facility for Innovative Materials and Infrastructure
Monitoring (FIMIM) at UVic. He received both a master's and a Ph.D. in Civil Engi-
neering from the University of British Columbia. His current research is focused
on studying smart self-healing cement-based composites containing supplementary
cementitious materials and fiber reinforcement. His areas of interest include devel-
opment of sustainable construction technologies, structural health monitoring, and
non-destructive evaluation of infrastructure. He has more than 20 years of combined
academic and industry experience. His industry experience includes working as the
Director of Research of Octaform Systems Inc. in Vancouver.
Rishi is a Fellow of Engineers Canada, the Canadian Society of Senior Engineers,
and a past chair of the EGBC’s Burnaby/New West branch. He is the past Chair
of the international affairs committee of the Canadian Society of Civil Engineering
(CSCE). He is a long standing member of the American Concrete Institute and is
also a voting member of several subcommittees of ASTM C 09.
Dr. Min Sun is an Associate Professor and the Director of Undergraduate Program in
the Civil Engineering Department at the University of Victoria (UVic). His research
interests lie primarily in the field of structural engineering. His past research activity
has included the development of design rules for hollow structural section connec-
tions and fillet welds, which have been incorporated into Canadian and American
national steel design standards. From 2018 to 2020, he was the Western Region VP
of the Canadian Society for Civil Engineering (CSCE). In 2019, he received the
Faculty Award for Excellence in Teaching at UVic. Before joining UVic, he worked
as a structural designer at Read Jones Christoffersen.
Dr. Svetlana Brzev is currently an Adjunct Professor at the Department of Civil Engi-
neering, University of British Columbia. She has more than 35 years of consulting
and research experience related to structural and seismic design and retrofitting of
masonry structures in Canada and several other countries. Her research has been
focused on seismic behaviour and practical design and construction issues related
Dr. M. Shahria Alam is a Professor of Civil Engineering and the Tier 1 Principal’s
Research Chair in Resilient and Green Infrastructure in the School of Engineering
at The University of British Columbia (UBC)’s Okanagan campus. He is serving
as the founding Director of the Green Construction Research and Training Center
(GCRTC) at UBC. Dr. Alam is the Vice President (Technical Program) of the Cana-
dian Society for Civil Engineering (CSCE) and Chair of the Engineering Mechanics
and Materials Division of CSCE. He received his Ph.D. in Civil/Structural Engi-
neering from Western University in 2008. His research interests include smart and
recycled materials and their structural engineering applications. He has published
more than 350 peer-reviewed articles in these areas. He is the recipient of more than
forty national and international awards including three best paper awards. Currently,
Dr. Alam is serving as an Associate Editor of ASCE’s Journal of Bridge Engineering
and Journal of Materials in Civil Engineering.
Dr. Kelvin Tsun Wai Ng's major fields of interest are sustainable waste manage-
ment systems, disposal facility design, and data-driven waste policy. Kelvin has over
120 publications, and his projects have been supported by NSERC, Mitacs, CFI,
Innovation Saskatchewan, Ministry of Environment, and other national and provin-
cial sponsors. Kelvin has received both research and teaching awards, including
six RCE Education for Sustainability Recognition Awards (2018-2022), Elsevier’s
Top Reviewer Award—Waste Management (2021), Saskatchewan Innovation Chal-
lenge (2019), McMaster Engineering Top 150 Alumni for Canada’s Sesquicenten-
nial (2017), and the University of Regina President’s Award for Teaching Excel-
lence (2017). Provincially, he has been appointed by the Ministry of Environment in
Saskatchewan to serve on the Solid Waste Management Advisory Committee. Kelvin
has been serving on the Association of Professional Engineers and Geoscientists of
Saskatchewan (APEGS) Award Committee since 2017. Currently, he is chairing
the APEGS Awards Committee. Kelvin is the Environmental Division Chair for
Canadian Society for Civil Engineering and has organized and chaired/co-chaired a
number of conferences, including the CSCE General Conference in 2015, and four
CSCE Environmental Specialty Conferences in 2019, 2020, 2021, 2022. Currently,
he is organizing the 2023 CSCE Environmental Specialty Conference at Moncton.
Dr. Ashraf El Damatty is Professor and Chair of the Department of Civil and Environ-
mental Engineering at Western University. He is a Fellow of the Canadian Academy of
Engineering, the Engineering Institute of Canada, and the Canadian Society of Civil
Engineering (CSCE). He is a Research Director at the WindEEE Research Insti-
tute and Editor-in-Chief of the Journal of Wind and Structures. He holds honorary
Professorship titles at four international universities. He obtained B.Sc. and M.Sc.
from Cairo University, Egypt, Ph.D. from McMaster University, and MBA from
University College London, UK. He is the founder of the CSCE Steel Structures
Committee and served for five years as the Chair of the CSCE Structures Divi-
sion. He has written more than 250 publications, supervised more than 60 graduate
students and has been invited as keynote speaker in 14 countries. He received several
awards including the Alan Yorkdale Award by ASTM, Best Paper Award at the
Canadian Conference on Effective Design, Honourable Mention in 2014 Casimir
Gzowski Medal, 2015 CSCE Whitman Wright Award, 2016 CSCE Horst Leipholz
Medal, Western University Faculty Scholar Award, 2018 Professional Engineers of
Ontario Medal of Research and Development, 2021 Pratley Award for Best Paper
on Bridges, and the 2021 Western Engineering Award for Excellence in Research.
Clark Lim has over three decades of experience in public, private, and academic
sectors, specializing in analytical methods and information systems for transportation
applications. As a consultant, he advises senior officials on policy, technology, and
governance matters, where he utilizes an evidence-based and technically progressive
approach to establish sound policy frameworks. In the mid-1990s, he was part of the
team that established TransLink, the Greater Vancouver Transportation Authority,
where he was also the Project Manager of the Evergreen Rapid Transit Line planning
and consultation process. At UBC, Clark is currently an Adjunct Professor in the
Department of Civil Engineering where he has taught transportation engineering
and planning to senior undergraduate and graduate students since 2006. His previous
research at UBC focused on intelligent transportation systems for freight, and the
impact quantification of the 2010 Winter Olympic Games. Currently he is researching
the effects of hybrid working on transportation policies, the impacts of ride-hailing
trips through big data methods, and developing tools to measure sustainability and
diversity-equity-inclusion indices for corporate boards.
Structural
A Solution for Stacking Multiple Precast Housing Modules During Shipment on a Barge
1 Introduction
When modules are shipped on a barge, they need to be stacked in the opposite order
of their installation so that the first-story modules can be picked up and installed
first and the upper-story modules installed afterwards. Therefore, when stacking the
modules, the top-story module goes to the bottom. In this project, the roofs of the
top-story modules are sloped; therefore,
in order to place one module on top of them, a temporary stacking structure (TSS)
is required to be placed on top of the roof to make it level. In this paper the design
process for the TSS used in this project is discussed. SAP2000 v21.2.0 is used for
the structural analysis and design, and Tekla Tedds 2021 is used for the connection
design.
2 Model Geometry
The housing modules are 4500 × 9000 mm in plan. The roof height is 4067 mm on the
shorter side and 4500 mm on the taller side. Module roofs are sloped in one of two
directions: the roofs of modules A, B, and D are sloped along the longer direction
and that of module C along the shorter direction, as can be seen in Fig. 1. To
avoid two designs for the different roof slopes, only the modules that are sloped along
their longer dimension are suggested to be placed underneath the stacking structure.
Therefore, only one type of stacking structure, as presented in
Fig. 2, is designed in this paper. Equally spaced bays of 2.25 m are considered in
both the X and Y directions, resulting in four bays in the X-direction and two bays in
the Y-direction. The depth of the stacking structure is 933 mm for the
lower roof side and 500 mm for the taller roof side. In the X–Y plane at the top and
in the lower plane, diagonal members connect different bays to create lateral bracing
and stability. The height of the stacking structure resulted in four different levels in
the Z direction, 0, 0.217, 0.432, and 0.933 m.
3 Load Calculations
The weight of the modules and the wind load while in transit have to be transferred securely
to the stacking structure. Subsequently, the stacking structure has to transfer the load
to the sloped-roof module underneath. Therefore, two load cases, dead load and wind
load, are considered. To facilitate load application on members, the dead load is divided
into four cases: the roof dead load, Drf; the wall dead load, Dwl; the floor dead load, Dflr;
and the truss self-weight, Dsw. Similarly, the wind load is divided into four cases: positive
and negative X directions, wd1 and wd1-1, and positive and negative Y directions,
wd2 and wd2-1.
The weight of the four exterior walls and the roof of a housing module is evenly
distributed over the beams on the perimeter, and the floor weight of a module is
evenly distributed over the interior beams in a stacking structure. The weight of the
roof is the density, 12.5 kN/m³, multiplied by the thickness of the roof, 0.5 m, which
gives 6.25 kN/m². Considering a two-way load distribution for the roof, the weight is
distributed in a triangular shape to the walls along grids A and E (4500 mm long) and
in a trapezoidal shape to the walls along grids 1 and 3 (9000 mm long), resulting in
a peak linear load of 19.5 kN/m. All exterior walls are 4.5 m tall and 450 mm thick
with a 13.8 kN/m3 density. The dead load from the walls is linearly distributed over
the exterior beams along girders A, E, 1, and 3 and is equal to 28 kN/m. The weight
of the interior partition walls is not applied separately since they are significantly
lighter than the exterior walls; their weight can be accounted for by the load factors
and is offset by the weight of wall openings. The floor load is considered to be
distributed over the beams on grids A, B, C, D, and E. Beams over grids A and E are
exterior beams and receive half the load of the interior beams over grids B, C, and D.
Therefore, the load from the floors is 5.7 kN/m for the exterior beams and 11.3 kN/m
for the interior beams.
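As a quick arithmetic check of the stated dead loads, the short sketch below recomputes the roof area load and the exterior wall line load from the densities and dimensions given above; it is an illustrative calculation only and does not reproduce the two-way roof load distribution or the SAP2000 model.

```python
# Illustrative check of the dead-load values quoted above (not part of the original workflow).
roof_density = 12.5    # kN/m^3
roof_thickness = 0.5   # m
roof_area_load = roof_density * roof_thickness             # kN/m^2
print(f"Roof area load: {roof_area_load:.2f} kN/m^2")       # 6.25 kN/m^2

wall_height = 4.5      # m
wall_thickness = 0.45  # m
wall_density = 13.8    # kN/m^3
wall_line_load = wall_height * wall_thickness * wall_density   # kN/m
print(f"Exterior wall line load: {wall_line_load:.1f} kN/m")    # approximately 28 kN/m
```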
The wind load is calculated according to the National Building Code of
Canada [3]. Although the reference wind velocity pressure at Resolution Island is only
1.23 kPa, a wind velocity pressure of 2 kPa is considered in
the calculation to account for the possibility of barge speed effects on the wind pres-
sure. The other factors considered are a low importance factor of 0.8, an exposure
factor of 0.9, and a topographic factor of 1.0. The product Cp × Cg of the exterior pressure
coefficient and gust effect factor is determined based on the National Building Code of Canada 2015,
Fig. 2 Stacking structure: a 3D view, b X–Y plane view at the top, c Y–Z plane view at grid A, d X–Z plane view at grid 1 (axes are shown in light blue at the bottom left corner)
Division B clause 4.1.7 [3] for each section of the building. Then the load from each
direction is calculated considering the reference height of zb = 2.255 m and length of
either 4.5 or 9 m. Wind loads are applied as point loads on the perimeter top joints of
the stacking structure. The overturning effect of these loads is considered as counter-
acting axial loads on each pair of opposite joints at the top.
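In the NBCC static procedure, the specified external pressure is the product of the velocity pressure and the importance, exposure, topographic, gust, and pressure coefficients. The sketch below composes the factors quoted above into a pressure and a joint point load; the CpCg value and the tributary area are placeholder assumptions, since they depend on the surface and joint considered.

```python
# Hedged sketch of the NBCC-style wind pressure build-up using the factors quoted above.
# CpCg and the tributary area are illustrative assumptions, not values from the paper.
q = 2.0      # kPa, wind velocity pressure assumed for the barge route
Iw = 0.8     # low importance factor
Ce = 0.9     # exposure factor
Ct = 1.0     # topographic factor
CpCg = 1.3   # ASSUMED combined exterior pressure coefficient and gust effect factor

p = Iw * q * Ce * Ct * CpCg            # specified external pressure, kPa
trib_area = 2.25 * 4.5 / 2.0           # ASSUMED tributary area per perimeter top joint, m^2
joint_load = p * trib_area             # kN, point load applied at a top joint
print(f"p = {p:.2f} kPa, joint point load = {joint_load:.1f} kN")
```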
Load combinations consisting of imposed dead load from the top module, self-
weight, and wind loads are considered for the calculations. The load factors consid-
ered in the load combinations are 1.25 and 0.9 for the dead load and ± 0.4 and ± 1.4
for the wind load. An impact load factor of 1.5 is also considered for the dead load,
which increases the dead load factors to 1.9 and 1.35, respectively. A total of 36 load
combinations are considered.
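One bookkeeping that reproduces the stated total of 36 combinations is to pair each impact-amplified dead-load factor with each signed wind factor and wind load case, and to add the dead-load-only cases; the enumeration below is a hedged reading of that grouping rather than the authors' actual combination table.

```python
from itertools import product

# Hedged enumeration of the load combinations described above (one possible reading).
wind_cases = ["wd1", "wd1-1", "wd2", "wd2-1"]
dead_factors_impact = [1.9, 1.35]       # 1.25 and 0.9 amplified by the 1.5 impact factor
dead_factors_basic = [1.25, 0.9]
wind_factors = [1.4, -1.4, 0.4, -0.4]   # principal and companion wind load factors

combos = [{"D": fd} for fd in dead_factors_impact + dead_factors_basic]  # dead load only
for fd, fw, w in product(dead_factors_impact, wind_factors, wind_cases):
    combos.append({"D": fd, w: fw})

print(len(combos))  # 4 + 2 * 4 * 4 = 36 under this reading
```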
4 Structural Design
The stacking structure is a truss with pinned connections throughout. The top
members, which directly carry the weight of the module on top, are designed
as beams, but the rest of the members are considered to be truss members and are
loaded only in the axial direction. Grade CSA G40.21 350W steel, with fy and fu of 350 and
450 MPa, respectively, is considered in the design. Correspondingly, matching weldable
steel that meets the requirements of ASTM A992/A992M is suggested to be used.
Rectangular or square HSS members, which have high axial, bending, and
torsional resistance and are relatively lightweight for their load capacity, are
considered. According to the final SAP2000 designs, HSS51 × 51 × 6.4, HSS76 ×
76 × 9.5, HSS127 × 76 × 9.5, HSS127 × 127 × 13, HSS152 × 152 × 9.5, and
HSS152 × 152 × 13 sections are used. The dimensions used in the SAP2000 design
are in imperial and are HSS2 × 2 × 0.25, HSS3 × 3 × 0.375, HSS5 × 3 × 0.375,
HSS5 × 5 × 0.5, HSS6 × 6 × 0.375, and HSS6 × 6 × 0.5, respectively. Figure 3
shows the plan view of the top section of the structure at the 0.933 m level. Figure 4
shows the frame sections in grids 1, 2, and 3. Similarly, Fig. 5 shows the frame
sections in grids A to C. The total weight of a stacking structure is 3.4 t, which is
about 3.4% of the weight of a module that weighs roughly 100 t.
5 Connections Design
Pin supports need to be designed and installed to transfer any axial (tensile or
compressive) and shear loads to the concrete wall below at the six joint locations
along girders A and E. They are shown as green triangles at the joint locations in
Figs. 4 and 5. To create pin supports, the stacking structure needs to be bolted to
the roof of the lower module using base plates and anchor bolts. Along the interior
girders, B, C, and D, roller supports need to be provided. Roller supports are shown
as green circles in Figs. 4 and 6. Structural analysis does not show any tensile (lifting)
load in the roller or pin supports for any of the load cases. The reason is that the
overturning effect of the wind loads is not large enough to overcome the heavy
weight of the top module and produce uplift.
The most critical base reactions, based on the 1.9DL ± WLi load combinations, are
considered for the design of the base plates and anchor bolts. They show a maximum
of 110 kN and 72 kN of shear load on the interior and exterior connections, respec-
tively. These loads are considered in the anchor bolt design. A maximum of 349 kN
axial compression load is considered for the base plate design. Base plate and anchor
bolt designs are performed and checked using Tekla Tedds 2021 software according
to CSA-A23.3-19 [2]. Vertical HSS sections connecting to the pin connections are
either HSS 51 × 51 × 6.35 (HSS 2 × 2 × 0.25) or HSS 76 × 76 × 9.5 (HSS 3 ×
3 × 0.375). Base plates 40 mm thick and 400 × 400 mm in plan need to be welded to the
frame sections at these locations (see Fig. 6). Then, the base plate is bolted at four locations
to the corner walls of the lower modules using 400 mm long, 50 mm diameter hooked-
end anchors. The anchors are located 100 mm from the base plate edges and 125 mm
from the wall edges. The anchor bolt analysis is conducted assuming the wall beneath
is of at least 20 MPa concrete over a 450 × 800 mm area at the interior
connection and a 450 × 625 mm area at the corner connections, as presented in Fig. 7. The
depth of concrete for these areas needs to be greater than 400 mm.
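As a rough plausibility check of the base plate demand, the factored bearing resistance of the concrete under a 400 × 400 mm plate can be compared with the 349 kN compression using a CSA A23.3-style bearing expression; the sketch below is a simplified check only and does not reproduce the anchor breakout, steel, and plate bending checks carried out in Tekla Tedds.

```python
# Simplified CSA A23.3-style bearing check for the base plate (illustrative only).
phi_c = 0.65            # resistance factor for concrete
fc = 20.0               # MPa, assumed minimum strength of the wall concrete below
plate_side = 400.0      # mm
A1 = plate_side ** 2    # mm^2, loaded area under the plate

Br = 0.85 * phi_c * fc * A1 / 1000.0    # kN, factored bearing resistance (no sqrt(A2/A1) enhancement)
demand = 349.0                          # kN, maximum factored compression from the analysis
print(f"Br = {Br:.0f} kN vs demand = {demand:.0f} kN -> {'OK' if Br >= demand else 'NG'}")
```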
To create roller supports, only the compressive axial load needs to be transferred
properly to the walls. None of the load combinations, even when dead loads are
reduced by the 0.9 factor, showed tensile stresses at the location of any of the supports. There-
fore, the compressive load can be transferred to the wall beneath using base plates
only. The members have to be welded to the base plates at these locations, similarly
to Fig. 6. Placing base plates will result in proper stress distribution of the load and
prevent stress concentration and local crushing failure of the concrete beneath. Since
there are no tensile or shear loads at the location of these supports, the base plates
are not required to be bolted to the roof of the module beneath. Bolting the interior
supports to the walls beneath has to be avoided, as it would create pin supports,
redistribute the loads, generate shear forces in the bolts higher than their design
capacity, and lead to failure of the bolts or the concrete beneath them.
In case the weight of the modules is significantly reduced due to changes as the
project progresses, the connections can be redesigned to accommodate tensile
loads. The bolts can be designed with slotted connections, which do not prevent
horizontal movement and do not create shear force reactions but are still able
to transfer axial loads. Similarly, anchor bolts with sleeves or bushings can be used
to allow marginal in-plane movement and carry axial loads without carrying shear
loads.
Securing the top module to the stacking structure is similar to securing the stacking
structure to the bottom module. The only difference is that the top module will sit
on the stacking structure. No base plates are used at these locations, so that the load of the
module is transferred directly onto the top beams of the stacking structure. The stacking
structure will be bolted to the top module at the joint locations along grids A and
E using the same bolted connection specifications to transfer the shear load from the
wind.
The top members of the stacking structure carry the weight of the top module and
are designed as beam members, while the other members are truss members. Member
connections in this structure are all pinned: in the top members they transfer shear
loads, and in the truss members they transfer axial loads. Since the
stacking structure is going to be factory built, the connections are designed to be
welded all around the edges. For G40.21 steel with 450 MPa ultimate strength,
a matching weld electrode, E49XX, with an ultimate tensile strength Xu of 490 MPa, is
used. The matching electrode is selected according to Table 4 of CSA S16-14 [1].
To design the welded connections, the shear loads in the beam members and the axial
loads in the truss members are first exported from the SAP2000 analysis to Excel,
where the weld size calculations are conducted. The weld size is 2 mm for
HSS127 × 76 × 9.5 (HSS5 × 3 × 0.375), HSS152 × 152 × 9.5 (HSS6 × 6 ×
0.375), HSS152 × 152 × 13 (HSS6 × 6 × 0.500) and is 5 mm for HSS51 × 51 ×
6.4 (HSS2 × 2 × 0.250) and HSS76 × 76 × 9.5 (HSS3 × 3 × 0.375).
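The weld sizes above come from spreadsheet calculations of the fillet weld shear resistance per CSA S16; a minimal sketch of that type of check is shown below. The member force and weld length are illustrative assumptions, a longitudinal weld (θ = 0) is assumed, and only the weld metal check is included, so this is not a reproduction of the authors' spreadsheet.

```python
import math

# Hedged sketch of a CSA S16-style fillet weld sizing check (weld metal only).
phi_w = 0.67        # weld resistance factor
Xu = 490.0          # MPa, electrode ultimate strength (E49XX)
theta_deg = 0.0     # angle between load and weld axis; 0 for a longitudinal weld

force_kN = 200.0            # ASSUMED factored member force to be transferred
weld_length = 4 * 152.0     # mm, ASSUMED all-around weld on an HSS152 x 152 face

# vr per mm of weld for leg size D: 0.67 * phi_w * (0.707 * D) * Xu * (1 + 0.5 sin^1.5(theta))
amp = 1.0 + 0.5 * math.sin(math.radians(theta_deg)) ** 1.5
D_req = force_kN * 1000.0 / (0.67 * phi_w * 0.707 * weld_length * Xu * amp)
print(f"Required fillet leg size: {D_req:.1f} mm")   # about 2 mm for these assumed numbers
```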
6 Conclusions

Although modules with two different roof slope directions are present, the design of
a temporary stacking structure (TSS) only for modules with roofs that slope along
the longer side is proposed in this paper. It is recommended to build and use only
one type of stacking structure and to stack the modules with the perpendicular roof
orientation as upper modules. Working with only one type of TSS prevents delays and
complications associated with building two TSSs and with the resulting rigging and
lifting variations and arrangements. A truss
type TSS built with square and rectangular HSS members is proposed to be manu-
factured. HSS members provide high torsional, compressive, and flexural resistance
and can be welded at the connections without much complication. In addition, a truss
type structure results in optimal weight, performance, and constructability. The total
weight of the stacking structure is only 3.4% of the weight of a module. Furthermore,
truss structure members are loaded only at their ends, which results in simpler joint
design and welding. This structural design may be modified in the future if the needs
and module properties or weight change by Nunafab/Illu.
References
1. CSA S16-14 (2014) Design of steel structures. Canadian Standards Association, Mississauga, ON
2. CSA A23.3-19 (2019) Design of concrete structures. Canadian Standards Association, Mississauga, ON
3. National Building Code of Canada (2015) National Research Council of Canada, Ottawa, ON
Effect of Concrete Masonry Unit Web Thickness on the Compressive Strength of Concentrically and Eccentrically Loaded Hollow Masonry Prisms
Abstract ASTM C90 and CSA A165 are the governing standards prescribing allow-
able concrete masonry unit (CMU) geometry in the United States and Canada,
respectively. Since 2011, ASTM C90 has allowed a decreased web thickness of
19 mm regardless of CMU size. Conversely, CSA A165 has maintained historically
used minimum web thicknesses that increase with CMU size starting at 26 mm for
100 mm nominal sized CMUs. Thinner webs reduce CMU weight and so result in
reduced cost, a reduced potential for workplace injury, and improved energy effi-
ciency. No experimental work was identified examining the contribution of CMU
webs to the compressive strength of masonry. It was hypothesized that a reduction
in web thickness could lead to a decrease in masonry strength since hollow masonry
prisms tend to fail due to tensile splitting through the webs of the CMUs. An exper-
imental program was therefore conducted to examine the influence of varying CMU
web thickness on the compressive strength of prisms under eccentric and concen-
tric loading. CMUs with web thicknesses meeting the minimum requirements of
ASTM C90 and CSA A165 were used to construct five-course tall masonry prisms.
Additional parameters investigated included CMU size, nominal CMU strength, and
knock-out units. Thirty-two series with six replicates in each for a total of 192 five-
course tall prisms were tested: 120 were subject to axial concentric loading, and 72
were subject to eccentric loading. Preliminary results show that differences in the
development of the strain gradient are not significant for prisms built using 200 mm
CMUs and subject to eccentric loading, but that statistically significant differences in
the resulting masonry assemblage strength exist when subject to concentric loading.
1 Introduction
The two standards governing the geometric requirements for concrete masonry units
(CMUs) in the United States and Canada are ASTM C90 [2] and CSA A165 [5],
respectively. Historically, these standards have remained nearly identical, but in 2011,
ASTM C90 [3] was modified to allow for a reduced web thickness for all sizes of
CMUs and introduced the minimum allowable normalized web area to allow CMU
manufacturers more flexibility related to the geometric design of CMUs.
The normalized web area, Awn, is determined as the quotient of the total web area
adjacent to the faceshell divided by the product of the nominal height and length of the CMU:

Awn = nw tw hw / (Hnom Lnom)    (1)

where nw is the number of webs, tw is the web thickness, and hw is the web height of the CMU.
Fig. 1 Typical concrete masonry unit with a a full-height web and b a knock-out web. Note that
Hnom and Lnom are 10 mm larger than the CMU dimensions to account for mortar joint thickness
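A short numerical illustration of Eq. (1) follows; the web count and dimensions are hypothetical values for a nominal 200 mm stretcher unit, chosen only to show how the terms combine, and are not taken from the CMUs tested in this program.

```python
def normalized_web_area(n_w: int, t_w: float, h_w: float, H_nom: float, L_nom: float) -> float:
    """Normalized web area per Eq. (1): total web area adjacent to the faceshell divided
    by the nominal height times the nominal length of the CMU (consistent length units)."""
    return n_w * t_w * h_w / (H_nom * L_nom)

# Hypothetical 200 mm (nominal) stretcher unit: 3 webs, 26 mm thick, 190 mm tall.
A_wn = normalized_web_area(n_w=3, t_w=26.0, h_w=190.0, H_nom=200.0, L_nom=400.0)
print(f"A_wn = {A_wn:.3f} (dimensionless ratio of web area to H_nom x L_nom)")
```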
The changes to ASTM C90 [3] were primarily motivated by a desire to improve
thermal bridging and energy efficiency [12]. The reduction to minimum CMU web
thicknesses also has the benefit of reduced CMU weight. Lighter CMUs poten-
tially decrease the chronic stress experienced by masons, will possibly diversify the
workforce, and may improve workplace retention [11]. Lighter CMUs can also be
transported more economically, reduce a project’s construction schedule, and result
in cost savings. A numerical analysis was performed to confirm the adequacy of the
compressive strength of the modified CMU, but based solely on the CMU strength
itself and not that of masonry assemblies that include mortar, and, potentially, grout
[12]. The numerical analysis was further based on the total web area adjacent to the
faceshell of the CMU rather than the geometric configuration of the webs. This anal-
ysis led to the requirements for the normalized web area as provided in ASTM C90-11
[3].
Previous experimental investigations that included ungrouted masonry prisms
showed that failure initiates due to cracking in the webs [8, 9]. No historical research
was identified examining the contribution of webs to the behaviour of masonry under
various load conditions despite knowing that web geometry contributes to prism
strength [4]. These load conditions include both pure axial load and combined axial
load and bending that arise due to construction imperfections or intended design. A
hypothesis was established by the authors that changes to the geometric properties
of CMU webs may influence the strength and behaviour of masonry assemblages.
2 Experimental Program
A total of 96 prisms were constructed and tested under either axially eccentric or
concentric loading. The former was used to simulate prisms subject to combined axial
load and bending. Parameters investigated included: CMU geometry, nominal CMU
strength, and loading type. The CMUs with full-height webs meeting the geometric
requirements of both ASTM C90 [2] and CSA A165 [5] were included in select
prisms. A knock-out CMU geometry was also included that met the normalized web
area requirements as included in ASTM C90 [2]. Varying the nominal CMU strength
in select prisms allowed for a comparison of the relationship between CMU strength
and masonry assemblage strength. Two loading types were selected for the exper-
imental program: one setup designed to meet the conventional axial compressive
prism testing requirements prescribed by CSA S304 [7], hereafter referred to as the
conventional load setup, and another setup containing rollers at the top and bottom of
the prism to apply a concentrated axially eccentric or concentric load to the prisms,
hereafter referred to as the concentrated load setup. The conventional load setup
was used to determine masonry assemblage strength, while the concentrated load
setup was used primarily to determine how prisms constructed using CMUs with
varying geometry perform under eccentric loading and to determine the viability of
modifying current CMU geometric standards in Canada.
Ninety-six five-course tall, hollow prisms constructed in running bond were built
by an experienced mason over a three-week period. The prism configurations met
the requirements outlined in Appendix D of CSA S304 [7]. Faceshell only mortar
bedding was necessitated by the misalignment of the webs due to the running bond
pattern and as prescribed in CSA S304 [7].
Figure 2 shows the geometry of all prisms. Full blocks were used as the top and
bottom courses to minimize damage during transportation of the prisms from their as-
constructed location to the test platform. Half-blocks used to construct the prisms
were cut in the lab from full blocks using a masonry saw and are included in the
second and fourth courses. The thicknesses of the cut half-block webs were about
3 mm less than half the full web thickness and resulted from the thickness of the saw
blade. Prisms were air-cured until testing.
Table 1 shows that the 96 total prisms built were divided into 16 series with six
replicates in each. These series were separated by: CMU geometry used, nominal
CMU strength, test setup type, and degree of applied eccentricity.
Figures 3 and 4 show the concentrated and conventional load setups, respectively.
The concentrated load setup (Fig. 3) was used to apply either a concentric or eccentric
load and was designed similarly to that used by [9]. In addition to a concentric load
(e = 0), eccentricities of t/6 and t/3 were used to draw the relationship between
strain gradient development for prisms built with CSA A165 [5], ASTM C90 [2],
and ASTM C90 [2] knock-out CMUs.
2.2 Materials
Figure 5 shows the three different CMU geometries that were used. All CMUs had
a nominal size of 200 mm. Figure 5a and b shows the two CMU geometries that
include full-height webs with thicknesses meeting either ASTM C90 [2] (Fig. 5a) or
CSA A165 [5] (Fig. 5b) minimum allowable web thicknesses. Figure 5c shows the
details of the knock-out CMU that was included in the experimental program.
The knock-out CMUs, shown in Fig. 5c, were cut from CMUs with web thick-
nesses meeting CSA A165 [5] requirements in the laboratory in preparation for prism
construction using a masonry saw. The height of the masonry saw was adjusted and
used to cut slots in the webs adjacent to the faceshells such that the remaining web
height, hw , was 140 mm. The slotted portions of the webs were then knocked-out by
the mason during prism construction using a stonemason’s hammer.
CMUs meeting CSA A165 [5] or ASTM C90 [2] standards with nominal strengths
of 15, 20, and 30 MPa were procured from six batches for production: one for
each combination of CMU geometry and strength. CMUs were tested for both
absorption and compression to establish their material properties. Three CMUs from
each batch produced were tested for absorption in accordance with ASTM C140 [1]
for a total of 18 CMUs tested. CMUs from each batch were submerged in water for
24 h, then removed from the water, and oven dried for an additional 24 h. Weights
of the CMUs were measured and recorded on arrival, when submerged, following
saturation, and when oven dry. Five additional CMUs from each batch were tested
Fig. 5 CMU geometry for a ASTM C90 CMU, b CSA A165 CMU, and c knock-out CMU
under axial compression in accordance with ASTM C140 [1] for a total of 30 CMUs
tested.
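The absorption and oven-dry density follow from the recorded immersed, saturated, and oven-dry weights through the usual water-displacement relations used with ASTM C140; the sketch below applies them to hypothetical weights, since the measured values are not listed here.

```python
# Water-displacement relations of the kind used with ASTM C140 (hypothetical weights, kg).
W_immersed = 9.1     # weight suspended in water after the 24 h soak
W_saturated = 17.9   # saturated weight after removal from the water
W_ovendry = 16.4     # weight after oven drying

net_volume_m3 = (W_saturated - W_immersed) / 1000.0           # displaced water, m^3
absorption_kg_m3 = (W_saturated - W_ovendry) / net_volume_m3  # kg of absorbed water per m^3
absorption_pct = 100.0 * (W_saturated - W_ovendry) / W_ovendry
density_kg_m3 = W_ovendry / net_volume_m3                     # oven-dry density

print(f"Absorption = {absorption_kg_m3:.0f} kg/m^3 ({absorption_pct:.1f}%), "
      f"oven-dry density = {density_kg_m3:.0f} kg/m^3")
```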
Mortar used to build the prisms was mixed in accordance with CSA A179 [6]
using Type S mortar cement with an aggregate to cement ratio of 3:1 by volume.
A water cement ratio of 0.3:1 was used to achieve adequate workability. Six mortar
cubes were cast in brass moulds for each batch of mortar that was mixed. The cubes
were initially moist cured in the moulds by covering them with a polyurethane sheet
for 48 h, then were removed from the moulds, and air-cured until the day of testing
of the corresponding prism. Mortar cubes were tested in axial compression using a
universal testing machine.
2.3 Instrumentation
Laser displacement transducers and a digital image correlation (DIC) system were
used in combination to assess the strains that developed on the surfaces of the prisms.
One side of the prisms was speckle painted prior to testing to facilitate strain measure-
ments using the DIC system. Figure 6a shows the typical speckle paint pattern used
for all the prisms, and Fig. 6b shows a close-up view of the pattern. Figure 7 shows
the configuration of the six laser displacement transducers in addition to the DIC
system, while Fig. 8a and b, respectively, shows a detailed schematic of the DIC and
laser instrumentation setup. Four aluminium brackets were attached at mid-height
of the faceshells of the top and bottom courses of the prism using epoxy to provide a
surface for the laser displacement transducers to measure displacements. Lasers with
Fig. 6 Typical speckle paint pattern for DIC measurements showing a a full prism and b detail
view
The preliminary test results for masonry prisms constructed using 200 mm CMUs are
discussed herein. Differences in masonry assemblage strength, strain, and observed
crack patterns are discussed for all prisms. A comparison of the measured strain
gradient is presented for the eccentrically loaded prisms only. Differences between as-
tested masonry assemblage strength and that predicted based on the method included
in CSA S304 [7] are limited to a discussion of prisms tested using the conventional
test setup.
Fig. 8 Instrumentation setup: a top view of DIC system setup, and b side view of laser gauge setup
A total of 30 CMUs were tested in accordance with ASTM C140 [1] to determine
their compressive strength. Table 2 shows the results of the testing. Although CMUs
with distinct nominal strengths of 15, 20, and 30 MPa were requested from the
supplier, not all CMUs met the desired requirements.
There were also no statistically significant differences at the 95% confidence
interval between the as-tested compressive strengths of the nominal 15 MPa and
20 MPa CMUs for both ASTM C90 [2] and CSA A165 [5] CMUs. The prisms built
with CMUs with either 15 MPa or 20 MPa nominal strengths were thus combined
into a single group. Statistically significant differences were identified between the
nominal 15 MPa and 30 MPa CMUs meeting ASTM C90 [2] and CSA A165 [5] stan-
dards. Statistically significant differences were also identified between the nominal
20 MPa and 30 MPa CMUs for both ASTM C90 [2] and CSA A165 [5] CMUs.
The average compressive strength results from nominal 30 MPa ASTM C90 [2] and
CSA A165 [5] CMUs were found to be statistically significantly different and should
therefore not be compared.
The average strength of mortar cubes tested in axial compression was 13.1 MPa
with a coefficient of variation (COV) equal to 9.17%.
Table 3 shows the results for the prisms tested using the conventional test setup, and
the masonry assemblage strengths calculated using the unit strength method included
in Clause 5.1.3.5.2 of CSA S304 [7]. The masonry unit strength was determined using
the as-tested CMU strength and linear interpolation from Table 4 of CSA S304 [7].
Appendix D of CSA S304 [7] prescribes a correction factor to calculate the masonry
assemblage strength from the as-tested prism strength based on the prism’s height-to-
thickness ratio. The correction factor is equal to unity when the height-to-thickness
ratio is greater than 5, as is the case for all prisms included in this investigation. As
such, the as-tested prism strength is the same as the masonry assemblage strength.
Table 4 shows the results for the compressive strength of prisms tested using the
concentrated load setup. The prism strength was determined using the method for
extreme fibre compressive stress developed by [9] for eccentrically loaded prisms.
The strength of a prism subjected to concentric loading, f m conc , is:
fm conc = [P / (2 b tf)] [1 + 6e / (t β)]    (2)
where P is the applied load, b is the prism length, and tf is the faceshell thickness.
The prism length, b, is not always equivalent to the length of the CMU and can range
from a half-block length to lengths greater than that of a CMU. β is determined by

β = 3 − 6(tf/t) + 4(tf/t)²    (3)

where t is the overall thickness of the CMU. The corresponding extreme fibre compressive
strength of an eccentrically loaded prism, fm ecc, is:

fm ecc = P / {0.5 b tf [1 + (t − x − tf)/(t − x) + (tf − x)²/(tf (t − x))]}    (4)
where x is the depth of cracking in the most tensile mortar joint developed by interpo-
lation between the stages where both mortar joints are in compression and when the
tensile mortar joint is completely cracked. These stages are defined by the stresses that
result from combined axial compression and bending. The value of x is determined
iteratively from:
e [1 + (t − x − tf)/(t − x) + (tf − x)²/(tf (t − x))]
    = t/2 − tf/3 + (tf − x)²(2t − x − tf)/(6 tf (t − x))
    + (t − x − tf)(t − tf)/(t − x) − (t − x − tf)(2t − tf)/(3(t − x))    (5)
The stress at the extreme compressive fibre is inversely proportional to the effective
mortared cross-sectional area. The effective mortared cross-sectional area decreases
as eccentricity increases, resulting in greater stresses at the extreme compressive fibre
when compared to the stresses in the faceshells of prisms tested under a concentric
load. This increase in extreme fibre compressive stress is referred to as the strain
gradient effect. To compare the strain gradient effect for prisms built with CMUs of
different geometries, a strain gradient ratio, a, is calculated as:
a = fm ecc / fm conc    (6)
This method of normalization allows the stress at the extreme compressive fibre for
the various prism configurations to be more effectively compared. The compressive
strength under concentric loading, f m conc , tested using the concentrated load setup,
and e = 0 was used for determining the strain gradient ratio for the prisms tested under
an eccentric load to eliminate differences resulting from the use of the conventional
setup.
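To make the use of Eqs. (2)–(6) concrete, the sketch below evaluates them for one hypothetical prism geometry and load. The numbers are illustrative only, and the implicit relation for the crack depth x is solved by a simple scan of the reconstructed form of Eq. (5) rather than by the interpolation procedure of [9].

```python
import numpy as np

def beta(t_f: float, t: float) -> float:
    """Eq. (3)."""
    r = t_f / t
    return 3.0 - 6.0 * r + 4.0 * r ** 2

def f_extreme_uncracked(P: float, e: float, b: float, t_f: float, t: float) -> float:
    """Eq. (2): extreme-fibre stress; reduces to P / (2 b t_f) for e = 0."""
    return P / (2.0 * b * t_f) * (1.0 + 6.0 * e / (t * beta(t_f, t)))

def f_extreme_cracked(P: float, b: float, t_f: float, t: float, x: float) -> float:
    """Eq. (4): extreme-fibre stress once the tensile joint has cracked to depth x."""
    bracket = 1.0 + (t - x - t_f) / (t - x) + (t_f - x) ** 2 / (t_f * (t - x))
    return P / (0.5 * b * t_f * bracket)

def solve_crack_depth(e: float, t_f: float, t: float) -> float:
    """Scan the reconstructed Eq. (5) for the crack depth x consistent with eccentricity e."""
    xs = np.linspace(0.0, 0.95 * t_f, 2000)
    lhs = e * (1.0 + (t - xs - t_f) / (t - xs) + (t_f - xs) ** 2 / (t_f * (t - xs)))
    rhs = (t / 2.0 - t_f / 3.0
           + (t_f - xs) ** 2 * (2.0 * t - xs - t_f) / (6.0 * t_f * (t - xs))
           + (t - xs - t_f) * (t - t_f) / (t - xs)
           - (t - xs - t_f) * (2.0 * t - t_f) / (3.0 * (t - xs)))
    return float(xs[np.argmin(np.abs(lhs - rhs))])

# Hypothetical prism: t = 190 mm, faceshells 38 mm, length 390 mm, 400 kN at e = t/3.
t, t_f, b, P = 190.0, 38.0, 390.0, 400e3    # mm and N
e = t / 3.0
x = solve_crack_depth(e, t_f, t)
f_conc = f_extreme_uncracked(P, 0.0, b, t_f, t)   # concentric reference stress
f_ecc = f_extreme_cracked(P, b, t_f, t, x)
print(f"x = {x:.1f} mm, strain gradient ratio a = {f_ecc / f_conc:.2f}")  # Eq. (6)
```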
Statistical outliers were identified and removed from four prism groups that were
tested using the concentrated load setup and four prism groups that were tested using
the conventional load setup. Prisms identified as statistical outliers typically had
lower values of as-tested compressive strength than others in the group, generally
resulting from differences in the applied load rate. CSA S304 [7] requires that half
of the maximum load be applied at a convenient rate with total duration of roughly
one minute, with the load rate then decreased such that failure is reached within
two to three total minutes. Maximum loads were estimated prior to testing using
the unit strength method to determine the required load rate; however, the predicted
maximum load was occasionally inaccurate due to the inherent conservatism of the
unit strength method. The load rates of all subsequent prism tests were readjusted
but failed to consistently capture the unique behaviour of every prism, resulting in
load rates that, on occasion, affected the as-tested prism strengths.
Fig. 9 Strain gradient ratio versus eccentricity (mm) for prisms built using each CMU type
groups. Many of the prism groups had a large scatter of data with coefficients of
variation approaching 15%, and as such statistically significant differences cannot
be captured at a 95% confidence interval. Masonry prisms rely on a single faceshell
to carry the majority of the applied load when eccentricity increases, and thus, the
strength of the prisms built using different CMUs converges. A statistical analysis of
the strain gradient was not conducted since the determination of the strain gradient
was based on using average as-tested compressive strengths. However, some notable
trends were observed. Figure 9 shows the relationship of the strain gradient ratio with
increasing load eccentricity. The increase in stress at the extreme compressive fibre
for e = t/3, the largest eccentricity as tested, from the as-tested compressive strength
reported for prisms built using ASTM C90 [2], CSA A165 [5], and knock-out CMUs
tested using the concentrated load setup was 11%, 15%, and 26%, respectively. This
indicates that the presence of the strain gradient effect is greatest in prisms built
using knock-out CMUs and the strain gradient effect plays a considerable role in
their failure.
Cracking of prisms tested using the conventional setup was initiated in both interior
and exterior webs of the CMUs as was observed by many previous researchers [8,
10]. Figure 10 shows the crack patterns at the point just prior to failure for prisms
built using CSA A165 [5], ASTM C90 [2], and knock-out CMUs. Cracking typically
initiated in the half-blocks in the second and fourth courses, followed by the centre
full-block course, and then extends through the webs of the upper and lower courses
until failure. Imperfections in the CMUs caused crack propagation to initiate where
webs were thinnest, rather than through the centre of the webs as is shown in Fig. 10.
No differences were observed in the cracking behaviour of the prisms built using the
different CMU types.
Fig. 10 Crack development in prisms built using a ASTM C90, b CSA A165, and c knock-out
CMUs
The stress versus strain plots that were generated from the data obtained from the
laser displacement transducers for the prisms tested using the conventional setup were
compared to examine if similarities were identified for prisms built using each CMU
type. The strains were calculated as the average from the vertical laser transducers
labelled by L N and L S , as shown in Fig. 8b. Figure 11 shows three representative stress
versus strain plots for prisms built using each CMU type: ASTM C90 [2], CSA A165
[5], and knock-outs. Little difference was observed for prisms constructed using each
of the three CMU types. Slight differences in the magnitudes of the reported stresses
and strains can be attributed to the difference in material properties of the CMUs and
the mortar; however, the shape of the stress versus strain curves is similar for prisms
built using each CMU type. The modulus of elasticity and resistance are similar for
each CMU type despite differences in web geometry.
Cracking of prisms tested using the concentrated setup also initiated in the webs
of the prisms for all applied eccentricities and so findings are similar to that observed
by others [9]. However, as the eccentricity increased, the cracking initiated at lower
values of applied loads and less cracking was observed prior to failure. Generally, the
cracking would be in line with the rollers. Figures 12, 13, and 14 show representative
stress versus strain plots for prisms tested using the concentrated test setup built using
the three CMU types. The symbol (C) in the legend of Figs. 13 and 14 indicates the
plot for data measured using the laser transducers labelled by L N in Fig. 8b on the
Fig. 11 Stress versus strain plots for prisms built using each CMU type
most compressive face, and the symbol (T) in Figs. 13 and 14 indicates the curve
for data measured using the laser transducers labelled by L S in Fig. 8b on the most
tensile face in the select eccentrically loaded prisms.
Fig. 12 Stress versus strain plots for prisms tested using concentrated test setup and e = 0
Fig. 13 Stress versus strain plots for prisms tested using concentrated test setup and e = t/6
Fig. 14 Stress versus strain plots for prisms tested using concentrated test setup and e = t/3
This paper describes the preliminary results of a prism testing program intended to
evaluate the influence of CMU web height and thickness on resistance to concentric
and eccentric loads. Three types of 200 mm nominal CMUs were used: (1) meeting
ASTM C90-16 requirements for minimum web thickness, (2) meeting CSA A165-
14 requirements for minimum web thickness, and (3) meeting the normalized web
area requirements included in ASTM C90-16. Each of these CMU types was used
to construct five-course tall, hollow prisms in running bond. Sixteen series with six
replicates in each were built and tested for a total of 92 prisms. A test setup meeting
the requirements of CSA S304-14 was used to apply concentric loading to 42 prisms.
An additional test setup that had rollers placed at the top and bottom of the prism
specimen was used to either apply a concentrated load at eccentricities of 1/6 and
1/3 of the measured CMU size from the centre of the prism, or a concentric load.
Laser displacement transducers and a digital image correlation system were used to
collect relevant test data.
The following observations were made in accordance with the collected prism
data:
1. Little difference was found related to the resistance of prisms constructed with the
various CMU types when subject to eccentric loading. Regardless of the CMU
type used to construct the prisms, the stress versus strain trends of eccentrically
loaded prisms were similar.
2. Statistically significant differences in masonry assemblage strength were identi-
fied for prisms constructed with CMUs meeting the web geometric requirements
of CSA A165-14 or ASTM C90-16 loaded concentrically using the CSA S304-14
approved test setup. No statistically significant differences in masonry assem-
blage strength were identified for prisms constructed with ASTM C90-16 or
knock-out CMUs. Those findings suggest that it is the web area, rather than the web
height or thickness alone, that affects the resistance of prisms to axially concentric
compressive loading.
3. All prisms tested using the conventional test setup were at least 29% stronger
than the CSA S304-14 predicted prism strength despite the statistically significant
differences in masonry assemblage strength for each CMU type. This supports that
changes to CMU geometry, potentially to match the geometry currently included in
ASTM C90-16, could be considered for inclusion in the next edition of CSA
A165.
It is recommended that further research be conducted to include a broader range
of CMU nominal strengths and geometric configurations including other CMU sizes.
It is further advised that prism tests that include the presence of grout are conducted
before final recommendations to changes for the CSA A165 standards are made.
Acknowledgements The authors wish to thank Brennan Pokoyoway, Structures Laboratory Tech-
nician, and fellow graduate students Nitesh Chhetri, Gordon Chui, Micah Heide, and Thomas
Vachon for their assistance with the testing of all specimens. The authors also wish to acknowledge
Rupert Ether from City Masonry Ltd. for building the prisms, and Cindercrete Ltd. for manufacturing
all CMUs used in the research. Financial support was provided by a Mitacs Accelerate research
grant with the Canada Masonry Design Centre and the Canadian Concrete Masonry Producers Asso-
ciation as industry partners. Scholarship support for the first author was provided by the University
of Saskatchewan.
References
1. ASTM (2020) ASTM C140-20: standard test methods for sampling and testing concrete
masonry units and related units. American Society for Testing and Materials, West
Conshohocken, PA
2. ASTM (2016) ASTM C90-16: standard specification for loadbearing concrete masonry units.
American Society for Testing and Materials, West Conshohocken, PA
3. ASTM (2011) ASTM C90-11b: standard specification for loadbearing concrete masonry units.
American Society for Testing and Materials, West Conshohocken, PA
4. Boult B (1979) Concrete masonry prism testing. ACI J Proc 76(4):513–535
5. CSA (2014a) CAN/CSA-A165-14 Standards on concrete masonry units. Canadian Standards
Association Group, Toronto, ON, Canada
6. CSA (2014b) CAN/CSA-A179-14 Mortar and grout for unit masonry. Canadian Standards
Association Group, Toronto, ON, Canada
7. CSA (2014c) CSA S304-14 Design of masonry structures (reaffirmed 2019). Canadian Standards
Association Group, Mississauga, ON, Canada
8. Drysdale RG, Hamid AA (1979) Behaviour of concrete block masonry under axial compression.
ACI J Proc 76(6):707–722
9. Drysdale RG, Hamid AA (1983) Capacity of concrete block masonry prisms under eccentric
compressive loading. ACI J Proc 80(2):102–108
10. Hamid AA, Chukwunenye AO (1986) Compression behaviour of concrete masonry prisms. J
Struct Eng 112(3):605–613
11. Hess JA, Kincl L, Wolfe P (2010) Ergonomic evaluation of masons laying concrete masonry
units and autoclaved aerated concrete. Appl Ergon 41(3):477–483
12. Lang NR, Thompson JJ (2014) Recent changes to ASTM specification C90 and impact on
concrete masonry unit technology. ASTM Int STP1577:123–137
A Wave-Propagation-Based Approach to Estimate the Depth of Bending Cracks in Steel-Fiber Reinforced Concrete
A. S. Kırlangıç
School of Computing, Engineering, and Digital Technologies, Teesside University,
Middlesbrough, United Kingdom
e-mail: s.kirlangic@tees.ac.uk
1 Background
Ultrasonic pulse velocity (UPV) and impact echo (IE) are the most common ultra-
sonic tests used for the condition assessment of concrete. Both techniques are based on
the arrival time of the primary wave (P-wave) [2, 3]. The UPV is primarily used to determine
the thickness of a structural element, whereas the IE is preferred to detect and locate
defects. Alternatively, the tests based on the ultrasonic surface waves (USW) provide
information on the variation of material properties along a medium. The USW-based
methods enable to monitor the wave characteristics, such as attenuation and disper-
sion [5]. Since these wave characteristics are altered in the presence of any crack,
they can be used to estimate the crack depth.
Breathing-type cracks occur under bending and align in the vertical direction. The
depth estimation of such cracks has previously been studied by many researchers. However,
in these previous studies the cracks were modeled as clear-cut notches in the laboratory
environment rather than as real cracks [6, 7, 11, 13]. These studies revealed
satisfactory results using frequency-dependent surface wave features. Nevertheless,
the extraction and interpretation of the wave features obtained from signals
recorded on real structural concrete elements require further improvement of the
test configuration and the use of advanced signal processing techniques.
In this study, a diagnostic procedure is developed to estimate the depth of surface-
breaking cracks. The experimental investigations are pursued on steel-fiber reinforced
concrete (SFRC) beam specimens containing cracks of different depths. A
multi-channel test configuration is implemented to capture the surface waves propagating
on the beams. The recorded ultrasonic wave signals are then processed to
demonstrate the correlation between the crack depth and the diagnostic features,
namely the attenuation coefficient and the dispersion index. The attenuation coefficient
is determined using the discrete wavelet transform (DWT), while the dispersion
index is extracted with the help of the two-dimensional Fourier transform (2D-FT).
2 Wave Characteristics
The attenuation of surface waves with distance can be expressed as:

A_i = A_1 \left(\frac{x_i}{x_1}\right)^{\beta} e^{-\alpha (x_i - x_1)} \qquad (1)

where A_i is a wave quantity at distance x_i from the excitation source, α is the attenuation
coefficient, and β is the geometric attenuation constant, which is equal to −0.5 for surface
waves due to their cylindrical wave-front. The wave quantity A_i can simply be the amplitude
in the time signal or any magnitude associated with a specific frequency extracted using
signal processing techniques, such as the Fourier or wavelet transforms. The wavelet
transform is superior to the conventional Fourier transform since temporal information
is retained after the transformation. The continuous wavelet transform (CWT) of
a signal p(t) is given as [1]:
WT(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} p(t) \, \psi^{*}\!\left(\frac{t - b}{a}\right) dt \qquad (2)
where ψ(t) is the mother wavelet function, a is the dilation parameter (scale), and
b is the location parameter (time shift). The dilated and shifted wavelet function is
denoted as ψ((t − b)/a), and ψ* is its complex conjugate. Equation 2 makes it possible
to examine any particular frequency by substituting the appropriate scale parameter
a. However, if a broader frequency range is targeted, then the discrete wavelet
transform is more efficient than the CWT. With the DWT, any signal p(t) can be
decomposed into sub-signals, each of which is associated with a specific frequency
bandwidth; this requires the following discretized form of the mother wavelet [1]:
\psi_{j,k}(t) = \frac{1}{\sqrt{a_0^{j}}} \, \psi\!\left(\frac{t - k \tau_0 a_0^{j}}{a_0^{j}}\right) \qquad (3)
where j and k are integers, the fixed dilation step is a_0 = 2, and the translation factor
is τ_0 = 1. In this study, the DWT is preferred for determining the attenuation, as it has
been demonstrated to be a reliable and convenient method for this purpose [8].
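To make the decomposition step concrete, the following Python sketch (not the authors' implementation, whose software is not described in the paper) uses the PyWavelets package to split a synthetic trace into DWT levels and reports the nominal frequency band and energy of each detail level. The sampling rate, wavelet family, and synthetic signal are assumptions chosen so that one level spans the 31–62.5 kHz band used later in the study.

import numpy as np
import pywt

fs = 500e3                       # assumed sampling rate (Hz), not stated in the paper
t = np.arange(0, 2e-3, 1 / fs)   # 2 ms synthetic trace
signal = np.sin(2 * np.pi * 54e3 * t) * np.exp(-2e3 * t)   # stand-in for a recorded signal

n_levels = 6
coeffs = pywt.wavedec(signal, wavelet="db4", level=n_levels)   # [cA6, cD6, cD5, ..., cD1]

# Detail level j covers roughly fs / 2**(j + 1) to fs / 2**j
for j in range(1, n_levels + 1):
    f_lo, f_hi = fs / 2 ** (j + 1), fs / 2 ** j
    detail = coeffs[n_levels - j + 1]          # cD_j
    energy = float(np.sum(detail ** 2))        # spectral-energy proxy for this level
    print(f"level D{j}: {f_lo / 1e3:.1f}-{f_hi / 1e3:.1f} kHz, energy = {energy:.3e}")
# With fs = 500 kHz, level D3 spans 31.25-62.5 kHz, i.e. the band retained in the paper.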
The extraction of the dispersion curve from multi-channel surface wave measurements
can be summarized in three steps: (i) acquisition of the time signals (time-offset
domain), (ii) transformation of the data from the time-offset domain (t–x) into
the frequency-wavenumber domain (f–k), and (iii) extraction of the phase velocity
(dispersion curve). The f–k plot represents the relationship between the wavenumber k
and the frequency f, where the wavenumber is related to the wavelength by k = 2π/λ
and the phase velocity is V_ph = 2πf/k. The phase velocities V_ph determined from the
f–k plot are then used to constitute the dispersion curve. Once the dispersion curve is
obtained, it can be used as a diagnostic tool to assess the condition of the inspected medium.
A multi-channel test configuration allows multiple time signals to be acquired at
different locations, and hence the dispersion in wave propagation can be determined
experimentally. The two-dimensional Fourier transform of these acquired signals is
performed to obtain a two-dimensional spectrum in the frequency-wavenumber domain,
also known as the f–k plot, where the incident, reflected, and transmitted waves can be
identified. The peaks in the plot are used to compute the phase velocities, which constitute
the dispersion curve of each event [12]. In the case of a homogeneous half-space
medium, the dispersion curve appears as a straight line with a slope equal to the Rayleigh
wave velocity V_R. However, any disruption in the phase velocity due to an anomaly
appears in the f–k plot, which can be used to interpret the damage condition. In this study,
the dispersion curves obtained from the experiments are used to define a diagnostic
parameter called the “dispersion index” (DI), which relates the dispersion in phase
velocity V_ph to the damage content [9].
The dispersion index is basically the cumulative variation in phase velocity caused
by the crack normalized with respect to the reference dispersion curve obtained from
the intact case.
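The Python sketch below illustrates steps (i)–(iii) on placeholder data (it is not the authors' code): a 2D FFT of the offset–time array yields the f–k spectrum, and picking the peak wavenumber at each frequency gives V_ph = 2πf/k. The sampling interval, receiver spacing, and frequency band are assumptions for illustration only.

import numpy as np

dt, dx = 1 / 500e3, 0.02            # assumed sampling interval (s) and receiver spacing (m)
n_x, n_t = 10, 1024                 # receivers, samples per trace
rng = np.random.default_rng(0)
data = rng.standard_normal((n_x, n_t))   # placeholder for the recorded offset-time array

spectrum = np.abs(np.fft.fft2(data))                # 2D amplitude spectrum |U(k, f)|
freqs = np.fft.fftfreq(n_t, d=dt)                   # temporal frequencies f (Hz)
wavenums = 2 * np.pi * np.fft.fftfreq(n_x, d=dx)    # angular wavenumbers k (rad/m)

dispersion = []
for i_f in np.where((freqs > 20e3) & (freqs < 80e3))[0]:   # frequency band of interest
    column = spectrum[:, i_f]
    pos = np.where(wavenums > 0)[0]                 # keep one (positive-k) branch of the spectrum
    i_k = pos[np.argmax(column[pos])]               # spectral peak at this frequency
    v_ph = 2 * np.pi * freqs[i_f] / wavenums[i_k]   # phase velocity from the f-k peak
    dispersion.append((freqs[i_f], v_ph))
# 'dispersion' holds (f, V_ph) pairs; for an intact half-space the points cluster around
# a nearly constant velocity close to the Rayleigh wave velocity V_R.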
3 Experiments
Each beam was loaded in bending until the target crack mouth opening displacement
(CMOD) could be reached. The loading paths and the cracks created in each beam are
shown in Figs. 1 and 2, respectively. The beams are named B0, B1, B2, B3, B4, B5, and
B6, with increasing numbers corresponding to increasing CMOD, which varies from 1.00
to 2.25 mm in increments of 0.25 mm. The crack depth is measured visually using a ruler
and is found to range from 53 to 94 mm, as given in Table 1. The bending strength, on the
other hand, ranges between 3.65 and 5.34 MPa (Table 1), with an average strength of
4.64 MPa. Finally, the compressive strength is measured as 31.8 MPa from three
standard-size cylindrical specimens tested in compliance with EN 12390. B0 was not
subjected to the bending test and was reserved as the control beam.
Fig. 1 Force (kN) versus crack width (mm) curves for beams B1 to B6
In addition to the crack depths measured by the visual inspection, a digital image
processing (DIP) algorithm is developed in MATLAB to quantify the crack depths
and areas on the images of cracks. The photograph of each beam is processed with the
developed DIP algorithm, and the crack forms shown in Fig. 3 are obtained. The crack
lengths and areas given in Table 1 are calculated using the crack forms appearing
on these images. Except for two beams (B5 and B6), the crack depths obtained with
the DIP are found to be in agreement with the visually measured crack depths. Along
with the crack depth, the crack surface area is also calculated as a complementary
damage measure. B5 and B6, whose crack depths could not be determined correctly,
are found to be the two most damaged beams based on the crack area, as expected.
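The authors' DIP algorithm was written in MATLAB and is not reproduced in the paper; the Python sketch below only illustrates the general idea of quantifying crack depth and area from a photograph by intensity thresholding. The file name, threshold value, and mm-per-pixel scale are hypothetical, and the crack is assumed to extend along the image rows.

import numpy as np
from PIL import Image

mm_per_px = 0.25                                    # assumed image scale (mm per pixel)
img = np.asarray(Image.open("beam_B6.jpg").convert("L"), dtype=float)   # hypothetical file

crack_mask = img < 60                               # dark pixels taken as crack (assumed threshold)
rows = np.where(crack_mask.any(axis=1))[0]          # image rows containing crack pixels

crack_depth_mm = (rows.max() - rows.min() + 1) * mm_per_px if rows.size else 0.0
crack_area_mm2 = crack_mask.sum() * mm_per_px ** 2  # pixel count times pixel area

print(f"estimated crack depth: {crack_depth_mm:.1f} mm, area: {crack_area_mm2:.0f} mm^2")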
The test setup includes an ultrasonic pulse velocity instrument (Proceq Pundit Lab),
two transducers with a 54 kHz resonant frequency, a two-channel oscilloscope, and a
computer. One transducer is mounted 15 cm away from the crack and used as the
transmitter, whereas the second one is deployed as the receiver and is shifted along an
18-cm-long array. The receiving transducer is moved to 10 locations, each separated by
2 cm, as illustrated in Fig. 4a. The transmitter has a 6 cm source offset with respect to
the first location of the receiver. Vacuum grease is used as the coupling agent between
the transducers and the beams. At each location, one signal is recorded after averaging
100 acquired signals. At the end of the ultrasonic tests, a dataset consisting of 10 signals
is obtained for each beam.
Fig. 3 Images of cracks obtained by the DIP algorithm (Images are not scaled)
The ultrasonic waves recorded on the intact beam (B0) and on the beam with the deepest
crack (B6) are normalized and plotted in Fig. 5. A significant reduction in the wave-front
of the surface waves after the crack can be clearly observed in the time histories obtained
from B6. For this beam, the maximum amplitude at receiver #6 (R6) is approximately
equal to only 10% of that at the neighbouring receiver #5 (R5), which accounts for a
90% drop in the wave energy. The R-wave velocity V_R shown on the time histories in
Fig. 5, calculated from the arrival times, is found to be 2528 m/s for the intact beam.
For B6, on the other hand, two different surface wave velocities, one before and one
after the crack, are detected because of the crack. Before the crack, V_R is determined
as 2659 m/s, which is in good agreement with the velocity measured on B0, while after
the crack no clear arrival time is detected. The crack damps the initial wave-front
significantly, resulting in a weak wave-front followed by stronger waves refracted from
the crack tip. Since the wave-front passing the crack becomes so weak, V_R could not be
determined with high confidence and is therefore not considered as a measure of the
crack depth.
The acquired ultrasonic wave signals are processed using the discrete wavelet transform
to determine the wave attenuation. The decomposition of the raw signals with the DWT
results in multiple sub-signals, each of which is called a “level”. Each of these levels is
associated with a specific frequency bandwidth, and hence a certain range of wavelengths.
Among the six levels, the one representing the frequency bandwidth of 31–62.5 kHz,
which overlaps with the transmitter’s bandwidth, is chosen and subjected to the Fourier
transform to calculate the spectral energy at each receiver location and so determine the
attenuation. The attenuation trends shown in Fig. 6a represent the decrease in the
spectral energy of these sub-signals for B0 and B6. Since the beams have equal
dimensions, the contribution of geometric attenuation to the total attenuation is not
eliminated, as it is the same for all beams.
Therefore, the crack depth is evaluated based on the total attenuation α by fitting
the attenuation trends to e^{\alpha (x_1 - x_i)}, where x_i is the distance to the transmitter. As
shown in Fig. 6a, a significant increase in the attenuation coefficient, almost 18 times, is
found between the intact beam and the most damaged one. In Fig. 6b, the attenuation
coefficient computed for each beam is normalized and displayed with respect to the
visually measured crack depths. The figure suggests that the crack depths in B1, B2,
and B4, and in B3, B5, and B6, should be close to each other. Since the chosen sub-signal
level corresponds to wavelengths between 4 and 8 cm, it can be predicted that the
crack depth is at least 8 cm in B3, B5, and B6. However, the other decomposed levels,
comprising wavelengths up to 10 cm, should also be investigated to obtain a better
estimate of the crack depth in these three beams. In general, the variation in attenuation
displayed in Fig. 6b is in good agreement with the visually measured crack depths
(Table 1) for B1, B2, B5, and B6, whereas B3 and B4 exhibit contradictory results,
indicating that the depth of the surface cracks observed on these two beams may
vary through the beam cross-section.
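A minimal sketch of the fit described above, assuming the spectral energies of the selected DWT level are already available at the ten receiver offsets (the energies below are placeholders, not measured data); taking the logarithm of E_i = E_1 e^{α(x_1 − x_i)} reduces the fit to a linear regression.

import numpy as np

x = 0.06 + 0.02 * np.arange(10)            # receiver offsets from the source (m): 6 cm + 2 cm steps
energy = 1e3 * np.exp(-8.0 * (x - x[0]))   # placeholder decaying spectral energies

# ln(E_i) = ln(E_1) + alpha * (x_1 - x_i)  ->  linear fit of ln(E) against (x_1 - x_i)
alpha, ln_E1 = np.polyfit(x[0] - x, np.log(energy), deg=1)
print(f"total attenuation coefficient alpha = {alpha:.2f} 1/m")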
3.3.2 Dispersion
Fig. 7 Dispersion curves: phase velocity (m/s) versus frequency (kHz) for beams B0 and B6 compared with the theoretical curve (flexural mode indicated)
4 Conclusion
The attenuation and dispersion behaviors of ultrasonic surface waves are investigated
for the purpose of depth estimation of breathing cracks in the steel-fiber reinforced
concrete beams. The attenuation and dispersion of surface waves are examined by
utilizing the DWT and the 2D-FT, respectively. The attenuation coefficient is found
to distinguish crack depths up to 8 cm. Beyond that, the decomposed levels
comprising different wavelengths should be investigated to obtain a better estimate
of the crack depth. Regarding the dispersion, although the disruption in phase
velocity is apparent in the f–k plots obtained for the cracked beams, the experimental
dispersion curves are not sufficiently clear for the dispersion index to be calculated directly.
Therefore, further studies are planned to investigate the level of disruption caused in the
dispersion curves by the crack, separately from the effects of the beam geometry.
Acknowledgements This work was funded by The Scientific and Technological Research Council
of Turkey (TUBITAK) [Reintegration Grant, Project ID: 118C022].
References
1. Addison P (2002) The illustrated wavelet transform handbook: introductory theory and
applications in science. Institute of Physics Publishing, Bristol and Philadelphia
2. ASTM C597-16 (2016) Standard test method for pulse velocity through concrete. ASTM
International, West Conshohocken, PA
3. ASTM C1383-15 (2015) Standard test method for measuring the p-wave speed and the thickness
of concrete plates using the impact-echo method. ASTM International, West Conshohocken,
PA
4. Graff KF (1975) Wave motion in elastic solids. Ohio State University Press, Belfast
5. Heisey JS, Stokoe KH, Hudson WR, Meyer AH (1982) Research report 256–2: determination
of in situ wave velocities from spectral analysis of surface wave. University of Texas, Centre
of Transportation Research, Austin
6. Kirlangic AS, Cascante C, Polak M (2015) Condition assessment of cementitious materials
using surface waves in ultrasonic frequency range. ASTM Int Geotech Test J 38(2):1–11
7. Kirlangic AS, Cascante C, Polak M (2016) Assessment of concrete beams with irregular defects
using surface waves. ACI Mater 113(1):73–81
8. Kirlangic AS, Cascante G, Polak MA (2017) Characterization of piezoelectric accelerometers
beyond the nominal frequency range. Geotech Test J 40(1):37–46
9. Kirlangic AS, Cascante G, Salsali H (2020) New diagnostic index based on surface waves:
feasibility study on concrete digester tank. J Perform Constr Facilities 34(6)
10. Richart FE Jr, Hall JR, Woods RD (1970) Vibrations of soil and foundations. Prentice-Hall,
Englewood Cliffs, New Jersey
11. Yang Y, Cascante G, Polak M (2009) Depth detection of surface-breaking cracks in concrete
plates using fundamental lamb modes. NDT and E Int 42(6):501–512
12. Zerwer A, Polak M, Santamarina JC (2003) Rayleigh wave propagation for the detection of
near surface discontinuities: finite element study. J Nondestr Eval 22(2):39–52
13. Zerwer A, Polak M, Santamarina JC (2005) Detection of surface breaking cracks in concrete
members using Rayleigh waves. J Environ Eng Geophys 10(3):295–306
Effect of Fastener Stiffness on Buckling
Behaviour of Wooden Built-Up Beams
R. Robatmili · G. Doudak
University of Ottawa, Ottawa, Canada
Y. Du (B)
Canadian Wood Council, Ottawa, Canada
e-mail: ydu@cwc.ca
1 Introduction
Wood built-up beams are formed by multiple pieces of individual members of the
same depth that are fastened or glued together (Fig. 2). Typically, such beams are
manufactured from sawn lumber or engineered wood products such as structural
composite lumber. Structural members can be constructed of multiple laminations
to create large cross-sections to bridge long spans and are often considered an
economical alternative to solid timber members.
Lateral-torsional buckling (LTB) is a probable failure mode in long-span and deep built-
up beams that are not laterally restrained. The buckling failure load is significantly
affected by the level of composite action between the plies comprising the built-up
beam, which in turn is determined by the connections. In the case of no composite action
between the plies, each ply behaves individually, and the buckling capacity of the
beam can be obtained as the sum of the buckling capacities of the individual plies.
Due to the discrete connections between the built-up beam plies and the fact that
these connections have limited stiffness, the buckling capacity of built-up beams
is expected to be smaller than that of a solid sawn beam with an equivalent cross-
section. As a consequence of the non-rigid nature of the connections, significant
slip may occur between the beam laminations once buckling occurs. This in
turn affects the geometric properties, such as the moment of inertia and torsional
rigidity, compared with those obtained for solid members [3].
Although the Canadian wood design standard (CSA O86:19) includes detailed
provisions for built-up columns, its built-up beam clause is not comprehensive in
terms of the types and geometries of fasteners required to achieve a level of composite
action. Some provisions on built-up beams are available in the National Building
Code of Canada Clause 9.23.8.3 [13]; however, these provisions are limited to Part 9
buildings and are based on prescriptive requirements rather than engineering design.
This lack of consistency in the design requirements has resulted in some designers
simply assuming full composite action in design scenarios where it is not warranted,
while others may completely ignore the connections between the individual plies of
the beam, leading to a design that is too conservative.
Past analytical and experimental research on the LTB of wood beams has been
primarily focused on solid rectangular and I-shaped sections. Analytical studies on
solid rectangular beams include Sahraei et al. [15] who developed LTB solutions that
capture the effects of moment gradient, partial twist end restraints, load height and
pre-buckling deformation. Hu et al. [9, 10] examined the LTB behaviour of wooden
beams braced by a flexible mid-span lateral support. Du et al. [4–6] investigated the
LTB of beam-deck systems, including non-sway systems in which beam top faces are
continuously restrained laterally (2016), and sway systems that are rigidly connected
to decking (2019) or with partial rotational connections between the beams and
decking (2020). Experimental studies on LTB have been conducted on sawn lumber
[20] and engineered wood I-joist sections [14, 16]. However, the literature on the LTB
of wood built-up beams is very limited, with only one analytical solution [3] and no
experimental studies available. Within the above context, the present study introduces
a 3D FE model for wood built-up beams using commercial software ABAQUS.
The model captures the interactions between beam laminations and is capable of
predicting the elastic LTB capacities of built-up beams based on the fastener stiffness
and spacings. The primary objective of the present study is to identify parameters
affecting the lateral-torsional buckling capacity of built-up beams and investigate
their effects on the buckling capacity.
The FE model of wood built-up beams is created using the commercial software
ABAQUS. The element chosen for the laminations of built-up beams is an 8-node
brick-type C3D8 element from the ABAQUS library. As illustrated in Fig. 3, each
element has 8 nodes, each with 3 translational degrees of freedom (DOF), resulting
in a total of 24 DOFs for the element. This element has been successfully used by
other researchers to investigate the LTB behaviour of wood beams (e.g. [20]).
In order to represent the interactions between built-up beam laminations in the
FE model, connecting elements are created to simulate the fastener behaviour.
Among various connecting elements in the ABAQUS library, the two-node SPRING2
element is selected to model fastener stiffness in each of the three principal axes.
This element can provide partial restraints between any of the two beam laminations
by connecting a node from the first lamination to another node from the second lami-
nation. Three SPRING2 elements are needed at each fastener location to represent
the fastener withdrawal stiffness in the x direction and lateral stiffness in both y and
z directions.
Figure 4 shows a sample built-up beam model in the undeformed shape, with
4 beam laminations represented by C3D8 elements and connected by two rows of
fasteners using SPRING2 elements.
This section describes the material inputs used for the C3D8 elements simulating
built-up beam laminations and the SPRING2 elements for fasteners.
The material inputs for C3D8 elements are based on S-P-F No.1/No.2 grade sawn
lumber which is a common choice of wood species for built-up beams in Canada.
The longitudinal modulus of elasticity E L = 9500 MPa is taken directly from the
CSA O86:19 standard [2], while other properties are estimated based on FPL [8]
using the properties of Lodgepole Pine, which is a constituent species of the S-P-F
species combination [12]. Table 1 summarizes the mechanical properties of the
S-P-F laminations used as material inputs for the ABAQUS C3D8 elements.
The stiffness inputs for the SPRING2 connecting elements are based on 3-in. common
nails with a 3.76 mm diameter. The withdrawal stiffness is taken as K_W
= 500 N/mm [18], while the lateral stiffness is predicted based on the following
expression from Eurocode 5 [7]:
K_L = \frac{\rho_m^{1.5} \, d^{0.8}}{30} \qquad (1)
Table 1 Mechanical properties of built-up beam laminations

Mechanical property | S-P-F No. 1/No. 2 grade
E_L (MPa) | 9500
E_T (MPa) | 646
E_R (MPa) | 969
G_LR, G_LT (MPa) | 437
G_RT (MPa) | 47.5
μ_LR | 0.316
μ_LT | 0.347
μ_RT | 0.469
where K L is the slip modulus per nail per shear plane (N/mm), ρ m is the wood density
(kg/m3 ), and d is the nail diameter (mm). By substituting ρ m = 420 kg/m3 for S-P-F
sawn lumber [2] and d = 3.76 mm into the above equation, one obtains the slip
modulus K L = 810 N/mm, which is rounded to 900 N/mm for the FE analysis.
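A minimal sketch of Eq. (1) with the density and nail diameter quoted above; it evaluates to a value close to the reported slip modulus of about 810 N/mm.

def slip_modulus(rho_m: float, d: float) -> float:
    """Eurocode 5 lateral slip modulus per nail per shear plane (N/mm); rho_m in kg/m3, d in mm."""
    return rho_m ** 1.5 * d ** 0.8 / 30.0

K_L = slip_modulus(rho_m=420.0, d=3.76)
print(f"K_L = {K_L:.0f} N/mm")   # roughly 8 x 10^2 N/mm, taken as 900 N/mm in the FE analysis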
The load applied in the FE model is uniform moment about the strong bending axis
of beam laminations. To apply uniform moment loading, horizontal lines denoted as
FG and FFGG are first defined along the top edges at the beam left and right ends,
respectively (Fig. 5). In addition, Lines HI and HHII are defined along the bottom
edges at the beam left and right ends. At the beam left end, concentrated loads with
a magnitude of P, acting along the beam longitudinal axis, are applied at nodes along the
FG axis and in the opposite direction at nodes along the HI axis. This procedure is repeated
at the right end of the beam, except that the directions of the concentrated loads along the
FFGG and HHII lines are reversed. The same loading is applied to all individual plies of
the built-up beam.
These loads create a uniform moment M = N_x × P × d × n, where N_x is the
number of nodes in a single lamination along the x axis, d is the depth of the beam
cross-section and n is the number of plies in the built-up beam.
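For illustration, the nodal force required to produce a chosen reference moment follows directly from this relation; the target moment and node count in the short sketch below are arbitrary assumptions.

def nodal_force_for_moment(m_target: float, n_x: int, d: float, n_plies: int) -> float:
    """Nodal force P (N) giving reference moment m_target (N*mm) with N_x node pairs per ply."""
    return m_target / (n_x * d * n_plies)

P = nodal_force_for_moment(m_target=1.0e6, n_x=5, d=286.0, n_plies=2)   # assumed N_x = 5
print(f"P = {P:.1f} N per node")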
The built-up beam considered in the FE model is assumed to have a simply supported
boundary condition with respect to torsion, vertical and lateral displacements. To
simulate this boundary condition, the displacement along the x and y axes and rotation
about the z axis are restrained at both ends of the beam laminations. Accordingly,
nodes along horizontal axes at the mid-height of the beam ends (i.e. Lines AB and
AABB in Fig. 6) are restrained from movement in the vertical direction. In addition,
nodes along vertical Lines CD and CCDD are restrained from movement in the lateral
direction.
The centroid at each end of the beam lamination is denoted as node E and node EE,
respectively. At node E, all three translational DOFs are restrained, whereas node
EE is restrained in the x and y directions but free to translate in the z direction.
By restraining the displacements as described above, a simply supported boundary
condition is achieved at the ends of the beam laminations.
A linear elastic eigenvalue buckling analysis was conducted to predict the LTB
capacity of simply supported built-up beams under uniform moment loading. Under
eigenvalue buckling analysis, the eigenvalue λ is extracted from the following
equation:

\left( [K_E] + \lambda_i [K_G] \right) \{ v_i \} = \{ 0 \}

where [K_E] is the elastic stiffness matrix of the built-up beam, representing the
stiffness properties of beam laminations and fasteners. [K G ] is the geometric stiffness
matrix and λi denotes the ith eigenvalue corresponding to the buckling mode shape
{vi }. In the case of a built-up beam subjected to a reference uniform moment of
magnitude M defined in Sect. 2.3, the critical moment is M cr = M × λ1 , where λ1
is the lowest eigenvalue associated with a lateral-torsional buckling failure mode.
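Outside ABAQUS, the same eigenvalue extraction can be posed as a generalized eigenvalue problem; the sketch below uses small placeholder matrices (not an assembled beam model) only to show how the lowest positive eigenvalue scales the reference moment. Sign conventions for [K_G] differ between implementations.

import numpy as np
from scipy.linalg import eig

K_E = np.array([[4.0, -1.0], [-1.0, 3.0]])    # placeholder elastic stiffness matrix
K_G = np.array([[-0.5, 0.1], [0.1, -0.4]])    # placeholder geometric stiffness matrix
M_ref = 1.0                                   # reference uniform moment (kN*m)

# (K_E + lam * K_G) v = 0   <=>   K_E v = lam * (-K_G) v
lams = np.real(eig(K_E, -K_G, right=False))
lam_1 = lams[lams > 0].min()                  # lowest positive eigenvalue (first buckling mode)
M_cr = lam_1 * M_ref
print(f"lambda_1 = {lam_1:.3f}, M_cr = {M_cr:.3f} kN*m")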
3 Model Verification
The mesh sensitivity analysis is conducted for a single ply of the built-up beam for
simplicity, and the mesh density determined from this analysis is used for all plies.
The mesh sensitivity analysis is performed under uniform moment loading and
simply supported boundary conditions, using S-P-F No. 1/No. 2 grade sawn lumber
with a cross-section of 38 mm × 286 mm and a length of 5.14 m. The beam length
is selected such that failure is expected to occur in buckling rather than material
strength. The analysis is conducted for six mesh densities, as shown in Table 2. In
each mesh analysis, the element dimension in the longitudinal direction is kept at
approximately twice the other two dimensions for computational efficiency [19].
As illustrated in Table 2, the critical moment converges to a value of 3.19 kN·m
in Case 3, with 4 elements along the section width (N x = 4), 28 elements along the
section depth (N y = 28) and 256 elements along the beam length (N z = 256). The
mesh size is approximately 10 mm × 10 mm × 20 mm. Beyond Case 3 with increasing
mesh density, no changes are observed in the beam critical moment. Hence, the mesh
density in Case 3 is selected as the optimum mesh size for subsequent analyses.
Since no experimental test results on wood built-up beams with discrete mechanical
fasteners are available in the literature, the verification of the FE model is conducted
by comparing the LTB capacity of a 2-ply built-up beam with nearly rigid connections
against the classical LTB solution [17] for an equivalent solid section. The
2-ply built-up beam is assumed to have a lamination cross-section of 38 mm × 286 mm
(nominal 2 by 12) and a length of 5.14 m. The lamination material is assumed to be S-
P-F with the material properties taken from Table 1. The fasteners are located such
that they provide a nearly continuous connection by assigning SPRING2 elements
to join all nodes on the opposing wide faces of the two laminations. In addition,
the spring elements are intentionally assigned nearly infinite withdrawal and lateral
stiffnesses equal to 1 × 10^10 N/mm, which are several orders of magnitude higher than the
actual stiffnesses of the 3-in. common nails presented in Sect. 2.2.
It is expected that the LTB capacity of the built-up beam modelled with this
connection should be nearly identical to that from the classical solution for an equivalent
solid section. Table 3 shows the critical moments predicted by the FE model and the
classical solution for uniform moment loading. The critical moment obtained from the
FE model is 24.5 kNm, which is only 3% higher than the value of 23.8 kNm obtained
from the classical solution for an equivalent solid section.
Table 3 Comparison of FE model and classical solution for a 2-ply built-up beam with rigid
connections
Ply dimension | No. of plies | Critical moment, FE model [1] (kNm) | Critical moment, classical solution [2] (kNm) | [1]/[2]
38 mm × 286 mm × 5.14 m | 2 | 24.5 | 23.8 | 1.03
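The classical solution for a simply supported, narrow rectangular beam under uniform moment is commonly written as M_cr = (π/L)·sqrt(E·I_y·G·J); reference [17] is not reproduced here, so the sketch below assumes that textbook form. With the equivalent 76 mm × 286 mm solid section and the Table 1 properties it returns approximately 23.8 kNm, consistent with Table 3.

import math

b, h, L = 0.076, 0.286, 5.14          # width (2 x 38 mm), depth, and span (m)
E, G = 9500e6, 437e6                  # E_L and G_LR = G_LT from Table 1 (Pa)

I_y = h * b ** 3 / 12                 # weak-axis second moment of area (m^4)
# Approximate St. Venant torsion constant of a solid rectangle
J = h * b ** 3 * (1 / 3 - 0.21 * (b / h) * (1 - b ** 4 / (12 * h ** 4)))

M_cr = math.pi / L * math.sqrt(E * I_y * G * J)
print(f"M_cr = {M_cr / 1e3:.1f} kN*m")   # about 23.8 kN*m, matching the classical value in Table 3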
4 Parametric Study
This section investigates the fastener parameters that can affect the LTB capacity of
built-up beams by using the ABAQUS FE model presented above. Towards this goal,
a sensitivity analysis on the stiffness of mechanical fasteners connecting individual
plies is presented. In addition, the effects of fastener spacings on the LTB capacity are
analysed by comparing the critical moments of built-up beams with varying spacings
while maintaining the stiffness for individual fasteners.
The sensitivity analysis on fastener stiffness was conducted for S-P-F built-up beams
with two to four laminations with the same lamination dimension of 38 mm ×
286 mm × 5.14 m. The laminations are assumed to be fastened together with 3-in.
common nails following the nailing pattern shown in Fig. 7. This includes three rows
of nails with 600 mm spacing parallel to grain, 93 mm spacing perpendicular to
grain, 300 mm end distance and 50 mm edge distance. The spacings selected satisfy
the minimum nail spacing requirements in CSA O86:19 [2].
The built-up beams were each analysed with five runs using the nail size and pattern
mentioned above. The beams are assumed to be simply supported and subject to
uniform moment loading. In the reference case, the lateral stiffness K_L-ref and
withdrawal stiffness K_W-ref are taken as 900 N/mm and 500 N/mm, respectively,
based on the stiffness estimates from the literature described in Sect. 2.2. In Cases 2 and
3, the withdrawal stiffness is increased by a factor of 1.5 or decreased by a factor of
0.5, respectively, relative to its reference value K_W-ref, while maintaining the reference
lateral stiffness K_L-ref. In Cases 4 and 5, the lateral stiffness is decreased by a factor of
0.5 or increased by a factor of 1.5, respectively, relative to its reference value K_L-ref,
while the withdrawal stiffness is kept constant at its
reference value. The purpose of the analysis is to quantify the effects of changes in
lateral or withdrawal stiffness for commonly used fasteners on the LTB capacity of
built-up beams. It is conceivable that other type of fasteners may provide significantly
greater stiffness values, however such fasteners are not typically used in light frame
wood construction.
The results of the sensitivity analysis are presented in Table 4. For 2-ply beams
under Case 2 and 3, where the withdrawal stiffness, K W , was either increased or
decreased by 50%, the same critical moment M cr = 7.98 kNm as the reference case
was obtained. The same observation is found for 3 and 4-ply beams where the critical
moment M cr remains at 11.2 kNm and 19.3 kNm, respectively, when increasing or
decreasing the withdrawal stiffness K W by 50%. Hence, it can be concluded that
the critical moment of built-up beams is not sensitive to changes in the withdrawal
stiffness. This may be expected since the contribution of the fastener withdrawal
stiffness is merely to keep the laminations together so they can buckle in the same
direction.
The results indicate that the LTB capacity is sensitive to the lateral stiffness of
fasteners. For 2 and 3-ply built-up beams, the LTB capacity is decreased by 7–8%
in Case 4 when the lateral stiffness K L is reduced by half, while in Case 5 the LTB
capacity is increased by 5–6% when K_L is increased by 50%. For 4-ply beams, the
LTB capacity is found to be more sensitive to K_L: it decreases by 13% when K_L is
reduced by half and increases by 10% when K_L is increased by 50%. Therefore,
although the effect of the fastener lateral stiffness is noticeable, the change in
buckling capacity is not proportional to the change in stiffness, and the overall
effect on the capacity remains relatively small for the range of stiffness changes and
the type of fasteners considered.
The fastener spacings for Case 1 (reference case) are taken as those shown in
Fig. 7. The spacings for the other two cases are shown in Fig. 8a and b. In Case 2, the
number of fasteners in each row is approximately doubled by reducing the fastener
spacing parallel to grain from the reference 600 to 300 mm. In Case 3, the number
of fasteners in each row is approximately four times that of the reference case with
a spacing parallel to grain equal to 150 mm.
The results of the sensitivity analysis of fastener spacings are summarized in
Table 5. It is observed that the LTB capacity for the 2-ply beam is increased by
11% and 28% from the reference case, respectively, by increasing the number of
fasteners to approximately twice and four times that of the reference case. The 3-ply
beam shows a similar trend where the LTB capacity is increased by 9% and 22%,
respectively, when doubling and quadrupling the total fastener count. The 4-ply beam
is observed to be more sensitive to changes in the number of fasteners, where the LTB
capacity shows 18% increase when doubling the fasteners and a 48% increase when
quadrupling the fasteners. Therefore, it can be concluded that the LTB capacity is
sensitive to fastener spacings, although it can be observed again here that the increase
is not linearly proportional to the increase in the number of fasteners (Table 5).
5 Conclusion
References
Abstract Concrete masonry units (CMUs) with knock-out webs are used in masonry
construction to accommodate the horizontal reinforcement that provides structural
integrity to the elements. Their use reduces structural self-weight and transportation
costs, and potentially minimizes workplace injuries. However, the impact of using
knock-out web units on the strength of masonry members and resulting failure modes
has not been investigated. An experimental program was therefore conducted to
investigate the load-resisting mechanism and failure behavior of masonry members
constructed using knock-out web units. Accurate measurements of surface defor-
mations and cracking patterns were therefore required. Conventional point-to-point
strain measurement methods do not provide adequately detailed information on
surface deformations and so cannot be used to describe the behavior of masonry
members constructed using knock-out web units. It is also challenging to obtain
consistent results using these conventional techniques as they are difficult and time
consuming to set up, susceptible to human error, and can be damaged relatively
easily. A full-field deformation measurement technique such as digital image corre-
lation (DIC) is a non-contact method and was used to measure the detailed surface
deformation and crack patterns of prisms subjected to concentric axial compression.
A total of 42 three-course tall prisms were constructed in running bond with face
shell mortar bedding in accordance with the provisions in CAN/CSA S304-19. The
experimental investigation is ongoing and the results described in this paper suggest
that the crack patterns at failure varied with CMU web height.
1 Introduction
Concrete masonry units (CMUs) with knock-out webs are commonly used in rein-
forced masonry construction to accommodate horizontal reinforcement. There are
also several socio-economic benefits of using CMUs with knock-out webs, such as
reduced structural self-weight, lower transportation costs, and the potential to minimize
workplace injuries [10]. CSA A165-14—CSA Standards on Concrete Masonry
Units [6] stipulates the web configurations of CMUs by providing minimum web
thickness requirements which vary with CMU size. In contrast, U.S. code ASTM
C90-16—Standard Specification for Loadbearing Concrete Masonry Units [3] stip-
ulates the CMU web configurations by minimum normalized web area requirements
which allow for CMUs with varying web heights and thicknesses. These provisions
allow for CMUs with web heights that are shorter than those typically used for
standard Canadian knock-out web units given the same web thickness. The Cana-
dian masonry construction industry may therefore benefit if the existing minimum
geometric requirements for the webs of CMUs are changed to follow those now
specified in ASTM C90-16 [3]. Such a change will also minimize the differences in
masonry materials, products, and research between the U.S. and Canada.
Webs in CMUs are responsible for the transfer of shear stress between the face
shells and so provide structural stability to the unit. Any changes to the geometry of
the webs might therefore impact the structural performance of masonry members in
terms of their strength and serviceability. The impact of any resulting change to the
resistance of masonry members can be investigated using conventional laboratory
tests of prisms subjected to axial loading. It is, however, difficult to quantify and
characterize the complex mechanical behavior of masonry members subjected to
loading due to their heterogeneous and anisotropic nature. Accurate measurements
of surface displacements, deformations, crack width, and crack mapping are required
to obtain meaningful qualitative and quantitative information describing the load-
resistance mechanisms and failure modes of masonry members [8].
Several conventional devices such as: strain gauges, linear extensometers, and
dial gauges are readily available to measure the displacements and deformations of
masonry members subjected to loading. These devices, however, can only measure
the local surface deformations and include high variability due to the quality of the
bond between the device and specimen surface, and random cracks forming near or
along the gauge length of the devices [8]. These conventional techniques also require
a large number of devices to be installed on a specimen to obtain a comprehensive
picture of the resulting deformations. This increases the complexity, cost, and time
required to conduct such tests and so renders them impractical.
Digital Image Correlation (DIC) systems can be used as a non-contact technique to
obtain the full-field surface deformation of masonry members. This method is based
on a principle of correlation between images of the undeformed and subsequently
deformed specimen captured using digital cameras during testing. The accuracy of
the results obtained relies primarily on the resolution of the captured images and
the quality of the speckle pattern on the specimen [9]. The resolution of the image
obtained depends on: the quality of the camera sensor, the sharpness of the image,
noise in the image, quality of lighting, the distance between the camera and specimen,
etc. [9]; whereas the quality of the speckle pattern depends on the method used in
the application of the pattern on the specimen surface. Past studies [4, 8, 9, 11]
conducted on the application of the DIC system showed that this technique can be
effectively used in measuring the complete surface deformation characteristics of
concrete masonry members.
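The correlation principle underlying DIC can be illustrated with a minimal subset-matching sketch in Python (this is not the Vic-3D implementation, and it omits sub-pixel interpolation and stereo triangulation): a subset of the reference image is located in the deformed image by maximizing the zero-normalized cross-correlation, and the shift of the best match is the local in-plane displacement.

import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-normalized cross-correlation of two equally sized image subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_subset(ref, cur, top, left, size=21, search=10):
    """Integer-pixel displacement (dy, dx) of a size x size subset anchored at (top, left)."""
    template = ref[top:top + size, left:left + size]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[top + dy:top + dy + size, left + dx:left + dx + size]
            if cand.shape != template.shape:
                continue                      # skip candidates falling outside the image
            c = zncc(template, cand)
            if c > best:
                best, best_shift = c, (dy, dx)
    return best_shift

# Usage with synthetic images: the "deformed" image is the reference shifted down by 2 pixels.
rng = np.random.default_rng(1)
ref_img = rng.random((200, 200))
cur_img = np.roll(ref_img, shift=(2, 0), axis=(0, 1))
print(track_subset(ref_img, cur_img, top=90, left=90))   # -> (2, 0)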
This paper presents the full-field strain results and crack patterns, measured using
DIC systems, on the surface of concrete masonry prisms subjected to concentric
axial compression. CMUs with three different web heights were considered to investigate
their impact on the failure mode of concrete masonry prisms. Both grouted and
ungrouted prisms were included. A total of 6 unique prism configurations with seven
replicates of each resulted in a total of 42 prisms that were constructed and tested.
Data obtained from the DIC system were analyzed to compare the failure modes of
both ungrouted and grouted concrete masonry prisms constructed using CMUs with
different web heights.
2 Experimental Design
Figure 1 shows that all prisms were three-course tall (h), one unit long (l), and one
unit wide (t). Prisms were constructed in running bond with face shell mortar bedding
following the requirements included in CSA S304-19—Design of Masonry Structures
[5]. All knock-out and half blocks were produced from standard stretcher units in
the laboratory to ensure all CMUs used in prism construction had the same material
properties. A full block was used as the top and bottom courses for all prisms to
prevent a plane of weakness adjacent to the bearing plates used in the test setup. Two
half blocks were used as the middle course with the cut ends facing outwards.
Figure 2 shows three different CMU web heights that were included in the exper-
imental investigation: standard full-height webs (Fig. 2a), knock-out webs cut such
that they were 120 mm tall (Fig. 2b), and knock-out webs cut such that they were
50 mm tall (Fig. 2c). CMUs with full-height webs, as shown in Fig. 2a, represent
CSA A165-14 [6] regular stretcher units. Canadian block suppliers typically limit
the height of knock-out webs to about 120 mm and so CMUs with 120 mm tall webs,
as shown in Fig. 2b, represent CSA A165-14 [6] knock-out web units. Provisions
for minimum normalized web area requirements as prescribed in ASTM C90-16 [3]
allow for CMUs with varying web height and thicknesses. CMUs with knock-out
webs cut such that they were 50 mm tall with as-measured minimum web thick-
ness of 28 mm resulted in a minimum normalized web area of 52,500 mm2 /m2 . This
value approaches the minimum requirement of 45,140 mm2 /m2 as included in ASTM
C90-16 [3] and so these units represent ASTM C90-16 [3] knock-out web units.
Fig. 2 CMU web heights: a CSA A165-14 regular stretcher unit, b CSA A165-14 knock-out web
unit, and c ASTM C90-16 knock-out web unit
Table 1 shows the overall test matrix. A total of 6 unique prism test series each
consisting of seven replicates were included. The identification mark for each test
series as included in the first column of Table 1 are of the form ###Y with ###
representing the CMU web height used in constructing the prisms and Y representing
the grouting condition (H—Hollow, or G—Grouted). For example, 190H represents
the prism test series that was constructed using CSA A165-14 [6] regular stretcher
units with full-height webs with prisms left ungrouted.
All prisms included in the experimental investigation were constructed and tested
in accordance with CSA S304-19 requirements [5]. All prisms were allowed to
cure for a minimum of 24 h prior to grouting, thus allowing the mortar to gain
enough strength to resist the outward grout fluid pressure caused by the placement of
fresh grout. Plywood strips, wrapped with polyethylene sheets, were installed on the
lateral faces of prisms constructed using knock-out units to contain the grout within
the prism. These plywood forms were removed after allowing the grout to set for
a minimum of 24 h. The prisms were then cured under laboratory conditions until
testing.
Normal weight 200 mm concrete masonry units with a nominal strength of 15 MPa
were procured from a local supplier. All CMUs were procured in pallets at least
two weeks in advance of construction to allow their temperature and moisture to
equilibrate with laboratory conditions. Each pallet consisted of a combination of
flat-ended and frog-ended blocks, of which only the frog-ended block units were
used in prism construction. This ensured consistency in the resulting net surface area
of masonry prisms that resisted the applied axial load. The as-tested compressive
strength of the CMUs was established from tests of six randomly selected units in
accordance with ASTM C140-20 [1]. The resulting average compressive strength of
31.2 MPa (COV = 8.15%) was calculated using the net cross-sectional block area.
Mortar was prepared in the laboratory in accordance with CSA A179-14 [7] provi-
sions to a workable consistency as established by the mason. Type MCS masonry
cement and a cement-to-sand ratio of 1:2.5 with a varying water-to-cement ratio of
0.7–0.8 was used for mortar preparation. Three batches of mortar were used during
construction with the compressive strength and coefficient of variation (COV) of the
3 batches ranging from 8.98 to 16.3 MPa and 11.0 to 22.5%, respectively. The large
variation between batches was potentially attributed to a combination of: the varying
moisture content of the sand used in the mix and a variation in the consistency of
mortar desired by the mason for the construction of the different prism types.
The grout was also mixed in the laboratory in accordance with CSA A179-14 [7]
requirements with a 1:5 cement-to-aggregate ratio and a target slump of 250 mm.
The aggregate was pre-mixed by the supplier to satisfy the gradation requirements
prescribed by CSA A179-14 [7] and consisted of a blend of fine and coarse aggregates
with a maximum size of 10 mm. A water-to-cement ratio between 0.90 and 1.00 was
used to obtain the targeted slump. A total of nine 75 mm diameter × 150 mm long non-
absorbent grout cylinders resulting from the three required grout batches were cast
and tested in accordance with ASTM C1019-18 [2]. An identical number of 150 mm
high by 75 mm long × 75 mm wide absorptive grout prisms were also cast and tested.
The average as-tested compressive strength and COV of the non-absorbent grout
cylinders ranged from 13.7 to 20.8 MPa and 0.98 to 5.55%, respectively. Similarly,
the average as-tested compressive strength and COV of the absorptive grout prisms
ranged from 14.4 to 22.8 MPa and 6.61 to 15.3%, respectively.
Prisms were braced with straps to a two-wheeled trolley before being moved from
their as-constructed position to the test bed, to prevent any damage to the specimens
(i.e., primarily cracking of the mortar joints) prior to testing.
Each prism was positioned in the test bed directly underneath the loading plate of the
test setup to ensure a concentric compression load. Fibreboard was placed above and
beneath the prism to ensure that uniform loading was applied over the entire surface
contact area.
A digital image correlation (DIC) system was then installed as shown in Fig. 3a.
One front and side face of each prism were first painted flat white (Fig. 3b) followed
by the application of black dots (Fig. 3c), also known as speckle pattern, using black
spray paint. A speckle pattern with an approximate speckle density of 50%, consisting
of individual speckles of size not less than 0.75 mm in diameter, was generated for
accurate measurement of strain gradient by the DIC system [12]. Two stereo vision
recording systems (Systems 1 and 2 as shown in Fig. 3a) each consisting of two
cameras with 2448 × 2048-pixel resolution and lenses with an F-number of 1.4 and
focal length of 17 mm were set up to capture images of the front and one side face
Fig. 3 Digital image correlation system: a setup, b flat white painted face, and c speckle pattern
of each prism during loading. These systems were placed at different distances from
each painted face due to space limitations in the laboratory and to prevent possible
damage caused by fragments breaking free from the prism at failure and
hitting the cameras. Light-emitting diodes (LEDs), as shown in Fig. 3a, were used to
artificially illuminate the painted prism faces so that the speckle patterns were clearly
visible and high-quality images were obtained. The stereo cameras were calibrated
prior to each prism test using a 40 mm planar calibration grid placed over each
painted surface to correct any potential lens distortion. Images were recorded at a
frequency of 1 Hz until specimen failure. A desktop computer was used to store all
images and 3D-DIC software (Vic-3D, version 8, Correlated Solutions) was used to
analyze images to calculate the average displacements and strains.
Each prism was then tested under concentric axial compressive load as shown
in Fig. 4. Two 1000 kN actuators, connected using a stiffened I-beam to obtain a
combined capacity of 2000 kN, were operated in load control to apply load at the
rate meeting the requirements as specified in CSA S304-19 [5]. The masonry platen
assembly, consisting of an upper platen attached to the spherical head fitted into a
socket plate, was used to ensure uniform loading over the entire contact surface area
of each prism. A 100 mm thick steel plate was placed on the laboratory strong floor
directly underneath the upper platen to provide a rigid surface for prism testing. A
computer-controlled data acquisition system recorded the applied load at a frequency
of one Hertz until failure.
Seven replicates were included within each test series to identify whether a statis-
tically significant difference in compression test results existed between any two
test series at a minimum 95% confidence level. Five out of seven replicates in each
test series were included for strain measurements using the digital image correlation
(DIC) system. The following subsections discuss the crack patterns, strain contours
on the prism surfaces generated by the DIC system, measured crack widths, and the
applied stress versus strain graphs for each test series.
Data obtained from the DIC system were processed and strain contour maps were
generated on one front and side face of the prisms. These strain contour maps were
evaluated to identify the location of initial cracking and corresponding level of applied
load, and the subsequent crack propagation in prisms until failure. Table 2 shows the
average masonry assemblage strength calculated based on the net effective cross-
sectional area, the average applied load at the initial cracking (Pcr ) stage, the average
peak load (Pmax ) at failure, and the average ratio of Pmax /Pcr for the prisms in each test
series. The initial cracking load for each prism was established based on the initial
appearance of a high lateral strain concentration in the strain contour maps generated
by the DIC system rather than by visual observations made during the testing. The
peak load for each prism was the load recorded at failure. The Pmax /Pcr ratio represents
the ability of the prism to resist load beyond the appearance of the first crack. This
ratio decreased for both hollow and grouted prisms as the web height of CMUs
used in prism construction decreased. This finding suggests that the reserve load-carrying
capacity of prisms beyond the appearance of initial cracking increases with
CMU web height. Similarly, the Pmax/Pcr ratios for the grouted prism test series were
lower than those for corresponding hollow prism test series except for prisms within
test series 050G which had a higher Pmax /Pcr ratio as compared to test series 050H.
This suggests that the grouted prisms generally experienced sudden failure after the
appearance of initial cracking as compared to the hollow prisms. This is potentially
due to the grout sharing the applied load with the block unit, resulting in a stress
distribution that caused both materials to fail simultaneously.
Figure 5 shows the lateral strain contour maps on the exterior web face for a
representative hollow prism from each of the three test series (190H, 120H, and
050H) at the initial cracking and peak load states. The variety of colors represent
different ranges of lateral strain (εxx ) as shown in the associated legend. Cracks are
associated with high tensile strain concentration as represented by the regions of
red, orange, and yellow in these figures. The first vertical crack in prisms within
test series 190H appeared at the bottom center of the exterior web within the middle
block course. This crack propagated in length toward the top and bottom courses
and widened as the load increased. Cracks in prisms within test series 120H initiated
in a similar fashion as that of prisms in test series 190H except that these cracks
propagated diagonally toward the top surface of the web within the middle block
course. Other minor cracks also appeared within the top and bottom block courses
as the load increased. Cracks in prisms within test series 050H appeared on the
top surface of the web within the middle block course (as opposed to the bottom
part of the web as observed in test series 190H and 120H) and initiated from the
corner created by the knock-out web. The prisms within test series 190H, 120H, and
050H exhibited cracks at the center of the exterior web face, between the center and
the web-face shell junction, and at the web-face shell junction, respectively. The
shifting of vertical cracks from the center of the web to the web-face shell junction
was potentially due to the stress concentration at the corners of the knock-out webs;
this effect increased with increasing knock-out height. Cracks were not noticed on the face
shells of any prisms, which suggested that the lateral stresses remained low within these
elements. Crack patterns observed on webs and face shells confirmed that the tensile
splitting of webs was the predominant mode of failure, although the crack patterns
varied with CMU web height.
Figure 6 shows the lateral strain contour maps on the exterior web face for a
representative grouted prism from each of the three test series at the initial cracking
and peak load states. Cracks in prisms within test series 190G (shown in Fig. 6a)
initiated and propagated in a similar fashion to those within test series 190H. Cracks
in prisms within test series 120G (Fig. 6b) and 050G (Fig. 6c) initiated along the
web-face shell junction within the middle block course. These cracks propagated
Fig. 5 Initial cracking and peak load states for a representative hollow prism within test series:
a 190H, b 120H, and c 050H
in length to the top and bottom courses at the web-face shell junction and were likely
due to the stress concentration at the corners of the knock-out webs. In contrast with
observations made for the hollow prisms, crack widths in grouted prisms did not appear
to increase with increasing applied load. This resulted in a sudden failure of the grouted
prisms and is potentially due to the grout: providing continuity within the prism, thus
improving its stability; sharing in resisting the applied load, thereby reducing the stress
concentration on the exterior webs; and bonding well with the block unit to resist the
lateral expansion of the exterior web face.
Fig. 6 Initial cracking and peak load states for a representative grouted prism within test series:
a 190G, b 120G, and c 050G
Table 3 Digital linear extensometer measurements across the crack location within the exterior web face

Load stage | Initial length, L0 (mm) | Final length, L1 (mm) | Change in length, ΔL (mm) | Crack width, (ΔL)max − (ΔL)cr (mm)
Initial crack | 103 | 103 | 0.144 | –
Peak | 103 | 107 | 3.86 | 3.72
Several tracking tools including the digital linear extensometer are available in 3D-
DIC software (Vic-3D, version 8, Correlated Solutions) to analyze the post-processed
results. A digital linear extensometer was placed across a pre-existing crack location
on the exterior web face of a given prism to obtain measurements of crack width
until specimen failure. Measurements were recorded at: (1) the unloaded state, (2)
the initial cracking state (Pcr ), and (3) the peak load state (Pmax ), with quantitative
information presented in Table 3. A digital linear extensometer was placed across
a pre-existing crack and thus captured its widening until prism failure. This feature
provided an advantage over the conventional methods where mechanical devices
are set up prior to the testing without knowing where cracks will occur. However,
conventional mechanical devices are helpful to validate results of the DIC system.
Table 3 presents the digital linear extensometer measurements across the crack
location on the exterior web face of a representative prism within test series 190H
at different loading stages. The recorded measurements include the initial length of
the digital extensometer placed across the crack location in the undeformed state (L0),
the final length of the extensometer at a particular loading stage (L1), the change in
the initial length of the linear extensometer (ΔL = L1 − L0), and the lateral strain
between the two ends of the linear extensometer (ΔL/L0). The difference in ΔL values
between the initial cracking and the peak load states gives the maximum crack width
at failure and was found to be 3.72 mm. Crack widths for prisms within other test
series can be measured using a similar approach.
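As a quick worked check of the Table 3 values (simple arithmetic, not part of the DIC processing itself):

L0 = 103.0                    # initial gauge length across the crack (mm)
dL_cr, dL_max = 0.144, 3.86   # change in length at initial cracking and at peak load (mm)

crack_width = dL_max - dL_cr          # maximum crack width at failure
strain_at_peak = dL_max / L0          # lateral strain between the extensometer ends
print(f"crack width = {crack_width:.2f} mm, strain = {strain_at_peak:.4f}")   # about 3.72 mm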
Images captured by the stereo recording systems were analyzed using Vic-3D soft-
ware (Vic-3D, version 8, Correlated Solutions) to generate the average strains within
the selected prism faces. Localized strains were also obtained from two different
digital linear extensometers placed across a pre-existing cracked and uncracked
section on the exterior web face of a given prism. The applied load at different
Fig. 7 Average and localized lateral stress (MPa) versus strain, εxx (mm/mm), on the exterior web face of a representative prism within test series 190H (cracked section, uncracked section, and average face values)
stages was divided by the net cross-sectional area of the prism to obtain the corre-
sponding stress values. These values were plotted against strains, obtained as previ-
ously discussed, to obtain the resulting stress versus strain plots. Figure 7 shows the
comparison between average lateral stress versus strain and localized lateral stress
versus strain generated on the exterior web face of a prism included in test series
190H. The localized measurements obtained from the digital linear extensometers
can either underestimate or overestimate the strains depending upon whether the device
was placed across an uncracked (red curve in Fig. 7) or cracked section (blue curve
in Fig. 7). The average of strains across several cracked and uncracked sections located
within the selected prism face (green curve in Fig. 7) potentially represents the actual
mechanical characteristics of concrete masonry prisms under loading. This suggests
that the full-field analysis results obtained using the DIC system can be used effectively,
in place of those obtained from conventional point-to-point devices, to represent the
overall mechanical behavior of concrete masonry assemblages.
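A minimal sketch of the load-to-stress conversion described above is given below; the net cross-sectional area and the load and strain records are placeholder values, since the actual prism net areas are not reported in this excerpt.

```python
import numpy as np

# Convert an applied-load record to stress for plotting against DIC strains:
# stress = P / A_net, where A_net is the net cross-sectional area of the prism.

A_net = 37_000.0                                   # assumed net area (mm^2)
load_kN = np.array([0.0, 150.0, 300.0, 450.0])     # hypothetical load stages (kN)
eps_xx = np.array([0.0, 0.002, 0.006, 0.018])      # hypothetical DIC lateral strains

stress_MPa = load_kN * 1.0e3 / A_net               # N / mm^2 = MPa
for s, e in zip(stress_MPa, eps_xx):
    print(f"{s:6.2f} MPa at eps_xx = {e:.4f}")
```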
Figures 8a and b show typical average axial and lateral stress versus strain plots
of an exterior web face and a face shell, respectively, for representative hollow
prisms from each of the three test series (190H, 120H, and 050H). The average
axial strain, εyy , is compressive in all instances and so is plotted along the negative
x-axis while the average lateral strain, εxx , is always tensile and so is plotted along
the positive x-axis. The average axial strain plot on the front face had the highest
magnitude of all strains until the prism approached the failure condition, at which
point there was an abrupt increase in average lateral strain within the web face. The
higher value of axial strain on the face shell was attributed to mortar crushing within
the bed joints given that it is softer than the CMUs. The sudden increase in average
lateral strain on exterior webs at failure was due to the extensive vertical cracks that
developed on the web face as the prism approached failure. High lateral tensile stress
concentrations occurred on the webs of the CMUs, which caused the failure. The form of
the curves shown in Fig. 8a and b suggested that the failure mode of hollow prisms
constructed using 200 mm CMUs was due to the tensile splitting of webs, irrespective
of the CMU web height, and no major cracks were recorded on the face shells.
Fig. 8 Typical average axial and lateral stress versus strain plots for representative hollow prism
within each of the three test series on: a exterior web face and b face shell
Figure 9a and b show typical average axial and lateral stress versus strain plots of
an exterior web face and a face shell, respectively, for representative grouted
prisms from each of the three test series (190G, 120G, and 050G). The average axial
strain, εyy , on the exterior web and front face increased gradually until just before
prism failure, at which point there was a sudden increase in average lateral strain, εxx ,
on the web. This was due to the appearance of several narrow vertical cracks on the
web face just before the failure as captured in DIC images. The form of the curves
for the grouted prisms was similar to those of the hollow prisms, and suggested that
the appearance of tensile cracks on the exterior webs ultimately caused the failure.
The failure of grouted prisms, irrespective of the CMU web height, was further
characterized by the crushing of the top portion of the grout and spalling of the face
shells from the top two block courses. The applied load was therefore effectively
shared between the grout and the block unit.
Different crack patterns were observed in prisms within various test series included
in this investigation. The analysis of individual crack patterns using the DIC system
provided a comparative analysis of prisms constructed using CMUs with different
web heights. The ability to map the full-field crack progression at different loading
stages and to measure the crack width at any loading stage made the DIC system an
effective tool. The only alternative would have been conventional hand-drawn crack
mapping, which is subject to greater inconsistencies and safety risks due to the need
to closely monitor cracking on brittle specimens approaching failure.
Fig. 9 Typical average axial and lateral stress versus strain plots for representative grouted prism
within each of the three test series on: a exterior web face and b face shell
4. Crack width at failure was measured by the digital linear extensometer placed
across the pre-existing crack location on the exterior web face of a hollow prism
constructed with CMUs that included full-height webs. Crack width can be
measured at any loading stage using this post-processing tool.
5. The work related to this experimental investigation continues and will include
an evaluation of the influence of knock-out web geometry for CMUs of different
sizes. The expanded test database may offer additional insight related to the
resulting masonry assemblage strength and failure modes of prisms tested under
concentric axial loading.
Acknowledgements The authors wish to express their sincere thanks to the Natural Sciences and
Engineering Research Council of Canada, the Canada Masonry Design Centre, and the Saskatchewan Centre
for Masonry Design at the University of Saskatchewan for financial support. The first author also
gratefully acknowledges scholarship support provided by the University of Saskatchewan.
Moment Redistribution Limits for Beams
with High Strength Steel Reinforcement
1 Introduction
used in design calculations to 500 MPa. This limit practically constrains the use of
HSR because its typically greater yield strength cannot be fully exploited.
Moment redistribution can occur in ductile statically indeterminate structural
systems when plastic hinges form in regions of high moments. The plastic hinges
cannot resist additional moment at their location. Instead, the moment is
“redistributed” to other critical cross sections to maintain static equilibrium. If the
plastic hinges have sufficient inelastic rotation capacity, the indeterminate system
can form an unstable mechanism at the collapse load. The inelastic rotation capacity
is dependent on the curvature ductility, the ratio of the ultimate to yield curvatures,
φu/φy, which is roughly proportional to the inverse of the mechanical reinforcement
ratio, ω, given by Eq. (1), where As is the area of flexural steel reinforcement, b
is the width of the compression region, d is the effective depth, and fc is the specified
concrete compressive strength. Thus, increasing As or fy or reducing fc will increase
ω, reduce φu/φy, and reduce the inelastic rotation capacity at a plastic hinge.

ω = As fy / (b d fc)    (1)
R = (0.125 wf L² − My−) / (0.125 wf L²) × 100%    (2)
diverge from the ratio of the corresponding linear-elastic moments for a particular
load case.
If the inelastic rotation capacity at the initial plastic hinge is limited, failure may
occur at this location before a plastic collapse mechanism can form. Clause 9.2.4 of
A23.3:19 [7] therefore limits the maximum redistribution of negative moments in
continuous beams to the lesser of (30 − 50c/d)% or 20%. An objective of the research
reported in this paper is to investigate the validity of this limit for beams reinforced
with HSR.
The provisions for maximum permissible moment redistribution in ACI 318-
19 [2], which are essentially identical to those in A23.3:19, are based on [6]. This
research was conducted at a time when the typical reinforcing steel yield strength was
280 MPa (i.e. 40 ksi). Although the problem continues to be studied (i.e. Lou et al.
[9]), the authors are not aware of any study that investigates moment redistribution
limits for beams with High Strength Reinforcement.
This paper presents a parametric study of the moment redistribution that occurs
for a simple two-span beam that is continuous over the interior support. Section 2
presents the methodology adopted: specific procedures for designing the critical span
and support cross sections, determining the moment–curvature response of these
sections using MATLAB codes described previously [1], and then determining the
load capacity of the beams using nonlinear analysis in SAP 2000 [8].
In Sect. 3, the detailed analyses of two beams are presented to illustrate the
methodology. In one case, a full collapse mechanism develops. In the other, insuffi-
cient inelastic rotation capacity at the initial plastic hinge prevents the full collapse
mechanism from developing.
In Sect. 4, the results of a detailed parametric analysis are presented. Two steel
grades are considered: ASTM A615/A615M Grade 100 (690 MPa) [3] and A706/
706M Grade 60 (420 MPa) [4], because they have relatively low and high curvature
ductilities, respectively, for a given ω [1]. Two concrete compressive strengths are
considered: f c of 30 and 70 MPa. Two load cases will be explored: live load on either
one or both spans. In the former case, the first plastic hinge forms due to positive
moment in the span, while in the latter case the first plastic hinge forms due to
negative moment at the interior support.
In Sect. 5, the results of the parametric analysis are compared to the provisions
concerning moment redistribution in A23.3:19 [7]. In Sect. 6, the summary,
conclusions, and recommendations for future work are presented.
2 Methodology
The essential steps of designing the reinforcement at the critical beam cross
sections, generating the moment–curvature relationships for these cross sections,
and analysing the results using SAP2000 [8], are presented in this section.
Figure 2 shows schematically the procedure used to determine the span reinforcement
based on a specified reinforcement ratio, ρ − , at the interior support section. The
necessary steps are as follows:
1. Initialize the process by defining the geometric properties, the concrete compressive
strength, fc, and the steel yield strength, fy.
2. Specify a support reinforcement ratio, ρ−, and the corresponding steel area, As−.
3. Compute the nominal support moment resistance, Mu−, in accordance with
A23.3:19 [7], i.e. neglecting the material resistance factors φs and φc.
4. Compute the uniformly distributed load, wu, using linear-elastic analysis, that
equilibrates Mu−. In this case, for uniform live load on both spans, wu = 8Mu−/L².
5. Compute the elastic moment at the span, Mu+, for the case of uniform dead load
on both spans and uniform live load on one span only, assuming, arbitrarily, that
the specified live and dead loads are equal.
6. Determine the span reinforcement area, As+, in accordance with A23.3:19 [7],
again neglecting the material resistance factors φs and φc (a computational sketch of
these steps follows the list).
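A minimal computational sketch of steps 1–6 is given below. It assumes a singly reinforced rectangular section and the A23.3:19 equivalent stress block with α1 = 0.85 − 0.0015fc, uses the Sect. 3 geometry as example input, and replaces the linear-elastic analysis of step 5 with an assumed span-moment coefficient k_span; it illustrates the workflow only and is not the authors' code.

```python
# Steps 1-6 for sizing the span reinforcement from a chosen support steel area.
# Units: mm, MPa, N, N*mm; material resistance factors neglected (phi = 1.0).

def nominal_moment(As, b, d, fc, fy):
    """Nominal flexural resistance of a singly reinforced rectangular section."""
    alpha1 = 0.85 - 0.0015 * fc            # stress block parameter per A23.3:19
    a = As * fy / (alpha1 * fc * b)        # equivalent stress block depth
    return As * fy * (d - a / 2.0)

def steel_area_for(Mn, b, d, fc, fy):
    """Fixed-point iteration for the steel area giving nominal resistance Mn."""
    alpha1 = 0.85 - 0.0015 * fc
    As = Mn / (fy * 0.9 * d)               # first guess: lever arm ~ 0.9 d
    for _ in range(50):
        a = As * fy / (alpha1 * fc * b)
        As = Mn / (fy * (d - a / 2.0))
    return As

# Step 1: geometry and materials (example values from Sect. 3)
b, d, fc, fy, L = 1000.0, 170.0, 70.0, 830.0, 5000.0
# Steps 2-3: support steel area and nominal support moment
As_sup = 2380.0
Mu_neg = nominal_moment(As_sup, b, d, fc, fy)
# Step 4: uniformly distributed load that equilibrates Mu_neg (both spans loaded)
wu = 8.0 * Mu_neg / L**2                   # N/mm
# Step 5: elastic span moment for live load on one span (coefficient assumed here)
k_span = 0.096                             # placeholder for the elastic analysis
Mu_pos = k_span * wu * L**2
# Step 6: span steel area providing Mu_pos
As_span = steel_area_for(Mu_pos, b, d, fc, fy)
print(f"wu = {wu:.2f} N/mm, As_span = {As_span:.0f} mm^2")
```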
The material properties, section dimensions and steel areas are next used to generate
moment–curvature relationships for the critical span and interior support cross
sections using a MATLAB code presented previously [1]. If there is no clearly defined
yield point, a bilinear idealization is used [1]. The moments and curvatures at yield
and at ultimate are critical input values for the subsequent redistribution analysis.
The yield moment, which corresponds to the point where steel tensile stress reaches
yield, defines the initiation of plastic hinge formation. The ultimate moment and
corresponding ultimate curvature define whether the inelastic plastic hinge rotation
capacity is sufficient, given the rotational demand, to achieve a full plastic collapse
mechanism. Two definitions for ultimate moment were investigated: (1) maximum
moment value, and (2) moment corresponding to an extreme fibre concrete compres-
sive strain of 0.0035. The stress–strain relationship for concrete is as proposed by
Carreira and Chu [5], and the stress–strain relationship and associated parameters for the reinforcing steel are as proposed by Mander and Matamoros [10].
Fig. 2 Procedure for determining positive moment reinforcement at span given negative moment
steel area at support
Figure 3 outlines the steps to analyse the beam using SAP2000 Finite Element
Analysis programme [8].
The collapse load was determined using nonlinear analysis as follows:
1. Idealize the continuous beam in SAP 2000 using the geometric properties defined
in Sect. 2.1.
2. Define the yielding and ultimate moments and curvatures as determined from the
moment–curvature analysis as properties of the plastic hinge at the critical span
and interior support cross sections.
3. Add 50 equidistant deformation-controlled plastic hinges along each of the two
beam spans.
4. Define the live (L) and dead (D) uniformly distributed load cases. Consider two
load combinations: live load applied to one span only or to both spans.
5. Run nonlinear analysis, where the applied load is increased in small increments
until failure occurs when a plastic collapse mechanism forms or the inelastic
rotation capacity at a plastic hinge is exhausted.
6. Determine the percentage of moment redistribution from the failure load using
Eq. (2) for the case of live load on both spans. For live load on one span only, compute
the percentage of moment redistribution as follows (a computational sketch of both
cases is given after the equation):
R = (0.083 wf L² − My+) / (0.083 wf L²) × 100%    (3)
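A minimal sketch evaluating Eqs. (2) and (3) is given below; the numerical inputs are illustrative only and are simply chosen to be of the same order as the Sect. 3.1 example.

```python
# Percentage of moment redistribution, Eqs. (2) and (3).
# w_f: uniformly distributed load at failure, L: span length,
# My_neg / My_pos: yield moments of the support / span sections.

def R_live_on_both_spans(w_f, L, My_neg):
    """Eq. (2): elastic negative support moment is 0.125 * w_f * L**2."""
    M_el = 0.125 * w_f * L**2
    return (M_el - My_neg) / M_el * 100.0

def R_live_on_one_span(w_f, L, My_pos):
    """Eq. (3): elastic positive span moment is 0.083 * w_f * L**2."""
    M_el = 0.083 * w_f * L**2
    return (M_el - My_pos) / M_el * 100.0

# Illustrative inputs (kN/m, m, kN*m):
print(f"R = {R_live_on_both_spans(102.6, 5.0, 289.0):.1f} %")
```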
3 Example Calculations
To illustrate the methodology, two example calculations are presented. A case where
a full collapse mechanism forms is considered first, followed by a case where the inelastic
rotation capacity is exhausted before a full collapse mechanism can form.
A two-span beam, continuous over the interior support, has a rectangular cross section
with a width, b, of 1000 mm, a height, h, of 200 mm and an effective depth, d, of
170 mm. The concrete compressive strength, f c , is 70 MPa, and the reinforcement
is ASTM A615/615M Grade 100 steel with a mean yield strength of 830 MPa. The
reinforcement at the interior support, As−, is 2380 mm2, corresponding to ω of 0.165,
and the reinforcement at the span, As+, is 1520 mm2, which corresponds to ω of
0.106. Live load is applied simultaneously on both spans, which are each 5 m long.
Figure 4 shows the idealized bilinear moment–curvature relationships for the
critical span and interior support cross sections. The member exhibits linear-elastic
behaviour until the first plastic hinge forms at the interior support, indicated by
Point A on the figure. As the load is increased, the moment at the interior support
increases slightly until the plastic hinge at the span forms, Point B. The application of
additional load eventually causes the interior support cross section to reach its maximum
moment and curvature, Point C: this corresponds to the failure load. From Eq. (2),
the percentage of moment redistribution at the interior support cross section is:

R = (320.8 − 289) / 320.8 × 100% = 9.8%    (4)
Figure 5 shows the load–deflection response, with Points A, B, and C again corre-
sponding to the formation of the plastic hinge at the support, the formation of the
plastic hinge in the spans, and the support reaching its maximum capacity, respec-
tively. The formation of the plastic hinges causes the stiffness of the member to
decrease.
Fig. 4 Bilinear moment–curvature relationships for span and interior support cross sections
The cross section geometry, span lengths, and steel and concrete strengths of the
previous case are again adopted. The reinforcement areas at the interior support and
span sections are 3740 mm2 and 2300 mm2 which correspond to ω of 0.26 and 0.16,
respectively. Live load is again applied simultaneously on both spans.
Figure 6 shows the bilinear moment–curvature responses of the span and support
cross sections. The first plastic hinge again forms at the interior support, Point A on
the figure. When additional load is applied, the maximum moment and curvature are
reached at the interior support before a hinge forms at the critical span cross section,
Point B. The curvature ductility factors, φ u /φ y , for the support and span cross sections
are 1.2 and 2.5, respectively. The low curvature ductility at the support limits its
inelastic rotation capacity, θ i , which can be quantified as:
θi = ∫0^Lh φi dx    (5)
where L h is the plastic hinge length and φ i is the inelastic curvature within the hinge.
Equation (5) is a simplified approximation as it ignores any tension stiffening that
may be taking place within the plastic hinge.
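For illustration, Eq. (5) can be evaluated numerically by integrating an assumed inelastic curvature profile over an assumed hinge length; the hinge length and curvatures below are placeholders, not results from the study.

```python
import numpy as np

# Inelastic rotation capacity, Eq. (5): theta_i = integral of phi_i over L_h.
# Here phi_i is taken as uniform over the hinge, so the integral reduces to
# (phi_u - phi_y) * L_h.

L_h = 170.0                                  # assumed plastic hinge length (mm)
phi_u, phi_y = 1.2e-4, 4.8e-5                # assumed ultimate / yield curvatures (1/mm)

x = np.linspace(0.0, L_h, 51)                # stations along the hinge
phi_i = np.full_like(x, phi_u - phi_y)       # inelastic curvature profile
theta_i = np.sum(0.5 * (phi_i[:-1] + phi_i[1:]) * np.diff(x))   # trapezoidal rule

print(f"theta_i = {theta_i:.4f} rad")        # equals (phi_u - phi_y) * L_h here
```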
4 Parametric Study
This section presents the results of a detailed parametric analysis. Two steel grades are
considered: ASTM A615/A615M Grade 100 (690 MPa) [3] and A706/706M Grade
60 (420 MPa) [4], because they have relatively low and high curvature ductilities,
respectively, for a given ω [1]. Two concrete compressive strengths are considered:
f c of 30 and 70 MPa.
Figure 8 shows the variation of the percentage of moment redistribution with
the mechanical reinforcement ratio, ω, for the two steel grades and f c of 70 MPa.
When the live load is on one span only, any difference due to the two steel grades
is negligible because a full plastic collapse mechanism develops in all cases. The
first plastic hinge forms in the span section, which has less reinforcement and so
is more ductile than the support section. The support section yields and reaches its
ultimate capacity first. When the live load is on both spans, a full plastic mechanism
forms if the mechanical reinforcement ratio of the support section, ω, is less than
approximately 0.2, and the influence of the steel grade on the moment redistribution
percentage is negligible. For greater ω values, the inelastic rotation capacity of the
plastic hinge at the support is reached, a full plastic collapse mechanism does not
form, and the moment redistribution percentage is reduced. The reduced moment
redistribution percentage is particularly evident for the ASTM A615/A615M Grade
100 steel. Similar results were observed for f c of 30 MPa.
Figure 9 shows the idealized bilinear moment–curvature relationships for both
steel grades at the interior support and span cross sections for f c of 70 MPa. For both
steel grades, ω at the interior support is 0.25 so an incomplete mechanism forms. The
yielding and ultimate moments and ultimate curvatures for both grades are similar.
The yielding curvature, φ y , for the section with A615 Grade 100 reinforcement is,
however, markedly greater than that for the section with A706 Grade 60 reinforce-
ment. This reduces the curvature ductility, the inelastic rotation capacity, and the
percentage of moment redistribution. The section reinforced with the higher steel
grade has smaller steel area for a given ω value, so the cracked moment of inertia
and associated stiffness will be lower.
To compare the results of the parametric study with the provisions in A23.3:19 [7],
it is helpful to recast these requirements in terms of the mechanical reinforcement
ratio, ω. Horizontal force equilibrium at the Ultimate Limit State requires that:
a = φs As fy / (φc α1 fc b)    (6)
where a is the depth of the equivalent rectangular stress block, φ s and φ c are the
resistance factors applied to the steel and concrete material strengths, 0.85 and 0.65
respectively, and stress block parameter α 1 equals 0.85 − 0.0015 f c . Recalling that a
= β1c, where stress block parameter β1 equals 0.97 − 0.0025 fc, substituting Eq. (1)
into Eq. (6) and simplifying yields:
c/d = φs ω / (φc α1 β1)    (7)
Thus, the requirement in Clause 9.2.4 of A23.3:19 that the maximum redistribution
percentage, Rmax , be less than 30–50 (c/d) is equivalent to
Rmax = [30 − 50 φs ω / (φc α1 β1)] % ≤ 20%    (8)
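A small sketch evaluating Eqs. (7) and (8) is given below, using φs = 0.85, φc = 0.65, and the stress-block expressions quoted above; a negative result simply indicates that no redistribution would be permitted.

```python
# Maximum permissible moment redistribution, Eqs. (7) and (8).

def R_max(omega, fc, phi_s=0.85, phi_c=0.65):
    alpha1 = 0.85 - 0.0015 * fc
    beta1 = 0.97 - 0.0025 * fc
    c_over_d = phi_s * omega / (phi_c * alpha1 * beta1)   # Eq. (7)
    return min(30.0 - 50.0 * c_over_d, 20.0)              # Eq. (8), percent

for omega in (0.05, 0.10, 0.20):
    print(f"omega = {omega:.2f}, fc = 70 MPa: R_max = {R_max(omega, 70.0):.1f} %")
```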
In Fig. 10a and b, Eq. (8) is superimposed on the graphs from the parametric study
showing the variation of the moment redistribution percentage with ω for concrete
strengths of 70 and 30 MPa, respectively. Clearly the A23.3:19 provisions provide a
conservative lower bound on the actual moment redistribution percentages for these
cases.
The objective of the research presented in this paper was to verify whether High
Strength Reinforcement, with a specified yield stress greater than 500 MPa, complies
with the current redistribution limits given in Clause 9.2.4 of CSA Standard A23.3:19
“Design of Concrete Structures”. A parametric study investigated the effect of using
ASTM A615 Grade 100 or ASTM A706 Grade 60 reinforcement on the moment
redistribution exhibited by a two-span beam that is continuous over the internal
support. Concrete compressive strengths of 30 and 70 MPa were considered and
loading cases with the live load on one span only or on both spans were investigated.
The procedure involved selecting the flexural reinforcement area at the interior
support and determining the corresponding critical span reinforcement area. An ideal-
ized bilinear moment–curvature relationship was determined for these cross sections
using a programming code developed previously [1]. The output curvatures and
moments at yield and ultimate became input for a nonlinear analysis using SAP2000
to quantify the moment redistribution. This procedure was repeated for different
internal support reinforcement ratios.
The following conclusions are drawn:
1. If a full plastic collapse mechanism forms at member failure, the moment redis-
tribution percentage is independent of the grade and quantity of reinforcement.
2. Insufficient inelastic rotation capacity at the first plastic hinge to form may initiate
local failure of the cross section before a full plastic collapse mechanism forms.
3. The curvature ductility factor, φ u /φ y , and inelastic rotation capacity reduce as
the mechanical reinforcement ratio, ω, defined in Eq. (1), increases. Thus, the
maximum permitted redistribution must reduce for beams with increasing ω
values.
4. A beam reinforced with High Strength Reinforcement will have a lower curvature
ductility factor than a beam with the same ω reinforced with normal strength
reinforcement, and so the maximum permissible moment redistribution is less.
The magnitudes of the ultimate curvature, φ u , are similar but the yield curvature,
φ y , of the beam reinforced with HSR can be markedly greater because the steel
area and cracked section modulus are less.
5. For the cases investigated, the current provisions in Clause 9.2.4 are conservative
with respect to the maximum redistribution permitted.
Acknowledgements Financial support from the Natural Sciences and Engineering Research
Council of Canada is gratefully acknowledged.
References
1. Akbar S, Bartlett FM, Youssef MA (2021) Flexural ductility of concrete beams reinforced with
high strength steel. In: Proceedings, CSCE 2021 annual conference
2. American Concrete Institute (ACI) Committee 318 (2019) Building code requirements for
structural concrete (ACI 318-19). American Concrete Institute, Farmington Hills, MI
3. ASTM (2020) Standard specification for deformed and plain carbon-steel bars for concrete
reinforcement (ASTM A615/A615M). American Society for Testing and Materials (ASTM)
International, West Conshohocken, PA
4. ASTM (2016) Standard specification for deformed and plain low-alloy steel bars for concrete
reinforcement (ASTM A706/A706M). American Society for Testing and Materials (ASTM)
International, West Conshohocken, PA
5. Carreira DJ, Chu KH (1985) Stress-strain relationship for plain concrete in compression. J Am
Concr Inst 82(6):797–804
6. Cohn MZ (1965) Rotational compatibility in the limit design of reinforced concrete beams.
In: Flexural mechanics of reinforced concrete, SP-12, American Concrete Institute/American
Society of Civil Engineers, Farmington Hills, MI, pp 35–46
7. CSA (2019) Design of concrete structures (CSA A23.3:19). Canadian Standards Association,
Toronto, ON
8. Computers and Structures (2020) CSI SAP-2000 analysis reference manual. Berkeley,
California
9. Lou T, Lopes SMR, Lopes AV (2014) Evaluation of moment redistribution in normal-strength
and high-strength reinforced concrete beams. J Struct Eng 140(10):04014072
10. Mander TJ, Matamoros AB (2019) Constitutive modeling and over strength factors for
reinforcing steel. ACI Struct J 116(3):219–232
Investigating Release-Connection
Between Post-tensioned Concrete Slab
and Wall
1 Introduction
Fig. 1 a Concrete load bearing wall with temporary release. b Concrete load bearing wall with
permanent release
however, it permanently disengages the dowel bar between the wall and the slab. This
detail is mostly used when there is no structural need for force transfer between the
slab and its support in the direction of the release [5]. The behavior of this connection
under lateral loadings has not been investigated in the literature. In the experimental
study of this paper, the performance of a permanent Release-Connection under lateral
loading (Fig. 2b) is investigated in order to provide insight into its application when
there is structural need for force transfer between the slab and the wall.
Fig. 2 a Release detail; after Aalami [5]. b The specimen of similar release detail used in this paper
for experimental investigation (SP-BAR6_H-UnWrapped)
2 Experimental Program
The experimental study presented in this paper was carried out in three parts as
described below. Further experiments and analysis are planned to be performed in the
future for a comprehensive investigation of the behavior of release detail especially
under cyclic loading.
on the specimens by the loading system. The hydraulic pump contained a calibrated
pressure gauge to measure the applied force. Dial gauges were used on the end of
the specimen opposite of the applied force to measure horizontal displacement. One
gauge was placed on either corner of the specimen to measure any rotation in the
specimen. Dial gauges were also placed on top of the specimen to measure any
vertical movement of the specimen under load.
The test specimen was loaded vertically with a dead load to simulate the vertical
reaction at the slab to wall connection. In a structure, this reaction comes from the
self-weight of the podium slab and from the structure supported by the podium. The
load was applied with two coils of PT cable with a combined weight of approximately
12 kips (53.4 kN) centered over the wall. The selection of this vertical loading was
dictated by the accessibility and the maneuvering area for the equipment to place the
coils on the top of the specimen.
The anticipated response of the fully wrapped specimen (Fig. 3a) was to experi-
ence relatively little resistance for displacements up to about 1 in. (25.4 mm). At that
point, the foam surrounding the rebar should be fully compressed, and the connection
was expected to resist the lateral force through direct bearing of the concrete on the
dowels. However, this assumption did not correlate with the results (Fig. 3a). This
specimen had a total horizontal displacement of about 7.2 in. (183 mm). The specimen
was loaded until all the dowels failed. The predominant failure mode was a shear
failure of the dowels at the slab to wall interface, or at the hook.
It should be noted that the data was collected from a single stroke push of the
hydraulic cylinder with a limited stroke. Stroke limitations of the hydraulic cylinder
required the force to be removed and additional shim plates were added to continue
loading the specimen. The unloading and reloading are not shown.
This specimen was in a location that had limited accessibility and maneuvering
area for the equipment used to place the coils on top of the specimen. It was only possible to
place one coil on the top, applying approximately 6 kips (26.7 kN) centered over the
wall. This load remained in place for the duration of the test. The partially wrapped
hook specimen was expected to have greater resistance to lateral displacement since
the hook was embedded into the concrete. The results (load verses displacement) can
be seen in Fig. 3b.
The maximum resistance was greater for the partially wrapped hook specimen
than for the fully wrapped specimen. This increased resistance was likely due to the
tension developed in the rebars. That is, after the onset of lateral movement, fixed
hooks produced tension in the rebars, which led to an increased reaction force of
the wall on the slab, which increased the friction. This behavior is considered to be
similar to the shear-friction action described in the commentary of ACI 318-19 [6].
The ultimate load resisted was approximately 70 kips (311 kN). The block moved nearly 3.5 in.
(89 mm) horizontally after all dowels fractured.
To better evaluate the behavior of the specimens tested in Part One, it was decided to
isolate the effects of friction from the contribution of the reinforcement in providing
the lateral strength. The research program continued at Kennesaw State University
(KSU). Two pairs of two samples were used. Each included a cylindrical specimen
cast on top of a concrete base to simulate the interface between the slab and its support.
In each case, the surface of the base was polished (to be similar with the condition of
the surface in Part One), and a plastic bond breaker was used. The lateral load was
applied using a pulley-cable mechanism, and a dial-gauge indicator was placed in front of
the block to measure the displacement. Sand and light weights were used to apply
loads in small increments. The first two samples were plain 5000 psi (34.5 MPa)
concrete, each sample weighing 105 lbs (0.47 kN), with no reinforcement to connect
them to the base (slab). These were fabricated directly to measure the coefficient of
friction between bare concrete surfaces. The results indicated a friction coefficient
of about 0.4.
The second set of two samples included a similar cylindrical concrete sample along
with a foam-wrapped rebar in the center, which was anchored to the base. To achieve
the one-inch compressible thickness on each piece of rebar, two layers of half-inch
thick pipe insulation were used. In this case, the load was increased in small incre-
ments up to 270 lbs (1.20 kN). Under this load the specimen moved approximately 0.9 in.
(23 mm) horizontally (Fig. 4b). The aim was to obtain an indication of the resistance of
the compressible material, without bending the vertical rebar horizontally (Fig. 4b).
Fig. 4 a Four specimens used in part two. b The vertical view of specimen with vertical rebar and
compressible material
Two additional full-scale tests were performed to further investigate the behavior of
the specimens that were tested in Part One. Due to space and loading limitations, only
two rebars were used in these specimens. The identifiers used for these specimens
start with SP-BAR2, which indicates that two bars were used. A specific testing rig
was constructed at the KSU Marietta Campus to apply lateral load to the specimens.
The poured concrete specimens contain foam-wrapped steel dowels (rebars) that
extend into a poured concrete slab (base). A set of experiments was performed using
a hydraulic jack and load and displacement sensors. The method of casting the
concrete specimen over the concrete below, using a plastic layer between them as a
bond breaker, simulates the practice of releasing the slab from the wall below at the
time of tensioning the cables in post-tensioned construction.
Timber forms were made to pour the 24 in. wide × 24 in. long × 14 in. thick (610 mm ×
610 mm × 356 mm) slabs. Concrete was well mixed in an electric mixer and
was vibrated adequately using an electric vibrator. Cylindrical samples were taken
during the casting of concrete to determine the strength of concrete. The #5 dowels
were first bent and then welded to the 1 in. (25.4 mm) thick steel plate of the test
platform. This was followed by casting a 5 in. (127 mm) thick concrete layer to embed the
dowels into the concrete as they would be embedded in a real case.
The specimens experienced lateral force using a 50 kip (222 kN) capacity
hydraulic jack. Load cells and displacement transducers (sensors) allowed accurate
force and displacement measurements to be recorded simultaneously at 2-s intervals.
The results of the experiments were obtained by a software in form of readings of
loads from load cell sensor and displacements from the transducer sensor. A special
framing was used to prevent twisting of the specimen, while it was free to have
vertical and horizontal movements. A side view and a picture of test set up are
shown in Fig. 5a, b.
Fig. 5 a The side view of the testing rig. b A picture of test set up showing “SP-BAR2_H-Wrapped”
The test results for these specimens are presented in Fig. 6. The lateral load was
applied slowly. The transducer did not record any displacement until the load reached
548 lbs (2.44 kN) in the SP-BAR2_H-Wrapped test and 550 lbs (2.45 kN) in the SP-BAR2_
H-UnWrapped test. The duration of this loading was 2 min. After this, the transducer
recorded a displacement of 0.01 in. (0.25 mm). This indicates that the two specimens
behaved similarly before the static friction was overcome. The friction coefficient, using
the weight of the specimens (700 lbs or 3.11 kN), is determined to be approximately 0.8.
The surface under these specimens was hand-troweled. For the experiments in Part
Two, in which the surface of the base concrete was polished, the friction coefficient was
approximately 0.4.
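As a simple check, the sketch below back-calculates the static friction coefficient from the recorded slip load and specimen weight of the Part Three tests.

```python
# Static friction coefficient mu = F_slip / W from the Part Three tests.
# The Part Two tests on the polished base reported mu of about 0.4 directly.

F_slip_lb = 548.0      # load at the first recorded movement, SP-BAR2_H-Wrapped (lb)
W_lb = 700.0           # specimen self-weight (lb)

mu = F_slip_lb / W_lb
print(f"mu ~ {mu:.2f}")   # ~0.78, i.e. approximately 0.8 as reported
```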
Following the initiation of sliding, the rate of lateral load increase for the H-
Wrapped specimen was less than that of the H-UnWrapped specimen. This can be
explained by the fact that for the H-Wrapped specimen, the horizontal movement led
to compression of the compressible material wrapped all around the bars, while for
the H-UnWrapped specimen, the load was also transferred through the embedded
hook, resulting in the bending of the vertical segment of the rebars.
The resistance of the H-UnWrapped specimen at 1 in. (25 mm) displacement is
5 kips (22.2 kN). For this case, the simple calculation of a steel bar fixed at both
ends under a lateral displacement Δ, which equals Δ = PL³/(12EI), can be
considered. The applied load P for Δ = 1 in. is P = 12EIΔ/L³. Using E of steel
(29,000 ksi, 200 GPa), I of a circular section with 0.625 in. (15.9 mm) diameter for
a #5 rebar, and L = 10 in. (254 mm) for the vertical length of the rebar, the force P is
2.6 kips (11.6 kN). Calculation shows that the section under this displacement
has already become fully plastic. To account for this effect, the shape factor (ratio of
plastic section modulus to elastic section modulus) for a circular section, which is equal to 1.33,
is multiplied by the calculated P. Therefore, the two rebars should resist a force of P =
(2)(2.6)(1.33) = 6.9 kips (30.7 kN), whereas the curve shows a 5 kip force at 1 in. The
difference is likely due to the fact that the rebar experiences a slight release at each
end, leading to an effective length slightly greater than the assumed 10 in. (254 mm)
length.
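The hand calculation above can be reproduced with the short sketch below, which uses the fixed–fixed bar stiffness, the 1.33 shape factor for a circular section, and the same assumed 10 in. bar length.

```python
import math

# Lateral force to impose a 1 in. displacement on two fixed-ended #5 dowels,
# including the circular-section shape factor (plastic-to-elastic modulus ratio).

E = 29_000.0                       # ksi
d_bar = 0.625                      # in., #5 bar diameter
I = math.pi * d_bar**4 / 64.0      # in.^4
L = 10.0                           # in., assumed vertical length of the dowel
delta = 1.0                        # in., imposed lateral displacement

P_elastic = 12.0 * E * I * delta / L**3        # ~2.6 kips per bar
P_two_bars_plastic = 2.0 * 1.33 * P_elastic    # ~6.9 kips for two bars
print(f"P per bar = {P_elastic:.2f} kips, two bars (plastic) = {P_two_bars_plastic:.1f} kips")
```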
The resistance of the H-Wrapped specimen at 1 in. (25 mm) displacement was
approximately 8.5 kips (37.8 kN) (see Fig. 6), which was considerably greater than
that of the other specimen. This might be due to shear load transfer instead
of bending of the rebar, because the rebar in the H-Wrapped specimen could have
remained straight and in contact with the concrete over its entire length. Both spec-
imens resisted a force of approximately 10 kips (44.5 kN) at approximately 1.25 in.
(32 mm) displacement.
As shown in Fig. 6, the lateral peak strength of the H-UnWrapped specimen was
about twice that of the H-Wrapped specimen. This observation is not consistent with
the results shown in Fig. 3, which requires further studies. Furthermore, when the
load on the H-UnWrapped specimen reached approximately 27 kips (120 kN), a sudden
snap occurred in the steel frame at the back of the jack, after which the loading was
deliberately stopped. In fact the peak in the curve is not due to any failure in the
specimen.
Further experiments are required, and planned, to fully investigate
and analyze the load transfer mechanisms and resistance of these connections. For
the next phase of experiments in this research project, a mechanism is designed to
provide vertical force in order to simulate the existence of the vertical load in real
life conditions. This will also make the testing condition more similar to that of the
specimens tested in Part One, on which a vertical load was applied.
3 Conclusion
Acknowledgements The authors would like to acknowledge the excellent work of KSU students:
Eric Shults, Chandler Cooper, Gehu Bautista, Joshuah Leljedal, Clint Morris, and Isaiah Akinfe-
unmi on the research project. The support of Tendon Systems, LLC, and their chief engineer Kyle
Bontreger P.E., is gratefully appreciated.
References
1. Post-Tensioning Institute (PTI) (2006) Post-tensioning manual, 6th ed., Phoenix, AZ, USA
2. Lin TY, Burns NH (1991) Design of prestressed concrete structures, 3rd edn. Wiley, USA
3. Aalami BO (2014a) Post-tensioned buildings design and construction, 1st ed., PT-
Construction.com, USA
4. ACI Committee 224 (2013) Guide to design detailing to mitigate cracking (ACI 224.4R-13),
American Concrete Institute, MI, USA
5. Aalami BO (2014b) Crack mitigation in post-tensioned members. Technical Note., PT
Construction.com, USA
6. ACI Committee 318 (2019) Building code requirements for structural concrete (ACI 318R-19).
American Concrete Institute, MI, USA
Experimental Investigation
of Single-Story CLT Shear Walls
1 Introduction
CLT is a type of mass timber panel made of three or more layers of orthogonally
glued lumber boards [5]. With the inclusion of encapsulated mass timber structures into
both the National Building Code of Canada [6] for buildings up to 12 stories, and
the International Building Code [4] for buildings up to 18 stories, the use of CLT
panels in tall building construction is becoming common across North America. The
Canadian Standard for Engineering Design in Wood [1] outlines provisions for CLT
as shear walls for platform-type construction where the floors act as a platform for the
next story. CLT walls comprised of a series of CLT panels are commonly connected
to adjacent panels vertically with splines using screws or nails and connected at their
base with shear brackets (SB) and hold-downs (HD) [8]. Numerous studies on
the performance of CLT connections and CLT shear wall panels have shown that
these can resist lateral loads from wind and seismic events [2, 3, 7, 9–11].
The objective of the research presented herein was to investigate the seismic
behavior of single and coupled CLT shear wall systems considering various panel
aspect ratios, superimposed dead loads, number of SBs, and vertical spline nail
spacing.
2 Experimental Investigations
ratios (height to length) were investigated: 2.5:1 (3000 × 1200 mm) and 3.5:1 (3000
× 850 mm) for both single- and coupled-panel shear walls; see Fig. 2. Additionally,
each specimen had either one or two SB per panel for both the single- and coupled-
wall specimens. The coupled panels were vertically connected using a nailed plywood
spline joint on one side with nails spacings that varied between 75 and 300 mm.
A schematic and a photo of the test setup are shown in Fig. 3. Lateral loads were
applied by a 250 kN actuator at the top of the wall through a steel side plate connected
to a steel H-beam that loaded each panel at its mid-point with large steel pins. Wooden
blocking between the H-beam and the center of each panel accommodated the
superimposed dead loads that were applied onto the panels. Two levels of
superimposed vertical gravity loads representing moderately loaded walls in typical
platform-type timber building were applied: 20 and 30 kN/m. The load was applied
using three steel beams connected to a nearby strong wall and cantilevered over the
H-beam, with weights on hollow structural sections. This gravity load system also
prevented out-of-plane horizontal movements and minimized impact to the lateral
movement of the shear wall.
Fig. 2 Shear wall schematic: a single walls, b coupled walls [dimensions in mm]
The shear wall configurations were tested under quasi-static monotonic loading at
a rate of 10 mm/min, to determine the displacement target for the subsequent quasi-
static reversed cyclic tests. Tests were stopped at failure, defined as the point where the
applied load dropped to 80% of the maximum. The reversed cyclic tests followed the
abbreviated CUREE loading history, a displacement-controlled loading procedure
per ASTM E2126 (2001). The 100% target displacement for the cyclic loading tests
was set to 60% of the observed displacement at failure from the monotonic tests.
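A minimal sketch of how the cyclic target displacement could be derived from a monotonic load–deflection record is shown below; the record itself is synthetic and only illustrates the 80%-of-peak failure definition and the 60% rule.

```python
import numpy as np

# CUREE 100% target displacement: failure is taken where the load drops to
# 80% of the peak, and the target is 60% of the displacement at that point.

disp = np.array([0, 20, 40, 60, 80, 100, 120, 140], dtype=float)   # mm (synthetic)
load = np.array([0, 30, 55, 70, 76, 72, 60, 45], dtype=float)      # kN (synthetic)

i_peak = int(np.argmax(load))
drop = np.nonzero(load[i_peak:] <= 0.8 * load[i_peak])[0]
i_fail = i_peak + int(drop[0])
target = 0.6 * disp[i_fail]
print(f"Failure at {disp[i_fail]:.0f} mm, cyclic target displacement = {target:.0f} mm")
```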
The horizontal, vertical, and relative panel displacements were recorded at twelve
locations as shown in Fig. 3. Sensor #1 was a string pot measuring the wall's
horizontal displacements. Sensors #2 and #7 in coupled walls and #2 and #4 in
single walls are the LVDTs measuring the vertical displacement at the bottom corner
of each panel recording the uplifts. Sensors #3 and #6 in coupled walls and #3 in
single walls are LVDTs measuring the horizontal displacement between the testing
apparatus and the bottom center of each panel to record the horizontal wall sliding.
Sensor #8 measured the relative displacement between the panels at spline joints in
coupled wall panels.
The load–deflection curves from monotonic and reversed cyclic tests are shown in
Figs. 4 and 5, respectively. The key test results for the wall behavior are listed in
Table 1.
Fig. 4 Load–deflection curves for monotonic tests: a single walls, b coupled walls
Fig. 5 Typical load–deflection curves from reversed cyclic tests: a single walls, b coupled walls
had the highest F max+ = 76 kN and displacement at peak loads d Fmax+ = 117 mm. For
the cyclic tests, the average capacity F max increased significantly as nail spacing was
decreased in the splines: 15% and 46% higher in CW9 and CW11, respectively, when compared to CW7.
However, the trend in the average dFmax was not consistent; it was 6% higher in
CW9 and 13% lower in CW11 when compared to CW7. The single wall SW3, with
an aspect ratio of 2.5 and 2-SBs had the highest F max+ = 39 kN and displacement at
peak loads d Fmax+ = 91 mm.
The number of HD nails was reduced by 50% in four coupled-wall tests and by
50–75% in three single-wall tests. The walls with an aspect ratio of 3.5 were only
tested with the reduced number of HD nails to facilitate yielding of the HD nails (a desirable
failure mechanism that allows rocking of the wall panels) and also to evaluate the
variation in capacity and post-yield deformation.
under monotonic loads, had the same configuration as CW12 except for 50% fewer
HD nails. The Fmax was similar for both walls; however, the deformation at peak
loads d Fmax for CW16 was reduced by 27%. The monotonic test results observed in
single walls were inconsistent when the number of nails was reduced by 50–75%.
The F max for the SW6 (50% reduction in HD nails) was 24 kN, 4% less than SW5
(with 100% HD nails). Nevertheless, the displacement at peak loads, dFmax, reduced
linearly with the reduction in nails, with a 23% and 50% lower dFmax observed when the
HD nails were reduced by 75% and 50%, respectively.
4 Conclusion
In this study, the performance of single-story platform-type CLT shear walls was
tested under quasi-static monotonic and reversed cyclic loading. Shear walls with two
aspect ratios tested with various types of hold-downs, and shear brackets, plywood
surface splines, and gravity loading. Shear wall connections: hold-downs, shear
brackets, and vertical spline joints were tested under monotonic and cyclic/reversed
cyclic loading. The aspect ratio of the wall panel and the size of the hold-down had
a significant influence on the capacity and displacement of the CLT shear walls. The
results presented herein will be helpful for the design of low to mid-rise CLT platform
construction.
Acknowledgements The project was supported by Natural Resources Canada (NRCAN) through
the Green Construction Wood (GCWood) program and by the Government of Canada through a
Canada Research Chair.
References
1. Canadian Standards Association (CSA) (2019) Engineering design in wood, CSA Standard
O86-19, CSA, Toronto, ON, Canada
2. Gavric I, Fragiacomo M, Ceccotti A (2015) Cyclic behaviour of typical metal connectors for
cross-laminated (CLT) structures. Mater Struct 48(6):1841–1857
3. Hossain A, Popovski M, Tannert T (2019) Group effects for shear connections with self-tapping
screws in CLT. J Struct Eng 145:04019068
4. ICC International Building Code (2021) International code council (ICC), Falls Church, VA,
2021
5. Karacabeyli E, Gagnon S (2019) Canadian CLT handbook, 2019th edn. FPInnovations, Pointe-
Claire, QC, Canada
6. NBCC (National Building Code of Canada) (2020) National building code of Canada 2015,
Canadian commission on building and fire codes. National Research Council of Canada,
Ottawa, ON
7. Popovski M, Schneider J, Schweinsteiger M (2010) Lateral load resistance of cross laminated
wood panels. In: Proceedings world conference on timber engineering (WCTE), Riva del Garda,
Italy
8. Shahnewaz M, Tannert T, Alam MS, Popovski M (2017) In-plane stiffness of cross laminated
timber panels with openings. Struct Eng Int 27(2):217–223
9. Shahnewaz M, Tannert T, Popovski M (2019) Resistance of cross-laminated timber shear walls
for platform-type construction. J Struct Eng ASCE 145(12):04019149. https://doi.org/10.1061/
(ASCE)ST.1943-541X.0002413
10. Shahnewaz M, Dickof C, Tannert T (2021) Seismic behavior of balloon frame CLT shear walls
with different ledgers. J Struct Eng 147(9):04021137
11. Tannert T, Follesa M, Fragiacomo M et al (2018) Seismic design of cross-laminated timber
buildings. Wood Fiber Sci:3–26
Reliability Analysis of Structural
Elements with Active Learning Kriging
Using a New Learning Function: KO
Function
1 Introduction
The crude Monte Carlo simulation (MCS) is a powerful and simple tool for relia-
bility analysis compared to other reliability methods since it does not require the
knowledge of the most probable point (MPP) that is required for variance reduc-
tion techniques, does not require continuity in the performance function (i.e., the
function that accounts for resistance and load models), supports highly nonlinear
limit state functions (LSF), and supports problems with high or small probabilities
of failure. However, crude MCS is expensive for reliability problems with heavy
computational models, such as finite element (FE), and problems that require a large
number of calls for the performance function such as problems with small probability
of failure, highly nonlinear LSF, or multimodal LSF [2]. To address the efficiency-
related challenge of using crude MCS while utilizing its advantages, surrogate models
such as Kriging can be used as a replacement for the performance function, in which
the heavy computational model (called the original model) is selectively used [5].
Kriging is a nonlinear regression composed of a regression function and a
stochastic process, where the predictions are mathematically derived by setting the
mean of the Kriging error equal to zero and minimizing the error variance [6]. To
build a Kriging predictor, a set of realizations of the original model is required, called
the design of experiment (DoE), which can be optimized in a sequential procedure
using active learning techniques. Active learning Kriging Monte Carlo simulation (AK-MCS)
requires a learning function for the sequential update of the DoE and a stopping
criterion to terminate the training process based on a predefined target accuracy and
efficiency of the analysis. The choice of the learning function can affect the accu-
racy of the analysis by capturing nonlinearity and multimodal LSFs for high- and
low-dimensional spaces, and it can affect the efficiency of the analysis by controlling
the size of the required DoE. Several learning functions were developed to improve
the efficiency of AK-MCS [2, 4, 14, 18, 20, 21]. One of the advanced and newly
developed probability-based learning functions is the KO learning function [7]. The
efficiency and performance of the KO function for reliability analysis are examined
in the present paper.
The formulation and concept of the KO learning function and its corresponding
stopping criteria are introduced in this paper. The validity of the KO function for
reliability analysis is examined through an example of a series system with four
branches. The performance of the KO learning function in terms of accuracy and
efficiency is compared to some of the commonly used learning functions in literature
including EFF [2] and U [4] learning functions through an example concerning the
reliability assessment of a reinforced concrete (RC) column.
Fig. 1 Practical implementation of AK-MCS
3 KO Learning Function
The KO learning function is defined as the probability that the response to point X
(where X is a vector of random variables corresponding to an AK-MCS trial) occurs
in a desired area in the vicinity of the LSF [7]. The mathematical interpretation of the
KO learning function is the area under the probability distribution function (PDF) of
a point X within the range [G(X) − ε(X), G(X) + ε(X)], where G(X) is an arbitrary
desired response function, which is an LSF in the context of reliability analysis, and
ε(X) is a measure of error that determines the borders of the desired range. The
formulation of the KO learning function is presented in Eq. 1 [7].
KO(X) = Φ([G(X) + ε(X) − μĜ(X)] / σĜ(X)) − Φ([G(X) − ε(X) − μĜ(X)] / σĜ(X))    (1)
where Φ(·) is the cumulative distribution function (CDF) of the standard normal distri-
bution, μĜ(X) is the mean of the Kriging predictor at point X, and σĜ(X) is the corre-
sponding standard deviation. For reliability analysis, the value of G(X) is set to zero,
and the value of ε(X) is recommended to be 2σĜ(X) or 5σĜ(X) for sensitive, highly
nonlinear analysis.
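A direct transcription of Eq. (1) is sketched below using the standard normal CDF; mu_hat and sigma_hat stand for the Kriging mean and standard deviation at a trial point, and ε(X) is taken as a multiple of sigma_hat per the recommendation above.

```python
from scipy.stats import norm

def ko_learning(mu_hat, sigma_hat, G_target=0.0, k_eps=2.0):
    """KO learning function, Eq. (1).

    mu_hat, sigma_hat: Kriging prediction mean and standard deviation at X.
    G_target: desired response (zero when the target is the limit state function).
    k_eps: epsilon(X) = k_eps * sigma_hat (2, or 5 for highly nonlinear cases).
    """
    eps = k_eps * sigma_hat
    upper = (G_target + eps - mu_hat) / sigma_hat
    lower = (G_target - eps - mu_hat) / sigma_hat
    return norm.cdf(upper) - norm.cdf(lower)

# Example: a trial point whose predicted mean lies one standard deviation
# away from the limit state surface.
print(f"KO = {ko_learning(mu_hat=1.0, sigma_hat=1.0):.3f}")
```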
The stopping criterion (ST) for the KO function is defined as the probability of
occurrence of a point X outside of the desired area, or in other words, the ST is the
complement of the KO function (i.e., ST = 1 − KO) [7]. At the end of each updating loop
(see Fig. 1), the maximum value of KO is selected and, if it is less than a
prescribed ST, the loop is terminated and the final Kriging predictor is obtained.
KO is a versatile learning function because multiple combinations of ST and ε(X)
can be used.
4 Examples
In this section, two examples are presented. The first example illustrates the steps
of applying AK-MCS using the KO function. The second example examines the
application of the KO function for AK-MCS analysis of a RC column. For the
Kriging predictor, DACE MATLAB Toolbox was used [12, 13].
Example 1 Series System with Four Branches
This example is a series system with four branches that has been used to validate
Kriging-based reliability methods [4, 22]. The performance function is presented in
Eq. 2.
G(X, Y) = min{ 3 + 0.1(X − Y)² − (X + Y)/√2;  3 + 0.1(X − Y)² + (X + Y)/√2;  (X − Y) + k/√2;  (Y − X) + k/√2 }    (2)
where X and Y are two standard normal random variables, k is a constant value which
is set to 6, and G is the performance function. A crude MCS with 10^6 trials indicates
a probability of failure of 0.0044 and a reliability index of 2.62.
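The four-branch function of Eq. (2) and the crude MCS reference solution can be reproduced with the sketch below; the sample size matches the 10^6 trials quoted above, while the random seed is an arbitrary choice made here.

```python
import numpy as np
from scipy.stats import norm

def g_four_branch(x, y, k=6.0):
    """Series system with four branches, Eq. (2)."""
    b1 = 3.0 + 0.1 * (x - y) ** 2 - (x + y) / np.sqrt(2.0)
    b2 = 3.0 + 0.1 * (x - y) ** 2 + (x + y) / np.sqrt(2.0)
    b3 = (x - y) + k / np.sqrt(2.0)
    b4 = (y - x) + k / np.sqrt(2.0)
    return np.min(np.stack([b1, b2, b3, b4]), axis=0)

rng = np.random.default_rng(1)
n = 10**6
x, y = rng.standard_normal(n), rng.standard_normal(n)
pf = np.mean(g_four_branch(x, y) <= 0.0)
beta = -norm.ppf(pf)
print(f"Pf ~ {pf:.4f}, beta ~ {beta:.2f}")   # ~0.0044 and ~2.62 reported above
```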
The AK-MCS with 10^5 trials was conducted per the procedure explained in
the previous sections (see Fig. 1) with the KO learning function, where ε(X) is set to 2σĜ(X)
Fig. 2 KO function evaluated at different analysis stages for whole AK-MCS trials for Example 1
and ST is set to 95%. The Kriging configuration included a linear regression func-
tion and a Gaussian correlation function, and the number of points for the initial DoE
was set to 6. The analysis indicated that an additional 115 points were required to meet
the stopping criterion. Select realizations of the KO learning function for multiple
stages of the analysis corresponding to different added points are presented in Fig. 2.
The KO function is a probability value, and hence, its value always ranges between
zero and one for each AK-MCS trial. From Fig. 2, the area of the KO function for all
AK-MCS trials decreases with the increase in the number of added points, which
suggests that as the Kriging predictor becomes more accurate, a smaller number of
AK-MCS trials are recognized to be in the desired area. Also, as the steps of the anal-
ysis progress, the value of the maximum of the KO learning function for AK-MCS
trials decreases.
The maximum learning function versus the number of added points is presented
in Fig. 3a. The convergence of the maximum learning function to the ST of 95% implies
that there is only a 5% chance that the maximum learning point would occur in
the desired area for all AK-MCS trials. The maximum learning function plotted against
the number of added points (Fig. 3a) shows a descending trend with a slight slope,
followed by a jump, a steeper slope, a higher jump, and a very steep slope, because
of the shape of the limit state function, which has four branches. The
Kriging predictor is not properly mimicking the original model at the early analysis
stages, while its accuracy improves with the increase in the added points. The jumps
in the convergence curves can also be seen in the probability of failure and reliability
index curves versus the added points, as shown in Figs. 3b and 3c, respectively. The
reliability index and the probability of failure found by AK-MCS converge to the
values found by the crude MCS.
The update in the Kriging predictor from the initial Kriging to the final Kriging
predictor is shown in Fig. 4a, where the initial Kriging is unable to capture the LSF.
The blue points are the initial DoE used to build the initial Kriging, and the red
points are the added points that were concentrated around the LSF and improved the
Kriging predictor significantly, as shown in Fig. 4b.
Fig. 3 Convergence for Example 1: a maximum learning function, b probability of failure (Pf), and c reliability index (β)
Fig. 4 Kriging prediction for Example 1: a initial and final Kriging, b 3D limit state and learning points, and c 2D limit state, learning points, and AK-MCS trials
The LSF found by AK-MCS with KO learning function (i.e., the black line) was generally in good agreement with
the LSF found by the original model (i.e., the blue line) as shown in Fig. 4c. The
added points provide the required accuracy of the LSF, while the AK-MCS trials
(i.e., the black dots) determined the required number of added points.
The reliability results indicated that the reliability index and probability of failure
found by AK-MCS are 2.61 and 0.0046, respectively, with 4.5% error in the proba-
bility of failure prediction and 0.38% error in reliability index prediction. Therefore,
it can be concluded that AK-MCS with KO learning function is a valid reliability
method with a reasonable error margin.
Example 2 Reinforced Concrete Column
For the reliability analysis, seven random variables were considered, including the
concrete compressive strength ( f c ), yield stress of steel rebar ( f y ), depth of compres-
sive rebar layer (d sc ), depth of tensile rebar layer (d st ), dead load (D), live load (L),
and a professional factor (PF), where the distribution type, coefficient of variation
(COV), and bias are presented in Table 1. The bias for concrete strength k fc was
determined using Eq. 3 [15].
A two-step reliability approach was used, where the mean values of loads are
determined in the first step to form the design equation, and in the second step the
load and resistance models are used to form the performance function [3, 9, 11, 17].
The design equation used for the first step is adapted from ACI 318-19 for the design
of reinforced concrete columns [1], as presented in Eq. 4.
where PD and PL are the nominal dead and live loads, Rn is the nominal resistance
calculated using the first-order analysis per ACI 318-19 [1], φ n is the member reduc-
tion factor (0.65), and Ur is the utilization ratio, which is a demand-to-capacity ratio
and is set to 1.3 for this example (i.e., an over-utilized section).
The nominal values of loads were calculated based on Eq. 4 with a dead-to-live
load ratio of 4.0 [16]. The nominal value times the bias yields the mean value, and
the mean value times the coefficient of variation provides the standard deviation. Rn
can also be determined using the second-order analysis, but it was conservatively
considered as first-order capacity. For the second step of the reliability procedure, a
crude MCS or AK-MCS can be adopted, with the performance function presented
in Eq. 5.
where φ(·) is the PDF of the standard normal distribution, G+(X) is G(X) + ε(X), and
G−(X) is G(X) − ε(X).
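As a small illustration of the first step of the two-step procedure, the sketch below converts a nominal load (obtained from the design equation, Eq. 4, which is not reproduced here) into dead and live components using the dead-to-live ratio of 4.0, and then into means and standard deviations via the nominal-times-bias and mean-times-COV rules stated above; the bias and COV values in the function signature are placeholders, not the entries of Table 1.

```python
def load_statistics(total_nominal, dl_ll_ratio=4.0, bias=(1.05, 1.0), cov=(0.10, 0.25)):
    """Split a nominal load into dead and live parts and convert to mean / standard deviation.

    total_nominal : nominal (PD + PL) obtained from the design equation (placeholder input)
    bias, cov     : (dead, live) bias and COV -- illustrative values only
    """
    pd_n = total_nominal * dl_ll_ratio / (1.0 + dl_ll_ratio)   # dead-to-live load ratio of 4.0
    pl_n = total_nominal - pd_n
    stats = {}
    for name, nominal, b, c in (("dead", pd_n, bias[0], cov[0]),
                                ("live", pl_n, bias[1], cov[1])):
        mean = nominal * b      # nominal value times bias gives the mean
        std = mean * c          # mean value times COV gives the standard deviation
        stats[name] = (mean, std)
    return stats

print(load_statistics(total_nominal=1000.0))   # kN, arbitrary illustration
```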
Figure 6 shows the convergence of AK-MCS probability of failure and reliability
index for the considered learning functions. The AK-MCS with KO5ST99.9 and EFF
learning functions overpredict the reliability index, while all other learning functions
underpredict the reliability index compared to the crude MCS (shown as a dashed red
line in Fig. 6). Also, the AK-MCS reliability results using the considered learning
functions converge to the crude MCS results at different rates. Table 2 presents a
summary of the reliability results including the number of required calls for the
original model, number of trials for the analysis, probability of failure, reliability
index, and error (εβ), which is calculated as the absolute difference between the
reliability index obtained using crude MCS and that obtained using AK-MCS, divided
by the crude MCS reliability index.
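In symbols, with βMCS and βAK-MCS used here as shorthand for the reliability indices obtained from the crude MCS and from AK-MCS, respectively, this error measure can be written as

εβ = |βMCS − βAK-MCS| / βMCS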
From Table 2, it is observed that AK-MCS with KO learning function predicts
the reliability results with an error range of 0.22–0.68%, while the analysis with
U learning function provides 0.31% error and the one with EFF learning function
provides 0.37% error. The analysis with the KO learning function requires 53 to 71
added points compared to the analysis with U and EFF learning functions that require
66 and 63 added points, respectively. AK-MCS with KO5ST99.9 is the most accurate
analysis among the studied cases with 0.22% error, and AK-MCS with KO2ST95 is
the most efficient analysis requiring 53 added points with a fair error of 0.56%.
Fig. 6 Comparison of convergence for Example 2: a probability of failure and b reliability index
5 Conclusion
This paper investigates the validity and efficiency of a new learning function named
KO learning function for active learning Kriging Monte Carlo simulation (AK-
MCS). The AK-MCS is a strong replacement for the crude MCS, because the former
provides an efficient, yet accurate, solution compared to the latter. In this paper, the
KO function was introduced, and its stopping criterion was illustrated through two
computational examples: a mathematical function with two random variables and a
reinforced concrete column reliability analysis with seven random variables.
The analysis of the studied examples showed the validity of the KO learning
function for AK-MCS analysis, where the analysis captured the limit state function
properly and assessed the reliability index and the probability of failure with a good
degree of precision compared to crude MCS. Also, AK-MCS with KO learning
function was compared to AK-MCS with EFF and U learning functions. The results
indicated that the KO function enhanced the efficiency and accuracy of the analysis
compared to the other two learning functions, for the studied examples. The study
did not cover the effect of randomness in the initial DoE and the effectiveness of
other AK configurations such as correlation and regression functions, which will be
addressed in future studies.
References
1. ACI (American Concrete Institute) (2019) Building requirements for structural concrete and
commentary. ACI 318-19, Farmington Hills, MI, USA
2. Bichon BJ, Eldred MS, Swiler LP, Mahadevan S, McFarland JM (2008) Efficient global
reliability analysis for nonlinear implicit performance functions. AIAA J 48(10):2459–2468
3. Buckley E, Khorramian K, Oudah F (2021) Application of adaptive Kriging method in bridge
girder reliability analysis. In: CSCE 2021 annual conference, Virtual, STR288: 1–8
4. Echard B, Gayton N, Lemaire M (2011) AK-MCS: an active learning reliability method
combining Kriging and Monte Carlo simulation. Struct Saf 33(1):145–154
5. Kaymaz I (2005) Application of kriging method to structural reliability problems. Struct Saf
27(2):133–151
6. Khorramian K, Oudah F (2022a) Active learning Kriging-based reliability for assessing
the safety of structures: theory and application. In: Leveraging artificial intelligence into
engineering, management, and safety of infrastructure. Taylor & Francis (CRC)
7. Khorramian K, Oudah F (2022b) A new learning function for active learning Kriging reliability
analysis using a probabilistic approach: KO function. Structural safety, Under review
8. Khorramian K, Sadeghian P, Oudah F (2022c) Slenderness limit for GFRP-RC columns: a
reliability-based approach. ACI Struct J, Accepted
9. Khorramian K, Oudah F, Sadeghian P (2021a) Reliability-based evaluation of the stiffness
reduction factor for slender GFRP reinforced concrete columns. In: CSCE annual conference,
Virtual, STR181: 1–8
10. Khorramian K, Sadeghian P, Oudah F (2021b) Second-order analysis of slender GFRP rein-
forced concrete columns using artificial neural network. In: CSCE annual conference, Virtual,
STR180: 1–9
11. Khorramian K, Sadeghian P, Oudah F (2021c) A preliminary reliability-based analysis for
slenderness limit of FRP reinforced concrete columns. In: 8th International conference on
advanced composite materials in bridges and structures, ACMBS, Virtual, 73–78
12. Lophaven SN, Nielsen HB, Sondergaard J (2002a) DACE: a Matlab Kriging toolbox. Technical
Report IMM-TR-2002-1
13. Lophaven SN, Nielsen HB, Søndergaard J (2002b) DACE: a Matlab kriging toolbox. Vol. 2.
IMM informatics and mathematical modelling. The Technical University of Denmark, 1–34
14. Lv Z, Lu Z, Wang P (2015) A new learning function for Kriging and its applications to solve
reliability problems in engineering. Comput Math Appl 70(5):1182–1197
15. Nowak AS, Szerszen MM (2003) Calibration of design code for buildings (ACI 318): Part
1—Statistical models for resistance. ACI Struct J 100(3):377–382
16. Oudah F, El Naggar MH, Norlander G (2019) Unified system reliability approach for single and
group pile foundations-theory and resistance factor calibration. Comput Geotech 108:173–182
17. Oudah F, Khorramian K, Sadeghian P (2021) Optimized reliability-based approach for FRP
strengthening of existing concrete columns deficient under axial load. In: 8th International
conference on advanced composite materials in bridges and structures, ACMBS, Virtual, 153–
158
18. Shi Y, Lu Z, He R, Zhou Y, Chen S (2020) A novel learning function based on Kriging for
reliability analysis. Reliab Eng Syst Saf 198:106857
19. Shield CK, Galambos TV, Gulbrandsen P (2011) On the history and reliability of the flexural
strength of FRP reinforced concrete members in ACI 440.1 R. ACI Special Publ 275:1–18
20. Sun Z, Wang J, Li R, Tong C (2017) LIF: a new Kriging based learning function and its
application to structural reliability analysis. Reliab Eng Syst Saf 157:152–165
21. Zhang X, Wang L, Sørensen JD (2019) REIF: a novel active-learning function toward adaptive
Kriging surrogate models for structural reliability analysis. Reliab Eng Syst Saf 185:440–454
22. Zhang X, Wang L, Sørensen JD (2020) AKOIS: an adaptive Kriging oriented importance
sampling method for structural system reliability analysis. Struct Saf 82:101876
Effect of Random Fields in Stochastic
FEM on the Structural Reliability
Assessment of Pile Groups in Soil
Abstract Stochastic finite element method (SFEM) is used to consider the spatial
variability in the material properties of the model components by utilizing random
fields (RF). When combined with a robust reliability framework of analysis, SFEM
offers significant advantages as compared with conventional deterministic FE anal-
ysis, particularly for complex systems with variability in the material properties.
SFEM has been used in the literature to examine the reliability of pile group foundations
in soil by considering the spatial variability of the soil while discarding the
variability in the properties of the piles and the pile cap. The objective of this paper is
to recommend the minimum number of RFs required to effectively model the resistance
of a pile group consisting of four 10-m-long steel piles embedded in loose sand
and evaluated at the serviceability limit state (SLS). A limited parametric study of the
following cases was conducted: deterministic analysis without RF, SFEM with only
soil RFs (friction angle and stiffness), SFEM with soil and pile RFs, and SFEM with
soil, pile, and pile cap RFs. Analysis results concluded that discarding the spatial
variability in the piles and pile cap has a marginal impact on the resistance model at
SLS, and thus, an SFEM that considers only the variability in the soil suffices for the
considered cases. The SFEM model with soil RFs was utilized to conduct a crude
Monte Carlo simulation (MCS) to evaluate the reliability of the pile group at SLS.
1 Introduction
Accurate reliability analysis of laterally loaded pile groups in soil requires advanced
computational methods to build the resistance model by considering the variability
in the material properties of the soil and the interaction of the pile group with soil.
Stochastic finite element method (SFEM) can be utilized to build the resistance model
because it considers the spatial variability in the material properties of the model
components through the use of random fields (RF). A literature review indicates the
following gaps in research related to the use of SFEM in assessing the reliability of
geotechnical systems [2, 14, 16, 17]: (a) lack of work that considered the combined effect of
the spatial variability in the properties of soil, piles, and the pile cap on the reliability
analysis of laterally loaded pile groups, (b) lack of work that considered the three-
dimensional (3D) variability of the soil behavior as opposed to representing soil
properties as one or two dimensional RFs; and (c) lack of work that considered the
lateral load as a random variable for studies where the crude Monte Carlo simulation
(MCS) was conducted using SFEM.
The objective of this paper is to address the identified research gaps by recom-
mending the minimum number of RFs required to effectively model the resistance
of a laterally loaded pile group consisting of four 10-m-long steel piles embedded in
loose sand and evaluated at the serviceability limit state (SLS). A limited parametric
analysis was conducted to assess the effect of RFs on the model response. Four
cases were considered: deterministic analysis without RF, SFEM with only soil RFs,
SFEM with soil and pile RFs, and SFEM with soil, pile, and pile cap RFs. The lateral
load–displacement behavior, stress distributions in the soil, and the bending moment
of the piles were compared to recommend the most suitable model to be considered
for reliability analysis. Finally, MCS was conducted using the recommended SFEM
model.
The reliability analysis was conducted for the performance function expressed in
Eq. 1 for SLS.
G(X) = RSFEM(X) − L(X)    (1)
where G(X) is the performance function, RSFEM(X) is the resistance model using
SFEM, L(X) is the lateral load, and X is a vector of random variables and RFs.
A two-step reliability analysis was utilized in this study [6–8]. The first step
includes an assessment of the mean of the wind load based on a deterministic FE
analysis by setting the mean resistance at SLS equal to the mean of the wind load.
For the second step, crude MCS was conducted by considering randomly generated
realizations of the RFs.
The lateral load model L(X) includes the wind load (i.e., W (X)) and the wind load
transformation effect (W T (X)) as expressed in Eq. 2.
The bias, coefficient of variation (COV), and distribution of W (X) were considered
as 1.049, 0.103, and Gumbel distribution, respectively, while they were considered
as 0.68, 0.22, and lognormal distribution, respectively, for W T (X) [3].
The performance function in Eq. 1 was evaluated at the SLS, where the resistance
was considered as the lateral load corresponding to 25 mm displacement in the lateral
load–displacement curve. The development of the 3D SFEM simulation used to build
the resistance model is explained in the following subsections.
In this study, the soil stiffness, soil friction angle, yield stress of steel piles, and the
stiffness of the concrete cap were considered as 3D RFs. The realizations of the RFs
were generated using the expansion optimal linear estimation (EOLE) method [9,
15], where the summation of M number of eigenvectors of the covariance matrix
is used along with M standard normal random variables for each realization of the fields.
Fig. 1 Section cut through the FE model for pile group in loose sand
Since EOLE works only for Gaussian RFs while lognormal RFs were used
in this study, Nataf transformation [10, 11] was used to convert the lognormal to the
normal RFs. More information on RF realizations can be found in literature [4, 5].
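A simplified, one-dimensional sketch of this type of field generation is shown below. It uses a direct truncated eigen-expansion of the correlation matrix (a simplified stand-in for the EOLE discretization of [9, 15]) together with the lognormal mapping mentioned above; the grid, the single spatial dimension, and the use of only six expansion terms are illustrative assumptions, while the friction angle mean of 30.8°, COV of 0.2, and correlation length of 5 m are taken from the values reported below.

```python
import numpy as np

def lognormal_field_realizations(coords, mean, cov, corr_len, m_terms, n_real, seed=0):
    """Truncated eigen-expansion of a Gaussian correlation model, mapped to lognormal."""
    rng = np.random.default_rng(seed)
    d = coords[:, None] - coords[None, :]
    rho = np.exp(-(d / corr_len) ** 2)                  # Gaussian correlation function
    lam, phi = np.linalg.eigh(rho)
    idx = np.argsort(lam)[::-1][:m_terms]               # keep the M largest eigenpairs
    lam, phi = lam[idx], phi[:, idx]
    # lognormal parameters from the target mean and COV (Nataf-type marginal mapping)
    sln = np.sqrt(np.log(1 + cov ** 2))
    mln = np.log(mean) - 0.5 * sln ** 2
    xi = rng.standard_normal((n_real, m_terms))         # M standard normal variables per realization
    gauss = xi @ (np.sqrt(lam)[:, None] * phi.T)        # approximately unit-variance Gaussian field
    return np.exp(mln + sln * gauss)                    # lognormal realizations at the grid points

# e.g. soil friction angle along a 10 m depth: mean 30.8 deg, COV 0.2, correlation length 5 m
z = np.linspace(0.0, 10.0, 50)
fields = lognormal_field_realizations(z, mean=30.8, cov=0.2, corr_len=5.0, m_terms=6, n_real=3)
print(fields.shape)   # (3, 50)
```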
All RFs were lognormally distributed. The Gaussian correlation function with a
correlation length of 5 m was considered for soil stiffness and friction angle, while
a correlation length of 1 m was considered for concrete cap elastic modulus and
yield stress of steel piles. The mean of the soil modulus was 1625z + 1 kPa, where
z is the depth of the soil; a bias of 1 and a field coefficient of variation of 0.2 were
also considered for the modulus of elasticity of the soil. For the soil friction angle, the same
COV and bias as for the soil modulus of elasticity were adopted with a mean value of
30.8°. The bias, mean, and COV were considered as 1.136, 26.6 GPa, and 0.1078,
respectively, for the concrete cap stiffness, and 1.17, 350 MPa, and 0.229 for the
yield strength of the steel piles [13]. Sample realizations of the soil random fields
for friction angle and stiffness are shown in Fig. 2a and b, respectively, and sample
realizations for pile cap stiffness and yield stress of piles are shown in Fig. 3a and b,
respectively.
Fig. 2 A sample of random field realizations for soil properties: a friction angle, and b soil stiffness
Fig. 3 A sample of random field realizations for the pile group: a pile cap stiffness, and b yield
stress of piles
without RF, SFEM with only soil RFs, SFEM with soil and pile RFs, and SFEM with
soil, pile, and pile cap RFs.
The pile group resistance at the SLS limit of 25 mm for the models considering
RFs was 19% less than the model without RFs. The initial 2 mm displacement for
the models with RFs occurs due to the unbalanced settlement of the pile group under
its own weight when the spatial variation in the soil stiffness is considered in the
SFEM. The soil von Mises stresses are shown in Fig. 5 for three of the considered
cases, where soil stresses are maximum when RFs are not considered in the analysis
(i.e., No RF).
Applying RF to the pile cap and piles did not result in a notable change in the
bending moment of piles as shown in Fig. 6. In general, for fixed-headed long piles
installed in noncohesive soil, failure is expected at the pile rather than the soil (i.e.,
plastic hinges form at the maximum moment prior to the mobilization of the soil
capacity). However, for this study, the capacity of the group was determined at a
specific SLS failure criterion (i.e., displacement of 25 mm), where piles are not
expected to yield. It can be concluded that considering the spatial variability of the
stiffness of the pile cap and the yield strength of piles is not required for the studied
problem because it marginally impacts the response.
4 Reliability Analysis
The reliability analysis was conducted by only considering the 3D spatial variability
in the soil response based on the recommendations of the parametric analysis included
in Sect. 3. Wind and wind load transformation were considered as random variables,
and soil stiffness and friction angle were considered as RFs. MCS was used to conduct
the reliability analysis [12]. For each trial of the MCS, 3D RFs of the soil stiffness
and friction angle were generated, as shown in Fig. 7. Six terms of eigenvectors were
considered for the EOLE procedure because the associated variance is less than the recommended value of 5% in literature [4, 5].
Fig. 7 MCS trials for soil properties: a soil friction angle and b soil stiffness
The MCS was conducted using 173
trials which led to the assessment of a reliability index of 1.20 and a probability of
failure of 0.1149.
5 Conclusion
The objective of the paper was to recommend the minimum number of random fields
required to assess the resistance of a four-pile group foundation in loose sand at the
serviceability limit state (SLS) using a reliability framework of analysis. Nonlinear
three-dimensional (3D) stochastic finite element models (SFEM) were developed
to conduct a limited parametric study aimed at investigating the sensitivity of the
SFEM model to spatial variability in the soil properties, pile strength, and pile cap
stiffness using expansion optimal linear estimation (EOLE) random fields. Analysis
results suggest that considering the spatial variability in the soil properties (friction
angle and stiffness) suffices for the reliability analysis because the resistance model
was insensitive to variations in the pile and concrete cap properties.
Monte Carlo simulation (MCS) was conducted to assess the reliability index at
SLS for the considered four-pile group, where the resistance model was based on the
SFEM with soil random fields and the load model considered the variability in the
applied wind load. The reliability analysis indicated a reliability index of 1.2 for the
considered pile group configuration. The developed framework of analysis including
random field generation and MCS will be utilized in future research to evaluate the
reliability index at ULS using adaptive learning techniques.
Acknowledgements The authors would like to thank Dalhousie University and Mathematics of
Information Technology and Complex Systems (Mitacs) and Norlander Oudah Engineering Ltd
(NOEL) for supporting this research program.
References
1 Introduction
Let G(X i , Z(t), V (s), t, s) describe the response of a structural system or a compo-
nent within a structural system, where X i is a vector of the ith realizations of the
time-independent random variables, Z(t) is a vector of realizations of the time-
dependent random processes at time t, and V (s) is a vector of realizations of the
random fields at the spatial dimension s. Let gi (Z(t), V (s), t, s) be a conditional
response evaluated for sample X i . G(X i , Z(t), V (s), t, s) and gi (Z(t), V (s), t, s)
are related as expressed in Eq. 1.
The procedure to assess the reliability of existing structures using the proposed
approach consists of the following five steps.
Step 1. Generate n realizations of X, Z, and V considered in Eq. 1. Conduct an SFE
analysis for the ith set of the n realizations to predict the response gi
of the structure.
Step 2. Train n LSTM networks based on the response gi evaluated in Step 1. The
optimum number of n networks needed to achieve a predefined confidence
interval can be determined using adaptive techniques or a trial-and-error
approach.
Step 3. Generate N augmented gi responses per n trained LSTM networks using
MC sampling. The total number of predicted responses per time step is n
times N . Responses obtained in this step are referred to as “augmented”
since they are based on LSTM prediction and not a direct evaluation of the
performance function.
Step 4. Perform GP regression to form a predictor matrix for G in Eq. 1 of size k
by n, where k refers to the considered time increment in months or years.
Elements within the GP predictor matrix resemble trained GP regression
networks based on regression of n by l data, where l is the number of
realizations of the independent random variables used in training the LSTM
models.
Step 5. Estimate the probability of failure and the reliability index using MC simula-
tion by using the predictor developed in Step 4. Set a limit for the indicator
function and sum the number of failed points, N f (i.e., number of real-
izations not satisfying the limit set by the indicator). The probability of
exceedance (i.e., probability of failure) is the ratio of N f to N . The indi-
cator limit can be zero or non-zero depending on the limit state function
considered in the analysis.
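As a brief illustration of Step 5, the indicator-based estimate and its conversion to a reliability index can be sketched as follows; the predicted responses fed to the function would come from the GP-based predictor of Step 4, and the placeholder samples in the usage line are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def mc_reliability(g_samples, limit=0.0):
    """Step 5: count realizations violating the indicator limit and convert to a reliability index."""
    g_samples = np.asarray(g_samples, float)
    n_f = np.count_nonzero(g_samples < limit)      # number of failed points, Nf
    pf = n_f / g_samples.size                      # probability of exceedance = Nf / N
    beta = -norm.ppf(pf) if n_f > 0 else np.inf    # reliability index
    return pf, beta

# e.g. 1e5 predicted responses drawn from the predictor (placeholder values here)
g_hat = np.random.default_rng(2).normal(loc=3.3, scale=1.0, size=10**5)
print(mc_reliability(g_hat))
```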
The procedure described in Sect. 2.2 was applied to assess the reliability of the
mathematical function expressed in Eq. 3 and compared with crude MC analysis for
verification. The example is based on the work by Li and Wang [7], but with different
means and standard deviations for the random variables. The example included X and
Z only. Two time-independent normally distributed random variables and three time-
dependent stationary Gaussian processes were considered. The mean and standard
deviation of X and Z are included in Table 1. Exponential autocorrelation function
ρ was used to correlate between time tk and tk−1 to build the correlation matrix of the
processes as expressed in Eq. 4. The autocorrelation length, λ, was set to 0.01, 0.005,
and 0.005 for z1, z2, and z3, respectively. The Gaussian processes were generated
using direct decomposition of the complete covariance matrix.
ρ(tk, tk−1) = exp(−(tk − tk−1)² / λ)    (4)
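A short sketch of the direct-decomposition sampling described above is given below: the correlation matrix of Eq. 4 is assembled on a time grid and factorized with a Cholesky decomposition, and standard normal vectors are mapped through the factor. The unit variance, time grid, and number of realizations are illustrative assumptions.

```python
import numpy as np

def gaussian_process_samples(t, lam, n_real, seed=0):
    """Sample a zero-mean, unit-variance stationary process with the correlation of Eq. 4."""
    rng = np.random.default_rng(seed)
    dt = t[:, None] - t[None, :]
    rho = np.exp(-dt ** 2 / lam)                               # correlation matrix from Eq. 4
    chol = np.linalg.cholesky(rho + 1e-10 * np.eye(t.size))    # direct decomposition of the covariance
    return rng.standard_normal((n_real, t.size)) @ chol.T

t = np.linspace(0.0, 1.0, 200)
z1 = gaussian_process_samples(t, lam=0.01, n_real=5)           # autocorrelation length used for z1
print(z1.shape)   # (5, 200)
```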
Ten LSTM models were utilized (n = 10 in Step 2 in Sect. 2.2). Analysis results
indicated a reliability index of 3.30 using the proposed procedure for a performance
indicator of G < 0 with 10⁵ MC simulations, while crude MC analysis with 10⁶ trials
yielded a reliability index of 3.41. The difference in the reliability indexes is marginal
and considered acceptable for structural engineering applications. The accuracy of
the proposed procedure for this example is, therefore, validated.
Table 2 Input statistical parameters of the random variables and fields considered in Example 2
Var. | Definition | Distribution | Mean | Bias | COV
x1 | Sustained load | Normal | 46.28 kN | 1.00 | 0.010
x2 | Concrete compressive strength | Normal | 36.70 MPa | 1.14 | 0.140
z1(t) | Variable load | Gaussian process | 30.85 kN | 1.00 | 0.170
z2(s) | Modulus of GFRP at t = 0 | Lognormal field | 45.80 GPa | 1.00 | 0.068
z3(s) | Strength of GFRP at t = 0 | Lognormal field | 756 MPa | 1.00 | 0.068
z4(s, t) | Degradation rate | Lognormal temporal field | 0.008* | 1.00 | 0.500
* unit of GPa/k for degradation of GFRP modulus or MPa/k for degradation of GFRP strength, where k is the time interval
The random variables and the corresponding distribution functions are summarized
in Table 2. Two time-independent random variables, one process, two spatial random
fields, and one time–space random field were considered in the analysis. The mean
values for the load were obtained to yield a utilization ratio of unity (i.e., demand-
to-capacity ratio of 1 calculated based on CSA S806-17 [4]) and to induce a mean
sustained load of 40% in the GFRP reinforcement within the constant moment region
when subjected to four-point bending. The mean values for the material properties
and the degradation rate were based on Esmaeili et al. [5]. The bias and COV were
based on literature and assumptions. The mean of z 4 (s, t) corresponds to the constant
moment region and linearly varies to zero at the beam supports.
A three-dimensional (3D) nonlinear SFE model of the studied beam was devel-
oped using ABAQUS/Explicit [1]. The model was validated against the experimental
testing with a marginal percentage difference of 6% for the moment of resistance.
Figure 1 presents the geometry, loading, and mesh configuration of the SFE model.
Eight-node brick elements with reduced integration (C3D8R) and adaptive meshing
(ALE) were used for the concrete volume to avoid element distortion and
hourglassing. The GFRP cage (i.e., longitudinal and transverse rebars) was modeled
using beam elements. An embedded region constraint was specified to define the host
(concrete) and embedded (GFRP) regions in the model. The concrete section was
modeled using the concrete damaged plasticity model (CDPM). The stress–strain
relationships were obtained based on the CEB-FIP Model Code [2]. The GFRP rebars were
modeled as linear elastic.
The GFRP modulus of elasticity, E GFRP (s, t), and the GFRP strength, f GFRP (s, t),
used to model the material response of the tension GFRP reinforcement are consid-
ered as random fields calculated as expressed in Eqs. 5 and 6, respectively. The
variables E GFRP (s, t) and f GFRP (s, t) are fully correlated. An exponential correla-
tion function was used to build the covariance matrix of z 1 , z 2 , z 3 , and z 4 , where z 1
was correlated in one temporal dimension, z 2 and z 3 were correlated in two spatial
dimensions, and z4 was correlated in two spatial and one temporal dimensions. z1
to z4 were generated using the EOLE method. The correlation length was 1, 0.1,
0.01, 0.01, 0.01, and 1 for z1, z2 (z-dimension), z2 (x-dimension), z4 (z-dimension),
z4 (x-dimension), and z4 (time dimension), respectively, where the Cartesian
dimensions are shown in Fig. 1. z3 was fully correlated to z2.
Two sample realizations of E GFRP (s, t) are shown in Fig. 2. The sample real-
izations show the spatial variability of the modulus of elasticity along the lengths
of the tension reinforcing GFRP rebars indicated in Fig. 1 over time. The degrada-
tion profile of E GFRP (s, t) follows the shape of the constant moment region because
the creep-rupture effect is a function of the sustained force in the GFRP rebars.
The load protocol consists of applying the sustained load, followed by applying
the variable load. The SFE model predicted the change in the demand force in the
tension reinforcement over 10 years in increments of 6 months (k of 20 in step 4 of
Sect. 2.2) due to concrete cracking and the cycles of loading and unloading generated
by the variable load.
The analysis results are presented in terms of the five solution steps described in
Sect. 2.2. Ten SFE models were built following the procedure outlined in Sect. 4.3
and based on the statistical parameter in Sect. 4.2. The response gi obtained from the
SFE models corresponded to the minimum of the difference between f GFRP (s, t)
described in Eq. 6 and the FE obtained GFRP stress in tension reinforcement (Step
1). Ten LSTM models were trained to predict the response gi , where sample trained
LSTM models are shown in Fig. 3 for a period of 10 years (Step 2), where the “actual”
and “predicted” gi correspond to the SFE obtained response and LSTM trained
response, respectively. One hundred thousand augmented responses were obtained
using each LSTM model (Step 3). GP regression was performed on the augmented
responses to form a predictor of G (Step 4). MC simulation was performed with 10⁵
trials using the predictor formed in Step 4 to estimate the reliability index, where
the indicator corresponded to having G < 0. The reliability index was 2.30 for a
reference period of 10 years. Future work will examine the accuracy of the reliability
estimation based on the number of trained LSTM networks.
5 Conclusion
Acknowledgements The authors would like to acknowledge the financial support of Dalhousie
University and the Natural Sciences and Engineering Research Council of Canada.
References
Abstract Municipal solid waste management has emerged as a pressing challenge in
contemporary built environments in recent years due to the rapid increase in population
and urbanization. Hence, this research aims to develop a hybrid sine cosine algorithm-
based feed-forward artificial neural network model for forecasting waste quantities
in Poland. In addition, the developed hybrid model is compared against the
classical feed-forward artificial neural network model. The performance evaluation
analysis is explored using the indicators of mean bias error, root-mean-squared error,
Pearson correlation coefficient, Willmott’s index of agreement, and coefficient of effi-
ciency. Test results illustrated that the developed hybrid feed-forward artificial neural
network model trained using sine cosine algorithm significantly outperformed the
classical neural network model. It can be argued that the developed model could
assist decision-makers in the proper management of growing quantities of municipal
solid wastes.
N. Elshaboury
Construction and Project Management Research Institute, Housing and Building National
Research Centre, Giza 12311, Egypt
A. Al-Sakkaf (B)
Department of Buildings, Civil and Environmental Engineering, Concordia University, Montreal,
QC H3G 1M8, Canada
e-mail: alsakkaf.abobakr@concordial.ca
Department of Architecture and Environmental Planning, College of Engineering and Petroleum,
Hadhramout University, 50512 Mukalla, Yemen
G. Alfalah
Department of Architecture and Building Science, College of Architecture and Planning, King
Saud University, Riyadh 11421, Saudi Arabia
E. Mohammed Abdelkader
Structural Engineering Department, Faculty of Engineering, Cairo University, Giza 12613, Egypt
1 Introduction
A neural network is a nonlinear system that is inspired by the structure of the brain.
It is often made up of a large number of neurons grouped in layer(s) and linked
by weights and biases. Learning and testing are the two major stages in ANN. The
learning phase teaches the network how to determine a connection between input(s)
and output(s). Based on the training network, the testing phase predicts the output(s)
from the input(s). ANN finds patterns between input and output variables, which aids
in comprehending the system’s predictions. It can also detect changes in the strength
of variables’ correlations. Neural network models are nonlinear, which implies that
changes in network outputs are not proportional to changes in input variables. These
models can capture complicated data patterns and characteristics that linear models
cannot [29].
In this study, a neural network is used to forecast the MSW quantities across Polish
cities. The weights and biases of the FFNN are adjusted based on the differences
between predicted and goal values. As a result, the neural network may be trained
to determine the optimum weights and biases. Combining neural networks and opti-
mization algorithms improves problem-solving capability while avoiding overfitting and
local minima [21].
Fig. 1 Flowchart of the hybrid FFNN model: prepare the input and output parameters, develop the FFNN model, train the FFNN using the SCA, simulate and test the model, and report the findings using assessment metrics
In this study, the SCA is utilized to train the FFNN model to
improve its performance. This algorithm is widely recognized as among the most
popular and efficient FFNN training techniques [27, 33]. More information on the
SCA can be found in the literature [28]. Figure 1 shows the flowchart of the hybrid
FFNN model. To begin training the network, the optimization algorithm establishes
the weights and calculates their fitness functions. The network fitness is interpreted
in this study by calculating the MSE as shown in Eq. 1. When the global best solution
(i.e., the minimal error function) is found, the optimization process comes to an end.
MSE = (1/n) Σᵢ (pᵢ − oᵢ)²,    (1)
where n refers to the number of records, the sums run over i = 1, …, n, pᵢ and oᵢ refer to
the predicted and observed values, respectively, and ō and p̄ denote their mean values.
MBE = (1/n) Σᵢ |pᵢ − oᵢ|,    (2)
RMSE = √[(1/n) Σᵢ (oᵢ − pᵢ)²],    (3)
R = Σᵢ (oᵢ − ō)(pᵢ − p̄) / √[Σᵢ (oᵢ − ō)² · Σᵢ (pᵢ − p̄)²],    (4)
WI = 1 − Σᵢ (oᵢ − pᵢ)² / Σᵢ (|pᵢ − ō| + |oᵢ − ō|)²,    (5)
CE = 1 − Σᵢ (pᵢ − oᵢ)² / Σᵢ (oᵢ − ō)²,    (6)
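For reference, the evaluation measures of Eqs. 1–6 can be implemented directly as follows; note that Eq. 2 is coded with the absolute differences exactly as written above, and the sample arrays in the usage line are arbitrary placeholders, not the study data.

```python
import numpy as np

def metrics(o, p):
    """Evaluation measures of Eqs. 1-6 for observed (o) and predicted (p) arrays."""
    o, p = np.asarray(o, float), np.asarray(p, float)
    n = o.size
    mse  = np.mean((p - o) ** 2)                                    # Eq. 1
    mbe  = np.mean(np.abs(p - o))                                   # Eq. 2
    rmse = np.sqrt(np.mean((o - p) ** 2))                           # Eq. 3
    r    = (np.sum((o - o.mean()) * (p - p.mean()))
            / np.sqrt(np.sum((o - o.mean()) ** 2)
                      * np.sum((p - p.mean()) ** 2)))               # Eq. 4
    wi   = 1 - np.sum((o - p) ** 2) / np.sum((np.abs(p - o.mean())
                                              + np.abs(o - o.mean())) ** 2)   # Eq. 5
    ce   = 1 - np.sum((p - o) ** 2) / np.sum((o - o.mean()) ** 2)   # Eq. 6
    return {"MSE": mse, "MBE": mbe, "RMSE": rmse, "R": r, "WI": wi, "CE": ce}

# Example with arbitrary placeholder values (not the study data):
print(metrics([10.0, 20.0, 30.0, 40.0], [12.0, 18.0, 33.0, 39.0]))
```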
3 Model Development
The primary goal of this research is to anticipate MSW volumes in Polish cities based
on economic, social, and demographic aspects. The proposed framework’s flowchart
is illustrated in Fig. 2. The main components of the framework include preparing
the data matrix and determining the related input and output parameters. Population
(capita), employment-to-population ratio (percent), revenue per capita, number of
entities by type of business activity, and number of entities enlisted in the official
national register of business entities (REGON) per 10,000 population are among
the input factors, while total MSW quantity represents the output factor. The data
records are then split into two classes for training and testing purposes. The neural
network models are constructed for this purpose, and their prediction performance
is assessed using assessment measures. Finally, based on the presented results and
findings, the optimal forecasting model is recommended.
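A rough sketch of how the hybrid FFNN–SCA model described above can be realized is given below: the weights and biases of a single-hidden-layer network are flattened into one vector, and a population of such vectors is updated with the sine cosine update rule of Mirjalili [28], using the MSE of Eq. 1 as the fitness. The tanh activation, search bounds, and synthetic data are illustrative assumptions; the authors' model was built in MATLAB, whereas this sketch is in Python.

```python
import numpy as np

rng = np.random.default_rng(0)

def unpack(theta, n_in, n_hid):
    """Split a flat parameter vector into the FFNN weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid]; i += n_hid
    b2 = theta[i]
    return W1, b1, W2, b2

def predict(theta, X, n_hid):
    """Single-hidden-layer FFNN with a tanh hidden layer and a linear output neuron."""
    W1, b1, W2, b2 = unpack(theta, X.shape[1], n_hid)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(theta, X, y, n_hid):
    """Network fitness interpreted as the MSE of Eq. 1."""
    return np.mean((predict(theta, X, n_hid) - y) ** 2)

def sca_train(X, y, n_hid=10, pop=50, iters=200, a=2.0, bound=2.0):
    """Train the FFNN weights with the sine cosine algorithm (population 50, 200 iterations)."""
    dim = X.shape[1] * n_hid + 2 * n_hid + 1
    P = rng.uniform(-bound, bound, (pop, dim))
    best = min(P, key=lambda p: fitness(p, X, y, n_hid)).copy()
    best_fit = fitness(best, X, y, n_hid)
    for t in range(iters):
        r1 = a - t * a / iters                      # linearly decreasing amplitude
        for i in range(pop):
            r2 = 2.0 * np.pi * rng.random(dim)
            r3 = 2.0 * rng.random(dim)
            r4 = rng.random(dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - P[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - P[i]))
            P[i] = np.clip(P[i] + step, -bound, bound)
            f = fitness(P[i], X, y, n_hid)
            if f < best_fit:                        # keep the global best solution
                best, best_fit = P[i].copy(), f
    return best, best_fit

# Illustrative use on standardized synthetic data (not the Polish data set).
X = rng.normal(size=(30, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=30)
theta, err = sca_train(X, y)
print(f"training MSE: {err:.4f}")
```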
4 Case Study
The data for this study is extracted from a prior research project in Poland [25]. Table 1
shows the waste generation characteristics of several Polish cities with various social
and economic elements in 2019. The statistical data on MSW is depicted in Table
2, which shows that the wastes do not follow a normal distribution and are rather
right-skewed due to their positive skewness value.
Table 2 Statistical analysis of input and output parameters for MSW forecast
Statistical measure | Population | Employment-to-population ratio | Revenue per capita | Number of entities by type of business activity | Number of entities enlisted in REGON per 10,000 population | Total MSW (Mg)
Mean | 275,634.1 | 59.1 | 6650.6 | 8603.5 | 1453.3 | 35,914.6
Maximum | 1,790,658.0 | 62.4 | 10,154.9 | 60,948.0 | 2548.0 | 129,111.6
Minimum | 1839.0 | 56.4 | 4449.9 | 92.0 | 856.0 | 167.4
Median | 99,350.0 | 58.9 | 6855.1 | 4197.0 | 1384.0 | 13,441.0
Standard deviation | 395,943.5 | 1.4 | 1352.4 | 13,022.0 | 443.5 | 41,816.4
Skewness | 2.6 | 0.8 | 0.2 | 3.0 | 0.8 | 1.0
Kurtosis | 8.4 | 0.6 | 0.4 | 10.8 | 0.2 | −0.5
5 Results
To ensure a fair comparison of the models, the FFNN and FFNN–SCA models
assume the same number of hidden neurons (i.e., 10). The SCA assumes a population
size and a maximum number of iterations of 50 and 200, respectively. The neural
network models are built using MATLAB R2019a. Figure 4 illustrates a plot of
the actual and predicted MSW across Polish cities. MBE, RMSE, R, WI, and CE are
five distinct metrics used to evaluate the performance of neural network models. As
depicted in Table 3, FFNN and FFNN–SCA models have MBE values of 30,652.04
and 16,803.80, respectively. RMSE values for the FFNN and FFNN–SCA models
are 38,571.68 and 22,455.08, respectively. The R-values of the FFNN and FFNN–
SCA models are 0.68 and 0.85, respectively. The FFNN–SCA model has a WI of
0.92, which is much higher than the FFNN model (WI = 0.78). Similar to R and WI
results, FFNN–SCA performs better in terms of CE. This illustrates that the FFNN
model trained using the SCA algorithm outperforms the standard FFNN model. As
a consequence, the FFNN–SCA model may be used to anticipate MSW volumes in
Poland using social and economic factors.
The results of the FFNN–SCA model are compared to those given in the literature,
as given in Table 4. Noori et al. [30] employed ANN and principal component
regression to estimate MSW output in Tehran to make short-term weekly predictions.
Fig. 4 Plot of the actual and predicted MSW across Polish cities
Table 3 Performance metrics of the standalone and hybrid neural network models
Performance metric | FFNN | FFNN–SCA
MBE | 30,652.04 | 16,803.80
RMSE | 38,571.68 | 22,455.08
R | 0.68 | 0.85
WI | 0.78 | 0.92
CE | 0.11 | 0.70
Table 4 Comparison of the findings of this study to those reported in the literature
Authors | Performance metrics
Noori et al. [30] | AARE = 0.044 and R = 0.837
Adeleke et al. [4] | R-values for organic, paper, plastics, and textile wastes = 0.916, 0.862, 0.834, and 0.826
Proposed research study | MBE = 16,803.80, RMSE = 22,455.08, R = 0.85, WI = 0.92, and CE = 0.70
R and average absolute relative error (AARE) for the ANN were 0.837 and 0.044,
respectively. These metrics were deemed superior to those obtained through principal
component regression (R = 0.445 and AARE = 0.066). Adeleke et al. [4] used ANN
to predict the physical waste streams in Johannesburg based on climatic parameters.
The best topologies for forecasting organic, paper, plastics, and textile waste have
R-values of 0.916, 0.862, 0.834, and 0.826, respectively. The MBE, RMSE, R, WI,
and CE measures for the FFNN–SCA model in this study are 16,803.80, 22,455.08,
0.85, 0.92, and 0.70, respectively. As a result, the suggested model improves on
previously published performance measures.
6 Conclusions
might be enhanced further by selecting the most essential input elements in the data
set using dimensionality-reduction approaches.
References
1. Abdel-Shafy HI, Mansour MS (2018) Solid waste issue: sources, composition, disposal,
recycling, and valorization. Egypt J Pet 27(4):1275–1290
2. Abdulredha M, Abdulridha A, Shubbar AA, Alkhaddar R, Kot P, Jordan D (2020) Estimating
municipal solid waste generation from service processions during the Ashura religious event.
IOP Conference Series: Materials Science and Engineering 671(1):012075
3. Adeleke O, Akinlabi SA, Jen TC, Dunmade I (2020) Prediction of municipal solid waste
generation: an investigation of the effect of clustering techniques and parameters on ANFIS
model performance. Environmental Technology, 1–14
4. Adeleke O, Akinlabi SA, Jen TC, Dunmade I (2021) Application of artificial neural networks
for predicting the physical composition of municipal solid waste: an assessment of the impact
of seasonal variation. Waste Manage Res 39(8):1058–1068
5. Adhikari S, Nam H, Chakraborty JP (2018) Conversion of solid wastes to fuels and chemicals
through pyrolysis. Waste Biorefinery 239–263
6. Ali SA, Ahmad A (2019) Forecasting MSW generation using artificial neural network time
series model: a study from metropolitan city. SN Applied Sciences 1(11):1–16
7. Arafat HA, Arafat AR (2011) Prediction of generation rate of municipal solid waste in Pales-
tinian territories based on key factors modelling. Solid Waste Management Environmental
Remed 425–440
8. Araiza-Aguilar JA, Rojas-Valencia MN, Aguilar-Vera RA (2020) Forecast generation model
of municipal solid waste using multiple linear regression. Global Journal of Environmental
Science and Management 6(1):1–14
9. Buenrostro O, Bocco G, Vence J (2001) Forecasting generation of urban solid waste in
developing countries-a case study in Mexico. J Air Waste Manag Assoc 51(1):86–93
10. Chapman-Wardy C, Asiedu L, Doku-Amponsah K, Mettle FO (2021) Modeling the amount
of waste generated by households in the greater Accra region using artificial neural networks.
Journal of Environmental and Public Health
11. Elshaboury N, Abdelkader EM, Al-Sakkaf A, Alfalah G (2021) Teaching-learning-based
optimization of neural networks for water supply pipe condition prediction. Water 13(24):3546
12. Elshaboury N, Mohammed Abdelkader E, Alfalah G, Al-Sakkaf A (2021) Predictive anal-
ysis of municipal solid waste generation using an optimized neural network model. Processes
9(11):2045
13. Eurostat (2022) Municipal waste generation up to 505 kg per person. Available online: https:/
/ec.europa.eu/eurostat/web/products-eurostat-news/-/ddn-20220214-1. Accessed on 3 March
2022
14. Fan Z, Fan Y (2019) Research on prediction of municipal solid waste production based on grey
relation analysis and grey prediction model. IOP Conference Series: Earth and Environmental
Science 300(3):032070
15. Ghinea C, Cozma P, Gavrilescu M (2021) Artificial neural network applied in forecasting
the composition of municipal solid waste in Iasi, Romania. J Environ Eng Landsc Manag
29(3):368–380
16. Gómez-Sanabria A, Kiesewetter G, Klimont Z, Schoepp W, Haberl H (2022) Potential for
future reductions of global GHG and air pollutants from circular waste management systems.
Nat Commun 13(1):1–12
17. Grodzińska-Jurczak M (2001) Management of industrial and municipal solid wastes in Poland.
Resour Conserv Recycl 32(2):85–103
18. Huang L, Cai T, Zhu Y, Zhu Y, Wang W, Sun K (2020) LSTM-based forecasting for urban
construction waste generation. Sustainability 12(20):8555
19. Islam MR, Kabir G, Ng KTW, Ali SM (2022) Yard waste prediction from estimated municipal
solid waste using the grey theory to achieve a zero-waste strategy. Environmental Science and
Pollution Research 1–16
20. Jassim MS, Coskuner G, Zontul M (2022) Comparative performance analysis of support vector
regression and artificial neural network for prediction of municipal solid waste generation.
Waste Manage Res 40(2):195–204
21. Kawam AA, Mansour N (2012) Metaheuristic optimization algorithms for training artificial
neural networks. International Journal of Computer and Information Technology 1(2):156–161
22. Kaza S, Yao L, Bhada-Tata P, Van Woerden F (2018) What a waste 2.0: a global snapshot of
solid waste management to 2050. World Bank Publications, Washington
23. Kidane H, Tesfie N, Tadesse ATK (2020) Time series forecasting the quantity of municipal
solid waste generation using linear regression integrated with moving average in Mekelle
City-Ethiopia. Technology Reports of Kansai University 62(10)
24. Klojzy-Karczmarczyk B, Makoudi S (2017) Analysis of municipal waste generation rate in
Poland compared to selected European countries. E3S Web of Conferences 19:02025
25. Kulisz M, Kujawska J (2020) Prediction of municipal waste generation in Poland using neural
network modeling. Sustainability 12(23):10088
26. Kumar S, Gaur A, Kamal N, Pathak M, Shrinivas K, Singh P (2020) Artificial neural network
based optimum scheduling and management of forecasting municipal solid waste generation–
case study: greater Noida in Uttar Pradesh (India). J Phys: Conf Ser 1478(1):012033
27. Majhi SK (2018) An efficient feed foreword network model with sine cosine algorithm for
breast cancer classification. International Journal of System Dynamics Applications 7(2):1–14
28. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-
Based Syst 96:120–133
29. Mohammed Abdelkader E, Al-Sakkaf A, Ahmed R (2020) A comprehensive comparative
analysis of machine learning models for predicting heating and cooling loads. Decision Science
Letters 9(3):409–420
30. Noori R, Abdoli MA, Ghazizade MJ, Samieifard R (2009) Comparison of neural network and
principal component-regression analysis to predict the solid waste generation in Tehran. Iran J
Public Health 38(1):74–84
31. Oguz-Ekim P (2021) Machine learning approaches for municipal solid waste generation
forecasting. Environ Eng Sci 38(6):489–499
32. Roy S, Rafizul IM, Didarul M, Asma UH, Shohel MR, Hasibul MH (2013) Prediction of
municipal solid waste generation of khulna city using artificial neural network: a case study.
International Journal of Engineering Research-Online 1(1):13–18
33. Sahlol AT, Ewees AA, Hemdan AM, Hassanien AE (2016) Training feedforward neural
networks using sine-cosine algorithm to improve the prediction of liver enzymes on fish farmed
on nano-selenite. In: 2016 12th international computer engineering conference (ICENCO).
IEEE, pp 35–40
34. Singh D, Satija A (2018) Prediction of municipal solid waste generation for optimum planning
and management with artificial neural network—case study: Faridabad City in Haryana State
(India). International Journal of System Assurance Engineering and Management 9(1):91–97
35. Speight JG (2015) Waste gasification for synthetic liquid fuel production. In: Gasification for
synthetic fuel production. Woodhead Publishing, pp 277–301
36. Xia W, Jiang Y, Chen X, Zhao R (2021) Application of machine learning algorithms
in municipal solid waste management: a mini review. Waste Management & Research,
0734242X211033716
37. Zhang Z, Zhang Y, Wu D (2019) Hybrid model for the prediction of municipal solid waste
generation in Hangzhou China. Waste Management & Research 37(8):781–792
Behaviour and Resistance
of Glued-Laminated Timber Subjected
to Impact Loading
1 Introduction
shock tube testing on various glulam beams and obtained a DIF of 1.14 for strain
rates ranging between 0.14 and 0.51 s−1 . However, the authors noted that this DIF
was only applicable for cases where continuous finger-joints (i.e., single laminate
width) or closely aligned finger-joints (i.e., multiple laminates width) are not located
in the high moment region [8]. Otherwise, a DIF of unity was recommended [8].
Similar observations were made by Viau and Doudak [15] resulting in a DIF of 1.10
for strain rates between 0.37 and 0.51 s−1 for glulam beams. They also looked at
the effects of multiple pressure-impulse combinations and found that as long as the
elastic limit of the member is not passed, the wood member remains undamaged [15].
Throughout all the above studies, simply supported boundary conditions were used.
Additional studies have found that connections will largely influence the behaviour
of a member, and if designed correctly, can act as an energy dissipater [11, 16].
Whilst the aforementioned studies have resulted in a foundational knowledge base
regarding the behaviour of wood elements under high strain rates, these have been
limited to strain rates of 0.14–0.51 s−1 . The impact behaviour of glulam has not
been well documented across a broader range of strain rates and loading regimes.
The present study aims to investigate the behaviour of glulam members subjected to
impact loading and strain rates that are higher than those reported in the literature,
and reports on the preliminary results from experimental testing for the purpose of
making design recommendations in terms of DIF and other considerations.
2 Experimental Programme
2.1 General
Quasi-static four-point bending flexural tests were conducted on three of the nine
specimens, S1 through S3, to obtain average 10-min flexural strength values. This
allowed the SIF for this sample of wood beams to be determined and to have accu-
rate strengths and stiffnesses with which to compare the dynamic results. The test
methodology was adapted from ASTM D198 [2], with slight modifications made to
keep the dynamic tests comparable. The force was applied using a hydraulic MTS
system and was recorded through a load cell located within the hydraulic head. The
displacement was recorded using a linear variable differential transformer (LVDT)
connected at the beam’s midspan. The specimens’ strain deformations were measured
using two strain gauges positioned on the tension side of the beam and two on the
compression side of the beam at midspan. The beams were loaded so that the average
time to failure was 16 min. The beams were simply supported and used the same
rollers and load transfer bar that was used for the dynamic tests. The roller plates at
the load application points and ends measured 150 mm in length to avoid crushing
of the wood during testing. A clear span of 2265 mm was used throughout testing.
The static test setup can be seen in Fig. 1.
To determine the DIF for the beams under impact load, dynamic testing was
conducted using the newly established drop weight impact testing facility at the
Royal Military College of Canada, capable of imparting up to 23 kJ of energy onto
small to full scale structural elements. The six-metre-tall impact hammer consists of
two sets of supported rails along which a drop weight box travels using six pillow
block ball bearings to guide the box. The box weighs 99.1 kg and can be increased
in weight by adding a lid (42.6 kg) and up to eleven plates weighing approximately
25.6 kg each, up to a maximum drop weight of 423.3 kg. The impact hammer testing
apparatus can be seen in Fig. 2. The hammer is equipped with a data acquisition
system capable of recording data at a rate of 500,000 samples per second. The box
is equipped with a linear encoder to record the drop height and determine the box’s
velocity upon impact.
Loading and boundary conditions identical to those described for the static test
setup were used for the dynamic testing. Due to the nature of the loading and the
likelihood of beam instability during response, lateral supports were provided at the
beam ends. Additionally, brace plates were placed on top of the beam at each end
in order to prevent upward motion of the beam after initial impact. End supports
were designed to allow for piezoelectric force sensors to be placed at each support,
seamlessly integrated with the roller and pin. The load application points, consisting
of a steel pin and roller, were designed such that they allowed movement, but with
enough restraint to keep all components together during and after impact. Welded
saddles and elasticized rubber cord prevented any tipping in the system and ensured
that movement was restrained to the vertical plane. A steel load transfer beam allowed
for the impact load to be transferred to the beam at two points to allow for a region of
constant moment and no shear force. An additional force transducer was placed on
the load transfer beam to monitor the applied load. A laser and string potentiometer
were used to determine the beam’s displacement–time history at the beam’s midspan.
Three high-speed cameras were used, capturing images at 10,000 fps, 1000 fps, and
500 fps, to record the beam’s behaviour, potential failure location, and to check for
the beam’s displacement in the case of discrepancies between the laser and string
potentiometer. Two strain gauges on the tension side of the beam and two on the
compression side of the beam at midspan were used to monitor the specimen’s
strain–time histories. A drawing and images of the dynamic test setup can be seen
in Figs. 3 and 4. A sample was used to determine an appropriate drop height and
weight to induce failure. Following this, each specimen, D1 through D6, was subjected to
a 2000 mm drop with a 244.7 kg weight in order to induce flexural failure.
3 Experimental Results
Failure was determined to be the point at which the beams experienced a sudden
drop in load and could no longer support additional load. The average maximum
resistance for the static tests was 242.3 kN with a coefficient of variation (COV) of
0.02. When normalized to a 100% strength value which corresponds to a testing time
of 1 min based on [6], the resistances indicate an average SIF of 1.55. The stiffness
was taken as the slope of the resistance displacement curve from 40 to 90% of the
beam’s ultimate capacity. These percentages were kept constant for determining the
beams’ dynamic stiffness. The average stiffness was 11,125 kN/m with a COV of
0.03. The strain rate varied from 2.19 × 10⁻⁶ to 4.98 × 10⁻⁶ s⁻¹ and the time to failure
varied from 11.35 to 24.70 min. All beams responded in a linear elastic manner, and
all beams exhibited a brittle tensile failure initiating at a knot or natural defect. The
results from the static tests can be seen in Table 1, including the static maximum
resistance (Rs,max), stiffness (K), strain rate (ε̇), and the time to failure. An example
of a typical static test failure can be seen in Fig. 5.
Failure of the beam specimens was determined to occur at the maximum resistance,
consistently followed by a sudden drop in resistance. Statically, the resistance can be
obtained simply by calculating the summation of the reactions. However, the same
cannot be done with regard to the measured dynamic reactions [3]. The measured
dynamic reactions are dependent on the boundary conditions, the resistance of the
specimen, and the applied load. In particular, the weight of the load transfer beam,
rollers, and pins had to be considered when calculating the dynamic resistance of the
beam. Equations for the determination of the dynamic resistance have been previously
derived for glulam specimens subjected to simulated blast loading, during which a
load transfer device was used [9]. The dynamic resistance of a simply supported
beam under four-point bending can be determined using Eqs. (1) and (2):
R(t) = (6/L)[V(t)·xeq + 0.5(L/3 − xeq)F(t)]    (1)
xeq = (0.102mL² + 0.290mcL)/(0.319mL + 0.870mc)    (2)
where R(t) is the beam’s dynamic resistance, V (t) is the dynamic reaction, F(t) is
the applied force, L is the beam’s clear span, x eq is the distance from the support to
the point of application of the equivalent inertia force, m is the distributed mass of
the beam, and mc is half of the mass of the load transfer beam lumped at the load
application point.
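The calculation in Eqs. (1) and (2) can be sketched as follows; the reaction and force histories and the mass values in the usage line are placeholders rather than the measured test data, and only the clear span matches the 2265 mm reported above.

```python
import numpy as np

def dynamic_resistance(v_t, f_t, span, m_dist, m_c):
    """Dynamic resistance of a simply supported beam in four-point bending, Eqs. (1)-(2).

    v_t    : measured dynamic reaction history V(t) [kN]
    f_t    : applied force history F(t) [kN]
    span   : clear span L [m]
    m_dist : distributed mass of the beam m [kg/m]
    m_c    : half the mass of the load transfer beam lumped at a load point [kg]
    """
    x_eq = (0.102 * m_dist * span**2 + 0.290 * m_c * span) / (0.319 * m_dist * span + 0.870 * m_c)
    return (6.0 / span) * (v_t * x_eq + 0.5 * (span / 3.0 - x_eq) * f_t)

# placeholder histories; the span matches the 2265 mm clear span reported above
v = np.array([10.0, 60.0, 120.0])      # kN
f = np.array([20.0, 120.0, 240.0])     # kN
print(dynamic_resistance(v, f, span=2.265, m_dist=18.0, m_c=30.0))
```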
The average dynamic failure resistance was determined to be 285.6 kN with a
COV of 0.05. The stiffness was taken as the slope of the resistance displacement
curve from 40 to 90% of the beam’s ultimate capacity. The average stiffness was
13,368 kN/m with a COV of 0.06. To determine the strain rate, the strain
was also taken at 40 and 90% of the beam ultimate capacity. This resulted in an
average strain rate of 0.87 s−1 with a COV of 0.16, and the average duration of
load was 12.22 ms. The 40 and 90% resistance points were taken to determine the
beams’ behaviour most accurately. When the resistance displacement relationship
was observed, there seemed to be an initial stiffness that changed slope once the
system settled and became less accurate near the maximum. As such, this range was
deemed to have the most accurate representation of the beams stiffness and strain
rate.
All beams responded in a linear elastic manner up until flexural failure, at which
point the specimens exhibited brittle flexural failure. All failures were initiated at
a knot, where the failure initiation could be observed with the high-speed camera
footage. The results from the dynamic tests can be seen in Table 2, including the
maximum dynamic resistance (Rd,max), stiffness (K), average strain rate (ε̇), and
duration of loading. An example of a typical dynamic test failure can be seen in Fig. 6.
Fig. 6 Representative dynamic failure mode (Specimen D4): a overall view; b underside view
The use of identical spans, boundary conditions, and loading conditions allowed
for a direct comparison between the static and dynamic test results for the purpose
of quantifying high strain rate effects on the glulam specimens. The DIF on the
maximum resistance was calculated using Eq. (3):
\mathrm{DIF} = \frac{R_{d,max}}{R_{s,nor}} \quad (3)

where R_{d,max} is the maximum dynamic resistance and R_{s,nor} is the maximum static resistance normalized to a 100% strength value corresponding to a testing time of 1 min based on Karacabeyli and Barrett [6].

Fig. 7 DIF values for dynamic specimens D1–D6
For strain rates in the range of 0.67–1.05 s−1 , an average DIF of 1.13, with a COV
of 0.05, was determined. These observations were supported by statistical analyses using a t-test with a 95% confidence interval. This value is very similar to the DIF of 1.14 reported for strain rates between 0.14 and 0.51 s−1 for glulam beams of multi-laminate widths subjected to shock tube loading [8]. Figure 7 shows the DIF obtained
for all samples varying from 1.04 at the low end to 1.19 at its highest, with the mean
value shown by the black dotted line.
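The DIF calculation and a supporting significance check can be sketched as follows; the resistance values below are hypothetical placeholders, and a two-sample Welch t-test from SciPy is used as one reasonable way to carry out the 95% confidence check described above, not necessarily the exact procedure used by the authors.

import numpy as np
from scipy import stats

# Hypothetical resistances (kN), for illustration only
R_s_nor = np.array([248.0, 252.0, 255.0])            # normalized static maxima
R_d_max = np.array([268.0, 275.0, 283.0,
                    289.0, 296.0, 302.0])            # dynamic maxima

DIF = R_d_max / R_s_nor.mean()                       # Eq. (3) applied to each dynamic specimen
print("DIF per specimen:", np.round(DIF, 2))
print("mean DIF = %.2f, COV = %.2f" % (DIF.mean(), DIF.std(ddof=1) / DIF.mean()))

# Welch two-sample t-test on the underlying resistances
t_stat, p_value = stats.ttest_ind(R_d_max, R_s_nor, equal_var=False)
print("t = %.2f, p = %.4f" % (t_stat, p_value))
print("significant at the 95% level:", p_value < 0.05)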
CSA S850 currently provides recommended values for the SIF and DIF for
wooden products [4]. The recommended SIF for visually-graded lumber is 1.9,
machine graded lumber is 1.5, and glulam and engineered wood products are 1.2 [4].
The value of the DIF for visually-graded lumber, machine graded lumber, glulam
and engineered wood products is 1.4 in flexure [4]. Based on the results from these experiments and the growing body of knowledge on the subject, these values should be reassessed. As indicated in the introduction, a DIF of 1.4 may be suitable for
light-frame wood walls [7], but not for all wood products. This study also highlights
the need to evaluate different wood products as having separate responses to high
strain rates and the need to reconsider the value for the DIF of glulam.
Representative resistance curves for static and dynamic tests can be seen in Fig. 8,
where the dynamic specimens were observed to exhibit higher resistance and higher
stiffness. The dynamic resistance–displacement relationship shows an initial stiffness that changes slope once the system settles and becomes less reliable near the maximum. It is for this reason that 40 and 90% of
the total resistance were chosen to determine the stiffness and beam strain rates.
The beams were seen to experience an increase in stiffness by a factor of 1.20.
These observations were supported by statistical analyses using a t-test with a
confidence interval of 95%.
Fig. 8 Representative resistance (kN) versus displacement (mm) curves for a static (S2) and a dynamic (D3) specimen
5 Conclusions
A total of nine 137 mm × 267 mm × 2500 mm glulam beams were tested under
quasi-static and dynamic four-point bending in order to document their static and
dynamic behaviour, as well as to quantify the high strain rate effects in the glulam
specimens. For a dynamic strain rate range of 0.67–1.05 s−1 , statistically significant
DIFs of 1.13 on the maximum resistance and 1.20 on the stiffness were observed.
These findings highlight a need to re-evaluate the DIF for wood recommended in
CSA S850, which recommends a constant resistance DIF of 1.4 across all wood
elements. These findings also highlight the need to evaluate different wood products
as having separate responses to high strain rates, as the response of glulam varies
from that of sawn lumber and other engineered wood products.
References
1. ASCE (2011) Blast protection of buildings: ASCE/SEI 59–11 blast protection of buildings.
ASCE, Virginia
2. ASTM International (2021) Standard test methods of static tests of lumber in structural sizes:
D198−21a. ASTM International, Pennsylvania
3. Biggs J (1964) Introduction to structural dynamics. McGraw-Hill College
4. CSA (2012) Design and assessment of buildings subjected to blast loads S850–12. CSA Group,
Mississauga
5. Jacques E, Lloyd A, Braimah A, Saatcioglu M, Doudak G, Abdelalim O (2014) Influence of
high strain-rates on the dynamic flexural material properties of spruce–pine–fir wood studs.
Can J Civ Eng 41(1):56–64
6. Karacabeyli E, Barrett JD (1993) Rate of loading effects on strength of lumber. For Prod J
43(5):28–36
7. Lacroix D, Doudak G (2014) Investigation of dynamic increase factors in light-frame wood
stud walls subjected to out-of-plane blast loading. J Struct Eng 141:04014159. https://doi.org/
10.1061/(ASCE)ST.1943-541X.0001139
8. Lacroix D, Doudak G (2018) Determining the dynamic increase factor for glued-laminated
timber beams. J Struct Eng 144(9):04018160
9. Lacroix DN (2017) Investigating the behaviour of glulam beams and columns subjected to
simulated blast loading. Université d’Ottawa/University of Ottawa
10. Marchand KA (2002) BAIT, BASS & RODS testing results. Applied Research Associates,
Prepared for the USAF Force Protection Battlelab
11. McGrath A, Doudak G (2021) Investigating the response of bolted timber connections subjected
to blast loads. Eng Struct 236:112112
12. Mindess S, Madsen B (1986) The fracture of wood under impact loading. Mater Struct
19(1):49–53
13. Poulin M, Viau C, Lacroix DN, Doudak G (2018) Experimental and analytical investiga-
tion of cross-laminated timber panels subjected to out-of-plane blast loads. J Struct Eng
144(2):04017197
14. US Army Corps of Engineers (2018) PDC-TR 18-02: analysis guidance for cross-laminated
timber construction exposed to airblast loading. USACE Protective Design Center, Omaha
15. Viau C, Doudak G (2021) Behavior and modeling of glulam beams with bolted connections
subjected to shock tube-simulated blast loads. J Struct Eng 147(1):04020305
16. Viau C, Doudak G (2021) Energy-absorbing connection for heavy-timber assemblies subjected
to blast loads-concept development and application. J Struct Eng 147(4):04021027
17. Wood LW (1951) Relation of strength of wood to duration of load. United States Department
of Agriculture Forest Service R1916
Vibration Performance of Mechanically
Laminated Timber Floors
1 Introduction
Wood has long been a widely used building material in traditional housing construc-
tion in North America and Europe. In recent years, with the development of engi-
neered wood products, especially mass timber panels (MTPs), wood has been increas-
ingly used in mid-rise buildings, such as the 85.4 m-tall Mjøsa Tower in Norway
and the 18-story student residential Tallwood house in Canada. MTPs are a cate-
gory of large-dimension wood products including cross-laminated timber (CLT),
nailed laminated timber (NLT), dowel-laminated timber (DLT), structural composite
lumber (SCL), or glued-laminated timber (GLT) panels. Besides the most well-
known CLT, mechanically laminated timber (MLT) panels such as DLT and NLT
are becoming available materials in the construction of mass timber buildings. Because their laminations run parallel to the major strength direction, MLT panels can achieve better material utilization efficiency as one-way slabs than CLT. DLT is currently the only all-wood mass timber product: hardwood dowels, such as beech and maple, are used to friction-fit dimensional lumber planks together, creating a prefabricated structural floor panel with lower cost and less environmental impact compared to other glued wood products.
Timber floors are prone to a high level of human-induced vibration due to their light weight [1]. Although floor vibration does not affect the safety of the building, it can greatly affect the comfort of occupants and the stable operation of some precision instruments, and ultimately the acceptance of wood as a building material in the multifamily and office building market. Since people’s tolerance and perception
of floor vibration are subjective, there is no universal design method for evaluating
occupants’ acceptance of floor vibration. Existing design methods usually control
the floor vibration by limiting one or several parameters including floor fundamental
natural frequency, static deflection under a point load, peak velocity or acceleration
due to unit impulse, or root-mean-square (RMS) acceleration. Some current design
methods are based on previous research on wood joisted floors, which cannot be directly applied to mass timber floors. Onysko et al. [1] proposed a 1 kN static deflection design criterion based on more than 600 field-tested floors built with lumber joists. Since the field floors were mostly built before the 1980s, the results may not be reliable for other floor construction types or when a topping is added on the timber floor. Dolan et al. [2] proposed a design method that limits the fundamental frequency of wood joisted floors to a minimum of 15 Hz for unoccupied floors and 14 Hz for occupied floors. This method is simple to apply but might be conservative for floors with longer spans or heavy concrete toppings. Hamm et al. [3] developed a criterion based on test data from over 50 floors on-site. They suggested that floors with a higher demand should have fundamental frequencies higher than 8 Hz and a deflection lower than 0.5 mm under a 2 kN concentrated load, and that floors with a lower demand should meet a fundamental frequency requirement of 6 Hz and have a deflection of less than 1 mm under the 2 kN concentrated load. Hu et al. [4] developed a floor vibration performance criterion that combines the fundamental natural frequency and the static deflection under a 1 kN point load:
\frac{f}{d^{0.39}} \ge 15.3 \quad (1)
where d is the measured static deflection under 1 kN concentrated load at the center
of the floor, and f is the measured fundamental natural frequency of the floor. This
performance criterion was based on a large database of tested wood joisted floor
with subjective evaluation ratings. Other parameters, such as peak velocity, peak acceleration, and RMS acceleration, were also considered when proposing this criterion; although they all showed good potential and accuracy, the fundamental frequency combined with the static deflection is easier to obtain and for designers to adopt. This criterion corresponded well with the subjective evaluation ratings, meaning that when the ratio calculated from this equation is greater than 15.3, the floor is most likely acceptable for occupants in terms of vibration serviceability performance. This method has been used to develop the vibration-controlled span design equation for wood joisted floors in Canada. Similarly, a simplified version of this approach was adopted to develop the vibration design method for CLT floors. The Canadian wood design standard CSA O86-19 [5] provides a vibration-controlled span design equation for single-span CLT floors with rigid edge supports.
l_{v} \le \frac{0.11\left((EI)_{eff,f}/10^{6}\right)^{0.29}}{m^{0.12}} \quad (2)
where l_{v} = vibration-controlled span limit (m), (EI)_{eff,f} = effective flatwise bending stiffness for a 1 m wide panel (N mm²), and m = linear mass of CLT for a 1 m wide panel (kg/m). However, the applicability of Eq. 2 to other MTPs is uncertain due to the lack
of test data.
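As an illustration of how Eqs. (1) and (2) can be used as design checks, a short Python sketch follows. The equation forms are as reconstructed above, and the input values are hypothetical rather than the measured panel properties reported later in this paper.

def hu_criterion_ok(f, d):
    # Hu's criterion, Eq. (1): the floor is likely acceptable if f / d**0.39 >= 15.3
    # f : measured fundamental natural frequency (Hz)
    # d : measured deflection under a 1 kN point load at the floor centre (mm)
    return f / d**0.39 >= 15.3

def csa_o86_vibration_span(EI_eff_f, m):
    # Vibration-controlled span limit l_v from Eq. (2) for single-span CLT floors
    # EI_eff_f : effective flatwise bending stiffness of a 1 m wide panel (N mm^2)
    # m        : linear mass of a 1 m wide panel (kg/m)
    return 0.11 * (EI_eff_f / 1e6) ** 0.29 / m ** 0.12

# Hypothetical inputs, for illustration only
print(hu_criterion_ok(f=14.1, d=0.7))                    # True -> likely acceptable
print(csa_o86_vibration_span(EI_eff_f=1.2e12, m=75.0))   # span limit in metres (about 3.8 m here)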
Besides those methods, there are direct design criteria related to human perception of vibration. Direct methods consider the human perception of whole-body vibration for floor vibration serviceability and require suitable measurement of the floor response. For instance, the ISO 10137 baseline curve [4] (Fig. 1) for acceleration in the z-axis is the basis for response-based vibration design methods used in steel and concrete floors. Multiplying factors for different occupancies should be applied; for residential buildings in the daytime, a factor of 2–4 is suggested for continuous or intermittent vibrations. Such vibration response methods are being adopted by the timber engineering community to design mass timber floors [6], since the CSA O86-19 CLT design equation cannot be directly applied to mass timber floors with beam supports and concrete toppings. It should be noted that such direct methods have not been verified with test data either.
Therefore, the objective of this study is to investigate the dynamic properties and
evaluate the vibration performance of mechanically laminated timber floors. Modal
tests were conducted on DLT floors of different spans and dimensions under two
different boundary conditions. The dynamic properties such as fundamental natural
frequencies, damping ratios, and mode shapes were identified. The static deflection
of DLT floors under a 1 kN point load was measured. Walking tests were then conducted to obtain acceleration data, along with subjective evaluations through surveys.
Finally, the applicability of current commonly-used vibration design criteria to DLT
floors was discussed.
2.1 Materials
The DLT panels were assembled with #2 grade spruce/pine lumber with an average
density of 435 kg/m3, as shown in Fig. 2. OSB panels with an average density of 516 kg/m3 were used as sheathing over the DLT floors, in accordance with common construction practice.
Two floors with different dimensions and layouts were constructed in the labo-
ratory for testing. DLT floor 1 (DF1) was made of two 5.4 m × 2.0 m DLT panels supplied by StructureCraft in Abbotsford, BC, and DLT floor 2 (DF2) was made of three 4.3 m × 1.8 m DLT panels supplied by International Timberframes in Golden, BC. The panels were spanned in the major strength direction at 5.2 m o.c and 4.1 m
o.c, respectively. The non-destructive test developed by Zhou et al. [7] was used to
simultaneously measure the elastic constants of each DLT panel including modulus
of elasticity in major and minor strength directions (E x and E y ) as well as in-plane
shear modulus Gxy . These parameters are required if the floor plate is modeled
as a two-dimensional orthotropic thin plate. The floor dimensions and measured
elastic constants are shown in Table 1. Panel-to-panel connections were achieved
by installing full-thread self-tapping screws at 45° (8.5 mm diameter and 215 mm
length), with a spacing of 400 mm.
2.2 Methods
Experimental modal tests [8] were first performed on the two floors with simple supports to obtain their dynamic properties, including natural frequencies, damping ratios, and mode shapes. The floors sat on wood walls resting on the ground, which can be treated as rigid supports. The floors were tested under two different boundary conditions: simply supported on two opposite sides and free on the other two (SFSF), and simply supported on all four sides (SSSS), as shown in Figs. 3 and 4. In this study, both floors were supported along the major span under the SFSF boundary condition.
To obtain the mode shapes, a 7 × 7 grid was laid out on the floor, and each point except those on the supported edges was impacted three times with an instrumented impact hammer to obtain an averaged frequency response function (FRF) curve. Four accelerometers were installed on the floor with hot melt glue, and the
signals from the impact hammer and accelerometers were recorded by a dynamic
data analyzer (Brüel and Kjær, LAN-XI 3050). BK Connect (Brüel and Kjær) vibra-
tion engineering software was used to conduct the modal analysis. The location of
accelerometers and test setup is illustrated in Fig. 5.
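As an indication of the processing involved, the sketch below estimates an FRF from one hammer/accelerometer channel pair with the common H1 estimator in SciPy. This is only an assumed, simplified workflow (the modal analysis in this study was performed in BK Connect), and the synthetic signals are for illustration only.

import numpy as np
from scipy.signal import csd, welch

def frf_h1(hammer, accel, fs, nperseg=4096):
    # H1 FRF estimate: cross-spectrum of input and response divided by the input auto-spectrum
    f, Pxy = csd(hammer, accel, fs=fs, nperseg=nperseg)
    _, Pxx = welch(hammer, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx

# Synthetic example (in practice the three hammer hits at each grid point are averaged)
fs = 2048
t = np.arange(0, 4, 1 / fs)
hammer = np.exp(-200 * t)                                                # idealized impulse (N)
accel = np.exp(-0.01 * 2 * np.pi * 11 * t) * np.sin(2 * np.pi * 11 * t)  # decaying ~11 Hz response
f, H = frf_h1(hammer, accel, fs)
print("peak near %.1f Hz" % f[np.abs(H).argmax()])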
After the modal tests, floor vibration performance tests were conducted to measure the acceleration levels under normal human walking. The walking tests were performed by a 75 kg male evaluator following several walking paths (Fig. 6) at a pace of approximately 2 Hz. For each floor, four accelerometers were mounted at the left and right midspan (Am-1 and Am-2) and on the top and bottom of the floor center (Ac-1 and Ac-2), respectively. The acceleration responses were collected at locations of possible maximum vibration magnitude and recorded by BK Connect
Software. The time-domain acceleration data of the whole walking path was post-
processed based on ISO 2631 [9]. By using the weighting curve Wm proposed in
ISO 2631, the acceleration values can be calculated from the frequency-weighted
floor acceleration-time response as follows:
a_{w} = \left[\frac{1}{T}\int_{0}^{T} a_{w}^{2}(t)\,dt\right]^{1/2} \quad (3)
where aw (t) is the weighted acceleration as a function of time, m/s2 , T is the duration
of the measurement, s. In this study, the duration of the whole walking path was
considered for calculating the acceleration values.
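For a uniformly sampled record, the integral in Eq. (3) reduces to the mean of the squared samples. A minimal sketch, assuming the Wm frequency weighting has already been applied to the acceleration signal:

import numpy as np

def weighted_rms(a_w):
    # Eq. (3): overall r.m.s. value of the frequency-weighted acceleration record
    a_w = np.asarray(a_w)
    return np.sqrt(np.mean(a_w ** 2))

# Hypothetical weighted record, for illustration only
fs = 1024                                  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s walking path
a_w = 0.02 * np.sin(2 * np.pi * 2 * t)     # 2 Hz walking-like component (m/s^2)
print("a_w,rms = %.4f m/s^2" % weighted_rms(a_w))   # about 0.0141 m/s^2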
Fig. 5 Schematic drawing of modal test setup with selected accelerometer locations
Deflection tests on the DLT floors were conducted according to ISO 18324 [10] by applying a 1 kN point load (a 100 kg concrete block) at the center of the floor (Fig. 7). The deflection of the floor was measured by a dial indicator at midspan, directly under the loading point.
Subjective evaluations were conducted for the acceptance level of the DLT floors
according to ISO 21136 [11], categorizing the floors into levels 1–5, i.e., 1 = definitely unacceptable, 2 = unacceptable, 3 = marginal, 4 = acceptable, 5 = definitely acceptable. A survey with 20 evaluators was conducted on each floor. Each evaluator first walked on the floor between the two ends and then stood stationary at the center of the floor while a 75 kg walker (the same person used for recording the acceleration levels) walked on the floor. Each evaluator completed a questionnaire provided in ISO 21136 to report his/her perception and acceptance of the vibration levels, assuming the floor were in a mid-rise residential or office building. All the ratings for each floor were averaged and reported as its final rating.

Fig. 7 Deflection test (Left: Applied 1 kN concentrated load on the floor, Right: Data collection)
Modal tests were performed on the floor panels separately before they were connected
together. The first three natural frequencies of the tested floors were obtained from the FRF curves shown in Fig. 8, and the corresponding mode shapes under the two boundary conditions are shown in Figs. 9 and 10. The first three natural frequencies
as well as the damping ratios of the single panel and connected floors are presented
in Table 2.
The natural frequencies of a floor can be represented as a function of its span,
mass, and stiffness [12]. It can be observed from the table that there is no significant
difference between the single panel and connected DF1 in the fundamental natural
frequency, while a slight decrease can be seen in the first natural frequency for DF2
when they were connected together to act as a large mass timber floor. Since DLT is a
parallel laminated timber panel, its bending stiffness in the minor strength direction
is relatively low compared with its bending stiffness in the major strength direction.
The fundamental natural frequency under SFSF is not affected much by the width of
the floor. Even with panel-to-panel connections between each floor panel and four
edges simply-supported (SSSS), the fundamental natural frequency is only 1 Hz
higher than that under SFSF.
Fig. 8 Frequency response functions (FRF) from the roving impact hammer test
The first three mode shapes shown correspond to longitudinal bending, torsion, and transverse bending, respectively. Besides the expected vibration modes, some localized vibration can be found in the mode shapes, which indicates that the connected floor panels are not fully continuous [13].
Damping makes a positive contribution to reducing floor vibration. However, damping is a system-level parameter that depends on the floor material and construction details. Unlike wood joisted floors, mass timber panels, especially CLT, have a relatively low damping ratio. Hu et al. [14] suggested that a 1% damping ratio should be used for mass timber floors. In this study, the damping ratios of the DLT floors were estimated from the FRFs using the rational fraction polynomial method. The damping ratios of DF1 and DF2 are about 0.8% and 1.5%, respectively,
for the fundamental frequency mode. Since DF2 was made of 3 panels and DF1 was
made of 2 panels, the friction between laminates and adjacent panels is thought to
be a major source of damping.
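The rational fraction polynomial method is not reproduced here; as a rough, approximate cross-check, the half-power (−3 dB) bandwidth of an isolated FRF peak can also be used to estimate the damping ratio, as sketched below with a synthetic single-mode FRF.

import numpy as np

def half_power_damping(freq, mag):
    # Half-power bandwidth estimate: zeta ~ (f2 - f1) / (2 * f_n) for a well-separated mode
    i = int(np.argmax(mag))
    half = mag[i] / np.sqrt(2.0)
    f1 = np.interp(half, mag[:i + 1], freq[:i + 1])         # rising side of the peak
    f2 = np.interp(half, mag[i:][::-1], freq[i:][::-1])     # falling side of the peak
    f_n = freq[i]
    return f_n, (f2 - f1) / (2.0 * f_n)

# Synthetic FRF magnitude of a single mode near 11 Hz with 1% damping
f = np.linspace(5, 20, 3001)
fn, zeta = 11.0, 0.01
mag = 1.0 / np.sqrt((1 - (f / fn) ** 2) ** 2 + (2 * zeta * f / fn) ** 2)
print(half_power_damping(f, mag))   # approximately (11.0, 0.01)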
From the experimental tests, potential floor vibration performance parameters, including the natural frequencies and the static deflection under a 1 kN point load, were collected and are shown in Table 3. They are compared with several existing timber floor vibration design criteria to assess the applicability of those criteria to DLT floors.
From Table 3, it can be observed that DF2 has better vibration performance than DF1. DF2 has a subjective rating higher than 3: nearly all participants could not feel the vibration when walking on the floor themselves but could feel it when others were walking on the same floor, which means the floor is likely to be acceptable for occupants. DF1, in contrast, performed poorly in the subjective evaluation; all participants could feel excessive vibration while walking on the floor. Since the single panel cannot be considered as a full-size floor,
subjective evaluation was not conducted on those. In addition, there is no significant
difference between the single panel and connected DF1 in the deflection under 1 kN
concentrated load, while a slight decrease can be seen in the deflection for DF2 when
the single panels were connected.
Since the fundamental natural frequency is the governing parameter in the evaluation of floor vibration performance, the following discussion focuses mainly on it. The average fundamental natural frequencies of DF1 and DF2 are 10.94 Hz and 14.13 Hz, respectively. Dolan et al. [2] suggested that the fundamental natural frequency of unoccupied floors should be greater than 15 Hz to maintain good vibration performance, a limit that exceeds all the results from the tested DLT floors. Additionally, based on Hamm’s criterion [3] for both higher and lower demands, which limits the fundamental natural frequency and the deflection under a 2 kN concentrated load, none of the DLT floors fully meets the requirements.
Using the method proposed by Hu [4], the calculated f/d^0.39 of DF1 is lower than the limit, while the value for DF2 is higher than the limit. The results are shown in Fig. 11 together with the criterion curve; they correspond well with the subjective evaluation, in which DF1 is unacceptable and DF2 is acceptable.
Since the measured modulus of elasticity of the DLT panels in the major strength direction is close to that of CLT floors [15], the CLT vibration-controlled span design equation was used to calculate the allowable span for the DLT floors. From the results in Table 3, the vibration-controlled spans of DF1 and DF2 should be 4.33 m and 4.30 m based on the measured MOE and density, while the actual spans of DF1 and DF2 are 5.2 m and 4.2 m. As a result, DF2 is likely to perform well in vibration, whereas DF1 appears to have unacceptable vibration performance based on this design method, which corresponds well with the subjective evaluation result.
The peak and root-mean-square accelerations under human walking on the floors were collected, and the maximum values from all the walking paths are shown in Table 4. The captured acceleration waveform of DF2 under human walking is shown in Fig. 12, where each peak represents a step on the floor. A transient response of the floor under human walking excitation can be observed, and the frequency-weighted data are also shown [16].

Fig. 11 Comparison between measured results and their subjective rating by Hu’s method
As shown in Table 4, the measured maximum acceleration values usually appeared at the central accelerometers. The peak values of DF1 and DF2 are 0.018 and 0.014 m/s2, respectively. It can be observed from the table that the RMS acceleration values of DF1 and DF2 are 0.004 and 0.003 m/s2, respectively, which are lower than the ISO 10137 baseline limit and much lower than the tolerance limit after the multiplying factor has been applied. These results indicate that the vibration performance of both DLT floors is most likely acceptable. However, in the subjective evaluation of DF2, the ratings under the two boundary conditions were 3.39 and 3.52, respectively, meaning that the floor vibration perceived by human occupants was only slightly better than “marginal”. For human occupants, DF1’s vibration performance was considered unacceptable.
Table 4 Maximum accelerations measured from the walking test and the tolerance limits (m/s2)

Floor  Sensor location  Peak value  Root-mean-square value  ISO baseline  Tolerance limit*
DF1 SFSF  Ac-1  0.020  0.004  0.007  0.014
DF1 SSSS  Am-2  0.016  0.004  0.007  0.014
DF2 SFSF  Ac-1  0.012  0.002  0.008  0.016
DF2 SSSS  Ac-1  0.016  0.003  0.009  0.018
* After a multiplying factor of 2 (for quiet office) has been applied

Fig. 12 Acceleration-time history data of DF2 under human walking excitation (path 2)

In general, the comparison between the different floor vibration criteria and the experimental test data yields different results. Hu’s method as well as the CLT vibration-controlled span design equation are consistent with the experimental results and correspond well with the subjective ratings, while Dolan’s and Hamm’s methods
do not match well with the experimental results. When considering the occupants’ perception of floor vibration, the poor correlation between the acceleration criteria and the subjective evaluation shows that the acceleration criteria in ISO 10137 cannot fully capture how the floor is perceived, which means that this method might not be suitable for the design of mass timber floors.
4 Conclusion
Modal tests and vibration performance tests were conducted on DLT floors to investigate their vibration serviceability performance in this study. Since DLT is a parallel-laminated timber panel, its bending stiffness in the minor strength direction is relatively low compared with that in the major strength direction; consequently, neither the floor width nor the four-edge simply-supported boundary condition affects the fundamental natural frequencies much. From the corresponding mode shapes, localized vibration can be observed, indicating that in application or numerical modeling the connected panels cannot be assumed to be fully continuous.
Since there are no universal vibration evaluation criteria for DLT floors, several commonly used criteria were chosen to evaluate their vibration serviceability performance in this study, which led to different results. The CSA O86-19 CLT vibration-controlled span equation and Hu’s method adapt well to DLT floors, while the ISO method using RMS acceleration to evaluate floor vibration performance is far from the subjective evaluation results. This study only considers the vibration performance of bare DLT floor slabs with rigid supports, whereas a large proportion of floors in practical applications are supported by posts and beams or have concrete toppings; their effects on the vibration performance of the floors will be investigated further. Therefore, more experiments and studies are needed in the future to develop a generic vibration-controlled design method for mechanically laminated timber floors.
Acknowledgements This project was financially supported by NSERC Discovery Grant and BC
Forestry Innovation Investment—Wood First program. The authors appreciate the technical support
from Wood Innovation Research Laboratory at the University of Northern British Columbia. The
authors also thank the anonymous evaluators who took part in the subjective evaluations.
References
Abstract In a new era, full of possibilities for manufacturing large timber sections
made of engineered wood, such as cross-laminated timber (CLT), the strength and
structural integrity of mass timber sections can be retained for a much longer time
in fire. Increased availability of CLT in Canada and its successful use in mass timber
construction worldwide have generated interest in its properties and performance
when subjected to fire. There are many benefits of using CLT, such as utilizing
sustainable and renewable construction materials like wood and the fact that CLT
has excellent acoustic, thermal, and seismic performance. However, having a product
with a broad portfolio of choices brings many difficulties, specifically in terms of how
the number and thickness of the lamellae of a CLT slab can affect its fire performance.
This chapter summarizes the most recent research on the behaviour of CLT floor
slabs when exposed to fire. The review of the state-of-the-art literature discusses
different experimental works examining the change in internal temperatures, char
depth and progression, and structural integrity of CLT floor slabs when subjected to
fire. The reviewed research studies varied in the scale of testing, test setup, make-
up of CLT sections, and wood species utilized. Despite the variances across the
different reviewed studies, many of the same conclusions were drawn, providing
consistency in the general outcomes. Although experimental testing is the ground
truth to verify the fire resistance of construction materials and structural assemblies,
fire testing is time-consuming and costly. This demonstrates the need for developing
computer models that can accurately simulate the actual behaviour of CLT slabs
when subjected to fire. Accordingly, this chapter also discusses recent numerical
and modelling attempts to simulate the behaviour of CLT slabs when subjected to
standard and natural fire scenarios.
1 Introduction
This paper presents a literature review on the cross-laminated timber (CLT) floor
slabs when exposed to fire. Specifically, the changes in the material both during and
after fire exposure and the possible means to simulate the behaviour of CLT floor
slabs using finite element modelling (FEM) software, such as ABAQUS, CST Fire,
DIANA, and SAFIR, have been reviewed and discussed.
Engineered wood products such as CLT have been gaining popularity since CLT was first introduced in Europe in the early 1990s. However, it was not until 2017 that CLT was fully implemented into the previous version of the CSA-O86 Engineering design in wood standard [3]. With the release of the current version of CSA-O86 [4], encapsulated mass timber construction (EMTC) has been incorporated into the standard. Such a new type of construction is an alternative solution for tall
wood buildings to meet the code objectives and functional statements pertinent to
the requirements of non-combustible construction [13]. EMTC is defined as a type of
construction in which fire safety is attained using encapsulated mass timber elements
with an encapsulation fire resistance rating and minimum dimensions for the struc-
tural timber members and assemblies. Following such a construction technique, the
CLT section is not directly exposed to fire but instead protected mainly using Type X
fire-resistant gypsum boards. Thus, EMTC delays the effect of fire on the timber elements and enhances their fire resistance. An example is seen in the UBC
Brock’s Commons Residence building in Vancouver, BC, the tallest hybrid wood-
based building at the time of its construction in 2017. The mass timber skyscraper has
18 stories with an overall height of 53 m [2]. This shows that constructing high-rise
wood buildings is possible and that a performance-based design approach can meet
fire resistance requirements.
CLT panels are characterized as mass timber sections made by gluing and pressing
together wood lamellas with alternating fibre directions (±90°) [8]. The manufac-
tured panels typically have an odd number of lamellae (3, 5, 7, or 9) arranged
symmetrically around the centre layer, with the thickness of each layer ranging from
20 to 45 mm. Utilizing such an innovative mass timber structural system has many
key advantages, such as more use of sustainable and renewable construction materials
like wood and the fact that CLT panels have excellent acoustic, thermal, and seismic
performance. Also, CLT specifically offers dimensional stability since the changes due
to swelling and shrinkage are controlled by the crosswise laminations [12].
However, the vast portfolio of available sections with varying thicknesses and
numbers of plies increases the difficulty of understanding how CLT panels would
behave in a structure since the layers that make up a CLT panel have significant effects
on the performance and structural capacity of their sections. This is particularly true
considering fire exposure and the change in measured temperature profiles, charred
depths, and mid-span deflections observed during experimental testing. Knowing
how these changes influence the CLT panels in terms of their structural performance
can assist in meeting the applicable code requirements. Therefore, extensive experi-
mental testing is crucial as the need and desire to build more tall mass timber buildings
grow in North America and worldwide. The objective of the National Building Code
of Canada [10] is to limit the probability that combustible construction materials
within a story of a building would be involved in fire, which could lead to the growth
of fire and consequently its spread across the same building level within the time
available for occupants to evacuate the building and for emergency responders to
perform their duties. Therefore, EMTC has been introduced in the current CSA-
O86 Engineering design in wood standard [4]. The overall objective is to limit the
probability that mass timber elements significantly contribute to the fire spread and
severity in buildings [13].
Another essential consideration in CLT production is the adhesives used to bond
the lamellas together while manufacturing those panels. Only adhesives with low temperature sensitivity should be used, since the charring rate increases if delamination
occurs [14]. This is shown clearly by the study conducted by Muszyński et al. [9],
in which the difference between polyurethane (PUR) and melamine formaldehyde
(MF) on the behaviour of CLT panels when exposed to fire was studied. In addition,
Okuni and Bradford [11] studied the influence of the type of adhesives while focusing
on developing a model of the thermal performance of CLT panels exposed to fire. The
latter study investigated three adhesives: MF, phenol resorcinol formaldehyde (PRF),
and PUR. Examining the differences in the strength values of CLT panels manufactured with various types of adhesives, it was determined that MF and PRF have favourable thermo-mechanical properties compared to PUR and should therefore be recommended for the production of CLT panels [11]. In addition, there are higher chances of delamina-
tion occurring when PUR is used than for PRF and MF. Therefore, PRF and MF are
highly recommended over PUR in manufacturing CLT panels since they have better
thermal properties and are less likely to contribute to considerable delamination in
CLT panels when exposed to fire.
Another concern is that the fire resistance rating of a building element obtained from a standard fire test might not represent its fire performance when subjected to a natural fire [6]. In natural fires, the time–temperature curves rise more rapidly than the standard fire curve and exhibit distinct stages of growth, fully developed burning, and eventual decay. In contrast, standard fires have no decay stage but ever-increasing furnace temperatures, reaching up to 1260 °C at an 8-h duration. This means that the duration of the higher temperatures experienced in natural fire testing could have sizeable effects on the structural performance of CLT panels, as it directly correlates with how quickly the timber burns and thus with the loss of its mechanical properties. Studies conducted by Mindeguia et al. [8], Wiesner et al. [15], and Li et al. [6] dealt with the fire performance of CLT panels subjected to natural fires, and thus more realistic conditions, and consistently reported higher charring rates than those provided in Eurocode 5 [1]. The
current guidance in Eurocode 5 [1] determines the standard fire resistance of timber
elements and may not be directly applicable to non-standard fire testing.
Additionally, Lineham et al. [7] concluded that a more detailed and rational procedure needs to be developed and validated to model and predict the structural fire response of CLT panels exposed to non-standard fires. Accordingly, this demonstrates the need for accurate FE computer models that engineers can use. Fire testing is time-consuming and costly, and computer simulations therefore allow the product to be optimized for each purpose [12]. In general, CLT sections gradually lose strength, governed mainly by the in-depth thermal penetration and the resulting loss of their mechanical properties [15]. While a fair number of studies have been conducted on CLT floor systems, there remain relationships that are not yet fully understood and that must be clarified before the fire resistance of CLT floors can be confidently determined for each situation and level of fire exposure.
2 Methodology
A quantitative literature review on CLT floor slabs has been conducted and is included in this chapter to examine their behaviour and the changes they undergo when exposed to fire. Additionally, an examination of the different attempts at modelling
the behaviour of such mass timber sections to assist in design calculations has
been performed. Information was collected using secondary data to evaluate other
researchers’ findings and experimental results. The results presented in five research
articles selected out of the several ones cited in the introduction section of this chapter
have been addressed and compared in detail. This included conducting a content
analysis, identifying patterns, and drawing conclusions across the five selected arti-
cles. The experimental programs presented in the reviewed research articles varied
regarding the scale of testing, type of testing, make-up of CLT sections, etc. The
numerical analysis techniques used in a few of the studies included in the five
reviewed articles looked at many of the same relationships and generally had good
agreement with experimental results. Even with the variances across the different
reviewed articles, many of the same conclusions were drawn, providing consistency
in the experimental results and numerical predictions. This speaks to the validity
of the experimental work since lab-based experiments cannot consistently be repli-
cated, especially considering the specimens’ different layouts and testing methods
and procedures.
3 Research Methods
According to the reviewed articles, the research methods used while studying the
fire performance of CLT panels can be seen in Table 1. The table provides a summarized breakdown of the experimental and numerical work carried out in each study. The
critical differences are the type of testing, fire scenarios investigated, objectives, and
modelling techniques. Full-size fire experiments are more desirable since using larger
specimens for fire resistance tests is more realistic and allows capturing the actual
performance of structural components and assemblies under fire exposure. However,
it is not always possible to perform such tests due to high cost and lack of labora-
tory capabilities. Therefore, many researchers preferred to perform smaller-scale fire
tests. As for investigating the effects of different fire scenarios, a comparison between
the influence of standard fire and natural compartment fires is very beneficial.
Figure 1 illustrates the different testing schematics for the natural fire conditions
adopted in the experimental study conducted by Mindeguia et al. [8]. Figure 2 shows
a general fire resistance test setup adopted in the experimental study conducted by
Wang et al. [14], a typical schematic for standard fire tests of CLT floor assemblies.
Most of the fire testing completed when studying the behaviour of CLT floor slabs
followed standard fire testing conditions as it is the most straightforward approach
to compare results with different parameters or different materials for equivalent
fire severity. However, studying test specimens under natural fire conditions allows a more accurate representation of fire in a realistic setting. The key benefit of natural fire exposure is the ability to adjust for different ventilation conditions, which mainly affect the fire growth rate; when a fire reaches flashover, the ventilation also influences the duration of the steady-state burning stage. The studies conducted by Mindeguia
et al. [8] and Wiesner et al. [15] were part of the Epernon Fire Test Programme,
a multi-partner collaborative research project launched in 2017 and focused on the
comparison between natural and standard fires in their effects on buildings.
To demonstrate the importance of ventilation conditions, and that a good understanding of the fire dynamics is necessary to predict the behaviour of the tested material, the researchers followed the three different scenarios shown in Fig. 1. Different in-depth temperatures and structural behaviour were recorded for each scenario.
Another important distinction between the fire experiments presented in the five
selected articles is the design and setup of the experiments and how the change of
study parameters was altered. For the studies conducted by Frangi et al. [5] and Wang
et al. [14], the primary study parameter was changing the thickness and number of
plys used in the composition of the CLT panels, as well as the corresponding changes
in the temperature profile and charring rate of their test specimens. In the study
conducted by Mindeguia et al. [8], the structural capacity of the CLT panels was
investigated, while Wiesner et al. [15] examined the thermo-mechanical behaviour
of the CLT panels. Four of the five selected studies mentioned above compared
how the respective structural behaviour of CLT panels in question changed during
different fire scenarios. Whereas in the study conducted by Schmid et al. [12], the
focus was on the simulation through computer modelling of the CLT behaviour to
develop simplified design equations.
Fig. 1 Illustration of compartment configurations and corresponding opening factors for three
different natural fire tests [8]
Fig. 2 A general fire resistance test setup for CLT floor slabs [14]
4.1 Experimental
Fig. 3 Thermocouple arrangements of the tested CLT panels, a top plane view; b side view of
three-ply (left) and five-ply (right) CLT panels [14]
Fig. 4 Estimated char layer depths for standard fire exposure and natural fire scenarios based on
the measured location of the 300 °C isotherm within the CLT sections [15]
was higher than the nominal charring rate (0.635 mm/min). This is believed to be
because the CLT panels used in the said experiments were not edge glued. Thus,
the gap between the lamellas gradually increased as the fire progressed, leading to
local two-dimensional (2D) fire exposure resulting in an increased charring rate.
Based on the developed time–temperature curves of those experiments, it was also
concluded that the local falling off might be the dominant condition for the five-
ply CLT panels. Frangi et al. [5] focused their study on how the thickness of the
lamellas altered the behaviour of the CLT panels. According to their research, the
conclusion was that thicker layers are more favourable regarding fire performance
than sections with thin layers. In CLT panels with thinner layers, the fire-exposed layer falls off sooner, exposing the next inner layer to increased temperatures. Therefore, the char layer depth recorded by each group of researchers is an important parameter: it determines the residual strength of the CLT section that remains after fire exposure to sustain the applied loads without collapse. While, in general, the values agree reasonably well with the standard value from Eurocode 5 [1], there are still some differences [8]. For instance, the charring rate was calculated at 0.71 and 0.75 mm/min for the CLT slabs exposed to the two standard fires, whereas the charring of the slabs exposed to the three natural fire scenarios was less well predicted. Other variances in the experimental results included a change in the manufacturing process, as in the study conducted by Wang et al. [14], or a difference in the adhesive type used, as in the study conducted by Frangi et al. [5]. The char layer depth is an
essential parameter for CLT sections; however, it is also important to base design
recommendations on additional factors such as those shown by comparing the effects
of standard and natural fire scenarios on the fire performance of CLT panels.
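As a simple illustration of how a one-dimensional charring rate can be extracted from measured 300 °C isotherm positions, the sketch below fits a rate through the origin; the depth–time pairs are hypothetical and are not taken from the reviewed studies.

import numpy as np

# Hypothetical exposure times and 300 deg C isotherm depths, for illustration only
t_min  = np.array([15.0, 30.0, 45.0, 60.0])   # exposure time (min)
d_char = np.array([11.0, 22.0, 33.5, 45.0])   # char depth (mm)

# Charring rate as the slope of a least-squares fit constrained through the origin
beta = float(np.sum(t_min * d_char) / np.sum(t_min ** 2))   # mm/min
print("charring rate approx. %.2f mm/min (nominal value cited above: 0.635 mm/min)" % beta)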
4.1.3 Deflections
Fig. 5 Mean mid-span deflection versus time relationships of CLT panels for three different natural
fire and two standard fire scenarios [8]
Fig. 6 Mid-span deflection versus time relationships of CLT panels, a for the three-ply CLT panels;
b for the five-ply CLT panels [14]
subjected to flexural bending with relatively high load levels. In other words, once
the char front has passed the first ply, there is not enough strength remaining in the
CLT panel to sustain the applied loads as the CLT panel is basically down to only
the top ply in the case of a three-ply CLT panel.
4.2 Numerical
The ability to simulate the fire behaviour of CLT panels through computer modelling
is a challenging task to perform with a high level of accuracy. However, such an
alternative approach to study the fire behaviour of CLT panels is essential as fire
Fig. 7 Calculation process using CST Fire software, adopted for CLT to determine the bending
capacity of the heated cross-section [12]
After completing a critical review of the most relevant studies published in the available literature on the behaviour of CLT floor slabs exposed to fire, the benefits of using such mass timber products, and the reasons behind the rapidly growing interest in their utilization in tall wood buildings, are evident. However, more research is needed to reach a satisfactory level of knowledge to accurately simulate their behaviour when subjected to fire and thus provide simplified design equations that can be reliably utilized and ultimately implemented in design standards.
One challenging aspect regarding the development of guidelines for the fire resis-
tance of CLT floor slabs is the broad portfolio of choices for available CLT panels that
can be used in the design of mass timber buildings. This includes how the number
and thickness of the lamellae can affect the strength and overall fire behaviour of
CLT floor slabs. Changing the number of plies in a CLT panel is advantageous, as it significantly influences the strength and stiffness of the section according to the design requirements. The thickness of the plies should therefore be chosen carefully; according to the reviewed studies, thicker lamellae are recommended when designing CLT floor slabs for fire resistance. It is also an essential design aspect to realize that, as the char front progresses, it will encounter glue lines, which can lead to the charred layer of a CLT panel falling off.
An important area for further investigation is the consideration of different fire scenarios, as the charring behaviour of CLT floor slabs under natural fire scenarios was less well predicted than under a standard fire. Although understanding the charring behaviour is essential to determine the residual strength of CLT sections exposed to fire, it should not be the sole means of predicting their fire resistance. This is mainly due to the additional loss of the mechanical properties of the heated wood beneath the char layer during fire exposure and throughout the decay phase, which calls for a more in-depth understanding of the different failure mechanisms of CLT slabs subjected to fire, for instance the change in fire resistance when adhesives with different fire performance are used in manufacturing CLT panels, and the difference between local and global layer fall-off during fire exposure. In addition, the effects of in-depth smouldering in CLT slabs under prolonged fire exposure need to be further studied and analysed, as smouldering can lead to unexpected structural collapse.
In conclusion, this chapter presents a state-of-the-art review of the most recent
research studies on the fire resistance of CLT floor slabs through both experimental
and numerical approaches. It was concluded that fire exposure significantly influ-
ences the behaviour of CLT floor panels and their structural strength and stiffness.
While computer modelling efforts show reasonable agreement with test results, they still require further
development to accurately represent the thermal and mechanical properties of the
broad range of commercially available CLT panels concerning various natural and
standard fire scenarios.
References
1. CEN (2009) Eurocode 5: design of timber structures—Part 1–2: general—structural fire design,
British Standards Institute, London, UK
2. Connolly T, Loss C, Iqbal A, Tannert T (2018) Feasibility study of mass-timber cores for the
UBC tall wood building. Buildings 8(8):98
3. CSA O86 (2014) Engineering design in wood. Canadian Standards Association (CSA), Toronto,
Canada
4. CSA O86 (2019) Engineering design in wood. Canadian Standards Association (CSA), Toronto,
Canada
5. Frangi A, Fontana M, Hugi E, Jübstl R (2009) Experimental analysis of cross-laminated timber
panels in fire. Fire Saf J 44(8):1078–1087
6. Li X, Zhang X, Hadjisophocleous G, McGregor C (2015) Experimental study of combustible
and non-combustible construction in a natural fire. Fire Technol 51(6):1447–1474
7. Lineham SA, Thomson D, Bartlett AI, Bisby LA, Hadden RM (2016) Structural response of
fire-exposed cross-laminated timber beams under sustained loads. Fire Saf J 85:23–34
8. Mindeguia JC, Mohaine S, Bisby L, Robert F, McNamee R, Bartlett A (2020) Thermo-
mechanical behaviour of cross-laminated timber slabs under standard and natural fires. Fire
Mater 45(7):866–884
9. Muszyński L, Gupta R, Hong S, Osborn N, Pickett B (2019) Fire resistance of unprotected
cross-laminated timber (CLT) floor assemblies produced in the USA. Fire Saf J 107:126–136
10. NRC (2020) National building code of Canada. National Research Council of Canada, Ottawa,
ON, Canada
11. Okuni IM, Bradford TE (2020) Modelling of elevated temperature performance of adhesives
used in cross laminated timber: an application of ANSYS mechanical, 2020 R1 structural
analysis software. Environmental Sciences Proceedings 3(1):46
12. Schmid J, Klippel M, Just A, Frangi A, Tiso M (2018) Simulation of the fire resistance of
cross-laminated timber (CLT). Fire Technol 54(5):1113–1148
13. Su J (2018) Fire safety of CLT buildings in Canada. Wood and Fiber Science 50 (Special Issue:
CLT/Mass Timber):102–109
14. Wang Y, Zhang J, Mei F, Liao J, Li W (2019) Experimental and Numerical analysis on fire
behaviour of loaded cross-laminated timber panels. Adv Struct Eng 23(1):22–36
15. Wiesner F, Bartlett A, Mohaine S, Robert F, McNamee R, Mindeguia JC, Bisby L (2020)
Structural capacity of one-way spanning large-scale cross-laminated timber slabs in standard
and natural fires. Fire Technol 57(1):291–311
Experimental Fire Testing of Damaged
Glulam Beam Connections Retrofitted
with Self-tapping Screws
Abstract The main objective of the research study presented in this paper is to
investigate the effects of using self-tapping screws (STS) to retrofit damaged glulam
beam connections subjected to standard fire. In this experimental study, two full-size
glulam beam-end bolted connections with wood-steel-wood connection configura-
tion utilizing two different bolt patterns have been retrofitted using STS after being
deliberately damaged through physical testing until failure. In the connection config-
uration with the first bolt pattern (4BP1), two rows of bolts, each of two bolts, were
symmetrically positioned near the top and bottom sides of the beam section. Whereas
in the configuration with the second bolt pattern (4BP2), the bottom row of bolts was
shifted upward to be located at the mid-height of the beam section to further contribute
to the moment-resisting capacity of the connection. Subsequently, the retrofitted
connections were experimentally tested at elevated temperatures that followed the
CAN/ULC-S101 standard fire time–temperature curve while being loaded to the
maximum design load of the weakest undamaged, unreinforced connection config-
uration. The experimental results of the retrofitted glulam beam connections were
compared to those of identical but undamaged, unreinforced connections that were
experimentally tested in a prior related study to highlight the influence of STS in
strengthening the damaged connections when subjected to fire. Results show that the
retrofitted glulam beam connections maintained a minimum of approximately 67%
of the fire resistance time of identical but undamaged, unreinforced connections.
1 Introduction
During the lifespan of a timber building, its structural elements may exhibit a
variety of defects and failures. This may include splits that can develop due to
shrinkage because of changes in the moisture content of wood, or due to excessive flexural, tensile, or shear stresses. Various retrofitting techniques are used to strengthen damaged timber elements and rehabilitate such timber buildings. Self-tapping screws (STS) are among the most economical and accessible retrofitting techniques for timber structural components, especially connections. Although STS
have been proven to be very effective in enhancing the strength of glued-laminated
timber (glulam) beams with wood-steel-wood (WSW) connections in construction,
there have been only very few studies on the effects of STS on retrofitting glulam
beams with such type of commonly used connection in fire conditions [1, 2]. Hybrid
connections such as WSW are widely used in mass timber construction to join wood
members and transfer loads.
The effect of STS on the strength of bolted glulam connections subjected to
bending was investigated by Lam et al. [3]. In that study, three beam configurations
were tested: unreinforced, reinforced, and retrofitted. The STS, 300 mm in length
and 8 mm in diameter, were placed perpendicular to the wood grain in the beam
section. A slotted-in steel plate was used to connect the glulam beam and column
sections. The beams were subjected to monotonic loading until failure. The results
of their experimental program showed that the maximum moment resistance of the
unreinforced connections was 31.49 kN m with a maximum rotation of 2.97°. In
comparison, the reinforced connections had a moment resistance of 65.88 kN m with
a maximum rotation of 16.59°, whereas the retrofitted connections had a moment
resistance of 58.85 kN m and exhibited a maximum rotation of 13.29°. Therefore, the use of STS in the reinforced connections increased the moment-carrying capacity by a factor of 2.1, whereas the use of STS in the retrofitted connections increased their moment resistance by a factor of 1.87. Also, as evident from the
connection rotation values attained, the ductility of the connections was significantly
increased with the utilization of STS.
To understand the effect of STS on the ductility and load-carrying capacities of
structural wood members, Blaß and Schädle [4] conducted experiments on glulam
beams with dowel-type connections that were reinforced with STS and subjected to
tensile loads applied parallel to wood grain. Three different connection configura-
tions were experimentally examined. In the first configuration, no STS reinforcement was used; in the second, ten STS were used; and in the third, twenty STS were used. Upon completion of the experimental
program, it was observed that the unreinforced connections experienced minimal
displacements and low failure load (i.e. 4 mm maximum displacement at 33.0 kN
failure load). When ten STS were used, the ductility and strength of the connections
increased. More considerably, when twenty STS were used, the reinforced connec-
tions experienced a significant increase in ductility and load-carrying capacity (i.e.
16 mm maximum displacement at 40.0 kN failure load).
In a prior study that is related to the present one, Owusu et al. [5] conducted
an experimental investigation on the behaviour of unreinforced glulam beam bolted
connections subjected to standard fire. The study was conducted on WSW connection configurations with two different bolt patterns, identical to the configurations examined in the present study. In the first pattern (P1), two rows of bolts, each of two bolts, were placed symmetrically close to the top and bottom sides of the beam section, whereas in the second pattern (P2), the bottom row of bolts was raised to the mid-height of the beam to contribute further to the moment resistance of the connection by being closer to the top side of the beam section, which is subjected to tensile forces. The failure time of
the connection configuration with the first bolt pattern (P1) was 33 min, whereas for
the configuration with the second bolt pattern (P2), its failure time was 32 min. It
was observed during the fire tests that both connections began to experience slight
wood splitting and row shear out at the top row of bolts, which ultimately led to
the failure of the connections. In terms of connection rotations, it was observed
that both connection configurations experienced similar maximum rotation values
(approximately 0.2 radian). Therefore, it was concluded that the bolt pattern utilized
in the connections had a limited effect on the rotation or the failure time of the
connection in standard fire conditions [5].
The study presented in this paper aimed to experimentally investigate the effect
of using self-tapping screws to retrofit glulam beams with damaged WSW bolted
connections. The results of this experimental study were compared to those of undam-
aged, unreinforced connections that were examined by Owusu et al. [5] to evaluate the
strengthening effect of STS reinforcement on the connections subjected to standard
fire.
2 Experimental Program
Initially and as part of the prior related study conducted by Owusu et al. [5], the
glulam beam-end connections were tested until failure and deliberately damaged in
ambient conditions. Subsequently, the damaged beams were retrofitted using STS
and tested again but under standard fire exposure in the current study. The two test
specimens were exposed to elevated temperatures that followed the CAN/ULC-S101
standard fire time–temperature curve. Three sides of the beam (except the top one)
were exposed to elevated temperatures during fire tests. The fire experiments took
place at Lakehead University Fire Testing and Research Laboratory (LUFTRL).
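CAN/ULC-S101 tabulates the standard fire time-temperature curve rather than giving a closed-form expression. The short sketch below uses Lie's well-known approximation of that curve; this formula is an assumption introduced here purely for illustration, since the furnace control in the tests followed the tabulated standard curve itself.

```python
import math

def ulc_s101_temperature(t_min: float, t0: float = 20.0) -> float:
    """Approximate CAN/ULC-S101 / ASTM E119 standard fire temperature (deg C).

    Uses Lie's approximation: T = T0 + 750*(1 - exp(-3.79553*sqrt(th))) + 170.41*sqrt(th),
    where th is the exposure time in hours. Illustrative only.
    """
    th = t_min / 60.0
    return t0 + 750.0 * (1.0 - math.exp(-3.79553 * math.sqrt(th))) + 170.41 * math.sqrt(th)

# Furnace temperatures at the failure times discussed later in this paper
for t in (22, 27, 32, 33):
    print(f"t = {t:2d} min -> approx. {ulc_s101_temperature(t):.0f} deg C")
```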
The following sections describe the ambient and fire testing procedures, the
materials used, the loading procedure, and test setup details.
2.1 Materials
The glulam beams examined in the present study were preserved from the tests
conducted by Owusu et al. [5]. The 1600-mm long beams had cross-sectional dimen-
sions of 184 mm × 364 mm and were made of black spruce pine glulam with stress
grade 24f-EX. The laminas utilized in manufacturing the glulam beam sections were
of approximately 25 × 50 mm cross-sectional dimensions. The mechanical properties
of the glulam beams used are shown in Table 1.
A 12.7-mm (1/2 in.) thick grade 300W steel plate T-stub connector was used to connect
the glulam beam to the supporting steel column. Figure 1 shows a schematic of the
T-stub steel connectors used in the connection configurations with the two patterns
of bolts (P1 and P2).
The STS used to retrofit the connections were SWG ASSY VG plus CSK [7]. The
screws had a length of 300 mm and 8 mm outer thread diameter. Before driving the
STS into the glulam beam section, a pilot hole was drilled for each screw using a
3-mm diameter drill bit to approximately 200 mm deep into the beam section (about
two-thirds of the screw length). The predrilling was deliberately done to prevent
premature splitting in the wood.
Fig. 1 Schematics of the T-stub steel connectors utilized in the two connections, a configuration
with bolt pattern P1; b configuration with bolt pattern P2 (all dimensions in mm)
The results of two full-size fire experiments are presented in this paper. Both tests
involved fire resistance testing of damaged glulam beam-end connections that were
retrofitted using STS driven perpendicular to wood grain and subjected to standard
fire exposure. In the connection configuration with the first bolt pattern (namely
4BP1), four bolts were arranged symmetrically near the top and bottom sides of the
beam, while in the configuration with the second bolt pattern (namely 4BP2), the bottom row of bolts was raised to the mid-height of the beam cross-section to contribute further to the moment resistance of the connection by being located nearer the top side of the beam, which is under tensile forces. Six STS were driven into the beam cross-section
perpendicular to wood grain in each connection. Table 2 summarizes the test matrix
of the experimental study presented in this paper.
Figures 2 and 3 show schematics of the beam-end connection configurations with
four connecting bolts in two different arrangements, patterns P1 and P2, respectively.
Fig. 2 Details of the beam-end connection configuration with four bolts arranged in the first pattern
(4BP1)
Fig. 3 Details of the beam-end connection configuration with four bolts arranged in the second
pattern (4BP2)
Before commencing the fire testing phase, the two beam-end connection configura-
tions were tested at ambient temperature as part of a prior related study conducted by
Owusu et al. [5]. Each beam was connected to a steel supporting column by a steel
T-stub connector and four 19.1-mm (3/4 in.) diameter A325M high-strength structural
bolts. The bolt end distance, edge distance and spacing were designed as per CSA
O86-19 [8]. A slotted cut of 15-mm width was prepared at the centre of the beam
cross-section to accommodate the steel connecting plate, allowing a tolerance of
approximately 1 mm on each side of the embedded steel plate as per CSA O86-19
[8].
Fig. 4 Failure of the WSW connection configurations tested in ambient condition, a configuration
with four bolts in pattern P1; b configuration with four bolts in pattern P2 [6]
The beam-end connections were deliberately loaded under bending moment and
indirect shear force until failure, when splits developed along the top (occurred first)
and bottom (occurred afterwards) rows of bolts. Figure 4 illustrates the typical failure
of the WSW connection configurations observed during the ambient tests carried out
by Owusu et al. [5].
After the beam-end connections were damaged through ambient testing, they
were retrofitted using STS in preparation for the fire tests of the present study. Six
self-tapping screws were used to reinforce the beam section at its end connection and
were driven perpendicular to wood grain using an electrical impact wrench. Figure 5
shows the top side of one of the STS-reinforced glulam beam specimens. The STS
were installed in pairs and spaced at 100 mm such that the first pair was centred
between the beam end and the first column of bolts; the second pair was centred
between the two columns of bolts; and the third pair was 50 mm beyond the second
column of bolts into the beam length away from its connected end.
The fire tests of the retrofitted beam-end connections were conducted in the large-size
fire testing furnace accommodated at Lakehead University Fire Testing and Research
Laboratory (LUFTRL). The setup of the fire tests was similar to that used in the ambient tests, except that several layers of 1.0-inch-thick ceramic fibre blankets were used to
insulate the steel supporting and loading components as well as the top side of the
beam to simulate the existence of a slab on top of the beam as it would be in the
case of an actual construction configuration. Thus, the beams were exposed to fire
on three sides only since the top side was fire protected with the applied insulating
ceramic fibre blankets.
The two different connection configurations were exposed to elevated temper-
atures that followed the CAN/ULC-S101 standard fire time–temperature curve [9]
while being loaded to a service load equivalent to the maximum design load capacity
of the weakest undamaged, unreinforced connection configuration (4BP1). The
connection failure criterion for the fire tests was set to a maximum beam-end deflection corresponding to a connection rotation of 0.1 rad. However, the fire tests were continued beyond this point, until the beam could no longer sustain the applied load.
The applied load was maintained at 10.5 kN throughout the fire tests, which
generated a bending moment of 14.8 kN m on the glulam beam-end connection.
This is the same load level that was applied to the identical undamaged, unreinforced
connection configurations experimentally tested by Owusu et al. [5] under standard
fire exposure. Consistency in applying the same load allowed direct comparison of
the fire resistance test results of the present study against those of the prior related
study conducted by Owusu et al. [5]. The load was applied using a loading cylinder
connected to a manual hydraulic pump. A load cell was placed outside the furnace
on top of the loading steel post and under the piston of the hydraulic loading cylinder
to precisely measure the magnitude of the applied loads during fire tests. Figure 6
shows one of the glulam beam test assemblies installed inside the fire testing furnace
with all instrumentation and fire insulating layers installed.
After retrofitting the damaged beam-end connections using the STS and before fire
testing, the width of the major splits in each specimen was measured using a digital
calliper to assess the effects of those splits on the fire resistance of the retrofitted
connections. The initial condition of the retrofitted beam-end connections evaluated
right before fire tests is shown in Fig. 7.
2.4 Instrumentation
During fire tests, the increased deflections of the beam free end were measured using
a draw-wire displacement transducer installed outside the furnace and attached to the
top of the steel loading post. The measured beam vertical deflections and the distance
between the connection interface with the supporting column and the applied load
location were used to calculate the beam-end connection rotations with respect to
the sturdy vertical supporting steel column.
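As a minimal illustration of this reduction, the sketch below converts a measured free-end deflection into a connection rotation using the lever arm between the connection interface and the load point. The 1.41 m lever arm is a hypothetical value back-calculated from the reported 10.5 kN load and 14.8 kN m connection moment, not a dimension stated in the paper.

```python
import math

LEVER_ARM_M = 14.8 / 10.5      # ~1.41 m, back-calculated (assumed) distance from
                               # the connection interface to the applied-load point

def connection_rotation(deflection_mm: float, lever_arm_m: float = LEVER_ARM_M) -> float:
    """Beam-end connection rotation (rad) from the measured vertical deflection."""
    return math.atan((deflection_mm / 1000.0) / lever_arm_m)

# Deflection corresponding to the 0.1 rad failure criterion used in the fire tests
d_fail = math.tan(0.1) * LEVER_ARM_M * 1000.0   # mm
print(f"Assumed lever arm: {LEVER_ARM_M:.2f} m")
print(f"Deflection at 0.1 rad: {d_fail:.0f} mm")
print(f"Rotation at 30 mm deflection: {connection_rotation(30.0):.3f} rad")
```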
In addition to the mechanical measurements, thermal measurements were also
captured during fire resistance tests.
Fig. 6 A glulam beam test assembly installed inside the fire testing furnace with all instrumentation and fire insulating layers installed
Fig. 7 Condition of the retrofitted beam-end connection configurations assessed right before fire tests, a connection configuration with bolt pattern P1; b connection configuration with bolt pattern P2
Accordingly, twelve metal-shielded Type-K thermocouples were connected to each test specimen at several locations within the
beam-end connection. Thermocouples were inserted at different depths inside the
beam section from both front and back faces and from the top and bottom sides
of the beam to determine the temperature variations due to heat transfer inside the
wood throughout the fire tests. The measured temperatures were used to determine
the actual wood charring rates. However, no thermal measurements are reported in
this paper.
Fig. 8 Connection configuration with four bolts arranged in the first bolt pattern (4BP1) undergoing
fire testing
but undamaged, unreinforced connection tested by Owusu et al. [5], assuming that
utilizing the STS as a retrofitting technique would restore the full strength of the
damaged connection and thus suggesting the same original fire resistance time (i.e.
33 min).
As for the STS-reinforced connection configuration with its four bolts arranged in
the second pattern (4BP2), it achieved a failure time of 27 min under the same load
level applied on an identical but undamaged, unreinforced connection configuration
tested by Owusu et al. [5], which had a failure time of 32 min. This indicates an
approximately 84% recovery of the connection fire resistance time owing to the efficiency of the STS retrofitting technique. This connection configuration failed in ambient
testing for the same reason as the failure of the connection configuration with the
first bolt pattern (4BP1), which was excessive splitting along the top row of bolts.
However, the width of the major split exhibited by this connection configuration was
only 1 mm. Images of the retrofitted connection taken during the fire test are shown
in Fig. 9.
During the first 15 min of the fire test, it was observed that the already existing
split at the connection did not propagate or increase in width and that the beam-end
connection remained solid and intact. As the fire test progressed, localized charring was observed at a relatively large wood knot near the top row of bolts (refer to Fig. 7b). Therefore, it is believed that the presence of that natural
defect (wood knot) negatively affected the behaviour of this specific connection under
standard fire exposure. Otherwise, this connection configuration could have achieved
a longer fire resistance time, as the original split along the top row of bolts remained
primarily unchanged until failure, mainly due to the binding effect of the utilized
STS.
Fig. 9 Connection configuration with four bolts arranged in the second bolt pattern (4BP2)
undergoing fire testing
Figure 10 depicts the time-rotation curves for the connection configurations with the
two bolt patterns (P1 and P2). For the configuration with the four bolts arranged in
the first pattern (4BP1), the connection rotation increased in a relatively linear trend for approximately the first 8 min and then rose in a more nonlinear fashion from around 19 min until failure at 22 min. For the connection configuration with the second bolt pattern (4BP2), the connection rotation increased in a relatively linear trend for approximately the first 10 min, followed by a slightly steeper slope until about 20 min into the fire test, when the time-rotation relationship became markedly steeper until failure at 27 min.
As a result of the extended duration of the initial linear trend of the time-rotation
relationship exhibited by the configuration with the second bolt pattern (4BP2), the
said configuration had a 5 min longer failure time. This can be attributed to the smaller width (only 1 mm) of the initial split that existed in this connection compared to that
(5 mm) in the configuration with the same number of bolts but arranged in the first bolt
pattern (4BP1). As previously mentioned, a wider initial split resulted in more heat
penetrating the core of the beam section. Thus, this led to accelerated localized wood
charring, which negatively affected the fire behaviour of the retrofitted connections.
For the identical but undamaged, unreinforced connection configurations that
were tested by Owusu et al. [5] and as depicted in Fig. 11, it was noticed that
both connection configurations experienced little to no rotation for the first 17 min
approximately. Afterwards, the time-rotation relationships for both configurations
followed a more nonlinear trend from about 23 min until failure at 33 min and 32 min for the configurations with the first and second bolt patterns (4BP1 and 4BP2), respectively.
[Figs. 10 and 11: time-rotation relationships for the connection configurations with bolt patterns 4BP1 and 4BP2; Rotation (rad) versus Time (s)]
Looking at all time-rotation relationships in Figs. 10 and 11, it can be noticed
that the undamaged, unreinforced connections behaved in a more ductile manner
compared to the damaged, STS-reinforced connections. This can be attributed
to the fact that the implementation of the STS significantly increased the stiffness
of the retrofitted connections, which also influenced the fire resistance of those
connections.
In the experimental study presented in this paper, the fire behaviour of damaged
wood-steel-wood glulam beam-end connections with different bolt patterns that were
retrofitted using self-tapping screws was investigated. The study focussed on the
performance of the tested connections in terms of their failure time and rotational
behaviour under standard fire exposure. The experimental results of the present study
were compared to those of a prior related study on identical but undamaged, unrein-
forced connections that was conducted by Owusu et al. [5]. The following are the main conclusions of this experimental study.
• Although the utilization of STS resulted in the recovery of a substantial portion of the strength and fire resistance time, the full original strength and fire resistance of the connections were not recovered. For instance, the undamaged, unreinforced connection configuration with the first bolt pattern (4BP1) had a failure time of 33 min, whereas the identical but damaged, STS-reinforced
connection sustained the same applied load level for only 22 min under standard
fire exposure, which indicates approximately 67% recovery of the connection fire
resistance time. Also, the undamaged unreinforced connection configuration with
the second bolt pattern (4BP2) had a failure time of 32 min, whereas the identical
but damaged, STS-reinforced connection sustained the same applied load level
for only 27 min under standard fire exposure, which indicates approximately 84%
recovery of the connection fire resistance time.
• The width of the initial splits developed in the glulam beams at the connec-
tion because of the ambient testing considerably affected the failure time of the
connections. For instance, the wider initial split (5 mm) in the connection configuration with the first bolt pattern (4BP1), compared to the narrower split (only 1 mm) in the configuration with the second bolt pattern (4BP2), allowed more heat to penetrate the core of the beam section and led to accelerated localized wood charring, which negatively affected the fire behaviour of the retrofitted connection and thus reduced its failure time.
• Utilization of self-tapping screws is an economical and efficient technique for
retrofitting damaged wood structural elements and connections. Based on the
outcomes of the current study, such a retrofitting technique can recover a substantial portion of the strength and fire resistance time of such structural building components.
Acknowledgements This research project was partially funded using the Discovery Grant awarded
to the second author by the Natural Sciences and Engineering Research Council of Canada (NSERC).
The authors would like to thank lab technologists Cory Hubbard and Morgan Ellis for their assistance
in the Civil Engineering’s Structures Laboratory at Lakehead University.
References
7. CCMC (2014) Evaluation report: SWG ASSY VG plus and SWG ASSY 3.0 self-tapping wood
screws. Canadian Construction Materials Centre Report No. CCMC 13677-R, National Research
Council of Canada, Ottawa
8. CAN/CSA O86-19 (2019) Engineering design in wood. Canadian Standards Association,
Canada, Rexdale, Ontario
9. CAN/ULC S101-19 (2019) Standard methods of fire endurance tests of building construction
and materials, 5th edn. Underwriters Laboratories of Canada, Ottawa
Finite Element Modelling
of CLT-Concrete Composite Sections
Utilizing Wood Screws as Shear
Connectors
1 Introduction
Timber structures that utilize timber-concrete composite (TCC) floor systems have been gaining popularity rapidly. Nowadays, there is considerable interest in their applications in mass timber buildings and bridges for several reasons, mainly because timber is an environmentally friendly material and such structural systems have low labour costs [9].
Such composite slabs consist of a timber section on the tension side, a concrete layer
on the compression side, and adequate shear connectors to activate the composite
action between the timber section and the concrete layer. In TCC systems, timber is
used as the primary tensile load-carrying material due to its high strength-to-weight
ratio. In contrast, concrete is used for its stiffness and acoustic separation advantages.
It was found that the reduction of the timber cross-section due to charring in fire governed the failure of TCC systems [14]; however, the shear connections between timber and concrete in such composite sections determine their ultimate failure state.
Eurocode 5 [2] introduced the mechanically jointed beams theory (gamma method)
for flexible elastic shear connections based on the differential equation for partial
composite action in TCC sections [11, 13]. The primary technique to provide a
sufficient connection between timber and concrete is to use shear connectors which
can come in a wide variety of metal connectors, grooved connections, and adhesive
connections [18]. The strength and stiffness of the utilized shear connectors can be
estimated using the gamma method from Annex B of Eurocode 5 [2], especially
when considering conventional floor systems where the span is at least 25 times the
depth of the section. Conventional and simple metal shear connectors in common
TCC systems are metal screws.
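To make the gamma-method reference concrete, the sketch below evaluates the effective bending stiffness of a simplified two-layer TCC strip following the Eurocode 5 Annex B formulation. The section dimensions, moduli, connector slip modulus, spacing, and span are illustrative assumptions (the CLT cross layers are ignored for simplicity), not values taken from the cited studies.

```python
import math

def gamma_method_EI(E1, A1, I1, E2, A2, I2, h1, h2, K, s, L):
    """Effective bending stiffness (EI)_ef of a two-layer composite section
    (1 = concrete layer, 2 = timber/CLT layer) per the Eurocode 5 Annex B
    gamma method. K: connector slip modulus, s: connector spacing, L: span (SI units)."""
    gamma1 = 1.0 / (1.0 + math.pi**2 * E1 * A1 * s / (K * L**2))
    gamma2 = 1.0
    # Distances of the layer centroids from the composite neutral axis (full contact assumed)
    a2 = gamma1 * E1 * A1 * (h1 + h2) / 2.0 / (gamma1 * E1 * A1 + gamma2 * E2 * A2)
    a1 = (h1 + h2) / 2.0 - a2
    return (E1 * I1 + gamma1 * E1 * A1 * a1**2) + (E2 * I2 + gamma2 * E2 * A2 * a2**2)

# Illustrative 600 mm wide strip: 50 mm concrete on an assumed 105 mm three-ply CLT panel
b, h1, h2 = 0.600, 0.050, 0.105           # m (assumed)
E1, E2 = 25e9, 9.5e9                      # Pa (assumed moduli)
A1, A2 = b * h1, b * h2
I1, I2 = b * h1**3 / 12.0, b * h2**3 / 12.0
EI = gamma_method_EI(E1, A1, I1, E2, A2, I2, h1, h2, K=10e6, s=0.150, L=4.0)
print(f"(EI)_ef ~ {EI/1e6:.2f} MN*m^2")
```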
As far as mechanical properties are concerned, wood is highly anisotropic; in particular, its modulus of elasticity and strength differ considerably between the parallel- and perpendicular-to-wood-grain directions. Regardless of grain orientation, wood also has different strengths in tension and compression, exhibiting nearly linear elastic-brittle failure behaviour in tension and nonlinear behaviour with limited ductility up to the ultimate stress in compression [12, 18]. Due to this pronounced anisotropy,
modelling timber structures requires constitutive laws that can adequately capture
the stress–strain relationship and failure of wood under multi-axial stress states [5].
Wood construction materials include sawn timber and engineered wood products such
as cross-laminated timber (CLT). CLT is a proprietary mass timber product fabri-
cated with wood laminations arranged in alternating perpendicular layers (commonly
ranging from 3 to 9), with the outer layers oriented in the direction of the major
strength axis. Following such a manufacturing technique, the finished product has
higher mechanical characteristics than the individual laminations.
The main aim of the present study is to simulate the shear behaviour of CLT-
concrete composite sections that utilize self-tapping screws (STS) as shear connectors
based on a prior related experimental study conducted by Salem and Virdi [16] on
TCC sections under shear forces.
Salem and Virdi [16] conducted an experimental study on eight medium-size CLT-
concrete composite sections under direct shear forces (Fig. 1). There were three
parameters to investigate, including the age of concrete (i.e. 14 days and 28 days), the screw spacing (i.e. 150 and 200 mm), and the number of screw rows (i.e. 2 and 3 rows). The three-ply CLT panels utilized in the composite sections were all made of spruce-pine-fir (S-P-F) planks and were cut to 600 mm wide × 1200 mm long.
Rugged structural steel self-tapping screws of 8 mm diameter and 100 mm length
were utilized as shear connectors in those CLT-concrete composite sections. The
screws were designed with a special thread that helps enlarge the hole, allowing easy penetration into the wood without the need for pre-drilled holes, and thus maintains a high screw withdrawal resistance. The STS were driven into the CLT panels in pairs, with
each screw inclined 45° in the opposite direction with respect to the other. Most
of the screw length was driven into the CLT panel, maintaining the screw head at
25 mm distance perpendicular to the panel top surface (at 50% of the thickness of the
concrete layer). The mechanical properties of the utilized screws as per the ICC-ES
Evaluation report (2017) are provided in Table 1. The targeted concrete compressive
strength was 30 MPa based on a concrete mix design that included 300 kg (40%)
fine aggregate; 300 kg (40%) coarse aggregate; 90 kg (12%) cement; and 60 kg (8%)
water.
Table 1 Mechanical properties of self-tapping screws (adapted from ICC-ES Evaluation report, 2017)
  Property: Allowable strength (MPa)
  Bending strength: 1175.0
  Tensile strength: 1298.0
  Shear strength: 881.0
In this study, the Hashin damage model [7] is employed, as it can adequately capture the failure modes of wood under biaxial stress conditions. The two-dimensional failure criteria used to predict the onset of damage, together with the damage evolution law, are based on the energy dissipated during the damage process and linear material softening [1]. The model considers four damage initiation mechanisms, defined in terms of the effective stress components \(\hat{\sigma}_{ij}\) (i, j = 1, 2), as per Eq. (1):
\[
\begin{aligned}
F_f^t &= \left(\frac{\hat{\sigma}_{11}}{X_T}\right)^2 + \alpha\left(\frac{\hat{\sigma}_{12}}{S_L}\right)^2 &&\text{(tensile failure of fibres)}\\
F_f^c &= \left(\frac{\hat{\sigma}_{11}}{X_C}\right)^2 &&\text{(compression failure of fibres)}\\
F_m^t &= \left(\frac{\hat{\sigma}_{22}}{Y_T}\right)^2 + \left(\frac{\hat{\sigma}_{12}}{S_L}\right)^2 &&\text{(tensile failure of matrix)}\\
F_m^c &= \left(\frac{\hat{\sigma}_{22}}{2S_T}\right)^2 + \left[\left(\frac{Y_C}{2S_T}\right)^2 - 1\right]\frac{\hat{\sigma}_{22}}{Y_C} + \left(\frac{\hat{\sigma}_{12}}{S_L}\right)^2 &&\text{(compression failure of matrix)}
\end{aligned}
\tag{1}
\]
where X_T and Y_T are the wood tensile strengths along the parallel and perpendicular-to-wood-grain directions, respectively; X_C and Y_C are the wood compressive strengths along the parallel and perpendicular-to-wood-grain directions, respectively; S_L and S_T are the wood shear strengths along the parallel and perpendicular-to-wood-grain directions; and α is a factor that determines the shear contribution to the fibre tensile initiation criterion. The effective stress vector \(\hat{\sigma} = [\hat{\sigma}_{11}\ \hat{\sigma}_{22}\ \hat{\sigma}_{12}]^T\) is related to the stress vector \(\sigma = [\sigma_{11}\ \sigma_{22}\ \sigma_{12}]^T\) by Eq. (2):
\[
\hat{\sigma} = \omega\,\sigma \tag{2}
\]
where ω is a diagonal damage operator; ω_f and ω_m are scalar damage variables along the parallel and perpendicular-to-wood-grain directions, respectively; and ω_s is the shear damage variable, expressed as a function of ω_f and ω_m.
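A compact sketch of how the four initiation criteria in Eq. (1) can be evaluated is given below; the strength values used in the example are placeholders for illustration, not the CLT properties used in the FE models.

```python
def hashin_2d(s11, s22, s12, XT, XC, YT, YC, SL, ST, alpha=1.0):
    """Evaluate the four 2D Hashin damage-initiation indices of Eq. (1).

    s11, s22, s12: effective stresses (parallel, perpendicular, and shear to grain).
    Damage initiates in a mode when its index reaches 1.0.
    """
    f = {}
    if s11 >= 0.0:
        f["fibre_tension"] = (s11 / XT) ** 2 + alpha * (s12 / SL) ** 2
    else:
        f["fibre_compression"] = (s11 / XC) ** 2
    if s22 >= 0.0:
        f["matrix_tension"] = (s22 / YT) ** 2 + (s12 / SL) ** 2
    else:
        f["matrix_compression"] = ((s22 / (2.0 * ST)) ** 2
                                   + ((YC / (2.0 * ST)) ** 2 - 1.0) * s22 / YC
                                   + (s12 / SL) ** 2)
    return f

# Placeholder strengths (MPa), purely for illustration
print(hashin_2d(s11=20.0, s22=-1.5, s12=2.5,
                XT=30.0, XC=28.0, YT=2.5, YC=5.0, SL=5.0, ST=4.0))
```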
There are two approaches available in ABAQUS [1] to predict the behaviour of concrete, including cracking and crushing: the smeared crack model and the plastic damage model. In the present study, owing to its better convergence behaviour, the plastic damage model was selected for simulating the concrete layer of the TCC section subjected to
shear forces. The concrete plastic damage model assumes two concrete failure mech-
anisms: the cracking and crushing continuum damage mechanism and the stiffness
degradation mechanism that defines crack propagation.
Concrete plastic damage models require input values such as modulus of elas-
ticity, Poisson’s ratio, plastic damage parameters, and description of compressive
and tensile behaviour of the modelled materials. The five plastic damage parameters
are the dilation angle; the flow potential eccentricity; the ratio of initial equi-bi-axial
compressive yield stress to initial uniaxial compressive yield stress; the ratio of the
second stress invariant on the tensile meridian to that on the compressive meridian;
and the viscosity parameter that defines visco-plastic regularization.
The compression stress–strain curve of concrete proposed by Saenz [15] has been
used to define the uniaxial compressive stress–strain curve for concrete according to
Eq. (4).
\[
\sigma_c = \frac{E_c\,\varepsilon_c}{1 + (R + R_E - 2)\left(\dfrac{\varepsilon_c}{\varepsilon_0}\right) - (2R - 1)\left(\dfrac{\varepsilon_c}{\varepsilon_0}\right)^2 + R\left(\dfrac{\varepsilon_c}{\varepsilon_0}\right)^3} \tag{4}
\]
where
\[
R = \frac{R_E (R_\sigma - 1)}{(R_\varepsilon - 1)^2} - \frac{1}{R_\varepsilon}, \qquad R_E = \frac{E_c}{E_0}, \qquad E_0 = \frac{f_c}{\varepsilon_0} \tag{5}
\]
The tensile softening behaviour of concrete is described by Eqs. (6) and (7):
\[
\frac{\sigma_t}{f_{ct}} = \left[1 + \left(c_1 \frac{w_t}{w_{cr}}\right)^3\right] e^{-c_2 \frac{w_t}{w_{cr}}} - \frac{w_t}{w_{cr}}\left(1 + c_1^3\right) e^{-c_2} \tag{6}
\]
\[
w_{cr} = 5.14\,\frac{G_f}{f_{ct}} \tag{7}
\]
In Eq. (7), f_ct is the concrete tensile strength, while G_f is the concrete tensile fracture energy, which can be estimated using Eq. (8) as per Coronado and Lopez [4]:
\[
G_f = 2.5\,\alpha_0 \left(\frac{f_c}{0.051}\right)^{0.46} \left(1 + \frac{d_a}{11.27}\right)^{0.22} \left(\frac{w}{c}\right)^{-0.30} \tag{8}
\]
where α_0 = 1.44 for crushed or angular aggregates, and d_a (mm) is the maximum aggregate size. In the present study, it has been assumed that d_a = 32 mm and that the water-to-cement ratio of the concrete mix, w/c, is 0.45.
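The following sketch evaluates Eqs. (4) through (8) for an illustrative 30 MPa concrete. The values of ε0, Rσ, Rε and the softening constants c1 and c2 are commonly used assumptions introduced here for demonstration, not parameters reported in the paper.

```python
import math

def saenz_stress(eps, Ec, fc, eps0, R_sigma=4.0, R_eps=4.0):
    """Uniaxial compressive stress from the Saenz curve, Eqs. (4)-(5). Stresses in MPa."""
    E0 = fc / eps0
    RE = Ec / E0
    R = RE * (R_sigma - 1.0) / (R_eps - 1.0) ** 2 - 1.0 / R_eps
    x = eps / eps0
    return Ec * eps / (1.0 + (R + RE - 2.0) * x - (2.0 * R - 1.0) * x**2 + R * x**3)

def softening_stress(wt, fct, Gf, c1=3.0, c2=6.93):
    """Tensile stress across a crack of width wt (mm), Eqs. (6)-(7). Gf in N/mm."""
    wcr = 5.14 * Gf / fct
    x = wt / wcr
    return fct * ((1.0 + (c1 * x) ** 3) * math.exp(-c2 * x)
                  - x * (1.0 + c1 ** 3) * math.exp(-c2))

def fracture_energy(fc, da, wc, alpha0=1.44):
    """Concrete tensile fracture energy Gf (N/m), Eq. (8); fc in MPa, da in mm."""
    return 2.5 * alpha0 * (fc / 0.051) ** 0.46 * (1.0 + da / 11.27) ** 0.22 * wc ** -0.30

Gf = fracture_energy(fc=30.0, da=32.0, wc=0.45)
print(f"Gf ~ {Gf:.0f} N/m")
print(f"sigma_c at eps = 0.001: {saenz_stress(0.001, Ec=27000.0, fc=30.0, eps0=0.002):.1f} MPa")
print(f"sigma_t at wt = 0.05 mm: {softening_stress(0.05, fct=2.4, Gf=Gf/1000.0):.2f} MPa")
```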
The connection between timber and concrete is usually defined by contact elements
[6, 17]. The contact elements used in the present study have five parameters: the normal stiffness of the interface, K_nn; the tangential stiffness of the interface, K_tt; the cohesion, c; the coefficient of friction, ϕ; and the tensile strength, f_t, of the interface. Elastic behaviour up to the failure point was considered for the normal and tangential stiffness of the contact elements; therefore, the interface model can capture the linear elastic-brittle failure behaviour shown in Fig. 4.
Both the CLT panel and the concrete slab are generated and meshed in the ABAQUS models developed in the present study as two separate objects (parts, in ABAQUS terminology). At the same time, the contact interface between the two materials has a
shear resistance contribution along with the primary shear resistance of the screws
utilized as shear connectors in the modelled TCC sections.
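As a rough illustration of such an elastic-brittle interface law, the check below combines the cohesion and friction terms into a shear capacity and treats any exceedance as a brittle loss of bond. The parameter values are assumptions for demonstration only, not the calibrated interface properties of the models.

```python
def interface_check(sigma_n, tau, c, mu, f_t):
    """Elastic-brittle interface check (stresses in MPa).

    sigma_n: normal stress (tension positive), tau: shear stress,
    c: cohesion, mu: coefficient of friction, f_t: interface tensile strength.
    Returns 'intact' or the governing failure mode.
    """
    if sigma_n > f_t:
        return "tensile debonding"
    # Friction contributes only when the interface is in compression
    tau_cap = c + mu * max(-sigma_n, 0.0)
    if abs(tau) > tau_cap:
        return "shear (slip) failure"
    return "intact"

# Assumed demonstration values
print(interface_check(sigma_n=-0.10, tau=0.45, c=0.30, mu=0.50, f_t=0.15))
print(interface_check(sigma_n=0.20,  tau=0.05, c=0.30, mu=0.50, f_t=0.15))
```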
According to the test set-up of the experiments conducted by Salem and Virdi [16]
on TCC sections that utilized self-tapping screws as shear connectors, see Fig. 5a,
the boundary conditions for the FE models involved restraining the bottom end of
the concrete slab and the top end of the CLT panel in both the lateral and transversal
directions. Also, each screw was defined along its length by an embedded constraint into both the CLT and concrete sections. Accordingly, there was no need to define
contact elements along the screw body within the host materials (i.e. CLT panel and
concrete slab). Those boundary conditions are shown in Fig. 5b.
According to Fig. 6, the finite element models can predict the TCC sections’
behaviour with self-tapping screws as shear connectors under direct shear forces.
As seen in Fig. 6, there is good agreement between the load–displacement relation-
ships of the experiments and those predicted by the FE models. For this comparative
study, the experimental results of two 150-mm screw spacing specimens with two
and three rows of screws as per Salem and Virdi [16] were considered to validate the
FE models developed in the present study.
The minor discrepancies between the experimental results and those predicted by the FE models can be attributed to the differences between the nominal strengths of the CLT panel, concrete, and self-tapping screw materials and their actual values.
According to Fig. 7a, the Hashin damage model [7] employed in the FE models
developed in the present study allowed accurate prediction of the end failure of
the CLT panel under excessive load. The ability of the FE model to represent the
shear resistance of the modelled screws and its effects on the overall resistance of
the modelled TCC sections are illustrated in Fig. 7b. Therefore, the implementation
of the Hashin damage model [7] in the FE models allowed accurate prediction of
the anisotropic behaviour of wood material. According to the experimental results
Fig. 6 Load–displacement relationships for the validation of the FE models against the experi-
mental results
and observations of Salem and Virdi [16], TCC sections that utilized three rows of
screws exhibited more cracks in both the longitudinal and lateral directions compared
to those that used only two rows of screws, which exhibited only longitudinal cracks
along one of the screw rows [16]. This behaviour was clearly depicted by the FE
model, as shown in Fig. 7b, as the shear demand for the TCC section represented in
the said figure is by far more than that of the TCC section modelled with only two
rows of screws.
The FE models developed in the present study can also predict the screw deformations, as shown in Fig. 8a, where the deformations were imposed from one side of the section. None of the modelled screws reached full active performance inside the CLT section at the maximum shear capacity of the TCC section, which occurred at a total slip of approximately 0.7 mm. Even at a 5.0 mm slip (see Fig. 8b), the screws still had not reached full active performance, although most of them had already failed at that point. Therefore, it is more conservative to consider an effective (active) number of screws in the CLT section when estimating the maximum shear capacity of the entire TCC section. Such slip influence was not observed within the concrete section; thus, the screws subjected to tensile stresses contributed more to the overall shear strength of the TCC section.
Fig. 7 Damage criteria for TCC sections with three rows of screws at 150 mm spacing, a CLT section with screws implemented; b concrete slab with shear failure indications due to screw effects
Fig. 8 Screws' ultimate stress state for a TCC specimen with three rows of screws at 150 mm spacing, a corresponding to the maximum shear capacity at 0.7 mm slip; b deformation at 5.0 mm slip
5 Conclusions
subjected to excessive pressure. The accuracy of the FE models, which used the Hashin damage model [7] to simulate the anisotropic behaviour of the wood material (in particular, the modulus of elasticity and strength parallel and perpendicular to the wood grain), has been demonstrated by the excellent correlation between the experimental results and the numerical predictions, with the nonlinearity of both the concrete and wood materials captured using the damage concepts available in the ABAQUS software. The screw performance in the CLT section was influenced by the slip between the CLT panel and the concrete slab; none of the screws achieved their full active performance, either at the maximum shear capacity point (0.7 mm slip) or at the ultimate slip (5 mm slip).
References
17. Schäfers M, Seim W (2011) Investigation on bonding between timber and ultra-high
performance concrete (UHPC). Constr Build Mater 25(7):3078–3088
18. Yeoh D, Fragiacomo M, Deam B (2011) Experimental behaviour of LVL-concrete composite
floor beams at strength limit state. Eng Struct 33(9):2697–2707
Design and Construction of Temporary
Works to Facilitate the Construction
of GO Rail Tunnel Under Highways
401 and 409
Abstract Toronto Tunnel Partners (TTP), under the contract to Metrolinx, recently
completed the construction of two 10 m tall, 8.5 m wide rail tunnels crossing 21 lanes
of Highway 401 at the Highway 409 interchange to facilitate the future expansion
of the GO transit rail corridor. The tunnels were advanced by use of the Sequential
Excavation Method (SEM) and required the use of a pre-support pipe canopy to assist
with the ground support above the tunnel crowns. A 26 m × 10 m × 9.8 m deep
strutted sheet pile shaft was constructed occupying the full median space between
Highway 401 and 409 lanes to facilitate auger boring operation necessary to install
this pipe canopy. The shaft also aided in the temporary support to the adjacent existing
rail tunnel and bridge wing walls that were partially exposed during construction.
The shaft also needed to be designed to allow for pipes to be installed through the
load resisting sheet pile walls, and later the unbalanced loading during the tunnel
construction. At the tunnel portals, a custom thrust frame was required to allow
the contractor to complete auger boring up to 4.5 m above grade and was required
to resist a 250-tonne thrust load at this height. Retaining walls and work platforms
were also required at the portals to facilitate installation of monitoring equipment and
temporary support of equipment on the active railway during short duration closures.
This paper will provide a comprehensive overview of the design and installation
of the unique temporary structures which helped facilitate the construction of this
challenging project.
Keywords Transit
1 Introduction
The two new tunnels are each 180 m long, have an excavated cross section of 9 m
× 10.5 m and are situated 2.5 m below the highway pavement surface immedi-
ately adjacent to the existing rail tunnel to the south. The tunnels were advanced
utilizing the Sequential Excavation Method (SEM), with the top heading advanced
first, followed by the bench/invert. Pre-support measures include the installation of a
continuous pipe canopy using 810 mm diameter steel pipes, installed using state-of-
the-art auger boring technology. Temporary works included a shaft near the middle
of the highway in a narrow median space between live traffic lanes, a jacking frame
to install the auger boring pipes, and Mechanically Stabilized Earth (MSE) walls and work platforms at the portals.
1.2 Geology
The soil at the project location consists of clayey silt with sand fills for the top 8 m of
depth, underlain by a dense glacial till. The extensive depth of fill was present at the site because the highway embankment was built up around the existing rail crossing. The glacial till comprises a heterogeneous mixture of clayey silt, sand, and gravel and varies from compact to very dense in consistency. The groundwater table was observed at approximately 152.5 m elevation. The close proximity of the existing rail tunnel also influenced the subsoil conditions encountered, as it had sand backfill against the walls that was looser than the surrounding highway
embankment fills.
2 Temporary Structures
The design for the SEM tunnel requires pre-support to be installed in the ground
prior to tunnel advancement. The tunnel pre-support consisted of a series of 810-
mm-diameter steel pipes forming a pipe canopy above the tunnel crown which would
be advanced by auger boring in the soil. Due to the length of the auger boring drives
from the west side, and obstruction from the existing pile foundations on the east
side, a shaft was constructed near the middle of the highway crossing to allow for the
pre-support pipes to be installed in four drives, one drive from each tunnel portal, and
two drives from the middle of the highway. Figure 1 depicts the final arrangement
of the tunnel pre-support pipes and the location of the temporary shaft.
2.1.2 Geometry
The median shaft was 26 m long, 10 m wide, and 9.8 m deep and was constructed using
strutted sheet piles. Since the shaft had to accommodate the auger boring machines
that were used to advance the pre-support pipes, the shaft needed to maximize the
available median space between the Highway 401 eastbound express lanes and the
409 eastbound ramp lanes, with less than 0.5 m between the vehicular barriers and
the sheet pile walls of the shaft (Fig. 2). The orientation of the rail tunnel to the
highway lanes was skewed by about 35°, which complicated nearly every aspect of
the design and construction details for the project.
2.1.3 Design
The Highway 409 ramp is located on the east side of the shaft where the grade is
approximately 1.5 m higher than the grade on the Highway 401 side, which caused
an asymmetrical loading on the shaft structure. The design of the temporary shaft
had to account for this asymmetrical loading as well as the unbalanced loading from
differing soil depths and highway skew angle. Also, the shaft was designed to provide
temporary support to the adjacent existing rail tunnel and the Highway 409 ramp wing wall during construction against the unbalanced earth load. The struts were carefully positioned to allow unobstructed operation of the auger boring equipment within the shaft and to minimize ground movement during construction. In addition, the shaft was designed to resist the vertical load from a 15-tonne gantry crane, which was installed to service the shaft during construction.
One of the most challenging and unusual aspects of the shaft design was the requirement to install the tunnel pre-support pipes through the sheet pile wall, which required the wall to be cut continuously along the top of the new tunnel profile, near the zone of maximum bending stress in the piles. When each cut was made, the load on the cut pile had to be redistributed to the adjacent sheet piles. Procedures for cutting the sheet pile wall were developed that included the injection of acrylate grouts to help stabilize the soil behind the cut area, and a series of welded tabs to re-attach the sheet pile wall across the skewed pipe joint and partially reinstate the structural capacity of the shoring wall (Fig. 3).
The temporary shaft was analyzed using the 2D plane strain finite element program
RS2 by Rocscience. The geometry of the model is shown in Fig. 4. The three-
dimensional effects of the shaft were captured by analyzing the shaft using SAP2000
finite element software. A 2D finite element model with springs applied on both active
and passive side was created in SAP2000. The 3D model was also used to evaluate the
effects of cutting the sheet pile walls during the tunnel pre-support pipe installation
and the transfer of loading from the adjacent rail tunnel and wing wall structure.
The use of two distinct analysis methods also aided in verifying the behavior of
the structure under the unusual loading conditions imparted on the shaft at various
stages of construction to provide better confidence in the validity of the design.
The walers and struts were designed using CSA S16-14, Design of Steel Structures. The sheet piles were designed using the European Standard EN 1993-5. The web crippling capacity of the sheet piles to resist the jacking forces was also calculated based on EN 1993-5.
The sheet piles were vibrated through the fill and granular material with minimal
resistance up until approximately 8 m below the ground surface when they encoun-
tered the native till. A diesel pile hammer was used to drive the sheet piles to their
final depth into the hard till layer. Embedment of the sheet piles into the hard tills
was difficult but necessary to develop the cantilever behavior of the pile toes that
allowed them to be cut during pipe canopy installation. The excavation of the shaft
was conducted in multiple stages and the construction sequence of the shaft is listed
below.
1. Installation of sheet piles.
2. Excavate to 1.2 m below the first strut elevation.
3. Installation of the first strut.
4. Excavate to 1.2 m below the second strut elevation.
5. Installation of the second strut.
The auger boring of the pre-support pipes was advanced from the median shaft and
the east and west portals. The horizontal thrust from the auger boring machine was
resisted by the sheet pile walls within the shaft. Since the ground elevation at the
portals was located approximately 4.2 m below the top of the pre-support pipe, the
auger boring machine required a custom frame to resist the horizontal thrust force
required to advance the pre-support pipes.
2.2.2 Geometry
The full assembly of the reaction frame measures 21.0 m long, 3.6 m wide, and
4.4 m high and is comprised of three sections bolted together. The reaction frame
was primarily constructed with hollow steel sections and the thrust beams were made
from W-sections.
2.2.3 Design
The horizontal thrust force is exerted on the reaction frame at different elevations
due to the umbrella arch shape of the pipe canopy. The thrust force was transferred
to the reaction beam, which was part of the reaction frame. The reaction beams were made of W-sections, which are efficient in resisting large bending moments. The reaction frame was fabricated in a truss arrangement of hollow steel sections, which perform well under tension and compression loads. The force resisted by the reaction frame is then transferred to shear pins inserted in steel tubes embedded in the temporary concrete working slab (Fig. 5).
Fig. 5 Auger boring reaction frame (left) and various setups for different pipe drives (right)
The working slab is designed to safely transfer the horizontal force to the ground by a combination of friction and passive resistance from the soil backfill. The height difference of up
to 4.2 m between the point of application of the thrust force and the point of resistance at the shear pins in the base slab generated a substantial overturning moment on the
frame. Concrete blocks were placed within the reaction frame to raise the elevation of
the auger boring machine to the required pipe level and to counteract the overturning
moment generated by the machine thrust.
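For a rough sense of scale, pairing the 250-tonne thrust quoted in the abstract with the full 4.2 m height difference gives an overturning moment of approximately
\[
M_{ot} \approx 250\ \text{t} \times 9.81\ \text{m/s}^2 \times 4.2\ \text{m} \approx 2450\ \text{kN} \times 4.2\ \text{m} \approx 10{,}300\ \text{kN·m}
\]
This pairing of the maximum thrust with the maximum lever arm is an illustrative upper-bound combination, not a design value reported for the frame.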
The thrust force and overturning moment varied with the canopy pipe elevation. The reaction frame was required to resist larger overturning moments from the higher pipes in the pipe canopy and relatively smaller overturning moments during the installation
of the lower-level pipes. Setting up the full length of the reaction frame for each pipe
installation would take considerable time and effort during construction. Hence, the
reaction frame was fabricated in three separate segments which could be assembled
and disassembled based on the resistance required. The shorter frame with minimal
concrete ballast blocks was used for lower-level pipes. The full length of the reaction
frame with multiple levels of concrete blocks was used to install pipes at the tunnel
crown.
Due to the uneven topography on either side of the tunnel, an equipment platform
was required to be built up from grade to allow for the installation of monitoring
equipment below the highway surface. A series of small diameter pipes were installed
above the tunnel crown, immediately below the highway surface to install Horizontal
2.3.2 Geometry
The retaining wall was approximately 4.2 m tall and retained an approximately 120 m2
area of level ground to provide a working platform for the drill rig. The working
platform was adjacent to the Metrolinx rail corridor (Fig. 6).
2.3.3 Design
The MSE retaining wall was L-shaped and was reinforced with geogrid and
compacted with granular fill. It was designed to resist the horizontal load of 120 kN
from the drill rig and 12 kPa construction surcharge load. The retaining wall was
analyzed using a 2D plane strain finite element program RS2 by Rocscience.
A 75-mm layer of compacted granular material was placed between the intersecting geogrids at the corner of the retaining wall to fully activate the geogrids.
As the retaining wall was temporary, the design incorporated existing concrete blocks that the contractor had on site as the facing blocks. The use of these existing blocks expedited the construction of the retaining wall and the overall schedule.
3 Conclusion
The design and construction of the temporary works to facilitate the tunnel installation were successfully performed as intended and helped mitigate the risks associated with the
challenging excavation of a large diameter SEM tunnel near the surface of a major
highway. The design-build nature of this project allowed for a deep collaborative
effort between the contractor and engineers for the design of both the temporary and
permanent works. The project was successfully completed in the summer of 2021.
Comparison Between Effects of Current
Wind Turbine Design Loads
and Downburst Loads
Abstract Wind turbines are among the most rapidly increasing technologies for
delivering sustainable energy. Good wind sites are usually found in rural regions,
where thunderstorms and strong wind events are becoming more frequent and
destructive, as a result of climate change. Downbursts are one of these events linked
with thunderstorms, which occur in a sudden and localized manner. One of the challenges in the analysis and design of wind turbines under downbursts is that the associated forces acting on the tower and blades depend on the characteristics of the event, including its size and location. The review of current design codes for this type of structure shows a lack of procedures for estimating the wind loading on wind turbines due to high-intensity wind events, such as downbursts. The wind
loads used in those design codes are based on large-scale wind events. In the current
study, a comparative study is conducted using the previously developed numerical
model, HIW-TUR, on a variety of wind turbines in order to assess the differences
between current wind turbine design loads and downburst loads. HIW-TUR accounts
for different downburst parameters, such as the size, jet velocity, and the location rela-
tive to the wind turbine center, as well as the change in the pitch angle of the blades.
An extensive parametric study is conducted considering a large number of downburst
configurations and different blade pitch angles. Moments at the tower base and the
roots of the blades are obtained under different downburst configurations and are
compared with those calculated using the International Electro-Technical Commis-
sion IEC 61400-1 [10]. Using the same reference velocity, downburst loads on wind
turbines are found to be higher than the design loads, resulting in higher straining
actions on the tower and blades.
1 Introduction
Wind energy is one of the most rapidly expanding renewable technologies due to its
environmentally friendly characteristics [22]. According to the Global Wind Energy
Council, wind turbines have been built at a rapid rate [8]. In order to enhance effi-
ciency, most wind turbines are built in rural locations where strong wind speeds are
anticipated. Those areas can be exposed to thunderstorms and strong wind events,
which are becoming more frequent and destructive as a result of climate change
[5]. Wind turbines are designed to operate under normal wind conditions and to
sustain extreme wind events with loads calculated using the international guide-
lines and standards [6, 7, 10]. Large-scale wind phenomena, such as hurricanes and
typhoons, are employed by those design standards to determine the wind loads acting
on wind turbines. However, unanticipated strong wind events have recently resulted
in a number of wind turbine breakdowns [14]. Downbursts are one of the extreme
wind events associated with thunderstorms. A downburst is defined as a mass of fast-
moving air that descends on the ground and then spreads outwards in all directions,
forming vortices [4]. The analysis of wind turbines under downbursts involves the
following challenges: (1) downbursts are localized events, which makes the forces
acting on the tower and the blades vary depending on the size and location of
the event relative to the tower, (2) downbursts create horizontal and vertical winds
resulting in a three-dimensional wind field that differs from boundary layer winds,
and (3) it is difficult to predict the wind direction that the wind turbine will be exposed
to during a downburst, in order to adjust the pitch angle of the blades accordingly.
Current design codes do not consider downburst wind loading and do not provide
information regarding the critical load profiles associated with downburst events
acting on wind turbine structures. Few studies have been conducted on the impact of
thunderstorm downbursts on wind turbines. Ahmed et al. [1], Kwon et al. [12], Lu
et al. [13], Nguyen et al. [16, 17], Nguyen and Manuel [18, 19], Nguyen [20], and
Zhang et al. [24] were the first to study the influence of downburst wind on wind
turbines. They came to the conclusion that the downburst wind loads are different
from the extreme wind loads recommended by the International Electro-Technical
Commission guidelines [10].
The main objective of the current study is to conduct a comparative study using the
numerical model, HIW-TUR, recently developed by Ahmed et al. [1] on a number of
wind turbine structures in order to assess the difference between current wind turbine
design loads and downburst loads. The numerical model, HIW-TUR incorporates a
previously developed computational fluid dynamics (CFD) model for the downburst
wind field coupled with a wind turbine structural model. In addition, HIW-TUR inte-
grates current wind turbine design profiles so that a comparison can be made easily
between the effect of synoptic winds and downbursts at the same reference velocity.
This study begins by providing a brief description of the numerical model, HIW-
TUR. A number of wind turbine structures are investigated under both downburst
wind loads, and design wind loads specified by the IEC 61400-1 [10] guidelines. An
extensive parametric study for the wind turbines is conducted considering a large
number of downburst configurations and different blade pitch angles. For each wind
turbine, moments at the tower base and at the roots of the blades are obtained under
different downburst configurations and compared with those calculated using the
International Electro-Technical Commission guidelines [10].
Details of the numerical model, HIW-TUR, as well as its validation can be found
in [1]. The downburst wind field incorporated into this numerical model is based on
the computational fluid dynamics (CFD) simulation conducted by Kim and Hangan
[11]. This CFD simulation was carried out on a small-scale impinging jet model
using the RANS method. This CFD simulation produced a wind field having a radial
(horizontal) and a vertical (axial) component, both vary in space and time. A proce-
dure to scale-up the radial velocity (V RD ) and the vertical velocity (V VR ) based on
the diameter, Dj , and the jet velocity, V j , of a full-scale downburst is incorporated
into the numerical model, HIW-TUR. Figure 1 shows a scheme of a wind turbine
exposed to a downburst. As shown in the figure, the location of the downburst relative
to the tower is defined by the polar coordinates (R and θ). The forces acting on the tower and blades therefore depend on the downburst jet diameter Dj, the jet velocity Vj, the downburst location relative to the tower (R and θ), and the blade pitch angle β.
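A minimal sketch of such a scale-up is shown below, assuming the usual impinging-jet similarity in which velocities scale with Vj, lengths with Dj, and time with Dj/Vj; the exact normalization used in HIW-TUR may differ.

```python
def scale_up(v_norm, r_norm, z_norm, t_norm, Vj, Dj):
    """Scale a normalized CFD downburst record to full scale.

    v_norm: velocity normalized by the jet velocity,
    r_norm, z_norm: radial/vertical coordinates normalized by the jet diameter,
    t_norm: time normalized by Dj/Vj. Assumed similarity rules; illustrative only.
    """
    return v_norm * Vj, r_norm * Dj, z_norm * Dj, t_norm * Dj / Vj

# Example: a full-scale event with Dj = 1000 m and Vj = 50 m/s
V, r, z, t = scale_up(v_norm=0.9, r_norm=1.2, z_norm=0.05, t_norm=8.0, Vj=50.0, Dj=1000.0)
print(f"V = {V:.1f} m/s at r = {r:.0f} m, z = {z:.0f} m, t = {t:.0f} s")
```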
A procedure to calculate the aerodynamic forces acting on the tower and the
blades taking into account all those parameters is incorporated in HIW-TUR. The
components of the structure are modeled using beam elements and the straining
actions on the tower and the blades are calculated. A parametric study capability is built in, automatically varying Dj, R, θ, and β. The peak straining actions obtained for the entire
parametric study are determined.
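The sketch below shows the kind of quasi-steady drag calculation such a procedure might perform at each tower node; the air density, drag coefficient, and tributary area are illustrative assumptions, and the blade treatment in HIW-TUR (airfoil lift and drag as functions of angle of attack and pitch) is more elaborate than this.

```python
RHO_AIR = 1.25  # kg/m^3, assumed

def nodal_wind_force(v_res, cd, a_trib, rho=RHO_AIR):
    """Quasi-steady wind force (N) on a node: F = 0.5 * rho * Cd * A * V^2."""
    return 0.5 * rho * cd * a_trib * v_res**2

# Illustrative tower node: 2 m tributary height of a 4 m diameter shell, Cd = 0.7
F = nodal_wind_force(v_res=45.0, cd=0.7, a_trib=2.0 * 4.0)
print(f"F ~ {F/1000:.1f} kN")
```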
The design loads specified by the IEC 61400-1 [10] are also incorporated into
the numerical model. This allows comparing the peak straining actions associated
with a downburst to those estimated using the design code. The comparison is done
in this study while equating the velocity at the hub height of the tower for both the
downburst and the synoptic wind specified in the design code.
3 Sequence of Analysis
A flow chart showing the sequence of the analysis adopted in the current study is
provided in Fig. 2. Firstly, the wind turbine tower and the blades are numerically
modeled using their geometric and airfoil properties and the blade pitch angle, β,
is set to a certain value. The numerical model, HIW-TUR, integrates current wind
turbine design profiles and design loads cases. For each design load case, the forces
acting on the tower and blades and the moments at the tower base and each blade root
are calculated. The envelope moments at the tower base and each blade root for all
those load cases are determined for each wind turbine. The wind field created by the
CFD simulation is scaled-up, and the velocity components at each node in the wind
turbine tower and blades model are estimated based on the downburst configuration
(specified by V j , Dj , R and θ ). The forces acting on the tower and blades and the
moments at the tower base and each blade root are estimated at each time increment
until the end of the entire time history of the downburst. For the whole time history, the
peak moments at the tower base and at each root of the blades are then calculated.
The entire approach is repeated by performing a comprehensive parametric study
that covers a wide range of probable downburst occurrences surrounding the wind
turbine. The parametric study considers a total number of 5,544 downburst scenarios
with various values of Dj , R/Dj , and θ. Seven values for the blade pitch angle (β)
are investigated in the current study for all design load cases, as well as under
all downburst configurations leading to 38,815 possible evaluations for each wind
turbine. The current study explores five different sizes of wind turbines that are commonly used in the field, resulting in a total of 194,075 analyses.
Fig. 2 Flow chart of the analysis sequence: model the wind turbine tower and blades; input the tower properties, the airfoil data for the blade sections, and the blade pitch angle β; calculate the forces and the moments at the tower base and each blade root for each design load case and determine their envelopes; scale up the downburst CFD data to full scale; step through the downburst time history (t = 0 to t max) to obtain the peak moments for a given downburst configuration; and determine the envelope moments over all design load cases and all downburst configurations for all wind turbines and blade pitch angles
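The sequence in Fig. 2 amounts to a nested parametric sweep. The following Python sketch is a hypothetical outline of that loop, not the actual HIW-TUR code; design_load_moments and downburst_moments are placeholder names for the structural analysis steps described above.

```python
import itertools

# Hypothetical placeholders for the structural analyses described in Fig. 2;
# the HIW-TUR routines themselves are not reproduced here.
def design_load_moments(turbine, beta, load_case):
    """Return the tower-base moment for one IEC design load case."""
    raise NotImplementedError

def downburst_moments(turbine, beta, Dj, R_over_Dj, theta, t):
    """Return the tower-base moment at time t of one downburst configuration."""
    raise NotImplementedError

def envelopes_for_turbine(turbine, betas, design_cases, Dj_values,
                          R_over_Dj_values, theta_values, time_steps):
    env = {"design": 0.0, "downburst": 0.0}
    for beta in betas:
        # Envelope over the IEC design load cases
        for case in design_cases:
            env["design"] = max(env["design"],
                                design_load_moments(turbine, beta, case))
        # Envelope over all downburst configurations and the full time history
        for Dj, r, theta in itertools.product(Dj_values, R_over_Dj_values,
                                              theta_values):
            for t in time_steps:
                env["downburst"] = max(env["downburst"],
                                       downburst_moments(turbine, beta, Dj,
                                                         r, theta, t))
    return env
```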
The properties of wind turbine structures considered in the current study are
presented in Table 1 based on the baseline wind turbine models reported in Malcolm
and Hansen [15], and Ahmed et al. [1].
This section details the design wind loads estimated using the International Electro-
Technical Commission [10] guidelines for normal and extreme wind turbine condi-
tions, as well as their changes with blade pitch angle (β). The key objective of this
part is to calculate the design moments for the tower and blades for each pitch angle
(β), which will be compared with the downburst scenarios. The numerical model,
HIW-TUR, integrates current wind turbine design profiles and design load cases.
According to the IEC 61400-1 [10], a power law profile is recommended for determining the normal wind speed profile (NWP) and the extreme wind speed profile (EWP), as shown in Eqs. (1) and (2), respectively:

V NWP (Z) = V hub (Z/Z hub )^0.2 (1)

V EWP (Z) = 1.4 V ref (Z/Z hub )^0.11 (2)
where, V NWP (Z) represents the normal wind speed as a function of height, Z, above
the ground, V hub is the wind speed at the level of the hub height, Z hub is the hub
height, V EWP (Z) is the extreme wind speed as a function of height, Z, and V ref is the
reference wind speed at the hub height averaged over 10 min, which depends on the
wind turbine class.
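For illustration, the two profiles can be evaluated directly from Eqs. (1) and (2). The snippet below is a minimal sketch assuming the power-law exponents commonly associated with IEC 61400-1 (0.2 for the normal profile, and 0.11 with a 1.4 gust factor for the extreme profile); the hub height and speeds used in the example are illustrative only.

```python
def v_nwp(z, v_hub, z_hub, alpha=0.2):
    """Normal wind speed profile (Eq. 1): power law anchored at hub height."""
    return v_hub * (z / z_hub) ** alpha

def v_ewp(z, v_ref, z_hub, gust_factor=1.4, alpha=0.11):
    """Extreme wind speed profile (Eq. 2): gust profile anchored at hub height."""
    return gust_factor * v_ref * (z / z_hub) ** alpha

# Illustrative use (hub height of 84 m is an assumed value)
print(v_nwp(z=40.0, v_hub=25.0, z_hub=84.0))   # operational profile below the hub
print(v_ewp(z=84.0, v_ref=50.0, z_hub=84.0))   # 1.4 x 50 = 70 m/s gust at the hub
```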
The design moments of the wind turbine subjected to normal winds are determined
using FAST [3] under operational conditions, while the design moments of the wind turbine subjected to extreme winds are determined using HIW-TUR under parked
conditions. For all studied wind turbines, this study assumes the cut-in speed is 3 m/s
and the cut-out speed is 25 m/s, according to the wind tower specifications provided
by Dai et al. [2]. As a result, during operational conditions, the maximum wind
velocity at hub height (V hub ) is considered to be 25 m/s, with turbulence intensity
(I ref ) equal to 0.16 (category A) according to the IEC 61400-1 [10]. The wind turbine
tower is presumed to be parked and placed in a high wind location (class I) during
extreme wind conditions. The maximum gust considered for class I wind turbines
is 70 m/s, with a mean speed of 50 m/s at the hub height and a gust factor of 1.4,
according to the IEC 61400-1 [10]. Design load case 6.2 in the design code is not
considered in the current study, as it assumes the loss of the wind turbine control
system with a change in the wind direction of ±180°, giving higher design wind loads and, therefore, an uneconomic design.
Tower results are processed to obtain the moments, M XT , M YT , M ZT and, M RT at
the tower base, where, M XT is bending moment of the tower about X-axis, M YT is the
bending moment of the tower about Y-axis, M ZT is the torsional moment of the tower
about Z-axis, and M RT is the resultant bending moment of the tower. Considering
the blades, flap-wise moments (M FB ), edge-wise moments (M EB ), in-plane moments (M XB ), out-of-plane moments (M YB ), and resultant bending moments (M RR ) at the roots
of the blades are also evaluated. The peak moments at the tower base and at the
roots of the blades are determined for each blade pitch angle (β). The results are
summarized to obtain the resultant bending moments (M RT ) at the tower base, and
the resultant bending moments (M RR ) at the roots of the blades.
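As a simple post-processing example, the resultant moments mentioned above can be obtained from their orthogonal components. The sketch below assumes the usual vector combination (e.g. M RT from M XT and M YT), which is implied rather than stated explicitly in the text, and the numerical values are purely illustrative.

```python
from math import hypot

def resultant_moment(m_x, m_y):
    """Resultant bending moment from two orthogonal components."""
    return hypot(m_x, m_y)

# Tower base: resultant of the bending moments about the X and Y axes (kN.m)
m_rt = resultant_moment(56_000.0, 21_000.0)   # illustrative values

# Blade root: resultant of the in-plane and out-of-plane moments (kN.m)
m_rr = resultant_moment(3_200.0, 4_100.0)     # illustrative values
print(m_rt, m_rr)
```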
6 Downburst Analyses
According to Holmes et al. [9], and Zhang et al. [23], the typical downburst jet
diameter (Dj ) can be observed in the range of 500–1,500 m in real events. As a
result, in the current study, the downburst jet diameter (Dj ) is varied from 500 to
1,500 m with a 100 m increment. The radial distance (R) is varied such that the
ratio R/Dj changes from 0 to 2 with a 0.1 increment, since it is observed that for
R/Dj ratios greater than 2, downburst velocities become small and their influence
may be ignored. The angle θ is varied from 0° to 360° in 15° increments.
The parametric study utilizes a value of 70 m/s for peak downburst radial velocity
(V RDmax ). According to the IEC 61400-1 [10], this value of V RDmax corresponds to
the maximum gust wind speed for synoptic wind at the hub height for class I turbines.
As a result, the effect of extreme winds and downbursts at the same reference velocity
can be compared. Since the ratio between V RDmax and V j in the used CFD simulation
is 1.15, this V RDmax corresponds to a downburst jet velocity (V j ) of 61 m/s [11].
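The downburst grid described above can be enumerated directly. The sketch below reproduces the stated increments and recovers the 5,544 configurations (11 jet diameters × 21 R/Dj ratios × 24 angles), under the assumption that θ = 0° and θ = 360° are counted once.

```python
import numpy as np

Dj_values = np.arange(500, 1501, 100)              # jet diameter, m (11 values)
r_ratios = np.round(np.arange(0.0, 2.01, 0.1), 1)  # R/Dj (21 values)
thetas = np.arange(0, 360, 15)                     # degrees; 0 and 360 coincide (24 values)

V_RD_max = 70.0            # peak radial velocity, m/s
V_j = V_RD_max / 1.15      # jet velocity, ~61 m/s per the CFD ratio

n_configs = len(Dj_values) * len(r_ratios) * len(thetas)
print(n_configs)           # 5544 downburst configurations
```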
Table 2 Summary of the downburst critical configurations for the studied wind turbines

Dj (m):
  Towers — S70/1500: 600; 750 kW: 500; 1.5 MW: 800; 3 MW: 1,100; 5 MW: 1,500
  Blades — S70/1500: 750; 750 kW: 600; 1.5 MW: 900; 3 MW: 1,300; 5 MW: 1,800
R/Dj:
  Towers: 1.3; Blades: 1.3
θ (°):
  β = 0°  — Towers: 180; Blades: 15
  β = 15° — Towers: 180; Blades: 150
  β = 30° — Towers: 165; Blades: 150
  β = 45° — Towers: 165; Blades: 135
  β = 60° — Towers: 135; Blades: 120
  β = 75° — Towers: 105; Blades: 315
  β = 90° — Towers: 90;  Blades: 285
After determining the peak straining actions resulting from the loads of the design
code and the downburst wind effect on the considered wind turbines, a comparison
between those straining actions is conducted. Figures 4a–e and 5a–e are developed
to show the variation of the envelope resultant moments at the tower base (M RT )
and at the roots of the blades (M RR ), respectively, with the blade pitch angle (β) due
to design loads and the downburst loads acting on the studied wind turbines. The
figures demonstrate that β = 90° corresponds to the lowest of the peak values of the
moments, regardless of the wind turbine size and the downburst location. Based on
these findings, it is advised that the wind turbine be parked at a blade pitch angle of
β = 90° during extreme wind conditions, in order to reduce the wind influence on
both the tower and the blades.
[Fig. 4 plots: M RT (kN·m) versus β (degrees), with Design Loads and Downburst Loads series for each turbine size; panel (e) shows the 5 MW turbine]
Fig. 4 a–e Variation of envelope resultant moments (M RT ) of design loads and downburst loads at
tower base with pitch angle (β)
[Fig. 5 plots: M RR (kN·m) versus β (degrees), with Design Loads and Downburst Loads series for each turbine size; panel (e) shows the 5 MW turbine]
Fig. 5 a–e Variation of envelope resultant moments (M RR ) of design loads and downburst loads
at the roots of the blades with pitch angle (β)
As shown in Fig. 4a–e, downburst loads produce higher moments on the tower
than the design loads, particularly at pitch angles (β) greater than 45°. At pitch angle
(β) = 90°, the moments at the tower base developed by the downburst loads are
higher than those developed by the design loads by approximately 200%. It can
be seen from Fig. 5a–e that downburst loads develop higher moments on the blades
than the design loads for pitch angles (β) greater than 30°. However, the design loads
create higher moments on the blades for pitch angles (β) less than 30°. At pitch angle
(β) = 90°, the moments at the roots of the blades developed by the downburst loads
are higher than those developed by the design loads by approximately 150%. This behaviour can be attributed to several reasons: (1) for the same reference velocity at the hub height, the normal and extreme wind profiles produce larger loads on the blades above the hub height than the downburst load profiles, while the downburst wind profiles create higher loads on the tower; (2) a downburst is a sudden, localized wind phenomenon that cannot be predicted in advance, and its random location, and hence the wind direction, prevents adjusting the rotor plane orientation and the blade pitch angle (β) as is common in strong synoptic winds; and (3) downbursts generate a three-dimensional wind field with radial and vertical profiles, which induces additional moments on the blades as the pitch angle (β) increases.
8 Conclusion
Acknowledgements The authors would like to acknowledge the support from the Natural Sciences
and Engineering Research Council of Canada (NSERC); National Natural Science Foundation of
China (51878426); International Collaboration Program of Sichuan Province (18GJHZ0111); and
the Fundamental Research Funds for Central Universities of China.
References
1. Ahmed MR, El Damatty AA, Dai K, Ibrahim A, Lu W (2022) Parametric study of the quasi-
static response of wind turbines in downburst conditions using a numerical model. Engineering
Structures 250. https://doi.org/10.1016/j.engstruct.2021.113440
2. Dai K, Sheng C, Zhao Z, Yi Z, Camara A, Bitsuamlak G (2017) Nonlinear response history
analysis and collapse mode study of a wind turbine tower subjected to tropical cyclonic winds.
Wind and Structures, An International Journal 25(1):79–100. https://doi.org/10.12989/was.
2017.25.1.079
3. FAST (2016) An aeroelastic computer-aided engineering (CAE) tool for horizontal axis wind
turbines. https://nwtc.nrel.gov/FAST
4. Fujita T (1985) The downburst: microburst and macroburst
5. GFDL (2021) https://www.gfdl.noaa.gov/global-warming-and-hurricanes/
6. GL (2010) Guideline for the certification of wind turbines, Germanischer Lloyd Industrial
Services GmbH
7. GL Wind (2005) Guideline for the certification of offshore wind turbines, Germanischer Lloyd
Industrial Services GmbH.
8. Global Wind Energy Council (2019) https://gwec.net/global-wind-report-2019/, http://gwec.
net/publications/global-wind-report-2/
9. Holmes JD, Hangan HM, Schroeder JL, Letchford CW, Orwig KD (2008) A forensic study of
the Lubbock-Reese downdraft of 2002. Wind and Structures 11(2):137–152. https://doi.org/
10.12989/was.2008.11.2.137
10. IEC 61400-1 (2005) Wind turbines—Part 1: design requirements. International Electrotechnical Commission, 2005 + Amendment 2010
11. Kim J, Hangan H (2007) Numerical simulations of impinging jets with application to
downbursts. J Wind Eng Ind Aerodyn 95(4):279–298. https://doi.org/10.1016/j.jweia.2006.
07.002
12. Kwon DK, Kareem A, Butler K (2012) Gust-front loading effects on wind turbine tower
systems. J Wind Eng Ind Aerodyn 104–106:109–115. https://doi.org/10.1016/j.jweia.2012.
03.030
13. Lu NY, Hawbecker P, Basu S, Manuel L (2019) On wind turbine loads during thunderstorm
downbursts in contrasting atmospheric stability regimes. Energies 12(14). https://doi.org/10.
3390/en12142773
14. Ma Y, Martinez-Vazquez P, Baniotopoulos C (2019) Wind turbine tower collapse cases: a
historical overview. Proceedings of the Institution of Civil Engineers Structures and Buildings
172(8):547–555. https://doi.org/10.1680/jstbu.17.00167
15. Malcolm DJ, Hansen AC (2006) WindPACT turbine rotor design study: June 2000–June 2002
(Revised). http://www.osti.gov/bridge
16. Nguyen HH, Manuel L (2014) Transient thunderstorm downbursts and their effects on wind
turbines. Energies 7(10):6527–6548. https://doi.org/10.3390/en7106527
17. Nguyen HH, Manuel L (2015) A Monte Carlo simulation study of wind turbine loads in
thunderstorm downbursts. Wind Energy 18(5):925–940. https://doi.org/10.1002/we.1740
18. Nguyen HH, Manuel L, Veers PS (2011) Wind turbine loads during simulated thunderstorm
microbursts. Journal of Renewable and Sustainable Energy 3(5):053104. https://doi.org/10.
1063/1.3646764
19. Nguyen H, Manuel L, Veers P (2010) Simulation of inflow velocity fields and wind turbine
loads during thunderstorm downbursts. In: 51st AIAA/ASME/ASCE/AHS/ASC structures,
structural dynamics, and materials conference; 18th AIAA/ASME/AHS adaptive structures
conference; 12th. https://doi.org/10.2514/6.2010-2651
20. Nguyen HH (2012) The influence of thunderstorm downbursts on wind turbine design
21. Somers DM (2004) S816, S817, and S818 Airfoils: October 1991–July 1992. https://doi.org/
10.2172/15011674
22. Union of Concerned Scientists (2013) https://www.ucsusa.org/energy/renewable-energy
23. Zhang Y, Hu H, Sarkar PP (2013) Modeling of microburst outflows using impinging jet and
cooling source approaches and their comparison. Eng Struct 56:779–793. https://doi.org/10.
1016/j.engstruct.2013.06.003
24. Zhang Y, Sarkar PP, Hu H (2015) An experimental investigation on the characteristics of fluid–
structure interactions of a wind turbine model sited in microburst-like winds. J Fluids Struct
57:206–218. https://doi.org/10.1016/j.jfluidstructs.2015.06.016
Effect of High Intensity Wind Loads
on Steel Poles Transmission Lines
Abstract The vulnerability of transmission lines to high intensity wind (HIW) loads,
such as downbursts and tornadoes, has been realized through several transmission
lines failures around the world. Transmission poles are commonly used in urban
and suburban high-voltage transmission lines. However, transmission poles can be
more vulnerable to HIW loads than lattice transmission towers due to their limited
redundancy. The study aims to investigate the sensitivity of steel poles transmission
lines to downbursts and tornadoes. The critical load cases are determined by equating the peak straining actions due to HIW loads to those obtained due to the synoptic wind and ice load cases. The HIW load cases are calculated using the new provisions recently incorporated in the ASCE 74 (2020) guidelines. The results of this study provide
the magnitude of HIW loads beyond which the considered transmission poles are
not safe, if they were designed to withstand only the synoptic wind and ice load cases
without a margin of safety.
1 Introduction
Steel poles transmission lines are commonly used in urban areas where land for the
installation of lattice towers is scarce. However, the limited redundancy of steel poles
constitutes one of the main drawbacks of using this system. Tornadoes and down-
bursts are types of meteorological events, which are identified as high intensity wind
(HIW) events. According to Fujita [10], a downburst is a falling cold and humid mass
of air that impinges on the ground, then moves horizontally, while Fujita [11] defined
alternately in the longitudinal and transverse directions of the towers, along with a
uniform profile applied on the conductors. The downburst load cases include three
critical scenarios defining different downburst locations with respect to the line. It
was recommended to calculate the forces in these load cases by assuming a value for
the downburst jet velocity (V j ) varying between 50 and 70 m/s, as recorded in the
literature.
The objective of this study is to investigate the sensitivity of steel poles trans-
mission lines to HIW loads through identifying the critical load cases that cause the
same peak straining actions as the synoptic wind load cases. The study is divided
into five sections. Section 1 (this section) provides an introduction about HIW and
the ASCE-74_2020 [2] load cases for HIW, while Sect. 2 provides details about the
considered systems in this study. Section 3 discusses the method and sequence of
analysis and Sect. 4 shows and discusses the results. Conclusions are stated in Sect. 5.
2 Considered Systems
Two real steel poles transmission lines are considered in the current study. Figure 1
shows a typical tangent tower for each system, while the heights of the towers, the spans of the lines, and the conductor properties are summarized in Table 2.
Fig. 1 Considered
transmission towers
Synoptic wind loads acting on the towers are evaluated using the ASCE-74_2010
[3] guidelines, which include provisions for synoptic wind loads and ice loads. The
following two load cases are considered:
1. Case 1: Considering synoptic wind loads only. The reference 3-s gust wind speed
at 10 m height is assumed to be 40 m/s, with exposure type “C”, and topographic
factor, K zt , of 1.
2. Case 2: Considering combination of synoptic wind and ice loads. The refer-
ence wind speed is assumed to be 17.88 m/s with 25.4 mm ice accretion on the
conductors.
Two different wind angles of attack are considered for each case, 0º (Trans) and
90º (Long). Accordingly, the towers are analyzed under four load cases, C1(Trans),
C1(Long), C2 (Trans), and C2 (Long). The peak axial stresses, σ N , due to synoptic
wind load cases are evaluated as well as the peak axial stresses corresponding to the
combined effect of synoptic wind load and gravity load, σ NG .
The towers are analyzed under the downburst load cases defined by ASCE-74_2020
[2]. The analysis process starts by assuming a downburst jet velocity (V j ), then the
towers are analyzed under the three downburst load cases. The peak straining actions
resulting from those load cases are determined, including σ D the peak axial stress
value due to downburst load cases and σ DG the peak axial stress value due to the
combined effect of downburst load and gravity load. The peak axial stresses due to
downburst load cases are compared with the corresponding peak axial stresses due
to the synoptic wind load cases by calculating the factor λD/N as follows:
λ D/N = σ D /σ N (1)
Similar to the downburst case, the towers are analyzed under the tornado load
cases defined by ASCE-74_2020 [2]. Using all possible combinations, four different
combinations for each case are considered [Case 1 (A), Case 1 (B), Case 1 (C), Case
1 (D) and Case 2 (A), Case 2 (B), Case 2 (C), and Case 2 (D)]. The analysis process
starts by assuming a tornado gust velocity (V t ), then the towers are analyzed under
all possible tornado combinations. The peak straining actions resulting from those
combinations are determined, including σ T the peak axial stress value due to tornado
load cases and σ TG the peak axial stress value due to the combined effect of tornado
load and gravity load. The peak axial stresses due to tornado load cases are compared
with the corresponding peak axial stresses due to the synoptic wind load cases by
calculating the factor λT/N as follows:
λ T/N = σ T /σ N (2)
The analysis is repeated by increasing the tornado gust velocity (V t ) gradually until the factor λT/N becomes equal to one. As a result, the corresponding value of V tcr
represents the critical tornado gust velocity at which the tornado case has the same
effect on the tower as the synoptic wind loads specified in the ASCE-74_2010 [3].
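The search for the critical velocities described above amounts to scaling the HIW reference velocity until the stress ratio reaches unity. The sketch below is a schematic of that loop rather than the analysis code used in this study; peak_stress_under_hiw is a placeholder for the pole analysis under the ASCE-74 (2020) downburst or tornado load cases, and the quadratic stress–velocity relation in the example is made up.

```python
def critical_velocity(peak_stress_under_hiw, sigma_n, v_start, dv=1.0, v_max=120.0):
    """Increase the HIW reference velocity until lambda = sigma_HIW / sigma_N >= 1.

    peak_stress_under_hiw(v): peak axial stress from the HIW load cases at
    reference velocity v (downburst jet velocity V_j or tornado gust V_t).
    sigma_n: peak axial stress from the governing synoptic wind load case.
    """
    v = v_start
    while v <= v_max:
        lam = peak_stress_under_hiw(v) / sigma_n
        if lam >= 1.0:
            return v          # critical velocity (V_jcr or V_tcr)
        v += dv
    raise ValueError("lambda did not reach 1.0 within the searched velocity range")

# Illustrative use with a made-up quadratic stress-velocity relation
v_cr = critical_velocity(lambda v: 0.05 * v**2, sigma_n=64.8, v_start=20.0)
print(v_cr)   # ~36 m/s for these illustrative numbers
```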
The results of both towers’ analyses are presented and discussed in the following
subsections, with M x and M y denoting the bending moments in the longitudinal and the transverse directions, respectively, and N denoting the normal force acting on each section.
In addition, μN , μD , and μT are the ratios of the peak axial stress due to the synoptic
wind, downburst, and tornado to the nominal capacity of the members, respectively.
An elevation view of the pole showing the sections at which the results are reported
is shown in Fig. 2 as well as the tornado longitudinal velocity profile and transverse
velocity profile for the three different wind fields. The results of the synoptic wind,
downburst, and tornado analyses are given in Tables 3, 4, and 5, respectively. The
variation of the factor λ with respect to the reference velocity, V j or V t , is plotted
to determine the critical reference velocity that leads to λ equal to 1.0, as shown in Figs. 3 and 4. Case 1 (Trans) leads to the peak stresses at various sections of the pole, which is expected since both the tower and the conductors are loaded in the transverse direction. Similarly, Case 1 (Trans) is the most critical downburst case for various sections of the pole, with a critical jet velocity of 36 m/s. For the tornado cases, Case 1 (B) leads to the peak stresses at many sections of the pole, with a critical tornado velocity of 42 m/s.
Figure 5 shows an elevation view of the H-Frame with the sections at which the
results are reported as well as the tornado longitudinal velocity profile and transverse
velocity profile for the three different wind fields. The results of the synoptic wind,
downburst, and tornado analyses are given in Tables 6, 7, and 8, respectively. The
variation of the factor λ with respect to the reference velocity, V j or V t , is shown in Figs. 6 and 7. The results indicate that Case 1 (Long) is the most critical case for
the column since the structure behaves as a cantilever in the longitudinal direction
Fig. 2 (a) Reported sections in SP (b) Tornado longitudinal velocity profile distribution along the
height (c) Transverse velocity profile distribution along the height for the three different wind fields
Fig. 3 Variation of λD/N with the jet velocity for SP; (a) Section 1-1, (b) Section 4-4
Fig. 4 Variation of λT/N with the tornado velocity for SP; (a) Section 1-1 (b) Section 4-4
Fig. 5 (a) Reported sections in HF. (b) Tornado longitudinal velocity profile distribution along the
height. (c) Transverse velocity profile distribution along the height for the three different wind fields
Table 6 Critical sections under synoptic wind loading for HF

      Gravity                    ASCE 2010 synoptic wind (V 10 = 40 m/s)
Sec   N G (kN)   M Gy (kN m)     M Ny (kN m)   M Nx (kN m)   N N (kN)   σ N (±MPa)   σ NG (MPa)   Critical     μN
1–1   −210.1     −14.8           0.0           −647.0        0.0        56.6         −61.9        C1 (Long)    0.28
2–2   −170.1     3.3             0.0           −373.9        0.0        32.7         −36.9        C1 (Long)    0.11
3–3   0.0        −50.7           −55.8         0.0           −3.6       9.0          −17.2        C2 (Trans)   0.04
4–4   −2.1       −32.1           −23.3         0.0           6.0        5.4          −13.0        C2 (Long)    0.03
5 Conclusion
In this study, two real steel poles transmission lines were analyzed under the HIW
load cases specified in ASCE-74_2020 [2]. The critical downburst jet velocity and the
critical tornado velocity were determined by equating the peak straining actions due
to HIW loads to those obtained due to the synoptic wind and ice load cases specified
in ASCE-74_2010 [3]. The critical downburst jet velocity is found to be 36 m/s for
both systems, while a tornado velocity of 42 m/s and 40 m/s is the critical tornado
velocity for the steel pole and the H-Frame systems, respectively. As a result, the
considered systems would not withstand downbursts and tornadoes with reference
velocities greater than the stated values, if they were designed under the synoptic
wind and ice loads only without a margin of safety. Since downburst jet velocities of 50 m/s have been observed in the field and the maximum velocity of an F2 tornado is about 70 m/s, it can be concluded that steel pole transmission lines are vulnerable to failure during downbursts and tornadoes.
Acknowledgements The authors gratefully acknowledge CEATI INTERNATIONAL Inc. for the
financial support provided for this research.
References
16. Hangan H, Kim JD (2008) Swirl ratio effects on tornado vortices in relation to the Fujita scale.
Wind Struct 11(4):291–302
17. Hangan H, Roberts D, Xu Z, Kim J (2003) Downburst simulation. Experimental and numerical
challenges. In: Proceedings of the 11th international conference on wind engineering, 1–5 June,
Lubbock, TX, USA
18. Hydro One Networks Inc (2006) Failure report: failure of towers 610 and 611, circuit X503E—500 kV guyed towers near the Township of Waubaushene, Ontario, 2 Aug 2006
19. Ibrahim AM, El Damatty AA (2019) Behaviour and design of guyed pre-stressed concrete
poles under downbursts. Wind Struct 29(5):339–359
20. Ibrahim AM, El Damatty AA, El Ansary AM (2017) Finite element modelling of pre-stressed
concrete poles under downbursts and tornadoes. Eng Struct 153:370–382
21. McCarthy P, Melsness M (1996) Severe weather elements associated with hydro tower failures
near Grosse
22. Ramsdell JV, Rishel JP, Buslik AJ (2007) Tornado climatology of the contiguous United States.
Nuclear Regulatory Commission Rep. NUREG/CR-4461, Rev. 2, 246 pp
23. Shehata AY, El Damatty AA (2008) Failure analysis of a transmission tower during a microburst.
Wind and Structures, An International Journal 11(3):193–208
24. Shehata AY, El Damatty AA (2007) Behaviour of guyed transmission line structures under
downburst wind loading. Wind Struct 10(3):249–268
25. Shehata AY, El Damatty AA, Savory E (2005) Finite element modeling of transmission line
under downburst wind loading. Finite Elements in Analysis and Design 42(1):71–89
Performance of Timber-Steel Dowel
Connections Reinforced
with Self-tapping Screws Under
Monotonic and Cyclic Loading
Abstract The use of mass timber for mid- and high-rise structures has risen dramat-
ically in recent years and is expected to continue to increase given recent changes
to the 2020 National Building Code of Canada, which permits mass timber struc-
tures up to 12-storeys tall. Critical to the growth of mass timber in Canada is the
development of safe and resilient seismic force resisting structural systems (SFRS).
One common SFRS used for mass timber structures is the braced frame. In mass timber braced frames, the steel connections are relied upon to provide ductility and energy dissipation capacity under earthquake loads. However, the ductility and energy dissipation capacity of steel dowel connections can be limited by the onset of a brittle failure mechanism (e.g. row shear or group tear-out) prior to significant
dowel yielding. To address this challenge, this paper presents experimental results on
the structural performance of timber-steel dowelled connections reinforced with self-
tapping screws. Twelve connections were tested under monotonic and cyclic loads.
The tested connections included two different fastener diameters of 11 and 16 mm,
with an internal steel plate, both with and without reinforcing screws designed to
prevent brittle shear failure. Experimental results demonstrate that the use of a larger
number of smaller fasteners results in higher connection ductility before the onset
of brittle row shear failure. Furthermore, results show that reinforcing steel dowel
connections with self-tapping screws can significantly increase connection ductility
and effectively prevent premature shear failure. Under cyclic loading, tested connec-
tions reinforced with self-tapping screws exhibited significant ductility and energy
dissipation capacity. Overall, results of this study demonstrate the potential for using
self-tapping screws to reinforce steel dowel connections and improve the seismic
performance of mass timber braced frames.
1 Introduction
Fuelled by the demand for sustainable building alternatives to steel and concrete, the
use of mass timber for mid- and high-rise structures has risen dramatically in recent
years. This includes several notable projects completed in Canada, including Brock
Commons, an 18-storey timber tower on the campus of the University of British
Columbia [1], The Arbour, a ten-storey mass timber building on the campus of George
Brown College in Toronto, Canadian Architect [2] and WoodWorks! Headquarters,
a seven-storey industrial building known as the Wood Innovation and Design Centre
[3]. Current projections suggest that the quantity of mass timber buildings will double
each year until 2034 [4]. These projections, coupled with changes to the National Building Code of Canada (NBCC), which will permit mass timber structures up to 12 storeys tall, suggest that the number of structures constructed of mass timber in the coming years will continue to rise [5]. However, critical to the development and continued use of mass timber as a reliable and resilient alternative to steel and concrete is its seismic performance.
To date, the most common seismic force resisting system (SFRS) for mass timber
structures are cross-laminated timber (CLT) shear walls. There have been signifi-
cant efforts in the research community to study the behaviour of CLT shear walls
under earthquake loads and to develop design guidelines for their use, including
the establishment of ductility (Rd ) and over-strength factors (Ro ) [6]. Despite their
high in-plane stiffness, CLT shear walls rely on hold-down connections to yield and
dissipate seismic energy during an earthquake, meaning that the CLT panel remains
entirely elastic [7]. Consequently, other SFRS, such as braced frames or moment-
resisting frames may be more economical and efficient when compared with CLT
shear wall systems because they use less wood, especially for low- and mid-rise
structures (1–6 storeys) in which the lateral load demands are smaller.
Despite the potential benefits of braced frame SFRS for mass timber structures,
there has been comparatively less research on their performance, particularly under
earthquake loads. The 2015 National Building Code of Canada (NBCC) does recog-
nize both limited and moderately ductile timber braced frames as an SFRS for mass
timber structures up to 20 and 15 m tall in regions of high seismicity [8]. In these
systems, the brace connections are relied upon for ductility and energy dissipation
capacity. This is typically achieved through the use of steel dowels/fasteners (e.g.
bolts or drift pins) to facilitate the brace connection, which are designed to yield in
bending to dissipate energy. However, because wood can be a brittle material, partic-
ular attention needs to be paid to the design of such connections, ensuring that brittle failure modes, such as group shear tear-out or row shear failure of the connection, do not occur before or near the initiation of fastener yielding.
There is a limited body of work in the literature on the performance of dowel-
type connections for mass timber braced frame structures tested in direct axial
tension under cyclic loads. Several researchers have conducted experimental studies
of dowelled timber connections under monotonic loading. Jorissen [9] studied timber-
to-timber bolted connections, while Pedersen [10] examined connections with steel
knife plates subjected to combinations of tension and transverse loading. Dorn et al.
[11] also examined connections with a steel knife plate, and in some experiments, a
clamp was used to simulate the reinforcement provided by self-tapping screws. Both
Jockwer et al. [12] and Yurrita et al. [13] provide extensive reviews of the testing and
modelling to date for dowelled timber connections and emphasize the importance
of ductile failure modes. Yurrita et al. [13] present tests on multiple knife plates
with dowelled connections, while Jockwer et al. [12] point out that reinforcement by
means of self-tapping screws can be a means of avoiding brittle failure modes.
There has been comparatively little reported on the cyclic behaviour of dowel-type
connections for timber. Piazza et al. [14] presented both monotonic and cyclic test
results for timber-to-timber dowelled connections. Notably, they found that the use
of self-tapping screws could improve ductility. Oudjene et al. [15] conducted cyclic
tests of connections with a steel knife plate and a single dowel. Jalilifar et al. [16]
investigated the behaviour under cyclic loading of dowelled connections, however,
the tests were intended to simulate CLT-to-CLT connections.
This review indicates a need to understand the ductility and energy dissipation
capacity of timber-steel connections typical of those used for mass timber braced
frames. The ductility of timber-steel connections can be limited by the onset of a
premature brittle failure mode (e.g. group tear-out or row shear). Strategic placement
of self-tapping screws to increase connection capacity against brittle failure modes
has the potential to lead to increased ductility and energy dissipation capacity, but
little-to-no experimental validation of the effectiveness of this approach has been
carried out. The overarching objective of this research is to examine the performance
of timber-steel connections with and without reinforcing screws to understand their
ability to increase connection ductility and energy dissipation capacity.
2 Experimental Programme
The objective of this study was to evaluate the effectiveness of using self-tapping
screws to improve the ductility and energy dissipation capacity of timber-steel dowel
connections. A typical test specimen is shown in Fig. 1, and consists of a bolted
connection on the bottom, which was intentionally over-designed to remain elastic
during the test, and a dowelled connection on top which was the subject of this
research. The specimens were fabricated of 190 × 136 mm Spruce-Pine-Fir (SPF)
Grade 24f-ES glued-laminated (glulam) timber supplied by Nordic Structures and
produced in accordance with CSA O177-06 Qualification Code for Manufacturers of
Structural Glued-Laminated Timber [17], as specified in CSA O86-19 Engineering
Design for Wood [18]. This glulam grade is produced from spruce-pine-fir (SPF)
species with a density of 560 kg/m3 , a mean relative density (G) of 0.47, and a
moisture content of 12%. The design strengths include a modulus of elasticity (E) of
13.1 GPa, a parallel-to-grain tension strength ( f t ) of 20.4 MPa, and a longitudinal
shear strength ( f v ) of 2.5 MPa.
Fig. 1 Typical specimens: a 2–11 mm dowels, and b 1–16 mm dowel, c Reinforcing screw layout
Prior to testing, an evaluation of the design strength of the control specimens was
carried out according to the guidelines in the Canadian wood design standard, CSA
086-19 [18]. For both connection types, both row shear and yielding failure modes
were considered. It is noted that because the connections did not include more than
one row of fasteners, group tear-out was not considered. The row shear strength
of the connections in this study (PRS ) which includes a single row of fasteners is
determined according to Eq. 1:
where ϕ w is the resistance factor for brittle failures, f v is the longitudinal shear
strength of the timber, K ls is a factor for member loaded surfaces, t is the member
thickness, nc is the number of fasteners in the row, acr i is the minimum of the fastener
spacing and the distance to the loaded end of the connection, K D is the load duration
factor, K Sc is the service condition factor, and K T is the treatment factor.
For the connections in this study, ϕ w is taken as 1 to get a more realistic connection
capacity for laboratory testing, f v is 2.5 MPa, K ls is 0.65 for side members, t is
90 mm (see Fig. 1), and acr i is 100 mm for the 16d specimens and 50 mm for the 11d
specimens. The service condition factor and treatment factors are both taken as 1.0.
Because the applied loading is considered short-term loading based on CSA O86-19,
which is defined as a load not expected to last more than 7 days [18], a load duration
factor of 1.15 is applied to the tabulated design strength, which is based on standard
term loading. The 16d and 11d connections each have an unreinforced row shear
strength of 42.6 kN. It is noted that because the design standard lists characteristic
strength values (fifth percentile), the design strength will not accurately reflect the
observed connection strength in the lab. However, the design strength should provide
a good estimate of the governing mode of failure.
In addition to the row shear strength, the yielding resistance (PY ) of the
connections was also determined, which can be calculated according to Eq. 2:
PY = φy n u n s n F (2)
where ϕ y is the resistance factor for yielding failures, nu is the unit lateral yielding
resistance, ns is the number of shear planes, and nF is the number of fasteners in the
connection. For the connections in this study, ϕ y is taken as 1.0, ns is 1 for a single
internal steel plate, and nF is 1 or 2 for the 16d and 11d connections, respectively. The
unit lateral yielding resistance (nu ) is determined using the Johansen yield equations
[19] and taken as the minimum of four potential failure modes (modes a, c, d, and g in
CSA O86-19) for three member connections [18]. For the connections in this study,
modes (d) or (g) govern for all connection types, in which the unit lateral yielding
resistance (nu ) is computed as the minimum of Eqs. 3 and 4:
Case (d): n u = f 1 dF² [ √((1/6)(f 2 /(f 1 + f 2 ))(f y /f 1 )) + (1/5)(t 1 /dF ) ] (3)

Case (g): n u = f 1 dF² √((2/3)(f 2 /(f 1 + f 2 ))(f y /f 1 )) (4)
where f 1 and f 2 are the embedment strengths, d F is the diameter of the fastener, t 1 is the dowel-bearing length, and f y is the yield strength of the fastener in
bending. The timber embedment strength is determined as 50G(1–0.01d F ), where G
is the mean relative density of the wood. For the connections in this study, the mean
relative density of the wood is 0.42 (spruce-pine-fir), resulting in timber embedment
strengths of 17.6 MPa and 18.7 MPa, respectively. The fastener diameter is 16 mm
or 11 mm depending on the connection, and the fastener yield strength in bending is
taken as 946 MPa or 400 MPa, respectively. Ultimately, the 16d and 11d connections
have a yield resistance of 29.3 kN and 24.2 kN, respectively. Overall, based on the
guidelines in CSA O86-19, both connections are expected to yield before the onset of
brittle row shear failure. The factors of safety between the brittle and ductile failure modes for these connections are 1.45 and 1.76 for the 16d and 11d specimens, respectively.
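A small sketch of how Eqs. (2)–(4) combine is given below. The embedment strength of the internal steel plate (f 2) is not reported here, so a large placeholder value is assumed; for that reason, and because the dowel-bearing length entering mode (d) is not fully specified, the printed values are indicative only and need not reproduce the 29.3 kN and 24.2 kN figures quoted above.

```python
from math import sqrt

def unit_yield_resistance(f1, f2, fy, d_f, t1):
    """Minimum of yielding modes (d) and (g), Eqs. (3) and (4)."""
    n_d = f1 * d_f**2 * (sqrt((1/6) * (f2 / (f1 + f2)) * (fy / f1)) + (1/5) * (t1 / d_f))
    n_g = f1 * d_f**2 * sqrt((2/3) * (f2 / (f1 + f2)) * (fy / f1))
    return min(n_d, n_g)

def yield_resistance(n_u, n_s=1, n_f=1, phi_y=1.0):
    """P_Y = phi_y * n_u * n_s * n_F (Eq. 2)."""
    return phi_y * n_u * n_s * n_f

F2_STEEL = 1.0e5  # MPa; placeholder for the steel-plate embedment strength (assumed)

# 16d connection: one 16 mm dowel, fy = 946 MPa, f1 = 17.6 MPa, t1 = 90 mm
p_16d = yield_resistance(unit_yield_resistance(17.6, F2_STEEL, 946.0, 16.0, 90.0), n_f=1)

# 11d connection: two 11 mm dowels, fy = 400 MPa, f1 = 18.7 MPa, t1 = 90 mm
p_11d = yield_resistance(unit_yield_resistance(18.7, F2_STEEL, 400.0, 11.0, 90.0), n_f=2)

print(p_16d / 1e3, p_11d / 1e3)  # kN; indicative values only
```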
Figure 2 shows the experimental test set-up and instrumentation layout for a typical
specimen. The specimens were tested under direct axial tension using an Instron 8802
universal testing machine. The specimens were mounted in a steel bracket that was
connected to the timber specimen with six 19 mm (3/4 in.) steel bolts. The bolted connection was intentionally over-designed to ensure it remained elastic during testing. The top of the steel bracket was secured in the test machine using a 7/8 in. steel bolt inserted
into its hydraulic grips. The tested connection was oriented at the bottom of the test
set-up and was secured in the test machine using its hydraulic grips, which were
attached to the steel knife plate in the timber.
Figure 2 also shows the instrumentation used to measure the structural response
of the specimens. The load was measured using the 250 kN load cell mounted to
the test frame. The displacement of the connection was measured using both the
actuator displacement and two string potentiometers mounted to the side of the test
specimens (see Fig. 2). This permitted the displacement of the test connection to be
determined by subtracting the average displacement from the string potentiometers
from the actuator displacement. Two linear potentiometers were also mounted across
the slot in the timber specimen to capture any opening during testing. All data was
recorded at a rate of 1 Hz.
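The data reduction implied above (connection displacement equals the actuator displacement minus the average of the two string potentiometers) can be written directly, as in the sketch below; the function and the 1 Hz records shown are illustrative and assume the channels are already synchronized.

```python
import numpy as np

def connection_displacement(actuator, string_pot_1, string_pot_2):
    """Displacement of the test connection: actuator stroke minus the average
    of the two string potentiometers mounted beside the specimen."""
    actuator = np.asarray(actuator, dtype=float)
    avg_pots = (np.asarray(string_pot_1, dtype=float) +
                np.asarray(string_pot_2, dtype=float)) / 2.0
    return actuator - avg_pots

# Illustrative 1 Hz records (mm)
disp = connection_displacement([0.0, 1.2, 2.5], [0.0, 0.2, 0.4], [0.0, 0.2, 0.5])
print(disp)   # contribution of the dowelled connection alone
```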
The specimens were tested under both monotonic and cyclic loading (see Table 1).
The monotonic load was applied at a rate of 2 mm/min up to failure. To simulate the
displacement demand and damage effects on a timber-steel dowel connection during
a major seismic event, the specimens were tested under a pre-determined cyclic
lateral load sequence up to failure. The CUREE loading protocol was utilized [20],
which is intended for seismic evaluation of wood-based components. The protocol
is based on a reference displacement (Δy ), which was determined in this study based
on the yield displacement of each specimen type during the monotonic test. After
determining the reference displacement, testing consisted of a primary cycle followed
by a series of trailing cycles with an amplitude equal to 75% of the amplitude of the
preceding primary cycle. It is noted that the initiation cycles, which are typically
executed at the start of the CUREE loading protocol and are intended to check the
loading equipment and measurement devices were not applied in this study. Figure 3
shows a typical loading protocol for the 16d and 11d specimens, which were based on
reference displacements of 6.62 mm and 5.95 mm, respectively. Primary amplitudes of 0.1Δy, 0.2Δy, 0.3Δy, 0.4Δy, 0.7Δy, and 1.0Δy were applied, after which the amplitude was increased in increments of Δy up to failure. The rate of displacement was varied between 1 and 10 mm/s,
according to the recommendations of Krawinkler et al. [20]. After reaching the end
of the loading protocol, specimen 11d-CYC-R was monotonically pulled to failure
to assess the residual strength of the connection.
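A simplified generator for the displacement amplitude schedule described above is sketched below. The number of trailing cycles per primary cycle is not stated in the text (the CUREE protocol prescribes its own counts), so it is left as a parameter; the amplitudes follow the 0.1Δy–1.0Δy sequence and then grow in increments of Δy.

```python
def curee_style_amplitudes(delta_y, n_trailing=3, max_multiple=6):
    """Primary amplitudes (as multiples of the reference displacement) followed
    by trailing cycles at 75% of the preceding primary amplitude."""
    primaries = [0.1, 0.2, 0.3, 0.4, 0.7, 1.0]
    primaries += [float(m) for m in range(2, max_multiple + 1)]  # 2, 3, ... x delta_y
    schedule = []
    for p in primaries:
        schedule.append(p * delta_y)                        # primary cycle
        schedule.extend([0.75 * p * delta_y] * n_trailing)  # trailing cycles
    return schedule

# 11d specimens used a reference displacement of 5.95 mm
print(curee_style_amplitudes(5.95)[:8])
```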
Table 2 summarizes the important structural response parameters for each test and
Fig. 4 shows the load deformation response from both the monotonic and cyclic tests.
The yield load (Py ) and yield displacement (Δy ) were determined using the secant method, in which the intersection of the elastic and post-yield secant slopes is taken as the yield point. The ultimate load (Pu ) is the maximum load achieved by the specimen during testing, while the ultimate displacement (Δu ) is the displacement after a 20% drop in load-carrying capacity from the ultimate. To compare the seismic performance of the connections with and without reinforcing screws, the displacement ductility (μ) is computed as the ratio of the ultimate displacement to the yield displacement (μ = Δu /Δy ).
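The response parameters defined above can be extracted from a monotonic load–displacement record as sketched below. The 20% post-peak drop criterion follows the definition in the text, while the yield displacement is taken as an input obtained separately (e.g. by the secant method); the record in the example is fabricated for illustration.

```python
import numpy as np

def response_parameters(disp, load, delta_y):
    """Ultimate load, ultimate displacement (20% post-peak drop) and ductility.

    delta_y is the yield displacement obtained separately, e.g. by the secant
    method described in the text.
    """
    disp = np.asarray(disp, dtype=float)
    load = np.asarray(load, dtype=float)
    i_peak = int(np.argmax(load))
    p_u = load[i_peak]
    # First post-peak point where the load has dropped by 20% from the ultimate
    post_peak = np.where(load[i_peak:] <= 0.8 * p_u)[0]
    i_ult = i_peak + post_peak[0] if post_peak.size else len(load) - 1
    delta_u = disp[i_ult]
    return p_u, delta_u, delta_u / delta_y

# Illustrative record (mm, kN)
d = [0, 2, 4, 6, 8, 10, 11, 12]
p = [0, 20, 40, 55, 65, 69, 60, 50]
print(response_parameters(d, p, delta_y=3.0))
```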
The monotonic test results show that, as anticipated, the specimens tested under monotonic loads without reinforcing screws experience some yielding prior to failure, achieving displacement ductilities of 3.9 and 2.9 for the specimens with the
16 and 11 mm dowels, respectively. After the initiation of yielding, both connections
exhibit a sudden and brittle row shear failure, resulting in a significant drop in load
carrying capacity. Specimens 16d-MON and 11d-MON achieved ultimate loads of
69.4 kN and 51.3 kN, respectively, and ultimate displacements of 11.5 mm and
10.2 mm, respectively. The ultimate capacities of the connections are 2.37 and 2.12 times the predicted design values, which is not surprising, given that timber
standards typically use characteristic strengths in design. Interestingly, specimen
16d-MON, which had a lower factor of safety against brittle failure when compared
with specimen 11d-MON (1.45 vs. 1.76) actually exhibited higher ductility.
The governing failure mode after initial fastener yielding for both connections
was brittle row shear. A typical row shear failure observed in this study is shown in
Fig. 5. As the specimen is loaded, the tension on the plate transfers into the dowel,
which is forced against the adjacent wood. Under increasing load, the surrounding
wood densifies until the applied force exceeds the shear strength of the timber and
causes a ‘plug’ of wood to shear from its surroundings. Under increasing load, the
length of the plug protruding from the timber (visible in Fig. 5a) continues to increase
to failure. It is noted that the timber does not shear uniformly across the length of the
dowel. Instead, the shear failure initiates close to the internal steel plate and extends outwards.
As the dowel bends, it shortens and the ends of the dowel rotate, bearing into the
wood. Under increasing load, the wood plug extends towards the sides of the timber.
At the sides of the timber, the wood splits and cracks are visible longitudinally along
the specimen through the dowel.
Following tests on the unreinforced connections under monotonic load, an iden-
tical pair of connections were tested that were reinforced with self-tapping screws
(see Fig. 1). In contrast to the unreinforced specimens, the reinforced specimens
had similar levels of strength with much larger deformation capacities. Specimens
16d-MON-R and 11d-MON-R achieved ultimate loads of 73.5 kN and 62.5 kN,
respectively, which were only approximately 10% greater when compared with the
unreinforced specimens, however, the real difference in the observed behaviour of
the connections pertained to their deformation capacity. Through the addition of the
reinforcing screws, the ductility of specimens 16d-MON-R and 11d-MON-R was
3.5 and 15.6, respectively. For specimen 11d-MON-R, this represents an over 500%
increase in the displacement ductility when compared with the control. For specimen
16d-MON-R, while this is a slight drop in ductility compared with the control, the
force deformation plot in Fig. 4a shows that although there was a sudden drop in load
(larger than 20% of the ultimate strength) which limited the theoretical displacement
ductility of this connection, it had a much higher deformation capacity, and was
able to sustain very large displacements (i.e. up to 40 mm), without a drop of more
than 30% from its ultimate load carrying capacity. This is a much more desirable
behaviour, which occurs gradually as opposed to the controls, which failed suddenly
without warning.
With respect to failure mode, both connections reinforced with screws exhibited a
yielding mode of failure, similar to the control specimens. However, the results also
show that brittle row shear failure was effectively arrested or prevented completely
due to the presence of the reinforcing screws. In specimen 16d-MON-R, a row shear
failure did initiate at a displacement of 6.76 mm, which resulted in an 18% drop in
load. However, the failure was arrested at the location at which the screws bisected
the shear plane, effectively preventing brittle failure and allowing the connection to
sustain large deformations prior to failure. In specimen 11d-MON-R, there were no
signs of the initiation of a row shear failure early on during the test and the connection
exhibited a very ductile response, with some strength degradation/softening under
increasing displacement. Figure 6 compares the degree of bending in the dowels
for connections with and without screws. The results clearly show the increase in
deformation capacity of the connections reinforced with screws, in which the steel
dowels exhibit significant bending. Ultimately, addition of the reinforcing screws was
found to arrest crack/shear plane development and prevent brittle row shear failure.
Initially, the connection behaves elastically as the dowel bears against the surrounding
wood, which has a very high stiffness. As the deformation increases, the dowel will
bend and eventually yield, while also crushing the surrounding wood. Upon load
reversal, the dowel unloads elastically, but some permanent deformation remains,
requiring that a compressive load be applied to reach zero displacement. Upon
reloading, the crushed wood no longer occupies the space adjacent to the dowel,
meaning that the stiffness of the connection will equal the stiffness of only the dowel
until it comes into contact with the wood again, at which point the stiffness will
increase and the dowel will yield again. Under multiple load reversals, the connec-
tion continues to lose stiffness and strength, but no sudden failure of the specimens
was observed in any test.
4 Conclusions
Acknowledgements Technical support and knowledge were provided by Joshua Woods and Colin
MacDougall and their insight was vital to this research. The financial support provided by the Natural
Sciences and Engineering Research Council is gratefully acknowledged. The material discount
provided by Nordic Structures in this research is greatly appreciated.
References
1. Hasan Z (2017) Inside Vancouver’s Brock Commons, the world’s tallest mass timber
building. https://www.archdaily.com/879625/inside-vancouvers-brock-commons-the-worlds-
tallest-timber-structured-building. Accessed 5 Mar 2022
2. Canadian Architect (2019) The Arbour. https://www.canadianarchitect.com/the-arbour/.
Accessed 5 Mar 2022
3. MG Architecture (2013) Wood innovation and design Centre. https://mg-architecture.ca/work/
wood-innovation-design-centre/. Accessed 5 Mar 2022
4. Anderson R, Atkins D, Beck B, Dawson E, Gale CB (2020) North American mass timber state
of the industry. Forest Business Network, p 5
5. NRC (2020) National building code of Canada (NBCC). National Research Council of Canada
6. Aloisio A, Fragiacomo M (2021) Reliability-based overstrength factors of cross-laminated
timber shear walls for seismic design. Eng Struct 228:111547
7. Dong W, Li M, Ottenhaus LM, Lim H (2020) Ductility and overstrength of nailed CLT hold-
down connections. Eng Struct 215:110667
8. NRC (2015) National building code of Canada (NBCC). National Research Council of Canada
9. Jorissen AJM (1998) Double shear timber connections with dowel type fasteners (thesis). Delft
University Press, Delft
10. Pedersen MU (2001) Dowel type timber connections: strength modelling. Danmarks Tekniske
Universitet
11. Dorn M, de Borst K, Eberhardsteiner J (2012) Experiments on dowel-type timber connections.
Eng Struct 47:67–80. https://doi.org/10.1016/j.engstruct.2012.09.010
12. Jockwer R, Fink G, Köhler J (2018) Assessment of the failure behaviour and reliability of
timber connections with multiple dowel-type fasteners. Eng Struct 172:76–84. https://doi.org/
10.1016/j.engstruct.2018.05.081
13. Yurrita M, Cabrero JM, Quenneville P (2019) Brittle failure in the parallel-to-grain direction
of multiple shear softwood timber connections with slotted-in steel plates and dowel-type
fasteners. Constr Build Mater 216:296–313
14. Piazza M, Polastri A, Tomasi R (2011) Ductility of timber joints under static and cyclic loads.
Proc Inst Civ Eng-Struct Build 164(2):79–90
15. Oudjene M, Tran VD, Khelifa M (2017) Cyclic and monotonic responses of double shear single
dowelled timber connections made of hardwood species: experimental investigations. Constr
Build Mater 132:188–195
16. Jalilifar E, Koliou M, Pang W (2021) Experimental and numerical characterization of mono-
tonic and cyclic performance of cross-laminated timber dowel-type connections. J Struct Eng
147(7):04021102
17. CSA (2006) Qualification code for manufacturers of structural glued-laminated timber. CSA
O177-06, Rexdale, ON, Canada
18. CSA (2019) Engineering design in wood. Canadian Standards Association (CSA) O86-19
19. Johansen KW (1962) Yield-line theory. Cement and Concrete Association, London
20. Krawinkler H, Parisi F, Ibarra L, Ayoub A, Medina R (2001) Development of a testing protocol
for wood frame structures. In: CUREE Publication No. W-02. Consortium for Research in
Earthquake Engineering (CUREE), Richmond, CA
21. Yurrita M, Cabrero JM (2018) Brittle failure in the parallel-to-grain direction of multiple shear
softwood timber connections with slotted-in steel plates and dowel-type fasteners. Constr Build
Mater 216:296–313. https://doi.org/10.1016/j.conbuildmat.2019.04.100
Assessment of Current Analysis
Methodology of Light-Frame Wood
Buildings Under Lateral Loads Using
Wood3D
Abstract Wood shear walls are the main components used to resist lateral loads in
a light-frame wood building (LFWB). In a previous study, a novel simplified model,
that accurately predicts the deformations and the straining actions in the shear walls
of a LFWB through three-dimensional finite element analysis, was developed. This
efficient model implicitly simulated all the components of the shear walls including
the sheathings, the studs, and the nails. In this model, the bending and the shear
stiffness of the walls are simulated using nonlinear link elements. An extensive
database was developed to determine the link properties for all the work config-
urations. An interface was also developed that is integrated with the commercial
software ETABS and the database, allowing an efficient three-dimensional analysis
of LFWBs under lateral loads. This numerical package is named “Wood3D”. The
software Wood3D can estimate the straining actions in all the components of a LFWB
under the combined effects of gravity and lateral loads. In the current study, Wood3D
is used to analyze and conduct a design check for an existing LFWB that was designed
using the approximate procedure, currently employed in the industry. The results are
used to assess the accuracy of this approximate procedure in comparison with the
rigorous analysis and design checks conducted using Wood3D. Three models of
the existing LFWB are generated using different modeling assumptions. The results
from this analysis conclude that the current design method for wood shear walls in
the industry is over-conservative.
1 Introduction
Up to the nineteenth century, wood was one of the primary materials used in struc-
tures [8]. The demand for a more sustainable future is increasing, which is why extensive research is being done to make that possible. One of the main reasons
that hinder the use of light frame wood in the construction of mid-rise buildings is
the difficulty in conducting accurate analyses of such structures under lateral loads.
Wood shear walls are hard to model using finite element modeling software because
of their complexity. In order to accurately model a wood shear wall, one would have
to model the sheathing, the studs, blocking, and the nails (Fig. 1). Because of this
difficulty, hand calculations are used as the primary analysis method for LFWBs.
In the industry, the typical procedure for designing a shear wall in a LFWB involves first calculating the total shear force at each story using the National Building Code of Canada (NBCC) 2015. The forces are initially distributed to the walls based on their tributary areas, following a flexible diaphragm assumption. The resulting wall deflections are calculated using CSA O86 [6], and these values are used to estimate the stiffness of the walls. The forces are then redistributed based on the walls' stiffnesses, new deflections are calculated to estimate updated stiffnesses, and the iterations are carried out until the wall deflections converge. The next step involves applying the CSA O86 [6] provisions to select appropriate wall sheathing and wall studs that can withstand the applied loads. This method is difficult to perform, time-consuming, and not entirely accurate, which makes the design either too conservative or unsafe.
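The iterative procedure described above can be summarized schematically as follows; this is not the industry calculation itself, and wall_deflection stands in for the CSA O86 deflection expression, which is not reproduced here.

```python
def distribute_forces(storey_shear, tributary_fractions, wall_deflection,
                      tol=1e-3, max_iter=50):
    """Iterative stiffness-based distribution of a storey shear to its walls.

    storey_shear: total lateral force at the storey (kN).
    tributary_fractions: initial share of each wall (flexible-diaphragm assumption).
    wall_deflection(wall_index, force): deflection per the CSA O86 expression.
    """
    forces = [storey_shear * f for f in tributary_fractions]
    for _ in range(max_iter):
        deflections = [wall_deflection(i, f) for i, f in enumerate(forces)]
        stiffnesses = [f / d for f, d in zip(forces, deflections)]
        total_k = sum(stiffnesses)
        new_forces = [storey_shear * k / total_k for k in stiffnesses]
        if max(abs(nf - f) for nf, f in zip(new_forces, forces)) < tol:
            return new_forces, deflections
        forces = new_forces
    return forces, deflections

# Illustrative use: two walls with made-up linear flexibilities (mm per kN)
flexibilities = [0.004, 0.002]
forces, defl = distribute_forces(100.0, [0.5, 0.5],
                                 lambda i, f: flexibilities[i] * f)
print(forces)   # the stiffer wall attracts more force
```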
Regarding previous studies conducted on the numerical modeling of LFWBs,
a three-dimensional model was developed by Collins et al. [4]. In this model, the
shear walls were simulated using frame elements, shell elements, and two energet-
ically equivalent diagonal nonlinear springs. This model can predict the maximum
displacements of the structure as well as the structure’s demands, such as a nailed joint
connection. This three-dimensional nonlinear finite element method (FEM) proce-
dure was validated in another study by Collins et al. [5]. This modeling technique is
not simple and would take a long time to complete for a large, complicated building
compared with Wood3D. Li et al. [9] conducted a study on post and beam timber
buildings subjected to seismic loads using a nonlinear time history model, where
the three-dimensional model consisted of an assembly of floor/roof diaphragms and
wall subsystems. In this study, the in-plane stiffness of the diaphragms was consid-
ered by modeling the floor/roof elements as beam and truss elements.
Fig. 1 Components of a wood wall
On the other
hand, shear walls were modeled as wall posts and nonlinear shear springs using
vertical beam elements. In terms of base shear forces and first-story inter-story drift responses, the predictions of this numerical model matched the shake table test results very well [9]. Although the modeling technique of this study is
simpler than the study done by Collins et al. [4], it does not predict the internal
forces in all the connections in a wall. Chen et al. [3] investigated the deflection
of LFWBs using a numerical simulation. In this study, a modified macro-element
model was developed. This model can estimate the lateral and vertical deformations
of shear walls under lateral loads. A study by Casagrande et al. [1] developed a simplified tool for the analysis of one-story timber shear walls under both vertical and horizontal loads. This study was later extended by Rossi et al. [12] to the analysis of multi-story shear wall buildings. Filiatrault et al. [7] presented a
numerical model that is able to predict the displacement response and energy dissi-
pation characteristics of wood shear walls. Three structural components made up the shear wall in this model: rigid framing members, linear elastic sheathing panels, and
nonlinear sheathing-to-framing connectors. A full-scale test of wood-framed shear
walls subjected to monotonic and cyclic loading was used to validate this model. The
study concluded that the model’s predictions coincided nicely with the experimental
findings of Filiatrault et al. [7].
It is evident that there is a need to develop a numerical technique that is accurate yet does not require high-resolution finite element modeling, which is complicated
and time-consuming. At Western University, research has been done to develop a
simplified way to model wood shear walls in mid-rise LFWBs. The simplified method
allows the user to model wood shear walls as a frame element with two link elements
at each end.
Software called Wood3D has been developed using this technique and is integrated with ETABS, allowing efficient three-dimensional analysis of LFWBs
under lateral and gravity loads. After conducting the analysis, Wood3D provides the
user with the walls’ deflection, the inter-story drift, the straining actions in the studs,
the forces in the nails, the forces in the tie rods, and the stresses in the sheathings.
In this study, Wood3D is used to analyze and verify the design of a LFWB that has
been previously designed using the approximate procedure currently used by design
engineers in the industry.
The main objective of this study is to use the software Wood3D to model and
analyze a real LFWB and use the results to assess the adequacy of the simplified
analysis method currently used in the industry. The building considered in this study
is a four-story residential building that includes reinforced concrete core sections
in addition to the wood shear walls. The study assesses the effect of eliminating
the cores and relying only on the wood shear walls to resist the lateral loads. The
paper starts by providing a brief description of the simplified model for individual
shear walls using equivalent springs, which is the basis of the technique used in
Wood3D. A database was developed to estimate the properties of the equivalent
springs for all possible shear wall configurations. A brief description of Wood3D,
which involves an interaction between a developed MATLAB code, the database, and the software ETABS, is provided. The building considered in the study is then
described. Three numerical models were constructed for this building using Wood3D.
The first model includes the reinforced concrete cores as well as the wood shear
walls to resist the lateral loads. This duplicates the structural system used in the real
building. The model is analyzed under the set of loads used in the design of the
building, which was previously performed using an approximate method based on
hand calculations. The straining actions obtained using Wood3D in various elements of the walls of the building indicate the adequacy of the hand-calculation procedure.
In the second model, the concrete cores are transformed into gravity-load elements such that the lateral loads are resisted using only wood shear walls. The results of the analysis conducted using Wood3D are used to assess if a multi-story building can be constructed solely from wood. In the third model, the gravity-load resisting walls of the building were reconfigured to carry lateral loads, and the performance of this modified building was assessed using Wood3D.
Fig. 2 Detailed wall versus simplified "two-link" model: a detailed FEM model of a shear wall (Niazi et al. [10]); b simplified two-link model (Peng et al. [11])
The results from the load-displacement pushover analysis obtained from the high-
resolution model are used to obtain the properties of the links of the simplified model.
The displacement is decomposed into flexural and shear components, which are used to obtain the axial and shear values of the two side links. In comparison with the comprehensive FEM method, the simplified FEM approach captured the pushover analysis results accurately, with acceptable deviations, while requiring significantly less computation time (Peng et al. [11]).
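A rough illustration of how a single pushover data point could be converted into equivalent link properties is sketched below, assuming the flexural part of the drift comes from rigid-body rocking on the two end links and the remainder is shear racking; the actual decomposition of Peng et al. [11] is more elaborate, and all numbers here are placeholders.

```python
# Hedged sketch: back-calculating equivalent end-link properties from one point
# of a pushover curve, assuming the flexural part of the drift comes from
# rigid-body rocking on the two end (axial) links and the rest is shear racking.
# The procedure of Peng et al. [11] is more detailed; the values are made up.

def equivalent_link_stiffness(V_kN, delta_total_mm, delta_flex_mm, h_mm, L_mm):
    delta_shear = delta_total_mm - delta_flex_mm       # shear (racking) component
    # Rocking kinematics: top displacement = rotation * height, rotation = 2*d/L
    d_axial = delta_flex_mm * L_mm / (2.0 * h_mm)       # end-link axial deformation
    axial_force = V_kN * h_mm / L_mm                    # force from overturning couple
    k_axial = axial_force / d_axial                     # kN/mm per end link
    k_shear = V_kN / delta_shear                        # kN/mm for the shear link
    return k_axial, k_shear

# Example with placeholder numbers: 50 kN lateral load, 20 mm total drift of
# which 8 mm is flexural, on a 2440 mm tall by 1220 mm long wall segment.
k_axial, k_shear = equivalent_link_stiffness(50.0, 20.0, 8.0, 2440.0, 1220.0)
print(f"axial link ~ {k_axial:.1f} kN/mm, shear link ~ {k_shear:.1f} kN/mm")
```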
3 Description of Wood3D
The software Wood3D is designed such that it provides interaction between three
components: (i) a computer code written in MATLAB; (ii) a graphical interface; and (iii) a database of the link properties for all potential wall configurations. A
description of the developed database is provided below.
The objective here was to develop a database that can be used to estimate the prop-
erties of the links that can simulate any possible configuration of a light-frame wood
shear wall. In order to do so, a code was developed in MATLAB that allows automated generation of the data file needed to build a high-resolution model of a general light-frame wood wall. The code also automatically performs the analysis of the wall using SAP2000 and then carries out the process established by Peng et al. [11] to obtain the corresponding properties of the link elements simulating the wall.
In order to cover all potential practical configurations for the shear walls, the combi-
nation of all the parameters provided in Table 1 is considered in the development
of this database. The parameters include the wall length, nail spacing, stud spacing,
stud size, sheathings, and rod size. This is in addition to a multiplier of the wall’s
own weight which will reflect the total axial force acting on the wall due to dead
load. This database is stored in Wood3D. Once the user specifies the configuration
of a shear wall, the database provides an estimate of the values of the corresponding
simplified model. The straining actions for each wall corresponding to any specific
drift of the wall are stored. After the analysis is conducted, once the drift is known,
the database can provide all the straining actions on all components of the wall.
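The sketch below is an illustration only of how such a configuration-keyed lookup might be organized; the keys, property names, and values are invented and do not reproduce the actual Wood3D database.

```python
# Illustrative-only sketch of a link-property database keyed by wall
# configuration. Keys, fields, and values are invented placeholders; the real
# Wood3D database is generated from SAP2000 high-resolution analyses.

LINK_DB = {
    # (wall length m, nail spacing mm, stud spacing mm, stud size,
    #  sheathing, rod size, dead-load multiplier)
    (2.4, 150, 406, "2x6", "OSB 11 mm", "rod-16", 1.0): {
        "axial_stiffness_kN_per_mm": 48.0,
        "shear_stiffness_kN_per_mm": 3.9,
        # straining actions stored versus wall drift (mm) for post-processing
        "nail_force_vs_drift_kN": {5.0: 0.35, 10.0: 0.62, 15.0: 0.80},
    },
}

def link_properties(config):
    try:
        return LINK_DB[config]
    except KeyError:
        raise KeyError(f"Configuration {config} not covered by the database")

props = link_properties((2.4, 150, 406, "2x6", "OSB 11 mm", "rod-16", 1.0))
print(props["axial_stiffness_kN_per_mm"])
```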
The user starts by creating a 3D model of the building using the software ETABS. The model will generate a text file, *.e2k. The flow chart provided in Fig. 3 shows the three components of Wood3D and how they interact with the software ETABS.
Three models of the same four-story building with different modeling assumptions
are considered in this study. The building consists of concrete core elements (CC), lateral-load resisting shear walls (WL), and gravity-load resisting walls (WG). The structural elements considered in the three models are described below:
Model A: Considers the CC, WL, and WG elements as they are implemented in the building. This model can be used to assess the real behavior of the existing building.
Model B: Considers WL and WG as they are implemented, while the concrete cores are replaced by only gravity-load resisting elements. This model is used to assess the performance of the building in case the lateral-load resistance of the building relies only on the wood shear walls.
Model C: Considers WL, while CC are replaced with gravity elements, and WG are transformed to lateral-load resisting wood elements. This model reflects the situation where all the wood walls (including the gravity ones) are used to resist the lateral loads without reliance on the concrete cores.
Figure 4 shows the plan view of the building including the elements CC, WL, and WG. Figure 5 provides a plan view of model B showing WL. Figure 6 provides a plan view of model C showing WL and WG. A schematic of the three-dimensional model of the building in ETABS is shown in Fig. 7.
Fig. 3 Development of Wood3D (flow chart); visible steps include: 5. Calculate the deflection for each wall for the different load combinations; 6. Check the force/strength ratios

The gravity loads, including dead, live, and snow loads, were considered in the analysis together with the seismic loads specified in the drawings of the building. The seismic analysis was conducted using the equivalent static approach specified in the NBCC (2015). Wind loads are not considered in the analysis
since the design of this low-rise building is governed by seismic loads. The load
combinations involving gravity and seismic loads specified in the NBCC (2015) are
considered.
The three models are analyzed using the software Wood3D and the results in
terms of deflection, story-drift, and straining actions in various components of all the
wood walls are determined.
Fig. 4 Plan view of all the vertical elements including CC, WL, and WG
Fig. 6 Plan view of the lateral-load resisting elements for model C
The results obtained from the analysis of the maximum straining actions in various
components of the walls together with the capacity of their components are discussed
below.
Figure 8 displays the maximum axial load versus displacement curve of all the nail
connections in the four-story residential building considered. This curve represents
the characteristic of the nails that were implemented into Wood3D. The figure outlines
the maximum axial force in the nails in model A, model B, and model C. Model A
and model C are within the acceptable range, while model B reaches the peak of the
curve. This demonstrates that the nail connections in models A and C can resist the
applied axial loads, whereas model B would fail. This behavior is expected since model B was not designed to resist all the lateral loads using the wood shear walls alone.
(Force (kN) versus displacement (m) curves, 0-0.012 m range, with the maximum values reached in model A and model C marked.)
Figure 9 displays the maximum obtained shear force versus displacement curve for
all the frame-to-frame connections in the residential building. From this graph, models A, B, and C are all within the acceptable range, with model A being the most conservative and model B the closest to the peak. Although model B is closer to
the peak of the curve in Fig. 9, it is still capable of withstanding the maximum shear
force in the frame-to-frame nail connections.
Figure 10 displays the shear force versus displacement curve of the frame-to-
sheathing nail connections. The absolute maximum shear forces in the nail connec-
tions are outlined for each model in the figure. Similar to the behavior of the axial force
(Force (kN) versus displacement (m) curve, 0-0.025 m range, with the maximum value for model A marked.)
in the frame-to-frame nails, the shear forces in the frame-to-sheathing nail connections are within the acceptable range for models A and C, while model B reaches the peak of the curve. Therefore, the frame-to-sheathing nail connections in model B are not able to withstand the applied shear force. This is also expected because, as stated previously, model B was not designed to resist all the lateral loads using the wood shear walls alone.
The maximum axial and shear stresses in the sheathings are presented in Table 2,
where σ a is the maximum axial stress in the sheathings and σ s is the maximum
shear stress in the sheathings. Comparing these with the maximum tensile strength
of 5.5 MPa, maximum compressive strength of 5.3 MPa, and maximum shear strength
of 1.5 MPa obtained from Table 6.3.1A of the Canadian Wood Council (CWC) Wood
Design Manual, the OSB sheathings in all three models are safe in the axial direction.
Model A is safe in shear while models B and C fail. The highest stresses are found
in model B compared with model A and model C. This is anticipated because model
B resists the applied lateral load only with the help of the specified shear walls.
The force to strength ratios of all the models are displayed in Table 3. According to Eq. (1), from Sect. 5.1 of the CWC Wood Design Manual, the force to strength ratio for wood walls cannot exceed 1. Model A displays a force to strength
ratio for the studs, parallel to the grain of 0.13. This means that the studs are very
safe. The walls in this initial model seem to all pass the design criteria. After some
more investigation, it was concluded that the reinforced concrete cores in the ETABs
model are attracting most of the lateral loads due to their higher stiffness. This caused
a small amount of lateral load to be transmitted to the links, therefore, resulting in a
very low force to strength ratio. Model B has a force to strength ratio for the studs,
parallel to the grain, of 1.12 (Table 3). This means that the walls are not passing
the design criteria, which was also predicted because the seismic loads assigned are
the same as the ones assigned to model A. The maximum force to strength ratio
for the studs, parallel to the grain, for model C is 0.32. This ratio is greater than the ratio obtained from model A, which is expected since the structure has a total of 752 structural wood walls, all of which are utilized to resist both gravity and lateral loads even though they were not initially designed to do so. This model indicates that the four-story residential building can resist the total lateral loads solely using the wood shear walls.
(Pf/Pr)² + Mf/Mr ≤ 1    (1)

where Pf is the factored compressive axial load, Pr is the factored compressive load resistance parallel to the grain, Mf is the factored bending moment, and Mr is the factored bending moment resistance.
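As a small worked illustration, Eq. (1) can be coded directly as shown below; the input values are placeholders and are not results from the building studied here.

```python
# Direct coding of the Eq. (1) check; the input values are placeholders only.

def stud_interaction_ratio(Pf, Pr, Mf, Mr):
    """Return (Pf/Pr)^2 + Mf/Mr; the stud passes the check if the ratio is <= 1.0."""
    return (Pf / Pr) ** 2 + Mf / Mr

ratio = stud_interaction_ratio(Pf=12.0, Pr=60.0, Mf=0.4, Mr=1.6)  # kN, kN*m
print(f"force/strength ratio = {ratio:.2f}, {'OK' if ratio <= 1.0 else 'NG'}")
```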
Once the strength of the wood walls on Wood3D is checked, the next step requires
checking the total deflection and inter-story drift of each story against the NBCC
2015 limit under section (4.1.8.2.11). In Tables 4 and 5, the deflection and inter-
story drift of model A for each story are amplified by the RdRo values and compared
with the total height from ground level to the specified story divided by 40 (H/40) and the story height divided by 40 (h/40).

Table 5 Amplified inter-story drift of model A compared with code limits
Story level | Inter-story drift × RdRo = (1.5)(1.3) (mm) | h/40 (mm)
1 | 4.76 | 75.00
2 | 7.84 | 75.00
3 | 8.68 | 75.00
4 | 8.28 | 75.00

The tables show that the structure successfully passes the criteria for both the total deflection and the inter-story drift. This is expected since this is an existing building that has already been designed and constructed; therefore, it must pass all the serviceability limit state criteria.
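The drift check itself reduces to a one-line comparison per story, sketched below with the same RdRo product used in Table 5; the elastic drifts and story heights are placeholders.

```python
# Sketch of the amplified inter-story drift check against the h/40 limit,
# using RdRo = 1.5 * 1.3 as in Table 5; the drift values are placeholders.

Rd_Ro = 1.5 * 1.3
story_heights_mm = [3000, 3000, 3000, 3000]
elastic_drifts_mm = [2.4, 4.0, 4.5, 4.2]   # from analysis, before amplification

for level, (h, d) in enumerate(zip(story_heights_mm, elastic_drifts_mm), start=1):
    amplified = d * Rd_Ro
    limit = h / 40.0
    status = "OK" if amplified <= limit else "exceeds limit"
    print(f"Story {level}: {amplified:.2f} mm vs {limit:.2f} mm -> {status}")
```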
Similar to model A, the deflection and inter-story drift of model B are analyzed as well. Tables 6 and 7 outline the total deflections and inter-story drifts at each story level of the four-story residential building. The data show that the deflections and drifts amplified by RdRo do not meet the code limits; both the deflection and the inter-story drift exceed the limits by a large margin.

Table 7 Amplified inter-story drift of model B compared to code limits
Story level | Inter-story drift × RdRo = (1.5)(1.3) (mm) | h/40 (mm)
1 | 197.94 | 75.00
2 | 129.60 | 75.00
3 | 100.98 | 75.00
4 | 64.72 | 75.00
Table 9 Amplified inter-story drift of model C compared with code limits
Story level | Inter-story drift × RdRo = (1.5)(1.3) (mm) | h/40 (mm)
1 | 145.00 | 75.00
2 | 101.05 | 75.00
3 | 79.16 | 75.00
4 | 49.80 | 75.00
Tables 8 and 9 outline the deflection and inter-story drift of model C against the code limits. Both tables show that the model does not meet either the total deflection or the inter-story drift criteria. This is again because the existing building requires the reinforced concrete cores in order to meet the serviceability limit state criteria. These cores are removed by substituting them with gravity-resisting elements at each story level, which resist only gravity loads; this shifts the rest of the lateral loads to be resisted by the specified wood shear walls only. It is concluded that assigning all the walls in the building as shear walls allows the building to meet a few of the ultimate limit state criteria but not the more stringent serviceability limit state criteria.
As previously stated, three models were generated for the four-story residential
building using different modeling assumptions. The performance of these three
models is summarized in Table 10. Model A, modeled exactly as the existing building,
passes all the design criteria. This includes the strength and serviceability criteria.
Considering that the absolute maximum forces and stresses in the LFWB's walls were extracted and analyzed in this study, it can be concluded that there is room to optimize the design of model A. The analysis performed using Wood3D shows that the
design of the wood shear walls in the existing building is over-conservative. Model
B, as summarized in Table 10, does not pass any of the strength or serviceability limit
state criteria. This is expected since the lateral-load resisting system in model B only
consists of the specified wood shear walls. The building was not initially designed to
resist all the lateral loads using the wood shear walls alone. Model C, which includes
transforming all the wood walls in the LFWB into shear walls, passes most of the
strength criteria but fails the serviceability limit state criteria. This concludes that
although the wood shear walls in model A have room for optimization, even after
adding more shear walls (model C), the LFWB still requires reinforced concrete
cores to be able to meet the serviceability limit state criteria. Steel tie rods are not
added to any of the shear walls in the models analyzed. Adding steel tie rods to model
A or C could enhance the serviceability limit state performance as well.
5 Conclusion
This paper’s main objective is to use the software Wood3D to analyze an existing
LFWB and use the results to assess the adequacy of the simplified analysis method,
currently used in the industry. The building considered in this study is a four-story
residential building that includes reinforced concrete cores in addition to wood shear
walls. The performance of all three models analyzed matched expectations. This study showed that Wood3D is easy to use due to its integration with ETABS, a software commonly used in the industry. If adopted in the industry, Wood3D can replace the hand calculations and tedious methods currently used to design wood shear walls. It would also produce a less conservative yet safe design and save time. Wood3D
analyzes the wood shear walls in LFWBs and presents the user with the following
information: the deflection, the inter-story drift, the axial load in the studs, maximum
shear force in the studs, the maximum and minimum moment in the studs, the
maximum axial and shear forces in the frame-to-frame nails, the maximum shear force in
the frame-to-sheathing nails, the maximum axial and shear stress in the sheathings,
the maximum force on the tie rods, and the force to strength ratio of the studs. The
software also calculates if the properties specified by the user are safe or not safe,
making it an effective method to check and design wood shear walls in LFWBs.
After the end of this study, the software Wood3D has been updated to include an
optimization scheme that is used to optimize the design of LFWBs.
Acknowledgements Financial support from Strik Baldinelli Moniz Ltd., London, Ontario, Canada,
and the funding from the Ontario Centre of Innovation is gratefully acknowledged.
References
Fadi Oudah
1 Introduction
In Canada, steel-reinforced concrete (RC) wharves are exposed to severe environments. Concrete degradation due to freeze–thaw damage and steel corrosion are the
primary deterioration mechanisms for RC wharves as evident in research and real-
life evaluations [9]. The time-dependent deterioration mechanisms of steel corrosion
and freeze–thaw damage impact the wharf safety at the ultimate limit state (ULS) and
the wharf functionality at the serviceability limit state (SLS). Evaluation of existing
structures is typically driven by the requirements of the ULS as opposed to SLS, since
the deflection and crack width are typically less of a concern as compared with the
structural safety. The structural safety at ULS is generally assessed using reliability
analysis to quantify the probability of loads exceeding the resistance and to calculate
the reliability index of the structure accordingly.
Structural reliability analysis can be time-dependent or time-independent. The
former accounts for the variations of loads and resistance during a studied service
life, while the latter discards possible variations in the load and resistance with time.
For RC wharves, the degradation in the resistance due to the freeze–thaw action and
the corrosion effect cannot be discarded, which requires the use of a time-dependent
reliability analysis to assess the structural safety at ULS and the service life of
wharves. Several studies employed time-dependent reliability analysis frameworks to
predict the change in the structural reliability of RC wharves [1–3, 9]. The scope of
the available studies varied based on the considered limit state and the performance
objective of the developed framework of analysis. Review of literature indicates the
following research gaps:
• Lack of robust framework of analysis: The robustness of the framework of anal-
ysis relates to the ability of the developed framework to realistically predict the
degradation of the structural resistance when exposed to a changing environment.
For RC wharves, this requires the use of robust degradation models that account
for the reduction in the concrete compressive strength due to freeze–thaw damage,
reduction in the reinforcing steel cross-sectional area, and changes in the rates of
degradation as a function of climate change.
• Lack of efforts to transfer available frameworks of analysis to real-life appli-
cations: For time-dependent reliability analysis to be frequently used by struc-
tural engineers, the analysis framework needs to be presented in an efficient
and user-friendly format without compromising the accuracy of the model. This
could be achieved by presenting the analysis framework in user-friendly charts or
standalone computer application with user-friendly interface.
The objectives of this research are to: (1) develop a robust time-dependent reli-
ability framework of analysis to predict the service life of steel-reinforced concrete decks in wharves; and (2) utilize the developed framework for calibrating sample user-
friendly service life evaluation charts for Halifax, Nova Scotia that can be readily
used by practicing engineers. The framework of analysis accounts for the combined
effects of corrosion and changes in the freeze–thaw degradation rate due to climate
change. The paper structure consists of presenting the limit state function (LSF) and
the framework of analysis, followed by describing the calibration and application of
the evaluation charts.
The limit state function (LSF) considered in this analysis corresponds to the ultimate
limit state (ULS) of an RC wharf slab subjected to bending as expressed in Eq. (1).
The factored applied moment, Mf (t), corresponds to the applied gravity dead and
live loads as expressed in Eq. (2). The factored moment of resistance, Mr (t), is
β1 = 0.97 − 0.0025[1 − ψFT,f(t)]fc ≥ 0.67    (6)
where, λD is the dead load factor, λL is the live load factor, MD is the moment
due to dead load, ML is the moment due to live load, As is the area of the tension
reinforcement, fy is the yield strength of the tension reinforcement, d is the effective depth of the section, c is the depth of the neutral axis, fc is the concrete compressive
strength, and b is the member width.
The framework of the reliability analysis consists of developing the load model, resis-
tance model, and presenting the solution scheme for the time-dependent reliability
analysis. Eighteen random variables were considered in the analysis. The definition,
distribution type, bias, coefficient of variation (COV), and the corresponding refer-
ences are summarized in Table 1, while the detailed description of the variables is
included in the following subsections.
Bias = −2.4713 × 10⁻⁵fc³ + 0.003174fc² − 0.135436fc + 3.064, MPa    (7)
Table 1 (continued)
Var. | Definition | Distribution | Bias | COV | Reference
xFT,N | Model uncertainty in predicting the total equivalent number of effective freeze–thaw damage cycles | Normal | 1 | 0.15 | Oudah [12]
xFT,NA | Model uncertainty in predicting the field-measured annual number of freeze–thaw cycles | Normal | 1 | 0.1733 | Oudah [12]
xFT,f | Model uncertainty in predicting damage to concrete strength | Normal | 1 | 0.0911 | Oudah [12]
xFT,Ec | Model uncertainty in predicting damage to concrete dynamic modulus of elasticity | Normal | 1 | 0.0593 | Oudah [12]
*Equation (7)
4 Load Model
The load model consists of dead load moment, MD , and live load moment, ML . The
bias and COV of MD were based on values used in calibrating the load factors of NBC
[10] with a normal distribution type. The bias and COV of ML follow an extreme
value distribution type for time t based on the model proposed by Zadeh and Nanni [13], as expressed in Eqs. (8) and (9).

Bias = 1 + 0.14 ln(t/50)    (8)

COV = 0.18/[1 + 0.14 ln(t/50)]    (9)
5 Resistance Model
The damage factor ψc,A is calculated as expressed in Eqs. (10)–(13), while the bias
and COV of the random variables are included in Table 1 [5].
α(t) = 0                                     t ≤ t1
α(t) = 2(t − t1)V1/∅                          t2 > t ≥ t1        (11)
α(t) = 2[(t2 − t1)V1 + (t − t2)V2]/∅          t ≥ t2

V = Jr/γst    (12)

Ic = 0.1096 Jr    (13)
where, α(t) is the time-dependent metal loss factor, t1 is the time for steel corrosion
to initiate, t2 is the time for the corrosion-induced crack to occur, V1 is the corrosion
rate of the steel rebar before the occurrence of corrosion crack (mm/yr), V2 is the
corrosion rate of the steel rebar after the occurrence of corrosion crack (mm/yr), ∅
is the diameter of the reinforcing bar, Ic is the corrosion current density, Jr is the
instantaneous corrosion rate, and γst is the density of steel (7.85 g/cm3 ).
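The piecewise metal loss factor of Eq. (11) translates directly into code, as sketched below; the initiation and cracking times, corrosion rates, and bar diameter are illustrative values, whereas in the framework t1 and t2 come from the chloride-based calculations of Akiyama et al. [3].

```python
# Time-dependent metal loss factor alpha(t) of Eq. (11); inputs are illustrative.

def metal_loss_factor(t, t1, t2, V1, V2, bar_dia_mm):
    """alpha(t): fraction of bar diameter lost to corrosion at time t (years)."""
    if t <= t1:
        return 0.0
    if t < t2:
        return 2.0 * (t - t1) * V1 / bar_dia_mm
    return 2.0 * ((t2 - t1) * V1 + (t - t2) * V2) / bar_dia_mm

# Example: corrosion initiates at 15 years, corrosion-induced cracking at 25
# years, rates of 0.01 and 0.03 mm/yr before/after cracking, 25.2 mm bar.
print(metal_loss_factor(40.0, 15.0, 25.0, 0.01, 0.03, 25.2))
```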
The value of Ic can range from 0.5 to 2 µA/cm2 based on the corrosion activity
[4]. The calculation procedure for t1 and t2 is based on corrosion measurements in
RC wharves. The reader is referred to Akiyama et al. [3] for the detailed steps utilized
in calculating t1 and t2 based on the airborne chloride content and the wharf position
with respect to the sea level. The predictions of t1 and t2 considered model uncertainty
in predicting the coefficient of diffusion of chloride, xDc and model uncertainty in
predicting the amount of corrosion product, xQcr .
The damage factor ψc,B is calculated as expressed in Eqs. (14) and (15), while the
bias and COV of the random variables are included in Table 1 [12].
where xC,b is a random variable to account for model uncertainty in predicting ψc,B .
The damage factor ψFT,f is calculated as expressed in Eqs. (16)–(23), while the bias
and COV of the random variables are included in Table 1 [12].
ψFT,f(t) = xFT,f[−0.9833ψFT,Ec(t)² + 1.871ψFT,Ec(t)]    (16)

ψFT,Ec = 1 − xFT,Ec a[(1 − ψFT,Ed)Ed]^b/Ec    (17)

ψFT,Ed = 5 ln[(NT/xFT,N + 1356)/1315]/7    (18)

fc = η(Ec)³ = η(0.33Ed^1.24)³    (19)

Ec = 30234(fc/60)^(1/3)    (20)

Ed = (Ec/a)^(1/b)    (21)
Crude Monte Carlo (MC) simulation was utilized to conduct the time-dependent reli-
ability analysis. The time-dependent probability of exceedance, Pf (t), as expressed
in Eq. (24), equals the integration of the joint probability density function of the
vector X of the random variables as a function of time, f X (t) (x(t)) and based on
the G(X (t)) performance function describing the ULS of the wharf deck [refer to
Eq. (1)].
where, Fi is the number of failure points in year i, Si is the number of survival points
in year i, and Φ(·) is the cumulative distribution function of the standardized normal distribution.
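A crude Monte Carlo loop of this kind can be sketched as follows; the limit state below uses only a small, hypothetical subset of the variables (dead and live load moments and a lognormally distributed resistance with an assumed degradation), so it illustrates the Pf(t) and β(t) bookkeeping rather than the full eighteen-variable framework.

```python
# Toy crude Monte Carlo sketch of the time-dependent reliability bookkeeping:
# Pf(i) = Fi/(Fi + Si) and beta(i) = -Phi^-1(Pf(i)). Only a reduced,
# hypothetical limit state is used here, not the full 18-variable model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000                                   # sample points per year
MD = rng.normal(55.0, 0.10 * 55.0, n)         # dead load moment, kN*m (assumed)
ML = rng.gumbel(loc=80.0, scale=8.0, size=n)  # live load moment, kN*m (assumed)
MR0 = rng.lognormal(mean=np.log(230.0), sigma=0.12, size=n)  # initial resistance

beta = []
for t in range(1, 76):                               # years 1..75
    decay = max(0.0, 1.0 - 0.004 * max(t - 15, 0))   # assumed degradation of Mr(t)
    g = MR0 * decay - (MD + ML)                      # performance function G(X(t))
    failures = np.count_nonzero(g <= 0.0)            # Fi; survivals Si = n - Fi
    pf = max(failures, 1) / n                        # avoid pf = 0 -> beta = inf
    beta.append(-norm.ppf(pf))

print(f"reliability index at year 50: {beta[49]:.2f}")
```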
The framework of analysis presented in Sect. 3 was applied to generate sample user-
friendly time-dependent reliability-based evaluation charts for RC wharves with fc
of 35 MPa and subjected to moderate and high corrosion activity and freeze–thaw
damage in Halifax, Nova Scotia, Canada. The sample evaluation charts are shown in
Fig. 1. The input parameters to the charts are rebar diameter, Φ (10, 20, and 30 M),
reinforcement ratio, ρ (1 and 2%), corrosion activity (moderate vs. high), year of
construction, T0 (1980, 2000, and 2020), depth of the concrete cover, dc (40 and
80 mm) and the section demand-to-capacity ratio (0.7–1.2). The output parameter
is the time-dependent reliability-based prediction of the wharf deck service life that
meets a target annual reliability index of 3.0 based on Oudah [12]. The demand-to-
capacity ratio is calculated based on NBC [10] and CSA A23.3 [8] predictions of Mf
and Mr , respectively. The moderate and high corrosion activities refer to corrosion
rates of 1 µA/cm2 and 2 µA/cm2 , respectively [4]. The year of construction input
parameter, T0 , reflects the impact of climate change on the mean of annual number
of freeze–thaw cycles in Halifax. Weather forecast by Climate Atlas of Canada
[7] indicates a gradual decrease in the number of cycles with time. The following
expression was calibrated to calculate the mean value of NA as a function of T0 :
The evaluation charts indicate the decrease in the service life as a function of the
following: increase in demand-to-capacity ratio (i.e., over-utilized sections), decrease
in the rebar diameter (i.e., a smaller rebar diameter is more susceptible to cross-section
Fig. 1 Sample family of time-dependent reliability-based evaluation charts for wharf decks in Halifax, Nova Scotia, with fc of 35 MPa
reduction due to corrosion when compared with larger ones), increase in the total
number of subjected freeze–thaw cycles, increase in the cross-section reinforcement
ratio, and the decrease in the concrete cover thickness. The projected decrease in the
number of freeze–thaw cycles with time due to climate change often yields a marginal
increase in the service life as evident from the evaluation charts. For example, the
service life of an RC wharf with fc of 35 MPa, ρ of 1%, Φ of 30 M, dc of 40 mm,
and a demand-to-capacity ratio of 1.0, subjected to a moderate corrosion activity
and built in 1980 is five years less than an identical wharf built in 2020 as depicted
from Chart 1 of Fig. 1. The proposed format of the evaluation charts is user-friendly,
since it is presented in terms of information that can be readily determined by the
structural engineer who is assessing the service life of the wharf. Similar charts can
be developed for RC wharves of a different importance category, corrosion activity,
number of freeze–thaw cycles, design code, and geographical location.
The following is a list of the chart limitations:
• Drainage, if it exists, is considered clogged.
• The concrete degradation models for the freeze–thaw damage are based on
normal concrete, where E d is determined using a transverse fundamental resonant
frequency test.
• The effective number of freeze–thaw cycles causing concrete damage is 1/6.5 of the total number of freeze–thaw cycles recorded by a weather station. Lower ratios were recorded in other parts of the world (1/85 in inner Nova Scotia). The value of 1/6.5 is on the conservative side.
• Corrosion degradation models are based on uncoated steel reinforcement and do
not account for the possible change in the mean degradation rates with time.
10 Application Example
Problem Statement. Determine the service life of a steel RC wharf deck in Halifax,
Nova Scotia, Canada, constructed in 1990. The concrete deck is of normal impor-
tance. The unfactored moments due to dead and live loads are 55 kN m and 80 kN m,
respectively. The deck has the following properties: fc = 35 MPa; dc = 40 mm; h
= 250 mm; 25 M bars spaced at 120 mm.
Solution. The solution consists of the following steps:
Step 1. Calculate the reinforcement ratio. ρs = As/bd = 500 mm² × (1000/120 mm)/(1000 mm × 250 mm) × 100 = 1.67%.
Step 2. Determine the corrosion activity. In the absence of field tests of the corrosion intensity factor Ic, visual inspection can be used to determine the corrosion activity level (moderate vs. high). Moderate corrosion activity was considered in this example.
Step 3. Calculate the demand-to-capacity ratio. It is assumed that the load combi-
nation used in calculating the moment demand is 1.25D + 1.5L [10], while the flex-
ural resistance is calculated using CSA A23.3 [8]. The design demand-to-capacity
ratio of the section is therefore 0.84 (189/224 kN m).
Step 4. Interpolate linearly between T0 of 1980 and 2000 in Chart 1. The service life equals 46 years.
Step 5. Interpolate linearly between T0 of 1980 and 2000 in Chart 2. The service life equals 44 years.
Step 6. Interpolate linearly using the service lives from Steps 4 and 5. The service life equals 45 years.
Therefore, the service life of the considered wharf is 45 years. The margin of error
is approximately 2% when compared with a rigorous crude MC simulation of the
considered wharf.
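The double interpolation of Steps 4–6 can be written compactly as below; the chart readings of 46 and 44 years are the ones quoted in the example, and treating the two charts as corresponding to ρ of 1% and 2% is an assumption made here for illustration.

```python
# Linear interpolation between chart readings, as in Steps 4-6 of the example.
# The two service lives (46 and 44 years) are the chart readings quoted above;
# treating Charts 1 and 2 as the rho = 1% and rho = 2% charts is an assumption.

def lerp(x, x0, x1, y0, y1):
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

life_chart1 = 46.0   # Step 4: interpolated between T0 = 1980 and 2000 for 1990
life_chart2 = 44.0   # Step 5: same interpolation on the second chart
rho = 1.67           # reinforcement ratio of the section, %

service_life = lerp(rho, 1.0, 2.0, life_chart1, life_chart2)
print(f"service life ~ {service_life:.0f} years")   # ~45 years, as in Step 6
```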
11 Conclusion
Acknowledgements Dalhousie University, Ocean Frontier Institute (OFI), and the Structural
Assessment and Retrofit (SAR) research group are acknowledged.
References
1. Akiyama M, Frangopol DM, Suzuki M (2012) Integration of the effects of airborne chlo-
rides into reliability-based durability design of reinforced concrete structures in a marine
environment. Struct Infrastruct Eng 8:125–134
2. Akiyama M, Frangopol DM, Takenaka K (2017) Reliability-based durability design and service
life assessment of reinforced concrete deck slab of jetty structures. Struct Infrastruct Eng
13:468–477
3. Akiyama M, Frangopol DM, Yoshida I (2010) Time-dependent reliability analysis of existing
RC structures in a marine environment using hazard associated with airborne chlorides. Eng
Struct 32:3768–3779
4. Andrade C, Alonso C (1996) Corrosion rate monitoring in the laboratory and on-site. Constr
Build Mater 10:315–328
Abstract Over the years, the construction of mechanically stabilized earth (MSE)
structures has been a practical solution in infrastructure projects; the simplicity in
installation and economical benefits are well accepted nowadays. Designing and
construction of a structure founded on a compressible soil is one of the major chal-
lenges in retaining wall projects; MSE walls are no exception [1]. In cases where large settlements are expected, special considerations shall be taken into account in the design, site preparation, and construction in order to guarantee sound performance [2]. The MSE walls of the Vergel Bridge in Northern Mexico were founded on highly compressible ground. The ground improvement solution consisted of a series of pressure grout injections. In addition to the poor ground conditions, an out-of-spec backfill was used as the reinforced fill in the MSE walls of this project. To overcome this problem, a new supporting retaining wall was constructed in front of the existing wall to stabilize it. Another challenge in this project was the construction schedule, since the work had to be carried out without shutting down traffic. This paper summa-
rizes and discusses the design approach and construction process in repairing this
structure.
1 Background
The Vergel Bridge was originally built in 2014 in Gomez Palacio, Durango, Mexico,
as part of the highway “Segundo Periférico” that helps the traffic flow between
Northern Mexico and United States (Fig. 1).
A couple of years after being in service, deformations in the wall facing and asphalt
pavement over the existing MSE wall were observed. Consequently, a geotechnical
study was performed to characterize the backfill and foundation soil. The study determined that the poor behaviour of the MSE wall may be the result of many factors, but mostly the characteristics of the underlying soil and the poor quality of the backfill. It was also mentioned that the long-term settlement due to consolidation at the base of the MSE wall was not complete, which could generate additional settlement in the future and
put the stability of the structure at risk [3].
In 2020, the design and implementation of the ground improvement, and
construction of an MSE wall as a lateral support for the distressed MSE wall started.
The MSE wall requires a foundation that can support the applied pressure. If the
bearing pressure exceeds the bearing capacity of the foundation, the wall will be
distressed. Over the past decades, some structures have been founded on a poor
ground without any consideration in the MSE wall design, resulting in excessive
settlement and foundation failure. This could be due to various reasons, but mainly
it is because of the absence of a proper geotechnical study [4].
As a part of the geotechnical exploration in this project, four boreholes were investigated, with depths of 20.0, 15.0, 6.8, and 15.0 m, from which disturbed and undisturbed samples were extracted using Shelby tubes. See Fig. 2.
According to the laboratory test results on the soil samples, the backfill of the existing
MSE mass was heterogeneous, formed mainly of intercalated strata of clayey gravels with sand, sandy clay, and clayey sand with gravel. This type of backfill is not suitable
for an MSE wall [3]. The amount passing the 0.075 mm (No. 200) sieve was above
15% and the plasticity index greater than 6%, which are above the permissible values
established for MSE walls based on the minimum requirements of the design and
supply company to satisfy the internal stability of the structure [5]. See Fig. 3.
The result of the mechanical parameters of backfill material was:
• Bulk unit weight: 19 kN/m3 .
• Internal friction angle: 2°–18°.
• Cohesion: 75–95 kPa [3].
Fig. 3 Selected backfill requirements:
Selected backfill requirements | Test method (AASHTO) | Test method (ASTM)
Mechanical properties
Angle of internal friction: Ø ≥ 34° | T236* | D3080
Index properties
Particle size ≤ 102 mm (4 in) | T27 | D422
Amount passing the 0.075 mm (No. 200) sieve ≤ 15% | T27 | D422
Uniformity coefficient (Cu) ≥ 2 | T27 | D422
Liquid limit ≤ 30% | T90 | D4318
Plasticity index ≤ 6 | T90 | D4318
Organic material < 2% | None | D2974
Electrochemical properties
pH between 5 and 10 | T 289-91 I | G51
Resistivity (at 100% saturation) > 3000 ohm-cm | T 289-91 I | G57
Water soluble chloride content < 100 ppm | T 291-91 I | D512
Water soluble sulfate content < 200 ppm | T290-91 I | D516
1.1.3 Settlement
Based on the test results, the total consolidation settlement of the foundation soil was calculated to be 22.5 cm. At the time of this study, approximately 80% of the settlement had been completed. Therefore, it is estimated that the wall will undergo an additional settlement of 4.50 cm in the following years. See Table 1 and Fig. 4
[3].
(Fig. 4: settlement (m) versus time (years) curve.)
On the other hand, a collapse potential of 3.25% was determined by the consultant.
Considering the thickness of compressible stratum and according to the analysis
carried out, the foundation soil presents a moderate collapse problem, which can
generate additional settlement and undermine the stability of the structure [3].
2 Design Solution
To stabilize the ground and prevent further settlement, the approved solution consisted of grout injections under the existing MSE wall to consolidate its foundation and meet the project requirements. Inclined injections were systematically distributed under the existing MSE mass, and vertical injections were placed over an area extending from the existing wall face to 6.1 m outwards, allowing the improvement of the underlying soil properties. See Fig. 5 [7].
The design of ground improvement was carried out to modify the original mechanical
properties of the foundation soil through the injection of grout, by filling the voids
and providing the required bearing capacity to support the structure. The designer proposed a detailed procedure for the grout injections to avoid further stressing the soil under the existing wall.
The position, length, and inclination of the injections were determined according
to the findings of the project analysis. See Fig. 6.
Since the reinforced backfill was out of specification, the internal stability of the
wall was concluded to be compromised. Therefore, to cover and stabilize the most
affected side, a new MSE wall was designed and constructed in front of the existing
wall. The design of this MSE wall system has considered essential design features,
such as settlement, internal, external, and global stability. The facing panels in the
reinforced earth wall employed in this project are discrete with joints around each
panel. One of the advantages of MSE structures is their flexibility, which makes them capable of tolerating larger total and differential settlements than other solutions while maintaining a good appearance [8]. See Fig. 7.
The design parameters of the backfill material used in the new MSE wall were in accordance with AASHTO (Division II—Construction) 7.3.6, as follows:
• Internal friction angle of 34°
• Bulk unit weight, reinforced zone: 17.50–19.50 kN/m3
• Bulk unit weight, retained zone: 19.5 kN/m3
• < 15% passing 0.075 mm
• Plasticity index < 6 [7].
Because of the short distance between both wall facings, a connection mechanism
between the existing wall and the new MSE Wall was designed to help increase the
internal stability of the new structure. See Fig. 8.
3 Construction Challenge
The drilling procedure was in accordance with the project specifications. Once drilling was done, the prepared pipes were inserted into the holes to proceed with the injection method. During the drilling process, the operators were cautious to avoid affecting the
integrity of the existing structure and damaging underground utilities. See Fig. 9.
The pipes were injected in two stages:
Stage 1: Inclined pipes below the existing MSE wall and vertical pipes along the perimeter of the new MSE wall were injected in layers (see the injection procedure below), creating a barrier for the subsequent grout injection area (stage 2).
Stage 2: The remaining vertical pipes inside the new MSE wall perimeter were
injected, following the same procedure as in stage 1 [7].
The process begins with the placement of PVC pipes, with elastomeric valves along their length, inside the drilled holes. The second step is injecting the gap outside the pipes to make a seal that prevents the subsequent injection from travelling up this space instead of penetrating the ground. The third step is the insertion of the "injection line" inside each pipe. The injection proceeds from the bottom to the top of the pipes, so the end of the injection line must first reach the last valve before grout injection begins. The pumping pressure must be greater than the resistance of the elastomeric valve to allow the grout to flow into the layer of soil to be improved. Once the grout fills the voids of the predetermined soil area, the end of the injection line is moved to the next valve and the injection starts for the next layer. This process is repeated until the top of the foundation soil is reached [7]. See Fig. 10.
One of the main advantages of the reinforced earth wall system over other solutions is the quick installation procedure, which allows tighter construction schedules to be met. In this particular case, while the foundation soil improvement was progressing, the facing panels were being produced, allowing the installation of the MSE wall immediately after the soil improvement was completed. The use of this system allowed the contractor to complete the project on schedule (Fig. 11).
The MSE wall system is well known and accepted as an engineering solution by owners and contractors. Like many other solutions, MSE structures have limitations and design challenges; when the structure is founded mainly on compressible soil, special attention should be paid to its settlement, and proper considerations should be implemented in the design to avoid future performance issues.
Grout injections under an existing MSE structure can be a solution to improve the underlying soil properties when the wall undergoes growing settlement and the reconstruction of the structure is not an option. Nonetheless, the suitability of the foundation to carry the applied bearing pressure under the MSE wall and the overall global stability must always be reviewed in the design prior to construction.
Acknowledgements The authors would like to acknowledge all involved in the Vergel Bridge Project
related to reinforced earth walls design and construction: Tierra Armada S.A. de C.V. (Mexico),
Euro Estudios S.A. de C.V., Prefamovil S.A. de C.V. and Constructora Mursa S.A. de C.V., SCT
Durango.
References
1. Ramirez A, Valero F, Perez C (2008) Walls over compressible soils and unstable slopes. Exam-
ples. In: Miyata and Mukunoki (eds) New horizons in earth reinforcement—Otani. Taylor &
Francis Group, London
2. Mirmirani S, Brockbank W (2012) Application of MSE walls on poor foundations. Geo Manitoba
Building on the Past
3. Euro Estudios SA de CV (2019) Estudio de Mecanica de Suelos “MME Gomez Palacios”
4. Schodts PA (1990) Design and behavior of reinforced earth soil structures on soft foundations.
British Geotechnical Society
5. The Reinforced Earth Company, (RECo) (2011) Construction and quality control procedures
manual
6. American Association of State Highway & Transportation Officials (AASHTO) (2017)
AASHTO LRFD bridge design specifications, 8th edn. Washington, DC, USA
7. Tierra Armada SA de CV (2020) PSV Aduana Km. 11+270.00 Mejoramiento de Suelos en Muro
Ampliación Etapa 2
8. Wu P, Mackenzie RK (2001) Reinforced earth, a soil reinforcement technology's contribution to Canadian Civil Engineering. In: 29th annual conference
Structural Damage Detection of Steel
Corrugated Panels Using Computer
Vision and Deep Learning
Xiao Pan, Soham Vaze, Yifei Xiao, Sina Tavasoli, and T. Y. Yang
Abstract In recent years, steel corrugated panels have been introduced in Canada
to construct long-span and low-rise frameless systems. However, such systems are
prone to have critical structural damages such as buckling under extreme loading
conditions such as earthquakes. Identification of these damages is crucial for owners
to make informed decisions. In recent years, computer vision methods have been
successfully developed and applied in structural visual damage detection for different
types of structures including concrete, steel, masonry and timber structures, ranging
from regional-scale post-disaster collapse identification, to localized applications
such as metal surface defects detection, joint damage detection, concrete crack and
spalling detection. However, there are almost no attempts in vision-based detec-
tion of buckling damage. In this paper, a hierarchical buckling damage detection
framework has been proposed for steel structures, which consists of system-level
buckling identification and component-level buckling localization. First, global buck-
ling identification is performed on the image of the panels using the convolutional
neural networks (CNN)-based classification algorithms. If the panel is identified as
buckled, then the YOLOv3-tiny object detection algorithm is applied to localize the
damaged area. Extensive monotonic and cyclic laboratory tests have been conducted
on the steel corrugated panels, where image and video data are collected for training,
validation, and testing of the CNN algorithms. Results indicate that the CNN-based
vision methods can achieve high accuracy in detecting and localizing the buckling
damage for the steel corrugated panels. Moreover, additional discussion about further
investigations of these steel panels is also presented.
1 Introduction
Steel structures are widely constructed worldwide. Primary failure modes in steel
members due to tensile, compressive, flexural and torsional loads have been exten-
sively investigated over the past decades. One of the complex failure criteria, shear
buckling, is observed in hot-rolled and cold-form steel thin-walled elements. Past
investigations have collectively identified three main forms of shear buckling: local,
global, and interactive. Comprehensive studies have been performed to provide
analytical solutions to characterize and quantify different buckling modes, and
provide design recommendations [5, 6, 9, 10, 12, 35, 37, 40, 55]. With the growing
need for faster manufacturing and economical construction, complemented by better
computational methods, investigations of lightweight thin-walled steel components
have gained traction in recent decades. Thus, it has become more crucial than ever
before to dive deeper into studying the buckling behavior of these components and
quantify their performance under various loading conditions.
Elements of conventional hot-rolled steel sections are replaced with thin-walled
components to reduce weight. Local buckling in these components has been exten-
sively observed as a crucial failure mode [43]. Shear buckling has also been observed
in steel plated dampers developed to retrofit structures in high seismic activity zones, owing to the high ductility of steel [44–53]. Thin steel plate shear walls are
presented as an alternative solution to reinforced concrete counterparts as primary
lateral load resisting elements in the structure. A combination of different buckling
modes such as local, global and interactive buckling is observed in these steel plates.
Numerous investigations have been carried out to accommodate such shear walls in
different structures. One of the main drawbacks of thin steel plates is out-of-plane
instability which is compensated by adding either stiffeners or corrugations. Past
studies have extended this idea to replace flat webs in girders with corrugated steel
elements, thus eliminating the need for transverse stiffeners [7, 8, 40].
Cold-form steel corrugated panels, with their lightweight nature, economical
construction, higher initial stiffness and out-of-plane stability than flat panels, have
been proposed as a more efficient version of steel shear walls. Earlier studies provide
details about failure modes and design recommendations to facilitate the efficient
application of these elements [1, 3, 4, 13, 14, 45, 46, 56]. Considering that adding
corrugations offers additional surface area in the axial direction combined with the
previously mentioned advantages, a recent study has proposed using these wall panels
as primary load-bearing elements while resisting seismic forces in low-rise long-span
structures [47]. It has been shown that these wall panels are prone to buckle under
combined axial and shear monotonic and cyclic loadings. Therefore, it is important
to identify the buckling damages of the corrugated panels which may be caused by
extreme natural hazards such as earthquakes. This will allow the building owners or
decision makers to apply necessary repair or replacement actions.
Traditionally, manual site inspection is usually conducted to identify structural
damages. However, manual inspection is time-consuming and inherently biased,
where the accuracy depends on the skillset of the inspectors. In the past few decades,
research about structural damage detection using different types of sensors has been
developed to enhance the efficiency in structural health inspection [36, 38, 48]. These
contact-type sensor-based methods identify structural damages using natural frequen-
cies and mode shapes. Several limitations can be observed when using contact
sensors. Many contact sensors are relatively sensitive to external environmental
conditions, such as temperature and humidity [21, 49]. Dedicated experts are typi-
cally required to install and calibrate the sensors with the use of professional software packages to account for environmental effects [18, 19].
In the past decade, there have been great advancements of computer vision tech-
niques, since the breakthrough achieved by AlexNet [20] which is built on the archi-
tecture known as deep convolutional neural networks (CNNs), a specific type of
deep neural networks (or deep learning). Subsequently, more CNN-based architectures with different types of substructures have been proposed, such as VGG [41], GoogleNet [42], ResNet [16], DenseNet [17], and MobileNet [39] for image
classification tasks.
In recent years, various types of deep neural networks have been applied in many different structural engineering applications, such as structural design [24], structural dynamic response analysis [28, 29] of nonlinear shear frames and rocking structures [25], and vision-based structural damage detection [2, 15, 22, 23, 26, 27, 30–34, 44, 50, 51]. Although the effectiveness of CNN-based vision methods has
been demonstrated in detecting various types of structural surface damages, there
remain almost no attempts at detecting structural steel buckling damage using
CNNs. In this paper, a hierarchical identification approach is used. First, a global
buckling identification is performed on the image of the steel corrugated panels using
the CNN-based classification algorithms. If the panel is identified as buckled, then an
object detection algorithm built on YOLOv3 is applied to localize the damaged areas.
Extensive monotonic and cyclic laboratory tests have been conducted on the steel
corrugated panels, where image and video data are collected for training, validation,
and testing of the CNN algorithms. Preliminary results indicate that the CNN-based
vision methods can achieve high accuracy at real-time speed in identifying and
localizing the buckling damage for the steel corrugated panels. Moreover, the results
provide a solid foundation for 3D vision-based buckling detection and quantification,
which remains as a further ongoing study.
2 Methodology
At the very end of the networks, global average pooling (GAP) is performed to reduce the output dimensions from the previous layers, which is then followed by a fully connected (FC) layer with a softmax function.
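Although the classifier in this study was implemented in MATLAB, the same GAP followed by an FC layer and softmax head can be sketched in a few lines of Python with PyTorch, as below; the two-class output (buckled versus not buckled) and the untrained backbone are illustrative choices, not the trained model used here.

```python
# Sketch (PyTorch rather than the MATLAB used in the study) of a ResNet-50
# classifier whose head is global average pooling followed by a fully
# connected layer and softmax, for two classes: buckled / not buckled.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=None)                  # ResNet-50 feature extractor
backbone.fc = nn.Linear(backbone.fc.in_features, 2)       # 2-class FC head (GAP is built in)

def classify(images: torch.Tensor) -> torch.Tensor:
    """Return per-class probabilities for a batch of 224x224 RGB images."""
    logits = backbone(images)
    return torch.softmax(logits, dim=1)                   # softmax over the two classes

probs = classify(torch.randn(4, 3, 224, 224))             # dummy batch to show shapes
print(probs.shape)                                        # torch.Size([4, 2])
```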
The training and testing images for ResNet-50 were collected in the structural laboratory at the University of British Columbia (UBC). The 700 collected images were augmented using standard data augmentation techniques, such as random cropping, horizontal flipping, small translations, rotation, and small scaling, to improve model robustness and reduce overfitting. In the end, 3500 augmented images were generated, of which 80% were selected as the training set and 20% as the testing set. Within the training set, 80% of the data were used to train the model and the remaining 20% for validation. Therefore, 3500 × 0.8 × 0.8 = 2240, 3500 × 0.8 × 0.2 = 560, and 3500 × 0.2 = 700 images were allocated for training, validation, and testing, respectively. Table 1 shows the class distribution of the images before and after data augmentation, indicating no class imbalance. Training of ResNet-50 was done by back-propagation and stochastic gradient descent (SGD) with momentum, where the values of the training options are reported in Table 2. Training of ResNet-50 was implemented in MATLAB R2021a on an Alienware Aurora R8 (Core i7-9700K @ 3.60 GHz, 16 GB DDR4 memory, and a GeForce RTX 2070 GPU with 8 GB memory).
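The specific training-option values are those reported in Table 2; the snippet below only illustrates, in PyTorch form, how an SGD-with-momentum training step of the kind described is set up. All numeric values shown are placeholders, not the paper's settings.

```python
# Sketch of SGD with momentum as described above. The learning rate, momentum,
# and batch size below are placeholders, not the values reported in Table 2.
import torch

model = torch.nn.Linear(2048, 2)          # stand-in for the classification head
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-3,          # initial learning rate (placeholder)
    momentum=0.9,     # momentum term of SGD (placeholder)
)
loss_fn = torch.nn.CrossEntropyLoss()     # cross-entropy with an implicit softmax

# One illustrative training step on dummy data:
features = torch.randn(8, 2048)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(features), labels)   # forward pass and loss
loss.backward()                           # back-propagation
optimizer.step()                          # parameter update
```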
Once a panel is classified as buckled, an object detection algorithm using YOLOv3-tiny is applied. Compared to region-based methods such as R-CNN, Fast R-CNN, and Faster R-CNN, the advantage of YOLO is that it runs at very high speed while still achieving similar precision in small-scale applications (i.e., when the number of classes being detected is relatively small). In general, the YOLOv3 network starts with a feature extractor, which can be built upon existing CNN-based image classifiers such as ResNet or DarkNet-53. A detection head is then applied to the feature map, the output of the feature extractor, to allow the network to detect objects at various scales. In this study, the tiny version of YOLOv3 is adopted, as shown in Fig. 3. More details of the selected YOLOv3-tiny are described in Pan and Yang [27].
The images collected at the UBC structural laboratory were manually labeled to create training and testing sets for YOLOv3-tiny. Similar to Sect. 2.1.2, standard data augmentation techniques were applied. In the end, 1050 images were generated, of which 80% were used for training and the rest for testing of the YOLOv3-tiny network. Similar to ResNet-50, training of YOLOv3-tiny was done by back-propagation and stochastic gradient descent (SGD) with momentum, with the training options presented in Table 3. Training of YOLOv3-tiny was implemented in MATLAB R2021a using the same hardware configuration described in Sect. 2.1.2.
Extensive monotonic and cyclic quasi-static pushover tests have been conducted
where image and video data were collected for training, testing, and validation of the
proposed method. Figure 4 shows the sample images collected for both undamaged
and damaged panels.
This subsection describes the results regarding the localization of local buckling estimated by YOLOv3-tiny. The performance of an object detector can be evaluated by the precision–recall curve and the mean average precision (mAP), which is the enclosed area under the precision–recall curve [11]. In this study, for each image, the predicted boxes and the ground truth boxes are first aggregated separately by taking the union of all boxes in each set. The Intersection over Union (IoU) between the prediction and the ground truth is then taken as the IoU between the aggregated predicted area and the aggregated ground truth area. Figure 7 shows the precision–recall curves for training and testing, which demonstrate the effectiveness of the YOLOv3-tiny detector. Under this IoU definition, the mAP is estimated at about 95% during training and 87% during testing, which indicates some degree of overfitting. Additional data with more variety should be collected to improve the results. Figure 8 presents sample prediction results; YOLOv3-tiny localizes most of the buckling regions in all the sample images.
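As a concrete illustration of the aggregated IoU described above (the union of all predicted boxes compared against the union of all ground-truth boxes in an image), the sketch below rasterizes the boxes onto a pixel grid. This is one simple way to evaluate the union areas, offered as an assumption; it is not necessarily the implementation used by the authors.

```python
# Sketch of the aggregated IoU described above: all predicted boxes in an image
# are merged into one region, all ground-truth boxes into another, and the IoU
# of the two merged regions is computed. Rasterizing onto a boolean mask is just
# one simple way to measure the union areas.
import numpy as np

def aggregated_iou(pred_boxes, gt_boxes, height, width):
    """Boxes are (x1, y1, x2, y2) in pixel coordinates."""
    def union_mask(boxes):
        mask = np.zeros((height, width), dtype=bool)
        for x1, y1, x2, y2 in boxes:
            mask[int(y1):int(y2), int(x1):int(x2)] = True
        return mask

    pred_mask = union_mask(pred_boxes)
    gt_mask = union_mask(gt_boxes)
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else 0.0

# Hypothetical example: two predicted buckling boxes vs. one ground-truth box.
print(aggregated_iou([(10, 10, 60, 60), (50, 50, 90, 90)],
                     [(15, 15, 85, 85)], height=100, width=100))
```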
4 Further Studies
Experimental tests are actively being conducted at the time of writing. The results presented in this paper are for single-panel specimens only; research on three-panel and six-panel specimens is currently underway.
5 Conclusions
Steel corrugated panels are prone to buckling under monotonic or cyclic loading. In this paper, a simple hierarchical approach has been proposed for buckling detection of steel structures, consisting of system-level buckling identification and component-level buckling localization. First, global buckling identification is performed on images of the panels using convolutional neural network (CNN)-based classification algorithms. If the panel is identified as buckled, the YOLOv3-tiny object detection algorithm is applied to localize the damaged areas. The results show that the CNN-based classification and object detection algorithms achieve relatively high accuracy in both training and testing, although there is room to improve the buckling localization precision as more experimental data are collected in the future. On the other hand, the proposed framework can only approximately identify the location of the buckling. Further study is required to quantify the exact out-of-plane displacements due to buckling, which is of prime interest during buckling and post-buckling analysis of steel plate structures. For this purpose, the paper briefly summarizes a list of ongoing investigations.
Acknowledgements The authors would like to acknowledge the funding provided by BEHLEN Industries, the Natural Sciences and Engineering Research Council of Canada (NSERC), the International Joint Research Laboratory of Earthquake Engineering (ILEE), the National Natural Science Foundation of China (grant number 51778486), and the China Scholarship Council. The authors would also like to thank Mr. Pat Versavel and Mr. Trevor Veitch from BEHLEN Industries for their assistance and collaboration in corrugated panel testing. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors.
References
15. Gao Y, Mosalam KM (2018) Deep transfer learning for image-based structural damage
recognition. Comput-Aided Civ Infrastruct Eng 33(9):748–768
16. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceed-
ings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, pp
770–778
17. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional
networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition,
pp 4700–4708
18. Huynh TC, Kim JT (2017) Quantification of temperature effect on impedance monitoring via
PZT interface for prestressed tendon anchorage. Smart Mater Struct 26(12):125004
19. Huynh TC, Kim JT (2018) RBFN-based temperature compensation method for impedance
monitoring in prestressed tendon anchorage. Struct Control Health Monit 25(6):e2173
20. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional
neural networks. Adv Neural Inf Process Syst 25:1097–1105
21. Li J, Deng J, Xie W (2015) Damage detection with streamlined structural health monitoring
data. Sensors 15(4):8832–8851
22. Liang X (2019) Image-based post-disaster inspection of reinforced concrete bridge systems
using deep learning with Bayesian optimization. Comput-Aided Civ Infrastruct Eng 34(5):415–
430
23. Lu X, Xu Y, Tian Y, Cetiner B, Taciroglu E (2021) A deep learning approach to rapid regional
post-event seismic damage assessment using time-frequency distributions of ground motions.
Earthquake Eng Struct Dynam 50(6):1612–1627
24. Málaga-Chuquitaype C (2022) Machine learning in structural design: an opinionated review.
Front Built Environ 6. https://doi.org/10.3389/fbuil.2022.815717
25. Pan X, Málaga-Chuquitaype C (2020) Seismic control of rocking structures via external
resonators. Earthquake Eng Struct Dynam 49(12):1180–1196. https://doi.org/10.1002/eqe.
3284
26. Pan X, Yang TY (2020) Postdisaster imaged-based damage detection and repair cost estimation
of reinforced concrete buildings using dual convolutional neural networks. Comput-Aided Civ
Infrastruct Eng 35:495–510. https://doi.org/10.1111/mice.12549
27. Pan X, Yang TY (2021) Image-based monitoring of bolt loosening through deep-learning-based
integrated detection and tracking. Comput-Aided Civ Infrastruct Eng 1–16. https://doi.org/10.
1111/mice.12797
28. Pan X, Wen Z, Yang TY (2021b) Dynamic analysis of nonlinear civil engineering structures
using artificial neural network with adaptive training. Mach Learn. arXiv:2111.13759
29. Pan X, Wen Z, Yang TY (2021a) Dynamic analysis of structures using artificial neural network
with adaptive training. In: 17th world conference on earthquake engineering, Sendai, Japan
30. Pan X (2022) Three-dimensional vision-based structural damage detection and loss estimation–
towards more rapid and comprehensive assessment. Doctoral dissertation, University of British
Columbia. https://doi.org/10.14288/1.0422384
31. Pan X, Yang TY (2023) 3D vision-based bolt loosening quantification using photogrammetry,
deep learning, and point-cloud processing. J Build Eng 106326
32. Pan X, Yang TY, Xiao Y, Yao H, Adeli H (2023) Vision-based real-time structural vibration
measurement through interactive deep-learning-based detection and tracking methods. Eng
Struct 281:115676
33. Pan X, Yang TY (2023) 3D vision-based out-of-plane displacement quantification for steel
plate structures using structure from motion, deep learning and point cloud processing. Comp
Aided Civil Infrastruct Eng 38:547–561
34. Pan X, Tavasoli S, Yang TY (2023) Autonomous 3D vision-based bolt loosening assessment
using micro aerial vehicles. Comp Aided Civil Infrastruct Eng 1–12
35. Park HG, Kwack JH, Jeon SW, Kim WK, Choi IR (2007) Framed steel plate wall behavior
under cyclic lateral loading. J Struct Eng 133(3):378–388
36. Qarib H, Adeli H (2014) Recent advances in health monitoring of civil structures. Sci Iranica
21(6):1733–1742
37. Sabouri-Ghomi S, Ventura CE, Kharrazi MH (2005) Shear analysis and design of ductile steel
plate walls. J Struct Eng 131(6):878–889
38. Salawu OS (1997) Detection of structural damage through changes in frequency: a review. Eng
Struct 19(9):718–723
39. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC (2018) MobileNetV2: inverted residuals
and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern
recognition, pp 4510–4520
40. Sause R, Braxtan TN (2011) Shear strength of trapezoidal corrugated steel webs. J Constr Steel
Res 67(2):223–236
41. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image
recognition. arXiv:1409.1556
42. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Rabinovich A (2015) Going deeper
with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern
recognition, pp 1–9
43. Szychowski A, Brzezińska K (2020) Local buckling and resistance of continuous steel beams
with thin-walled I-shaped cross-sections. Appl Sci 10(13):4461
44. Tavasoli S, Pan X, Yang TY (2023) Real-time autonomous indoor navigation and vision-based
damage assessment of reinforced concrete structures using low-cost nano aerial vehicles. J
Build Eng 106193
45. Tong JZ, Guo YL, Pan WH (2020) Ultimate shear resistance and post-ultimate behavior of
double-corrugated-plate shear walls. J Constr Steel Res 165:105895
46. Tong JZ, Guo YL, Zuo JQ (2018) Elastic buckling and load-resistant behaviors of double-
corrugated-plate shear walls under pure in-plane shear loads. Thin-Walled Struct 130:593–612
47. Vaze S (2021) Experimental and numerical investigations of Frameless cold form steel corru-
gated wall panels subjected to in-plane monotonic and cyclic loads. Master thesis, University
of British Columbia
48. Wang T, Song G, Liu S, Li Y, Xiao H (2013) Review of bolted connection monitoring. Int J
Distrib Sens Netw 9(12):871213
49. Xia Y, Chen B, Weng S, Ni YQ, Xu YL (2012) Temperature effect on vibration properties of
civil structures: a literature review and case studies. J Civ Struct Heal Monit 2(1):29–46
50. Xiao Y, Pan X, Tavasoli S, Azimi M, Noroozinejad Farsangi E, Yang TY (2023) Autonomous
inspection and construction of civil infrastructure using robots. In: Farsangi EN, Noori M, Yang
TTY, Lourenço PB, Gardoni P, Takewaki I, Chatzi E, Li S (eds) Automation in construction
toward resilience: robotics, smart materials & intelligent systems
51. Xu Y, Lu X, Cetiner B, Taciroglu E (2021) Real-time regional seismic damage assessment
framework based on long short-term memory neural network. Comput-Aided Civ Infrastruct
Eng 36(4):504–521
52. Yang TY, Banjuradja W, Etebarian H, Tobber L (2021) Numerical modeling of welded wide
flange fuses. Eng Struct 238:112181
53. Yang TY, Li T, Tobber L, Pan X (2020) Experimental and numerical study of honeycomb
structural fuses. Eng Struct 204:109814. https://doi.org/10.1016/j.engstruct.2019.109814
54. Yang TY, Li T, Tobber L, Pan X (2019) Experimental test of novel honeycomb structural fuse.
ce/papers 3(3–4):451–456
55. Yi J, Gil H, Youm K, Lee H (2008) Interactive shear buckling behavior of trapezoidally
corrugated steel webs. Eng Struct 30(6):1659–1666
56. Zhao Q, Sun J, Li Y, Li Z (2017) Cyclic analyses of corrugated steel plate shear walls. Struct
Des Tall Spec Build 26(16):e1351
Aerodynamic Effects of Height-to-Width
Aspect Ratio Variation in Tall
Buildings—Numerical Study
Sherine Ali
Abstract High winds, alongside earthquakes, are among the most dangerous natural hazards for tall buildings. Under high wind, tall buildings experience
aerodynamic effects that play a crucial role in determining their principal wind-
induced responses. Nowadays, high-rise buildings are often remarkably flexible, low
in damping, and light in weight. Thus, they generally exhibit increased susceptibility
to wind-induced responses. Consequently, it has become necessary to develop tools
to enable structural engineers to determine wind effects on tall buildings with high
confidence from structural integrity and serviceability perspectives. In this regard,
computational fluid dynamics (CFD) simulation is a suitable option for studying
the sensitivity of wind pressure on tall buildings. The numerical study presented in
this paper addresses the aerodynamic response of a standard tall building through
large eddy simulation (LES) employing the consistent discrete random flow gener-
ation technique (CDRFG). Applying the CDRFG technique to generate the inflow
boundary conditions allowed an accurate depiction of the turbulence spectra. The
aerodynamic behavior has been investigated for four tall buildings of the same height (180 m) with different height-to-width ratios (i.e., 6.0, 4.0, 3.6, and 3.0). Based on the
outcomes of the numerical study presented in this paper, it has been found that the
wind-induced responses obtained from the LES models led to an acceptable esti-
mation of the wind pressure distribution and reactions of the buildings studied in a
time-efficient manner.
S. Ali (B)
Department of Civil Engineering, Lakehead University, Thunder Bay, ON, Canada
e-mail: sali16@lakeheadu.ca
1 Introduction
Windstorms are considered among the most dangerous natural hazards to tall build-
ings. Under high wind, tall buildings experience aerodynamic effects that play a
crucial role in determining the principal wind-induced responses of high-rise build-
ings. Nowadays, tall buildings are often remarkably flexible, low in damping, and
light in weight. Thus, they generally exhibit increased susceptibility to wind-induced
responses [2]. It is neither realistic nor practical to design a tall building that would not move under wind-induced forces. According to a recent study by Elezaby and El Damatty [4], tall buildings always possess some flexibility that wind design does not usually take advantage of. Thus, allowing a minimal amount of wind-induced vibration is expected in the design practice of high-rise buildings.
In the 1960s, several researchers investigated the relationship between surface wind pressure and tall building configurations. The wind tunnel pressure distributions reported by Baines [1] for buildings of varying height show that the maximum positive pressure occurred in the vicinity of two-thirds of the building height rather than at the top of the building. Several studies
have been carried out to investigate the relationships between building dimensions
and wind patterns, and aerodynamic effects on tall buildings. In that regard, Lin
et al. [8] evaluated the wind effects based on wind tunnel tests, concluding that
building aspect ratio and side ratio could influence the wind loads of both square and
rectangular-shaped tall buildings. Also, Li et al. [7] investigated the aerodynamic
effects on various shaped buildings constructed in a hilly terrain field and having
different aspect and side ratios. According to the results of the latter study, it was
concluded that wind loads on rectangular-shaped buildings were more influenced by
the building aspect and side ratios than those on circular-shaped buildings.
In another study by Mou et al. [10] that aimed to ascertain wind pressure distri-
butions on various squared-shaped tall buildings through the application of compu-
tational fluid dynamics (CFD) techniques, models were developed to simulate wind
pressure distribution on buildings to investigate two parameters: height–width (HW)
and height–thickness (HT) ratios. Numerical results show that both HW and HT
ratios greatly influenced the wind characteristics of the studied buildings. Results
also indicated that positive pressure on building surfaces varied, where a narrower
windward surface tended to exhibit higher wind pressures while a wider surface
experienced higher negative wind pressure. Also, the building thickness had little
influence on altering positive wind pressure. Notably, pressure distributed on leeward
surfaces showed significant differences with the changes in the height–width (HW)
and height–thickness (HT) ratios. In addition, both positive and negative pressures
around the modeled buildings were increased by larger widths, while negative wind
pressure became ineffectual with the increase of building thickness [10].
The results of research studies spanning more than five decades on wind flow mechanisms and wind characteristics around tall buildings have shown that the air flow around tall buildings is sensitive and complicated [3]. Enhancing the response of tall buildings under aerodynamic effects requires detailed knowledge of how wind flows around different building geometric shapes and building height-to-width aspect ratios [13]. Consequently, it has become necessary to develop tools enabling structural engineers to estimate wind effects on tall buildings with a high degree of confidence from structural integrity and serviceability perspectives. In this regard, computational fluid dynamics (CFD) simulation is a suitable option for studying the sensitivity of wind pressure on tall buildings.
A numerical study conducted by Elezaby and El Damatty [4] developed a three-
dimensional finite element model for a 65-story high-rise building previously tested
in the Wind Tunnel Laboratory at Western University, Canada, using a pressure
model. The said model evaluated the time history variation of the straining actions
on various building structural elements. To develop appropriate numerical models
to acquire accurate computed results, several factors shall be considered, such as
the size of the computational domain, grid generation, boundary conditions, solver
setting, and residual control [10]. Various guidelines have been proposed regarding
the computational domain of models, including those addressed by He et al. [6]. For
instance, it was recommended that distances around the modeled building be large
enough to sufficiently represent the wind fields, where upstream and downstream
zones are at least five to ten times the building height.
Although wind tunnel testing remains the preferred approach in wind engineering, it is time-consuming and costly. Considering the available resources and the many dimensional and geometrical variations of tall buildings that need to be studied, advanced computational methods such as CFD simulation have become necessary. It is worth mentioning that CFD simulations have advantages over wind tunnel testing. For instance, CFD does not suffer from incompatible similarity requirements, since simulations can be conducted on a full-scale building. Also, unlike in wind tunnel testing, building configurations can easily be changed in CFD simulations [9].
Since there is a direct relationship between building dimensions and wind patterns
and aerodynamic effects on tall buildings, the numerical study presented in this
paper addresses the aerodynamic responses of rectangular-shaped tall buildings with
different height-to-width ratios through high accuracy, three-dimensional large eddy
simulation (LES)-based analysis employing the consistent discrete random flow
generation technique (CDRFG). The application of the CDRFG technique to generate
the inflow boundary conditions allowed accurate depiction of the turbulence spectra
[5]. The current study investigates the aerodynamic behavior of four 180-m-tall buildings with varying height-to-width ratios (i.e., 6.0, 4.0, 3.6, and 3.0).
The numerical study presented in this paper is based mainly on developing computer
models using Star CCM+ [11] and large eddy simulation for four standard buildings
with different height-to-width ratios to investigate the aerodynamic performance of
tall buildings. All buildings were modeled with the same height (180 m) and the same
depth (45 m), whereas the building width varied (i.e., 30, 45, 50, and 60 m), leading
to a different height-to-width ratio (i.e., 6.0, 4.0, 3.6, and 3.0) for each building case,
respectively. The following sections of this paper describe the numerical models
developed to complete this study.
The total grid number of each model varies since the building in each model has a
distinct width (i.e., 30, 45, 50, and 60 m) in a fixed computational domain (750 m ×
500 m × 300 m), with the center of the building located at coordinates (X = 250 m,
Y = 250 m, Z = 0), as shown in Fig. 1. The front of the computational domain is
defined as the velocity inlet, and the back is the pressure outlet, while the other two long sides of the computational domain are defined as symmetry planes, as shown
in Fig. 1. Also, the angle of attack used in the simulations was (0°). The bottom and
the sides of the computational domain were defined as no-slip surfaces.
The inlet type was a velocity inlet in the Star-CCM+ large eddy simulations developed in this study. Since the height of all four modeled buildings was large (i.e., 180 m), a power-law wind profile was adopted to fit the actual atmospheric boundary layer.
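A power-law inlet profile of this kind takes the form U(z) = U_ref (z/z_ref)^α. The short sketch below evaluates such a profile; the reference velocity, reference height, and exponent are placeholders, since their values are not listed in this excerpt.

```python
# Power-law mean wind profile used for the velocity inlet: U(z) = U_ref*(z/z_ref)**alpha.
# U_ref, z_ref, and alpha below are placeholders; the paper's exact values are
# not reported in this excerpt.
import numpy as np

def power_law_profile(z, u_ref=30.0, z_ref=180.0, alpha=0.25):
    """Mean wind speed (m/s) at height z (m) for an assumed exposure."""
    z = np.asarray(z, dtype=float)
    return u_ref * (z / z_ref) ** alpha

heights = np.array([10.0, 60.0, 120.0, 180.0])   # sample heights up to roof level
print(power_law_profile(heights))
```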
A few important values, such as the reference velocity, time step, and physical simulation time, were specified to properly set up the models used in this numerical study. The reference density of air was set to ρ = 1.29 kg/m³ and the kinematic viscosity to ν = 1.3 × 10⁻⁵ m²/s. Based on the size of the finest mesh, a time step of Δt = 0.03 s was specified and used to run the model of each building case. A maximum physical time of 120 s was used, resulting in 4000 time steps. Five inner iterations were used for each time step to guarantee higher accuracy of the model outcomes while maintaining a Courant number (CFL) of less than 1.0.
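The Courant number constraint mentioned above can be checked cell by cell as CFL = U Δt / Δx. The sketch below performs this check for the stated Δt = 0.03 s; the velocity and minimum cell size used here are placeholder assumptions.

```python
# Courant (CFL) number check: CFL = U * dt / dx should stay below 1.0.
# dt = 0.03 s matches the text; the local velocity and finest cell size are
# placeholder assumptions.
def courant_number(velocity, dt, cell_size):
    return velocity * dt / cell_size

dt = 0.03          # s, time step from the text
u_local = 35.0     # m/s, assumed local velocity near the building
dx_min = 1.2       # m, assumed finest cell size
cfl = courant_number(u_local, dt, dx_min)
print(f"CFL = {cfl:.2f}  ->  {'OK' if cfl < 1.0 else 'refine dt or mesh'}")
```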
Star-CCM+ can compute the wind velocity components (u, v, and w), pressures, forces, and moments on tall buildings with very high precision. Simulations performed using the program can consider variation over time (unsteady flow) or steady flow. For this study, the focus was on unsteady flow using LES to model the turbulence. In such unsteady simulations, the principle is that the larger the eddies, the more energy they carry and the more influence they have on the surroundings and on the modeled building.
One of the primary considerations when setting up this type of simulation is deter-
mining how to generate the velocity inflow conditions since this can significantly
affect the precision of the outputs. It is also assumed that wind is affected by the
upstream conditions before it reaches the building. Thus, it persistently varies due
to the influence of terrain and exposure. This highlights the advantage of utilizing
the consistent discrete random flow generator (CDRFG) technique based on mathe-
matical calculations and formulation instead of extracting the wind using a database
or recycling method. For the models developed as part of this numerical study, the
inflow conditions were generated and set in Excel files containing each velocity
component depending on its location within the computational domain and the time
(i.e., x, y, z, and t).
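The CDRFG formulation itself is given in Elshaer et al. [5]. Purely to illustrate the idea of synthesizing a fluctuating inflow mathematically (a mean power-law profile plus random-phase fluctuation modes) rather than recycling a database, a much-simplified toy sketch follows. It is not the CDRFG algorithm, and all numbers are placeholders.

```python
# Toy illustration only: a fluctuating inlet velocity built as a power-law mean
# profile plus a few random-phase sinusoidal modes. This is NOT the CDRFG
# formulation (see Elshaer et al. [5]); it only shows the general idea of
# generating inflow turbulence mathematically instead of recycling a database.
import numpy as np

rng = np.random.default_rng(0)
n_modes = 20
freqs = rng.uniform(0.05, 2.0, n_modes)             # Hz, placeholder band
phases = rng.uniform(0.0, 2 * np.pi, n_modes)       # random phases
amps = 0.5 / np.sqrt(n_modes) * np.ones(n_modes)    # m/s, placeholder amplitudes

def inlet_u(z, t, u_ref=30.0, z_ref=180.0, alpha=0.25):
    """Streamwise velocity at height z and time t: mean profile + toy fluctuations."""
    mean = u_ref * (z / z_ref) ** alpha
    fluct = np.sum(amps * np.sin(2 * np.pi * freqs * t + phases))
    return mean + fluct

# Tabulate u(z, t) at a few heights and times, as one might export to a file.
for t in (0.0, 0.03, 0.06):
    print([round(inlet_u(z, t), 2) for z in (10.0, 90.0, 180.0)])
```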
The force and moment values exhibited by a building are crucial for the design to
ensure its structural integrity and maintain the safety of occupants. By running the
LES simulations, the time histories for the forces and the moments were obtained
and then averaged due to the nature of the simulation being unsteady and transient.
Figure 4 shows the time histories for each of the four cases for the forces in the x-axis
(along-wind direction) and y-axis (across-wind direction) and the moments about x-
and y-axes.
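The averaging and case-to-case comparison described above can be expressed simply as shown below; the transient cut-off and the force values are placeholders, not the results reported in Table 1.

```python
# Sketch of the post-processing described above: average each unsteady force
# time history (after discarding an initial transient) and report the percent
# difference of each case relative to Case 1. All numbers are placeholders.
import numpy as np

def mean_after_transient(time, signal, t_skip=20.0):
    """Mean of an unsteady signal, ignoring the start-up transient before t_skip."""
    time, signal = np.asarray(time), np.asarray(signal)
    return signal[time >= t_skip].mean()

def percent_difference(value, reference):
    return 100.0 * (value - reference) / reference

# Hypothetical mean along-wind forces (kN) for Cases 1-4:
fx_means = {"Case 1": 4200.0, "Case 2": 5100.0, "Case 3": 5600.0, "Case 4": 6300.0}
for case, fx in fx_means.items():
    print(case, f"{percent_difference(fx, fx_means['Case 1']):+.1f}% vs. Case 1")
```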
Table 1 summarizes the monitored averaged results from each of the time histories
and the percent difference of each building case compared to Case 1 when the building
has varied height-to-width aspect ratios. Based on the presented results, it can be
noticed that the windward force in the x-axis increased with the increase of the
building width, resulting in an inverse relationship between the forces and the height-
to-width aspect ratio of the building. Similarly, the moment about the y-axis increased
with the decrease of the height-to-width ratio of the building. It has also been noticed that the percent difference of F_y and M_x for Case 4 vs. Case 1 is slight compared to that of the other cases (i.e., Case 2 vs. Case 1 and Case 3 vs. Case 1).
Fig. 4 Time histories of the forces in the x- and y-axes and the moments about the x- and y-axes for all four cases
4.2 Velocities
Figure 5 shows the profiles of the instantaneous velocity magnitudes on both elevation
and plan for all four building cases with varying height-to-width aspect ratios. Two
sections were taken during the simulation to show the velocity profiles. For the
elevations, a vertical section passing through the centroidal axis of the building
(parallel to the x-axis) has been taken. In contrast, the plan section was taken at two-
thirds of the building height (i.e., 120 m from ground level), where the maximum
wind force is anticipated.
Figure 6 shows the profiles of the mean velocity magnitudes on both elevation
and plan for all four building cases with varying height-to-width aspect ratios, using
the same two sections taken for the instantaneous velocity magnitude profiles. From
the said figure, it can be seen how the shape and length of the wake behind the
building are altered. The velocity magnitudes increased at the separation points of
the building with the decrease of its height-to-width aspect ratio from Case 1 through
Case 4, indicating that the turbulent region becomes more prominent. The area behind the building becomes more disturbed, with more eddies introduced, as the height-to-width aspect ratio decreases.
Table 2 summarizes the maximum instantaneous and mean velocity values for
all four cases that resulted from the completed simulations. Based on the presented
velocity values, it can be concluded that with the decrease of the height-to-width
aspect ratio (due to the increase of the building width) from Case 1 through Case 4,
both the instantaneous and mean velocities increased.
Figure 7 shows the mean pressure coefficient (C p ) distribution across each building’s
vertical surface (i.e., windward, left side, right side, and leeward) and the roof surface
for all four building cases.
Table 3 summarizes the maximum positive and negative values of the mean
pressure coefficient for each building case.
Upon examining Fig. 7 and Table 3, it can be noticed that the positive mean C p
values on the windward surface slightly decreased with the decline of the building
height-to-width aspect ratio from Case 1 through Case 4; however, the negative mean
C p values on the rest of the surfaces increased with the decrease of the height-to-
width aspect ratio, especially in the regions toward the windward side of the roof
surface. As anticipated, the most significant values occur on the windward surface of
the building, with its positive peak pressure appearing at approximately two-thirds of
the building height from the ground level. However, the other surfaces (i.e., sidewalls,
leeward, and roof) exhibited negative mean C p values.
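The mean pressure coefficient shown in Fig. 7 and Table 3 is conventionally defined as C_p = (p − p_ref)/(0.5 ρ U_ref²); the sketch below applies this definition to a surface pressure time series. The density matches the 1.29 kg/m³ quoted earlier, but the reference pressure, reference velocity, and pressure samples are placeholders, not the exact post-processing used in the paper.

```python
# Mean pressure coefficient for a surface tap: Cp = (p - p_ref) / (0.5 * rho * U_ref**2).
# rho matches the 1.29 kg/m^3 quoted earlier; U_ref, p_ref, and the pressure
# samples are placeholders, and the paper's exact reference conditions may differ.
import numpy as np

def mean_cp(pressure_series, p_ref=0.0, rho=1.29, u_ref=30.0):
    q = 0.5 * rho * u_ref ** 2                 # reference dynamic pressure (Pa)
    return np.mean((np.asarray(pressure_series) - p_ref) / q)

windward_tap = [450.0, 520.0, 480.0, 505.0]     # Pa, hypothetical samples
leeward_tap = [-210.0, -190.0, -230.0, -205.0]  # Pa, hypothetical samples
print(f"windward mean Cp = {mean_cp(windward_tap):.2f}")
print(f"leeward  mean Cp = {mean_cp(leeward_tap):.2f}")
```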
5 Model Validation
To validate the computer models developed using Star-CCM+ and large eddy simu-
lation as part of the present study, the outcomes of the model for building Case 2
(45 m × 45 m × 180 m high) were compared with the results of the wind tunnel
test conducted by the Tokyo Polytechnic University, which are documented as part
of the aerodynamic database of high-rise buildings developed within the twenty-
first Century COE Program on Wind Effects on Buildings and Urban Environment.
Different model predictions, such as the instantaneous and mean velocities, mean
and peak pressure coefficients, and RMS values, were compared against the results of
the said wind tunnel test, and the maximum results difference was calculated at 12%.
This suggests reasonable accuracy of the present numerical study and provides reliability verification of its computer simulations. As an example, and to illustrate the
comparison between the numerical study predictions and the wind tunnel test results,
Fig. 8 shows the mean pressure coefficient (C p ) distribution across each building’s
vertical surface (i.e., windward, left side, right side, and leeward) out of the wind
tunnel test conducted by the Tokyo Polytechnic University on a 1/400 building model
(height: width: depth = 0.4 m: 0.1 m: 0.1 m). Comparing the mean pressure coef-
ficient distribution contours on the building’s faces from the numerical simulation
(Fig. 7—Case 2) with those from the said wind tunnel test (Fig. 8), a high similarity
in both values and patterns can be noticed. Any discrepancies between the two figures
(Fig. 8 vs. Fig. 7—Case 2) can be attributed to the fact that CFD simulations do not
suffer from incompatible similarity requirements since simulation can be conducted
on a full-scale building, unlike wind tunnel tests.
6 Conclusions
The key findings of the numerical simulations of the present study on the aerodynamic performance of tall buildings with height-to-width aspect ratio variations show that the variation of the building width significantly influenced the wind characteristics of the four studied tall buildings of the same height and depth. This study compared the substantial changes in the instantaneous flow velocity and mean pressure on the building surfaces for the various aspect ratios (varied widths). It was determined that a narrower windward surface is more likely to exhibit higher wind pressures, while a wider windward surface is more susceptible to negative wind pressure effects. It was also concluded that the wider the building becomes, the stronger the suction pressure on the building roof, side, and leeward surfaces will be. Also, the side surfaces exhibited considerable fluctuations in wind pressure among the four building cases studied.
Fig. 8 Mean pressure coefficient distribution over the building's faces (adapted from wind tunnel tests of Tokyo Polytechnic University)
The predictions of the models developed in the current study were compared with
the results of the wind tunnel tests conducted by the Tokyo Polytechnic University,
which are documented as part of the aerodynamic database of high-rise buildings
developed within the twenty-first Century COE Program on Wind Effects on Build-
ings and Urban Environment. This demonstrates the accuracy of the present numerical study and provides reliability verification of its computer simulations.
Acknowledgements The author would like to thank Dr. H. Aoude for the wind engineering knowl-
edge gained through taking his graduate course at the University of Ottawa. Thanks are also due
to Dr. A. Elshaer for the CFD simulations knowledge gained through taking his graduate course at
Lakehead University.
References
1. Baines WD (1963) Effects of velocity distribution on wind loads and flow patterns on buildings.
In: Proceedings of the symposium on wind effects on buildings and structures, vol 1. National
Physical Laboratories, Teddington, UK, pp 26–28
2. Bezabeh MA, Bitsuamlak GT, Tesfamariam S (2020) Performance-based wind design of tall
buildings: concepts, frameworks, and opportunities. Wind Struct 31(2):103–142
3. Blocken B (2014) 50 years of computational wind engineering: past, present and future. J Wind
Eng Ind Aerodyn 129:69–102
4. Elezaby F, El Damatty A (2020) Ductility-based design approach for tall buildings under wind
loads. Wind Struct 31(2):143–152
5. Elshaer A, Aboshosha H, Bitsuamlak G, El Damatty A, Dagnew A (2016) LES Evaluation of
wind-induced responses for an isolated and a surrounded tall building. Eng Struct 115:179–195
6. He BJ, Yang L, Ye M (2014) Strategies for creating good wind environment around Chinese
residences. Sustain Cities Soc 10:174–183
7. Li Z, Sun Y, Huang H, Chen Z, Wei Q (2010) Experimental research on amplitude characteristics
of wind loads of super tall buildings in hilly terrain field. J Build Struct 6:171–178
8. Lin N, Letchford C, Tamura Y, Liang B, Nakamura O (2005) Characteristics of wind forces
acting on tall buildings. J Wind Eng Ind Aerodyn 93(3):217–242
9. Montazeri H, Blocken B (2013) CFD simulation of wind-induced pressure coefficients on
buildings with and without balconies: validation and sensitivity analysis. Build Environ 60:137–
149
10. Mou B, He BJ, Zhao DX, Chau KW (2017) Numerical simulation of the effects of building
dimensional variation on wind pressure distribution. Eng Appl Comput Fluid Mech 11(1):293–
309
11. Siemens Industries Digital Software. Simcenter STAR-CCM+, version 2021.
12. Tokyo Polytechnic University. Aerodynamic database of high-rise buildings. The twenty-first century COE program on wind effects on buildings and urban environment. http://www.wind.arch.t-kougei.ac.jp/info_center/windpressure/highrise/Homepage/homepageHDF.htm
13. Tse KT, Hu G, Song J, Park HS, Kim B (2021) Effects of corner modifications on wind loads
and local pressures on walls of tall buildings. Build Simul 14:1109–1126
Bearing Capacity of Precast Hollow Core
Slabs at Load Bearing Wall–Slab Joint
Keywords Hollow core · Precast concrete · Bearing capacity · Crushing · Core fill
1 Introduction
Hollow core concrete slabs have been a popular building component for precast
construction for many years. Hollow core slabs benefit from continuous longitudinal voids running through the slabs, which reduce weight, material use, and ultimately cost; the voids can additionally provide space for mechanical or electrical equipment [1]. Along with these advantages, the use of precast building
components is becoming more popular with the recent shift and focus on off-site
construction. All these points indicate that hollow core concrete has been and will
continue to be a popular choice for small, medium, and large construction projects.
Typical design and construction of buildings using these precast elements uses
a simple slab system, where the hollow core slabs are oriented to span between
the internal load-bearing shear walls or to the exterior walls. Figure 1 shows typical
details for internal load-bearing wall connections and external wall connections used
in precast construction.
As illustrated above, typical connections between bearing walls and hollow core
slabs consist of a hollow core slab sitting between the walls and a bearing strip
placed underneath them. Gaps between the slabs and the wall are filled with grout,
and steel reinforcement is grouted in keyways to ensure continuity over the supports.
In both cases, the design of the connections sees the axial forces from the bearing walls transferred through the hollow core slabs over a relatively small bearing width (typically a minimum of 3 in. on each slab). As buildings increase in size and weight, crushing and failure of the concrete at the ends of these slabs can be a concern as the forces acting through the walls and slabs increase.
The research being completed in this project focuses exclusively on the first case illustrated in Fig. 1, the internal load-bearing wall connection. The specimens used in this testing are 8 untopped hollow core planks from Strescon Ltd. in Saint John, NB, the partner of this research project. The results of this research provide insight into the capacity of the hollow core planks and the effects of filling the cores of the planks. With these results, industry will have insight into the capacity of the planks and how they perform in this common type of connection. This research fits into a long-term plan and series of tests with the industry partner and their products; the research presented in this paper consists mainly of a literature review of past research on hollow core slabs, as well as the test method, preliminary results, and indications of slab behaviour.
Fig. 1 Typical interior connection (left) and exterior connection (right) [1]
The following section focuses on an introduction to hollow core concrete and its
behaviour under typical loading, as well as the specific application of these slabs
being considered for this project. Significant past available research has focused on
the behaviour of these slabs under flexure and shear. This review will focus on the crushing of these slabs and on concrete bearing loads.
Hollow core slabs are precast/prestressed concrete elements that have voids running
throughout the length of the slab. These precast elements are common products used
as floor or roof members in panel construction buildings like hotels and apartments
and can also be used as partition members or bridge deck elements [2].
Hollow core slabs have many benefits as precast building elements. They have
a simple and quick production process, mainly benefitting from the zero-slump
concrete used for their production. The zero-slump concrete, aided by heated casting beds, results in slabs that can immediately hold their shape and achieve prestress release
strengths in less than 24 h; this results in quick turnaround between the casting of
different members.
Hollow core slabs can significantly reduce the weight and cost of buildings, as approximately 50% or more of the floor or roof area can be void [3]. While achieving this reduction in weight, they can structurally provide the efficiency of other prestressed members in terms of load capacity, increased span length, and deflection control [1]. Hollow core slabs allow for diaphragm action through grouted keyways along the members; these grouted keyways result in the hollow core system behaving similarly to a monolithic slab and transferring forces between adjacent slabs [1].
The voids running through hollow core slabs present an opportunity for building
services to be located inside them, unlike traditional solid slabs. When properly
coordinated for alignment, the voids in hollow core slabs may be used for electrical
runs, mechanical runs, or plumbing [1, 3]. Other advantages of hollow core slabs
include their excellent fire resistance and sound transmission qualities, with fire
ratings of up to four hours and a sound transmission class rating of 47–57.
Significant past research has been done on the behaviour of these slabs under both
flexure and shear, and available standards and design manuals provide the require-
ments and general procedures for flexural and shear design [4]. The following section
references failure in flexure, flexural shear, and web shear. Figure 2 shows these cracks in a hollow core slab subjected to a four-point bending test; such results were used to develop the design equations that are used to analyze and design hollow core slabs [5].
The shear behaviour of hollow core slabs is a significantly researched topic. Equations
found in various codes and standards have been determined to be deficient in their
prediction of the shear capacity of hollow core slabs. Experiments show a wide range of results in slabs with no web reinforcement, making it difficult to predict
capacities [5]. As illustrated in Fig. 2, the two main modes of shear failure are the
development of either flexural shear cracks or web shear cracks depending on the
span to depth ratio.
Flexural Shear Capacity
Flexural shear failure happens in slender and lightly prestressed members when
the tensile strength of concrete is exceeded, and vertical tensile cracks initiate at
the bottom face of the slab [6]. With increased loading, cracks begin to widen and
propagate vertically beyond the prestress strands until mid-height of the member;
further loading increases the shear stress on the upper, uncracked portion of the slab
which causes cracks to take an inclined form before reaching the top surface of the
slab [6].
Fig. 2 Flexural, flexural shear, web shear cracks in hollow core slab
ACI 318-19 gives a detailed equation for the flexural shear capacity of prestressed members [2], represented by Eq. (1):

V_ci = 0.6 λ √(f'_c) b_w d_p + V_d + (V_i M_cre) / M_max   (1)
Web shear cracks tend to initiate at the “critical point” which represents the posi-
tion of maximum principal tensile stress in the webs of hollow core slabs. Small cracks
will generally start here, and without warning can start to propagate upward joining
the edge of the support with the top of the slab [6]. Web shear failure can be almost
instantaneous making it the least desirable shear failure mode.
Several experimental programs in the past have looked at testing the accuracy of
these equations in predicting the actual capacity of hollow core slabs in both shear
and flexure. Many of the studies determined that the ACI code equations for predicting shear and flexural capacity were quite conservative in their estimations [5–7]. While the equations were considered adequate for predicting load-carrying capacities, it was determined that they were insufficient for predicting the failure modes of the members. Rahman et al. [8] noted the anomaly between the predicted and observed failure modes: for shear span-to-depth ratios greater than 8, a flexure–shear failure is indicated by the ACI code equations, whereas in full-scale load tests a flexural failure was observed.
While the main benefit of hollow core slabs is the continuous voids running through
the slabs, it is sometimes necessary or beneficial for portions of the cores to be filled. Filling the cores with concrete is typically a strategy to improve the shear capacity of
a section, specifically the web shear capacity [9, 10]. Core filling is a beneficial way
of increasing strength in these slabs for several reasons. Core fill can selectively be
implemented in different locations based on the strength requirements in different
areas of a floor plan. Additionally, core fill or grout may be applied at any stage of
the production or construction allowing for further implementation flexibility [10].
Several different variables have been tested when looking at the effects that the
core filling has on the strength of these slabs. The main variables tested consist
of the timing of the core fill in relation to the production of the slab (i.e., before
prestressing, before curing, immediately after extrusion, and after curing) and also
the number of cores being filled. Additional areas of study include the bond strength
between the hollow core slabs and the core-filled concrete. Studies performed at
the University of Minnesota showed the significant benefits to filling cores within
one hour of extrusion, with results showing the core fill and slab both acting as
fully prestressed in web shear capacity determination [10]. Other benefits include
roughening the surface of the web walls to increase bond strength.
The available research focuses almost solely on the effect core fill has on the shear
capacity (specifically web shear) of these slabs, and not on the bearing or crushing
capacity of hollow core with core fill added. Similar principles were tested in this
research with respect to the core fill timing and the effect of the quality of the core
fill; for example, whether the core fill is added after extrusion in the shop or added in the form of grout in the field after placement.
The type of failure being studied in this project is not a shear or flexure failure of the
slabs, but a crushing failure along the terminal ends of the slab where the above wall
is sitting on it, in a bearing-type failure. In general, the crushing strength of a concrete joint such as a column-to-slab or wall-to-slab connection can be calculated as the
sum of the compression strength of the concrete slab and the strength of the bars
linking the upper and lower columns or walls [11]. The compressive or crushing
strength of the slab is calculated as the product of the confined crushing strength
of the concrete, f_cc, and the support area, A_s; however, if no special confinement reinforcement is available, f_cc is replaced by the concrete uniaxial compressive strength f_c [11]. Past research has focused on crushing and bearing failures of
concrete in connection zones, due to the high forces being transferred through them.
The crushing and bearing capacity of these structural joints and connections is an important and well-researched topic, as the strength of the joints is imperative for the overall safety and integrity of the structure.
The bearing wall–hollow core slab connection being focused on in this research
project is a typical structural detail in many precast concrete structures. Typical
details for this type of connection can be found in Fig. 1. As illustrated, this connection
consists of two hollow core slabs sitting between load-bearing walls, transferring the
load from the upper wall through the slabs and to the bottom wall and floors below.
Other components of this joint include grouting between slabs and walls, and steel
reinforcement in keyways to ensure the connections can transfer diaphragm shear
and ensure continuity throughout the joint [1]. Understanding the behaviour of these
joints is crucial in the design of a building, as in many cases the strength and safety
of the entire structure is governed by the connections of the different precast elements
[14].
PCI provides equations to describe the transfer of axial load through this horizontal joint, for both a grouted joint and an ungrouted joint. Equation (3) represents the grouted condition, and Eq. (4) represents the ungrouted condition. Figure 3 summarizes the forces acting on the joint.

φ P_n = φ 0.85 A_e f'_c R_e   (3)

φ P_n = φ t_g f_u C R_e / k   (4)
Studies in the past have regarded the horizontal joint as the weakest link in this
type of construction [14]. A study conducted by Harris and Iyengar examined the behaviour of horizontal wall-to-floor joints under increasing axial load; they examined both the strength and stiffness characteristics at failure of eighteen different hollow core slab and wall specimens of three different sizes. Their tests yielded a range of results based on the dimensions of the hollow core. Figure 4 shows
an example of the experimental setup.
The main modes of failure of the joint were axial shortening across the entire joint
causing cracks in the walls, splitting of the walls and longitudinal cracks in the slabs,
or crushing of the slab ends where the walls were sitting on top of them. The study also compared its results with PCI prediction methods, and the comparison showed the PCI equations to be very conservative estimates of joint strength [14].
Much of the available literature on hollow core concrete focusses on its benefits as
a precast off-site construction product, or its behaviour under flexure and shear. The
connection explained in Sect. 2.5 is one of the most commonly used connections
in large panel precast buildings. The bearing capacity of these slabs is something
that is not well known in industry; to account for this, core fill is generally added
to the ends of the slabs to be conservative. Core filling these slabs is a process that
is very time-consuming and expensive for manufacturers, so eliminating this step
would prove to be beneficial for many precast plants. It is important to understand the behaviour of this joint under higher loads and its ultimate capacity to ensure that the safety of the building is maintained. The closest available research on the capacity of this connection is only semi-full-scale and is outdated in terms of newer applications, as well as current codes and standards.
The results in the proposed research plan will provide a better understanding of
the performance of this joint. The full-scale experimental tests will determine the
capacity of this slab as well as the effects of core filling and grouting of the slabs,
both of which are common in practice. Understanding the capacity, and potential ways to increase it, will help moving forward, as it could make hollow core slabs a more viable option in even larger precast buildings.
3 Experimental Method
The research presented in this paper is phase one of a two-phase project to study
the bearing capacity of hollow core slabs. The first phase of this project involved
a review of currently available research, the design of a suitable test machine for the requirements of the tests, the development of a testing program, preliminary full-scale tests, and a review of the data. The research presented in this paper includes
preliminary findings that were used to develop a more in-depth testing program for
phase 2 of the project.
The first step after a literature review was the design of a test machine; this was
required as no available equipment in the laboratory had both the size and capacity
to perform the desired tests. The goal of the design was to create a self-contained
testing apparatus that could easily be moved, while maintaining the capacity to fail
the slabs. As illustrated in Fig. 5, the main idea of the apparatus was to have two
large steel W-beams simulate the walls of a building and compress the concrete
specimens between them using 200-tonne capacity hydraulic jacks. A steel plate
with a width of 200 mm was welded to the W-sections to ensure proper bearing
width on the specimens. Additional components of the apparatus were 4—Dywidag
deformed prestressing bars used to carry and contain the load. The steel sections were
designed using CSA S16-14 standards to resist a load of approximately 4000 kN,
and to be completely rigid so load could be evenly distributed across the specimen.
Due to space requirements in the laboratory, the tests were performed at the industry
partner location in Saint John, NB. Testing at this location meant the machine had
to be placed on a rail system to ensure that it could be moved down the test setup.
Figure 5 shows the final test frame setup on the rail system.
Once the design of the test frame was complete, an experimental test program
was determined through collaboration with the industry partner. The main goals of
the first phase of the project were to determine the capacity of the slabs and how
different variables affected them. Based on the previous work and experience with
the slabs, it was determined that the different variables used in this phase would look
at the effects of core filling and grouting these slabs. Table 1 illustrates the planned
variables, number of specimens, tests planned, and number of tests completed. Given
time constraints due to weather, crane availability, and unforeseen issues with the
testing apparatus, only half of the tests were able to be completed. As will be explained
in following sections, the slabs that were not used for the first phase of testing were
ripped down to be used in the second round of testing.
Slabs were cast at the precast plant of the industry partner and were allowed to
sit in their yard for 1 month before testing. During casting, concrete cylinder molds
were made to be used to determine concrete properties after testing.
The testing of the specimens took place over the course of 3 weeks; the testing
schedule was largely dictated by the availability of the cranes, weather constraints,
as well as availability of the team performing the tests. The first step involved setting
up the planks in the orientation in which they would be tested. The slabs were set
up with their ends facing one another, separated by a 25-mm gap, to
simulate real build conditions. To ensure that the tests were as accurate as possible,
it was necessary to ensure that all temporary supports as well as the support rail for
the apparatus were completely level.
After the layout was in place, the test apparatus was constructed on the rail system.
This involved placing the bottom beam onto the rail, and then placing the constructed
top beam along with the bars and rams on top of it. The construction was done in a
way where the entire top of the apparatus could be removed from the assembly and
placed back on after the next specimens were in place.
The test process for each slab took approximately 3–4 h to complete. Slabs were
set up on the temporary supports and lined up with each other. The bottom of the
apparatus was lined up to ensure that the proper 90-mm (3.5 in.) bearing area on each
slab was met; the crane would then lift the entire top of the apparatus and place it
on top of the slab and the bottom portion of the machine. After the top beam was in
place, a high-strength plaster-like material was added to the top and bottom of the
specimen to ensure full bearing contact. The two pieces were then attached using
large nuts on the bottom of the thread bars. The data collection components of the
machine, which included strain gauges on all bars, pressure transducers connected to
the hydraulic rams, and displacement sensors on all four corners of the W-sections,
were then put in place.
Testing commenced when all components of the machine were in place, and
data collection tools were ready and hooked up to the data logger. To do the tests,
the hydraulic rams were slowly loaded to ensure all parts of the machine were working properly. The load was brought up to approximately 3900 kN, which was the capacity of the test machine. When the testing was completed, the load was removed
from the machine and the specimens were inspected to assess the damage and failure modes
of the planks.
The review of data consisted mainly of comparing the expected versus actual
capacities and reviewing the difference between as-built slabs, core-filled slabs, or
field grouted slabs. These differences include differences in capacity and stiffness.
Data of each test were analyzed to look for evidence of cracking within the members
prior to failure, or any discrepancies from what is expected from the response of the
slabs under this type of loading. The results of these tests will be explored further in
the following sections.
As explained in the previous sections, the results presented in this paper are prelim-
inary and served as a starting point for this project. Based on the results from the
first phase of tests, a more refined testing program has been developed that will look
more in depth at the desired variables. However, the results found in the first phase
of testing do provide an insight into the behaviour of these slabs under this loading
condition which will be covered in this section.
The results in this section are separated by individual test and variable. Each
subsection includes a description of the test and the variables being considered, the
observed failure mode (if any), and comparisons with other methods
of estimating crushing resistance such as using equivalent area methods. Each of the
tests will then be compared to analyze the effects of the different variables on the
slab performance.
The first test performed was the as-built test. This test was done to determine a
baseline for the slab strength and used to compare the effects of different variables.
Figure 6 shows the results of the test on the as-built slab.
The failure presented in Fig. 6 depicts a punching-type failure over the width
of the bearing area. The failure started with cracks forming diagonally up from the
bottom corner of the slab end to the corner of the bearing area on the top face of the
slab. Almost immediately after cracks started to form and with little warning, the
slab failed as the machine punched through it.
The data gathered from this test were analyzed to determine the capacity of the
slabs. Table 2 shows approximate values for the actual slab capacity compared with
other methods of estimating slab capacity. The capacity of the slab was estimated
using an equivalent bearing area method, using a compressive strength of 63.7 MPa
(determined from cylinder tests) and accounting for core lengths and widths within
the bearing cross-section area.
The values in Table 2 indicate a large difference (~ 35%) between the measured and
estimated capacities for this test. This difference could point to several possible causes.
It could indicate that there were problems during the set-up of the test; as will be
explained further with other tests, if the machine was not perfectly lined up against
the slab, one side would experience more force than the other. The higher force on one
side reduces the effective bearing area and causes a localized failure; this then creates
a zipper-type effect along the entire width of the slab. By looking at displacement
data, it was noted that one corner of the beam displaced slightly more than the corner
on the opposite side, indicating a potential for this phenomenon. Another explanation
could be an overestimation by, or ineffectiveness of, traditional methods of determining
the bearing strength of hollow core slabs.
More analysis of this test was done to compare the load vs. displacement of the
slab to determine the stiffness. Figure 7 shows the plot developed for the load vs.
displacement for this test.
The displacement picked up by the transducer includes a significant amount
of noise, which is expected for the small displacements being experienced. It was
assumed that any displacement under 1.5 mm was negligible and likely
caused by wind or other external factors given the test setup. For future rounds of testing,
more accurate displacement sensors will be used which are less susceptible to outside
conditions. This plot shows a maximum displacement of almost 5 mm under peak
loads. Using the linear portion of the plot, a stiffness of approximately 211 kN/mm
was determined for the as-built slab; this will be compared to values from other tests
at the end of this section.
The second test performed was on a partially filled hollow core slab. For the partially
filled tests, specific cores at the ends of the slabs were filled with a self-consolidating
concrete mix. Core filling differs from grouting in how it is carried out.
Core filling takes place within the precast shop; immediately after extrusion, the
flanges at the core location are dug out from the hollow core slabs and the cores
are filled with a self-consolidating concrete mix. In practice, different cores may be
filled or left open based on several different requirements or specifications (strength,
heating, plumbing, electrical, etc.). The addition of this core fill results in a step in
the manufacturing process that is neither time nor cost efficient; determining whether
core fill is required (or how much may be required) is one of the most important
outcomes of this project. It was decided to test a slab which had some of the cores
filled but not all. For this test, the slab had 4 of the 6 cores filled at the ends.
The additional concrete in the ends of this plank made it more comparable to a solid
concrete slab, and it became clear that the test machine would be unable to crush the
specimen. The test was still carried out to determine how the slab reacted under high
loading. Owing to an unnoticed problem with the size of the thread bars used to carry
and contain the load, the machine ended up failing before the maximum load was
reached. Although the machine failed at
roughly half the expected capacity, the load vs. displacement data were plotted for
the salvageable data. Figure 8 shows the plot of load versus displacement for the first
portion of the test.
As expected, the slab with core fill at the ends resulted in a stiffer slab. The
machine climbed to approximately 2400 kN before one of the bars on the machine
failed. Under this load, the slab deformed just under 3.5 mm compared to the almost
5 mm seen in the first test. This corresponded to a stiffness of approximately 1296 kN/
mm as measured from the linear portion of the load–displacement plot.
When looking at the capacity estimated by compressive strength and bearing area
of this slab (considering the empty cores and self-consolidating mix strength), the
capacity is approximately 8200 kN, about twice the loading capacity
of the machine. This was the first indication for the research team that a separate
approach would need to be taken to estimate and model the capacity of core-filled
slabs more closely. This will be elaborated on in the following sections.
The third test performed was on a set of slabs which had been fully field grouted.
This field grouted variable was important as this is the most common way that these
slabs would be found in regular building practice. Slabs are shipped to the site with
cores at the ends empty and are then grouted when installed in place. To simulate
proper field conditions where the grout is properly distributed through all cores, stop
caps were placed 1 foot inside each of the ends. A grout mix was then poured through
the gap in the slabs and vibrated to ensure even distribution through all the cores.
As will be explained in the following section, it is unlikely that this joint is grouted
to this level of care every time; therefore, this test serves as a best-case scenario in
terms of grouting these slabs.
Given estimates on slab capacity based on bearing area and compressive strengths
of the grout and hollow core mixes, it was determined that the fully grouted slab
connection would have a strength of approximately 7715 kN. This meant that the
machine would not have the capacity to reach the failure load; given this, the slabs
were loaded to the capacity of the machine to analyze the stiffness of the grouted
specimens compared with the core-filled and as-built slabs. Figure 9 shows the effects
of loading this slab to approximately 3500 kN.
Although the fully grouted specimen did not completely fail, there were visible
cracks throughout both the grouted face of the slab and the slab itself. Other than
the surface cracks on the member, no structural damage appeared on the slabs after
the tests. Failure of grout within these connections can be an area of concern for
several reasons; usually, these connections contain steel reinforcement that helps
provide continuity over the supports. Loss of this action could result in increased or
uneven loads on the slabs or walls underneath them. Figure 10 shows the load versus
displacement relationship for this test.
The above figure shows a fairly linear relationship between load and displacement
for this test. The maximum deflection at the peak load of this test was approximately
5.2 mm. The constant linear relationship indicates that there was likely no structural
damage to the load-carrying portions of this connection. Using this relationship, a
stiffness of approximately 800 kN/mm was determined.
The partially grouted tests were completed to simulate field conditions where
grouting is completed poorly or not thoroughly. As mentioned in the previous section,
it is possible that grouting in the field is not completed to the level of care as was
done in the fully grouted test. Often when installing these floor systems, grout is
brought up by crane or bucket and poured into the gaps between the walls and floors.
The grout is then pushed into the gaps using a shovel or trowel and is unlikely to
completely fill each core in the slab. To simulate this, concrete was poured onto the
gap between the slabs, and a flat trowel was used to push the concrete between the
slabs.
For this variable, two tests were completed, both yielding different results. The
test performed on the first set of slabs resulted in complete failure of the specimens.
Figure 11 shows the results of the first test on the partially grouted specimens.
As indicated by the figure above, a large failure on the right side of the slab was
noted. The localized failure on this side of the beam is the result of an improper setup
of the test machine, causing significantly more load to be applied to the right side
of the beam. This increase in load on one side caused it to fail prematurely, which
resulted in a zipper-type failure along the entire width of the slab. Like the failure of
the as-built slab, the cracks started forming at the bottom of the slabs and progressed
diagonally upward to the top of the slab. Given this, the results of this test were not
used in determining the stiffness of the slab. Another note from the figure above is
the lack of grout in some of the cores. Due to the lower quality of care given to the
grouting, some of the cores have significantly less grout than others, and after the
failure, the grouted cores became detached from the slab.
The second test was performed ensuring that both sides of the machine were
completely level. The results of the second test appeared to be far more accurate from
both an observational and a data standpoint. The slabs did not crush in the same
way as in the first test; however, cracks began to form at the bottom of the
slab and in the grouted area between the slabs. Figure 12 shows the slabs after 3 cycles
of loading and unloading the slabs.
Although no significantly visible cracks can be seen on the members, small frac-
tures did appear on the bottom of the slabs up toward the middle upon further inspec-
tion. From the results of the previous tests, it is known that in this failure mode,
the cracks begin to form in this area before a larger full-scale failure of the slab.
Given the inconsistencies with the amount of grout fill in each core, it was difficult
to determine the estimated capacity based on the effective area method. Given the
results of the first test as well as estimation of the quality of grout, it was determined
that the capacity would be approximately 4700 kN. The load versus displacement
data were analyzed to determine whether any structural damage occurred within this
member; this plot can be found in Fig. 13.
The plot shows a significant amount of noise in the displacement data; as
mentioned in the analysis of the first test, the displacement transducers were very
sensitive to outside conditions, and there was a significant amount of wind on the day
when this test took place. As illustrated by the above figure, the slabs saw significant
deformation at the end of the third run of the test, reaching over 6 mm. During the
test, cracks were seen and heard after the first run of the test; however, the slab did
not completely fail. Figure 13 shows this phenomenon: a permanent deformation of
approximately 2 mm occurred after the first run of the test even though the connection
did not completely fail. Using the results of this test, a stiffness of approximately
750 kN/mm was determined.
Using the data gathered from all the tests, the different variables were compared as
accurately as possible given the varying results of the experiments. It was difficult to
compare the absolute capacities of the slabs as not all of them failed and the ones that
did may have been subject to misleading conditions during test setup. Nevertheless, it
was still possible to compare the stiffness of the slabs and how much deformation
can be expected under different loads. Figure 14 shows a comparison of the different
load versus displacement relationships for the different variables.
Several findings can be drawn from this plot. The most expected observation is that the
as-built slab has significantly less stiffness than the others (ignoring the data points
before 1.5 mm due to noise). This is expected as it lacks the support from grout or
core fill within the cores at the ends, causing it to deform more under lower loads until
failure. The grouted specimens performed similarly in terms of their displacement
relative to the loading; this could indicate either (1) the partially grouted slabs were
filled to a higher degree of quality than expected or (2) the amount of grout within the
cores does not affect the stiffness of the slabs as much as anticipated. The partially
core-filled slabs showed an interesting response as they started out with increased
deformation under lower loads but stiffened as the load increased. This could be
a result of how the core fills are done; removing the flanges in the hollow core
slabs means the machine (or walls in practice) bear directly on the lower strength
self-consolidating concrete mix, instead of the stronger hollow core mix. Given the
linear nature of some of these relationships, it could be possible to predict the amount
of deformation expected for a slab given a certain load.
Hollow core slabs are an extremely common building product; their many advantages
and current trends in precast construction mean they will continue to be very common in
panel-type construction. The connection being studied in this research is one of the
most common types of connections for these projects; therefore, understanding its
behaviour under increasingly higher loads is necessary to ensure the safety of its
continued use and potential use in larger projects.
The results of the first phase of tests are preliminary and were used as a starting
point and introduction to the behaviour of these slabs under the given type of loading.
The experimental method and test setup used in this research allow for a simulation
as close as possible to real building conditions. The main results yielded in this
first phase of tests included the capacity of as-built hollow core slabs, as well as
the relationship between load and deformation for slabs with a range of variables.
Stiffness relationships can be used to predict and estimate slab deformation for given
loads and compare different end condition options. Ideally, an increased number
of tests would be desired to increase accuracy and further explore the effects of
the different variables. Additionally, increasing the capacity of the machine would
aid in crushing some of the slabs; however, given the extremely high loads the
self-contained machine is under, increasing the machine capacity would be very
expensive, as well as pose a larger safety risk during use. These problems will be
addressed with the planned second round of tests.
With the results of the first phase of testing and the findings that many of the slabs
are too strong for the machine, the second phase test plan was developed. During the
second phase of testing, the extra slabs that were not used (as indicated by Table 1)
will be ripped down to test smaller portions of the slab. These smaller pieces of slab
will focus on the same variables with greater detail, such as focusing more on the
web strength of the slabs and its interaction with core fill. By performing these tests
on a smaller scale, the test apparatus will be able to crush all specimens, allowing
for data that can more easily be compared to other variables. The data will then
be extrapolated to predict slab behaviour under full-scale conditions. The combined
goal of both phases of testing is to present the industry partner with adequate data to
facilitate decision-making on the end conditions of these slabs; understanding how
these slabs perform and what their capacities are will save time and money in their
fabrication and construction.
Acknowledgements The authors would like to thank all parties who helped make this project
possible, including funding sources and financial support through MITACS as well as the Off-site
Construction Centre at UNB. The authors would also like to acknowledge the on-site crew who
helped make the tests possible with their time, effort, and patience; additionally, the work of the lab
technician team at UNB who has helped and supported the testing from the start. Finally, the authors
would like to thank Strescon Ltd. for their continued support through their inputs and guidance on
the project.
Abstract Log houses are an ancient construction technique. The paper presents an
ongoing investigation of the structural response of the log wall corner joint system of
the Butt-and-Pass style under lateral loads. The finite element model was developed
and validated with the experimental results. A parametric study was performed for
assessing the effect of penetration length of the logs on the lateral capacity of the
log wall specimens. Several models were created with variable penetration lengths
ranging from 20 to 70 mm. The study found an inverse relationship between the
penetration length and the lateral stiffness. The analyses showed that the lateral load
resistance capacity and the initial stiffness change dramatically once the penetration
length passes 55 mm. However, the log wall specimens with penetration lengths less
than 45 mm showed better resistance to the lateral loads.
1 Introduction
Log houses are generally known as archaic timber structural systems, with
the logs laid horizontally on top of each other to reach the wall’s desired height.
The stability of this structural system is provided by the corner joints where the
two orthogonal walls meet. These walls are interlocked at the corners using
notches of various styles, such as standard, saddle, dovetail, Butt-and-Pass, and
corner post. Each corner style responds differently under various load conditions.
Investigations were performed in the past for assessing the seismic performance of
log walls with and without corner joints. Monotonic and cyclic tests were performed
to evaluate the energy dissipation characteristics of log walls with the Swedish cope
profile [4].
Fig. 1 Typical Butt-and-Pass configuration
Popovski et al. performed a series of tests on log walls with and without
standard corner joints to study their performance and observed a 70% improvement
in the load capacity of the walls with corner joints [6]. While the experimental work
by Branco et al. on log walls with standard corner joints showed an increase in
stiffness with higher pre-compression loads [2]. Bedon et al. investigated the lateral
resistance of log walls with various corner joints (standard, rounded, and dovetail)
and identified that the wall with a standard joint has more capacity to resist lateral load
and dissipates more energy than the other corner joints [1]. The authors also created
an FE model capable of predicting the lateral response of log shear walls. Further,
Grossi et al. investigated the lateral resistance of log shear walls with standard and
dovetail joints. They found that the friction and the corner joint interlocking influence
the lateral capacity of the log walls [5].
The Butt-and-Pass technique between two orthogonal walls is not strictly a notch but
rather a substitute corner style that avoids cutting holes in the log. In this interlaced finger
corner style, the log layers cross every other layer, as displayed in Fig. 1. The corner
style is the easiest to construct but is prone to separation of the walls at the intersections,
which reduces the lateral stiffness of the wall. The study investigates the effect
of penetration length on the performance of log specimens under lateral loads.
The finite element model of logs with a rectangular profile and standard joint corner
style was created using the commercially available software ABAQUS [8].
The model was developed using 8-node 3D solid elements (C3D8R type) and was
verified against the experimental results available in the literature [3]. The material
properties and the boundary condition of the FE model were consistent with the
tested specimen, comprising five logs [7, 9]. Figures 2a and b show the dimension
details of the test specimen modelled using finite elements and the finite element
mesh, respectively. Once the model was validated with the experimental work [3],
the analysis was extended to perform a parametric study with variable penetration
lengths in the Butt-and-Pass corner joint. The movement of the sill log, attached to
the floor, was restricted (X = Y = Z = 0), while the wall was allowed to move in
the longitudinal log direction under lateral loads. The vertical pre-compression load
was applied to the topmost log keeping log layers entirely in contact.
Figure 3 shows the comparative behaviour of the force–displacement curves
between experimental and finite element modelling (current study) results. It can
be seen from the figure that the load–displacement curve of the experimental study
agrees well with the current study, with an average deviation of 2–3% attributable to the
geometry simplifications and the generalized boundary conditions.
Fig. 3 Force–displacement curve comparison between FE-standard joint model (current study) and
FE-STRU from literature [3]
The validated FE model was modified to create a corner joint with Butt-and-Pass
corner style to understand the response with variable penetration length (D) in the
corner joint. The wall corner comprises 6 logs of rectangular profiles (cross sections
of 160 × 90 mm), three corresponding to the main direction and the rest in the
orthogonal direction. The analysis was performed by pre-compressing the top log
followed by imposing the lateral force on the top log in the main direction. The base
of the wall in the main direction was assumed to be rigid. However, the orthogonal
logs are restrained horizontally but allowed to move vertically for stability concerns.
Figure 4 shows the pre-compressed load and boundary conditions on the wall. The
parametric study was performed at first by defining a base model represented by
“BP”. The penetration length was considered D = 45 mm for the BP model, and
later, the analyses were carried out by changing the value of D ranging from 20 to
70 mm.
4 Results
Lateral performance of the logs’ specimen with Butt-and-Pass corner style was
examined by varying the length of the log penetration at the corner joints.
The results and test details of the control model, BP (D = 45 mm) and other
FE models having D < 45 mm and D > 45 mm are summarized in Table 1. The
results in the table indicate an inverse relationship between the penetration length
and the initial stiffness as well as the lateral capacity, such that by increasing the
penetration length from 20 to 70 mm, the initial stiffness of the wall reduced from
[Figure: lateral force resistance (kN) versus displacement (mm) curves for (a) models BP, BP-0.5, BP-1, BP-1.5, BP-2 and BP-2.5 and (b) models BP, BP+0.5, BP+1, BP+1.5, BP+2 and BP+2.5]
5 Conclusion
A Butt-and-Pass corner style is one of the joinery methods used to construct log
houses, and the performance of this corner style has not been investigated in the past.
The paper investigates the effect of the critical design parameter, the penetration
length, on the lateral capacity of the wall specimen. The analyses found more than
40% variation in the lateral capacity with penetration lengths varying from 20 to
70 mm. It has also been identified that the penetration length affects the initial stiffness
of the wall.
6 Future Work
To the authors’ best knowledge, currently, there are no specific design guidelines
available in the code for the design of log shear walls, implying the need to investigate
the performance of log shear walls with various corner styles under lateral loads. The
experimental and numerical investigations on full-scale log shear walls are underway,
and it is expected that the outcomes of this research project will facilitate the design
guidelines.
Acknowledgements The authors greatly acknowledge the financial support provided by Concordia
University Seed Funds.
References
Abstract In this study, a preliminary design of a lifting and rigging system for
a modular construction company, Nunafab/Illu was conducted. The company is in
the process of building affordable, high-quality, durable and energy-efficient total
precast concrete modular houses in Nunavut. The Off-site Construction Research
Centre (OCRC) at the University of New Brunswick evaluated lifting and rigging
options for the modules and recommended ways to improve the lifting and rigging
process. Therefore, first, the weight and centre of gravity (CoG) of the different
modules were calculated. While the heaviest module was considered for the design,
the CoG location of every module was considered in the calculation of lifting sling
lengths. The length and angle of each sling were calculated in such a way to facilitate
lifting from the CoG and optimize the load applied on each sling. Then the possibility
of different lifting scenarios was evaluated considering the capacities and limitations
of the concrete and lifting accessories. The most limiting condition was the strength
of concrete and the availability of lifting inserts. Finally, suggestions were made to
the company to be able to use more efficient lifting and rigging methods. The first
suggestion was reducing the weight of the module in order to use fewer lifting inserts.
That would reduce the number of levels of spreader bars, improve the lifting tree
assembly process and time, and reduce divergence from the CoG. The second suggestion
was to design and manufacture a customized lifting frame instead of using a
lifting tree.
1 Introduction
[Figure: a Wall, b Roof, c Floor panel sections (200 mm concrete)]
walls will change in height to create the required slope. The walls will extend to the
roofs level and meet the roof at the edges.
Fig. 2 (continued): a Module A, b Module B, c Module C, d Module D
Fig. 2 (continued): e Duplex villa, final stage after erection (AM, BM and CM are mirrored to modules A, B and C)
Each module had to be lifted from the centre of gravity (CoG) to eliminate or mini-
mize any possible rotations and twisting of the modules during the lifting process.
To calculate the CoG of each module, thicknesses, dimensions, and distances were
taken from the provided drawings previously shown in Fig. 2. Two types of mate-
rials, reinforced concrete and insulation, were used. The weights of materials were
not provided; therefore, they were assumed to be common practice material prop-
erties and conservative values. Reinforced concrete was assumed to have a normal
density of 24 kN/m3. Insulation weighs between 0.5 and 3.6 kN/m3 depending on
its material; however, extruded polystyrene insulation, which is commonly used in
sandwich panels and weighs around 0.1–1.0 kN/m3, was assumed to be used here.
The location of partitions and the openings were considered in calculating modules’
weight and centre of gravity. Dimensions, weight, and location of the centre of gravity
for each module are given in Table 1. The deviation of CoG from the geometric centre
is shown in Table 2.
3 Lifting Design
To design the lifting system, it is not necessary to factor the dead load or apply an
impact load factor. The reason is that all the lifting and rigging accessories already
incorporate a safety factor of 4–7, meaning that only 0.14–0.25 of their ultimate
capacity is considered [1]. Therefore, there is no need to further reduce the capacity
by increasing the loads. However, a small dead load factor of 1.1 (roughly 8–10 T) is considered
to account for the added weight from the lifting accessories. Although a minimum of
60° angle was considered between the slings and the spreader beams, conservatively,
a sling angle factor of 1.45 considering an angle of 45° is applied to the loads
carried by inclined slings and connected shackles, hooks, and the load block [2]. The
angular coefficient is considered on the compressive axial, and shear load applied
on the spreader beams as well. Different lifting practices and scenarios are possible.
Only two scenarios, chosen based on their simplicity and practicality for Nunafab/Illu, are
considered in this paper.
The first considered lifting scenario is lifting from the four corners of the module
using four slings and one load block; see Fig. 3. Although this approach simplifies
lifting from the CoG, there are some concerns with it. The slings are tension-only
members meaning that they only carry load when in tension. When lifting from four
corners and considering the CoG, which is not in the middle, the lengths of the four
slings will be different as their distance from the CoG is different. That results in
the two shorter slings carrying the entire load at the start of lifting and, consequently,
rotation of the module. When trying to find the tension load in the four slings using
the static equilibrium equations (ΣFx = 0, ΣFy = 0 and ΣM = 0), although there
is one level of indeterminacy, the calculations result in loading only in three of the
four slings and leaving the fourth sling unloaded. Therefore, mathematically, lifting
with a load block and slings is only practical, if three slings are used and they meet
at the planar location of the CoG. If considering four slings, the slings, lifting inserts
and shackles need to be designed for 80 T when the lifting block is located at 5.5 m
distance and for 54 T when it is located at 10 m distance. 54 T exceeds the capacity
of available lifting inserts into concrete. Therefore, this method is not practical for
the current project.
It is better to use multiple levels of spreader bars instead of using three or four
slings and one block to overcome the high design loads at the lifting points level. In
addition, this method helps to calculate the load more accurately in the first-level slings
and eliminate or minimize the shear stresses at the level of lifting inserts. Therefore,
the possibility of using a lifting tree such as shown in Fig. 4 was evaluated. Four
lifting locations were considered to lift a module at a specific distance from the
edges (the edge distance).
Use of spreader bars or beams is required in this method. If possible, lifting
a spreader bar from the two ends, where the lower-level slings end, is preferred to
using spreader beams. In the case of spreader bars, bars are only required to carry
the compression load from the slings and the shear load at the connections. In the
case of spreader beams, by contrast, lifting eccentrically to the lower-level slings from one
or two middle points induces a negative moment in the beam. Therefore, the beam
has to be designed for compression, shear, and negative moment.
Having four lifting points at each corner with two levels of spreader bars is a
simple way to lift the modules. If only one lifting insert is used at each corner, the
sling from each corner would be oriented vertically, which would eliminate any
shear load and the consequent increase in the required capacity of the lifting inserts. If one
lifting insert does not have enough capacity and more than one lifting point is used at
each corner, there will be some shear load applied to the lifting inserts and shackles.
The length of the slings is calculated such that they meet at the centre point where a
single lifting insert would otherwise have been located. The slings in the second and third levels have to
be inclined in order to place the load block in line with the CoG. The length of slings
on the second and third levels was calculated so that the sling angle with the spreader
bar is greater than 60°. The 60° angle requirement is used to minimize the horizontal
load component in a sling. The horizontal load is also applied as a compressive load
to the spreader bar. First level spreader bars are placed at a 1000 mm distance from
the highest point of the roof. This distance can be reduced to 500 mm. To achieve
the minimum 60° angle requirement [3], the second level spreader bar is placed at
4500 mm from the first level spreader bars, and the load block is placed at 8000 mm
from the second level of spreader bars.
After a few iterations, the required edge distance was calculated depending on
the capacity of lifting inserts and the required number of them at each corner. It is
required to have four lifting inserts at each corner resulting in an edge distance of 500 mm
for modules A, C and D and 450 mm for module B. The edge distance was decided
in order to fully utilize the capacity of lifting inserts and to prevent any concrete
rupture. The edge distance in modules A, C and D increased by 50 mm in order to
have the same length spreader bars in all the modules.
The length of the first and second level spreader bars from where they connect to
the slings below, Lsb1 and Lsb2, are 3086 and 7586 mm long. These numbers can be
rounded to 3100 and 7600 mm. The length of slings at each level for each module is
also calculated and presented in Table 3. The first level slings are not the same length
since the roof is sloped. L11 is the length of slings on the higher roof elevation, and
L12 is the length of slings on the shorter roof elevation; therefore, L11 is shorter
than L12. The length of slings in the first level was calculated based on the 1000 mm
distance of the first level spreader bar from the roof’s highest point. The L11 and L12
values were different in module C, as its roof slopes perpendicular to that of the
other modules (Fig. 2e). Depending on the distance of the CoG from the geometrical centre
(2250, 4500) in each module, slings on one side will be longer than the other side in
the second and third levels so that the load block can be located in line with the CoG
(see Fig. 5).
The factored design loads for levels one, two and three (F1, F2 and F3, respectively)
for each module are shown in Table 4. To calculate F1, the load in the first level slings
or load at the lifting points, the tributary area for each lifting point was considered
according to the location of CoG and the biggest number for each module was
reported. To be conservative, an angular load factor of 1.45 for 45° angle sling was
considered instead of 1.11 for a 60° angle to calculate F2, F3 and the required load block
capacities. As mentioned previously, since only two levels of spreader bars and three
levels of slings are required, the added weight due to rigging accessories is not
significant compared to the self-weight of the modules; therefore, a dead load factor
of 1.1 is considered in the calculations to account for the added weight, which is
roughly 10 T.
required edge distances greater than 100 mm. Therefore, more than one lifting insert
is required at each corner. The exterior thickness of the roof is only 50 mm and to
get to the actual concrete that is capable of carrying load, lifting inserts have to pass
through the insulation while only 200 mm concrete is available past the insulation.
Therefore, the roof has to be modified at the corners in order to accommodate proper
lifting. One possible and practical way is to eliminate insulation in the roof sandwich
panels at the corners and create a section that is all concrete, where lifting inserts
are placed, and maximum potential capacity of the available depth can be used.
Four equally spaced, 40 mm (1½ in.) coil-type lifting inserts with a minimum tensile
capacity of 8 T and shear capacity of 5.3 T at minimum edge distance of 225 mm can
be used. There is a 72° angle at the location of lifting point that results in a shear load
of 2.6 T, which is lower than the capacity of the lifting insert, 5.3 T. This arrangement
requires a 1000 × 1000 mm2 area in modules A, C and D and 900 × 900 mm2 area in
module B at each corner to be fully made of concrete. The edge distance for module
A, C and D is considered to be 250 mm and for module B to be 225 mm. Lift lag
bolts of the same diameter, 40 mm (1½ in.), with a capacity of 9 T at a 45° angle, and
heavy-duty swivel lift plates with 10 T capacities can be used here. Consequently,
shackles, slings or chains can be selected based on their availability from the lifting
company.
Two 3100 mm long round or square, hollow structural section spreader bars with
a capacity of 41 T (at 45° angle) are required at the first level. One 7600 mm long
spreader bar with a capacity of 81 T (at 45° angle) is required at the second level.
Consequently, a load block with a capacity of 162 T is required to lift each module.
Figure 6 presents the required capacities for each of the lifting accessories in tonnes.
The total weight and centre of gravity of each module were calculated first. Each
module weighs about 100 T. Then the load at each of the four corners/lifting points was
calculated considering the distance from the CoG, and the maximum load from the
four corners was considered for design. The length of slings and spreader bars was
calculated to locate the load block in line with the CoG and to meet the minimum
required edge distance in concrete. Consequently, minimum required capacities for
the lifting accessories and sling lengths were calculated. Four lifting points with a
minimum capacity of 8 T each were considered at each corner. It was recommended to
have fully concrete 1000 × 1000 mm2 sections at the corner of the roof component
to meet the edge distance required for lifting inserts. It is recommended to purchase
the lifting accessories from the Canadian providers in this area. The capacity for
lifting accessories such as slings, spreader bars and load block is calculated, but their
type will be dependent on their availability by the provider company.
References
1. Crosby (2016) Crosby general catalog. The Crosby Group, Richardson, TX, USA
2. FEMA (2000) Module 4: lifting and rigging. In: FEMA National US&R response system
structural collapse technician. FEMA, MD
3. Newberry B (1989) Handbook for riggers. Newberry Investment Co. Ltd, Calgary
The Alcan Pioneer Road and Discovery
of Permafrost
J. David Rogers
Abstract During the 1930s, several American and Canadian commissions exam-
ined the feasibility of constructing a continuous highway from the population centers
of western and central Canada as far as Dawson for the Canadians and to Fairbanks as
the desired terminus of the Americans. The scheme lacked political will and sponsor-
ship until Germany attacked the Soviet Union in June 1941 and quickly advanced to
the outskirts of Moscow. Soviet emissaries began lobbying their new North American
allies to construct a series of airfields across northwestern Canada and the Alaskan
Territory to shuttle American aircraft on Lend-Lease to the Soviets. Canadian plan-
ners located 12 airfield sites in their western provinces. The Americans were obliged
to observe their declared neutrality prior to the Japanese attack on Pearl Harbor
and Germany’s declaration of war in December 1941. The Pacific was the largest
theater of military conflict in history stretching > 4500 miles (> 7200 km) between
San Francisco and Yokohama. American military planners realized how vulnerable
Alaska was to airborne attack and seaborne invasion. With America’s entry into the
war against Japan and Germany in December 1941, 28 air bases were constructed
during a 20-month period beginning in the spring of 1942, which connected Great
Falls, Montana with Krasnoyarsk in Siberia, Russia. Known as the ASLIB (Alaska-
Siberian Air Ferry Route), the Great Circle distance between Montana and central
Siberia was 5145 miles (8285 km). The actual flight path was more than 6000 miles
(9600 km), making it the longest air bridge of the Second World War. The Alcan
Highway was the umbilical cord of Canadian and American presence, authority, and
defense of the near-pristine frontier, which accommodated the transfer of ~ 8000
Lend-Lease combat aircraft, 56% of those that reached the Soviets during the war.
1 Introduction
In January 1941, the Japanese Foreign Office had announced it was “greatly
disturbed” over the proposal to construct an Alaskan-Canadian highway, which they
viewed as an “air bridge” across the North Pacific to supply the Soviet Union and
limit Japanese expansion. With the sudden entry of the USA into the global conflict a
year later, Canadian and American leaders agreed that immediate action was needed
to defend the Alaskan frontier from Japanese forces who were rolling over every
Allied target of consequence in the Pacific Theater. The projection of air power
allowed armies to fly over natural barriers like mountains, lakes, and rivers. Naval
power projection was also enhanced by the deployment of long-range patrol and
reconnaissance aircraft, as well as carrier-based attack aircraft. From the outbreak
of the Pacific campaign, there was a growing appreciation of the strategic role of
husbanding petroleum, oil, and lubricants stockpiles and increasing realization that
oil resources and the efficient transport and storage of these resources would play
a dominant role in the conflict.
Some of the first wartime deployments outside of the continental USA were the
engineer regiments dispatched to Canada and Alaska, beginning in February 1942.
The U.S. Army’s Corps of Engineers (USACE) were given responsibility for laying
out and constructing a pioneer road at least 12 ft. (3.7 m) wide with H15 minimum
bridge loading. This path would be followed by finish grading, paving, and drainage
improvements performed by civilian contractors to conform with permanent highway
standards established by the U.S. Public Roads Administration (USPRA).
USACE dispatched seven engineer regiments, each one responsible for surveying
about 350 miles (560 km) of highway right-of-way on aerial photographs (a historic
first), employing three 8-h shifts per day. Using bearings, surveyors would blaze
trails and mark the proposed center line by tying red cloths to bushes and trees. A
plane table party would then traverse the alignment and record relative elevations. If
this preliminary pass along the proposed alignment proved satisfactory, earthmoving
equipment would begin excavating selected alignments working 20 h per day with
four hours reserved each day for maintenance of equipment.
The Corps’ initial assignment was to grade a pioneer road suitable for military vehi-
cles. The person in responsible charge was Brigadier General Clarence L. Sturdevant,
who was instructed to build the pioneer road “as fast as humanly possible” [8].
Civilian contractors working for the Public Roads Administration (PRA) would then
upgrade the road to status of a permanent highway.
The specs for the Pioneer Road were a clearing width of 32 ft. (9.8 m), a maximum
grade of 10%, minimum 50-foot (15 m) curve radius, a minimum surfacing width
(wearing course) of 12 ft. (3.7 m), minimum 3 ft. (0.9 m) wide shoulders, minimum
ditch depth of 2 ft. (0.6 m), a minimum crown of 1 in. (2.5 cm) for each foot (30 cm)
of width, and single-lane bridges designed for 15 tons (13,600 kg) (H15) minimum
axle-wheel loads [1, 7].
The initial deployment sites for the seven regiments are shown in Fig. 1. Survey
parties were supplied with aerial photos to serve as their base maps and to provide
general location and bearings (a historic first). Using magnetic bearings, the survey
parties blazed trails and marked the proposed center line by tying red cloths to bushes
and trees. They were followed by a plane table party who would then traverse the
alignment and record relative elevations. If this preliminary pass along the proposed
alignment proved satisfactory, construction units could begin dozing. In the 1943
construction season, civilian contractors were brought in to improve the pioneer
road and bring it up to USPRA standards.
When the first surveyors of Company D, 29th Engineer Topographic Battalion
and Company A of the 648th Engineer Topographic Battalion hit the ground in
February of 1942, the base maps were simply aerial photographs. Only one map was
provided of the area from Dawson Creek to Fort Nelson. The paucity of maps and
snow up to 18 in. (0.46 m) deep posed the biggest challenges for the surveyors. Other
problems were mosquitoes, gnats, and yellow jaundice. The surveyors also had to
comply with orders to have the pioneer road service the proposed airfield sites in the
Yukon Territory and Alaska, as well as avoiding steep terrain and muskegs as much
as possible.
Survey parties normally consisted of 1 officer and 9 surveyors. The first
group to enter the forest would split up into teams and venture out for a mile or two
to see which alignments held the greatest potential. The teams would then back track
and reunite. This separating and coming back together, along with updated aerial
photos, was the only practical way for the surveyors to blaze the best path. Another
survey team would follow behind, running a level survey and taping the centerline.
Fig. 2 During the summer of 1942 the Army Engineers employed muscle and determination to
blaze a pioneer road extending north of Dawson Creek, Yukon Territory. Source US NARA
Shortly behind them came the transit team that would record the centerline and
elevations of the proposed road. The teams would average about 2–4 miles (3–6 km)
per day. They soon found it advantageous to hire native Inuit people as designated
guides.
The 341st Engineer Construction Regiment (Fig. 2) was the first to arrive in the
Yukon, reaching Dawson Creek on March 10, 1942. They started moving north to St.
John. Their initial goal was to get across the Peace River just north of St. John before
the spring thaw. Otherwise, they had no means of transporting their equipment any
farther.
By June of 1942, seven engineer regiments were on the ground constructing the
pioneer road. These included the 18th and 35th Engineer Combat Regiments and
five engineer general service regiments, which included the 93rd, 95th and 97th
African American regiments and the 340th and 341st Engineer General Service
Regiments (the American Army was re-organized in January 1943, and regiments
of 1000–2000 men were replaced by battalions of four or more companies with
400–1000 men). In early 1942, each engineer regiment was assigned a strip of land
approximately 350 miles (560 km) long. Their goal was to reach the next regiment’s
pioneer road before winter set in.
Within each regiment, the companies constructed a portion of the road using a
leap-frog method. One company would grade their assigned section, another just
ahead of them about 30 miles (48 km) distant. When a company worked up to the
next company’s road, then they would “leap ahead” and start again. During the spring
and summer months, daylight could last up to 20 h with twilight for the remaining
4 h. This allowed the construction crews to work 3 shifts of 8 h each, round-the-clock.
The USACE units began their work with light pre-war equipment, such as Cater-
pillar (Cat) D-4 tracked dozers until the arrival of heavier equipment, like Cat D-7’s,
which began arriving in the late summer of 1942. For the pioneer road, the Army
Engineers were obliged to make do using hand saws, hand axes, small dozers and
Fig. 3 Dozer drifting loose soil into a 1940 Chevrolet 4 × 4 1–1/2 ton (1360 kg) capacity dump
truck. It is hauling fill for an airport access road in May 1942. Source US NARA
2-axle 1–1/2 ton (1360 kg) dump trucks (Fig. 3). Building the road required brute
force and considerable creativity, often using the natural materials that were readily
available.
The biggest surprise on the project was the ability of modern earth moving equip-
ment to make short work of excavating the pioneer road across mountains (Figs. 4
and 5). A few years earlier, the same work would have required time-consuming blasting and
tunneling.
Dr. Siemon (“Si”) William Muller (1900–70) was born in Blagoveshchensk in eastern
Russia between Siberia and Manchuria in May 1900, where his Danish father was
working on the Trans-Siberian Railway’s telegraph line. When the Russian Revolu-
tion began in October 1917, Si was enrolled in the Imperial Russian Naval Academy
at Vladivostok. He later escaped to Shanghai, where he was employed by an Amer-
ican firm and learned to speak English. In 1921, he immigrated to the USA and
followed his older brother Bill to the University of Oregon where he received his
BS in geology in 1927. He received a graduate assistantship at Stanford working for
Prof. James Perrin Smith in paleontology and stratigraphy, receiving his master’s in
1929 and Ph.D. in 1930.
Soon after graduation, he joined Stanford’s faculty as an assistant professor and
was promoted to associate in 1936 and full professor in 1941. His career focused on
Fig. 4 Maximum 10% grade on one stretch of the rough graded pioneer road in 1942. Source US
Wartime Information Board
Fig. 5 This grade was subsequently lowered 22 ft. (6.71 m) by one of the U.S. Public Roads
Administration’s civilian contractors in 1943–44. Source US NARA
Fig. 6 Dr. Siemon W. Muller (1900–70), the originator of the term permafrost and the author of
three books on the subject in 1944, 1947, and 2008. Sources ASCE and Branner Library at Stanford
University
paleontology to interpret the origins and history of stratified deposits of the Meso-
zoic and Paleozoic Eras in western North America, with particular emphasis on the
stratigraphy of the Triassic Period in west central Nevada.
In 1942, Stanford Geology Professor Siemon W. Muller (Fig. 6) was selected
by the U.S. Geological Survey’s Military Geology Unit (USGS-MGU) to develop
construction details appropriate for the frozen ground conditions in Alaska. Muller
coined the term “permafrost” to describe permanently frozen subsoil beneath cover
of vegetation and topsoil (known as the “active layer”). The active layer insulates the
frozen subsoil.
Shortly after the war Muller was cited by the Army for his unique contributions
to the war effort with the Freedom Medal, the highest award available to civilians
(renamed the Presidential Medal of Freedom in 1963). In 1947, Muller self-published
the first open-source text in English describing the engineering challenges of building
on permafrost [5]. His last textbook was published posthumously in 2008 by the
American Society of Civil Engineers (Fig. 6).
Figure 7 provides valuable insights on the most common sources of near-surface
ground water in arctic areas. Note how permafrost zones tend to create ground-
water flow boundaries and selective flow conduits. Figure 8 shows some of Muller’s
sketches for treatment of muskeg pockets. He felt that active drainage zones were
best addressed by over-excavation and back-filling of active seeps with free-draining
gravel.
Once frozen, the gravel behaved more favorably than other materials and served
to broaden the base of the highway. Care was taken to remove excess quantities
of fine sand and silt which tended to heave when the fill froze. Habitable struc-
tures constructed with conventional interior heating began suffering from differential
thawing like that sketched in Figs. 8 and 9.
With America’s entry into the Second World War in December 1941, American
and Canadian military engineers began planning the establishment of an “aerial
Fig. 7 Common sources of near-surface ground water in arctic areas
Fig. 8 Muller’s sketches of solutions for treatment of muskeg pockets (modified from Muller [5])
The Alcan Pioneer Road and Discovery of Permafrost 405
Fig. 9 Heat radiated from habitable structures often triggered plastic deformation of thawed ground,
resulting in differential settlement. Source modified from Muller [5]
bridge” from somewhere near the US-Canada border into central Siberia, which
would require a pioneer road, an Alaskan and Canadian (Alcan) highway, and a
natural source of petroleum fuel near Norman Wells in the Northwest Territories.
This became the Canol Project, which wasn’t completed until March 1945 [3, 10].
It was assumed that civilian contractors would upgrade the pioneer road to
USPRA standards, but they soon encountered unforeseen problems with main-
taining the highway improvements on partially frozen ground, which
lost significant shear strength when it thawed, ruining much of the graded surfaces
(Fig. 10).
The principal reason Si Muller was dispatched to Alaska in 1943 was so he could
study the experiences of the Russians in Siberia and educate the American and
Canadian engineering geologists working with construction crews on developing
mitigation protocols for operations and maintenance of the paved right-of-ways,
which were suffering the ill effects of the active layers at the pavement shoulders
thawing out and losing strength.
In 1942, the USACE asked the USGS-MGU for any reference materials on
construction issues or problems on frozen ground [4, 5]. The MGU discovered that
the only published works on construction in frozen ground had been by the Russians
in Siberia, so they searched for an American geologist fluent in Russian, which led
to Professor Siemon Muller at Stanford.
Muller was asked to affiliate with the USGS-MGU for the duration of the war, and
soon found himself attached to the Army’s Air Installation Division, Headquarters
Alaskan Division, U.S. Army Air Forces (USAAF) Air Transport Command (ATC)
out of Elmendorf Army Airfield in Anchorage. He was given the pay of a lieutenant
colonel but did not display any rank on his uniform (Fig. 6-right). Within a year of
his arrival in Alaska, he produced a training manual for USACE titled “Permafrost
or permanently frozen ground, and related problems” [4].
In many cases, the pavement subgrade was almost completely undermined by thawing
of permafrost, which resulted in extensive sloughing of the saturated soil horizons
or pockets of thawed liquid within the active zone. This problem was often caused
by narrow shoulders and/or over-excavation of roadside drainage ditches.
Figure 10 illustrates how seasonal percolation was restricted to the active zone, just beneath the ground surface. The problem was exacerbated by drifting in fill from areas adjacent to the highway to fill the collapsing ditch, shown as "receding permafrost" in the lower pane of Fig. 10.
When the ground begins to freeze, seasonal percolation becomes blocked, leading to the formation of "frost blisters," as shown in Fig. 11. These blisters occasionally "erupt," releasing water to the surface, which can re-freeze and enlarge the frost blister.
Fig. 10 Seasonal expansion of roadside drainage problems in permafrost (modified from Muller
[5])
To overcome the liquid mud created by disturbed muskegs, the Army Engineers laid
corduroy, just like the Romans (Figs. 13 and 14).
A corduroy road surface is constructed by first laying piles of brush, then logs,
then more brush, and more logs, and finally covering the sandwiched mass with a
layer of gravel.
In one stretch, two miles (3 km) of corduroy was laid. Overall, over 100 miles
(160 km) of muskeg was corduroyed in this manner along the Alcan.
Fig. 12 Old tractor trails excavated on permafrost allowed water to pond and infiltrate zones
adjacent to the Alcan Highway, undermining the pavement subgrade (modified from Muller [5])
In May and June 1943, about 100 miles (160 km) of the Alcan between Burwash Landing and Koidern, Yukon, became nearly impassable as the permafrost thawed because it was no longer protected by its delicate layer of vegetation.
Figures 15, 16, and 17 illustrate some of the most common problems with
permafrost thawing adjacent to paved surfaces and shoulders. A corduroy road was
built to restore the route, and corduroy still underlies old sections of highway
in this area. Considerable problems were often triggered by the piling of snow
along paved shoulders, which created additional insulation during the winter, but
formed “pockets” of seepage perched on depressions of the active zone during the
summertime.
Thawed permafrost becomes swampy when its thermal insulation is diminished.
When constructing a road, it was common procedure to remove the topsoil. But when the active layer was excavated, the underlying permafrost thawed out and became increasingly plastic, with degraded bearing capacity. Stabilizing berms on
either side of the highway fill prism helped to alleviate this problem, as shown in
Fig. 16.
Fig. 13 Tracked dozer swallowed up by a muskeg hole. Muskegs sometimes appeared to “swallow”
the heavier equipment, then exhibited thixotropy as the silty mud mixture "set up". Unknown date. Source US NARA
Fig. 14 Wooden corduroy road surface under construction in 1943. Source US NARA
Fig. 15 Snow piled along shoulders of Alaskan airfield runways often prevented the active layer
from thawing but led to premature deterioration of the pavement shoulders (see the right side of
these panels from Muller [5])
Fig. 16 This illustrates the practicality of employing free-draining berms along the flanks of the
highway on sloping ground (modified from Muller [5])
Fig. 17 This drainage ditch was cut by a dozer to channel discharge away from a culvert to the
Tanana River in 1942. Within a year, this seemingly innocent ditch had excavated itself over 80
vertical ft. (24.4 m), simply by exposing permafrost to thaw (modified from Muller [5])
4 Conclusions
All the locals thought that a pioneer road through the wilderness would be impossible
to sustain. Not only did the USACE construct a road through the Yukon Territory to Alaska, but they also completed the project in a record time of 8 months and 11 days, finishing 1,543 total miles (2,483 km) of pioneer road.
A total of 10,670 engineers were used to construct and improve a total of 1685 miles (2711 km) of highway [8]. Forty-one American and 13 Canadian contractors assisted in the construction and improvement of the paved public highway. November 20, 1942, marked the official opening of the pioneer road of the Alcan Highway. The ALSIB air
bridge was never interdicted by enemy action, and the discovery of permafrost during the construction of so much engineering infrastructure has impacted the design and construction of countless facilities located in arctic and subarctic areas throughout the past 75 years. The completed Alcan Highway was not opened to the public until 1948 and has been in continuous operation since that time.
References
1. Cohen S (1992) Alcan and Canol. Pictorial Histories Publishing Co., Inc, Missoula
2. Eckel EB (ed) (1958) Landslides and engineering practice. Highway Research Board Special
Report 29, NAS-NRC Publication 544, Washington, DC
3. Finnie R (1945) Canol: the Sub-Arctic pipeline and refinery project constructed by Bechtel-
Price-Callahan for the Corps of Engineers. U.S. Army 1942–45, San Francisco
4. Muller SW (1944) Permafrost; studies in connection with engineering projects in Arctic
and Subarctic regions. Headquarters, Alaskan Division, US Army Air Forces Air Transport
Command
5. Muller SW (1947) Permafrost or permanently frozen ground and related engineering problems.
J. W. Edwards, Inc., Ann Arbor
6. Muller SW (2008) Frozen in time: permafrost and engineering problems. In: French HM,
Nelson FE (eds) ASCE Press, Reston
7. Nelson HE (1949) Military road construction in foreign theaters. National Research Council,
Washington, D.C.
8. Richardson HW (1943) Alcan: America’s glory road. Engineering News-Record, New York
9. Rogers JD (2019) Construction of the ALCAN highway in 1942. Powerpoint lecture, Military
Geology (GE 5642) Missouri S&T. https://web.mst.edu/~rogersda/umrcourses/ge342/Alcan%
20Highway-revised.pdf
10. Sill VR (1947) American miracle. Odyssey Press, New York
Prediction of Rework on a Construction
Site Utilizing ANN Integrated into a BIM
Environment
1 Introduction
2 Literature Review
For rework reduction, Fayek et al. [2] advocated using a rework tracking system to
capture real cost and hours for each field rework incidence. In addition, the research
noted that the approach may be updated and utilized during the engineering phase
of the project to improve the outcomes. Love et al. [5], however, focused on creating
an error management culture through a change process that incorporates sharing
experience and disclosing mistakes. This change approach aimed to motivate team
members and improve their behaviour. The utilization of BIM for rework reduction
has been explored by Hwang et al. [1], where the research assessed the impact of
BIM on rework in the construction sector, quantifying the problem and its impact
on project type and developing ways to decrease rework. Accordingly, eight sources of rework were identified, including design errors and variations, and the utilization of BIM was found to reduce rework and improve project cost and time performance.
Lee and Kim [6] studied two coordinating algorithms to investigate conflict detec-
tion. Their research compares sequential and parallel coordination strategies for
MEP coordinating productivity and information exchange among project members.
The results indicated that the parallel process is three times slower than the sequen-
tial approach in terms of coordination productivity; in addition to that, it required
subcontractors to construct models early on without enough data, causing conflicts
and inaccuracies and sometimes timetable delays. Zaki and Khalil [7] aimed to solve the clash detection problem by using AR technology and BIM to build clash-free models and superimpose 3D models on 2D drawings using a QR code system.
Later, Kubicki et al. [8] evaluated the utilization of interactive devices to improve 3D
coordination in BIM. A video projection and a smart touch-based white board were
used in trials, and the findings of the trial demonstrated that interactive devices can
improve 3D model viewing and analysis as well as cooperation and coordination. The
utilization of BIM for clash detection has been addressed by Hu et al. [9]. The developed framework enhanced clash detection and reduced clashes by 17% when tested on a real project. The utilization of BIM has been examined as well by Chahrour et al.
[10], and the cost reductions that arise from its application in real-life projects were evaluated; a saving of 20% of the contract value was calculated for a case study.
According to the literature review, rework is one of the leading causes of cost overruns and project delays in the construction sector, and recent research shows that coordination issues contribute to rework. However, much of the literature on coordination difficulties and disputes deals with solutions during the design stage, although many coordination issues develop during the construction stage. Despite the gains made in rework and clash detection studies, the literature covers neither the coordination problems of the construction stage nor the combination of BIM and ANN in the field of rework, which highlights the gap in the literature.
This paper’s main purpose is to provide an interactive framework that predicts the
severity of rework and highlight project elements that might experience alleged
rework. The framework shall help decision-makers minimize or eliminate rework,
reducing potential delays and cost overruns. Due of a lack of survey data, the research
is limited to programmes with certain delivery methodologies. The scope of work of
this research can be summarized in Table 1.
The research framework is divided into three sections. First, in-depth interviews with industry engineers were used to acquire the initial data. Following that, a survey was created with questions for experts in the field to answer, based on the interviews and earlier research. The second part involves analysing the survey results and compiling a database. The model development necessitates a novel integration of BIM and ANN, accomplished through the use of Autodesk Revit® macros, C# code and an Excel neural network prediction model. The last stage comprises the output of the BIM/ANN model. The procedure is depicted in Fig. 1.
As presented in Fig. 1, the data collection process began with interviews. The interviews were conducted to gather knowledge regarding rework in construction. Accordingly, three semi-structured interviews with experts in the construction industry were conducted. Due to COVID-19 health restrictions, the interviews were conducted by phone in March 2020. Each interviewee was given around 30 min for an open discussion. The respondents' replies were recorded for future reference. Appendix 1 contains the interview questions, whereas Table 2 contains the details regarding the interviewees.
The interviews were divided into three main sections: the initial focus was on the
key reasons for rework which included wrong execution, lack of coordination, the
presence of several subcontractors, lack of supervision and variations. The second
section covered the project variables that may contribute to rework which included
the project type, the project contract type, the project delivery method, the number
of subcontractors available on-site, the project complexity and the project delivery
strategy. As for the third and final section of the interview, it focused on reducing
rework on-site, with respondents citing improved coordination as a key to reducing rework. The interview findings highlighted the reasons for rework in Egypt and its main contributors. Coordination issues are one of the main causes of rework, as presented by the findings and established in the literature as well. The interview findings were then used to generate the survey questions and, accordingly, to create a database summarizing different combinations of project key factors and how each combination contributes to rework.
4.1.2 Surveys
As demonstrated in the previous section, the interview results helped generate a survey to collect historical data on past projects and to find correlations between project characteristics and rework. Using Cochran's formula [13], a sample size of 31 is needed to reflect the population. The survey was completed by 41 people.
The survey starts by asking questions related to the individual, including career, experience, etc. The respondent then chooses a past project they have experience in and completes the survey. Appendix 2 contains the link for the survey. The responses come from a variety of technical/engineering backgrounds. As a result, the database incorporates opinions from a variety of disciplines on on-site coordination challenges. Additionally, the results reveal that 30% of respondents had more
than 20 years of industry experience, which bolsters the survey results and hence
the database. The survey then focuses on four incidents that were recognized during
the interviews. The incidents involved coordination issues on-site, whether between
separate trades or different subcontractors. The respondent for each incident is then
asked to provide the frequency of occurrence of the incident, the number of times
the incident resulted in rework, the average impact on the schedule per incident and
the average impact on the budget cost per incident.
For each response provided, the following calculations are performed to process the data and generate a database (a brief computational sketch follows the list).
1. Probability of occurrence of rework, as demonstrated in Eq. (1) and presented by Dosumu [14]:

P = n(A)/n (1)
4. Global impact on project’s duration which presents the global impact if all
incidents happened and caused rework.
5. Global impact on project’s cost which presents the global impact if all incidents
happened and caused rework.
6. Total number of relationships in the project; the total number of relationships is
a number presenting the complexity of the project.
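The aggregation rules for the global impacts are only described qualitatively above, so the following C# sketch is merely illustrative: it computes the probability of rework per Eq. (1) for each reported incident and, as an assumption, takes the global duration and cost impacts as simple sums of the per-incident average impacts; the record type and all values are hypothetical.

```csharp
using System;
using System.Linq;

// Hypothetical record for one coordination incident reported in a survey response.
record Incident(int Occurrences, int ReworkCases, double DurationImpactPct, double CostImpactPct);

static class DatabaseRowBuilder
{
    // Eq. (1): P = n(A) / n, read here as (occurrences that caused rework) / (total occurrences).
    static double ProbabilityOfRework(Incident i) =>
        i.Occurrences == 0 ? 0.0 : (double)i.ReworkCases / i.Occurrences;

    static void Main()
    {
        var incidents = new[]
        {
            new Incident(Occurrences: 10, ReworkCases: 3, DurationImpactPct: 2.0, CostImpactPct: 0.5),
            new Incident(Occurrences: 5,  ReworkCases: 2, DurationImpactPct: 1.5, CostImpactPct: 0.3),
        };

        // Assumed aggregation: global impacts if all incidents happened and caused rework,
        // taken as simple sums of the per-incident average impacts.
        double globalDurationImpact = incidents.Sum(i => i.DurationImpactPct);
        double globalCostImpact = incidents.Sum(i => i.CostImpactPct);

        foreach (var i in incidents)
            Console.WriteLine($"P(rework) = {ProbabilityOfRework(i):P1}");
        Console.WriteLine($"Global duration impact = {globalDurationImpact}%, global cost impact = {globalCostImpact}%");
    }
}
```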
After the data collection stage comes the model development stage as presented
in Fig. 1. The developed model, as demonstrated in Fig. 2, is composed of three
modules: input, processing algorithms and outputs. The inputs module is divided
into three sub-modules: (a) three-dimensional coordinated BIM model for a specific
project, (b) user-defined project key elements and (c) a database of survey findings.
The processing algorithm module is divided into three sub-modules: (a) relationship
definition, (b) relationship calculation and (c) ANN prediction model which calcu-
lates the likelihood of rework and its cost and time impact. As for the outputs module,
it includes two sub-modules: (a) severity of rework as forecasted by the ANN model
and (b) the location of rework-related project items.
through multiple-choice lists. Thus, the required combination of neurons for the
project is known and will be used as input neurons in the ANN prediction model.
Database
The database summarizes the survey findings and identifies several combinations
of project key criteria, each of which correlates to a distinct likelihood, impact on
project schedule and impact on rework cost.
Relationship Definition
The relationship definition sub-module defines two algorithms: a query algorithm
and a building algorithm using Autodesk Revit® macros. For each project element
that occupies a geometric space in the BIM project, an ID is assigned. When the query
algorithm finds an element, the building algorithm identifies its bounding box. As a
consequence, the physical relationship between each element and the other elements
in the project is defined. Following that, the elements' IDs are entered across the first row and column of a new Excel sheet to create a matrix.
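The macro code itself is not reproduced in the paper; the following is only a rough C# sketch of the idea just described, with simple hypothetical Box and ProjectElement records standing in for the Revit® bounding-box and element types, and with "physical relationship" assumed to mean that two elements' axis-aligned bounding boxes overlap.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-ins for the Revit element and bounding-box types.
record Box(double MinX, double MinY, double MinZ, double MaxX, double MaxY, double MaxZ);
record ProjectElement(int Id, Box Bounds);

static class RelationshipDefinition
{
    // Assumed relationship test: two elements are "related" when their
    // axis-aligned bounding boxes overlap (or touch) in all three directions.
    static bool AreRelated(ProjectElement a, ProjectElement b) =>
        a.Bounds.MinX <= b.Bounds.MaxX && b.Bounds.MinX <= a.Bounds.MaxX &&
        a.Bounds.MinY <= b.Bounds.MaxY && b.Bounds.MinY <= a.Bounds.MaxY &&
        a.Bounds.MinZ <= b.Bounds.MaxZ && b.Bounds.MinZ <= a.Bounds.MaxZ;

    // Builds the binary relationship matrix used in the next sub-module:
    // entry [i, j] is true (R) when elements i and j are related, false (NR) otherwise.
    static bool[,] BuildMatrix(IReadOnlyList<ProjectElement> elements)
    {
        int n = elements.Count;
        var matrix = new bool[n, n];
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                matrix[i, j] = matrix[j, i] = AreRelated(elements[i], elements[j]);
        return matrix;
    }

    static void Main()
    {
        var elements = new List<ProjectElement>
        {
            new(1, new Box(0, 0, 0, 2, 2, 2)),
            new(2, new Box(1, 1, 1, 3, 3, 3)),      // overlaps element 1
            new(3, new Box(10, 10, 10, 12, 12, 12)) // isolated
        };
        var m = BuildMatrix(elements);
        Console.WriteLine($"1-2 related: {m[0, 1]}, 1-3 related: {m[0, 2]}");
    }
}
```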
Relationship Calculation
The calculation section depends on the algorithms for relationship definition.
Numerous hidden computations are conducted to determine the overall number of
relationships in the project. To begin, a binary relationship matrix is generated using
previously described relationship algorithms, where NR means no relationship and
R signals a relationship. The resultant matrix is a two-dimensional n × n matrix,
where n is the number of elements in the native BIM project's query list. Following the matrix creation, an element's relationship count is computed by adding up all
the relationships connected with that element in the native BIM project. Due to the
existence of duplicate connections, the straightforward summation of R for all compo-
nents cannot represent the overall number of relationships in the project. Therefore,
to prevent overstating the project’s complexity, the total number of relationships must
be reduced by excluding duplicate relationships.
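Continuing the Boolean matrix from the previous sketch, the hypothetical C# below illustrates the counting step: an element's relationship count is the sum of its row, and the project total is taken from the upper triangle only so that each related pair is counted once and duplicate entries are excluded.

```csharp
using System;

static class RelationshipCalculation
{
    // Relationship count for one element: sum of the R entries in its row.
    static int CountForElement(bool[,] matrix, int index)
    {
        int n = matrix.GetLength(0), count = 0;
        for (int j = 0; j < n; j++)
            if (j != index && matrix[index, j]) count++;
        return count;
    }

    // Total relationships in the project: each related pair counted once
    // (upper triangle only), which removes the duplicate i-j / j-i entries.
    static int TotalRelationships(bool[,] matrix)
    {
        int n = matrix.GetLength(0), total = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (matrix[i, j]) total++;
        return total;
    }

    static void Main()
    {
        // Toy 3 x 3 binary matrix: elements 0 and 1 are related.
        var m = new bool[3, 3];
        m[0, 1] = m[1, 0] = true;
        Console.WriteLine($"Element 0 count = {CountForElement(m, 0)}, project total = {TotalRelationships(m)}");
    }
}
```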
ANN Prediction Model
Due to the restricted number of respondents, an ANN model was built to cover all
possible combinations of project key factors. The ANN model was designed to predict
the likelihood and effect of rework based on past project data. As a consequence,
the model can anticipate any combination not covered by the survey. As shown in
Fig. 3, the ANN model has three layers: input, hidden and output. There were 41 cases; however, one was judged an anomaly, leaving 40 cases: 35 training cases (87.5%) and 5 testing cases (12.5%). Datasets for training and testing were chosen
at random to help the network learn. The model has seven input neurons, two hidden
layers of five neurons each and three outputs. The seven input neurons are the six
project key factors including project type, contract type, delivery method, number of
subcontractors, complexity and delivery strategy. The output layer comprises three
output neurons: probability of occurrence, impact on duration and impact on cost.
A multilayer feedforward network was used to build the ANN model. Each layer was combined linearly by multiplying its input nodes by their weights. The sigmoid/
logistic activation function was used to provide a positive output between 0 and 1 as
demonstrated by Jamel and Khammas [16]. Following that, the error was determined
using Equation (4) presented by Korkmaz [17].
The model ran numerous times using Evolver 8.2 on Excel to set the network
weights that would minimize error, and the best range was found to be between −
3 and 3 as the function’s gradients become insignificant as the graph grows flatter
outside of that range. The Evolver’s aim function was to minimize the total training
error for probability, time and cost. The error results are demonstrated in Table 3.
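For orientation only, the following C# sketch runs a forward pass through the architecture described above (seven inputs, two hidden layers of five sigmoid neurons, three outputs); the weights are random placeholders drawn from the reported [−3, 3] range and zero biases are assumed, whereas in the study the weights were tuned with Evolver to minimize the training error.

```csharp
using System;

static class ReworkAnn
{
    static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    // One fully connected layer: output[j] = sigmoid(bias[j] + sum_i input[i] * w[i, j]).
    static double[] Layer(double[] input, double[,] w, double[] bias)
    {
        var output = new double[bias.Length];
        for (int j = 0; j < bias.Length; j++)
        {
            double sum = bias[j];
            for (int i = 0; i < input.Length; i++) sum += input[i] * w[i, j];
            output[j] = Sigmoid(sum);
        }
        return output;
    }

    static double[,] RandomWeights(Random rng, int rows, int cols)
    {
        var w = new double[rows, cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                w[i, j] = rng.NextDouble() * 6.0 - 3.0;   // placeholder weights in [-3, 3]
        return w;
    }

    static void Main()
    {
        var rng = new Random(0);
        var inputs = new double[7];                  // 6 encoded project key factors + relationship count
        for (int i = 0; i < 7; i++) inputs[i] = rng.NextDouble();

        var h1 = Layer(inputs, RandomWeights(rng, 7, 5), new double[5]);
        var h2 = Layer(h1, RandomWeights(rng, 5, 5), new double[5]);
        var outp = Layer(h2, RandomWeights(rng, 5, 3), new double[3]);

        Console.WriteLine($"Probability = {outp[0]:F3}, duration impact = {outp[1]:F3}, cost impact = {outp[2]:F3}");
    }
}
```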
Following the completion of all hidden calculations and the execution of all
processing algorithms discussed in the preceding sections, the model output is gener-
ated. The outputs module synchronizes the native BIM project with the Excel-based
ANN model. The integration process is broken down into steps, which will be
explained in more depth below. The outputs module has two sub-modules: estimating
rework severity and visualizing the model.
Severity of Rework
The outputs module’s first sub-module is related to rework severity. Severity is
defined in terms of likelihood of an event occurring and its impact. Equation (5)
shows Dosumu [14]’s severity quantification.
This sub-module integrates BIM with ANN on several levels. As previously elaborated, the input neurons comprise the six project key factors, and the number of relationships in the native BIM project is represented by the seventh input neuron; in this way, BIM and ANN are combined. The input neurons are either defined by the user in the BIM environment or generated using the relationship definition and calculation sub-modules. After the input neurons are integrated, the ANN model sub-module predicts the three output neurons, which are probability, impact on duration and impact on cost. The ANN model's output neurons are then communicated back to the BIM model using an algorithm designed for this purpose, so BIM and ANN are integrated again. Finally, the user is presented with a user form that shows the degree of rework.
Model Visualization
The second sub-module of the outputs module is model visualization. Following
the output of the severity of rework sub-module, the user should also be able to identify project elements that may require rework. On the basis of basic computations,
the visualization sub-module distinguishes between project elements and shows their
contribution to rework. The algorithm starts by counting the elements in the list
generated by the relationship definition query algorithm and obtaining the number
of relationships per element computed in the relationship calculation sub-module.
Then, three ranges are specified to represent green, yellow or red zones. The green
zone is mild, the yellow zone is moderate, and the red zone is severe, i.e. substantial.
The ranges are made as follows:
1. The green zone is for elements having between zero relationships and one-third of the total number of project elements.
2. The yellow zone is for elements having a number of relationships between one-third and two-thirds of the total number of project elements.
3. The red zone is for elements having a number of relationships greater than two-thirds of the total number of project elements.
After creating the ranges, the algorithm iterates through the elements list, deter-
mining where each element is located. The elements are then coloured, and a new
BIM project is built with the highlighted elements.
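A minimal C# sketch of this colour-coding step follows; the one-third and two-thirds thresholds mirror the ranges above, while the element list and the printout are hypothetical stand-ins for the Revit®-side colouring operations, which are not shown.

```csharp
using System;
using System.Collections.Generic;

enum Zone { Green, Yellow, Red }   // mild, moderate, severe contribution to rework

static class ModelVisualization
{
    // Classifies an element by comparing its relationship count with thresholds
    // set at one-third and two-thirds of the total number of project elements.
    static Zone Classify(int relationshipCount, int totalElements)
    {
        double lower = totalElements / 3.0, upper = 2.0 * totalElements / 3.0;
        if (relationshipCount <= lower) return Zone.Green;
        if (relationshipCount <= upper) return Zone.Yellow;
        return Zone.Red;
    }

    static void Main()
    {
        int totalElements = 90;
        // Hypothetical (element ID, relationship count) pairs from the calculation sub-module.
        var counts = new Dictionary<int, int> { { 101, 12 }, { 102, 45 }, { 103, 80 } };

        foreach (var (id, count) in counts)
            Console.WriteLine($"Element {id}: {Classify(count, totalElements)} zone");
    }
}
```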
This section outlines a case study conducted in Cairo, Egypt, as part of the model’s
validation process. The code was applied to the 3D Revit® model for the case study
project. Additional information on the project was gathered to ensure code compliance and to compare the outcomes of the framework outlined above with what occurred in reality.
The validation case study is Hesham Barakat Station. The elevated metro station
is part of the construction of Cairo’s metro network. The station is divided into
three levels: the concourse (ticket level), the under platform and the platform as
shown in Fig. 4. Numerous trades were involved in the construction of the station,
including mechanical, electrical, architectural and civil. The data collected from the
site included 3D Autodesk Revit® models for each trade and project key factors as
presented in Table 4.
As stated previously, the 3D models acquired are for each trade. The 3D model must
be properly synchronized among all trades in the inputs module. So the framework
application started with a 3D coordinated BIM model created in Autodesk Revit®. To generate the 3D coordinated model, the architectural model is opened first and linked with the structural model to generate a new model including both the architectural and structural trades. Following that, the other models are linked and synchronized in the same manner, resulting in a 3D model with all trades, as presented in Fig. 5.
Once the code is accessed from the macro manager and launched, the user is provided
with the user forms and is asked to identify the project key factors, as shown in Fig. 6. After the relationship definition sub-module takes place and the relationships are defined, the relationship calculation sub-module takes place. The binary matrix generated is 7096 × 7096, and the number of relationships for each element is determined by summing the relationship entries in that element's row. Following that, the total number of relationships for the project was calculated, after removing the duplicate relationships, totalling 30,974.
The previously developed ANN prediction model is then employed; it takes the seven input neurons given by the user inputs and the relationship calculation methods. The input neurons are as follows:
1. Neuron 1: Project Type (Infrastructure)
2. Neuron 2: Contract Type [Re-measured (Unit Price)]
3. Neuron 3: Delivery Method (Joint Venture)
4. Neuron 4: No. of Subcontractors (3 or more)
5. Neuron 5: Complexity (Complex)
6. Neuron 6: Fast-Tracked/Not (Fast-Tracked)
7. Neuron 7: No. of Relationships in the Project (30,974).
Then, as described previously, the ANN prediction model is run and forecasts the output neurons, which are the probability of occurrence of rework, the impact on the project's duration and the impact on the project's budget cost. Accordingly, the severity was calculated as presented in Table 5.
The developed approach then returns the output neurons and their related compu-
tations to the BIM model using the designed technique. As a result, the user is
presented with a form showing the severity of rework the project may experience as
demonstrated in Fig. 7.
As demonstrated in Fig. 7, the chance of rework is 28.31%, with an estimated percentage increase in the project's duration of 14.5% and an expected percentage increase in the project's budget cost of 0.06%. The anticipated outcomes are determined by the number of computed relationships and the user-defined project key factors utilized as input layers in the ANN prediction model. This suggests that 28.31% of coordination events detected during the course of the project are expected to require rework. According to site data, fifty instances occurred when multiple trades worked in the same location. Ten of these occurrences required rework, i.e. 20% of these incidents required rework. Using the equation presented by Khair et al. [18], the mean absolute percentage error (MAPE) between the expected and actual percentages is calculated to be 29.25%. As for the project's delay, the project was actually delayed by 200 days due to rework, based on a 20-day average delay per event. Thus, the severity of rework suffered is 14.4%, which is almost the same as the 14.5% predicted, and the MAPE is calculated to be 0.20%. Moreover, the actual extra cost spent due to rework is 125,000 EGP, based on an average extra cost per incident of 12,500 EGP. Thus, the rework severity is 0.05%. The predicted severity on cost is 0.065%, which is greater than the actual percentage. Thus, the MAPE is calculated to be 22.79%, with the results summarized in Table 6.
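The exact error formula from Khair et al. [18] is not reproduced above, so the hypothetical C# below only shows a generic absolute-percentage-error helper, with the denominator convention (actual versus predicted) left as an explicit choice and placeholder values rather than the case-study figures.

```csharp
using System;

static class ValidationMetrics
{
    // Absolute percentage error for one predicted/actual pair. Whether the
    // denominator is the actual or the predicted value is a convention choice;
    // the formulation of Khair et al. [18] is not reproduced here.
    static double AbsolutePercentageError(double actual, double predicted, bool divideByActual = true)
    {
        double denominator = divideByActual ? actual : predicted;
        return Math.Abs(actual - predicted) / Math.Abs(denominator) * 100.0;
    }

    static void Main()
    {
        // Hypothetical placeholder pair, not the case-study figures.
        double actual = 20.0, predicted = 25.0;
        Console.WriteLine($"APE (vs. actual)    = {AbsolutePercentageError(actual, predicted):F2}%");
        Console.WriteLine($"APE (vs. predicted) = {AbsolutePercentageError(actual, predicted, false):F2}%");
    }
}
```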
The second sub-module of the outputs module is the model visualization. The three zones are formed by the model visualization algorithm based on the previously demonstrated ranges. As presented in Fig. 8, a newly generated BIM project is created after the elements were highlighted.
As presented in Fig. 8, the floors are coloured red, indicating a high-risk zone. Because the floor is treated as a single element, the number of relationships estimated for the floor is large, highlighting the segmentation problem in BIM modelling.
In an attempt to validate the visualization sub-module's efficiency, some of the actual
reworked elements were compared to their 3D-created BIM counterparts. As shown
in Fig. 9, the right figure shows a section of the masonry plan with highlighted
reworked walls. The left figure shows the plan view of the same part as the right
image.
As presented in Fig. 9, the masonry plan shows blue-highlighted reworked walls
from the real site and a 3D view of the same walls in the highlighted BIM model.
The walls are shown in yellow, indicating they are in the yellow zone, meaning they have a medium contribution to rework.
For decades, rework has been a concern in the construction industry. Unexpected
circumstances demand re-performing a task, resulting in a loss of time and money.
Rework happens for a variety of reasons, the most common being on-site coordina-
tion challenges, which are a significant source of rework during construction. This
study presents an interactive framework for determining project elements that might
experience rework. When BIM software is used in conjunction with an artificial
neural network (ANN), it is possible to forecast the severity of rework and iden-
tify project characteristics that contribute to rework. The following summarizes the
research’s contribution:
• Utilizing BIM and artificial intelligence to forecast rework during construction.
• Using the Autodesk Revit® API with a C# application to create new algorithms for the definition and computation of relationships depending on the geometry and placement of project elements.
• The development of an artificial neural network prediction model to quantify the
severity of rework in terms of likelihood, impact on schedule and cost.
• A BIM model that depicts the contribution of each element, enabling the user to see and reassess the affected project elements and to detect rework early in the project's lifecycle to improve overall project performance. This enables decision-makers to take proactive measures and reduce the degree of rework.
For future research in this area, it is recommended to expand the model’s depth and
incorporate constraints not addressed in the current study’s scope of work. Also,
collect additional survey data to cover new project combinations and delivery methods that the model does not account for. The additional survey replies would aid in expanding the training and testing data sets and thus reduce training and testing errors. Moreover, the survey data may be utilized to gather BIM models for individual projects, enabling precise relationship counts. Additionally, the model could examine the time and cost implications of each contract, rather than relying just on an average. Numerous case studies may also be utilized to investigate the model's limitations. In the event of a schedule delay,
the model may additionally account for the expenses of project extension. Thus, the
effect of rework on the length of the project may be determined for each critical
path. In addition to that, by using neighbourhood clustering and BIM component
segmentation, it is possible to improve model prediction. Finally, the developed
model could be integrated with augmented reality (AR) which would allow users
to inspect project components, repair problems and update the model in real time
throughout the construction process.
Appendix 2: https://www.surveymonkey.com/r/5RLY3RT.
References
7. Zaki T, Khalil C (2015) QR-coded clash-free drawings: an integrated system of BIM and
augmented reality to improve construction project visualization. https://doi.org/10.13140/RG.
2.1.1889.8085
8. Kubicki S, Guerriero A, Schwartz L, Daher E, Idris B (2019) Assessment of synchronous
interactive devices for BIM project coordination: prospective ergonomics approach. Autom
Constr 101:160–178. https://doi.org/10.1016/j.autcon.2018.12.009
9. Hu Y, Castro-Lacouture D, Eastman CM (2019) Holistic clash detection improvement using
a component dependent network in BIM projects. Autom Constr 105:102832. https://doi.org/
10.1016/j.autcon.2019.102832
10. Chahrour R, Hafeez MA, Ahmad AM, Sulieman HI, Dawood H, Rodriguez-Trejo S, Kassem
M, Naji KK, Dawood N (2021) Cost-benefit analysis of BIM-enabled design clash detection
and resolution. Constr Manag Econ 39(1):55–72. https://doi.org/10.1080/01446193.2020.180
2768
11. Liu S, Chang R, Zuo J, Webber RJ, Xiong F, Dong N (2021) Application of artificial
neural networks in construction management: current status and future directions. Appl Sci
11(20):9616. https://doi.org/10.3390/app11209616
12. Kwon TH, Park SH, Park SI, Lee S (2021) Building information modeling-based bridge health
monitoring for anomaly detection under complex loading conditions using artificial neural
networks. J Civ Struct Heal Monit 11(5):1301–1319. https://doi.org/10.1007/s13349-021-005
08-6
13. Cochran WG (1977) Sampling techniques, 3d edn. Wiley
14. Dosumu O (2018) Assessment of the likelihood of risk occurrence on tendering and
procurement of construction projects. J Constr Bus Manag 2(1):20–32
15. Graff C (2014) Expressing relative differences (in percent) by the difference of natural
logarithms. J Math Psychol 60:82–85. https://doi.org/10.1016/j.jmp.2014.02.001
16. Jamel TM, Khammas BM (2012) Implementation of a sigmoid activation function for neural
network using FPGA. In: 13th scientific conference of Al-Ma’moon University College, vol
13
17. Korkmaz M (2021) A study over the general formula of regression sum of squares in multiple
linear regression. Numer Meth Partial Differ Equ 37(1):406–421
18. Khair U, Fahmi H, Al Hakim S, Rahim R (2017) Forecasting error calculation with mean
absolute deviation and mean absolute percentage error. J Phys Conf Series 930(1):012002
Exploring the Essence of User Perspective
in Studying Sustainability Aspects
of Secondary Educational Buildings
Abstract Over the past few years, practitioners and researchers have worked
together to establish frameworks that look into and evaluate the sustainability aspects
of buildings using different measures and practices. Hence, this research paper intro-
duces a three-tier framework for user perspective assessment in educational buildings.
The first tier involves identification of the users’ perspective attributes that have a
direct influence on the users’ perspective in existing buildings. These assessment
attributes cover most of the users’ perspective areas in existing buildings. In addi-
tion, they are shown to be more comprehensive and to cover more areas compared
against sustainability rating systems. The second tier is the weighting process of the main factors based on the implementation of the fuzzy analytical network process technique. The third tier is a fuzzy expert system that determines the overall user perspective index by merging the weights of the factors with their scores. The E.V. building at Concordia University was assessed to determine the impact of
weighting and local context on the user perspective assessment.
G. Alfalah
Department of Architecture and Building Science, College of Architecture and Planning, King
Saud University, Riyadh 11421, Saudi Arabia
A. Al-Sakkaf (B)
Department of Buildings, Civil and Environmental Engineering, Concordia University, Montreal,
QC H3G 1M8, Canada
e-mail: abobakr.alsakkaf@concordia.ca
Department of Architecture and Environmental Planning, College of Engineering and Petroleum,
Hadhramout University, 50512 Mukalla, Yemen
E. M. Abdelkader
Structural Engineering Department, Faculty of Engineering, Cairo University, Giza 12613, Egypt
T. Z. Mohammed Abdelkader
Department of Building and Real Estate, The Hong Kong Polytechnic University, Hung Hom,
Hong Kong
1 Introduction
Energy efficiency in buildings has become mandatory and is one of the requirements
in many building codes worldwide. Such building code requirements are designed
to increase energy efficiency, a standard that is expected to be increased in the future
in order to reduce building-related greenhouse gas emissions. Increasing the energy
efficiency of buildings in the form of insulation, light fittings, building management
systems (BMS), and micro-generation (e.g., solar panels and wind turbines) leads
to measurable and thus quantifiable reductions in operating costs. However, many
owners have not yet realized the benefits of sustainability, and thus the adoption of
sustainable building practices remains restricted to relatively few industry leaders
[11, 26]. Longer building life cycles with healthier environments for occupants are
two of the positive attributes that represent financial benefits to be gained from a
sustainable building [5]. A modern design that incorporates state-of-the-art services
and new technologies allows buildings to meet recognized sustainability criteria;
however, the user perspective in regard to design remains the most important factor
for a successful sustainable building design [2].
According to Brown and Cole [9] and Wilkinson et al. [42], employees working in commercial buildings receive salaries and on-costs that equate to approximately 85% of the total costs associated with a typical commercial building. Quantifying the levels of employee productivity,
absenteeism, and churn in sustainable buildings could then be used to identify the
potentially significant financial benefits that businesses can realize. The reduced
operating costs during the life cycles of sustainable buildings have been established,
although intangible effects, such as whether sustainable buildings meet users’ needs
and realize users’ satisfaction and if so, to what degree, are uncertain. This and
other intangible aspects have not been accurately quantified and thus are not (yet)
part of the measurement of sustainability of buildings. In addition, there are several
uncertainties in quantifying the direct and indirect benefits that arise from enhancing
sustainability to satisfy building occupants.
Most research in this field revolves around the following two key topics: (1) building
sustainability assessment for measuring sustainability in education buildings [4, 5,
20, 29, 34] and (2) sustainable rating systems for education buildings [3, 44] (Adler
et al. 2006). For example, Brown and Cole [9] asserted that sustainable building strategies, for which educational buildings are considered one of the most important building types, have been linked to occupant comfort, organizational success, enhancement of occupant–stakeholder relations, and improvement of the livability of a community [9, 16, 23]. They also confirmed that occupants' workplace perspective may increase occupants' productivity, which indicates that a sustainable building can have a very high impact on the performance of students. In addition, Eweda et al. [14] focused on
educational buildings from a facility management (FM) perspective and developed
a model that integrated the physical and environmental criteria into a one-condition
assessment model. This integrated model was generated in order to allow the FM
to evaluate the condition of their existing educational buildings. The identified factors were those that affected the physical and environmental condition of an educational building and its spaces, with evaluation schemes for all these factors based on codes, standards, current practices, and experts' opinions.
Previous research has focused on the educational design from an organiza-
tional perspective. However, considerably less work has focused on assessing and
enhancing the sustainability of buildings from the user’s perspective. Therefore, it
was necessary to link educational design and organizational culture in the eval-
uation of a sustainable building from the user’s perspective [6]. For this reason,
Owens and Legere [32] and White [41] concluded that the understanding of sustainable building is still unclear and has to be investigated from different perspectives. Because of that, most recent sustainable facility management (SFM) research studies that cover the social aspect [10, 39] have limitations that hinder their effectiveness and
usefulness. These limitations include a suboptimal selection of users’ satisfaction
indicators, non-identification of users’ needs, non-consideration of the variations of
users’ needs, ineffective modelling of the complexity associated with users’ satisfac-
tion, and unaddressed difficulties associated with developing weighting schemes and
measurement scales. However, many research studies have discussed and analyzed
the limitations associated with efficiently meeting users’ needs [21, 28].
Fig. 1 Relationship between intangible benefits of sustainability in buildings pre- and post-
adaptation [42]
Buildings of almost all types are considered to be among the worst contributors to
harmful impacts on the environment and, as a result, exert a negative impact on human
well-being and productivity. At the same time, buildings have a huge capability to
reduce the causes of detrimental consequences to the environment when compared
with other contributing factors. The operational or occupational stages of buildings
have the greatest impact on the environment and their occupants, compared to the
other stages. Mitigating the impacts of buildings on the environment and on occu-
pants has led practitioners and researchers to develop frameworks, methodologies,
and best practices for upgrading building sustainability. The majority of research has
focused on the evaluation of building sustainability by providing a comprehensive set
of criteria and strategies to minimize energy (and water) use and reduce greenhouse
gas (GHG) emissions. However, the existing methods evaluate the sustainability of
buildings from an organizational perspective alone and neglect the users’ perspective.
Furthermore, the factors that affect the user perspective regarding building sustain-
ability have yet to be identified, evaluated, and incorporated into a framework for
upgrading conventional buildings into sustainable ones.
It should be noted that the users’ requirements vary according to the type of
building, which means that the facility manager must evaluate the perspectives
of users in various aspects, such as Indoor Environmental Quality (IEQ). IEQ
includes environmental aspects that provide occupants with good air quality, at least
a minimum amount of daylight and outdoor views, pleasant acoustic conditions, and
control over lighting and thermal comfort [1, 12, 24]. It also includes functional
aspects of space in terms of accessibility and ratio (i.e., the number of occupants
in relation to the useable space). In a way, these aspects are correlated, but their
importance to users differs from one building to another. Therefore, the assessment
procedure for understanding users’ perspectives regarding building sustainability
must change proportionally, depending on the nature of the use of each building,
while keeping the main criteria constant.
In addition, many facility managers of Post-Secondary Educational Buildings forget the true purpose of sustainable buildings and sometimes treat them as a status symbol rather than as a vehicle for real transformation. Many sustainable Post-Secondary Educational Buildings have become focused only on achieving points in any way possible, without taking the user perspective into account. In other words, facility managers install systems that have no particular function just to acquire LEED credits, which causes the death of many of these sustainable applications over time as a result of the users' lack of interaction with them. This is contrary to the actual purpose for which LEED certification was developed and is the result of many Post-Secondary Educational Buildings addressing sustainability only from economic and organizational perspectives. Considerably fewer facility managers have focused on assessing and enhancing the sustainability of buildings from the user's perspective. Therefore, the proposed model identifies and evaluates factors affecting the perspectives of users/occupants in sustainable Post-Secondary Educational Buildings. It also evaluates the FM performance in Post-Secondary Educational Buildings from the users' perspectives. The problem consists of two sub-problems: (1) identification and assessment of the various factors that affect user perspective in order to set a scale for assessing user perspective in sustainable Post-Secondary Educational Buildings and (2) development of a service-based model to update or upgrade the sustainability of existing Post-Secondary Educational Buildings to maximize users' experience.
4 Model Development
Fuzzy set theory was first introduced by Zadeh [43] to incorporate the imprecision
and vagueness associated with data [7]. However, fuzzy theory has proven to be
5 Model Implementation
6 Conclusions
This research proposed an integrated overall users' perspective scale and user's satis-
faction index for sustainable education buildings, which allows the user’s perspec-
tive to be assessed both before and after a building upgrade. This framework takes
into consideration the factors and sub-factors that affect user’s satisfaction from the
user’s perspective, whose five main factors are: (1) thermal comfort and air quality;
(2) esthetics, amenity, and upkeep; (3) design and flexibility; and (4) lighting and
acoustics. This research identified a threshold value for the factors and sub-factors; if
any factor or sub-factor reaches this level or below, it is considered to be unsatisfied.
The overall users’ perceptive scale model for education sustainable buildings was
expressed based on a 5-point Likert scale in the form of trapezoidal fuzzy numbers.
It was found that thermal comfort (40%) was the most important factor, while lighting and acoustics (16%) exhibited the least importance. The user's perspective building index will assist facility management in highlighting the weaknesses and strengths of their buildings based on the factors and sub-factors and their needs for upgrading (or not) from the users' perspective.
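The fuzzy machinery is only summarized above, so the following C# sketch is purely illustrative: it evaluates a trapezoidal membership function for a Likert-type rating and combines factor scores with weights into a simple weighted index. The trapezoid parameters, the two middle weights and the crisp aggregation are assumptions rather than the authors' FANP/fuzzy-expert-system formulation; only the 40% and 16% weights come from the text.

```csharp
using System;

static class UserPerspectiveIndex
{
    // Trapezoidal membership function defined by (a, b, c, d):
    // 0 below a, rising to 1 between a and b, 1 between b and c, falling to 0 by d.
    static double Trapezoid(double x, double a, double b, double c, double d)
    {
        if (x <= a || x >= d) return 0.0;
        if (x < b) return (x - a) / (b - a);
        if (x <= c) return 1.0;
        return (d - x) / (d - c);
    }

    static void Main()
    {
        // Membership of a Likert rating of 3.5 in an assumed "satisfied" trapezoid (3, 4, 5, 5.5).
        Console.WriteLine($"Membership = {Trapezoid(3.5, 3, 4, 5, 5.5):F2}");

        // Assumed crisp weighted aggregation of factor scores (0-10 scale) into an overall index.
        double[] weights = { 0.40, 0.24, 0.20, 0.16 };   // thermal comfort ... lighting & acoustics (middle two are placeholders)
        double[] scores  = { 7.0, 6.5, 8.0, 5.0 };       // placeholder factor scores
        double index = 0.0;
        for (int i = 0; i < weights.Length; i++) index += weights[i] * scores[i];
        Console.WriteLine($"Overall user perspective index = {index:F2} / 10");
    }
}
```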
References
13. Eweda A (2012) An integrated condition assessment model for educational buildings using
BIM, Doctoral dissertation, Concordia University. Montreal, QC, Canada
14. Eweda A, Al-Sakkaf A, Zayed T, Alkass S (2021) Condition assessment model of building
indoor environment: a case study on educational buildings. Int J Build Pathol Adapt
15. Gómez F (2013) Adaptable model to assess sustainability in higher education: application to
five Chilean institutions, Master’s thesis, Pontifical Catholic University of Chile
16. Heerwagen J (2000) Green buildings, organizational success and occupant productivity. Build
Res Info 28(5–6):353–367
17. IFMA (2007) Facility management forecast 2007: exploring the current trends and future
outlook for facility management professionals. International Facility Management Associa-
tion, Houston, TX, pp 1–15
18. Ismaeel M, Zayed T (2021) Performance-based budget allocation model for water networks. J
Pipeline Syst Eng Pract 12(3):1–8
19. Jensen PA, Andersen PD (2010) The FM sector and its status in the Nordic countries. Centre
Facil Manag Realdania Res 21(1):1–31
20. Kamal A, Asmuss M (2013) Benchmarking tools for assessing and tracking sustainability in
higher education institutions: identifying an effective tool for University of Saskatchewan. Int
J Sustain High Educ 14(4):449–465
21. Khalil N, Nawawi A (2008) Performance analysis of government and public buildings via post
occupancy evaluation. Asian Soc Sci 4(9):103–112
22. Kincaid D (2004) Finding viable uses for redundant building. Adapting buildings for changing
uses: guidelines for change of use refurbishment, first edition. Taylor & Francis, London, UK,
pp 21–61
23. Kumar S, Singh MK, Mathur A, Košir M (2020) Occupant’s thermal comfort expectations
in naturally ventilated engineering workshop building: a case study at high metabolic rates.
Energy Build 217:1–19
24. Lee JY, Wargocki P, Chan YH, Chen L, Tham KW (2020) How does indoor environmental
quality in green refurbished office buildings compare with the one in new certified buildings?
Build Environ 171:1–12
25. Marzouk M, Mohammed Abdelkader E (2019) On the use of multi-criteria decision making
methods for minimizing environmental emissions in construction projects. Decis Sci Lett
8(4):373–392
26. Marzouk M, Mohammed Abdelkader E, Al-Gahtani K (2017) Building information modeling-
based model for calculating direct and indirect emissions in construction projects. J Clean Prod
152:351–363
27. McLennan P (2000) Intellectual capital: future competitive advantage for facility management.
Facilities 18(3/4):168–172
28. Meir IA, Garb Y, Jiao D, Cicelsky A (2009) Post-occupancy evaluation: an inevitable step
toward sustainability. Adv Build Energy Res 3(1):189–220
29. Mohammed Abdelkader E, Al-Sakkaf A, Ahmed R (2020) A comprehensive comparative
analysis of machine learning models for predicting heating and cooling loads. Decis Sci Lett
9(3):409–420
30. Mohammed Abdelkader E, Marzouk M, Zayed T (2020) An invasive weed optimization-based
fuzzy decision-making framework for bridge intervention prioritization in element and network
levels. Int J Inf Technol Decis Mak 19(5):1–58
31. Noor M, Pitt M (2009) A critical review on innovation in facilities management service delivery.
Facilities 27(5/6):211–228
32. Owens KA, Legere S (2015) What do we say when we talk about sustainability? Analyzing
faculty, staff and student definitions of sustainability at one American University. Int J Sustain
High Educ 16(3):1–18
33. Saaty TL (1996) The analytic network process: decision making with dependence and feedback.
The organization and prioritization of complexity. Rws publications, pp 25–44
34. Shriberg M (2002) Institutional assessment tools for sustainability in higher education:
strengths, weaknesses, and implications for practice and theory. Int J Sustain High Educ
3(3):254–270
35. Thornhill A, Lewis PM, Saunders M (2000) Managing change: a human resource strategy
approach. FT Prentice Hall. London, U.K., pp 25–55
36. Tsai W, Leu J, Liu J, Lin S, Shaw M (2010) A MCDM approach for sourcing strategy mix
decision in IT projects. Expert Syst Appl 37(5):3870–3886
37. UNEP-Skanska (2013) Energy efficiency in buildings: guidance for facility managers. UNEP-
Skanska 1(1):1–34
38. Velasquez M, Hester PT (2013) An analysis of multi-criteria decision making methods. Int J
Oper Res 10(2):56–66
39. Waly A, Helal D (2010) The impact of facility management on office buildings performance
in Egypt. In: Second international conference on construction in developing countries, Cairo,
Egypt, pp 1–10
40. Wang T (2012) The interactive trade decision-making research: an application of novel hybrid
MCDM model. Econ Model 29(3):926–935
41. White MA (2013) Sustainability: I know it when I see it. Ecol Econ 86:213–217
42. Wilkinson S, Red R, Jailani J (2011) User satisfaction in sustainable office buildings: a prelim-
inary study. In: 17th PRRES Pacific rim real estate society conference, Gold Coast, Australia,
pp 1–15
43. Zadeh L (1965) Fuzzy sets. Inf Control 8(3):338–353
44. Zhu B, Zhu C, Dewancker B (2020) A study of development mode in green campus to realize
the sustainable development goals. Int J Sustain High Educ 21(4):799–818
Overall Schedule Optimization Using Genetic Algorithms
1 Background
Minimize project cost = Direct cost + Indirect cost × Project duration (2)
The model aims to use four objective functions to minimize the total project cost,
time, resource moments, and cash flow. The constraints are as follows:
The main objective of this study is to develop and evaluate a multi-objective optimiza-
tion model using genetic algorithms and a goal-programming approach that can reach
a near-optimum solution for time, profit, quality, and resource usage simultaneously.
The model starts with a set of inputs that the user should define:
• Activities Relationships and Lags: The user must input the project’s activities
and their predecessors, in addition, to the lags between each activity and its
predecessor. All relationships are assumed to be finish-to-start.
• Activities’ Quantities and Units.
• Quality Importance: The user has to input the importance of quality for each
activity, which could be normal, high, or very high.
• Three construction processes: The user shall input three construction processes for each activity, with the first construction method being the conventional one; each process has the following items:
– Production Rate.
– Cost/Unit.
– Number of Resources.
– Quality Level (the quality level of the activity in a specific construction
method).
• Quality Incentive/Penalty Rates: The incentive/penalty rates depend on the quality
of the activity with the selected construction process. The quality level ranges from
0.8 to 1.2, in which 0.8 is the least quality accepted with deduction or penalty,
1 is the least required quality without any penalty, and 1.2 is the highest quality
with the highest incentive.
• Time Incentive/Penalty per day: Time incentive per day is an amount of money
that might be offered by the owner for finishing the project in a duration less than
the contract project duration, while penalty per day is an amount of money that is
deducted from the contract price for finishing the project in longer duration than
the contract project duration.
• Maximum available total resources: It is the maximum available number of
resources per day during the project’s entire duration.
• Project contract duration and maximum total duration.
• Objective factors’ weights (profit, time, resource usage, and quality).
• General project data (advance payment percentage, retention percentage, markup
percentage, invoice processing duration, total indirect cost, and interest rate).
There are several variables that affect reaching an optimum solution, which include: the construction process selected for each activity, where each activity can use one of the three construction processes inputted by the user; and the shift in days for each activity, which is added to the early start time of the activity to obtain the new adjusted early start and early finish times of the activity in order to meet resource constraints.
• Maximum project contract duration: It is the maximum duration that the project
can by no means exceed.
The model optimizes four aspects of the project (profit, time, quality, and resource usage) simultaneously, with weights or relative importance as input.
3 Model Architecture
In the CPM module, calculation of the early start, early finish, late start, and late
finish for the different activities is performed considering the construction process
specified for each activity. A genetic algorithm, using the Evolver add-in for Excel, is integrated with the CPM module to reach the optimum overall solution, as shown in Fig. 1. The
following equations were used:
The adjusted start of each activity is obtained by adding the shift in days, which is a decision variable, to the early start date of the activity, and the adjusted finish date is obtained by adding the duration to the adjusted start date. The following equations were used:
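The equations themselves are not shown here; as a rough illustration of the step just described, the C# sketch below applies the shift to an activity's CPM early start and derives the adjusted finish, assuming adjusted start = early start + shift and adjusted finish = adjusted start + duration. The Activity record and its values are hypothetical.

```csharp
using System;

// Hypothetical activity record holding CPM results and the GA decision variable (shift).
record Activity(string Id, int EarlyStart, int Duration, int ShiftDays);

static class ScheduleAdjustment
{
    // Assumed relations: adjusted start = early start + shift,
    //                    adjusted finish = adjusted start + duration.
    static (int Start, int Finish) AdjustedDates(Activity a)
    {
        int start = a.EarlyStart + a.ShiftDays;
        return (start, start + a.Duration);
    }

    static void Main()
    {
        var a = new Activity("A6", EarlyStart: 12, Duration: 5, ShiftDays: 3);
        var (start, finish) = AdjustedDates(a);
        Console.WriteLine($"{a.Id}: adjusted start = day {start}, adjusted finish = day {finish}");
    }
}
```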
In this module, resource allocation and leveling were both incorporated. Resource
allocation was applied by adding a specified maximum available number of resources
as an input that can be used on any day. In the case that the resource requirement on a
certain date exceeds the maximum available, the model may extend the project dura-
tion without exceeding the maximum contract total duration, or change the construc-
tion methods of some activities, or change the shift in days to prevent the overlap
between the activities that may result in a high resource usage per day. Also, resource
leveling was considered in this model by incorporating the method of smoothing,
known as Moment Mx and Moment My. Moment Mx is calculated to represent the
resource fluctuation throughout the project duration, while Moment My is calculated
to represent the gaps in the resources throughout the project duration. Finally, M-total
is calculated as the summation of both.
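The moment formulas are not given above, so the sketch below only illustrates the idea under stated assumptions: Mx is taken as the sum of squared daily resource usage (penalizing fluctuation) and My as the sum of squared daily idle capacity below the peak usage (a rough proxy for gaps); the authors' actual definitions may differ, and the usage profile is a placeholder.

```csharp
using System;
using System.Linq;

static class ResourceMoments
{
    // Assumed Mx: sum of squared daily resource usage; larger values indicate
    // a more fluctuating (less level) resource histogram.
    static double MomentMx(int[] dailyUsage) => dailyUsage.Sum(u => (double)u * u);

    // Assumed My: sum of squared daily idle capacity below the peak usage,
    // used here only as a proxy for gaps in the resource profile.
    static double MomentMy(int[] dailyUsage)
    {
        int peak = dailyUsage.Max();
        return dailyUsage.Sum(u => (double)(peak - u) * (peak - u));
    }

    static void Main()
    {
        int[] usage = { 4, 6, 6, 2, 5, 5 };   // placeholder daily resource usage
        double mx = MomentMx(usage), my = MomentMy(usage);
        Console.WriteLine($"Mx = {mx}, My = {my}, M-total = {mx + my}");
    }
}
```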
The financing of the project is based on the cash in and cash out as shown in Figs. 2 and
3. First for cash-out computation, the direct cost is calculated for each day according
to the direct cost of each activity based on the construction method selected, then
it is added to the indirect cost, which is distributed over the total project duration
evenly. The cash in is calculated in a different way: first, the direct cost is calculated as the direct cost of the conventional activity process and multiplied by the quality incentive; then the total direct cost for each day is calculated.
The indirect cost for each day is calculated then multiplied by the ratio {Direct
Cost per day (including incentive)/Direct Cost per day (without incentive)}. After
calculating the direct and indirect costs, including incentive per day, the sum of
the direct cost per month and the indirect cost per month is calculated to get the
total cost per month, including incentive. Finally, it is multiplied by {Markup + 1}
to get the month's earned cash in. The cash in is then assigned to the month after the invoice processing period. To calculate the profit accurately, the model first calculates the interest cost on the negative financing needed for the project: the financing at the end of each month is calculated as {(financing at end of month) = (cumulative cash in to the end of the previous month) − (cumulative cash out to the end of the month)}; if this difference is negative, financial charges are calculated by applying the interest percentage for that month. To reach the maximum profit, the profit present value is calculated as {(Cash in PV) − (Cash out PV) − (Financial Charges PV)}.
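A minimal Python sketch of the monthly financing and profit-present-value logic described above is given below. The monthly cash series, interest rate, and discount rate are illustrative placeholders, and the end-of-month discounting convention is an assumption made for this example only.

def profit_present_value(cash_in, cash_out, monthly_interest, discount_rate):
    # cash_in[m] / cash_out[m]: cash received / spent in month m (0-based).
    charges = []
    cum_in_prev = 0.0   # cumulative cash in to the end of the previous month
    cum_out = 0.0       # cumulative cash out to the end of the current month
    for m in range(len(cash_out)):
        cum_out += cash_out[m]
        financing = cum_in_prev - cum_out
        # If the financing is negative, a financial charge accrues at the monthly interest rate.
        charges.append(-financing * monthly_interest if financing < 0 else 0.0)
        cum_in_prev += cash_in[m]

    def pv(series):
        # End-of-month discounting (an assumption, for illustration only).
        return sum(v / (1 + discount_rate) ** (m + 1) for m, v in enumerate(series))

    # Profit PV = (Cash in PV) - (Cash out PV) - (Financial Charges PV)
    return pv(cash_in) - pv(cash_out) - pv(charges)

print(profit_present_value(cash_in=[0, 120, 130], cash_out=[100, 90, 80],
                           monthly_interest=0.01, discount_rate=0.01))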
After running the model for individual optimization for each of the four aspects
(Cost, Time, Quality, Resource Usage), the model uses a multi-objective optimization
technique, goal programming, to arrive at the optimum solution considering all
the four aspects using the following equations [1]:
Multi-Objective Optimization Value = Σ (Elements' Weighted Deviations)  (16)
Minimizing the multi-objective optimization value keeps the deviation of each of the four aspects from its individual optimum as small as possible.
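As an illustration of how Eq. (16) can be evaluated, the sketch below computes a weighted sum of each aspect's deviation from its individually optimized value, normalized by that optimum. The normalization and the weights shown are assumptions for this example; the paper's goal-programming formulation follows reference [1].

def multi_objective_value(current, optimum, weights):
    # Weighted sum of relative deviations from each aspect's individual optimum.
    total = 0.0
    for aspect, w in weights.items():
        deviation = abs(current[aspect] - optimum[aspect]) / abs(optimum[aspect])
        total += w * deviation
    return total  # the value the genetic algorithm minimizes

weights = {"profit": 0.4, "time": 0.3, "quality": 0.2, "resources": 0.1}    # example weights
optimum = {"profit": 1.0e6, "time": 300, "quality": 0.95, "resources": 5000}
current = {"profit": 0.9e6, "time": 320, "quality": 0.93, "resources": 5200}
print(multi_objective_value(current, optimum, weights))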
4 Model Verification
The selected case was part of a large construction project; 20 activities were taken with all the data needed, such as activity relationships, item quantities, and quality importance, as shown in Fig. 4. Also, for each construction method of each activity, the required resources, along with the cost per unit, productivity rate, and quality level, were provided as in Fig. 5. Finally, the general project and contractual inputs were entered, such as the time incentive/penalty per day, quality incentive/penalty, project duration, and financial data, as in Fig. 6.
[Fig. 4: activity data for the 20 verification-case activities A1–A20 (Excavation; Soil Replacement; PC Footing Form, Pouring, and Form Removal; RC Footing Form, Reinforcement, Pouring, and Form Removal; Basement Column Form, Reinforcement, Pouring, and Form Removal; Foundation Insulation; Backfill; S.O.G.; Basement Slab Form, Reinforcement, Pouring, and Form Removal), listing predecessors, lags, quantities, and quality importance ratings (Normal, High, V. High).]
The model was run four times at first, trying to reach the optimum value for each of
the four aspects individually without considering the other aspects. Then, a final run
was done using the four optimum values. A goal-programming approach was utilized,
as shown in Fig. 7, to reach the overall optimum schedule that considers all aspects
simultaneously.
The outcomes of the model, as shown in Fig. 8, present how much the model
compromises each aspect to reach a near-optimum solution for all the aspects
together.
Figure 9 shows a sample of the model output for the cash flow diagrams for opti-
mizing the profit only and for the multi-objective optimization. Figure 10 shows the
histograms for optimizing the resource only and for the multi-objective optimization.
References
Jill Roth
The Envision framework provides industry-wide sustainability metrics for all types
and sizes of infrastructure to help users assess and measure the extent to which their
project contributes to conditions of sustainability. By using the Envision framework
as a project delivery tool, a project’s sustainability performance becomes less about
“point chasing” and focuses more on meeting a project’s success metrics. If adopted from
the project outset and maintained throughout the project, “sustainability” becomes
a natural feature. Design decisions are made in alignment with goals set out in the
early planning stages; requiring no financial commitment up front, this approach
helps formulate a plan for sustainability. Using Envision as a project delivery tool
can help with:
• Setting sustainability goals for a project
• Assigning accountability for sustainability goals
• Making decisions on design and value engineering
J. Roth (B)
Luuceo Consulting, Vancouver, Canada
e-mail: jill@luuceo.com
1 Introduction—Sustainable Development
Table 1 The 17 UN SDGs
1 No poverty | 7 Affordable and clean energy | 13 Climate action
2 Zero hunger | 8 Decent work and economic growth | 14 Life below water
3 Good health and well-being | 9 Industry, innovation, and infrastructure | 15 Life on land
4 Quality education | 10 Reduced inequalities | 16 Peace, justice, and strong institutions
5 Gender equality | 11 Sustainable cities and communities | 17 Partnership for the goals
6 Clean water and sanitation | 12 Responsible consumption and production
Source: https://www.un.org/sustainabledevelopment/sustainable-development-goals/
Sustainable development needs to be the standard for new projects, so future gener-
ations can meet their needs. As infrastructure is essential to development, sustainable
infrastructure is an effective area of focus. One way to measure sustainable infras-
tructure is by assessing a project’s triple bottom line (TBL). Traditionally, a project
or organization assesses their success based on their financial performance, also
known as their bottom line. The triple bottom line refers to social, environmental,
and financial performance [2].
In addition to considering social, environmental, and economic performance of
a project, climate change should be considered by a project team as part of sustain-
able development and sustainable infrastructure. A project’s ability to both limit its
contribution to climate change and demonstrate its resilience to the risks of climate change is critical to meeting the needs of future generations as identified by the 17
UN SDGs. As severe weather events are becoming more common, the need for
this dual ability is becoming increasingly evident. In November 2021, British Columbia experienced three significant rainfall events, referred to as atmospheric rivers. The Coquihalla Highway (Highway 5) in British Columbia had not been designed to withstand such flooding events, and more than 100 km of essential roadway were destroyed [6]. Infrastructure built to existing design standards has not been designed to withstand the reality of current weather events, further demonstrating the need for sustainable development, which includes designing to address climate change.
Despite a clear need for sustainable infrastructure, there are still barriers to its successful implementation. The challenges, solutions, and methods for successful
implementation of sustainable infrastructure will be explored in this paper.
2 Problem—Challenges of Implementation
2.1 Ambiguity
There is a clear emphasis on the need for sustainable development and sustainable products; however, the market has been saturated with a variety of products and technologies claiming to support sustainability through words such as “green” and “eco-friendly.” Unfortunately, not all such claims are true or accurate. This practice is referred to as greenwashing. First coined in 1986, greenwashing is the process of conveying a false impression or providing misleading information about how environmentally friendly or sustainable a product is [3]. Greenwashing results from companies understanding that being seen as ethical may drive their profitability.
Identifying the decisions that can support a project’s sustainability goals, determining the best approach to sustainable development, and spotting unsubstantiated claims can be challenging for a novice, and this can deter a project owner or design team from attempting sustainable development at all.
The perception of increased project costs can also deter project teams from meaningfully incorporating sustainable practices. Costs typically increase only when changes are required late in the project schedule: as a project moves from planning through design and into construction, the cost of making changes rises (Fig. 1).
Therefore, sustainability strategies should be adopted before critical decision milestones on a project in order to reduce cost. Although cost concerns are valid, decisions that focus solely on the financial bottom line, and not on the social and environmental bottom lines or lifecycle cost, are based on incomplete information. Project owners and teams must also consider the social and environmental impacts and full lifecycle costs to understand how the project can perform if developed sustainably.
One common mistake a project team will make that will result in increased project
costs is to include the sustainability scope as a separate value add or line item, rather
than integrating it across disciplines.
The Envision framework addresses the triple bottom line (environment, economy, social) for infrastructure projects as well
as climate and resilience. Credit categories include Quality of Life, Leadership,
Resource Allocation, Natural World, and Climate and Resilience. Each credit has
a range of levels of achievement representing a spectrum of performance goals, from slightly improving on the status quo to practices that conserve or
restore the environment, community, or economy.
Envision helps project teams navigate sustainability by providing measurable
metrics in a variety of areas related to sustainability. Figure 2 is an example of levels of
achievement and associated evaluation criteria for the Envision credit QL3.4 Enhance
Public Space and Amenities.
A project team can assess how they are performing on a credit and use the eval-
uation criteria to improve their performance. For example, a municipality needs to
complete road improvements with the end goal of alleviating congestion. The scope
includes a multi-use pathway (MUP), and the municipality wants to meaningfully
engage its residents for input on the MUP. Using QL3.4 as an example, the following
metrics can be identified for the project to assess to support their efforts:
• Assess the project’s impacts to existing public amenities. If impacts are identified,
implement mitigation strategies so there is no net loss of the public amenity (A).
• Facilitate stakeholder engagement on the project and include issues of public
space and amenities (B).
• Address concerns raised by stakeholders in such a way that the project can
demonstrate stakeholder satisfaction related to the public amenity (C).
• Identify the priority of the project to improve and enhance existing public space,
create a new amenity, or restore a lost amenity (D/E/F).
The project team can incorporate tangible solutions for meaningful engagement on
the development of a public amenity. Any improvements made to their engagement
processes can also be adopted to benefit future projects.
Because the framework is applicable to all types and sizes of infrastructure
projects, there is no one-size-fits-all solution. The framework provides solutions
that can be tailored to a specific project and are measured against a conventional
project of a similar nature.
4 Methodology—Sustainability Management
Table 2 Example sustainability goals
• Improve cycling and walking network
• Traffic reduction
• Energy reduction
Table 3 Example of mapping project features to sustainability goals
Sustainability goals | Project features
Improve cycling and walking network | Multi-use pathway (MUP)
Traffic reduction | Roadway overpass
Energy reduction | Solar-powered signage
Once sustainability goals are set, the project team should map out how the project
will meet the goals. This should include what project features meet these goals. An
example of goal mapping is provided in Table 3.
Using the same roadworks example, assume early stakeholder engagement iden-
tified that improved biking and walking connectivity around the municipality was a
priority for its citizens. As the design progresses however, value engineering deci-
sions need to be made. If the value of the multi-use pathway is not clearly understood
by the design team, this scope runs the risk of being reduced or eliminated for cost
savings.
The sustainability management plan (SMP) should also include procedures for managing sustainability goals; these procedures will help keep the project on track to meet the goals and should follow a plan-do-check-act methodology.
Plan: The objectives for meeting the project’s sustainability goals and how success-
fully meeting the goals will be measured. When using Envision, it is helpful to map
each goal to a credit or credits. Specific credits will help establish measurable metrics
for the project team to reference throughout design, construction, and operations.
Do: The act of implementing the objectives set in the “Plan” which may include
achieving certain credits under Envision.
Check: How the project team will assess if the project is meeting the sustainability
goal’s metrics. This should include the frequency at which a team will assess the
project goals and associated metrics. Based on the above details, the project team
will need to follow up on their engagement plan to ensure stakeholder satisfaction.
Act: How the project team will adjust or correct misalignment with project goals or
unmet metrics.
Continuing our example, the project team mapped credits to project sustainability
goals (Table 4).
Using QL3.4 Enhance Public Space and Amenities (refer to the credit table
provided in Sect. 3), the project can measure their goals as follows:
Table 4 Example of mapping project features to sustainability goals and Envision credits
Sustainability goals | Project features | Envision credits
Improve cycling and walking network | Multi-use pathway | QL2.1 Improve community mobility; QL2.2 Encourage sustainable transportation; QL3.4 Enhance public space and amenities
Traffic reduction | Roadway overpass | QL1.1 Improve community quality of life; QL2.1 Improve community mobility; QL2.3 Improve access & wayfinding
Energy reduction | Solar-powered signage | RA2.3 Use renewable energy
Criterion A: The project team assessed their impacts to existing public amenities
through stakeholder and community engagement and determined the project would
result in none. They can demonstrate no net loss of public amenity will occur.
Criterion B: The stakeholder and community engagement process started early in
the project and provided input on design details of the MUP.
Typically, the municipality’s engagement strategies end here. Reviewing the
criteria requirements outlined in the credit guidance, the project team made
improvements to their stakeholder and community engagement strategies to include
follow-ups to demonstrate support and satisfaction with the MUP (Criterion C).
The SMP should also include an approach to how the plan itself will be monitored.
Specify at which frequency the SMP will be updated and how the progress of the
project’s sustainability goals will be communicated amongst the team.
Lastly, organizational charts assigning responsibility to the various aspects of the
SMP will promote accountability. Identify who oversees the plan, who is responsible
for project sustainability issues, and their authority to make decisions that will promote
change.
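As an illustration only, the goal-to-feature-to-credit mapping and the “check” step described above can be tracked in a simple structure such as the Python sketch below; the credit codes follow Table 4, while the status values and field names are hypothetical.

# Illustrative structure for tracking sustainability goals, the project features that
# deliver them, the mapped Envision credits, and a simple check status (hypothetical).
smp_goals = {
    "Improve cycling and walking network": {
        "feature": "Multi-use pathway (MUP)",
        "credits": ["QL2.1", "QL2.2", "QL3.4"],
        "status": "on track",
    },
    "Traffic reduction": {
        "feature": "Roadway overpass",
        "credits": ["QL1.1", "QL2.1", "QL2.3"],
        "status": "needs follow-up engagement",
    },
    "Energy reduction": {
        "feature": "Solar-powered signage",
        "credits": ["RA2.3"],
        "status": "on track",
    },
}

# "Check" step: flag any goal whose status calls for corrective action ("act").
for goal, info in smp_goals.items():
    if info["status"] != "on track":
        print(f"Review goal '{goal}' ({info['feature']}): {info['status']}")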
5 Demonstration of Concepts
Adopting the Envision framework as a project delivery tool can also be used to
benchmark an organization’s current practices with the goal of improving. In 2019,
a Port Authority was able to leverage the Envision framework to improve their orga-
nization’s approach to sustainable infrastructure development by using it to assess
a project that was already nearing construction. As the project was already at the
construction phase, there was limited room to implement meaningful change. Instead
of paying a premium to try and achieve an award under Envision, the organiza-
tion chose to use this project as a baseline for their sustainability practices. They
hired sustainability professionals with experience using the Envision framework to
complete a baseline assessment of the project. The baseline assessment identified
gaps and opportunities for the organization to consider as strategies to improve how
they deliver projects in the future. Areas of improvement that were identified and
then adopted included the following:
• Establishing a stakeholder engagement plan, including a form to log and track
feedback.
• Revising their contractor services agreement language to include specific require-
ments related to equitable hiring practices, waste management, health and safety
plans, and soil management.
• Improvements to how the organization documents and tracks its sustainability initiatives.
• Adopting frameworks at the project outset and hiring professionals to support the process.
• Establishing sustainability goals early in a project as a metric for success.
This client was able to take these learnings and apply them to the next project, which they
can proudly say they are submitting for their first verification under Envision. They
have continued to seek ways to improve their project delivery process, understanding
that change is an iterative process.
6 Conclusions
The ambiguity in how to achieve sustainability, perceived increases in project costs, and the difficulty of introducing innovative approaches in a traditional industry are the key barriers observed when projects consider adopting sustainability frameworks. Using the Envision framework as a project delivery tool and developing a comprehensive sustainability management plan at the project outset help eliminate those barriers.
The following lessons learned have come from projects using the Envision
Framework:
• Hiring inexperienced professionals can lead to improper management of the
project’s sustainability goals.
• Not having a sustainability management plan, formal or informal, can lead to a lack of accountability and the risk of not meeting publicly committed goals.
References
1. Brundtland G (1987) Report of the world commission on environment and development: our
common future. United Nations General Assembly Document A/42/427
2. Institute for Sustainable Infrastructure (2018) Envision: sustainable infrastructure framework
guidance manual V3 (Third Edition). Institute for Sustainable Infrastructure
3. Robinson D (2021) What is greenwashing. Earth.org. https://earth.org/what-is-greenwashing/
4. United Nations (2015) Sustainable development goals. UN.org. https://www.un.org/sustainabledevelopment/sustainable-development-goals/
5. United States Green Building Council (2021) Mission and vision. USGBC. https://www.usgbc.
org/about/mission-vision
6. Zussman R, Judd A (2022) Months after disastrous floods, B.C.’s Coquihalla highway to
fully reopen. Global News. https://globalnews.ca/news/8519601/bc-floods-latest-update-state-of-emergency/
Watershed Analysis for Small Coastal
Newfoundland Communities
1 Introduction
2 Methodology
See Fig. 2.
Fig. 2 Map of the study areas on the island of Newfoundland with the closest meteorological
station at St. John’s Airport
green space and infiltration of water upstream is leading to increased runoff making its way downstream, causing higher peak flows, water accumulation, and erosion that affect some of the community's main infrastructure. The town of Bay Bulls has
been approached by many developers and companies to purchase large sections of
land in hopes to develop new subdivisions and commercial areas. By assessing some
of the concerns, the town can upgrade infrastructure and incorporate regulations for
development to help mitigate future impacts.
The Bay de Grave watershed is a 17.3 km² basin discharging into the southwestern shores of Conception Bay. The low-lying waterfront area has a gentle topography compared to the upstream highlands. According to the 2016 Census, the towns had a total population of 3671, with the highest population density near the coast. Storm-water management system applications, monitoring, and future planning are essential due to the changing climate and the impacts of rising sea levels on drainage systems in this area.
The Town of Pouch Cove is a small coastal community of about 58.34 km², located in the Southeast Avalon region of Newfoundland and Labrador. The basin is prone to flooding, with tropical cyclones being the main flood-causing events. The region lacks data collection services (stream and rain gauges), making model setup and calibration difficult. The Town's existing storm-water management system is a prominent infrastructure and public safety vulnerability. Town staff know from working experience that the current storm-water infrastructure is inadequate in terms of coverage of the Town's developed area, its aging condition and state of disrepair, and its capacity to handle the volumes and flows projected under climate change. Town staff identified watershed models as an innovative way to begin the storm-water management master plan exercise.
2.2 PC-SWMM
The main objective of the model development was to represent the current conditions of the Bay Bulls, Pouch Cove, Brigus, Clarke's Beach, Cupids and South River watersheds in a computer-simulated model. This would allow manipulation to evaluate future scenarios of developed land cover and a changing climate. The first step was the preparation of data layers and the delineation of the study areas using a digital elevation model (DEM). A 6-m DEM of the study areas was obtained from the Worldwide Elevation Data using the Watershed Modeling System (WMS). One thing to note is that the quality of the delineated hydrological features is sensitive to the accuracy and resolution of the DEM. The relatively low resolution of the available DEMs meant that some details were lost or represented incorrectly, which required manual manipulation and field surveying. Unfortunately, no higher-resolution DEMs were available for Newfoundland at the time these studies commenced.
The DEM was first imported into GIS and clipped to the specific town study area, and from here some of the required spatial data components were initially completed in ArcGIS and then saved as shapefiles that could be imported into PC-SWMM. The GIS software could also be used in the field through a mobile app, which allowed assets to be pinpointed by satellite coordinates. Some of the initial 1D elements, such as conduits and junctions to represent the rivers, culverts and bridges, outfalls to represent the outlet points, and the boundaries of the watershed and smaller subcatchments, were initially drawn and adjusted in ArcGIS. ArcGIS allows a satellite imagery layer to be visible in the background to help lay out the community infrastructure, and the imagery allowed confirmation of important infrastructure. Once these layers were completed, they were saved as shapefiles and imported into PC-SWMM. Within PC-SWMM, an automatic watershed delineation tool was used, with the ability to burn in some of the 1D elements picked up during field surveys for higher accuracy. The delineation process split the study area into subcatchments of a reasonable size to represent similar land use and to drain to the nearest common outlet. Some subcatchments were modified after field investigation or based on priority; for example, a subdivision of concern was broken down into smaller subcatchments, whereas rural green space was combined into one larger subcatchment.
Once the one-dimensional model was complete, a two-dimensional mesh was generated from the digital elevation model and linked to all the 1D model components for greater accuracy.
The SCS Curve Number method was used to determine the split between infiltration and runoff. This method requires a curve number to be input for each subcatchment based on its hydrologic soil type and land use. Hydrologic soil type was obtained from the Global Hydrologic Soil Groups dataset (HYSOGs250m) retrieved from the NASA website, while land use was provided by the towns or determined through field visits. Curve numbers were selected from the EPA SWMM 5.1 manual [10].
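PC-SWMM applies the curve number relation internally, so no user-side code is required; the sketch below only illustrates how the curve number controls the split between retention and runoff, using the standard SCS-CN runoff-depth relation in SI units. The rainfall depth and curve numbers shown are arbitrary examples, not values used in the study.

def scs_runoff_depth(rainfall_mm, cn):
    # Potential maximum retention S (mm) and initial abstraction Ia = 0.2 S.
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if rainfall_mm <= ia:
        return 0.0  # all rainfall is abstracted; no direct runoff
    # Standard SCS-CN runoff depth: Q = (P - Ia)^2 / (P - Ia + S)
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Example: a 75 mm storm on undeveloped land (CN ~ 60) versus a developed lot (CN ~ 85).
print(scs_runoff_depth(75, 60), scs_runoff_depth(75, 85))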
Once the model was set up and ready to run, rainfall data was required. At first, the model was tested with some preset storms, such as Hurricane Hazel, which allowed some minimal calibration and the correction of errors. There was an attempt to gather Hurricane Igor data and incorporate it into the model; however, there was no rain gauge or direct data in the vicinity of the communities, so the data had to be read from radar and interpolated from other locations where data was available. This storm was run in the model, but the generated design flows did not match the records. Therefore, another route was taken.
IDF curves were then input in place of the historical storm events. The IDF curves were adopted from the Government of Newfoundland's website (Environment and Climate Change Canada, Station No. 8403506) and enabled the design of 2-, 5-, 10-, 25-, 50- and 100-year return period storms for durations of 5 min to 24 h. For the Bay Bulls and Pouch Cove models, the 100-year return period storm based on the St. John's Airport weather station was used (Table 1).
Model calibration and validation were completed to ensure that the parameters input or generated are reasonable compared with other data and analyses at nearby locations. Unfortunately, these small communities had neither rain nor flow gauges against which to compare direct inputs and outputs. As a result, the basin transfer method was used. In this method, other gauged basins within the province with characteristics similar to the modeled basin in terms of development and land use type are considered [11–13].
The following equation is used to compare basins to calibrate the model:
Qa/Aa = Qb/Ab
where Q represents runoff, A is the drainage area, and the subscripts a and b denote the gauged and the modeled basin, respectively. This allows calibration by ensuring that the runoff per unit area in the model is reasonable compared to that of the nearby gauged basin. There is some acceptable tolerance
because no two basins are identical.
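As a worked illustration of this check, the sketch below scales a gauged basin's flow by the drainage-area ratio to obtain the expected flow for the modeled basin; the flow and area values are hypothetical and are not taken from the gauged stations used in the study.

def transferred_flow(q_gauged, a_gauged, a_modeled):
    # Rearranging Qa/Aa = Qb/Ab gives Qb = Qa * (Ab / Aa).
    return q_gauged * (a_modeled / a_gauged)

q_gauged, a_gauged = 12.0, 17.5   # m3/s and km2 at the gauged basin (hypothetical)
a_modeled = 9.6                   # km2 for the modeled basin (hypothetical)
print(f"Expected runoff for the modeled basin: {transferred_flow(q_gauged, a_gauged, a_modeled):.2f} m3/s")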
The gauged basins used for comparison were:
South River Near Holyrood (02ZM016)
Shearstown Brook at Shearstown (02ZL004)
Seal Cove Brook Near Cappahayden (02ZM009)
Northeast Pond River Near Northeast Pond (02ZM006)
Big Brook At Lead Cove (02ZL005)
South Brook at Pearl Town (02ZM021).
The models identified several locations that may be vulnerable to the impacts of
severe precipitation events, sea level rise, storm surges, or a combination of these.
Though the model findings identified interactions among these elements, the sea level rise component was found to be relevant only in low-lying areas such as the Towns of Brigus, Clarke's Beach and Cupids. Specifically, the
models show that these areas are at risk of excessive surface water runoff and erosion,
ponding, overtopped ditches and culverts, and hazards resulting from increased water
levels in water bodies. In places where municipal assets are close to high-risk zones,
the chances of damage to or failure of these assets can be high.
Northside Road is an area of concern for the Town of Bay Bulls; the road runs along the hillside just above the ocean and is a main route to some of Bay Bulls' famous boat tours. The road continues to erode and lose some of its side-bank material over the hillside. In an attempt to keep the road intact, the town installed corrugated roof panels along the cliffside to act as a retaining wall. When generating the model and completing field surveys, it became clear that the new subdivisions, Sheldon Drive and Dunn Drive, drain toward the town's main rivers, Bay Bulls River and Stanley's River; however, these subdivisions have also been rerouted to drain through culverts between the two main rivers and down toward Northside Road. Driveway culverts have been installed along this new subdivision, and culverts with outpours travel down the hillside, as highlighted by the orange lines in Fig. 3.
The numerous outfalls, especially from conduits at a higher elevation than the surrounding land, increase the erosion of this hillside and Northside Road. The conduits may require better placement or connectivity to ensure the water travels along a defined path and is routed to the main outfall without eroding and damaging the community's assets along the way. This appears to be a growing concern and an evolving issue as houses continue to be built in the new subdivisions, creating more hard surfaces such as the houses themselves, garages, and long driveways, and generating greater runoff that no longer drains to the two main rivers, which provide more efficient infrastructure for the outflows (Fig. 4).
Northside Road also lacks proper ditching on either side, which leads to water traveling in unwanted locations, such as along the asphalt surface or the shoulder of the roadway, as can be seen in Fig. 5.
In the Town of Brigus, North Brook and South Brook discharge into the ocean through
Harbor Pond. Being a tidally influenced waterbody, Harbor Pond will be impacted
by SLR and is currently impacted by high tides and storm surge conditions. This in
turn can affect the flow of water in both South Brook and North Brook.
Climate projections anticipate rising sea levels and more high-intensity snow and
rainfalls [14]. Warmer temperatures in the future can contribute to rapid snowmelt,
and if combined with factors such as SLR and storm surges, can overwhelm
downstream infrastructure in coastal towns like Brigus.
Of the two streams flowing into Harbor Pond, North Brook has a larger water-
shed. Town residents reported that this stream usually carries a larger volume of
water compared to South Brook and the watershed hazard model results confirm this
assertion. Both brooks are fed by runoff from developed parts of the town as well as
areas still in a natural state. When the ocean surface water level is high, excessive
runoff and the very slight elevation drop close to the outlet can interfere with these
streams’ ability to properly discharge. The simulation results show that when the
water level in these streams increases, town assets in several locations appear to be at
risk. Three locations close to Harbor Pond were identified as being at elevated flood
risk during a severe precipitation event (Fig. 6):
1. North Brook crossing Water Street,
2. South Brook crossing Water Street and
3. South Brook crossing South Street.
Water Street is relatively low in elevation and is close to Harbor Pond, and several
Town assets, including the Town Hall, Recreation Center, Fire Hall, and a park, are
located here. The abundance of municipal infrastructure within these hazard areas
poses concerning questions around the level of risk the Town is willing to accept.
In particular, the presence of emergency services infrastructure here should make
re-considering the use of this area as a hub of municipal services a priority.
Where South Brook passes below South Street, the depth of the channel is about
1 m from the bottom of the bridge decking to the streambed. Extreme events will
increase runoff volumes in this area and cause the flow to rise to a level that could
Fig. 6 a and b Harbor Pond area (left) and Beaver Pond area (right). The vulnerable zones are
marked by red circles
threaten to overtop the bridge. The impacts of SLR and storm surge should not play
a major role in potential flooding concerns here.
The Town of Clarke’s Beach and much of South River are located in the same
watershed. This watershed is divided into subcatchments that feed into North River,
South River and Clarke’s Beach Pond. Most of the developed portion of Clarke’s
Beach is located on the flat area close to the coast and the shorelines of North
and South Rivers. While the majority of the runoff discharges into Clarke’s Beach
Pond, South River and North River, some discharges directly into the ocean. At
the intersection of Glam Road and Wilsonville Avenue, several factors contribute to
the increased risk of water ponding. Several culverts and side-road ditches transfer
runoff water from adjacent neighborhoods to this area (Fig. 7a). The model indicates
that when the combination of inadequate surface water drainage infrastructure and
flat terrain of this area is confronted with a large precipitation event, there is a
high probability that culverts and ditches here will get overtopped, and ponding
will occur. This was anecdotally confirmed during fieldwork undertaken in this area
when a resident told those employees about previous experiences of water ponding
in this neighborhood. Nearby municipal assets that may be impacted by flooding
here include the Town Hall and Town Garage (Fig. 7b).
The model indicates that large storms can generate runoff that would drain into
the ocean by crossing streets and Conception Bay Highway (Fig. 7). Setting aside the
acute risks associated with this runoff, the long-term impacts of insufficient drainage
infrastructure leading to frequent over-land water flow could include increased rates
of pavement erosion.
Fig. 7 The red circle in (a) illustrates an area in the model with observed ponding and drainage
issues due to flat terrain and inadequate stormwater infrastructure. Nearby vulnerable town assets
are highlighted with yellow rectangles. (b) demonstrates the potential for increased flooding damage
and overland flow risk in the event of a large return period storm
Another area of concern is the large body of water alongside the western end of
Main Street where Goulds Brook and South River join (Fig. 8). Here, the model
indicates that extreme events will increase water levels in the lake. Several culverts
transfer runoff from the hill (west) side of Main Street to the lake, and the outlets of
these culverts are close in elevation to the surface level water. Furthermore, condition
assessments carried out on these culverts revealed that the majority were deficient
and/or near failure, negatively impacting drainage capacity. Under certain conditions,
damage to town assets in this area is possible. For example, during cold seasons
when the lake is partially frozen, a rain on snow event can trigger rapid snowmelt
and increase runoff significantly. An increased lake level could induce ice blockage
of culvert outflows, potentially causing the ditch on the west side of Main Street to
flood.
One more noteworthy point in this area is the location of a quarry on the eastern
bank of the river. The topography of the area brings runoff from any size of precip-
itation event over the mining site, and this could potentially carry suspended solids
into South River in quantities problematic to riparian health. It is therefore important
that sediment arresting measures, such as vegetated buffer strips, are intact and effec-
tive—not only to protect the health of the river, but also in adherence to aggregate
mining best practices and regulations governing such activities [15].
Fig. 8 Aerial view of South River. Main Street and vulnerable areas are marked in red. Runoff
direction over the quarry in South River is indicated by the yellow arrows
Current high tide water levels can already reach capacity limits of the Quay Road
bridge, at which point water cannot efficiently discharge into the ocean under normal
river flow conditions. The model results suggest that the infrastructure and town
assets in the vicinity of the Quay Road-Sea Forest Drive intersection are in a high-
risk zone (Fig. 9). The combined impacts of SLR, storm surge and event-driven
storm runoff could overload Horse Brook, resulting in erosion and inundation issues
causing property damage and disrupted transportation [16].
Uncertainties about how much and how quickly sea level will rise make coastal
adaptation efforts more challenging. In addition to runoff from the town and higher
elevation areas, Horse Brook carries the discharge from Cupids Pond and is influ-
enced by the tidally driven groundwater table rise in lower reaches. A key piece of
green infrastructure also exists adjacent to the lower reach of Horse Brook: the green space, much of which is wetland, to the northeast of the brook helps to slow and reduce runoff (Fig. 9). This green space is currently zoned for residential mixed development [17]. To gauge the impact that developing this area would have, the impervious surface percentage was increased to align with typical low-density residential values; as a result, the model indicated an 8–10% increase in runoff volume entering Horse Brook from this area. This, together with the observed effects of transforming wetlands along Sheldon Drive and Dunn Drive in Bay Bulls into new subdivisions, confirms wetland conservation as a powerful tool for reducing flood risk, particularly at coastal outfalls where tidal influences may drive backflow into the system.
Fig. 9 (a) marks high-risk areas at the Quay Road-Sea Forest Drive intersection, where the Quay
Road bridge’s capacity can be overwhelmed by the current high tide climate (red circles). The
combination of sea-level rise, storm surge, and storm runoff could overwhelm Horse Brook, leading
to erosion, flood inundation, property damage, and transportation disruptions. (b) The vulnerable
municipal assets are marked with yellow rectangles. Additionally, a crucial green space near the
lower part of Horse Brook is present, which helps mitigate runoff but is currently zoned for residential
mixed development
Table 2 Comparison of some flood locations in the model and as observed
Location | Modeled flood | Observed flood
Strugnells' Marsh Road | Yes | Yes
Castella's Lane | Yes | No
Shoe Cove Pond | Yes | No
Butler's Road | Yes | No
Lawrence Lane | Yes | Yes
Given that the study region is quite large, with residences limited to only a fraction
of the region, the flooding results were focused on the residential areas and sections
demarcated to be used as residences in the future. Table 3 shows the flood inundation
extent for current and future periods for the 100-year return period storm. A 24% increase in precipitation for the 2040 time slice resulted in an increase of 8.87% in flooded area and 17.56% in flood depth. For the 2070 time slice, precipitation increased by 40.3% from the current climate, resulting in the flood extent increasing by 12.8% and the flood depth by 29.19%. It should be noted that the projected precipitation used was the
median; as such, these increases could be higher or lower than reported. The flood
extent increases resulting from increased precipitation under the two climate change time slices were less than 15%. This may not seem like a huge concern considering the
percentage increase in rainfall. The significant impact of the climate change projected
precipitation is seen in the increasing flood depths of up to approximately 30%. Such
flood depth information will be helpful to emergency response personnel to guide
their development of rescue plans for flooding emergencies. The Town could also
use this information to plan and develop alternate emergency routes out of Town in
a major flood.
As expected, the blockage simulations showed increases in flood extent and depth
(Table 4). Flood extents increased by 0.46% for 20% blockage to 2.13% for 80% blockage. Correspondingly, flood depths increased by 0.53% for 20% blockage to 2.21%
for 80% blockage. Infrastructure maintenance is key to ensuring that the storm-water
system works efficiently. It can be seen from the results that blockages of even up to
80% only impact flood extent and depths by less than 3%. A possible reason for this
small impact could be the elevation gradient in the Town. With steep slopes in most
parts of the Town, the culverts are still likely to function in some capacity even when there is a blockage, especially when the blockage is caused by gravels, which are quite porous. Although the steep gradient is an asset, the Town still needs to ensure that scheduled culvert maintenance is carried out promptly, especially at locations where the model showed water depths greater than 0.2 m. The study did not include simulating blockages in the ditches identified in the Town. It is known that increasing roughness in ditches helps to slow down flows and thus minimize downstream flooding. However, when the roughness in the ditches is high, through overgrown vegetation, the ditches tend to serve as ponding sites, thus contributing to flooding.
4 Conclusion
The coastal communities involved in this study will face disproportionate challenges
due to the compounding impacts of climate change, SLR and the infrastructure deficit
that exists in virtually all small and rural municipalities across the province. Our find-
ings show that when specific threats, such as SLR, groundwater inundation and severe
precipitation events, are combined, current storm-water management infrastructure
will not be adequate and the risk of flooding causing damage to infrastructure will
increase. In summary, emerging flood risks in coastal regions due to climate change
are: (i) surface flooding from precipitation, (ii) SLR-driven marine inundation and
(iii) SLR-driven groundwater rise.
The effects of climate change and SLR are more severe in low-lying coastal communi-
ties. Long-lasting high-water events may drive up already high groundwater levels in
some coastal communities. This can exacerbate surface runoff-driven flood risk by
reducing the soil’s ability to absorb stormwater, thus making all surfaces behave simi-
larly to paved surfaces. On the other hand, SLR has no or minimal impact on surface
runoff and infiltration for lands at higher elevation. Where development occurs on the hills, changing land cover has the most impact on runoff and subsequent
downstream issues such as erosion and piping capacity. Therefore, understanding
the compounding impacts of the above-mentioned stressors is necessary for devising
appropriate adaptation strategies to protect coastal communities against the impacts
of climate change.
These models could be improved and modified with a higher-resolution DEM or proper LiDAR data, site surveying, and bathymetry to recover some of the accuracy that the digital elevation model clearly lacked. In some provinces in Canada, higher-resolution digital elevation models are available in 1 to 2-m grid sizes. Another improvement, for not only the model but also the communities, would be to install precipitation and flow gauges on the rivers so that model inputs and outputs can be compared for more efficient calibration, instead of having to rely on the basin transfer method.
The results of the study indicated that flooding is relatively insensitive to culvert blockages. Infrastructure elements whose capacity can become limited in the event of a storm were
also identified. The results of this study can be used as a guide in identifying priority
areas where infrastructure needs to be upgraded. The majority of the vulnerable
storm-water elements were found in the downstream portions of the watershed.
Implementation of upstream measures such as storage ponds can help to minimize
flooding downstream significantly. Since there are many wetlands and swampy areas
within the study regions, it is recommended that further studies to understand the
dynamics of these areas and water balance within the region be undertaken. It is also
recommended that a 1 m resolution LiDAR data be used to rerun this simulation to
identify flow paths better and refine the model results.
References
Abstract The American Society of Civil Engineers and the Canadian Society for
Civil Engineering have just jointly designated “David Thompson’s Surveying and
Mapping of the Northwest of North America” as an International Historic Civil Engi-
neering Landmark. David Thompson (1770–1857)—surveyor, map-maker, explorer,
and fur trader for both the Hudson’s Bay and North West Companies—is consid-
ered “the greatest land geographer that the world has produced” (Tyrrell in David
Thompson’s narrative of his explorations in Western America. The Champlain
Society, Toronto, 1916, [13]), despite his serious visual impairment. Often accom-
panied by his Métis wife, Charlotte Small, he surveyed and mapped a vast region
stretching from 45°N to 60°N latitude and from the western shores of Hudson Bay
to the Pacific Ocean between 1790 and 1812. His 1814 Great Map, compiled from
his surveys and those of Alexander Mackenzie, Simon Fraser, George Vancouver
and his teacher Philip Turnor, laid the groundwork for development of the North-
west of North America. This paper briefly describes Thompson’s life and remarkable
achievements.
1 Introduction
Employed as a trader with the Hudson’s Bay and North West Companies, David
Thompson (1770–1857) surveyed and mapped a vast region of the Northwest of
North America between 1790 and 1812. His 1814 Great Map, compiled from his
surveys and those of his predecessors, detailed the terrain between 45°N to 60°N
D. R. Gilbert · C. G. Bedford
ASCE History and Heritage Committee, Reston, USA
F. Michael Bartlett (B)
CSCE National History Committee, Surrey, Canada
e-mail: f.m.bartlett@uwo.ca
latitude and from the western shores of Hudson Bay to the Pacific Ocean. The map
laid the groundwork for development of the region.
Consequently, the American Society of Civil Engineers and, concurrently, the
Canadian Society of Civil Engineering have jointly recognized “David Thompson’s
Surveying and Mapping of the Northwest of North America” as an International
Historic Civil Engineering Landmark (or CSCE International Civil Engineering
Historic Site). This paper briefly chronicles the historic significance and unique features of Thompson's life. Gilbert et al. [2] provide a more in-depth summary.
David Thompson was born on April 30, 1770 in Westminster, England. His parents
were Welsh migrants, David and Ann Thompson, from whom he acquired a Welsh accent that he retained throughout his life. The senior David Thompson died
when his son was an infant, leaving the family without a reliable source of income.
Due to the financial hardship that resulted, Ann placed David in the Grey Coat
Hospital, a home for disadvantaged children, in 1777. The hospital operated a school
where Thompson studied practical navigation, surveying, trigonometry, and geom-
etry. He also learned skills that would serve him well in later life, like using nautical
instruments, finding latitudes and longitudes, and making navigational calculations
from observing the sun, moon, and tides and drawing of maps and charts. After seven
years of study at the school, when Thompson was fourteen, the Grey Coat Hospital
paid five pounds to the Hudson’s Bay Company, whereby he became an indentured
servant to the Company. On May 28, 1784 Thompson set sail for North America.
Slightly more than three months later, on September 2, Thompson arrived at Fort
Churchill in what is now Manitoba, where he worked as a clerk to the Governor.
During the next years, he transferred between several different forts and trading
posts—York Factory, Cumberland House, South Branch House, Manchester House,
Buckingham House—as he continued to do secretarial work. He learned to keep
accounts and other records, calculate the value of furs, track supplies, and perform other duties
[7] (Fig. 1).
The trajectory of Thompson’s life changed forever when, in late 1788, he severely
fractured his right tibia in a sled accident at Manchester House. The break did not
mend cleanly, so the following spring others carried him to Buckingham House
for treatment, where he was confined for many months of recovery. Whilst in this
recovery period, he met Hudson’s Bay Company employee and astronomer (as
surveyors were termed at the time) Philip Turnor. Turnor trained Thompson in skills
of astronomical observations more advanced than what he had learned at Grey Coat
Hospital as a boy. Turnor also hoped that Thompson would join him on a survey
of the Athabasca Country, but Thompson's ongoing recovery prevented him from joining
the expedition. Instead, he spent months practicing diligently and, in the process,
became an expert. This time of study kindled in Thompson a passionate interest in
exploration and surveying. He was to write later that his fall and subsequent recovery
had “by the mercy of God turned out to be the best thing that ever happened to me”
[6]. It was also during this time that he lost much of the sight in his right eye, leaving
one of history’s greatest surveyors and explorers with a lifelong visual impairment.
In 1790, Thompson completed the term of service required by the Grey Coat
Hospital payment years earlier and was also recovered enough to strike out on his
own. He acquired a set of surveying tools, and whilst entering the employ of the
Hudson’s Bay Company as a fur trader, he used his spare time to document the survey
observations he had made between Cumberland House and York Factory [9]. He
distinguished himself early as a competent surveyor, completing his first significant
survey near the present Alberta/Saskatchewan border in 1792. He succeeded his
mentor, Turnor, as the Company surveyor two years later. In 1797, Thompson’s
superiors made it known to him, though, that western exploration and mapping
was no longer one of their priorities, which disappointed him greatly. He was so
disappointed, in fact, that he immediately resigned and sought employment with
Hudson’s Bay Company competitor, the North West Company. Unlike the Hudson’s
Bay Company, the North West Company was anxious to expand its western operations
and, eventually, to find a route to the Pacific. This suited Thompson's interests perfectly,
and he would remain with the North West Company for the rest of his fur-trading
career.
During Thompson’s first year with the North West Company, he was assigned to
explore and map the territory of the Assiniboine River, the Mandan villages of North
Dakota, and Upper Mississippi Country. Alexander Mackenzie, a contemporary fur
trader and surveyor, remarked that in this work “Thompson had performed more
in ten months than he expected could be done in two years” [6]. Similar efforts
occupied Thompson during the next years all across modern Alberta, Saskatchewan,
Manitoba, Wisconsin, Minnesota, and North Dakota. In 1804, Thompson became
a full partner in the Company and, in response to the American-backed Lewis and
Clark Expedition, was tasked with identifying a northern route to the Pacific and
establishing a British presence in the fur-rich Columbia River basin [8].
All during these expeditions, Thompson was supported both at home and in
the field by his wife, Charlotte Small Thompson. Charlotte was the daughter of
fellow North West Company trader Patrick Small and his wife, a Cree woman. Small
“capably assisted her husband” and “travelled thousands of kilometres across North
America by foot, canoe, and horseback, helping to map its extensive lands and water-
ways” [10]. The couple had thirteen children during their lives and the marriage,
unlike many such marriages between white trappers and native women, lasted a long
fifty-seven years—the longest known in Canada pre-Confederation. Charlotte was
designated a Canadian National Historic Person in 2008 because she “exemplifies the
many Aboriginal women who shared their lives with fur traders, bringing their knowl-
edge of language, culture, and survival skills to eighteenth- and nineteenth-century
trade and exploration” [10].
After twenty-eight years dedicated to the twin causes of fur trading and surveying,
Thompson retired in 1812. He, Charlotte, and their children settled in Terrebonne,
north of Montreal. Establishing a quiet routine, Thompson set out to complete the
map which he had never had time to complete before. Between 1812 and 1814,
he designed two editions of his map, the second measuring roughly 2 m high by
3.2 m wide (6½ ft by 10 ft) and showing all of the areas he had personally traversed
between 45°N and 60°N latitude from the Hudson Bay to the Pacific Ocean. Not
only the scope of the map, but its detail, showing every North West Company trading
post, was unprecedented. Today, one of Thompson’s two original maps is preserved
at the Archives of Ontario in Toronto.
Thompson’s map and a more detailed atlas that accompanied it were not appreci-
ated in their time. He had hoped to sell them but was unable to find a publisher. This
inability, combined with a succession of unsuccessful financial investments, drove
Thompson to seek further employment. In 1817, he began working for the Interna-
tional Boundary Commission that was tasked with locating the border between the
United States and Canada from Quebec to the Lake of the Woods. It is interesting to consider that, when the Oregon Boundary Dispute occurred between Canada and the United States a half century later, the English negotiators might have argued more forcefully by citing the great extent of Thompson's Columbia surveys and his strong opinion that the Columbia River should be the United States/Canada boundary; had they done so, most of Washington State may well have been part of Canada (Fig. 2).
Thompson continued to work off and on throughout the 1820s and in 1833, at the
age of sixty-three, he found employment with the British American Land Company
to conduct hydrographic surveys for proposed canal projects and exploratory land
surveys. In 1846, Thompson’s visual impairment, which had first developed back
in 1789, became so severe that he could no longer conduct survey work. Active
man that he was, though, rather than discontinue work altogether, he began writing
Terrestrial surveys done by Philip Turnor, Peter Fidler, Alexander Mackenzie, Simon
Fraser and Lewis and Clark were all major undertakings and contributed in some
way to David Thompson’s “Great Map”. In particular, Turnor mapped from Hudson
Bay to the west and south to Lake Superior (1778–94) [5], and Fidler carried on
from Turnor to map Hudson’s Bay Company areas (1789–1811). Mackenzie first
crossed North America (1793) and travelled down the Mackenzie River to the Arctic
Ocean (1796) [3]. Lewis and Clark (1804–1806) were the first non-native explorers
of the Columbia from its confluence with the Snake River to a point near what is
now Vancouver, WA.
Marine surveys conducted previously by the Spanish, James Cook, George
Vancouver, and William Broughton were dissimilar from the terrestrial surveys,
with shorter durations, but were still large and significant. In particular, the Spanish
explored the west coast of North America (1768–91), and Cook made three sweeping
voyages of discovery (1768–1779). Vancouver mapped the west coast of North
America (1792–1795), and Broughton mapped the east coast of Asia (1795–1798).
Thompson did rely heavily on Vancouver’s very extensive detailed mapping of the
west coast of North America to demarcate the westernmost fringes of his map.
The following seven significant historical contributions of David Thompson are high-
lighted in this section: (1) scale of work; (2) date of work; (3) contributions to remote
surveying techniques; (4) travel diary; (5) interaction with indigenous people; (6)
legacy communities; and (7) recognition by others.
David Thompson's field surveying and the resulting map of a 4.9 million km²
(1.7 million sq.mi.) block of the North American continent are of truly great magni-
tude. Over 22 years, he travelled 80,000 km (50,000 miles) by canoe, horseback
and on foot, making hundreds of astronomical observations to determine the true
geographic positions of his employers’ fur-trading posts and major natural land-
marks. Between these, he conducted “tracking surveys”, essentially compass-and-
estimated-distance traverses, to define the detail of river courses, lakes, and moun-
tains, all recorded in daily journals most of which are preserved to this day. At the
same time, he conducted the day-to-day business of fur trading.
The physical hardships that he and his party regularly endured were considerable.
For example, they crossed the Athabasca Pass, Fig. 3, in January 1811 in very difficult
conditions—he recorded the temperature on January 5th as being "−26 [°F; −32 °C] very cold …" [13]. The psychological stress experienced by members of the
party was exacerbated by their fear of hostile natives who had chased them on the
plains and “rumours of a giant, dreadful creature that stalked these intimidating
mountains” [11]. The subsequent descent was so steep that the sleds often outran the
dogs pulling them, crashing into trees and causing frustrating delays. On January 26,
they reached the frozen upper Columbia River, building cabins and a stockade to wait out the winter. In the spring, they constructed a cedar canoe and paddled up river
to its source then down the Kootenai, Pend Oreille, Spokane, and Colville Rivers,
eventually finding the Columbia at Kettle Falls in Washington State and following
the Columbia to its mouth at Astoria, Oregon. The party then returned to Kettle
Falls and explored up the Columbia to the “Great Bend”, making Thompson the first
European to traverse the entire course of the Columbia.
After retiring from field activities and moving his family to Montreal, David
Thompson spent two years compiling his survey data and that of his North West
David Thompson’s Surveying and Mapping of the Northwest of North … 499
Company associates and others, such as Captain George Vancouver and Lewis and
Clark, into a "great map", Fig. 4, measuring 3.15 m by 2.06 m (10′-4″ by 6′-9″) [1]. It
was composed of 25 individual sheets on imperial paper glued together by mucilage
all ordered from London. It depicted in great detail a block of the North American
continent from Hudson Bay to the Pacific Ocean, Longitude 87°W–132°W, and from
parts of the U.S. states of Washington, Idaho, Montana, North Dakota, Minnesota,
and Wisconsin, to areas north of Lake Athabasca, 45°N–60°N. Two original maps
remain, one hanging in the Ontario Archives in Toronto, the other in the National
Archives in London, England [6].
Thompson was originally given three years to produce this map, but William
McGillivray, then Principal of the North West Company, wanted it done in two years.
Thompson delivered it 10 June 1814 and referred to it as “a hasty rough map” [4]. But,
his creation was of great importance to the North West Company. In his book Epic
Wanderer, Canadian historian and writer D’Arcy Jenish describes what McGillivray
might have thought when he first saw the map, “a whole chunk of the continent
lay on the table before him”. The North West Company kept this vital picture of
their widespread enterprise secret for several years, but on being absorbed by the
Hudson’s Bay Company in 1821, the maps were shared with London cartographer
Aaron Arrowsmith who copied and published them giving no credit to Thompson
[4], a great snub to “the greatest land geographer the world has produced”.
Fig. 4 Thompson’s 1814 Great Map of Northwest North America. Source Wikimedia Commons/
Public Domain
David Thompson’s explorations and surveys of the Northwest of North America took
place between 1790, when he wrote the secretary of the Hudson’s Bay Company
in London to request survey instruments upon completion of his apprenticeship,
and 1812, when he retired. When he started, almost no systematic reporting on the
region described by his later map was available. Over the course of the next years,
though, Thompson and his several contemporaries revolutionized the geographic
understanding of the region. These contemporaries included George Vancouver, who
mapped the western coastline in 1792–1795, and the American Corps of Discovery
led by Lewis and Clark in 1804–1806. Thompson also specifically
cited Alexander Mackenzie, John Stewart, and his mentor Philip Turnor as significant
contributors to his understanding of the region.
Whilst compiling his Great Map of the Northwest of North America in 1812–
1814, Thompson noted that it was the culmination of “the surveys and discoveries
of 20 years” (Fig. 4). He had personally contributed to many of those surveys and
discoveries and, combined with those of the other principal surveyors and explorers
of the time, he was able to produce a map which predated any comparable map of
the continent.
David Thompson’s Surveying and Mapping of the Northwest of North … 501
Thompson did not develop new surveying techniques, but successfully adopted
celestial-based methods, originally developed for nautical navigation, to create
remarkably accurate surveys. He took and averaged repeated observations, for
example, taking 10 latitude and 19 longitude readings at Rocky Mountain House,
which are impressively accurate, to within 1.6 km (1 mile) in latitude and 8 km (5
miles) in longitude [12]. He plotted the locations of key features including trading
post sites, river mouths, and mountain passes, on a blank map using their latitude
and longitude. Then, he travelled between these points, creating rough track surveys
to fill in the details, using scaling as necessary to correct distances that were often
overestimated.
His technique to determine latitude was to start measuring the altitude of the
sun before local noon, repeating the process until the sun lost altitude, and then taking
the greatest value in the series as the meridian altitude. He also obtained accurate
measurements of the meridian altitude of the Pole Star at night [12]. The familiar
C. W. Jefferys illustration, Fig. 5, shows Thompson taking an observation using his
sextant and the artificial horizon. He is measuring the angle between a celestial body
and its reflection on a plane, level surface, and dividing this value by two to determine
the altitude. The flask near Thompson's right foot contained mercury, which was
pooled in the base of the device and shielded from the wind by the inclined glasses
to create the reflection plane. Jefferys intentionally shows the measurement being
taken some distance from the adjacent camp, to minimize any vibration by people or
animals. He also erroneously shows Thompson sighting using his right eye—which
was almost blind.
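To make the arithmetic of such an observation concrete, the short sketch below (illustrative only; none of the numbers are taken from Thompson's journals) halves the doubled sextant angle obtained with an artificial horizon and applies the standard noon-sun relation, latitude = 90° − meridian altitude + solar declination, valid for an observer north of the sun; refraction and other small corrections are ignored.

```python
# Illustrative reduction of a noon-sun observation made with an artificial horizon.
# All numeric values are hypothetical; they are not taken from Thompson's journals.

def observed_altitude(sextant_angle_deg: float) -> float:
    """The sextant measures the angle between the sun and its reflection in the
    mercury pool, which is twice the true altitude above the horizontal."""
    return sextant_angle_deg / 2.0

def latitude_from_noon_sun(meridian_altitude_deg: float, declination_deg: float) -> float:
    """Latitude = 90 - meridian altitude + solar declination (observer north of the
    sun).  Refraction, parallax, and semi-diameter corrections are omitted."""
    return 90.0 - meridian_altitude_deg + declination_deg

sextant_reading = 62.0   # degrees between the sun and its reflection (hypothetical)
declination = -17.0      # degrees, taken from an almanac for the date (hypothetical)

altitude = observed_altitude(sextant_reading)              # 31.0 degrees
latitude = latitude_from_noon_sun(altitude, declination)   # 42.0 degrees north
print(f"Meridian altitude {altitude:.1f} deg, latitude {latitude:.1f} deg N")
```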
Thompson determined longitude by first computing the local time from observed
astronomical phenomena, such as the eclipses of Jupiter’s satellites or the relative
location of the moon with respect to background stars. He could then compute the longitude as the difference between the local time and the published Greenwich time of these phenomena, recognizing that one hour equalled 15 degrees of longitude.
Fig. 5 C. W. Jefferys' Illustration of David Thompson Taking an Observation. Source Library and Archives Canada/Charles William Jefferys fonds/c073573
The mathematical calculations for a single observation required three to four hours [12].
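As a worked illustration of that conversion (the event times below are invented, not taken from Thompson's records), the sketch turns the difference between the Greenwich and local times of a predicted event into a longitude at 15 degrees per hour.

```python
# Longitude from the time difference between Greenwich and local observations of
# the same predicted event, e.g., an eclipse of one of Jupiter's satellites.
# The event times below are invented for illustration only.

def longitude_west_deg(greenwich_hours: float, local_hours: float) -> float:
    """Each hour by which local time lags Greenwich time corresponds to
    15 degrees of longitude west of Greenwich."""
    return 15.0 * (greenwich_hours - local_hours)

greenwich_time = 22.50   # 22:30, almanac-predicted Greenwich time of the event (hypothetical)
local_time = 15.25       # 15:15, local apparent time of the observed event (hypothetical)

print(f"Longitude: {longitude_west_deg(greenwich_time, local_time):.2f} deg W")  # 108.75 deg W
```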
It was decades later, in the 1890s [9], that the unfinished manuscript was acquired and published by the Canadian mining geologist Joseph Tyrrell, becoming a valuable source for research in many disciplines.
Thompson also strongly opposed the practice of trading posts exchanging alcohol
for furs [11].
David Thompson’s Surveying and Mapping of the Northwest of North … 503
David Thompson created trading posts that have evolved to become permanent
communities. In Canada, Kootenae House is now Invermere BC, and Rocky Moun-
tain House remains a town in AB. In the United States, Spokane House is now
Spokane WA, and Saleesh House is now Thompson Falls, MT.
An online search [2] has yielded photographs of 21 plaques, sculptures, and signs commemorating David Thompson and his achievements, including: eight Historic Sites and Monuments Board of Canada plaques; two Archaeological and Historic Sites Board of Ontario plaques; three British Columbia Department of Recreation and Conservation interpretive signs; two Idaho Historical Society/Idaho Transportation Department interpretive signs; and six plaques or signs created by others.
Monuments commemorating David Thompson are listed in Table 1.
The Association of Canada Land Surveyors has awarded the “David Thompson
National Geomatic Awards” annually since 2007.
There are David Thompson Secondary (High) Schools in Vancouver and Inver-
mere, BC, and Sylvan Lake and Condor, AB; David Thompson Middle (Junior
High) Schools in Winnipeg, MB and Calgary, AB, and a David Thompson Primary
(Elementary) School in Kamloops, BC.
5 Plaque Citation
David Thompson (1770–1857)—surveyor, map-maker, explorer, and fur trader for Hudson’s
Bay and North West Companies—was, despite his serious visual impairment, “the greatest
land geographer that the world has produced”, according to J.B. Tyrrell. Often accompanied
by his Métis wife, Charlotte Small, he surveyed and mapped a vast region stretching from
the 45th parallel to the 60th parallel and from the western shores of Hudson Bay and Lake
Superior to the Pacific Ocean between 1790 and 1812. His 1814 Great Map, compiled from
his surveys and those of Alexander Mackenzie, Simon Fraser, George Vancouver, and Philip
Turnor, laid the groundwork for development of the Northwest of North America.
Fig. 6 Thompson Monument in Verendrye, North Dakota. Attribution: the original uploader was Elcajon farms at English Wikipedia; transferred from en.wikipedia to Commons, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15128885
David Thompson’s Surveying and Mapping of the Northwest of North … 505
It is envisaged that the plaque will be unveiled at the CSCE 2022 Annual Confer-
ence in Whistler. It will then be permanently displayed at the Fort William Historical
Park in Thunder Bay ON. Funding is being sought to mount identical plaques at
Invermere BC, where Thompson founded Kootenae House and a statue commem-
orating Thompson and Charlotte Small is located, and at Fort Carlton Provincial
Park, SK, which is near where Thompson experienced his life-changing injury in
1788. The ASCE Montana and Minnesota Sections, co-sponsors of this initiative,
also intend to erect identical commemorative plaques in their regions.
6 Conclusions
The American Society of Civil Engineers and the Canadian Society for Civil Engi-
neering have just jointly designated “David Thompson’s Surveying and Mapping
of the Northwest of North America” as an International Historic Civil Engineering
Landmark. David Thompson (1770–1857)—surveyor, map-maker, explorer, and fur
trader for both the Hudson’s Bay and North West Companies—is considered “the
greatest land geographer that the world has produced” [13], despite his serious visual
impairment. Often accompanied by his Métis wife, Charlotte Small, he surveyed
and mapped a vast region stretching from 45°N to 60°N latitude and from the
western shores of Hudson Bay to the Pacific Ocean between 1790 and 1812. His
1814 Great Map, compiled from his surveys and those of Alexander Mackenzie,
Simon Fraser, George Vancouver and his teacher Philip Turnor, laid the groundwork
for development of the Northwest of North America. This paper has briefly described
Thompson’s life and remarkable achievements.
References
1. Archives of Ontario and Carter-Edwards, Dennis (2015) David Thompson map maker
explorer and visionary. http://www.archives.gov.on.ca/en/explore/online/thompson/index.
aspx. Accessed 28 Sep 2021
2. Gilbert DR, Bartlett FM, Bedford CR (2021) Nomination of David Thompson’s surveying and
mapping of the Northwest of North America as an ASCE/CSCE International Civil Engineering
Historic Landmark. Report submitted to the ASCE History & Heritage Committee and CSCE
National History Committee
3. Hayes D (2001) First crossing: Alexander Mackenzie, his expedition across North America,
and the opening of the continent. Sasquatch Books, Seattle
4. Jenish D (2009) Epic Wanderer: David Thompson and the mapping of the Canadian West.
University of Nebraska Press, Lincoln NE
5. Mitchell B (2017) Mapmaker: Philip Turnor in Rupert’s Land in the age of enlightenment.
University of Regina Press, Regina
6. Moreau WE (ed) (2009) The writings of David Thompson, volume 1: the travels, 1850 version.
McGill-Queens University Press, Montreal and Kingston; University of Washington Press,
Seattle; Champlain Society, Toronto
7. Nisbet J (2005) The mapmaker’s eye: David Thompson on the Columbia Plateau. Washington
State University Press, Pullman WA
8. Nisbet J (1994) Sources of the river: tracking David Thompson across Western North America.
Sasquatch Books, Seattle
9. Nicks J (1985) David Thompson. The dictionary of Canadian biography volume VIII (1851–
1860). http://www.biographi.ca/en/bio/thompson_david_1770_1857_8E.html. Accessed 25
Oct 2021
10. Parks Canada (2008) Small Thompson, Charlotte, National Historic Person. Directory
of Federal Heritage Designations. https://www.pc.gc.ca/apps/dfhd/page_nhs_eng.aspx?id=
12004. Accessed 22 Jan 2022
11. Shoalts A (2017) A history of Canada in ten maps. Penguin Random House Canada Limited,
Toronto
12. Smyth D (1981) David Thompson’s surveying instruments and methods in the Northwest
1790–1812. Cartographia 18(4):1–17
13. Tyrrell JB (ed) (1916) David Thompson’s narrative of his explorations in Western America. The
Champlain Society, Toronto. https://archive.org/details/davidthompsonsna12thom. Accessed
01 Aug 2021
A Brief History of the Kinsol Trestle
Abstract The Canadian Society for Civil Engineering will designate the Kinsol
Trestle near Shawnigan Lake, BC, as a National Historic Civil Engineering Site in
2022. Canadian National Railways completed this massive structure, also known as
the Koksilah River Trestle, in 1920. It is noteworthy for: (1) the scale and complexity
of its original design and construction; (2) the operational and engineering challenges
during its long railway service life; and, (3) the innovative rehabilitation design and
construction to repurpose the trestle and extend its service and heritage value on the
Cowichan Valley Trail, which is part of the Trans Canada Trail. Built on a seven-degree curve, it is 44 m high and 187 m long, and it remains today one of the largest and highest wooden trestle bridges in Canada, representing an enormous feat of
engineering and construction. It provided rail service and contributed to development
of Vancouver Island for close to 60 years—the last train crossing was in June 1979.
Rehabilitation of the trestle, completed in 2011, involved the replacement of 17 of
the 46 bents using all-new wood and erecting under-slung custom-built steel trusses
to “bridge” between the active bents. The remaining 29 original bents are simply
left in place as inactive, non-load-bearing elements. This paper briefly describes the
history of this remarkable structure.
K. Baskin
CSCE 2022 Annual Conference, Whistler, Canada
F. Michael Bartlett (B)
CSCE National History Committee, Western University, London, Canada
e-mail: f.m.bartlett@uwo.ca
1 Introduction
The Canadian Society for Civil Engineering (CSCE) will designate the Kinsol Trestle as a National Historic Civil Engineering Site in 2022.
The trestle is located on Vancouver Island, approximately 4 km northwest of the
town of Shawnigan Lake, British Columbia, Fig. 1.
Canadian National Railways (CNR) completed the Kinsol Trestle, across the
Koksilah River, near Shawnigan Lake, British Columbia, in 1920. As shown in Fig. 2,
it features a seven-degree horizontal curve, with a length of 187 m (614 ft.) and a
maximum height of 44 m (145 ft.). It is a testament to the skill of those involved to
conceive, design, and then construct a civil engineering project of this magnitude,
across a deep ravine in a relatively remote location, using locally available timber
and labor.
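For readers unfamiliar with railway curve notation, the short sketch below converts the seven-degree curve to an approximate radius. It assumes the North American chord definition of degree of curve (the central angle subtended by a 100 ft chord); the paper does not state which definition applies, so the result is indicative only.

```python
import math

# Convert a railway "degree of curve" D to a radius, assuming the chord definition:
# a D-degree curve turns D degrees of central angle over a 100 ft chord, so
# R = 50 / sin(D/2) in feet.  (The arc definition, R = 5729.58 / D, gives a nearly
# identical value at this gentle curvature.)

def radius_from_degree_of_curve_ft(degree: float) -> float:
    return 50.0 / math.sin(math.radians(degree / 2.0))

degree_of_curve = 7.0  # value reported for the Kinsol Trestle
radius_ft = radius_from_degree_of_curve_ft(degree_of_curve)
print(f"Radius: about {radius_ft:.0f} ft ({radius_ft * 0.3048:.0f} m)")  # roughly 819 ft (250 m)
```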
The trestle provided rail service and contributed to the development of Vancouver
Island for almost 60 years before the last train crossed it in June 1979. It is thought to
be the highest and largest surviving timber trestle in Canada and one of the highest
and largest in the world. It embodies the distinctive characteristics of large ravine
railway crossings using wooden trestle bridges of the early 1900s.
Vancouver
Vancouver
Island
Kinsol
Trestle
Victoria
In 1918, the federal government, which had effectively taken control of the Canadian Northern Railway (CNoR) and its subsidiary, the Canadian Northern Pacific Railway (CNoPR), in 1917,1 resumed
construction of the line. Timber prices fell at the end of the war, so the line was
downgraded to be a logging railway with wood trestles, rather than a main line with
steel bridges [5].
By 1919, rail line construction from Victoria reached Mile 52.5 (km 84) at the
Koksilah River. The following year, local laborers completed the 23 north approach
bents and the high-level deck Howe Truss over the Koksilah River. This twelve-
panel deck truss, spanning 44.8 m (147 ft.) between timber towers, is clearly visible
in Fig. 3. The timber was supplied by the Canadian Western Lumber Company.
Canadian National Railways (CNR) completed trestle construction in February
1920. The bridge engineer was William Walkden from the CN Bridge Engineering
Office in Winnipeg. The District Engineer was D. O. Lewis in Victoria, and the Chief
Engineer was T. H. White.
The trestle was officially named the Koksilah River Trestle; its popular name, the
Kinsol Trestle, refers to the nearby Kinsol Station which in turn takes its name from
1 The Canadian Government more formally took control on September 6, 1918, when Mackenzie
and Mann resigned from the Canadian Northern Railways Board of Directors and were replaced
by a government-appointed board. On December 20, 1918, a Privy Council Order directed that the
Canadian Northern Railway and Grand Trunk Railway be merged and managed under the name
Canadian National Railways, though the merger was not formalized until Parliament passed the act
to incorporate the CNR on January 20, 1923 [10].
the KINg SOLomon copper mine. The copper mine was not particularly successful—
Wikipedia [11] describes it as a:
mining venture grandiosely named “King Solomon Mines”, a very small mining venture that
produced 18 t (20 short tons; 18 long tons) or 18,000 kg (40,000 lb) of copper and 6,300 g
(200 Ozt) of silver from 254 t (280 short tons; 250 long tons) of ore — hardly enough to fill
3 rail cars — over the period 1904–1907.
There were many wooden trestle bridges in the USA and Canada in the late 1800s
and the early 1900s. Many of these were constructed as temporary bridges using the
readily available timber as this was an inexpensive technique to get the rail lines up
and running (and so earning income) quickly. Over time, they were then replaced
with more permanent structures.
Standards and specifications for wooden railway trestles were in the early stages
of development at the time of the design of the original Kinsol Trestle. An impor-
tant publication of that period was A Treatise on Wooden Trestle Bridges and their
Concrete Substitutes According to the Present Practice on American Railroads, by
Wolcott C. Foster. The first edition was published in 1891, and the fourth edition was
published in 1913.
Foster [6] provides the following insight about the design and construction of
timber trestles at that time:
However, a well-built trestle of good material will last a long time, depending to a certain
extent on climatic conditions. If properly designed and cared for they form an efficient portion
of the roadway. They require constant watching; and the moment any sign of weakness or
injurious amount of decay appears it should be remedied immediately. The inspection should
be regular and frequent, and placed in careful, trustworthy, and competent hands.
A few engineers have advocated the use of mathematics in the designing of trestles, but
as wood is an article whose strength and properties vary rather widely with every piece, no
dependence whatever can be placed on the results, and such practice is to be condemned. It
is far wiser to merely follow one’s judgment and the results of the experience of others as
to the proper proportioning of the various parts, gained from experience in dealing with the
wood, than to follow any special set of mathematical formulas.
Foster [6] also acknowledges the efforts of the American Railway Bridge and
Building Association and the American Railway Engineering and Maintenance of
Way Association to improve the design criteria and specifications for wooden trestle
bridges in the early 1900s. The Manual of Recommended Practice for Railway Engi-
neering and Maintenance of Way included a section on Wooden Bridges and Trestles
for the first time in 1907.
The design and construction codes, standards and practices for railway bridges
have evolved extensively over the last century since the Kinsol Trestle was originally
designed and built. The fundamentals of using good materials, proper design and
constant watching still apply today. The safe performance of the Kinsol Trestle as
an active railway bridge for almost 60 years clearly demonstrates that it was prop-
erly designed, constructed, and maintained by the engineers and workers involved,
starting with the standards and practices of the day back in the early 1900s and up
to its use as a trail bridge today.
By the end of 1922, the rail line had reached the south end of Cowichan Lake, half
its originally intended length [7]. The following year, daily passenger and twice-
weekly freight service began. By 1925, a new rail line was also constructed on CN’s
“Tidewater Subdivision” that ran from the mainline approximately 10 km (6 mi)
north of the trestle, at Deerholme, eastward to Cowichan Bay. Within one year, there
were five new timber companies connecting to the line. By 1928, the line had been
extended north to Youbou and Kissinger at the northwest head of Lake Cowichan,
but it was never completed to Port Alberni as had originally been intended. Several
logging railways were constructed on the right-of-way between Youbou and Port
Alberni, but the gap was never closed [9]. Freight consisted mainly of logs and sawn
lumber.
The Koksilah River Flood of 1931 caused extensive damage to the trestle, which
CNR repaired. In that year, William Walkden, the bridge engineer for the 1920 design,
designed a major rehabilitation as shown in Fig. 4. It featured six parallel low-level
eight-panel Howe Trusses spanning 29 m (95 ft.) across the Koksilah River. The
rehabilitation was completed in 1934, Fig. 5, under the direction of C. G. MacKenzie,
CNR Assistant and Resident Engineer. New trestle bents, two decks high, supported
the original high-level trusses on the new low-level trusses. A note on the drawing,
Fig. 6, states:
When the life of the existing structure is exhausted, intermediate bents are to be added above
the new trusses and all bents built up to full height as shown in sketch above [Figure 4], old
trusses being removed.
In accordance with this note, the high-level truss and the trestle bents that
supported it were replaced in 1936 with six-deck-tall trestle bents, Fig. 5.
In Fig. 7, the eight-panel low-level Howe Truss spanning the river is just visible
at the lower right, supporting the six decks of trestle above it.
Figure 5 shows part of CN Drawing R-518-36, which documents the history of
the reconstruction of the trestle. The last major rehabilitation of the trestle occurred
in 1958 when Bents 36-45 were replaced. The decision to carry out this work is
significant because, by the 1950s, trucks began to replace railways as log haulers
on Vancouver Island. The drawing was approved by L. R. (Ralph) Morris (c. 1925–
2011), who served as a CN Bridge and Structures Engineer from 1961 to 1987.
Based in Edmonton, he retired as the Regional Engineer Bridges and Structures of
the Mountain Region, that included Vancouver Island.
Fig. 4 1931 Rehabilitation drawing showing low-level and high-level Howe Trusses. Source Ralph Morris fonds, Cowichan Valley Museum and Archives
Fig. 5 Part of CN Dwg No R-518-36 showing construction history and terminology. Source Ralph Morris fonds, Cowichan Valley Museum and Archives
Fig. 6 1931 Rehabilitation drawing, notes, and title block. Source Ralph Morris fonds, Cowichan Valley Museum and Archives
The Canadian National Railway repaired the trestle for the last time in 1973–74.
The last train crossed the trestle on June 20, 1979.
In 1980, CNR abandoned the Kinsol Trestle. Four years later, the Province of British
Columbia acquired the CNR right-of-way, including the trestle, and the first of several
structural assessments and feasibility studies for preservation and/or reuse was under-
taken. In 1988, a fire burnt a portion of the trestle. The end portions were subsequently
removed to prevent access, Fig. 8.
Between 1999 and 2008, the Cowichan Valley Regional District (CVRD) commis-
sioned several studies regarding the rehabilitation of the Kinsol Trestle and to assess
the economic impact of the investment. A 2008 report [5] induced the CVRD to
rehabilitate the trestle and integrate it into the Cowichan Valley Trail which forms
part of the Trans Canada Trail. The Trans Canada Trail initiative repurposed portions
of the CNR rail corridor in the late 1990s but the lack of access across the Kinsol
Trestle necessitated a significant 8.5 km trail detour.
Ralph Morris graduated in Civil Engineering from the University of Manitoba, and
subsequently had a distinguished career with the Canadian National Railway from
1947 until his retirement in 1987. He served as Regional Engineer of Bridges and
Structures in Edmonton, starting in 1961, where he was responsible for design,
maintenance and construction of railway bridges, tunnels, retaining walls, docks and
wharves. He was an active member of APEGGA, AREA, IABSE, EIC and chaired
the CSCE Western Region History Committee. He was made a Fellow of CSCE in
1994 [3].
Ralph Morris passed away in Edmonton on April 18, 2011, just three months
before the Kinsol Trestle was reopened as part of the Cowichan Valley Trail/Trans
Canada Trail.
7 Comparable Structures
The spectacular timber trestles in the Myra Canyon Section of the Kettle Valley
Railway in the southern interior of BC are another example of this type of railway
trestle bridge design and construction. Like the Kinsol Trestle, the Myra Canyon
trestles have been rehabilitated and are part of the Trans Canada Trail.
The Canadian Society for Civil Engineering designated the Kettle Valley Railway, which includes the Myra Canyon Trestles, as a National Historic Civil Engineering Site in 1988. This designation recog-
nizes that the location, layout, and construction of the Kettle Valley Railway through
the Myra Canyon constitutes an outstanding engineering achievement, employing
conventional technologies in highly imaginative and ingenious application in routing
and constructing a railway in mountainous terrain. This section of track carried
railway traffic from its completion in 1914 to its closure in 1978. It was subsequently
developed for use as part of the Trans Canada Trail.
8 Historic Significance
The Kinsol Trestle is one of the few accessible and visible reminders of a distinctive
type of railway bridge construction used to span waterways, ravines, and other chal-
lenging terrain in the early development of the Province of British Columbia and its
mining and logging industries.
It is envisaged that a CSCE plaque will be unveiled at the Whistler 2022 CSCE
Annual Conference and subsequently erected on site with the following text:
In tribute to the skill of those who conceived, designed, and constructed this significant civil
engineering structure. Completed in 1920 and managed under the direction of Canadian
National Railways’ civil engineers, this trestle remains the highest and one of the largest
surviving timber rail trestles in Canada. It is representative of the distinctive railway engi-
neering that overcame challenging terrain in remote locations to develop British Columbia,
particularly its mining and logging industries. The trestle has been preserved through the
innovative engineering used in its 2011 rehabilitation to carry the Trans Canada Trail.
10 Conclusions
The Kinsol (Koksilah River) Trestle deserves its recognition as a CSCE National Historic Civil Engineering Site. It is the highest and one of the longest surviving
timber trestles in Canada and symbolizes the distinctive railway construction used to
span waterways, ravines and other challenging terrain to develop British Columbia
and its mining and logging industries.
References
1. Anon. (1911) Orders of the Railway Commissioners of Canada. The Canadian Engineer
21(12):356
2. Canada’s Historic Places (2009) Kinsol Trestle. https://www.historicplaces.ca/en/rep-reg/
place-lieu.aspx?id=18478. Accessed 26 Feb 2022
3. Canadian Society for Civil Engineering (CSCE) (1994)
Presenting the 1993–94 CSCE honours awards and fellowships. CSCE, Montreal
4. Canadian Standards Association (CSA) (2006) Canadian highway bridge design code (CAN/
CSA S6-06). Canadian Standards Association, Toronto, Ontario
5. Commonwealth Historic Resource Management Limited (2008) Kinsol Trestle restoration
feasibility study—phase I: final report. Report submitted to Cowichan Valley Regional District,
26 Feb
6. Foster WC (1913) A treatise on wooden trestle bridges and their concrete substi-
tutes according to the present practice on American Railroads, 4th edn. Wiley,
New York. https://openlibrary.org/books/OL7101900M/A_treatise_on_wooden_trestle_brid
ges_and_their_concrete_substitutes. Accessed 30 Jan 2022
7. Passfield RW (1991) The Koksilah River (Kinsol) Trestle, Vancouver Island, BC. Report
submitted to the Historic Sites and Monuments Board of Canada
8. Roden B (2015) Past, present & beyond—the Last Spike for the Canadian Northern Pacific
Railway. The Ashcroft-Cashe Creek Journal, January 28. https://www.ashcroftcachecreekjou
rnal.com/community/past-present-beyond-the-last-spike-for-the-canadian-northern-pacific-
railway/. Accessed 04 Feb 2022
9. Turner RD (1997) Vancouver Island Railroads, 2nd edn. SONO NIS Press, Victoria, British
Columbia
10. Wikipedia (2022a) Canadian Northern Railway. https://en.wikipedia.org/wiki/Canadian_Nor
thern_Railway#Western_Canada_expansion. Accessed 04 Feb 2022
11. Wikipedia (2022b) Kinsol Trestle. https://en.wikipedia.org/wiki/Kinsol_Trestle. Accessed 05
Feb 2022
Performance of VIP Insulated Building
Envelope in Extreme Cold Climate
Abstract Energy use per person in Canada is among the highest in the world. Build-
ings consume about one-third of our total energy demand. This ratio is even higher
in extremely cold climate regions. The most affordable and effective way to reduce
energy consumption in buildings is to develop highly insulated building envelopes.
There are several high-performance thermal insulations such as polymeric foam,
aerogel, and vacuum insulation panel (VIP). Among these insulations, VIP offers
at least five times the higher thermal insulating capacity than others. This unique
characteristic of VIP makes it an ideal candidate for applications in the construction
of new and retrofitted building envelopes. However, the uncertainty about the service
life of VIPs in exterior building envelope applications is the issue yet to be addressed
conclusively by the researchers and engineers. In recent years, researchers across
the world have worked to address this issue, and several studies involving laboratory
investigations and numerical modeling have been reported, but the lack of real-life
field performance data is a significant impediment. This paper presents the critical
analysis of the results from a field study conducted over 11+ years in Whitehorse,
Yukon, Canada.
1 Introduction
Canada is known as a nation where per capita energy consumption is among the
highest in the world [1]. As elsewhere in the world, climate change is happening in Canada, and it is the greatest threat to human existence in our recorded history. It is primarily an issue of too many greenhouse gases (76% of which is carbon dioxide), generated and emitted into the atmosphere by human activities. In 2015, buildings in Canada emitted nearly 111 Mt CO2e, accounting for 17% of the country's total greenhouse
gas emissions [2]. Hence, the reduction of greenhouse gas emissions from buildings
is a top national priority. Primarily, there are two major strategic paths to increase
energy efficiency and reduce the carbon footprint of buildings: (i) high-performance
building envelopes and (ii) high-efficiency appliances and equipment. While it is true
that the second path (i.e., energy-efficient equipment and appliances) can reduce the
carbon footprint of buildings and be less disruptive to existing buildings, the first
path (i.e., energy-efficient building envelopes) is a preferred option for policymakers
and strategic thinkers for its ability to offer a long-term solution which also helps
to reduce the size of heating and cooling appliances. A high-performance building
envelope includes highly insulated walls, roofs, floors, doors, windows, and skylights,
minimum thermal bridging, and airtightness. Among all these options, the addition of
more insulation to both new and existing buildings appears to be a very cost-effective
solution [3]. Hence, the search for high-performance thermal insulation for exterior
building envelopes is a priority for the building envelope construction industry in
Canada and beyond. However, as can be seen in Fig. 1, the development of high-
performance thermal insulation is not a new phenomenon. Starting from a still air layer with an R-value1 of 1 per inch, the thermal resistivity of insulation materials has increased gradually over the last century. However, as shown in Fig. 1,
a vacuum insulation panel (VIP) is the insulation that stands out among all thermal
insulation materials. VIP has a thermal insulating capacity that is 5–10 times higher
than other traditional thermal insulation materials.
The concept of vacuum insulation was first introduced by Sir James Dewar in
1892 [4]. The most commonly used form of vacuum insulation is VIP, and it has
three major components: (i) gas barrier, (ii) core material, and (iii) getter/desiccant
(Fig. 2). The building envelope construction industry is looking forward to inte-
grating this unique insulation panel in new and existing buildings and reducing their
carbon footprints. The introduction of new materials in the Canadian construction
industry has its challenges and processes, which are based on sound scientific and
engineering principles. In general, while satisfactory short-term performance leads
to the introduction of new materials or systems, it is the uncertainty regarding the
long-term performance that impedes the entry of a novel material or system into
the building construction industry. Unlike most manufactured entities around us,
the utility of modern buildings does outlive our lifespan; hence, it is no surprise
that building envelope designers are concerned about the long-term performance of
novel building materials and systems. Though significant strides have been made in
1 ft2 ·°F·h/BTU.
the performance assessment of VIPs for building envelope applications, some ques-
tions about long-term performance in field applications are yet to be answered. This
paper presents the results from a field study conducted over 11+ years in White-
horse, Yukon, Canada, where VIPs were used to retrofit an exterior wall. It is hoped
this unique set of observations will help to address some of the concerns about the
long-term performance of VIPs in an extremely cold climate.
Fig. 2 Schematic of a vacuum insulation panel showing the gas barrier/facer foil, core material, getter/desiccant, and seam joints
Fig. 3 Schematic diagram of the post-retrofit wall cross-section and the temperature sensors’
locations [6]
A part of an existing building (one wall only) was retrofitted with VIPs. The wall had a
concrete exterior surface. The pre-retrofit thermal resistance of the existing wall was
≈ 3.5 m2 K/W. The glass fiber core VIPs were of size 560 mm × 460 mm × 12 mm,
and the measured center-of-the-panel thermal conductivity was 0.0034 W/m K. VIPs
were sandwiched between two layers of 25 mm extruded polystyrene (XPS) boards.
Complete construction details of the retrofitted wall, along with the strategically
installed temperature sensors, are shown in Fig. 3. The concerns regarding moisture
management in the retrofitted wall assembly were addressed through an appropriate
design strategy [5]. The wall retrofit construction was completed during the last
quarter of 2011. The photographs taken during different phases of construction are
shown in Fig. 4.
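A rough series-resistance check helps put these numbers in context. The sketch below sums layer resistances (RSI = thickness / conductivity). Only the VIP conductivity, the layer thicknesses, and the pre-retrofit RSI come from the paper; the XPS conductivity is a typical handbook value, and contributions from interior finishes, air films, and fasteners are ignored, so the total is indicative only.

```python
# Approximate thermal resistance of the retrofitted wall, treated as layers in series.
# The XPS conductivity (~0.029 W/m.K) and the neglect of films, finishes, and
# thermal bridging are assumptions; the other inputs are quoted in the text.

layers = [
    # (name, thickness in m, conductivity in W/m.K, RSI given directly in m2.K/W)
    ("existing wall (pre-retrofit)", None,  None,   3.5),
    ("XPS, outer layer",             0.025, 0.029,  None),
    ("VIP, glass fiber core",        0.012, 0.0034, None),
    ("XPS, inner layer",             0.025, 0.029,  None),
]

total_rsi = 0.0
for name, thickness, conductivity, rsi in layers:
    r = rsi if rsi is not None else thickness / conductivity
    total_rsi += r
    print(f"{name:28s} RSI = {r:5.2f} m2.K/W")

print(f"Estimated total RSI ~ {total_rsi:.1f} m2.K/W "
      f"(imperial R ~ {total_rsi * 5.678:.0f} ft2.F.h/BTU)")
```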
3 Field Observations
The field observations were based on the infrared thermal images taken at regular
intervals (2011, 2014, 2016, 2018, and 2022) and recorded temperatures at different
locations of the retrofitted wall cross-section (see Fig. 3) over 11+ years. Thermal
images in Fig. 5 show no abrupt change in color in the areas where VIPs were
installed, and this observation confirms that all VIPs installed in 2011 remain intact
after 11+ years of exposure. This is indeed a very significant and unique outcome
Fig. 5 Infrared images taken in 2011 (pre-retrofit), 2014, 2015, 2016, 2018, and 2022
resistivity increases with the decrease of temperature). These recorded field obser-
vations on the thermal effectiveness of XPS-VIP-XPS sandwich panel for building
retrofit applications over 11+ years are a unique set of field test data that can help to
build confidence in the long-term performance of VIPs in the extremely cold Canadian climate.
4 Conclusions
The observations presented in this paper focus on the long-term in-situ performance
of vacuum insulation panels (VIPs) used for external building retrofitting in the
extremely cold climate of Canada (Whitehorse, Yukon). The following conclusions can
be drawn from these observations:
1. Foam-VIP-Foam (i.e., XPS-VIP-XPS) sandwich panels can be effectively
installed and used for exterior energy retrofit of existing buildings in an extremely
cold climate.
Fig. 6 Contribution of each insulation layer as a percentage of total temperature drop across the composite (XPS-VIP-XPS) exterior insulation, Winter 2012 through Winter 2022
2. As determined through infrared thermal imaging, not a single VIP failed in the
retrofitted wall during 11+ years of field exposure.
3. Aging of the glass fiber core VIPs over 11+ years reduced their thermal insulating effectiveness by about 0.8 percent per year (a minimal estimation sketch follows this list).
4. These unique field performance data would be useful to develop confidence in
the long-term performance of VIPs in building envelope retrofit applications.
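The annual degradation rate quoted in item 3 can be estimated by fitting a straight line to yearly values of a thermal-performance indicator, in the spirit of the trend line in Fig. 6. The sketch below uses invented values chosen only to illustrate the bookkeeping; it is not a reproduction of the study's regression, and the rate it returns depends entirely on those placeholder numbers.

```python
import numpy as np

# Least-squares estimate of an annual degradation rate from yearly winter readings of
# a performance indicator (e.g., the VIP layer's share of the total temperature drop).
# The readings below are invented for illustration; they are not the study's data.

years = np.array([2012, 2014, 2016, 2018, 2020, 2022], dtype=float)
indicator = np.array([0.62, 0.61, 0.60, 0.59, 0.58, 0.57])  # fraction of temperature drop

slope, intercept = np.polyfit(years, indicator, 1)
annual_loss_pct = abs(slope) / indicator[0] * 100.0
print(f"Trend: {slope:+.4f} per year (about {annual_loss_pct:.1f}% of the initial value annually)")
```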
Acknowledgements The authors would like to acknowledge the support of Panasonic Canada Inc.
and Panasonic Corporation for supplying the vacuum insulation panels for this project. The authors
also acknowledge the National Research Council Canada (NRCC) for supporting the first five years
of this project. The authors are grateful to Yukon Housing Corporation for their continuing interest and support. The financial and technical support provided by Yukon Research Centre, Yukon College, and the Energy Solutions Centre is also gratefully acknowledged.
References
1. BP plc (2021) Statistical review of World Energy 2021, 70th edition. https://www.bp.com/con
tent/dam/bp/business-sites/en/global/corporate/pdfs/energy-economics/statistical-review/bp-
stats-review-2021-full-report.pdf
2. Senate. Standing Committee on Energy, the Environment and Natural Resources (2018)
Reducing greenhouse gas emissions from Canada’s built environment, (The Honourable Rosa
Galvez, Chair, The Honourable Michael L. MacDonald, Deputy Chair). https://sencanada.ca/
content/sen/committee/421/ENEV/reports/ENEV_Buildings_FINAL_e.pdf
3. McKinsey & Company (2010) Energy efficiency: a compelling global resource. https://www.
mckinsey.com/~/media/mckinsey/dotcom/client_service/sustainability/pdfs/a_compelling_glo
bal_resource.ashx
4. Bragg W (1940) History of the vacuum flask. Nature 145:408–410
5. Mukhopadhyaya P, MacLean D, van Korn J, Reenen D, Molleti S (2014) Building application
and thermal performance of vacuum insulation panels (VIPs) in Canadian subarctic climate.
Energy Build 85:672–680. https://doi.org/10.1016/j.enbuild.2014.08.038
6. Chan VT, Ooms T, Korn M, MacLean J, Mooney D, Andre S, Mukhopadhyaya P (2019) Critical
analysis of in situ performance of glass fiber core VIPs in extreme cold climate. Front Energy
Res 1–7
Performance Assessment of the Harmless
Home
1 Introduction
The construction sector has significant potential to mitigate climate change because of its tremendous contribution to worldwide carbon dioxide (CO2) emissions [21]. Moreover, governments worldwide are demanding improvement and transparency in building performance assessment to ensure that buildings emit significantly less GHG to the environment. In Canada, several sustainability agendas and policies, such as the Energy Step Code in British Columbia and the Build Smart strategy at the federal level, were inspired by the goals of reducing GHG emissions while creating a better occupant experience [12, 14].
There are widely used commercial benchmark tools for green building rating
systems such as Leadership in Energy and Environmental Design (LEED), Building
Research Establishment Environmental Assessment Methodology (BREEAM),
WELL Building Standard (WELL), High Quality of Environment Certification
(HQE), and others. Most of these benchmarks emphasize energy and resource efficiency in building performance [22]. Doing things more efficiently to save energy and carbon remains a technical challenge, with an economic payback that depends on the construction. However, green buildings must not only use natural resources in an economically viable way; they must also support occupants' health and wellbeing to contribute to the building's sustainability.
Housing has evolved into a unique and differentiated product that requires consideration of consumers' needs and desires. Repeated failures of housing projects arise, in part, from the absence of feedback and lessons learned from the end-users' or occupants' perspectives [7]. These failures can cause an enormous loss of the investment made in housing development and, in extreme cases, can lead to serious injury or even death.
Hence, it is necessary to establish a systematic feedback method to obtain lessons from the occupants and ensure quality and value for their money [18]. Meir et al. [11] recommend that conducting post-occupancy evaluations is necessary to achieve sustainable results. Since the 1960s, post-occupancy evaluations have been used to document building successes and failures. The purpose of a post-occupancy assessment is to obtain information for effective management of the current housing stock and to feed forward lessons that enhance future projects in the following areas: planning, design, and construction [2]. Li et al. [9] confirmed that more than 140 POE projects are available worldwide. However, POEs of residential housing have only attained significant recognition in the last two decades [8].
The current research aims to carry out a performance assessment of a house constructed near Victoria, British Columbia, called the Harmless Home. This research also presents an overview of the literature to serve as background and a source of reference for ongoing study in housing evaluation research and its related subfields. It covers a summary of POE, including its methods and procedures, and presents an overview of the performance elements and approach for the holistic assessment of housing. As the Harmless Home aims not to harm the environment, this research considers indoor environmental quality (IEQ), which has received tremendous attention in recent years. Moreover, the home's fully sustainable water/electric/septic system includes solar panels to generate energy and water recycling systems, along with a new sustainable brick material called Just BioFiber (JBF).
and JBF above-grade for all exterior walls, as shown in Fig. 3, in the initial construction phase.
This research is about the post-occupancy evaluation (POE) of a green building house
called Harmless Home located in East Sooke, BC. POE is a method of studying occu-
pants of buildings through occupants’ feedback and/or measurements of building
performance, which covers energy and water assessment, indoor environment quality
(IEQ), physical assessment, occupant survey questionnaires, visual records, and tech-
nical measurement of a building structure [19]. This study followed the iiSBE Protocol to assess the building performance of the Harmless Home.
Some of the goals for benchmarking include the following categories of performance:
1. Occupancy Issues
2. Energy and Emissions
3. Water Use
4. Economic Factors
5. Indoor Environmental Quality
6. Site Issues
7. Materials Issues.
The Key Performance Indicators (KPI) for the abovementioned categories were
defined and collected for:
• Building performance at least for six months of operation for thermal comfort
and at least two years of operation for other KPIs;
• Predicted performance at the design stage (based on Building Energy Modeling
and green building certification submissions);
• Reference values for similar buildings in the same location.
The work required to collect both quantitative and qualitative data from various
sources:
• Metered data for energy and water use would normally be collected from utility bills or sub-meters. However, because this project has solar panels and a rainwater harvesting system, energy was obtained from the Tesla Solar App and water use was estimated empirically from average annual per-person consumption. The energy use intensity (EUI) and water use intensity were calculated in kWh/m2/year and m3/m2/year (see the sketch after this list). The energy results were compared with the predicted energy model, and the water results were compared with typical buildings of the same type in the region. Greenhouse gas (GHG) emissions were calculated using the Ministry of Environment and Climate Change Strategy carbon intensity factors.
• Spot measurements for indoor environment quality were taken using The Thermal
Efficiency Monitoring (TEM) Kit from SMT Research that provides an assessment
of the thermal performance of existing structures [20]. The sensors were installed
in the north and south walls to measure the thermal flow through the Just BioFiber,
as shown in Fig. 4.
• Documentation from the design phase was used to identify predicted performance
at the design stage, including drawings, specifications, and energy modeling.
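To make the KPI arithmetic in the first bullet explicit, the following minimal sketch computes an energy use intensity and the associated GHG credit from a net metered energy figure and a grid intensity factor. The floor area and net energy values are placeholders rather than project data; only the 40.1 tCO2e/GWh intensity factor is quoted from the paper.

```python
# Sketch of the energy KPI arithmetic described in the first bullet above.
# The floor area and net energy figures are placeholders, not project data.

floor_area_m2 = 440.0                 # assumed conditioned floor area (placeholder)
net_energy_kwh_per_year = -4500.0     # negative = net export to the grid (placeholder)
grid_intensity_tco2e_per_gwh = 40.1   # BC electricity emission intensity factor [13]

# Energy use intensity; a negative value indicates a net-positive building
eui_kwh_per_m2_year = net_energy_kwh_per_year / floor_area_m2
print(f"EUI: {eui_kwh_per_m2_year:.2f} kWh/m2/year")

# GHG credit for the exported electricity, via the grid intensity factor
ghg_tonnes_per_year = abs(net_energy_kwh_per_year) / 1e6 * grid_intensity_tco2e_per_gwh
print(f"GHG offset: {ghg_tonnes_per_year:.2f} tCO2e/year")
```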
4.1 Occupancy
The occupancy of the Harmless Home is as predicted in the design phase: two people. Because both occupants are retired, the building is typically occupied for most of the day. Sporadically, they leave the building for outside activities and personal gatherings. Sometimes, they receive visitors who were not considered in this study. Therefore, although the calculated number indicates occupancy similar to that predicted, there is uncertainty about the frequency of guests and the time they spend in the building, as this is not tracked.
4.2 Energy and Emissions
Although the Harmless Home uses energy from the grid at certain times, since the solar panels do not always produce enough power, over the year it generates more energy than it requires to operate as a whole; therefore, it can be considered a net-zero house (or, more accurately, a net-positive house). The Key Performance Indicator for energy use intensity is equal to − 10.26 kWh/m2/year, a value significantly lower than the Passive House Standard and BOMA BEST certification requirements of 15 kWh/m2/year and 29.9 kWh/ft2/year, respectively [4, 16].
The net usage since 2019, shown in Table 1 in megawatt-hours, is − 13.60 MWh, according to the data exported from the Tesla Solar App.
Regarding emissions, the Ministry of Environment and Climate Change Strategy annually publishes a set of greenhouse gas (GHG) emission intensity factors for electricity use. The latest report, published in 2020, sets a GHG emission intensity factor of 40.1 tCO2e/GWh [13]. Hence, this − 13.60 MWh over three years is equivalent to a saving of 5.45 tons of CO2 based on the factor given in the Ministry of Environment and Climate Change Strategy report.
4.3 Water
The Harmless Home uses no off-site water, and a direct measurement of the water used from the on-site system could not be made. Using the average water use per person in a Canadian household of 329 L/person/day, based on a McGill University study [10], the water required by the two occupants should be 240.17 m3/year, or approximately 0.54 m3/m2/year. Since 2012, water use intensity at BOMA BEST certified office properties has been relatively constant, between 0.6 m3/m2 and 0.7 m3/m2 each year; in BOMA's latest report, the average water use intensity was 0.67 m3/m2/year. Consequently, the Harmless Home uses around 19.40% less water than buildings certified by BOMA [4]. However, these are theoretical numbers intended to represent average consumption in traditional houses with no water filter system, unlike the Harmless Home; therefore, the KPI for water use intensity is considered 0.0 m3/m2/year.
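The water-intensity estimate above follows directly from the per-capita figure. The sketch below reproduces that arithmetic; the floor area is an assumption back-calculated from the reported intensity rather than a value stated in the paper.

```python
# Estimated water use intensity for the Harmless Home, following the paper's approach.
# The floor area is an assumption (back-calculated from the reported 0.54 m3/m2/year);
# the other inputs are quoted in the text.

litres_per_person_per_day = 329.0   # Canadian average household use (McGill figure) [10]
occupants = 2
floor_area_m2 = 445.0               # assumed floor area, not stated explicitly in the paper

annual_use_m3 = litres_per_person_per_day * occupants * 365 / 1000.0
intensity_m3_per_m2 = annual_use_m3 / floor_area_m2
print(f"Annual use: {annual_use_m3:.1f} m3/year")                    # about 240 m3/year
print(f"Water use intensity: {intensity_m3_per_m2:.2f} m3/m2/year")  # about 0.54 m3/m2/year

boma_average = 0.67  # m3/m2/year, BOMA BEST certified office properties [4]
print(f"Below the BOMA average by {100 * (1 - intensity_m3_per_m2 / boma_average):.1f}%")
```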
4.4 Cost
The construction cost of the Harmless Home was around C$1,500,000, excluding land, which is equivalent to C$315.52/ft2. According to
Altus Group, in their last report, the average price of a custom-built single-family
home is between C$450/ft2 and C$1135/ft2 [1]. Although the Harmless Home used an innovative material from Alberta, its unit cost was lower than conventional construction costs in the region.
Commissioning cost was considered all-included in the agreement based on the
contract between the owner and builder, and because of confidentiality, data was not
provided.
Annual Operating Water Cost reflects only the variable costs for the building. Since
Harmless Home was designed to use reclaimed water and recycle all the graywater,
the predicted price is $0.00.
Annual Operating Energy Cost reflects only the variable costs for the building,
and since Harmless Home was designed to use solar panel energy, the predicted price
is listed as $0.00.
4.5 IEQ—Thermal
Figure 5 shows the daily temperature values from August 30, 2018, to April 8, 2019, recorded by the TEM Kit from SMT Research. The values are plotted against the model described in ASHRAE Standard 55, which defines the acceptable range within which a building should be maintained. According to the chart adapted from ASHRAE Standard 55, the permissible range is approximately 20–24 °C in winter and 22–26 °C in summer, with an acceptable humidity range between approximately 30 and 70% [3].
When the recorded conditions are assessed against the ASHRAE 55 thermal comfort standard, 41.73% fall within the zones specified in the standard.
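One simple way to arrive at a percentage like the one above is to test each logged reading against a fixed comfort window, as sketched below with invented readings. This is a simplification: a formal ASHRAE 55 assessment uses operative temperature with the PMV or adaptive comfort model, and the window limits used here are assumptions, so the sketch only illustrates the bookkeeping.

```python
# Count the share of logged readings that fall inside an assumed comfort window.
# The window (20-24 C, 30-70% RH) and the sample readings are illustrative only;
# a formal ASHRAE 55 assessment would use operative temperature and the PMV model.

readings = [  # (temperature in C, relative humidity in %), hypothetical logger output
    (18.2, 41.0), (19.6, 38.5), (21.3, 45.0), (22.8, 52.0),
    (20.4, 61.0), (17.9, 35.0), (23.5, 48.0), (19.1, 44.0),
]

def in_comfort_window(temp_c, rh_pct, t_range=(20.0, 24.0), rh_range=(30.0, 70.0)):
    return t_range[0] <= temp_c <= t_range[1] and rh_range[0] <= rh_pct <= rh_range[1]

share = sum(in_comfort_window(t, rh) for t, rh in readings) / len(readings)
print(f"{share:.0%} of readings fall inside the assumed comfort window")
```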
4.6 Site
During the building of their house and external services, the owners have been most protective of the remaining natural ecosystem. According to the architect, the project impacted less than 10% of the land; therefore, approximately 70% of the site was protected.
The siting of the Harmless Home was done to retain the trees to the east, offering
morning shade, but the Home is fully exposed to the sun for the remainder of the sun
path (be it winter or summer) in the south and west aspects. The site is also exposed
to a rock outcropping and is thus prone to steady breezes that limit heat accumula-
tion around the building perimeter. The primary roofline is also intentionally south
oriented to accommodate the anticipated central solar panel array requested by the owners in their search for energy independence. The array takes up about 75% of the south-facing roofline, which ensures solar energy is captured for electrical generation and potentially offsets thermal accumulation.
Upon building the Home, all the rain that falls around the Home’s perimeter likely
continues its natural journey off-site, but by design, much of the water that lands on the Home's roof is captured in rainwater cisterns. The architect estimates that around
90% of all rainfall would be sent off-site via the various trenches and drainage courses
created on the hillside.
4.7 Materials
No measurements were taken to quantify the amount of material waste or reuse/recycling.
5 Conclusion
Below is an overview of some critical lessons identified from the Harmless Home performance assessment according to the iiSBE Protocol. This project is ongoing; consequently, these are initial findings, and further measurements and occupant surveys will be conducted. These results will be studied further in other papers:
1. Hours of operation and occupancy load can be challenging to quantify accurately without monitoring sensors or continuous recording. These are critical aspects to manage in future POEs in order to obtain a precise occupancy analysis.
2. Even though only two retired people occupy the Harmless Home, their social life, which welcomes visitors to the Home, especially to promote the JBF and their findings about the house, can momentarily depart from the original design assumptions and increase energy and water use.
3. Surprisingly, the Harmless Home consumed less energy than predicted by the Building Energy Modeling (BEM). However, the BEM assumed that JBF had an R-value of 21.84, whereas the SMT Research study of the JBF determined that the actual R-value was 40.15. In addition, the solar panels currently producing energy in the Harmless Home need to be considered in the BEM model; these changes are vital and should be incorporated in further BEM analysis.
4. Actual building performance can be directly affected by how the building is managed. The thermal sensors are located below a window in a den that is hardly ever used, which affects the thermal comfort measurements and yields below-average results for a high-performance building.
5. The thermal comfort results are lower than expected for a high-performance house. However, the den is rarely used, so maintaining a higher temperature there is unnecessary.
6. Even though ASHRAE 55 states that the minimum temperature for indoor comfort is 20 °C, the occupants report that they both feel comfortable at temperatures from 17 °C to 20 °C, which should be considered alongside ASHRAE 55 in the IEQ thermal results.
7. A lack of sub-metering, sensors, and data acquisition was a significant barrier to assessing water and indoor environmental quality performance.
References
1. Altus Group (2022) 2022 Canadian cost guide. Altus Group, Vancouver
2. Amole D (2009) Residential satisfaction in students’ housing. J Environ Psychol 76–85
3. ASHRAE (2020) ASHRAE standard thermal environmental conditions for human occupancy.
The Society, Atlanta
4. BOMA (2020) Building on Sustainability: 2020 National Green Building Report. BOMA,
Toronto
5. IPCC (2007) Climate change: synthesis report. Intergovernmental Panel on Climate Change,
Geneva
6. JBF (2022) FAQ. Retrieved from just BioFiber: structural solutions. https://justbiofiber.com/
products/faq/
7. Jiboye AD (2012) Post-occupancy evaluation of residential satisfaction in Lagos, Nigeria:
feedback for residential improvement. Front Architectural Res 236–243
8. Leaman A, Stevenson F, Bordass B (2010) Building evaluation: practice and principles. Build
Res Inf 564–577
9. Li P, Froese T, Brager G (2018) Post-occupancy evaluation: state-of-the-art analysis and state-
of-the-practice review. Build Environ 187–202
10. McGill (2022) How much are we using? Retrieved from McGill Water is Life!: https:/
/www.mcgill.ca/waterislife/waterathome/how-much-are-we-using#:~:text=If%20we%20l
ook%20at%20how,L%20of%20water%20per%20person
11. Meir IA, Garb Y, Jiao D, Cicelsky A (2009) Post-occupancy evaluation: an inevitable step.
Adv Build Energy Res 189–219
12. Ministry of Energy, Mines and Low Carbon Innovation (2019) BC energy step code: a best prac-
tices for local governments. Energy Step Code Council and the Building and Safety Standards
Branch, Victoria
13. Ministry of Environment and Climate Change Strategy (2020) Electricity emission
intensity factors for grid-connected entities. Retrieved from Government of British
Columbia: https://www2.gov.bc.ca/gov/content/environment/climate-change/industry/report
ing/quantify/electricity
14. NRCan (2017) Build smart Canada’s buildings strategy: a key driver of the Pan-Canadian
framework on clean growth and climate change. St. Andrews by-the-Sea: Natural Resources
Canada
15. NRCan (2020) Canada’s GHG emissions by sector, end use and subsector. Retrieved from
Natural Resources Canada: https://oee.nrcan.gc.ca/corporate/statistics/neud/dpa/showTable.
cfm?type=HB§or=aaa&juris=ca&rn=3&page=0
16. Passive House Canada (2022) Passive house (Passivhaus) is considered to be the most rigorous
voluntary energy-based standard in the design and construction industry today. Retrieved from
Passive House Canada: https://www.passivehousecanada.com/about-passive-house/#:~:text=
Buildings%20consume%20up%20to%2040,cooling%20energy%20than%20conventional%
20buildings
17. Pérez-Lombard L, Ortiz J, Pout C (2008) A review on buildings energy consumption
information. Energy Build 394–398
18. Preiser WF (1995) Post-occupancy evaluation: how to make buildings work better. Facilities
19–28
19. Sanni-Anibire MO, Hassanain MA, Al-Hammad AM (2016) Post-occupancy evaluation of
housing facilities: overview and summary of methods. J Perform Constructed Facil
20. SMT Research (2013) Thermal efficiency monitoring (TEM) kit. Retrieved from SMT research:
https://www.smtresearch.ca/thermal-efficiency-monitoring-kit
21. UNEP (2020) Global status report for buildings and construction: towards a zero-emissions,
efficient and resilient buildings and construction sector. United Nations Environment
Programme, Nairobi
22. UN-Habitat (2017) Building sustainability assessment and benchmarking—an introduction.
United Nations Settlements Programme (UN-Habitat), Nairobi
Energy Performance Evaluation
and Building Energy Code
Implementation of Multi-unit Residential
Buildings: A Review
Ishanka Perera, Syed Asad Hussain, Rehan Sadiq, and Kasun Hewage
1 Introduction
The statistics above highlight the importance of overall building energy efficiency.
Residential buildings are becoming increasingly critical to energy-saving goals due to
rapid population growth [39]. According to the published literature, the residential
building sector holds the most significant energy-saving potential among building
types [2]. Residential buildings account for 27% of total global energy consumption
and 7% of total global CO2 emissions [49].
Primary energy end-uses in the residential sector are space heating and cooling,
water heating, lighting, and appliances [9]. In the Northern hemisphere regions,
the energy consumption for space conditioning and water heating exceeds 80% of
the total residential energy [26, 50, 52]. Demand for natural gas and fuel oil-based
energy is higher in cold climate regions due to the extensive space heating [21]. In
the Northern hemisphere, the warm winters reduce fuel use by 2.4%, while warm
summers increase electricity usage by 0.5% for air-conditioning [67]. The critical
factor here is that space heating tends to consume more energy than space cooling.
Hence, while pursuing highly energy-efficient residential houses, space heating should
be addressed with the utmost care. Space conditioning energy consumption is also highly
dynamic. In the USA, residential space heating energy is expected to decrease by 42%
between 2091 and 2100, while space cooling energy is expected to grow by 167% due to
global warming [2]. From 1991 to 2013, Spain's residential sector energy consumption
increased by a factor of 1.55 [43], and it rose by a further factor of 1.58 in the single
year from 2013 to 2014. Also, the average annual residential energy consumption per unit
floor area in Europe was 179.59 kWh/m2 between 2011 and 2014, with 61% of this
consumption attributed to space heating and cooling [44]. Moreover, Canadian residential building
energy use has tripled since 1990 [28]. Therefore, the residential building sector is
responsible for a significant portion of the exponential growth of global building
energy demand and GHG emissions.
Various types of residential buildings are available, such as small-scale single-
family houses, duplexes, and larger-scale MURBs. The following section describes
the MURB-related energy demand and its importance.
MURBs have become one of the most prominent solutions to population growth
and urban densification [11]. A United Nations study in 2008 reported that 50% of
the world's population resides in urban areas, and numerous mega-scale residential
construction projects are booming as a result. For example, in major Canadian cities
such as Vancouver, Toronto, and Montreal, population density and MURB construction
have more than doubled compared to the previous decade [11]. Due to the limited
availability of land and the high overall cost of single-family houses, the apartment
building concept is thriving. At the global level, the construc-
tion and development of high-rise and low-rise residential buildings increased by
60% in 2018 compared to the 2017 construction rates [54, 61]. According to US
census data from May 2021, residential building permits increased by 2.4%
compared to the previous year's statistics. These statistics strongly reflect
the rapid development of MURBs worldwide [66].
Even though MURBs are the leading solution to manage urban densification,
they come with heavy energy demand and consumption concerns. Globally, MURB
energy consumption is forecast to rise by 179% by 2050 relative to 2010 levels [7].
According to Natural Resources Canada (NRCan), residential building energy
consumption in Canada has tripled since 1990 [28]. Moreover, in Vancouver and
Toronto, 40% of total local city GHG emissions come from high-rise apartments
[7, 37, 62]. These statistics reflect the importance of MURBs and their energy
efficiency. From the local utility providers' perspective, MURBs can place a
heavier load on the energy grid (both electricity and natural gas) than clusters
of single-family houses [3, 5, 53]. With increasing energy consumption, operational
emissions could rise further, as in Vancouver and Toronto [24, 56]. Therefore, it
is evident that MURBs hold great potential for energy consumption and GHG emission
mitigation. Beyond reducing energy consumption and emissions, MURBs could also
deliver significant cost benefits. Highly energy-efficient MURBs could contribute
solutions to the world energy crisis, energy poverty, and the adverse impacts of
GHG emissions [35].
Numerous action plans and practices are available to mitigate energy consumption
and enhance the energy efficiency of MURBs. Building energy codes (BECs), building
certification standards, government policies, and mandates are the major means of
regulation. However, these methods and measures have limitations and challenges when
implemented in MURBs. Building certification standards and BECs are the most common
measures available to enhance MURB energy efficiency.
Building certification standards are entry-level initiatives to develop energy-efficient
MURBs. LEED, BREEAM, the Living Building Challenge, CASBEE, and Green Star are
examples of such standards. However, they are not as comprehensive and detailed as
BECs, which provide detailed guidelines and recommendations to reach energy-efficient
or net-zero-energy-ready states. The certification standards are more of an evaluation
scheme that rates already developed MURBs for energy efficiency.
Building energy codes have been introduced to develop more sustainable buildings
with higher energy efficiency [63]. Most of the available building energy codes can
handle both existing MURBs and new construction. However, since building energy codes
include very detailed guidelines for achieving energy efficiency standards, they often
deliver optimum results for new construction [38]. Building energy codes are available
globally and are developed based on socio-economic conditions, stakeholder acceptance,
regional climate conditions, and occupant behavior and awareness. Energy performance
characteristics, previous statistics, and data are required to develop a suitable
building energy code. However, many of the currently available building energy codes
are developed based on limited data from MURBs [35]. Therefore, some of the most
prominent building energy codes are investigated thoroughly in the following section to
identify how effectively they can enhance MURB energy performance.
The following are some of the major building energy codes available globally. In this
review, they were investigated to identify their effectiveness and relevance to MURBs.
Canada
Canada has significant variation in climatic conditions across its geographical
locations. Hence, building energy performance varies drastically between provinces.
Due to this, multiple building energy codes are available for Canadian MURB
construction.
The National Building Code of Canada (NBC) and the National Energy Code of Canada
for Buildings (NECB) are the nationwide building codes for Canada. The primary
objectives of the NBC are to address health and safety, efficient construction methods,
and community-related issues [25]. The NBC tries to promote a more uniform regulation
mechanism throughout Canada. The NECB specifies, as a high-level overview, the
approaches for developing highly energy-efficient buildings [10]. Since the federal
government is not responsible for building regulations, this model is adopted by
provincial and municipal authorities with some alterations to meet their requirements.
The BC Energy Step Code (BCESC) is the provincial building energy code for British
Columbia. The BCESC mainly addresses two categories of buildings, Part 3 and Part 9
buildings. MURBs are considered Part 3 buildings since they have more than 600 m2 of
floor area. The Step Code defines five performance-based steps for small buildings,
starting from Step 1, which requires energy performance modeling and airtightness
testing, and culminating in Step 5, which corresponds to net-zero energy buildings [6].
However, only four steps are defined for MURBs because of the complexity and scope of
MURB energy performance enhancements, with Step 4 achieving net-zero-ready status [20].
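As an illustrative aid only (a minimal sketch based on the description above, not on the Step Code documents themselves; the 600 m2 threshold and step labels are taken from this review), the tiering just described could be summarized as follows:

# Minimal sketch of the BC Energy Step Code tiering described above.
# The Part 3 / Part 9 split and the number of steps follow this review's text;
# consult the BCESC itself for authoritative definitions.
def applicable_steps(is_murb: bool) -> list[str]:
    """Return the Step Code tiers that apply to a building category."""
    if is_murb:
        # Part 3 buildings (e.g., MURBs over roughly 600 m2 of floor area)
        return ["Step 1", "Step 2", "Step 3", "Step 4 (net-zero energy ready)"]
    # Part 9 (smaller) buildings have five steps, Step 5 being net-zero energy
    return ["Step 1", "Step 2", "Step 3", "Step 4", "Step 5 (net-zero energy)"]

print(applicable_steps(is_murb=True))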
The importance of reducing energy consumption and the building energy codes available
for MURBs were established in the previous sections. However, the adoption of building
energy codes and the effectiveness of energy code implementation in MURBs remain
questionable and still lag in many aspects. In this part of the review, the progress,
trends, and challenges of energy code implementation in MURBs are discussed.
BEC standards for both new and existing residential buildings are mandatory or
voluntary depending on the country. Even though multiple BECs and energy-efficient
upgrades are available, the investment in and motivation toward implementing
energy-efficient upgrades or adopting a building energy code do not occur easily.
Various stakeholders are involved in these energy-efficiency projects, and each of them
must be motivated for a successful implementation. BEC implementation challenges for
MURBs were reviewed from financial and environmental perspectives.
Building energy codes are developed to reduce energy consumption and building-related
GHG emissions and thereby mitigate adverse environmental impacts. However, economic and
financial perspectives are of key importance when considering overall building
sustainability. Particular stakeholders, such as investors and residents of highly
energy-efficient MURBs, might be more interested in financial benefits than
environmental benefits. Therefore, the expected financial benefits strongly influence
the success of energy code implementation.
A study conducted in Great Britain found that better heating upgrades to a building
can be justified in terms of fuel- and energy-related cost savings [48]. However, before
rationalizing the cost of energy-efficient upgrades, it is essential to identify how
these costs behave in actual practice. In the United Kingdom, it was estimated that the
cost of energy-efficiency improvements is considerably lower than the cost of the power
consumed to provide excess heat during winters. The Institute for Fiscal Studies has
estimated that 20% of low-income homes are at least ten times more concerned about
energy prices than high-income homes [14]. Therefore, the cost of energy improvements
should not exceed current energy prices in the long term [48]. Similar conditions can be
identified in the USA. The Waxman-Markey climate bill and the Boxer-Kerry bill [36]
include many regulations on energy efficiency targets for residential buildings, but
they specifically state that those targets have to be met in the most cost-effective
manner.
The incremental capital cost is the first concern among financing-related challenges
to BEC implementation. Many existing implementations pay attention mainly to the
initial cost increase. Construction decision-makers are often not interested in the
future costs associated with energy performance. Their primary focus is to provide an
affordable house that meets the minimum required energy standards, without giving much
attention to future energy bills and expenses. This so-called "future cost" must be
weighed more heavily when costs and investments are realized. Most financial
institutions that provide incentives and financial assistance focus only on the
construction cost while neglecting the future costs of energy bills and the expenses of
energy-saving measures [38]. A lifecycle thinking-based lifecycle cost approach could
deliver meaningful results for these issues [35]. Since most building codes set minimum
standard levels, designers and developers will not go beyond the limit of available
incentives and investments to raise energy standards. Another major issue related to
cost deployment is that no builder or designer has a framework or model to calculate
the lifecycle and operating costs of such energy-saving upgrades at the initial
decision-making stages.
Therefore, it is challenging to distribute and deploy investments at the outset.
Building information modeling and guidance from energy advisors are some alternative
methods builders use to reduce unexpected costs and expenses. However, in practice,
actual implementation costs often rise to unexpected levels. Also, the energy code
implementation process can vary drastically with the location of building construction.
Therefore, locational characteristics are crucial for planning financial incentives and
financial aid. Not all energy-efficient upgrade options have the same local price;
hence, the capital cost of energy code implementation can vary drastically. With
different local utility prices, the financial performance of the building also varies.
When considering the expenses associated with energy-efficient measures, it is not only
the cost of actual construction and operation but also the cost of enforcing the
building energy codes that accounts for a major fraction of expenses [68]. Energy code
development, planning, and inspection also add to the total expenses. Hence, the cost
and quality of enforcement of building energy codes need to be evaluated from a
financial perspective [51, 69].
Even though energy consumption reduction and GHG emission mitigation are crucial,
most stakeholder groups can only be motivated by financial benefits. Nevertheless, as
stated, the environmental perspective is of utmost importance. Due to government goals,
policies, and mandates, investors and builders tend to invest in BECs to mitigate
adverse environmental effects by reducing energy consumption. However, most of the
currently available BECs do not focus on, or give only very little attention to, direct
environmental effects. GHG emission standards or other environmental impact standards
are not integrated into most codes. In general, if a particular BEC is voluntary, the
attention to environmental standards is minimal. Currently, voluntary environmental
investments are the most popular and trending investment category available to BECs
under the environmental perspective. They are also referred to as "corporate voluntary
environmental investments" because the motivation for such investments is mainly the
development of the corporation's public image.
However, some deploy these investments to meet current environmental regulations and
standards, while others try to reduce the cost of existing regulations and penalties
[47]. Limited research has been conducted to investigate the true motivation behind
these environmental investments and whether such criteria suit the BEC implementation
process. Policymakers could use these models to make decisions on the development and
implementation of BECs. However, whether the model is voluntary or based on law and
policy affects the final MURB design and its performance. If a building can demonstrate
that its environmental impacts are significantly lower with the adoption of BECs, both
previously mentioned implementation models could be adopted [4]. Additionally, new
investment pathways could open up and lower the financial challenges stated above,
based on environmental performance. Also, the majority of stakeholders might be
interested in obtaining as much external investment as possible by reducing
environmental damage through building energy code implementation.
Environmental perspectives are not limited to the outdoor environment. The indoor
environment of buildings is crucial, especially for MURBs. Under building energy codes,
indoor environmental quality, such as the thermal comfort of occupants and indoor air
quality, should be evaluated. Since the 2020 COVID-19 pandemic, residential building
occupancy times have increased significantly. Therefore, existing building energy codes
should update their standards, for example by maintaining optimal temperature levels
for indoor thermal comfort rather than focusing only on saving thermal energy. BEC
implementation and adoption are expected to be more successful when stakeholder
motivation is enhanced, which can be gained by giving more attention to indoor
environmental standards [13].
Most of the available BECs and energy performance evaluation criteria do not include
lifecycle assessment for MURBs. BECs primarily focus on energy consumption, and their
implementation rarely considers the whole building life cycle, which is generally
around 50-70 years for MURBs. However, the ultimate goal is to reduce the environmental
impacts and costs associated with buildings. In that case, the scope of these standards
should extend beyond mere energy efficiency to evaluate the lifecycle emissions (LCEs)
and lifecycle costs (LCCs) associated with building energy upgrades. From the financial
perspective of building energy code implementation, much more motivation and many more
investment opportunities could be generated with LCC data; after energy upgrades, LCC
savings are evident in most cases through energy bill savings [35]. From the
environmental perspective, LCE savings could strongly support, and can be used to
validate, emission reduction goals for the building sector [35].
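To make the LCC reasoning above concrete, the minimal sketch below compares an upgrade's upfront cost with the present value of its annual energy-bill savings over the 50-70 year MURB service life mentioned above. All inputs are illustrative assumptions, not figures from this review.

# Hedged, illustrative lifecycle-cost sketch; every number below is assumed.
def npv_of_savings(annual_saving: float, years: int, discount_rate: float) -> float:
    """Present value of a constant annual saving (ordinary annuity formula)."""
    return annual_saving * (1 - (1 + discount_rate) ** -years) / discount_rate

upgrade_cost = 250_000.0    # assumed incremental capital cost of the upgrade
annual_saving = 18_000.0    # assumed annual energy-bill saving
for service_life in (50, 60, 70):          # MURB life range noted in the text
    benefit = npv_of_savings(annual_saving, service_life, discount_rate=0.03)
    print(service_life, round(benefit - upgrade_cost))  # positive => net LCC saving

A similar structure, extended with emission factors per unit of energy, would give the corresponding LCE comparison.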
Moreover, analyzing the performance of BEC implementation through a lifecycle lens,
while accounting for local environmental parameters, macroeconomic conditions, and
energy supply characteristics, is essential to understand the actual costs, benefits,
and local applicability [49, 52]. In a simulation environment, it is possible to analyze
and identify pathways that are not practically feasible, thereby allowing better
strategies to be identified [1, 33]. A keyword search was carried out in the Compendex
Engineering Village database to assess the current status of lifecycle adoption in
MURBs and MURB-related BEC implementation. Only a few lifecycle assessment studies have
been conducted to enhance BEC implementation for MURBs. Table 1 summarizes the
lifecycle assessment-related keyword search results for MURB building energy code
implementation.
The following summary table was created after reviewing the available BECs for MURBs,
their current implementation status, and their challenges. Possible solutions and
recommendations are given for each critical challenge based on global MURB energy
demand, BEC implementation in MURBs, and the importance of adopting lifecycle
assessments in the implementation. Table 2 summarizes some of the identified barriers
and potential solutions for energy efficiency enhancement in the residential building
sector [32, 35, 37].
4 Conclusion
This review discussed global residential sector energy demand, its importance, and the
necessity of energy conservation measures for MURBs. Building energy code
implementation with energy-efficient upgrades is one of the best energy consumption
mitigation measures. However, various challenges and obstacles block the BEC
implementation process and its success. Especially for new-construction MURBs, the
lack of data and of energy performance evaluation techniques to enhance building energy
efficiency is a significant issue. If such techniques can be developed through
research, they could aid the decision-making process for energy conservation in MURBs.
Table 2 Summary of key challenges and possible solutions for the implementation of BECs in MURBs

Barrier/challenge: Lack of environmental impact assessment
Description: Most of the available building energy codes focus only on energy savings
Examples/causes: The BC Energy Step Code, Hong Kong HK-BEAM, and many other building codes do not pay proper attention to building-related GHG emissions
Solutions and recommendations: Research-based methods should be developed to assess the environmental impacts and benefits of the upgrade options; an LCE thinking-based approach is ideal for building code standards to evaluate environmental impacts

Barrier/challenge: Limited access to significant capital investments (cost)
Description: The ratio between the initial or implementation cost and the energy-saving cost is significant
Examples/causes: Up-front cost of energy-efficient upgrades; lack of low-interest loan schemes; lack of support and motivation from governments and local municipalities
Solutions and recommendations: Government subsidies and incentives; reduced interest rates and opportunity cost

Barrier/challenge: Lack of methods to assess the financial performance
Description: Unavailability of methods and techniques to evaluate the financial performance of energy-efficient upgrades
Examples/causes: Evaluation of total building energy savings; current practices focus only on the initial capital cost
Solutions and recommendations: Lifecycle cost assessment for the buildings; lifecycle cost of the equipment, replacement, and maintenance; lifecycle cost assessment of energy savings

Barrier/challenge: Split incentives/split investments and financial benefits
Description: Unsatisfactory split of incentives or investments between the various stakeholders involved in an energy efficiency enhancement project
Examples/causes: Residential building developers may invest in construction while the benefits are obtained by the consumers (residents); government incentives and tax rebates are received only by the people who achieved the energy standards
Solutions and recommendations: Stakeholder perspective-based uncertainty assessment, since various stakeholders might assign different weights to the upgrade costs; re-align the incentives among all parties trying to achieve performance levels based on the performance achieved, regardless of whether the actual performance level was reached; split the investments based on usage and payback periods calculated using actual energy savings and considering the parties receiving the benefits

Barrier/challenge: Risk associated with implementation
Description: The risk associated with the success of building code implementation is not adequately assessed
Examples/causes: Uncertainty of future energy prices and of cheaper alternative energy sources; modifications might have to change based on future performance-level targets; length of the payback period
Solutions and recommendations: Proper risk analysis and risk mitigation plans have to accompany the building code implementation process; LCE and LCC approaches could be adopted to assess future performance variations before the actual implementation; adoption of BIM- and BEM-based simulations to identify the performance of energy-efficient measures during the decision-making process; upgrading or stepping slightly beyond the currently defined performance levels (future-proofing)

Barrier/challenge: Hidden cost/transaction cost
Description: The overall cost is not directly captured or not adequately assessed
Examples/causes: The cost of obtaining relevant information or an energy expert might change; in the existing-building conversion process, unexpected costs arise due to incompatibilities of equipment
Solutions and recommendations: A fixed pricing index must be prepared by the authorities responsible for developing the cost models; if unexpected hidden costs emerge, appropriate incentive and financial aid programs should be available to provide support

Barrier/challenge: Lack of awareness, insufficient information
Description: The general public does not have enough information to be aware of energy efficiency enhancement programs
Examples/causes: Residents do not have an idea of the profitability or payback of code implementations; building constructors are not aware of government policies and regulations; financial support institutions do not have enough programs to support investors due to a lack of awareness of building energy codes
Solutions and recommendations: Advertise the benefits and profitability through communication media; while developing the model building codes, authorities should call for all stakeholders relevant to the subject; building energy performance evaluation methods should be developed from the research side, since research-developed data and statistics could be helpful when the actual implementation takes place and could increase general awareness

Barrier/challenge: Resistance toward the implementation
Description: Economic, political, or social factors might work against some projects
Examples/causes: High affordability or retrofit investment costs will create resistance among residents toward energy-efficient enhancements; due to political issues and diplomatic problems, countries might not be favorable to energy performance or pollution standards
Solutions and recommendations: Enhance the transparency of investments and long-term returns, and improve the reliability of energy-saving performance to strengthen trust in the energy codes; more incentive programs, tax rebates, and financial assistance for the people who favor the changes
Moreover, adopting lifecycle thinking in BEC development and implementation would
significantly enhance the effectiveness of the process. The following are the key
findings concluded from this review.
• BEC implementation is currently the best method to enhance building energy efficiency
performance for MURBs. However, for large-scale MURBs, energy code implementation is
still at the development stage.
• A lack of previous energy performance data, lessons learned from past projects, and
scientific performance evaluation methods, together with significantly higher upfront
capital costs, is holding back the energy efficiency upgrade process in MURBs.
• Many of the currently available building energy codes address only a limited number
of environmental impact-related standards. To reach triple-bottom-line sustainability,
it is essential to integrate standards that address long-term environmental impacts.
Integrating LCE into BEC implementation could resolve this issue.
• In terms of investment deployment and cost handling, long-term cost effects should be
evaluated rather than focusing only on the initial capital cost of building energy code
implementation. LCC integration-based performance evaluation could assist in resolving
this problem.
• The BEC implementation process has to be supported by proper scientific,
research-based methods. Variations between locations, uncertain parameters, and
stakeholder perspectives should be studied thoroughly before developing code standards.
References
7. Berardi U, Soudian S (2018) Benefits of latent thermal energy storage in the retrofit of Canadian
high-rise residential buildings. Build Simul 11(4):709–723. https://doi.org/10.1007/s12273-
018-0436-x
8. Brown MA (2001) Market failures and barriers as a basis for clean energy policies. Energy
Policy 29(14):1197–1207. https://doi.org/10.1016/S0301-4215(01)00067-2
9. Brown MA, Cox M, Staver B, Baer P (2014) Climate change and energy demand in buildings
conclusions from the literature. ACEEE Summer Study Energy Effi Build 2014:26–38
10. Building Code—The Canadian Wood Council—CWC: The Canadian Wood Council—CWC
(n.d.) Retrieved 11 Nov 2019, from https://cwc.ca/why-build-with-wood/safe/building-code/
11. Bunting T, Filion P, Priston H (2002) Density gradients in Canadian metropolitan regions,
1971–96: differential patterns of central area and suburban growth and change. Urban Stud
39(13):2531–2552. https://doi.org/10.1080/0042098022000027095
12. Cao X, Dai X, Liu J (2016) Building energy-consumption status worldwide and the state-of-the-
art technologies for zero-energy buildings during the past decade. Energy Build 128:198–213.
https://doi.org/10.1016/j.enbuild.2016.06.089
13. Christina S, Dainty A, Daniels K, Waterson P (2014) How organisational behaviour and atti-
tudes can impact building energy use in the UK retail environment: a theoretical framework.
Architectural Eng Design Manag 10(1–2):164–179. https://doi.org/10.1080/17452007.2013.
837256
14. Crawford I, Smith S, Webb S (1993) VAT on domestic energy. ISBN 1-873357-30-3
15. Cunha JB, Oliveira PB de M, Cordeiro M (2000) Greenhouse adaptive climate control
techniques to reduce energy demand, 1–4
16. Dahl CA, Mcdonald L (1998) Forecasting energy demand in the developing world. Energy
Sources 20(9):875–889. https://doi.org/10.1080/00908319808970105
17. Deason J, Hobbs A (2011) Codes to cleaner buildings: effectiveness of US building energy
codes
18. Decarbonizing U.S. Buildings|Center for Climate and Energy Solutions (n.d.) Retrieved 28
Feb 2022, from https://www.c2es.org/document/decarbonizing-u-s-buildings/
19. U.S. Department of Energy (2015) An assessment of energy technologies and research
opportunities. Chapter 5: increasing efficiency of building systems and technologies
20. Energy Step Code (2018) How the BC energy step code works—energy step code. https://ene
rgystepcode.ca/how-it-works/
21. U.S. Department of Energy (2013) U.S. energy sector vulnerabilities to climate change and extreme weather
22. Environment Canada (2015) Canada greenhouse gas emissions inventory. https://www.can
ada.ca/en/environment-climate-change/services/climate-change/greenhouse-gas-emissions/
inventory/emissions.html
23. Europe—Countries & Regions—IEA (n.d.) Retrieved 16 June 2021, from https://www.iea.org/
regions/europe
24. Farahbakhsh H, Ugursal VI, Fung AS (1998) A residential end-use energy consumption
model for Canada. Int J Energy Res 22(13):1133–1143. https://doi.org/10.1002/(SICI)1099-
114X(19981025)22:13%3c1133::AID-ER434%3e3.0.CO;2-E
25. Finn WDL, Wightman A (2003) Ground motion amplification factors for the proposed 2005
edition of the National Building Code of Canada. Can J Civil Eng 30(2):272–278. https://doi.
org/10.1139/l02-081
26. Frappé-Sénéclauze T-P (2018) Is B.C.’s energy step code a blueprint for Canada? Sustain Archit
Build Mag
27. Ghajarkhosravi M, Huang Y, Fung AS, Kumar R, Straka V (2020) Energy benchmarking anal-
ysis of multi-unit residential buildings (MURBs) in Toronto, Canada. J Build Eng 27:100981.
https://doi.org/10.1016/j.jobe.2019.100981
28. Government of Canada (2016) Natural Resources Canada|Natural Resources Canada. https://
www.nrcan.gc.ca/home
29. Greenhouse gas emissions from energy use in buildings in Europe—European Environment
Agency (n.d.) Retrieved 28 Feb 2022, from https://www.eea.europa.eu/data-and-maps/indica
tors/greenhouse-gas-emissions-from-energy/assessment
30. Hadley SW, Erickson DJ, Hernandez JL, Broniak CT, Blasing TJ (2006) Responses of energy
use to climate change: a climate modeling study. Geophys Res Lett 33(17):2–5. https://doi.org/
10.1029/2006GL026652
31. Halverson M, Shui B, Evans M (2009) Country report on building energy codes in the United
States. In: United States Department of Energy (Issue April)
32. Haney AB, Jamasb T, Platchkov LM, Pollitt MG (2012) Demand-side management strategies
and the residential sector: lessons from the international experience. In: The future of electricity
demand: customers, citizens and loads, pp 337–378. https://doi.org/10.1017/CBO978051199
6191.021
33. Hu M (2018) Optimal renovation strategies for education buildings-A novel BIM-BPM-BEM
framework. Sustainability (Switzerland) 10(9):1–22. https://doi.org/10.3390/su10093287
34. IEA (2011) World energy outlook 2011
35. Perera I (2021) Energy performance of multi-unit residential buildings: a life cycle
thinking approach for BC Energy Step Code evaluation [University of British Columbia].
https://doi.org/10.14288/1.0406094
36. Jacobsen GD, Kotchen MJ, Davis L, Fowlie M, Horowitz M, Walsh R, Wolak F, Grant F,
Jacobsen D (2010) Are building codes effective at saving energy? Evidence from residential
billing data in Florida
37. Jamasb T, Köhler J (2007) Learning curves for energy technology and policy analysis: a critical
assessment. Delivering a low carbon electricity system: technologies, economics and policy,
314–332
38. Laustsen J (2008) Energy efficiency requirements in building codes, energy efficiency
policies for new buildings. IEA information paper (Issue March)
39. Karunathilake H, Hewage K, Sadiq R (2018) Opportunities and challenges in energy demand
reduction for Canadian residential sector: a review. Renew Sustain Energy Rev, vol 82, pp
2005–2016. https://doi.org/10.1016/j.rser.2017.07.021
40. Khalil HB, Zaidi SJH (2014) Energy crisis and potential of solar energy in Pakistan. Renew
Sustain Energy Rev 31:194–201. https://doi.org/10.1016/j.rser.2013.11.023
41. Lee WL, Chen H (2008) Benchmarking Hong Kong and China energy codes for residential
buildings. Energy Build 40(9):1628–1636. https://doi.org/10.1016/j.enbuild.2008.02.018
42. Li Y, Kubicki S, Guerriero A, Rezgui Y (2019) Review of building energy performance
certification schemes towards future improvement. Renew Sustain Energy Rev 113.
https://doi.org/10.1016/j.rser.2019.109244
43. López-González LM, López-Ochoa LM, Las-Heras-Casas J, García-Lozano C (2018) Final
and primary energy consumption of the residential sector in Spain and La Rioja (1991–2013),
verifying the degree of compliance with the European 2020 goals by means of energy indicators.
Renew Sustain Energy Rev 81(March 2017):2358–2370. https://doi.org/10.1016/j.rser.2017.
06.044
44. López-Ochoa LM, Las-Heras-Casas J, López-González LM, Olasolo-Alonso P (2019) Towards
nearly zero-energy buildings in Mediterranean countries: energy performance of buildings
directive evolution and the energy rehabilitation challenge in the Spanish residential sector.
Energy 176:335–352. https://doi.org/10.1016/j.energy.2019.03.122
45. Mansur ET, Mendelsohn R, Morrison W (2008) Climate change adaptation: a study of fuel
choice and consumption in the US energy sector. J Environ Econ Manag 55(2):175–193. https:/
/doi.org/10.1016/j.jeem.2007.10.001
46. Fabbri M, De Groote M, Rapf O (2016) In: Bean F, Faber M (eds) Building renovation
passports. Buildings Performance Institute Europe
47. Maxwell JW, Decker CS (2006) Voluntary environmental investment and responsive regulation.
Environ Resour Econ 33(4):425–439. https://doi.org/10.1007/s10640-005-4992-z
48. Milne G, Boardman B (2000) Making cold homes warmer: the effect of energy efficiency
improvements in low-income homes. Energy Policy 28(6–7):411–424. https://doi.org/10.1016/
S0301-4215(00)00019-7
49. National Renewable Energy Laboratory (2010) Summary of gaps and barriers for implementing
residential building energy efficiency strategies
50. Natural Resources Canada (2018) Energy fact book 2018–2019
51. Paquette Z, Miller J, Dewein M (2011) Incremental construction cost analysis for new homes
52. Prabatha T, Hewage K, Karunathilake H, Sadiq R (2020) To retrofit or not? Making energy
retrofit decisions through life cycle thinking for Canadian residences. Energy Build 226.
https://doi.org/10.1016/j.enbuild.2020.110393
53. Sabouri V, Femenías P (2013) Two case studies in energy efficient renovation of multi-family
housing; Explaining robustness as a characteristic to assess long-term sustainability. In: Smart
innovation, systems and technologies, vol 22. https://doi.org/10.1007/978-3-642-36645-1_5
54. Sarac F (2091) High-rise buildings and mid-rises overshadowing low-rises. https://www.ren
tcafe.com/blog/apartmentliving/high-mid-rise-residential-buildings-overshadowing-low-rise/
55. Şen Z (2004) Solar energy in progress and future research trends. Prog Energy Combust Sci
30(4):367–416. https://doi.org/10.1016/j.pecs.2004.02.004
56. Senbel M, Church S (2010) The relationship between urban form and GHG emissions
57. Shafiullah G, Oo AMT, Ali AS, Wolfs P, Arif M (2012) Meeting energy demand and global
warming by integrating renewable energy into the grid. In: 2012 22nd Australasian Universities
power engineering conference: “green smart grid systems”, AUPEC 2012, pp 1–7
58. Sheffield J (1997) World population and energy growth impact on the Caribbean and the roles
of energy efficiency improvements and renewable energies
59. Sheffield J (1998) World population growth and the role of annual energy use per capita.
Technol Forecast Soc Chang 59(1):55–87. https://doi.org/10.1016/S0040-1625(97)00071-1
60. Sheffield J (1999) World population and energy demand growth: The potential role of fusion
energy in an efficient world. Philos Trans Royal Soc Math Phys Eng Sci 357(1752):377–395.
https://doi.org/10.1098/rsta.1999.0333
61. Siebrits J (2019) Skyscrapers|Global Living Report 2019|CBRE|CBRE Residential. https://
www.cbreresidential.com/uk/en-GB/news/skyscrapers
62. Simonet G (2018) The energy intensity of the Canadian residential sector becomes more
efficient, 1–12
63. Skalko SV (2013) Building codes evolve through experience, research. PCI J 58(3):34–40.
https://doi.org/10.15554/pcij.06012013.34.40
64. The Daily—Survey of Commercial and Institutional Energy Use, 2014 (n.d.) Retrieved 28 Feb
2022, from https://www150.statcan.gc.ca/n1/daily-quotidien/160916/dq160916c-eng.htm
65. Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the
energy performance of buildings (2010)
66. U.S. Census Bureau, & U.S. Department of Housing and Urban Development (2021) Monthly
New Residential Construction, December 2020 (Issue January, 2021)
67. U.S. Energy Information Administration (2011) Annual Energy Outlook 2011 with projections
to 2035
68. Vine E, Williams A, Price S (2017) The cost of enforcing building energy codes: an examination
of traditional and alternative enforcement processes. Energ Effi 10(3):717–728. https://doi.org/
10.1007/s12053-016-9483-2
69. Williams A, Price S, Vine E, Berkeley L (2014) The cost of enforcing building energy codes
sample overview and methodology, 403–414
70. Yao R, Li B, Steemers K (2005) Energy policy and standard for built environment in China.
Renew Energy 30(13):1973–1988. https://doi.org/10.1016/j.renene.2005.01.013
Construction Lean Scoring
and Benchmarking System
1 Introduction
Construction projects are challenging to manage due to their complex nature. Because
of that, construction projects have been known to go beyond their scheduled duration
and planned budget so often that delays and cost overruns are the norm rather than the
exception [41]. In fact, it was reported in a global construction survey that over 50%
of engineering and construction professionals report one or more underachieving
projects in a year [9]. According to 69% of owners, the biggest reason for project
underperformance is poor contractor performance [9]. Mismanagement in construction
projects leads to schedule slippage, which causes 66% of contractors to incur extra
costs for overtime and second shifts in order to finish on schedule; despite these
efforts, 50% of contractors still need to extend the project end date [24]. Hence,
projects are completed over budget and behind schedule. Over the years, engineers have
constantly tried to analyze and improve conventional building methods in order to cut
time and cost and upgrade the quality of construction projects. One of the
methods of process optimization widely used nowadays in the construction field is
lean construction. This methodology aims to remove any part of a process that does
not bring value to the customer. The process simply focuses on dividing the activities
needed to complete a project into two categories, value-added activities and non-
value-added activities [17]. Lean principles are implemented to remove and reduce
non-value-added activities to the utmost minimum in order to deliver an end product
that is up to the customer’s specifications with less cost, time and waste [7]. The
implementation of lean aids in overcoming challenges such as productivity loss [32],
waste generation [16] and environmental issues [1]. Frequent employment of lean
practices in construction projects has been found to make projects three times more
likely to be completed ahead of schedule and two times more likely to be completed
under budget [33]. This was further validated by lean experts in the field, where 70%
of 95 professionals acknowledged that implementing lean techniques led to improved
performance and waste reduction [33]. Such benefits have encouraged countries such
as USA, UK, Brazil, Chile, Peru, Ecuador, Venezuela, Finland, Denmark, South
Korea, Singapore and Australia to implement lean, and they are now considered
the leading countries in adopting lean practices [11, 28, 29]. It is important to note
that the aforementioned benefits are only reaped by the project based on how well
the employees implement lean techniques. According to Salem et al. [40], proper
implementation of lean techniques goes beyond the techniques themselves: to obtain the
maximum benefits of lean, there has to be a continuous change in employees' behavior,
including teamwork, active communication and high levels of visualization. The fact
that project performance improvement is directly linked to how well lean techniques are
implemented leads to a set of intriguing questions. The first is: what are the factors
that, if applied frequently and efficiently, qualify a project as lean? Additionally,
how can the quality of implementing such factors be measured in a construction project?
Lastly, how are lean factors related to project performance? These questions shed light
on the knowledge gap that this research intends to fill. This is implemented through
extensively studying the
2 Literature Review
When assessing the effectiveness of lean construction techniques, one must look
at the benefits that are associated with implementing lean techniques in real-life
construction sites. According to Kilpatrick [31], lean construction techniques provide
improvements in three main aspects, strategic, administrative and operational. The
strategic aspect is improved through better client–contractor relationship and client
satisfaction. This allows companies to have a good reputation and therefore increases
their market share. The administrative improvements include reducing redundant
processes, standardizing repetitive tasks and homogenizing project reports that are all
benefits associated with implementing lean [31]. The operational aspect is improved
as the productivity and quality of work increase, site space is more efficiently utilized
and flow of work is improved [31, 37]. Other authors agreed with the operational
benefits of lean construction and quantified them as follows: labor productivity
enhancement of 43%, cycle time reduction of 41% and process efficiency enhancement of
27% [2]. There are other intangible advantages to implementing lean construction, such
as improved corporate image, improved project delivery and increased client
satisfaction through delivering projects that are up to client expectations [37].
Lastly, authors also attested to improvements in the environmental quality of projects
that regularly implement lean construction [10, 37]. It
is important to note that there are a number of other benefits associated with lean
construction; however, each project realizes these benefits differently depending on
the frequency and proficiency of utilizing the tools. Hence, this research aims to
identify the key lean factors that influence the efficiency of lean in construction
projects.
Scoring indices are usually established for the purpose of evaluating performance.
They serve as early warning systems for entities to monitor their performance and
set mitigation plans when their performance is hindered. The validity of a scoring
system is derived directly from the legitimacy of the logic that the score is built upon
[30]. Hence, it is important to study how the different scoring indices are developed
and what aspects should be considered when developing a fair scoring index. Since the
purpose of this research is to develop a project lean-
ness score, previously established scoring systems that evaluate construction project
performance were studied. Table 1 shows a compiled list of the research efforts that
developed scoring systems that assess different aspects of construction projects [19].
3 Problem Statement
There are several factors in a project that are directly affected by the implementation
of lean principles. These factors can be technical, such as quality, material flow and
pull, schedule and cost overruns, and the efficiency of equipment and resources used.
They can also relate to the managerial aspect of the project, such as client focus,
waste consciousness, organization planning and information flow, continuous improvement
and, lastly, coordination among parties. The literature on lean construction is very
rich when it comes
to explaining the different lean techniques and tools. There are also several studies
that address the effectiveness of lean tools when implemented in construction sites.
However, very few research efforts could be found on evaluating the implementation of
lean principles on construction sites. Consequently, there is a knowledge gap on how to
evaluate companies and advise them on areas to improve based on their current lean
practice positions. Once the factors in construction projects that are directly
affected by lean principles are known, they can be used to give a complete assessment
of a company's performance. Hence, this research intends to fill the existing gap by
identifying the factors that support the success or failure of implementing lean
construction tools and quantifying the impact of those factors on the actual
performance of construction projects. Consequently, a project leanness score and
benchmarking scale are provided to show where a company stands and where it needs to
improve relative to other companies in the industry.
4 Methodology
Based on the literature review presented, a conclusion can be drawn that there are
several factors affected by the implementation of lean principles. Therefore, in order
to be able to develop an accurate leanness score, a four-stage methodology is used
as follows.
The first stage is identifying the key factors that influence the leanness of construction
projects. The main aim of this stage is to find out which lean factors most directly
affect construction projects. Hence, an extensive literature review is conducted to
identify such factors.
The second stage is to determine the relative importance of the identified factors
through an expert-based survey, where a questionnaire is generated and responses of
experts who previously worked utilizing lean principles are collected. The questions
in this survey are based on the factors identified in the first stage of the research. The
main aim of the expert-based survey is to be able to assign weights to each of these
factors based on the expert opinions gathered. These weights would then dictate
the relative importance of each of the factors. The relative importance of factors is
beneficial when developing the project leanness score as it helps treat each factor
according to the weight of its importance.
The third stage of the research involves developing a project leanness score measuring
the overall level of lean implementation in the project. The scoring equation is devel-
oped utilizing the responses retrieved in the expert-based survey through assigning
factors different weights depending on their relative importance. That way, one can
ensure that the project leanness score accurately and realistically assesses companies’
implementation of the lean principles.
In the last stage of the research, a project-based survey is created. This survey targets
construction companies, and they are asked to fill out questions related to their project
performance. The data is collected directly from the teams working on the projects
in order to ensure its credibility. Responses are then used to evaluate the overall
performance of the company based on the project leanness score developed in the
previous stage. The resulting scores of the participating companies give insight into
the construction industry's overall quality and frequency of implementing lean
principles. Having such a database provides an overview of the industry's general
performance, where patterns in the industry’s performance are identified through
statistical methods and a project leanness benchmarking scale is established based
on these patterns. The project leanness score together with the project leanness bench-
mark scale offers a holistic evaluation of companies’ performance, where companies
would be able to know their scores which represent the overall rate of implementing
lean principles in projects and would also be able to compare their performance to
other companies in the industry based on the project leanness benchmarking scale.
The literature provides an overview of the factors that are likely to be directly
affected by the implementation of lean principles. For example, Hofacker et al.
[26] highlighted through their research that lean construction principles enhance
client focus, improve communication between the key stakeholders of the project,
improve the overall quality, provide for a better organization and information flow
and provide room for continuous improvement [26]. Other factors were considered
in other research papers, and they included coordination among parties, meeting
schedule and budget targets and efficiency of equipment and resources used [23].
Moreover, research by El-adaway et al. [18] considered factors like design flexibility
and facilitated contractual risks and disputes to strongly influence construction
project performance. As a result of a collaborative research effort between the
Missouri University of Science and Technology and The American University in Cairo, an
extensive literature review was conducted to compile the aforementioned factors into a
list and categorize them into six main categories: behavioral, communication, team,
managerial, technological and contractual. Each factor was then assigned to the
category that relates to its topic, resulting in a list of 27 factors that affect a
construction project, as seen in Table 2.
Since one cannot safely assume that the above factors are all equally important and
influential to a construction project, an expert-based survey was conducted for two
main objectives. The first is to validate the choice of the factors, and the second
is to rate each factor in terms of its real-life applicability and overall significance
in construction projects. The results of this survey then determined the weight of
each factor. This weight indicates the relative importance of each of the 27 factors to
the overall success of construction projects. Assigning such weights to each factor
ensures that the scoring equation is not treating all factors with equal importance.
By doing that, the scoring equation delivers a balanced output that fairly and accu-
rately assesses the companies’ performance in the project. The expert-based survey
consisted of questions targeting the 27 identified factors listed in Table 2. For each
factor, every expert had to rate its importance in a construction project based on the
Likert scale shown in Table 3. From the ratings provided, the relative importance
of each factor within a category could be identified. The survey received a total of
71 responses, and the following steps were taken in order to determine the relative
importance of each of the identified factors. First, expert ratings for the importance
of each factor were listed, and the average result of the responses was calculated.
Then, the data was normalized in order to observe how experts rated the importance
of each factor within its category. This was done by dividing the factor average by
the sum of factor averages within the factor’s category. The results can be seen in
Table 4.
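As a worked check of the normalization step just described, using the values reported for category D in Table 4 (where \(\bar{x}_i\) denotes a factor average and the sum runs over the factors in the same category):

\[
w_{D4} = \frac{\bar{x}_{D4}}{\sum_{j \in D} \bar{x}_j} \times 100\% = \frac{2.84}{19.75} \times 100\% \approx 14.4\%,
\]

which matches, to rounding of the published averages, the weighted average of 14.37 reported for factor D4 (accuracy of schedule accomplishment indicators) in Table 4.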
The project leanness score developed follows a 100-point scale, and the score was
built to reflect the 5-point performance Likert scale presented in Table 5, where the
performance of the company is measured according to its overall implementation
of the identified lean factors. The total project score out of 100 would then indicate
the overall frequency and quality of implementing lean factors in the project as
outlined on the project leanness score scale seen in Fig. 1. The score was calculated
by multiplying the factor averages by 1 to get the minimum score and by 5 to get the
maximum score as outlined in Table 4. Then, the minimum and maximum scores
were normalized by getting the score slope in order to fit the 100-point scale using
Eq. 1. That way, theoretically speaking, if a company very frequently implements
all 27 lean factors, it would score 5 on the Likert scale making its total score 100
using Eq. 2. It should be noted that the same equations were applied to each of the
six categories to be able to calculate category score slopes and category total score
out of 100.
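Since Eqs. 1 and 2 are not reproduced here, the following Python sketch shows only one plausible reading of the scoring step: a linear "score slope" that anchors a project rating every factor at 5 to a total of 100. The function and the toy weights are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of the leanness scoring step. Eqs. 1 and 2 are not reproduced in
# the text above; the mapping below simply assumes a linear score slope that
# places the all-fives case (rating = 5 on every factor) at a total score of 100.

def leanness_score(ratings, weights):
    """ratings: Likert ratings (1-5) per factor; weights: factor averages as in Table 4."""
    weighted_sum = sum(w * r for w, r in zip(weights, ratings))
    max_score = sum(w * 5 for w in weights)   # every factor rated 5
    slope = 100.0 / max_score                 # assumed form of Eq. 1
    return slope * weighted_sum               # assumed form of Eq. 2

# Toy example with three hypothetical factor weights and ratings:
weights = [2.84, 3.04, 2.88]
print(leanness_score([5, 5, 5], weights))   # 100.0 by construction
print(leanness_score([3, 4, 2], weights))   # partial implementation gives a lower score
```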
After developing the scoring system, a project-based survey was created in order
to gather information on industry performance in construction projects. The survey
aimed to make the respondent self-rate their companies’ performance in a specific
project in each of the 27 lean factors. The rating was also done on a 5-point Likert
scale, with 1 being the lowest rating and 5 being the highest, using the same Likert scale
shown in Table 5. A total of 30 responses were collected through the survey, each
response representing and describing a different project. The first step was to calculate
the category scores and total score for each project as per the established scoring
system. The resulting category scores and total score of the 30 projects in the
database are shown in Fig. 2. Then, the second step was to divide the scores into six
categories as shown in the box plot of Fig. 2, where the categories would then repre-
sent the industry performance in each of the lean categories and the total lean scores.
The established leanness benchmarking scale helps in providing a more accurate
and fair performance assessment, as it uses the industry performance as the datum.
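The text does not spell out how the box plot of Fig. 2 is translated into six benchmarking bands; one plausible construction, sketched below in Python, uses the box-plot statistics (whiskers, quartiles and median) as the five band boundaries. The random scores stand in for the 30-project database and are purely illustrative.

```python
import numpy as np

# Hedged sketch: the 30 project scores are divided into six benchmarking bands
# using a box plot. One plausible construction (an assumption, not taken from the
# paper) is to use the box-plot statistics as band boundaries: below the lower
# whisker, whisker to Q1, Q1 to median, median to Q3, Q3 to upper whisker, and
# above the upper whisker.

def benchmark_bands(scores):
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    iqr = q3 - q1
    lo_whisk = max(min(scores), q1 - 1.5 * iqr)
    hi_whisk = min(max(scores), q3 + 1.5 * iqr)
    return [lo_whisk, q1, med, q3, hi_whisk]   # five boundaries -> six bands

rng = np.random.default_rng(0)
scores = rng.uniform(40, 90, size=30)          # placeholder for the 30 project scores
print(benchmark_bands(scores))
```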
Table 4 (continued)
Factors | Factor average | Factor weighted average | Minimum score = (xi) * 1 | Maximum score = (xi) * 5
D4. Accuracy of schedule accomplishment indicators (SPI) | 2.84 | 14.37 | 2.84 | 14.2
D5. Client focus | 3.04 | 15.38 | 3.04 | 15.2
D6. Problem-solving and decision-making | 3.01 | 15.23 | 3.01 | 15.05
D7. Flexibility of design | 2.88 | 14.57 | 2.88 | 14.4
Sum category D | 19.75 | 100 | 19.75 | 98.75
E. Technological category:
E1. Use of BIM | 2.54 | 48.04 | 2.54 | 12.7
E2. Effectiveness of using a scheduling tool | 2.75 | 51.96 | 2.75 | 13.75
Sum category E | 5.28 | 100 | 5.28 | 26.4
F. Contractual category:
F1. Procurement plan | 2.98 | 34.27 | 2.98 | 14.9
F2. Appropriateness of project delivery method | 2.88 | 33.08 | 2.88 | 14.4
F3. Risk balance in contractual obligations | 2.84 | 32.65 | 2.84 | 14.2
Sum category F | 8.70 | 100 | 8.70 | 43.5
Fig. 1 Project leanness score showing overall frequency of implementing lean factors
Fig. 2 Box plot showing projects’ category and total scores along with the benchmarking categories
6 Conclusion
This research developed a construction lean scoring and benchmarking tool based on 27 lean factors grouped into six categories. Among its benefits is that the user is able to visualize what their leanness
and category scores indicate using the graphical benchmarking scales that the tool
displays. These scales show the user where they stand compared to other companies
in the industry. The limitations of this research include having a limited database
of only 30 projects, and hence, it is recommended to conduct more project-based
surveys in order to increase the database. As a recommendation for future research,
predictive data analysis tools can be used to predict project schedule and budget
performance. This can be done through artificial intelligence tools that train on the
existing data in order to be able to predict whether projects would be on schedule or
on budget based on their lean principle implementations.
References
16. Diekmann J, Krewedl M, Balonick J, Stewart T, Won S (2004) Application of lean manufac-
turing principles to construction. Project report by Project Team 191, Construction Industry
Institute, The University of Texas, Austin. http://docplayer.net/23504615-
Application-of-lean-manufacturing-principles-toconstruction.html
17. El-Sawalhi N, Jabr B, el Shukri A (2018) Towards lean and green thinking in construction
projects at Gaza Strip. In: Organization, technology and management in construction. Sciendo,
pp 1827–1838. Retrieved from https://core.ac.uk/download/pdf/212491648.pdf
18. El-adaway I, Abdul Nabi M (2021) Understanding the key risks affecting cost and schedule
performance of modular construction projects. J Manage Eng 37(4):04021023. https://doi.org/
10.1061/(asce)me.1943-5479.0000917
19. Elsayegh A, El-adaway I (2021) Collaborative planning index: a novel comprehensive bench-
mark for collaboration in construction projects. J Manag Eng 37(5):04021057. https://doi.org/
10.1061/(asce)me.1943-5479.0000953
20. Erol H, Dikmen I, Birgonul MT (2017) Measuring the impact of lean construction practices on
project duration and variability: a simulation-based study on residential buildings. J Civ Eng
Manag 23(2):241–251. https://doi.org/10.3846/13923730.2015.1068846
21. Farrar JM, AbouRizk SM, Mao X (2004) Generic implementation of lean concepts in simulation
models. Lean Constr J 1(1):1–23
22. Goodrum PM, Haas CT, Caldas C, Zhai D, Yeiser J, Homm D (2011) Model to predict the
impact of a technology on construction productivity. J Constr Eng Manage 137(9):678–688.
https://doi.org/10.1061/(ASCE)CO.1943-7862.0000328
23. Hamed U (2013) Implementation of lean construction techniques for minimizing the risks
effect on project construction time. Alexandria Eng J 52(4):697–704. ISSN 1110-0168.
https://doi.org/10.1016/j.aej.2013.07.003
24. Hamzeh F, Zankoul E, Rouhana C (2015) How can ‘tasks made ready’ during lookahead
planning impact reliable workflow and project duration? Constr Manage Econ 33(4):243–258.
https://doi.org/10.1080/01446193.2015.1047878
25. Hanna AS, Lotfallah W, Aoun DG, Asmar ME (2014) Mathematical formulation of the project
quarter back rating: new framework to assess construction project performance. J Constr Eng
Manage 140(8):04014033. https://doi.org/10.1061/(ASCE)CO.1943-7862.0000871
26. Hofacker A, Oliveira B, Gehbauer F, Freitas MD, Mendes Junior R, Santos A, Kirsch J (2008)
Rapid lean construction quality rating model (LCR)
27. Ibrahim MW, Labib Y, Veeramani D, Hanna AS, Russell J (2021) Comprehensive model for
construction readiness assessment. J Manage Eng 37(1):04020088. https://doi.org/10.1061/
(ASCE)ME.1943-5479.0000832
28. Johansen E, Walter L (2007) Lean construction: prospects for the German construction industry.
Lean Constr J 3(1):19–32
29. Jørgensen B, Emmitt S (2008) Lost in transition: the transfer of lean manufacturing to
construction. Eng Constr Archit Manag 15(4):383–398
30. Kapliński O (2008) Usefulness and credibility of scoring methods in construction industry /
Taškų skaičiavimo metodų naudingumas ir patikimumas statybos pramonėje. J Civ Eng Manage
14(1):21–28. https://doi.org/10.3846/1392-3730.2008.14.21-28
31. Kilpatrick J (2003) Lean principles. Utah manufacturing extension partnership, 68(1):1–5
32. Koskela L (1992) Application of the new production philosophy to construction, vol 72. Stan-
ford University, Stanford, CA. http://www.leanconstruction.org.uk/media/docs/KoskelaTR72.
pdf
33. McGraw Hill Construction (2013) McGraw Hill Construction LC—leveraging collabora-
tion and advanced practices to increase project efficiency (smart market report). McGraw
Hill Construction Research and Analytics, Massachusetts. https://www.leanconstruction.org/
media/docs/Lean_Construction_SMR_2013.pdf
34. Mohan SB, Iyer S (2005) Effectiveness of lean principles in construction. In: 13th annual
conference of the international group for lean construction, Sydney, Australia, 19–21 Jul 2005,
pp 421–429
35. Nani G, Apraku K (2016) Effects of reward systems on performance of construction workers
in Ghana. Civ Environ Res 8(7)
36. Oberlender G, Trost S (2001) Predicting accuracy of early cost estimates based on esti-
mate quality. J Constr Eng Manage 127. https://doi.org/10.1061/(ASCE)0733-9364(2001)127:
3(173)
37. Ogunbiyi O, Goulding JS, Oladapo A (2014) An empirical study of the impact of lean
construction techniques on sustainable construction in the UK. Constr Innov
38. Poppendieck M (2011) Principles of lean thinking. IT Manage Select 18(2011):1–7
39. Salem S, Paez O, Solomon J, Genaidy A (2005) Moving from lean manufacturing to lean
construction: toward a common socio-technological framework. Human Factors Ergon Manuf
15(2):233–245
40. Salem O, Solomon J, Genaidy A, Minkarah I (2006) Lean construction: from theory to imple-
mentation. J Manag Eng 22(4):168–175. https://doi.org/10.1061/(asce)0742-597x(2006)22:
4(168)
41. Sterman JD (2010) Does formal system dynamics training improve people’s understanding of
accumulation? Syst Dyn Rev 26(4):316–334
42. Yu J-H, Lee H-S, Kim W (2006) Evaluation model for information systems benefits in construc-
tion management processes. J Constr Eng Manage 132(10):1114–1121. https://doi.org/10.
1061/(ASCE)0733-9364(2006)132:10(1114)
Agent-Based Modeling for Delay
Analysis Claims
1 Introduction
2 Literature Review
A typical construction project usually faces at least one or more delay events
throughout its life cycle [8, 11]. Delay events are typically defined in the litera-
ture as events that prolong the construction time beyond the specified contractual
completion date as agreed by both parties [5]. There are numerous reasons for delay
events in construction projects, such reasons include but are not limited to design
changes requiring reworks, improper planning, deficiency in equipment, material or
labor supply and inadequate site management [2, 10]. In their comprehensive liter-
ature review about the causes of delay in construction projects, Mbala et al. [12]
ranked the leading causes of delays in construction projects. The results of their
research show that the top five causes of delay in construction projects are (1) poor
site management, (2) unavailability of skilled labor competent to perform complex
tasks, (3) impractical activity durations, (4) scarcity of resources and finally (5) design
changes leading to reworks. It goes without saying that the magnitude of the effect
of these events on construction projects is directly related to the level of complexity
of the project. Consequently, the more complex and interdependent the project activ-
ities are, the higher the risk that any of the aforementioned delay events will cause
a chain reaction of unpredictable delays. Among the ways that contractors utilize
to mitigate the effects of delay events are accurate planning, sufficient project risk
assessment, high communication and coordination between project parties, efficient
project management and finally, detailed delay analysis in order to quantify effects
of such delays.
Delay analysis techniques are divided into two main broad categories which are
prospective and retrospective analysis. Prospective analysis studies the effect of the
delay event on the criticality at the time the delay event occurs or before it occurs.
However, retrospective analysis is done after the delay event happens, and the actual
sequence, timing, resources of work related to the event is known [1]. According to
Yang et al. [16], prospective analysis is performed in five steps: (1) identifying a
change, or potential change, that will cause a delay for which an extension of time may
be claimed; (2) identifying the activities that may be affected by this delay
and its effect on the project completion date relative to the original as-planned schedule;
(3) creating the series of new activities that should be added to the schedule in order
to accommodate the delay event; (4) determining how these new activities fit into
the original schedule and establishing an impacted schedule; and (5) comparing the project
completion or milestone completion dates in the original unimpacted schedule versus
the impacted schedule. Among the advantages of prospective analysis is that it is very
accurate and reflects real-time effects on the project; moreover, it can help the contractor
anticipate risks associated with the new schedule ahead of time and create a mitigation
plan for them. However, the disadvantages are that it is time consuming, requires
constant follow-up and can be more expensive. Moreover, as it is done in real time, it
provides data for what might happen and does not take into consideration the effect
of concurrent delays; therefore, it can produce inaccurate analysis [6]. Prospective
analysis includes methods like the time impact analysis method and impacted as
planned method. On the other hand, retrospective analysis includes inserting a fragnet
with the actual delay events onto the baseline schedule and assessing the impact on the
completion date or milestone of the project [16]. Retrospective analysis is relatively
cheaper and is easier to carry out; however, one of its disadvantages is that it can yield
very different results if key information is missing and its accuracy depends mainly on
the consistency of record keeping [6]. Retrospective delay analysis methods include
the Window Analysis, and the As Planned versus As Built method. The accuracy of
quantifying the effect of a delay event depends mainly on proper documentation and
records after the event has taken place. Therefore, in efforts to speed up the process
of quantifying the effect of delay events, this paper employs a real-time recording of
delaying events through conducting delay analysis using agent-based modeling.
As seen from the literature, agent-based modeling can have several applications
specific for use within the construction industry. One of the novel research areas that
agent-based modeling can be utilized in is simulating construction delay events in an
effort to perform delay analysis, where the aforementioned common causes of delay
in construction can be simulated in a model and the effect of the delay event can be
quantified. This is done through simulating hypothetical
construction activities for a hypothetical project and introducing live delay events
such as but not limited to resource scarcity, changes in design or unavailability of
labor crews. The introduction of the said delay events onto the modeled project
would simulate the effects of these delays accurately, thus allowing for an accurate
quantification of their consequential effect on the project duration. For the purpose of this
research, one of the most common delay events in construction projects, which is
design changes, was chosen for the simulation model and a delay analysis was done
to demonstrate the effect of such an event on the project. The proposed model along
with a case study and results is presented in the following sections of the paper.
This paper examines the application of delay analysis using agent-based modeling.
The proposed framework for the model can be seen in Fig. 1.
The simulation tool chosen was AnyLogic© due to its discrete event simulation capabilities,
where a delay analyst can observe the schedule over time as it evolves and the
instantaneous changes that could occur to it. This is where the real-time aspect of
the approach enters. Delaying events can be chronologically added to the simulation
and changes to the schedule can be observed directly. This involves changes made
to all interrelated activity paths.
The proposed method involves retrieving the initial planned schedule to
commence its simulation. The important aspect to observe is how the simulation
would be built. The simulation must start off by modeling the planned schedule to
retrieve the planned duration; from there, the model must be adjusted to accommodate
the real-time insertion of events. This involves the task of defining the model with all
possible scenarios that the agent would be exposed to when faced with an inserted
delaying event. Table 1 compiles a list of all possible scenarios that may be faced
when conducting a delay analysis.
[Figure: planned schedule and delayed schedule]
The following actions would be defined by a user in an Excel sheet, and the
simulation model would read the required action and perform it. Thus, the final
delayed schedule would eventually be produced based on an updated events log
record.
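The model itself is built in AnyLogic© and is not reproduced here; the plain-Python sketch below only illustrates the mechanism described above: a baseline activity network is simulated, delay events are appended to an events log, and the schedule is re-run to obtain the delayed completion date. Activity names and durations are hypothetical.

```python
# Minimal illustration (not the authors' AnyLogic model) of the events-log mechanism:
# simulate the baseline network, append delay events to a log, re-run, and compare
# the completion dates. Activity names and durations are hypothetical.

baseline = {                      # activity: (duration in days, predecessors)
    "procure":  (10, []),
    "ground":   (20, ["procure"]),
    "first":    (25, ["ground"]),
    "finishes": (10, ["first"]),
}

def completion_date(activities):
    finish = {}
    def ft(a):
        if a not in finish:
            dur, preds = activities[a]
            finish[a] = max((ft(p) for p in preds), default=0) + dur
        return finish[a]
    return max(ft(a) for a in activities)

def apply_events(activities, events_log):
    """Each event inserts a rework activity after an existing activity and
    re-links that activity's successors to the rework."""
    acts = {a: (d, list(p)) for a, (d, p) in activities.items()}
    for i, (after, extra_days) in enumerate(events_log):
        rework = f"rework{i}_{after}"
        for a, (d, preds) in acts.items():
            acts[a] = (d, [rework if p == after else p for p in preds])
        acts[rework] = (extra_days, [after])
    return acts

print(completion_date(baseline))                               # planned duration
events_log = [("procure", 5)]                                  # e.g. a re-procurement rework
print(completion_date(apply_events(baseline, events_log)))     # delayed duration
```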
4 Case Study
[Figure: case study schedule of ground floor and first floor activities, including ceramic works, a marble sample and procurement of the purchase order]
Fig. 4 presents the first event added as well as the resulting delayed schedule
where, due to the reworks taking place in the procurement cycle, the project was
delayed from 64.75 to 65.3 days.
To further exhibit the real-time insertion of events, another scenario is demon-
strated in Fig. 5 where event 1 required re-engineering and reprocurement to take
place. However, event 2 was added before the reprocurement triggered by event 1 had taken place,
and event 2 involved re-engineering again. The model therefore readjusted to repeat the
re-engineering in light of the new information received. This caused a further
delay, pushing the completion duration to 108.3 days.
In the two cases presented above, and as seen in Fig. 6, the delayed duration was
obtained by recording events in an events log, where the user is able to choose
where the reworks would take place. The information is then instantaneously
fed into the simulation so that it can adjust the agent’s path and thus the
project duration. The idea is that the selection of the delay analysis method
would no longer be a dispute open for discussion between the parties; as seen in the case
study above, it is only necessary to identify the delay and its impact, and the de facto
imitation of the delays then presents the delays incurred. In other words,
the long-standing debate of “which delay analysis method to select?” is eliminated
through delay analysis using agent-based modeling.
The advantages of using agent-based modeling for delay analysis lie in the
following:
1. Real-time modeling: The agent-based delay analysis can allow for a proactive
observation of the events’ effect. The model can also be further adjusted to read
a retroactive list of events that have already been finalized to demonstrate their
effect on the schedule.
Fig. 6 Project durations in days for the planned schedule (64.75), delay analysis case 1 with one delaying event (65.3) and delay analysis case 2 with two delaying events (108.3)
2. Automation: Once the simulation model has been created, it serves for the lifetime of the
project; any user can insert events into an events log and the delay
analysis is updated by simply running the model. The labor-intensive
process of a traditional delay analysis is no longer necessary at this point. Further code can be
developed to cover all the situations an agent may face, as seen in Table 2. Not
only that, but this also removes the need for a delay analyst to set the relationship of the
delaying events with the existing activities every time an event is added.
3. Accurate results: Because AnyLogic© takes into account the limitation of
resources and the probabilistic nature of activities, it provides
a far more realistic set of results compared to the static, limited nature of
conventional scheduling software.
Regardless of the above, it is important to recognize that the model prepared does
not account for acceleration, pacing, and concurrent delays, which is its initial
limitation. Yet, due to the versatility of AnyLogic©, these can be accommodated
through further adjustments to the simulation model.
6 Conclusion
Overall, the above demonstrates the potential that agent-based modeling
has in the delay analysis realm. Advantages include a fast, accurate, and realistic
method for performing delay analysis; the history of events can be visually unraveled
in front of all parties to retell the story of what happened. This may decrease disputes
between parties by avoiding the intricate and complex delay analysis methods, where
an individual who needs to revisit the delay analysis must have extensive knowledge
of the method itself as well as the commercial software being used,
as opposed to visually demonstrating the events through simulation. Further,
agent-based modeling for delay analysis can also be the key to resolving the variance
found in applying different delay analysis techniques, because events are simulated or
imitated as they are. Moreover, AnyLogic© in particular is versatile and
can be adjusted to accommodate a variety of delaying scenarios while consid-
ering real-life limitations such as resources. This application can even open doors to
further recognizing costs associated with man-hour utilization due to delays or even
disruption costs due to decreased productivity.
References
1. Abou Orban H, Hosny O, Nassar K (2018) Delay analysis techniques in construction projects
2. Al-Kharashi A, Skitmore M (2009) Causes of delays in Saudi Arabian public sector construction
projects. Constr Manag Econ 27:3–23. https://doi.org/10.1080/01446190802541457
Abstract Cylinders fitted with strakes have been studied as a method to suppress
vortex shedding and mitigate vortex-induced vibrations. However, the effect strakes
have on the horseshoe vortex (HSV) formed at the forebody-bed junction is not fully
understood. In this work, a computational fluid dynamics (CFD) study is utilized
to further investigate the flow characteristics. A Reynolds averaged Navier–Stokes
(RANS) k-ω shear-stress transport (SST) turbulence model was utilized to carry out
the numerical study. Simulations of the flow around the cylinder with diameter (D)
of 40 mm were carried out. Three models were adopted: a cylinder with no strake and
two straked cylinders with strake heights of 0.1D and 0.2D. The straked cylinders
were situated such that at the base, the strakes were located at 60° to the approach
flow. The cylinder Reynolds number was 28,000 based on the approach flow velocity.
The drag coefficient of the no-strake cylinder case agrees well with published results.
Wall shear contours along the bottom wall show a larger area of decreased wall shear
stress both upstream and downstream of the straked cylinders compared to the bare
cylinder. Vorticity contours and visualization of 3D fluid structures show that the
HSV is formed further upstream from the pier face in the straked cases than the
no-strake cylinder case. It was also observed that the HSV is extended further into
the wake region for the straked cylinders. It was also found that the roll-up and
subsequent development of the HSV is pushed farther from the base of the cylinder
in the straked cases, with the distance increasing with strake height.
Keywords
1 Introduction
The flow around a circular cylinder is a classic problem in bluff body aerodynamics
which has been studied for many years. The analysis and understanding of these flows
are an integral part of many engineering applications. The flow field around a circular
cylinder is made up of trailing vortices, wake or lee vortices, a horseshoe vortex
system, or a combination of these. As a fully developed boundary layer encounters
the cylinder, a pressure gradient is created, causing the flow stream at the base to roll
up into a vortex. This vortex is then carried around the cylinder downstream, resulting
in an increase of the mixing between the wake and core flow. This phenomenon is
known as the horseshoe vortex (HSV). The HSV is understood to be the primary
cause of scour holes around bridge piers. As the HSV gets trapped by a sediment
ring, formed from the scoured sides, it allows for rapid removal of sediment [4]. These
scour holes around the bridge piers may ultimately result in bridge failure. Several
investigations [8, 10, 12] of bridge failures in North America have determined that
scour or scour-related problems account for 50–60% of bridge collapses. Vortex shedding
is brought on by flow separation from the cylinder’s surface. The separated flow sheet is
slowed down by friction when in contact with the cylinder, causing the formation of
Karman vortices. If the vortex shedding frequency approaches a structure’s natural
frequency, vortex induced vibrations (VIV) can occur. VIVs can result in damages
to engineering structures such as off-shore structures, cable-stayed bridges, and heat
exchangers.
Numerous researchers have investigated methods to control the flow and miti-
gate its harmful effects. Passive control methods are often favored over active
control methods, as active methods require a secondary energy supply, increasing
complexity and energy costs. A common form of passive control is when changes are
made to the physical geometry of the cylinder, called geometric shaping. Examples
of geometric shaping include the incision of a slit into the cylinder and the addition
of helical strakes or bumps. This paper focuses on helical strakes in the three-start
configuration, which are one of the most widely used passive devices. Helical strakes
are “fin-like” elements that wrap around the cylinder in a helical pattern to reduce the
forces and cross-stream deflections experienced on the stack due to vortex shedding.
Many studies have been conducted in the past to investigate the effect of strakes
on the flow field around a circular cylinder in both water and air. Results from various
experimental studies [1, 2, 6, 11] indicate that strake pitch is not as significant in
VIV suppression as strake height. Korkischko and Meneghini [7] employed a two-
camera stereoscopic particle image velocimetry system to perform volumetric recon-
struction of flow behind cylinders with strakes of varying heights. It was observed
that the strakes introduce a well-defined wavelength of one-third of the pitch in
the reconstructed flow. The results of this study also provide a basis of comparison
to numerical simulations. Carmo et al. [3] conducted two- and three-dimensional
simulations of the flow around straked cylinders. Analysis of the simulation results
showed that the Strouhal number was the lowest when the strakes were at a 60° angle
to the inlet flow. In both the studies conducted by Korkischko and Meneghini [7]
and Carmo et al. [3], the correlation coefficient of the velocity field in the near-wake
region, compared to a bare cylinder, was also at a minimum when the strakes were
at a 60° angle to the incoming flow.
Although numerous investigations have been completed on the use of strakes
in reducing vortex shedding and VIV, the effects of strakes on the HSV are much
less documented in the literature. Given the effectiveness of strakes at suppressing the
alternating vortex shedding that leads to VIVs, the possibility of using strakes to
help suppress or eliminate the horseshoe vortex that forms around the base of the
cylinder is worth investigating. In the current study,
CFD methods are utilized to investigate the wake regions formed around circular
cylinders with and without strakes. Specifically, the effects the strakes have on the
horseshoe vortex and the bed shear stress in the bed region are of interest. For the
present work, a numerical Reynolds averaged Navier–Stokes (RANS) simulation was
carried out to study how the use of helical strakes affects the formation of horseshoe
vortices. In this study, the flow characteristics of two cylinders fitted with helical
strakes are compared with that of a plain cylinder case. To this end, the streamwise
velocity, vorticity generation, and wall shear stress are analyzed.
2 Model Setup
The wake flow is simulated using the commercial CFD package STAR CCM+. A
water channel of rectangular cross section is modeled, with dimensions of 34D (1.36 m) long,
18D (0.72 m) wide, and 10D (0.4 m) in height, as seen in Fig. 1. The height of
the domain is 10D to match the pitch of the cylinders. The side walls and inlet are
positioned 9D from the center of the cylinder. The side walls are located far enough
from the cylinder such that the blockage ratio is 5.5%. The outlet was extended to
25D from the center of the cylinder to allow for the wake region to fully form.
The inlet condition is a nearly fully developed turbulent flow and is obtained by
conducting a separate simulation using the same computational domain as described
above, but the length was extended to 134D (5 m) and the bluff body was removed.
A uniform velocity inlet profile with a free stream velocity of 0.7 m/s was used at
the inlet boundary. The velocity distribution of the fully developed inlet profile is
compared with the numerical work of Nasif et al. [9], as seen in Fig. 2, who compared
their work with the experimental data of Heidari et al. [5]. The work of Nasif et al.
[9] and Heidari et al. [5] studied the flow in an open channel without a bluff body.
In these comparisons, U represents the mean streamwise velocity, and U ∞ is the
freestream velocity of 0.72 m/s. The numerical results were in good agreement with
the work of Nasif et al. [9].
The fully developed profile also follows the near-wall and logarithmic forms of the
boundary layer very well, as shown in Fig. 3. The Reynolds number based on the inlet
velocity of 0.7 m/s and cylinder diameter of 40 mm is 28,000. The water properties
are assumed to be constant, with the density equal to 1000 kg/m3 and the dynamic
viscosity equal to 0.001 N s/m2 . An implicit unsteady simulation was carried out
to match the pitch of the strakes used in this case. The two straked cases have
different heights of 0.1D and 0.2D. The strake height was varied instead of the pitch
because previous studies have shown that altering the strake height is more effective
at suppressing vortex shedding. The starts of the strakes at the bottom of the cylinder
are positioned 60° from where the flow initially reaches the cylinder, as shown in
Fig. 4. This is because it has been previously reported that the Strouhal number and
correlation coefficient of the wake region were minimized when the strakes were
positioned at this angle to the approach flow [3].
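The strake layout described above (three-start configuration, a pitch of 10D, and a base start angle of 60° to the approach flow) can be written parametrically; the short sketch below only illustrates this geometry for the 0.1D case and is not part of the meshing procedure.

```python
import numpy as np

# Geometry sketch of the three-start helical strakes described above: pitch 10D,
# strake tips at radius R + h (h = 0.1D here), and the first start positioned
# 60 degrees from the approach-flow direction at the cylinder base. This only
# illustrates the parametric geometry, not the CFD mesh.

D = 0.04                    # cylinder diameter [m]
R = D / 2                   # cylinder radius
h = 0.1 * D                 # strake height (0.1D case)
pitch = 10 * D              # helix pitch
H = 10 * D                  # cylinder height spanning the domain

z = np.linspace(0.0, H, 200)
for start in range(3):                                  # three-start configuration
    theta0 = np.deg2rad(60) + start * 2 * np.pi / 3     # base angle of each strake
    theta = theta0 + 2 * np.pi * z / pitch              # one full turn over one pitch
    x_tip = (R + h) * np.cos(theta)                     # strake tip coordinates
    y_tip = (R + h) * np.sin(theta)
    print(f"strake {start}: tip starts at ({x_tip[0]:.4f}, {y_tip[0]:.4f}) m")
```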
A grid independence study was conducted by investigating how the coefficient of
drag (Eq. 1) varies with the mesh cell count, where Fd is the drag force on the
cylinder, ρ is the density of the fluid, U is the freestream velocity, and A is the frontal
area of the cylinder.
Cd = 2Fd / (ρU²A)    (1)
The grid was considered to be adequate when there was a minimal change in the
drag coefficient following mesh refinement. Table 1 provides a summary of the grid
study conducted.
Fig. 4 Orientation of straked cylinders to the approach flow
The drag coefficient from case C to case D changes minimally, around 1.3%.
Therefore, case C was chosen due to its drastically reduced cell count. The final
mesh is shown in Fig. 5. The refinement region is reserved to the area surrounding
the cylinder and the wake region. Prism layers were included around the cylinder
and on the bottom wall to accurately resolve the wall effects, as seen in Fig. 5c. The
non-dimensional wall-normal distance, y+, is below 1 along the cylinder and bottom
walls.
Model validation was carried out by comparing the drag coefficients between the
present numerical work and the experimental work of Korkischko and Meneghini
[7]. The drag coefficient produced by the work of Korkischko and Meneghini [7]
was 1.16, while the drag coefficient calculated by the present numerical study was
1.15.
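As a quick consistency check of the quoted flow parameters, the snippet below recovers the Reynolds number of 28,000 and the drag force implied by Eq. 1 for the reported drag coefficient of 1.15. The frontal area A = D × H with H = 10D is an assumption made here for illustration, since the reference area is not stated explicitly in the text.

```python
# Consistency check of the flow parameters quoted above. The frontal area is
# assumed to be A = D x H with H = 10D (the cylinder height); this is an
# assumption for illustration, as the reference area is not stated explicitly.

rho = 1000.0      # water density [kg/m^3]
mu = 0.001        # dynamic viscosity [N s/m^2]
U = 0.7           # inlet velocity [m/s]
D = 0.04          # cylinder diameter [m]
H = 10 * D        # assumed reference height [m]

Re = rho * U * D / mu                 # -> 28,000, matching the text
A = D * H                             # assumed frontal area [m^2]
Cd = 1.15                             # drag coefficient from the present simulation
Fd = 0.5 * Cd * rho * U**2 * A        # drag force implied by Eq. 1, roughly 4.5 N
print(Re, Fd)
```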
Fig. 5 Mesh domain used for numerical study including a top-down view of domain, b bird’s-eye view
of mesh domain, and c magnified view showing prism layers around the cylinders and domain bed
3 Results
To visualize the time-averaged fluid motion and vorticity of the flow around the
plain and straked cylinders, the time-averaged velocity contours superimposed with
streamlines (left column) and the ωz -vorticity contours (right column) in the Z/D
= 0.0125 plane are displayed in Fig. 6. The time-averaged vorticity plot clearly
indicates the roll-up of the flow into the horseshoe vortex. Notably, the presence
of the strakes changes the location of the
HSV roll-up, pushing it further upstream compared to the plain cylinder case. This
is also observed in Fig. 7. It is important to note that as the HSV develops in the
downstream direction, it moves further away from the cylinder as the height of the
strakes increases. It is believed that the development of the HSV is moved further
away from the cylinder because the shear layers of the flow are also pushed further
away from the cylinder due to the presence of the strakes. The observations into
the development of the HSVs can also be seen in the λ2 plots displayed in Fig. 8.
Downstream of the plain cylinder, it is evident that the main vortices are still present
in the near wake of the cylinder, leading to stronger flow recirculation in that region.
This phenomenon is not seen in the straked cylinder cases, as neither the streamlines
nor vorticity plots show recirculation in the near-wake region. Instead, it appears
as though the strakes do an effective job of suppressing vortex formation in the
near-wake region by dissipating the vortices throughout the wake.
Figure 7 shows the time-averaged velocity contours and streamlines from the
velocity vectors upstream of the cylinder in the Y/D = 0 plane. It is observed that
there are minimal changes in the flow fields between the three cases. This is expected
due to the orientation of the strakes to the incoming flow. Since the bare cylinder is
reached first in all three cases, the flow experiences similar behaviors. However, a
small recirculation region can be seen in front of the cylinder. It is observed that the
location of this recirculation region is pushed forward by the presence of the strakes
and moves further upstream as the height of the strakes increases. The recirculation
zone is located approximately 0.4D, 0.5D, and 0.6D in front of the cylinder for the
plain, straked 0.1D, and straked 0.2D cylinder cases, respectively. The exact cause
of this effect is unknown to the authors and requires further analysis before an
accurate conclusion can be drawn.
Figure 8 illustrates the 3D vortex cores of the time-averaged flow field identified
by using the λ2-criterion at an iso-level of λ2 = −1. The iso-surfaces are colored with the time-averaged streamwise
velocity. The HSV has been marked by number 1 for each case. From this figure,
it is evident that the finger-like vortices shed from the plain cylinder are suppressed
in the two straked cylinder cases, which has been reported in numerous previous
studies. As shown in the plain cylinder case, the HSV forms at the upstream face of
the cylinder and wraps around it but does not extend to the wake region. However,
the HSV is extended into the wake region in both the straked cases. The HSV for
each straked cylinder case appears to not only extend further into the wake region,
but they also appear to be larger in physical size than in the plain cylinder case. As
the HSV travels around the cylinder in the straked cases, it appears to do so at a
further distance from the cylinder compared to the plain cylinder case as well. This
is also evident in the ωz contours presented in Fig. 6.
Fig. 6 Mean streamwise velocity contour with streamlines for the a plain cylinder, c straked 0.1D
cylinder and e straked 0.2D cylinder on the Z/D = 0.0125 plane. ωz-vorticity contour for the b plain
cylinder, d straked 0.1D cylinder, f straked 0.2D cylinder on the Z/D = 0.0125 plane
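The λ2-criterion used for the iso-surfaces in Fig. 8 follows the standard definition: λ2 is the intermediate eigenvalue of S² + Ω², where S and Ω are the symmetric and antisymmetric parts of the velocity-gradient tensor, and vortex cores correspond to λ2 < 0 (here the iso-level −1 is used). The sketch below evaluates λ2 at a single point and is a generic illustration, not the STAR CCM+ implementation.

```python
import numpy as np

# Generic illustration of the lambda_2 vortex criterion used for Fig. 8 (not the
# STAR CCM+ implementation): lambda_2 is the intermediate eigenvalue of S^2 + W^2,
# where S and W are the symmetric and antisymmetric parts of the velocity-gradient
# tensor. A point lies inside a vortex core where lambda_2 < 0 (iso-level -1 here).

def lambda2(grad_u):
    """grad_u: 3x3 velocity-gradient tensor du_i/dx_j at a point."""
    S = 0.5 * (grad_u + grad_u.T)            # strain-rate tensor
    W = 0.5 * (grad_u - grad_u.T)            # rotation-rate tensor
    eig = np.linalg.eigvalsh(S @ S + W @ W)  # S^2 + W^2 is symmetric
    return np.sort(eig)[1]                   # intermediate (second) eigenvalue

# Example: a simple swirling (solid-body-like) velocity gradient gives lambda_2 < 0
grad_u = np.array([[0.0, -5.0, 0.0],
                   [5.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
print(lambda2(grad_u))   # negative -> vortex core
```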
The time-averaged shear-stress contours along the bottom wall are displayed in
Fig. 9. As can be seen from the figure, both straked cylinder cases have larger areas
of low shear stress in the downstream region of the cylinder. It is also observed that
the areas of high shear stress on the upstream face are much smaller than in the case
of the plain cylinder. It is evident that as the strake height increases, the region of low
Fig. 7 Mean streamwise velocity contour with streamlines for the a plain cylinder, b straked 0.1D
cylinder, and c straked 0.2D cylinder on the Y/D = 0 plane
shear stress at the upstream face of the cylinder also increases. A small correlation is
observed between the ωz -vorticity pattern and the wall shear-stress distribution along
the bottom wall. Figure 10 shows a plot of the wall shear-stress amplitude along the
bottom wall in the upstream direction just before the cylinder. The peak wall shear
stress for the 0.1D straked cylinder case is lower than that of the two other cases.
As the flow approaches the cylinder (i.e., when −0.7 < X/D < −0.5), the wall shear
stress for both straked cases is lower than that of the plain cylinder case. This may
help with the reduction of scour, as the wall shear stress needs to exceed the critical
shear stress of the bed material for scour to occur. If the wall shear stress is less than
the critical shear stress of bed material, the flow is unable to remove sediment from
the scour hole. From Fig. 10, it is also observed that the peak shear stress along
the bottom wall moves further upstream as the strake height increases. This further
solidifies the results drawn from the ωz plots in Fig. 6, as the location of peak wall
shear stress corresponds to the location of the HSV formation.
Figure 11 displays the ωx -vorticity contours in the X/D = 0 plane for all three cases.
Comparison between the plain and straked cylinder cases shows that the regions
of vorticity along the height of the straked cylinders are larger and more intense.
Fig. 8 λ2-criterion iso-surfaces for a plain cylinder, b straked 0.1D cylinder, and c straked 0.2D
cylinder
Fig. 9 Time-averaged bed-shear-stress contours for the a plain cylinder, b straked 0.1D cylinder,
and c straked 0.2D cylinder
Fig. 10 Time-averaged bed-shear-stress amplification along the symmetry line upstream of each
cylinder case
This is expected though as strakes are used to generate vortices to help suppress
the unsteady wake region. This trend is also noticed at the bottom of the cylinder
between the cylinder base and the bottom wall. This is highly undesirable as those
vortices may contribute to the loosening or removal of material on the bed. It can
also be observed that as the height of the strakes increases, the area of high vorticity
increases as well while also extending further from the cylinder base. At the bottom
of the cylinders, the vortices cover a larger area in the negative y-direction than in the
positive y-direction. This is believed to be due to the orientation of the strakes. On the
negative y-side, the strake starts just ahead of the X/D = 0 plane and moves up and
away from the plane. On the positive y-side, the strake also starts just before the X/D
= 0 plane but travels up and toward the plane. This means that on the negative y-side,
the vortices are generated by the strake and are allowed to develop freely without
interference. The strake on the positive y-side acts more like a barrier to this region
with the development of any vortices occurring further downstream than where the
plane is located.
Fig. 11 ωx -vorticity contours for the a plain cylinder, b straked 0.1D cylinder, and c straked 0.2D
cylinder on the X/D = 0 plane
4 Conclusions
A numerical study was carried out using STAR CCM+ to investigate the effect
of helical strakes on the horseshoe vortex formed around circular cylinders. An
implicit unsteady simulation was carried out utilizing a RANS equation and k-ω SST
turbulence model. Three models were utilized in the study—a plain cylinder and two
straked cylinders with strake heights of 0.1D and 0.2D. Various parameters, including
velocity, vorticity, and wall shear stress, were plotted and visualized to investigate
the effects of strakes on the flow field. The three-dimensional fluid structures were
also visualized using the λ2-criterion.
In the Z/D = 0.0125 plane, the straked cases shifted the location of the HSV
roll-up further upstream from the cylinder, with the distance from the cylinder increasing
with the height of the strakes. The vorticity contours in this plane show that the HSV
is extended further downstream, into the wake region, for the straked cases. From the
iso-surface plots, showing the 3D fluid structures, it is observed that the formation of
the HSV occurs further away from the upstream surface as the height of the strakes
increases. This complements the results drawn from the vorticity plots in the Z/
D = 0.0125 plane. Furthermore, it appears as though the HSV travels around the
cylinder at a further distance from the cylinder’s main body in the straked cases. The
streamlines displayed in Fig. 7 show that the upstream recirculation region is pushed
further upstream in the straked cases and moves further upstream as the height of the
strakes increases. It can also be concluded that as the height of the strakes increases,
the area the vortices cover grows, and extends further from the cylinder. This means
that on the y-negative side, the vortex has more time to develop after being generated
by the strake, giving it the ability to develop closer to the bottom wall as well. The
strake on the y-positive side acts more like a barrier to this region, shielding it from
the development of the vortex in this region.
Since the HSV is not destroyed, but instead is larger and extends further into the
downstream region, the use of strakes, at 60° from the incoming flow, may not be
a suitable countermeasure for bridge pier scour. However, it was observed that the
wall shear stress was reduced with a strake height of 0.1D, which may assist in the
reduction of scour. It is recommended that further studies be conducted to investigate
the effect of strake orientation (i.e., position of strakes relative to incoming flow) on
the horseshoe vortex.
References
4. Guo J (2012) Pier scour in clear water for sediment mixtures. J Hydraul Res 50(1):18–27
5. Heidari M, Balachandar R, Roussinova V, Barron R (2017) Characteristics of flow past a slender
emergent cylinder in shallow open channels. Phys Fluids 29(6):065111
6. Korkischko I, Meneghini J (2010) Experimental investigation of flow-induced vibration on
isolated and tandem circular cylinders fitted with strakes. J Fluids Struct 2:611–625
7. Korkischko I, Meneghini JR (2011) Volumetric reconstruction of the mean flow around circular
cylinders fitted with strakes. Exp Fluids 51(4):1109–1122
8. Miroff N (2007) Collapse spotlights weaknesses in U.S. infrastructure. Washington Post,
Nation, www.washingtonpost.com
9. Nasif G, Balachandar R, Barron RM (2019) Influence of bed proximity on the three-dimensional
characteristics of the wake of a sharp-edged bluff body. Phys Fluids 31(2):025116
10. Shirhole AM, Holt RC (1991) Planning for a comprehensive bridge safety program. In: Trans-
portation research record 1290, Transportation Research Board, National Research Council,
Washington, D.C., vol 1, pp 39–50
11. Trim AD, Braaten H, Lie H, Tognarelli MA (2005) Experimental investigation of vortex-
induced vibration of long marine risers. J Fluids Struct 21:335–361
12. Wardhana K, Hadipriono FC (2003) Analysis of recent bridge failures in the United States. J
Perform Constr Facil 17(3):144–150
Framework for the Optimization
of Flying Shuttering Bridges: A Hybrid
Graph Theory and Simulation Approach
Y. A. S. Essawy (B)
Department of Structural Engineering, Ain Shams University (ASU), Cairo, Egypt
e-mail: yasmeen.sherif@eng.asu.edu.eg; y_essawy@aucegypt.edu
Y. A. S. Essawy · A. Abdullah
Department of Construction Engineering, The American University in Cairo (AUC), Cairo, Egypt
e-mail: abdelhamid.abdullah@m-eng.helwan.edu.eg; abdelhamid.abdullah@aucegypt.edu
A. Abdullah
Department of Architectural Engineering, Faculty of Engineering at Mataria, Helwan University,
Helwan, Egypt
K. Nassar
Industrial Partnerships and Extended Education, Department of Construction Engineering, The
American University in Cairo (AUC), Cairo, Egypt
e-mail: knassar@aucegypt.edu
1 Introduction
The construction industry is the largest service industry in the world, forming more
than 10% of a country’s Gross National Product (GNP). Growth in the construc-
tion industry is considered an indicator of the economic conditions of a country.
Since a large segment of the public and private sectors’ expenditure is spent on the
construction industry, it is, therefore, essential to consider how to properly direct the
huge amount of money spent on such a crucial industry.
Bridge construction, being among the most challenging infrastructure and heavy
construction projects due to its long service life, large budget
and complexity, consumes a considerable portion of the aforementioned expenditure. Such
projects are performed in different conditions, i.e., different locations, equipment
breakdown, technology used, resources available, site conditions and environmental
conditions. Due to these raised uncertainties, the production rates of all resources are
influenced, and consequently, each of the project construction sequences is affected.
Research Motivation and Objectives
Nowhere in construction is the competitive component in design as obvious as it is
in the field of bridge construction. This is clearly shown by the large number of bids
submitted based on design alternatives to those for which tenders were invited and
also by the enthusiasm with which contractors participate in design-build contracts.
Since design can have a significantly greater effect on the overall cost in bridge
works than in other forms of construction, it is useful to help clients
and consultants make sound decisions regarding the selection of an optimized
construction methodology in order to obtain better time and resource utilization.
That is the premise of this research. This paper focuses on a rapid bridge construction
method: Flying Shuttering. It is particularly suited to projects with unfavorable
ground conditions.
Subsequently, an intelligent framework with advanced computational tools and algo-
rithms was designed to simulate and optimize construction sequences for Flying
Shuttering Construction Method. This framework gives maximum flexibility to
the designer while considering constructability criteria, hence improving design-
construction integration while achieving an optimized construction plan. In other
words, the objective of this research was to develop a framework to configure an
optimized sequencing of bridge elements for Flying Shuttering Construction Method
in order to obtain an optimized construction duration and resource utilization and,
accordingly, recommend the Optimized Bridge Construction Method, Schedule &
Resources.
2.1 Introduction
Bridge construction is considered one of the major industries that reflect remark-
able human evolution and creativity. There has always been continuous development
and advancement in bridge construction.
The most significant structural evolution of the twentieth century was the intro-
duction of prestressed concrete. An initial compressive stress is induced in a structure
to eliminate the tension caused by loading (the tension that causes the concrete to
crack). This technique was used to build bridges and was first introduced in the
construction of the prestressed concrete bridge spanning the River Rhine at Worms
[9]. A cantilever method of construction was used. This aroused considerable
interest in the bridge construction industry and led to the development of the Flying
Shuttering method of construction [1].
It should be noted that some attempts have been made to aid the client and
consultant in selecting the most appropriate bridge deck construction
method [10]. Such a selection could be based on several factors, including but
not limited to: site conditions, technology used, resources available and required
construction time [11].
Flying Shuttering is also called the Advanced Shoring System, Mobile or Moving
Scaffolding, or Self-launching Erection Girder. The method of construction is used
for prestressed cast-in-situ box-section concrete bridges. It proved to be efficient
for relatively short length spans (30–60 m). The construction method’s main idea
is that the formwork (movable Flying Shuttering) is supported on the permanent
substructure or superstructure. Thus, the construction activities are transferred off
the ground, and the factory operation is transported on site. This allows for the rapid
construction over important obstacles without hindering traffic. It is also used when
the ground conditions are poor, the ground level is variable, or the bridge is high
above the ground. Flying Shuttering is similar to the stationary formworks, but has a
faster construction rate as it can be easily dismantled and used for consecutive spans.
It should be noted that bridge design should take into account the loads resulting from
the construction method, and careful consideration should be given to deflection.
On the other hand, the main drawback of bridge deck construction using Flying
Shuttering Technique is the large required capital investment, because of the intensive
utilization of equipment, compared to other techniques. The presence of the gantry
system limits the allowable bridge span and the bridge section (box-section with
constant depth). Moreover, Flying Shuttering is only suitable for simple geometry
providing very limited flexibility for horizontal curves or any complex roadway
geometry [2, 6].
The Flying Shuttering system consists of the following main components:
1. Formwork (supported on a moving gantry system and suspended from steel rods),
2. Moving gantry system,
3. Two main trusses,
4. Brackets (fixed to bridge piers in order to support the trusses).
N.B. Only four brackets are needed to support the main truss, but, in certain cases,
six brackets were used just to speed up construction.
Flying Shuttering position, as demonstrated in Fig. 2, varies depending on the
project limitations.
• The trusses are placed above the final superstructure; the principal advantage of
this solution is that it can be used for spans where the height of construction above
ground level may be small. However, the major disadvantage of this method is
that the formwork normally requires temporary suspension rods passing through
the superstructure.
• The girder is placed below the final superstructure; this solution has the advantages
that no suspension rods penetrate the superstructure and the deck area can be kept
free of obstructions. Major disadvantages are that the height of construction above
ground must exceed that of the formwork and that the piers normally require
adaptation to support and permit launching of the girder.
• The girder extends above and below the final superstructure.
Bridge construction starts with casting the piers, and small adjustments are made to allow
for the gantry system operation. Recesses in the pier are cast, and small-diameter
horizontal holes that allow the insertion of reinforcing steel bars are made in order
to support the brackets. Subsequently, the steel brackets are mounted, and six (6)
tensioning bars are inserted at each column. Afterward, the system is assembled on
the ground, and the formwork is supported on the moving gantry system. It is then
lifted into position on the brackets, and the bridge deck construction commences.
The bridge deck construction is done in stages. Construction joints are located at
points of zero bending moment, as illustrated in Fig. 3.
Subsequently, the bridge deck construction for a typical stage is done as described
below:
1. The construction of a typical stage starts by lowering formwork to free it from
the bottom slab and webs. Figure 4(1) illustrates the starting position of the
system.
2. The brackets are required to move forward to support the next span. Accordingly,
the main girders and formwork move forward until the girders’ ends pass the next
column in a manner that preserves system stability, as shown in Fig. 4(2).
3 Proposed Framework
Based on the Three-level Concept designed by Essawy and Nassar [4], the framework
for the Optimization of Flying Shuttering Bridges was designed.
The proposed framework comprised four (4) main modules, as shown in Fig. 5,
namely: Bridge Elemental Graph Data Model (EGDM) Retrieval, Bridge Elemental
Construction Method Graph (ECMG) Retrieval, Bridge Construction Sequence(s)
Retrieval and, finally, Bridge Simulation.
Essentially, the Bridge Elemental Graph Data Model (EGDM) Retrieval Module
is the first unit in the framework that parses the bridge and builds an internal data
structure of the bridge elements along with their data. It, then, transposes the bridge
elements to a graph model, Bridge Elemental Graph Data Model (EGDM). This
Fig. 5 The four main framework modules: (01) Bridge EGDM, (02) Bridge ECMG, (03) Bridge CS, and (04) Bridge Simulation
Table 1 Description of the activities and resources used in the simulation model
No. | Model entity | Type | Description | Resource
1 | Source | Source | The construction sequence (bridge elements) | –
2 | SOPierOrDeck | Select output | Classifies the bridge elements | –
Piers construction
3 | PFW | Service | Pier formwork | FrmwrkCrew
4 | PST | Service | Pier reinforcement | RebarCrew
5 | PCL | Service | Pier concreting | ConcreteCrew
6 | PCU | Service | Pier curing | –
7 | PDF | Service | Pier dismantling formwork | FrmwrkCrew
Deck construction
8 | *Hold | Hold | Hold the deck construction process | –
9 | SeizeCrane | Seize | Seize the resource “Crane” in the deck construction | –
10 | SAssembly | Service | System assembly | –
11 | SFW | Service | Supporting formwork | FrmwrkCrew
12 | LiftSystem | Service | Lift system on place | –
13 | AdjFW | Service | Formwork adjusted | FrmwrkCrew
14 | BFnWBsRF | Service | Deck BFnWBs reinforcement | RebarCrew
15 | BFnWBsInFW | Service | Deck BFnWBs place inner formwork | FrmwrkCrew
16 | BFnWBsCon | Service | Deck BFnWBs concreting | ConcreteCrew
17 | BFnWBsCu | Service | Deck BFnWBs curing | –
18 | BFnWBsDisFW | Service | Deck BFnWBs dismantling inner formwork | FrmwrkCrew
19 | TFInFW | Service | Deck TF place inner formwork | FrmwrkCrew
20 | TFRF | Service | Deck TF reinforcement | RebarCrew
21 | TFCon | Service | Deck TF concreting | ConcreteCrew
22 | TFCu | Service | Deck TF curing | –
23 | TFDisFW | Service | Deck TF dismantling inner formwork | FrmwrkCrew
24 | Prestressing | Service | Prestressing | StressingCrew
25 | GroutingCables | Service | Grouting cables | StressingCrew
26 | releaseCrane | Release | Release the crane | –
27 | AdvGirder | Service | Advancing girder | –
28 | LowGirderOnFW | Service | Lowering girder on formwork | –
presents the number and type of alternatives to be optimized. By simulating all bridge
construction sequences under all resource combinations, the Optimal Construction
Method along with its Construction Schedule and Resource Utilization is retrieved.
Finally, detailed and summary reports for the bridge under study are generated,
providing estimated construction duration in tabular and graphic formats.
Moreover, creating multiple iterations/replications for the simulation model with
different alternatives for the critical resources and simulating the construction process
would help in determining which resources had an effect on the construction time.
Critical resources that could affect the duration of construction are resources that
have minimum average waiting time in the resources queue.
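The paper's simulation engine is not reproduced here; as a rough stand-in, the sketch below uses the open-source SimPy library to show how replications with different crew counts expose queueing behind a candidate resource. The activity durations, crew sizes and pier count are hypothetical.

```python
import simpy  # open-source discrete-event library, used here only for illustration

# Hedged sketch of the replication idea described above (not the paper's own
# simulation model). A pier-construction cycle competes for a formwork crew; the
# average waiting time in the crew's queue is recorded so that different crew
# counts (replications) can be compared. All numbers are hypothetical.

def pier_cycle(env, crew, waits):
    arrive = env.now
    with crew.request() as req:
        yield req                          # queue for the formwork crew
        waits.append(env.now - arrive)     # waiting time in the resource queue
        yield env.timeout(3)               # pier formwork duration (days)

def replicate(n_crews, n_piers=10):
    env = simpy.Environment()
    crew = simpy.Resource(env, capacity=n_crews)
    waits = []
    for _ in range(n_piers):
        env.process(pier_cycle(env, crew, waits))
    env.run()
    return sum(waits) / len(waits)

for n in (1, 2, 3):                        # alternatives for the candidate resource
    print(n, "crew(s): mean wait =", replicate(n), "days")
```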
Flying Shuttering is considered one of the robust techniques for a bridge’s deck
construction because of its speed of construction. A framework was presented
to help clients and consultants to make sound decisions regarding the selection of
an optimized construction methodology in order to obtain better time and resource
utilization.
The research established a new approach to achieve a higher degree of design-
construction integration of individual bridge elements in order to achieve an opti-
mized coordinated construction plan. Subsequently, an intelligent framework, with
advanced computational tools and algorithms, was designed to generate possible
bridge construction sequences for Flying Shuttering Construction Method from a
semantically enhanced, 3D topological data model and the Bridge Elemental Graph
Data Model (EGDM). It employs graph search algorithms to obtain all possible bridge
construction sequences. Ultimately, the construction sequences are simulated and
evaluated, and, then, the Optimal Construction Method along with the Construction
Schedule & Resources Utilization is retrieved.
The current algorithm extracts geometric and topological information for Flying
Shuttering bridge elements. It is recommended to extend this work to cover other
bridge deck construction methods and, hence, allow for the optimal selection of the
most appropriate bridge deck construction method.
Graph Representation for Emergency
Egress Code Analysis
Abstract Code checking for emergency egress has been studied extensively, which has led to the continuous advancement and development of automatic code checkers. Yet, these checkers are only able to identify whether a building meets the code limitations and, in some cases, take a step further and identify problem areas (if any). This paper presents a novel approach for code checking and analysis using graph theory. It maps floor plans into a simple directed acyclic Floor Plan Elemental Graph Data Model (EGDM). The Floor Plan EGDM, a dual-graph representation, undergoes graph distance analyses in order to perform the first
level of code checking, where egress paths are identified and examined against IBC
emergency egress code limitations. The benefit of such a graph representation is
not limited to performing traditional code checking, but further extends to include
an innovative analysis of the floor plans to perform post-code checking. Post-code
checking utilizes graph measures and search techniques on buildings that meet the
code in order to identify areas for improvement and highlight critical areas where
problems can happen. It is, also, performed on non-compliant buildings in order to
highlight the problem areas and, hence, help the designer in solving these problems.
Y. A. S. Essawy (B)
Department of Structural Engineering, Ain Shams University (ASU), Cairo, Egypt
e-mail: yasmeen.sherif@eng.asu.edu.eg; y_essawy@aucegypt.edu
Y. A. S. Essawy · A. Abdullah
Department of Construction Engineering, The American University in Cairo (AUC), Cairo, Egypt
e-mail: abdelhamid.abdullah@m-eng.helwan.edu.eg; abdelhamid.abdullah@aucegypt.edu
A. Abdullah
Department of Architectural Engineering, Faculty of Engineering at Mataria, Helwan University,
Helwan, Egypt
K. Nassar
Industrial Partnerships and Extended Education, Department of Construction Engineering, The
American University in Cairo (AUC), Cairo, Egypt
e-mail: knassar@aucegypt.edu
1 Introduction
2 Emergency Egress
One of the most critical aspects of checking compliance against the building code is
emergency egress. In the schematic design phase, the engineer should check egress
pathways for travel distance, common path of egress travel, dead-end corridors, and
accessible routes and egress. In the design development phase, the engineer, then,
identifies locations of fire-resistive assemblies and openings based on the type of
construction, allowable area, and occupancies. Finally, in the construction documents
phase, the engineer integrates the egress details [9].
Solibri Model Checker (SMC) of Solibri Co., Finland, is the most widely known
software related to code checking. It assesses the building design and checks if it
meets the building code with the aid of specific checking rules for each sector of
the code [4, 11]. The results are exported in a report showing the architecture rule
set, including interference check of objects and space and space program review
[3, 8]. The final results from the egress checking rule are illustrated in Fig. 1. As
shown, the SMC interfaces can be divided into two parts: code checking and rule set
configurations.
Unfortunately, the current code compliance checking and the outcomes from all well-known model checkers amount to no more than a final report stating whether or not the building complies with the building code. Certainly, these results are very useful for the engineer and help in making more accurate and precise checks. However, these checkers cannot help the engineer discover areas of improvement in the design or the critical points where problems may happen. Accordingly, this paper aims to find a new way to help the engineer make decisions. It also proposes a new way to present the building elements and the code checking rules by introducing a framework that defines a new type of code checker.
3 Proposed Framework
This section presents the framework and the proposed steps for post-code checking.
It starts by summarizing the framework stages and then explaining each stage.
This framework depends mainly on converting the model to an elemental graph and,
then, presenting the final results of the emergency egress checking process on the
graph. The framework, as illustrated in Fig. 2, is divided into five stages. It starts
by converting the floor plan into a graph. Secondly, it creates and selects the right
elemental graph method in order to present all the model element data and the checker
results data. Thirdly, it imports floor plan data and adds it to the nodes and links.
Next, graph algorithms are employed in order to obtain the shortest path. Finally,
based on the results, the graph undergoes the code compliance checks where the post
building checking results are highlighted to give the engineer a new dimension of
design and code compliance checking.
Fig. 2 The proposed framework: (01) convert plan data into a graph; (02) create the elemental graph for the model; (03) select the elemental graph type; (04) apply the graph in the floor plan model; (05) represent the elemental graph and highlight the critical issues
Fig. 3 Primal (a) and dual (b) graphs and adjacency matrices [2]
Space syntax is largely a descriptive technique for visualizing spatial relations at the
level of connections between places, while spatial interaction is a predictive model
that forecasts how much travel there will be between places. A Primal Graph can be
used to represent the planar graph of the network along with the spatial interaction,
while the dual graph linking corridors in the planar graph is the graph which is used
in space syntax, as shown in Fig. 3 [2]. These graph representations can be used to
derive predictions and insights from the model.
The elemental graph for the floor plan under study is used to represent the emer-
gency egress plan. Accordingly, the type of graph selected should be able to present
all the spaces, doors, and egress paths. By comparing the two graph types, as illus-
trated in Fig. 4, the Primal Graph presents the spaces as nodes and the access paths
“doors” as links, but cannot present all the possible egress paths to the exit. Accord-
ingly, in case of emergency egress, it is not the best graph representation. However,
in this case, the Dual Graph can be used to present the emergency egress analysis
for the floor plan under study by presenting the spaces as links and the access doors
as nodes. Thus, all the egress information will be presented in one graph. The dual
graph is further enhanced by adding data tables as labels to each vertex and edge to
be able to handle wide ranges of queries. The data tables include all spaces, doors,
and egress information, as shown in Fig. 5.
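As an illustration of how such a dual graph and its data tables could be queried, the following minimal sketch uses the networkx library rather than the paper's own tooling; the door names, distances, and the travel-distance limit below are assumed values for illustration only.

```python
import networkx as nx

# Dual-graph sketch: access doors are nodes, the spaces/corridors linking them are edges.
# Edge weights are walking distances in metres (illustrative values).
G = nx.Graph()
G.add_edge("A6", "A5", space="Z6", length=12.0)
G.add_edge("A5", "A4", space="Z5", length=9.0)
G.add_edge("A4", "C1", space="corridor", length=15.0)
G.add_edge("C1", "E1", space="lobby", length=8.0)
G.add_edge("C1", "E2", space="lobby", length=10.0)

exits = ["E1", "E2"]
IBC_LIMIT = 45.0  # hypothetical allowable travel distance in metres

for door in G.nodes:
    if door in exits:
        continue
    # Shortest egress path from this door to its nearest exit
    dist, path = min(
        (nx.single_source_dijkstra(G, door, e, weight="length") for e in exits),
        key=lambda t: t[0],
    )
    status = "OK" if dist <= IBC_LIMIT else "NON-COMPLIANT"
    print(f"{door}: {dist:.1f} m via {path} -> {status}")
```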
This framework was generated to check the quality of the building design and its compliance with emergency egress needs. The code checking analysis is therefore initiated by converting a real floor plan case study, the “SUNY Institute of Technology Student Center” building floor plan, into a dual graph displaying all spaces, doors, and egress path information. Figure 6 shows the access doors “nodes”
and all possible egress paths “links”. This figure illustrates the number of doors and
highlights the common travel path. It, also, gives the engineer an egress graph map
Fig. 6 SUNY Institute of Technology floor plan emergency egress dual graph
for the floor plan. The results of this method of code checking are demonstrated
in Fig. 7. The figure shows colored graph links representing the travel path total
length. The red color clearly indicates the longest travel distance in the floor plan,
and the color range represents the variability in the travel distance for the floor plan
under study. Accordingly, the engineer can effortlessly check the building floor plan
compliance to the travel distance rules of the building code and assess the alternative
paths for each space.
Representing the floor plan with the elemental graph method adds a new dimension to design checking by visualizing the results and coloring the elemental graph according to the data table of each node or link. This coloring style enables the designer to upgrade the checking concept to a new level, which can be called post-code checking. The colored graph helps the designer represent the floor plan so as to highlight design issue areas, critical points, zone checks, space relations, and other helpful graph presentations that improve the building design and increase design efficiency.
The proposed framework focuses on emergency egress issues. Accordingly, post building checking of emergency egress helps the engineer define the critical points of the egress paths, which are highlighted by coloring the floor plan graph according to the number of links connected to each node in the direction of egress. The results are illustrated in Fig. 8. The color of each node is an indicator of the level of criticality of each door: the egress capacity of exits E1 and E2 is greater than that of exit E3, and access door A6 is the only access door for five spaces {Z6, 61, 62, 63, 64}, so it is a critical point of the design, and the designer should improve this area by adding a new door. Access door A5 raises the same concern.
The graph representation, as shown in Fig. 9, can also be used to illustrate the number of users per door, which highlights the critical exits. Thus, the engineer can review the compliance of the widths of exit doors E1 and E2 with the safety code requirements against the number of users in case of emergency egress. Additionally, areas of improvement exist at doors A6, A5, and A7, which serve as access doors for a large number of users; accordingly, the engineer should re-check the width and fire rating of these doors. Furthermore, the graph highlights C1, which becomes the only access door for all spaces in zone B if users take the alternative path. Thus, if fire cuts the main egress paths for zone B during egress, this area should be improved to cover all egress issues.
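The criticality reading described above can be approximated on a directed version of the dual graph whose edges point in the direction of egress; each door's criticality is then the number of upstream links, or users, funnelled through it. The sketch below is an illustrative interpretation with invented occupant counts, not the authors' C#/Revit implementation.

```python
import networkx as nx

# Directed dual graph: edges point in the direction of egress (space door -> collector door -> exit)
D = nx.DiGraph()
D.add_edges_from([("61", "A6"), ("62", "A6"), ("63", "A6"), ("64", "A6"), ("Z6", "A6"),
                  ("A6", "C1"), ("A5", "C1"), ("C1", "E2"), ("A7", "E1"), ("A1", "E1")])
users = {"61": 40, "62": 35, "63": 30, "64": 45, "Z6": 90,
         "A5": 60, "A7": 80, "A1": 50}  # occupants entering at each node (illustrative)

# Criticality 1: number of upstream links funnelled through each door
for door in ("A6", "C1", "E1", "E2"):
    print(f"{door}: {len(nx.ancestors(D, door))} upstream links")

# Criticality 2: users per door along the direction of egress
load = {n: users.get(n, 0) for n in D.nodes}
for n in nx.topological_sort(D):          # predecessors are always processed first
    for succ in D.successors(n):
        load[succ] += load[n]
print({d: load[d] for d in ("A6", "C1", "E1", "E2")})
```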
4 Computer Implementation
The application’s main development language is C#, based on the .NET Framework 4.0 platform, and it has been written using the Microsoft Visual Studio Integrated Development
Environment (MS VS IDE).
The application uses AutoDesk Revit SDK and AutoDesk Revit APIs to
communicate with AutoDesk Revit software.
Fig. 8 Graph representation of the floor plan colored by the total number of links for all previous nodes in the direction of egress, highlighting the critical points (legend: egress path, alternative path)
5 Conclusion
Automated code checkers are among the most rapidly growing software tools in the construction and design fields. Accordingly, this paper presents a new framework for a new type of code checking called post building checking. This checking system is based on representing the model elements and the default model checker results in a new way. The new presentation method relies on visual graphs and graph analysis techniques. It thus opens a new window for designers to see model checker software from a new angle, giving them the ability not only to check compliance with the code but also to analyze the checking results as a graph. By applying this concept to emergency egress issues, the graph highlights the critical points of the design, the areas of improvement, and more.
Accordingly, the paper first illustrates the framework of the proposed post building checking model, highlighting its main process and outcomes; it then defines the types of graphs and selects the dual graph as the suitable representation for emergency egress issues; secondly, it converts the default floor plan elements and the egress check results into an elemental graph; after that, it checks the compliance of the building against the life safety code, especially emergency egress; and finally, it exports and highlights the final deliverables of the dual graph representation for the designers, giving them the ability to upgrade the design by highlighting its critical issues and areas of improvement.
Fig. 9 Graph representation by sum number of users per door in the direction of egress
References
1. Abdullah A (2018) BIM model checker for the egyptian building code applications as a step
for BIM implementation in Egypt. Helwan University, Cairo, Egypt
2. Batty M (2017) Space syntax and spatial interaction: comparisons, integrations, applications
3. Choi J, Choi J, Cho G, Kim I (2012) Development of open BIM-based code checking modules
for the regulations of the fire and evacuation. Paper presented at the CIB W099 international
conference on “modelling and building health and safety”
4. Dimyadi J, Amor R (2013) Automated building code compliance checking—where is it at. In:
Proceedings of CIB WBC, pp 172–185
5. Ding L, Drogemuller R, Rosenman M, Marchant D, Gero J (2006) Automating code checking
for building designs—DesignCheck
6. Essawy YA, Nassar K (2017) Elemental graph data model: a semantic and topological
representation of building elements. Int J Civ Environ Eng 11(6):845–856
7. Essawy YA, Nassar Y (2018) BIM-based model for extracting the elemental graph data model
(EGDM). Royal Institute of Chartered Surveyors (RICS), London, UK
8. Eastman C, Lee J-m, Jeong Y-s, Lee J-k (2009) Automatic rule-based checking of building
designs. Autom Constr 18(8):1011–1033. https://doi.org/10.1016/j.autcon.2009.07.002
9. Geren R (2017) Applying the building code during design: a step-by-step process.
Retrieved from http://specsandcodes.typepad.com/the_code_corner/2011/08/applying-the-building-code-during-design.html
10. Getuli V, Ventura SM, Capone P, Ciribini ALC (2017) BIM-based code checking for
construction health and safety. Proc Eng 196:454–461. https://doi.org/10.1016/j.proeng.2017.
07.224
11. Jeong J, Lee G (2010) Requirements for automated code checking for fire resistance and egress
rule using BIM. ICCEMICCPM 2009:316–322
12. Nawari NO (2018) Building information modeling: automated code checking and compliance
processes. CRC Press
13. Nguyen T-H, Kim J-L (2011) Building code compliance checking using BIM technology. Paper
presented at the proceedings of the winter simulation conference
14. Tan X, Hammad A, Fazio P (2010) Automated code compliance checking for building envelope
design. J Comput Civ Eng 203
15. Vectorworksdeutsch (2013) 08 Arboleda building egress analysis with Solibri Model checker.
Retrieved from https://www.youtube.com/watch?v=M1uHRlNJKzs
Implementing Surrogate Modeling
Techniques for Designing Optimal
Building Envelops: A Case Study
Abstract Buildings are known to have significant environmental impacts. The life
cycle approach for the measurement of CO2 emission and the life cycle costs of
buildings are getting more important in the building design process. However, due
to the complexity of the design process and the computational time of simulations
and data processing, such methods are difficult to implement within optimization
processes. This paper aims to apply surrogate modeling techniques as a solution
to resolve the computational difficulties in the optimization process of building
envelopes. The paper will describe the methods applied and will evaluate several
aspects of the process, including the impact of the size of the training set on the
prediction accuracy as well as the impact of different energy system efficiencies
on the final optimum envelope design concerning seven objectives related to the
economic and environmental performance of the building. The results showed that
the size of the sample set has a significant effect on the prediction accuracy; however, a balance between increasing the precision and the computational time can be maintained by selecting an adequate number of samples. Moreover, it is found that to achieve the lowest total equivalent cost, corresponding to the highest economic and environmental performance of the building, the window-to-wall ratio and wall insulation thickness should take the minimum allowed and maximum permitted values of 0.15 and 0.2 m, respectively. The surrogate model was also shown to be capable of efficiently finding
the optimum results according to the other objectives, including both economic and
pure environmental aspects. Furthermore, the results provide some insights on how
the variation of energy systems’ efficiency might affect the optimum solutions in the
optimization process.
1 Introduction
Buildings are significant sources of energy consumption and have major envi-
ronmental impacts. It is estimated that 40% of energy consumption in Europe is
attributed to the building sector. This fact along with the need to reduce the environ-
mental impact makes the building sector an essential target of environmental energy
efficiency programs [1, 5].
The main focus of energy consumption reduction approaches and the improve-
ments of the environmental performance of the buildings has been on the operational
phase. Although various life cycle approaches are now becoming available to design
sustainable buildings, the most common methodology is life cycle assessment (LCA)
which aims to evaluate the environmental impact of a building within its whole life
cycle from the cradle to the grave. One of the most used and agreed-on environ-
mental impact categories in LCA is the global warming potential (GWP), a factor
established to allow comparisons of the global warming impacts of different assets.
GWP is essentially an indicator that measures the potential of a product or process
to emit CO2 equivalent to the environment for each functional unit of the asset under
evaluation [12].
One of the difficulties regarding the GWP calculation is the massive amount of
data and the extensive time required for the computational analysis [2]. Building design involves considerable complexity owing to the large amount of data, the many variables, and the intricacy of the design process and data analysis. Therefore, automated techniques such as machine learning can be adopted to facilitate the calculations. By
automating the analytical computations, the simulation process and analysis of the
building performance become faster while preserving its accuracy [3, 11].
The computational analysis can be accelerated by using a surrogate model, which is a function that approximates detailed simulation models. Surrogate models, or meta-models, are promising tools to provide building performance assessments, which are knowledge-based but much faster than simulation-based design analysis methods. The idea of surrogate modeling is to emulate an expensive high-fidelity model (for the purpose of this research, a building simulation model) using a statistical method. Only
a small set of simulation data (inputs and outputs) is needed to train the surrogate
model. However, the reliability of the synthesized data is based on the accuracy of
the simulation program itself, and the range of error provided by the surrogate model
is directly associated with the provided information for training the surrogate model
[6].
Therefore, it is recommended to employ a sufficient amount of data to train the
model. After validating the model to ensure that it can produce results that are suffi-
ciently close to the detailed existing simulation model, it can be used to almost
instantly predict outcomes of the high-fidelity simulation given an appropriate set
of building design information [11]. The surrogate model can be advantageous in
four stages of the building design process: (1) conceptual design stage, (2) sensitivity
analysis, (3) uncertainty analysis, and (4) optimization.
In this research, we concentrate on the first stage, i.e., conceptual design. In the
early design phase, designers need to constantly change their proposal and see how it
affects the overall design outcome. Surrogate modeling can provide design feedback
in much less time than simulation-based parametric analysis.
Surrogate model development involves the following steps: First, a problem
should be defined to target the whole approach of the process. Followed by that, a
design-based model should be implemented by the designer, and the samples should
be generated using different sampling strategies (either statistical or adaptive sampling). Each sampled combination of the variables is then simulated
to create a database with inputs and outputs so that in the next step, a surrogate
model could be fitted to this dataset. Finally, the model is validated by computing the
model’s precision. The validation is typically conducted by measuring the deviation
of surrogate predictions from simulation outcomes for the same set of inputs [11].
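A compact illustration of this sample-simulate-fit-validate loop is sketched below, using a scikit-learn Gaussian process as a generic surrogate and a hypothetical run_simulation() placeholder in place of the detailed building simulation; it is a sketch of the workflow, not the model used in this study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_absolute_error

def run_simulation(wwr, insulation):
    """Hypothetical placeholder for the expensive, detailed simulation."""
    return 900 - 400 * insulation + 350 * wwr  # toy energy demand, kWh

rng = np.random.default_rng(0)
# 1-2. Sample the design space (window-to-wall ratio, insulation thickness)
X_train = np.column_stack([rng.uniform(0.15, 0.9, 25), rng.uniform(0.05, 0.2, 25)])
y_train = np.array([run_simulation(*x) for x in X_train])

# 3-4. Fit the surrogate to the simulated samples
surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

# 5. Validate against simulation outputs for unseen inputs
X_test = np.column_stack([rng.uniform(0.15, 0.9, 10), rng.uniform(0.05, 0.2, 10)])
y_test = np.array([run_simulation(*x) for x in X_test])
print(f"MAE = {mean_absolute_error(y_test, surrogate.predict(X_test)):.1f} kWh")
```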
This paper examines the importance of window-to-wall ratio and wall insulation
thickness on the energy and environmental performance of buildings [4, 10]. The first
objective is to develop a surrogate energy model and to implement it for achieving
the optimum solution for designing a building envelope considering the exterior wall
insulation and the window-to-wall ratio of the building. In addition, the study aims
to define the adequate sample size for inputting into the surrogate model by testing
multiple statistical indices, including mean absolute error (MAE), mean squared
error (MSE), root mean square error (RMSE), coefficient of determination (R2 ), and
mean absolute relative error (MARE). The third goal is to find the optimum results
considering the economic and environmental aspects of the building, both regarding
the operational and embodied impacts.
2 Methodology
The thermal test zone considered in this paper is a box with dimensions of 3 m × 3 m × 5 m located in Milan, Italy. The building is analyzed assuming a life span of 30 years. A window is located on the south side of the building. It is a UPVC double-glazed window with a thermal transmittance (U) value of 1.3 W/m²K. The
opaque wall is a cement prefabricated wall with EPS insulation. A Python script is
used to evaluate the energy consumption by executing an Energy Plus simulation.
The variables defined in this case are the window-to-wall ratio (ranging from 0.15
to 0.9) and the thickness of the external wall insulation (ranging from 0.05 to 0.2 m).
The goal is to determine the optimum combination of the values for the variables
to reach the minimum value of the objectives, which is possible with the help of a
Python script to train the surrogate modeling by defining a sample set.
Seven objectives are defined based on three different categories. The first one is
the economic cost. In this category, the results are calculated based on the cost of the
materials only, the cost of the energy consumption during the operation phase of the
building, and a sum of both. The next category is environmental objectives. Here, the
target is to reach the minimum GWP of materials, operational energy, and the sum
of both embodied and operational impacts. The last category (seventh objective) is
the total equivalent cost, which combines the total cost and the total GWP that is
converted to the cost. It is established that one kilogram of CO2 is equal to 0.02 euros
[9]. Figure 1 presents a schematic of the analysis steps, showing the Python script,
as the computational engine, the design variables, and the optimization objectives.
As shown in Fig. 2, the analysis starts by defining the design variables. The para-
metric Energy Plus analysis is carried out using the BESOS library, which produces
the heating and cooling energy demand of the test zone for a set combination of
design variables as the sample set. The sample set of the design variables is produced
randomly using Python. After generating the energy demand of the sample set, the
electricity consumption of each design scenario is calculated by implementing the
HVAC system’s efficiency. In this case, electric heat pumps are considered to provide
heating and cooling energy, and the baseline efficiency of the heat pump units in
heating mode (COP) and cooling mode (EER) are considered equal to 3 and 2,
respectively. To calculate the energy consumption of the design scenario based on
energy system efficiency, a set of Python codes is written and linked to the BESOS
codes to run Energy Plus automatically.
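The efficiency step reduces to dividing the heating and cooling demands by the COP and EER; the hedged sketch below (plain Python with made-up demand values, prices, and emission factors, not the BESOS/Energy Plus code itself) also shows how the total equivalent cost objective combines cost and GWP at 0.02 euros per kg CO2-eq [9].

```python
# Illustrative post-processing of one simulated design scenario (all values are made up)
heating_demand_kwh = 1200.0   # annual heating demand from the simulation
cooling_demand_kwh = 800.0    # annual cooling demand from the simulation
COP, EER = 3.0, 2.0           # baseline heat pump efficiencies

electricity_kwh = heating_demand_kwh / COP + cooling_demand_kwh / EER

# Assumed economic and environmental conversion factors
ELEC_PRICE = 0.22        # EUR/kWh
ELEC_GWP = 0.35          # kg CO2-eq/kWh
material_cost = 450.0    # EUR, embodied (materials only)
material_gwp = 600.0     # kg CO2-eq, embodied
LIFESPAN_YEARS = 30
CO2_PRICE = 0.02         # EUR per kg CO2-eq [9]

operational_cost = electricity_kwh * ELEC_PRICE * LIFESPAN_YEARS
operational_gwp = electricity_kwh * ELEC_GWP * LIFESPAN_YEARS
total_cost = material_cost + operational_cost
total_gwp = material_gwp + operational_gwp
total_equivalent_cost = total_cost + total_gwp * CO2_PRICE

print(f"Electricity: {electricity_kwh:.0f} kWh/yr")
print(f"Total cost: {total_cost:.0f} EUR, total GWP: {total_gwp:.0f} kg CO2-eq")
print(f"Total equivalent cost: {total_equivalent_cost:.0f} EUR")
```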
The economic and environmental impacts of the electricity consumption and the
building materials are incorporated in the analysis using the economic data such
as the materials and energy prices and environmental life cycle assessment reports
such as those recommended in the Ecoinvent and Environmental Product Declaration
(EPD) to calculate the GWP of the materials [7, 8]. The seven objectives are separately
obtained for each design scenario using the Python codes developed for this analysis.
In the final step, the generated datasets are used to train the surrogate model,
and the model is run to predict the results of the indicator for the whole design
space, including all possible (allowed) combinations of design variables. The results
of the surrogate model are then used to find the optimum scenarios in which the
economic and environmental indicators are minimized. These correspond to the
lowest economic and environmental impacts.
Given the steps for the training and implementing the surrogate models, the anal-
ysis carried out in this research will first aim to determine the minimum size of
the training set (samples) using the error metrics such as MAE, MSE, RMSE, R2 ,
and mean absolute relative error (MARE). For this purpose, different training set
sizes, including 10, 15, 20, 25, and 30 samples, are analyzed. Followed by that, the
minimum training set size that attained an accuracy of over 99% (MARE < 1%) is
applied to predict the results separately for the seven objectives shown in Fig. 1. Then
the impact of different heat pump efficiency on the results is analyzed. Therefore, the
model will be implemented several times with different energy system efficiencies.
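For reference, the five error metrics used to size the training set can be computed as follows on a validation set of simulated versus surrogate-predicted values; the arrays below are placeholders.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, R2 and MARE for surrogate validation."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    mare = np.mean(np.abs(err) / np.abs(y_true)) * 100  # percent
    return dict(MAE=mae, MSE=mse, RMSE=rmse, R2=r2, MARE=mare)

# Placeholder validation data (simulation outputs vs surrogate predictions)
actual = [2100, 2400, 1950, 2800, 2600]
predicted = [2080, 2450, 1990, 2750, 2630]
print(error_metrics(actual, predicted))
```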
3 Results
Increasing the size of the training set will increase the computation time and is
expected to increase the model accuracy by providing a more comprehensive training
input to the model. The results showed that increasing the size of the training sample
set from 5 to 25 affects the accuracy significantly. For the cases with more than 25
samples, the accuracy of the result is, however, less sensitive to the size of the training
set, as shown in Fig. 3 and Table 1.
Figure 4 presents the actual (simulation-based) and predicted values of the surrogate model for different sizes of the training set, showing that the actual and predicted values become closer as the size of the training set increases.
Fig. 3 MARE (%) of the predicted results for different sizes of the training set
Table 1 Error metrics for the predicted results by different sizes of the training set
5 samples 10 samples 15 samples 20 samples 25 samples 30 samples
MARE 3.37 1.66 1.27 1 0.88 0.83
MAE 247 128 98 77 69 65
MSE 67,584 23,854 13,449 7684 7314 6703
RMSE 259 154 115 87 85 81
R2 0.963 0.989 0.994 0.996 0.996 0.996
Fig. 4 Actual versus predicted results by the surrogate model with different size of training set
Fig. 5 Predicted results by the surrogate model for GWP and the cost of materials and energy
consumption. Colors indicate the life cycle cost in euro (d, e, f) and intensity of life cycle kg CO2
eq emission (a, b, c) of the building considering different combination of design variables
Fig. 6 Predicted results for the total equivalent cost (euro). (Colors indicate the total equivalent
cost in euro of the building considering different combination of design variables.)
the total equivalent cost through decreasing the operational energy consumption.
However, the final optimum results will remain affected mainly by the operational
energy performance of the buildings.
Figure 7 illustrates the total cost of different cases with different COP and EER values of the heat pump units. As shown, in all four scenarios the final optimum solution based on the results predicted by the surrogate model is similar and is achieved at
the minimum allowed window-to-wall ratio and the maximum allowed wall insula-
tion thickness. However, increasing the efficiency of energy systems will decrease
the impact of operational energy consumption on the total cost and environmental
impact. Therefore, it is found that for the buildings with high energy efficiency,
the total impact of the building is being shifted toward the embodied impacts. For
this case study, it is shown that the highest wall insulation thickness provides better
performance in terms of total equivalent cost since it significantly decreases heating
energy consumption.
By increasing the efficiency of the energy system, the difference in the total
equivalent cost between the lowest and highest wall insulation thickness decreases
noticeably. The results also show that if the COP and EER of the heat pump units
improve from 2 and 1 to values equal to 5 and 4, respectively, the difference in the
total equivalent cost between the lowest and highest wall insulation thickness decreases
from 37 to 15%. This indicates that the benefits of installing thicker wall insulation
will decrease gradually by increasing the efficiency of the energy system.
Fig. 7 Predicted results for the cases with different heat pump efficiency. (Colors indicate the total
equivalent cost in euro of the building considering different combination of design variables.)
4 Conclusions
References
1. Amini Toosi H, Lavagna M, Leonforte F, Del Pero C, Aste N (2020) Life cycle sustain-
ability assessment in building energy retrofitting; a review. Sustain Cities Soc 60(November
2019):102248. https://doi.org/10.1016/j.scs.2020.102248
2. Amini Toosi H, Lavagna M, Leonforte F, Del Pero C, Aste N (2021) Implementing life cycle
sustainability assessment in building and energy retrofit design—an investigation into chal-
lenges and opportunities. Environ Footprints Eco-design Products Processes 103–136. https://
doi.org/10.1007/978-981-16-4562-4_6
3. Amini Toosi H, Lavagna M, Leonforte F, del Pero C, Aste N (2022) A novel LCSA-Machine
learning based optimization model for sustainable building design—a case study of energy
storage systems. Build Environ 209:108656. https://doi.org/10.1016/j.buildenv.2021.108656
4. Annibaldi V, Cucchiella F, De Berardinis P, Rotilio M, Stornelli V (2019) Environmental and
economic benefits of optimal insulation thickness: a life-cycle cost analysis. Renew Sustain
Energy Rev 116(October):109441. https://doi.org/10.1016/j.rser.2019.109441
5. Aste N, Caputo P, Buzzetti M, Fattore M (2016) Energy efficiency in buildings: what drives
the investments? The case of Lombardy Region. Sustain Cities Soc 20:27–37. https://doi.org/
10.1016/j.scs.2015.09.003
6. De Wilde P (2014) The gap between predicted and measured energy performance of buildings:
a framework for investigation. Autom Constr 41:40–49. https://doi.org/10.1016/j.autcon.2014.
02.009
7. Ecoinvent (2021) LCA database. https://ecoinvent.org/
8. EPD (2021) Environmental Product Declaration (EPD) database. https://www.environdec.com/home
9. Ristimäki M, Säynäjoki A, Heinonen J, Junnila S (2013) Combining life cycle costing and life
cycle assessment for an analysis of a new residential district energy system design. Energy
63(2013):168–179. https://doi.org/10.1016/j.energy.2013.10.030
10. Troup L, Phillips R, Eckelman MJ, Fannon D (2019) Effect of window-to-wall ratio on
measured energy consumption in US office buildings. Energy Build 203:109434. https://doi.
org/10.1016/j.enbuild.2019.109434
11. Westermann P, Evins R (2019) Surrogate modelling for sustainable building design—a review.
Energy Build 198:170–186. https://doi.org/10.1016/j.enbuild.2019.05.057
12. Zieger V, Lecompte T, Hellouin de Menibus A (2020) Impact of GHGs temporal dynamics on
the GWP assessment of building materials: a case study on bio-based and non-bio-based walls.
Build Environ 185(August):107210. https://doi.org/10.1016/j.buildenv.2020.107210
Methodological Analysis of KPIs
to Evaluate Contractor Performance
of the Construction Project
1 Introduction
The construction industry is increasingly rewarding for companies that can demon-
strate their performance in a holistic way and can show how they benchmark against
the rest of the industry. A construction performance measurement is an important task
that has to be conducted before, during, and after the project. KPIs are an accepted way
for companies to measure their own progress against overall industry performance.
Contractors can use the procedure either to identify low-performing activities that
need to be corrected or to identify high-performing areas that need to be redirected.
For adequate time and resource allocation, it is also essential to measure perfor-
mance in order to create an accurate budget and schedule forecasts. A contractor’s
performance measurement can assist the owner in assessing the contractor’s ability
to complete the project and provide information on the progress of the project during
construction. In addition to providing an evaluation mechanism for future projects
by assessing multiple metrics, performance measurements are useful for evaluating
various contractors for future projects. KPIs can be used to evaluate various contrac-
tors. In addition to the construction sector, KPIs are common in all areas of business.
In addition to quantitative information, they provide a picture of how well a company
accomplishes a business objective or how well a project is doing. KPIs can be used to
evaluate construction projects and contractor performance in areas such as produc-
tivity, safety, and quality. The question is which ones are most useful to industrial construction owners; specifically, which KPIs will enable those owners to evaluate contractor performance objectively is a major concern in today's construction industry.
Key performance indicators (KPIs) are among the variables that make up the success criteria for construction projects, which is why performance measurement on construction projects is usually carried out by establishing KPIs that offer objective criteria to measure project success. Projects for an owner might
differ greatly in scope and complexity from one to the next, and they can be located
in a variety of geographic areas. These various criteria demonstrate how difficult it is
to accurately compare and assess projects, and they will have a variety of effects on
the project’s cost, quality, and schedule. KPIs are used to normalize these factors in
order to establish a performance measurement framework that can allow projects and
contractors to be accurately evaluated against one another for potential performance
assessment.
This paper explores various types of KPIs (proposed and existing) that owners
are using to evaluate projects and contractor performance. It also aims to develop a
unique framework using new and existing key performance indicators to aid owners
and contractors in measuring the ability and efficiency of their previous projects in
order to identify trends that will lead to future project success. The insights of the
paper include tracking and interpreting an array of KPIs to evaluate contractors’
project performance such as long-term trending and short-term performance goals.
Having a better understanding of project performance will certainly help to optimize
the quality of future projects. This paper also details the existing KPIs used in the industry to evaluate project and contractor performance.
2 Literature Review
KPIs are one of the factors that constitute the project success criteria. An extensive
literature search was performed, and data from valid sources was collected, through
which the assessment of important and measurable KPIs was identified. Swan and
Kyng [18] view KPIs as the measure of a process that is critical to the success of
an organization and also for the project. According to a publication by Price Water-
house Coopers (PWC), KPIs are factors by reference to which the development, performance, or position of the business of the company can be measured effectively.
Construction KPIs paint a picture of the health of the industry as a whole. They
also provide a set of tools that can be used by companies across the sector to evaluate
their performance and raise their game against their peers, bringing lasting benefits
to the whole industry [15]. The construction industry KPIs were first published in
1999 and are updated annually, by the UK working group [18]. In 2016, the BC
Construction Association (BCCA) published the “Construction Innovation Project:
A Vision for BC”, which proposed a series of “ambitions” framed within five vision
statements to begin to lay out a path forward [5]. This paper is a first foray into
industry-level performance measurement for construction in Canada, and while it
has been motivated by an interest in BC, there is relevant activity emerging in other
provinces. Alberta, in particular, is starting to apply performance measurement expe-
riences within its oil and gas sector to buildings. Ultimately, industry performance
measurement should be undertaken at a large scale.
Performance measurement is the process of quantifying the efficiency and effec-
tiveness of actions. For a performance measurement system to be regarded as a useful
management process, it should act as a mechanism that enables an assessment to be
made, provides useful information, and detects problems, allowing judgment against
certain predetermined criteria to be performed [2]. Thoor and Ogunlana [19] together
with Humaidi and Said [9] suggested that KPIs are helpful to compare the actual and
estimated project performance in terms of effectiveness, efficiency, and quality of
workmanship and product. KPIs can be used to measure the performance of project
operations and are usually used in construction projects. Moreover, performance
measurement can be carried out by establishing KPIs that offer objective criteria to
measure project success. One need identified by Douglas [30] is for proper catego-
rization of KPIs so that they represent broader applicability and potential use. Studies
have developed and built lists of large numbers of indicators.
This study provides a list of KPIs that could be used for a holistic performance
evaluation of industrial projects. Furthermore, they are categorized in such a way
that project management professionals could choose a set of measurable KPIs most
relevant to their situation which is also easily measurable. Amaratunga et al. [1] and
Brackertz [3] indicate that performance relates not only to the functional quality
of the building but also to the contribution made by the building in achieving the
organization’s goals.
3 Methodology
The study was conducted in five phases. A strong emphasis was placed on creating
a solid base of understanding of existing research and technological advancements
in the area of measuring project performance. The first phase of the project includes
preliminary research on how to measure project performance and other KPIs that
are used in all stages of any construction project. This first scan was broad and
gave a working understanding of the research in this area. Based on the findings, the second phase focused the research on seven main performance categories: financial,
schedule, productivity, quality, social, environmental, and health and safety. The
following phase focused on developing an exhaustive list of KPIs in each of these
categories that were already in use across multiple industries. In the fourth phase, the
generated list of existing KPIs was reviewed and modified to ensure that the added
factors of the list were both measurable and applicable for post-project performance
evaluation and applicable to industrial projects. The final stage included industry
consultation from construction professionals which helped to further elucidate which
KPIs would be most relevant for comparing contractor performance on industrial
projects of varying sizes, scopes, and complexity.
Successful projects are typically measured on how well contractors are able to muster
project resources (namely, labor, materials, equipment, and sub-trades) to achieve
project goals and secure successful outcomes. Contractors are usually constrained
in three areas: scope, cost, and time [13]. When reviewing project or contractor
performance, it is important to complete the assessment based on the three major
constraints and the efficient use of the major resources involved in executing the
project. Furthermore, environmental, social, and health and safety factors must be
considered. Tragic developments and occurrences in our society illustrate the detriment of pursuing project performance in the areas of cost, scope, and time while neglecting environmental, social, and health and safety factors. Against that back-
drop, the study focused on the following seven areas: financial, schedule, productivity,
quality, social, environmental, and health and safety.
Time is one of the major constraints in any project. Scheduling is the activity taken
to develop a guideline (also known as a schedule) that identifies the tasks, activities,
milestones, and resources to be used in a project within a specific time frame. The
schedule also includes concurrent start dates and finish times for each item identi-
fied [13]. A well-planned and controlled schedule can help contractors and owners
achieve their stated project objectives. According to Iskandar et al., project perfor-
mance metrics on schedules can be measured using construction speed, delivery
speed, schedule growth, and a schedule factor. Weston and Gibson (1993) first defined schedule growth as the change in the schedule over the original schedule duration. Their research focused on comparing partnering and non-partnering projects, where partnering projects were those with greater levels of engagement between owners and contractors; they showed that partnering projects have less schedule growth. Pocock then used schedule growth to compare partnered, design-build, and combination projects against traditional design-build, with greater integration yielding better results [14].
expanded the schedule performance metrics to include construction speed, delivery
speed, schedule growth, and schedule intensity. They defined construction speed
as the area completed per day, with the duration in the denominator starting from
the time the notice to proceed is issued to when the project has been substantially
completed. Next delivery speed was defined as the area completed per day from the
start date of the project to occupancy. Finally, Konchar and Sanvido developed the
schedule intensity performance metric as a hybrid of the total unit cost and total
duration. The schedule intensity was defined as the final project cost over the total
area divided by the total duration of the project.
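Taken literally, these definitions reduce to a few simple ratios. The sketch below computes them with made-up project figures, using floor area as the unit of production (which, as discussed next, would be replaced by an industrial unit of output where appropriate).

```python
# Illustrative schedule KPI calculations (all figures are made up)
original_duration_days = 300
actual_duration_days = 345
area_m2 = 5000.0
construction_days = 260      # notice-to-proceed to substantial completion
delivery_days = 330          # project start to occupancy
final_cost = 12_000_000.0    # dollars

schedule_growth = (actual_duration_days - original_duration_days) / original_duration_days
construction_speed = area_m2 / construction_days                    # m2 completed per day
delivery_speed = area_m2 / delivery_days                            # m2 per day, start to occupancy
schedule_intensity = (final_cost / area_m2) / actual_duration_days  # $/m2 per day

print(f"Schedule growth:    {schedule_growth:.1%}")
print(f"Construction speed: {construction_speed:.1f} m2/day")
print(f"Delivery speed:     {delivery_speed:.1f} m2/day")
print(f"Schedule intensity: {schedule_intensity:.2f} $/m2/day")
```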
Industrial projects can vary quite significantly in size, scope, and major unit of
production. Unlike buildings and traditional infrastructure projects, where users have
a common unit of measure (i.e., area, square feet, cubic yards, etc.), industrial projects
typically have numerous and complex outputs that may not be readily compared
across projects. Slight modifications of the performance metrics presented above
can help yield comparable KPIs.
Productivity KPIs measure the effectiveness of a project and show the relationship
between the input and output, such as the average revenue per day/per hour or amount
of waste. In general, the KPIs related to productivity consist of three main kinds of
KPIs, including labor productivity, resource use efficiency, and profitability.
• Labor productivity: The first one focuses on the efficiency of workers. Labor
productivity measures output over labor cost.
• Resource use efficiency: Next, the effectiveness of resource usage is also an
important way to measure productivity. It calculates the amount of usage over
the resources invested in, or it would calculate the number of wasted resources.
• Profitability: Finally, profitability represents the conversion ratio from the resource
input to monetary output. This factor indicates the average revenue per day or per
hour.
It is noteworthy that “labor hours” is more commonly used to calculate the effi-
ciency of workers, but the outcome is a bit limited. It ignores the difference between
high-skilled workers, who are paid more, and low-skilled workers. The wage cost should also be considered so that the indicator reflects the relationship between input and output closer to the real situation. Existing research already covers these three aspects, but more indicators are needed to evaluate a contractor's performance on a project.
For example, the performance of subcontractors is also crucial.
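As a numerical illustration of the three productivity KPIs listed above, computed on a wage-cost basis as suggested, with invented inputs:

```python
# Illustrative productivity KPIs (all input figures are made up)
output_value = 2_500_000.0      # value of installed work, dollars
labor_cost = 800_000.0          # total wage cost, dollars (not just hours)
resources_purchased = 1_200.0   # e.g. tonnes of material bought
resources_used = 1_080.0        # tonnes actually incorporated in the work
revenue = 3_000_000.0
working_days = 180

labor_productivity = output_value / labor_cost            # output per wage dollar
resource_use_efficiency = resources_used / resources_purchased
waste_ratio = 1.0 - resource_use_efficiency
profitability_per_day = revenue / working_days            # average revenue per day

print(f"Labor productivity:      {labor_productivity:.2f} $ output per $ of wages")
print(f"Resource use efficiency: {resource_use_efficiency:.1%} (waste {waste_ratio:.1%})")
print(f"Average revenue per day: {profitability_per_day:,.0f} $")
```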
According to Song et al. [16], quality indicators are classified into effectiveness
measures and efficiency measures. Efficiency measures are designed to measure the
productivity of the quality management process, including technical quality and all
types of quality costs. Effectiveness measures are calculated using the data from the
inspections and tests, including work quality ratio, NCR records, and work sigma
level.
The efficiency of quality management is identified by measuring and analyzing
quality costs on-site. Ledbetter et al. developed the Quality Performance Management
System (QPMS) for estimating quality costs based on labor costs. They defined
QPMS as “the cost associated with quality management activities (prevention and
appraisal) plus the cost associated with deviations”. Deviations resulting in doing
things over, termed rework as well as unnecessary quality management costs, reduce
a project’s profitability. Deviation correction costs plus unnecessary prevention and
appraisal costs are sometimes termed “quality losses”, and their reduction results in an
increased quality performance. QPMS keeps track of three main endeavors: normal
work, quality management work (prevention and appraisal), and rework (deviation
correction) (Fig. 1).
Quality status measures the quality level with the application of sigma level. If a
process does not meet the customer’s specification, the number of defects it delivers
can be evaluated using a metric such as Defects-per-Million-Opportunities (DPMO).
DPMO is a ratio of the number of defects in 1 million opportunities when an item can
contain more than one defect. Once the DPMO has been determined, a Six Sigma
table is used to find the process sigma. A non-conformance report (NCR) is prepared
to identify works that fail to meet quality standards. By detailing the problem, the
report examines how it occurred and how to prevent it from happening again.
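A brief sketch of the DPMO-to-sigma conversion follows; rather than reading a Six Sigma table, it uses the inverse normal distribution with the conventional 1.5-sigma shift, which is an assumption since the text only refers to the table.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Process sigma from DPMO via the inverse normal CDF (1.5-sigma shift assumed)."""
    yield_fraction = 1.0 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

# Example: 38 non-conformances over 1,200 inspected items, 5 checks per item
d = dpmo(defects=38, units=1_200, opportunities_per_unit=5)
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")
```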
Gabcanova [6] identified traditional social indicators like productivity, health and
safety, quality of work life, performance time, and errors and built a framework for
optimizing the work system by considering the interactions between social, tech-
nical, and environmental variables. According to Hudson et al. [8], social metrics
closely matched the terms in the human resources dimension: employee relationships,
employee involvement, workforce, employee skills and learning, labor efficiency,
quality of work life, resource utilization, and productivity. Some of the traditional
social indicators for a project are as follows:
• Absenteeism rate: Measured by dividing the number of working days in which
an employee was absent by the total number of working days of that project.
• Employee satisfaction and engagement index: Measured through employee atti-
tude and engagement surveys. High employee engagement ensures lower turnover,
higher productivity, better customer service, and numerous positive outcomes.
• Turnover rate: Turnover is a critical KPI, as high turnover can lead to a very costly
project. Dissatisfaction plays a primary role in employee turnover.
• Employee innovation index: Measured through surveys to find the number of
innovations in a project during a period. Innovation is turning into a critical driver
of business success nowadays.
5 Improvements on KPIs
It was determined from the literature review that many of the KPIs established were
sum quantities of specific types of data, whereas others were ratios based on total
project cost. These KPIs are useful for evaluating one project at a time to understand
the result but cannot accurately compare against the performance of other projects.
Comparing project performance over time allows for data trending to provide helpful
information to make strategic decisions. This data analysis is the value output from
gathering all of this data and the efforts of KPI measurement.
Rarely are two projects ever the same in the industrial or construction industry.
They are continually engineered for a specific purpose, to fit in an existing facility,
or have a unique engineering design based on purpose and location. Projects are
constructed using different labor forces, i.e., union versus non-union, constructed
in different geographic regions which vary in available resources and are executed
at different times of the year through varying seasons. Another challenge is the
varying size and complexity of the projects and the variability in the magnitude of
the scope of the disciplines involved. For example, some projects might have signif-
icant earthworks scopes and are constructing a greenfield site. The project will still
have mechanical, structural, and electrical components, but this would vary signifi-
cantly from a strictly brownfield project with only limited civil work but significant
mechanical and structural scopes. This phenomenon also causes the projects to vary
wildly in cost; for example, how to compare the performance of a $5 million project
that lasts six months to a $50 million project that lasts two years?
However, owners still require the ability to compare one project to the next, regard-
less of their variability. It was determined that the existing KPIs required adjustment to
allow for this purpose. The KPIs need to be normalized to minimize the variability
of the projects they measured to allow for performance comparisons. Normalization
involved converting the KPIs to a unit of project value or labor hours. An assessment
was completed to determine what would give the most helpful information output to
allow for more accurate comparisons. The project focuses on industrial construction
projects with values from $5 M to $50 M; therefore, it is necessary to determine
the proper size of normalization to achieve an appropriate value for the KPI. Some
KPIs were normalized per million dollars of project value, while some were normal-
ized to every five million. KPIs normalized to worker hours were assigned a value
that allowed for reasonable KPI output values. Some KPIs were further modified to
allow for more effective comparisons. For instance, the financial performance index
KPI initially calls for taking the total dollars spent on the project construction and
dividing it by the total number of hours worked on the project, thereby providing
a construction dollars per hours worked metric. To compare projects from different
geographical areas and varying scopes, the KPI was modified by removing the cost
of permanent materials and the cost of mobilization, travel, LOA, and provincial-
specific tax. This adjustment allows for a fairer comparison between projects of the
actual cost per hour for construction. Additional KPIs were determined from previous
project experience of the research team, expert opinion, and industry practices and
are additional to those determined through the literature review.
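A small sketch of this normalization and adjustment logic is given below; the field names, block sizes, and project figures are assumed for illustration and are not taken from the study's dataset.

```python
# Illustrative KPI normalization (all project figures and field names are assumed)
project = {
    "value": 18_000_000.0,           # total project value, dollars
    "worker_hours": 240_000.0,
    "safety_incidents": 3,
    "construction_cost": 16_500_000.0,
    "permanent_materials": 6_200_000.0,
    "mobilization_travel_loa": 900_000.0,
    "provincial_tax": 400_000.0,
}

def per_million_dollars(count, project_value):
    """Normalize a raw count per $1 M of project value."""
    return count / (project_value / 1_000_000)

def per_worker_hours(count, hours, block=200_000):
    """Normalize a raw count per a fixed block of worker hours."""
    return count * block / hours

def financial_performance_index(p):
    """Construction dollars per hour worked, excluding permanent materials,
    mobilization/travel/LOA, and provincial-specific taxes."""
    adjusted = (p["construction_cost"] - p["permanent_materials"]
                - p["mobilization_travel_loa"] - p["provincial_tax"])
    return adjusted / p["worker_hours"]

print(per_million_dollars(project["safety_incidents"], project["value"]))
print(per_worker_hours(project["safety_incidents"], project["worker_hours"]))
print(f"{financial_performance_index(project):.2f} $/hr")
```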
The output from this exercise was a master list of industrial construction KPIs
organized by category. It included KPIs that could be utilized directly from the
literature review and those normalized, improved, and created by the research group.
The KPI summary list was further developed through interviews with industry experts
to determine the most helpful and valuable KPIs that should be utilized. The master
list was presented to industrial construction project managers with significant experi-
ence to determine which KPIs would most effectively highlight project performance.
This validation process further refined the KPI list into a more comprehensive report
that industrial owners could directly utilize for project comparison. The findings
through the normalization, KPI improvements, and expert interviews and validation
are shown in Table 3.
7 Conclusion
Table 3 (continued)

Categories          Refined KPIs                                    Short description (measurable)
                    Spill severity                                  Estimated volume spilled/10,000 worker hours
Health and safety   Average overtime hours per person               Planned overtime percentage/actual overtime percentage
                    TRIF                                            (Number of recordable incidents × 200,000 h)/total project hours
                    PTRIF                                           (Number of potential recordable incidents × 200,000 h)/total project hours
                    Health and safety prevention costs              Total health and safety prevention cost/$1 M actual costs
                    Percentage productive days (based on weather)   Planned productive days percentage/actual productive days percentage
Achieving a 100-Year Design Service Life
on Wastewater Projects
1 Introduction
The use of internal concrete protective lining systems for new infrastructure is prevalent around the world, and these systems can generally be placed into two categories: epoxy coatings or concrete protective liner (CPL), typically made with high-density polyethylene (HDPE). These coatings/linings are utilized to protect concrete from hydrogen sulfide gas and the destructive sulfuric acid that is subsequently produced at the gas-to-concrete interface.
Given the relatively high cost of 3–4 mm thick epoxy coatings compared to 2–3 mm thick HDPE CPL, along with the difficulties associated with epoxy's mid- to long-term adhesion to concrete in high-groundwater environments, only anchor/embedment CPL will be addressed. Epoxy is a more expensive polymer than HDPE and is also more expensive to apply to the concrete; troweled-on application introduces thickness inconsistencies. Sprayed epoxy is more cost-effective, but the 3–4 mm thickness necessary to avoid pinholes/holidays is also expensive and must be applied in a factory setting to maximize quality. Both epoxy and CPL require proper installation techniques for the system to be successful and maintain its integrity. A final consideration with epoxy is its relatively brittle nature compared to HDPE CPL. Movement/settlement of manhole barrels or trunkline/pipeline segments could result in cracks at joints, exposing the concrete to gas/fluids and/or water ingress. HDPE has elongation properties that allow for settlement without joint failures.
With any polymeric coating/liner, the correct application of the material is imper-
ative for the creation of a system that has containment integrity as well as a long
lifespan (the target of the developments discussed herein is 100 years). This paper
will now delve into the study of how to create a CPL system within either a precast
or cast-in-place structure. The study is a cradle-to-grave analysis that starts with the use of H2S-resistant HDPE, which has a 446-year half-life at 20 °C in non-UV-exposed applications [4, 5], and ends with the on-going surveillance and maintenance
of the system. All aspects of the system from start to finish are critical in achieving
a 100-year service life. The cradle-to-grave analysis steps are as follows:
1. HDPE CPL manufacturer’s embedment/anchor liner (CPL) design and quality.
2. Accessory availability and quality.
3. Precast design, production techniques, handling techniques, and quality.
4. Cast-in-place formwork design, cladding techniques, gapping, and the use of
accessories to minimize inferior welds.
5. In-plant and in-situ extrusion welding quantity.
6. In-plant and in-situ extrusion welding quality.
7. QA/QC techniques including the ability to follow ASTM holiday testing
protocols.
8. On-going surveillance of the system to ensure small operating problems are
resolved early via maintenance such as patching and welding holes, scratches,
or failed components.
To assure a long life, it is imperative that one uses a manufacturer with experience
and a solid track record. There are two such companies that manufacture quality
CPL globally: AGRU and Solmax (previously GSE). AGRU is third-party certified by the South Carolina Manufacturing Extension Partnership (SCMEP) and has been in
business since 1948. They pioneered the use of their Suregrip/ultra-grip V-anchor that
is made as one piece in one step and has over 30 years of in-service Suregrip instal-
lations. Solmax/GSE has been in business since 1981 and produces their Studliner
which is made as one piece in two steps with over 20 years of in-service Studliner
installations (Fig. 1).
Fig. 1 Solmax/GSE
Studliner on left and AGRU
Suregrip/ultra-grip on right
Neither of the aforementioned CPL sheets can be considered better than the other
except to make your own judgment on anchor design, anchor density, and/or sheet
width availability. Suregrip has 13–19 mm V-anchors at 420 anchors per square meter
[2], while Studliner has 8 mm head-type anchors at 1200 anchors per square meter
[6].
Suregrip is available in sheet widths up to 5 m (16.4 ft), and Solmax is available in sheet widths up to 2.44 m (8 ft). The wider width is important when producing precast pipe as it minimizes longitudinal welds on 3 m (10 ft) and greater lengths of pipe. As an example, 3 m (10 ft) Suregrip requires only one longitudinal weld on 3 m (10 ft) long precast jacking pipe, whereas the 8 ft wide sheet requires two longitudinal welds on 1350 mm diameter pipe and three welds on 1800 mm diameter pipe. The goal of reducing potential failure modes by minimizing welds will be discussed at length in the quality section (Fig. 2).
A signal layer option is available with AGRU's Suregrip material that allows for easier visual detection and inspection of construction and in-service mechanical damage that must be repaired. It is created by coextruding multiple layers of the 3 mm CPL, forming a 1 mm thick "signal" layer on the inside surface of the structure. While not a necessity, it adds value when targeting a 100-year design service life.
Fig. 2 10 ft wide CPL on left allowing one longitudinal weld on 10 ft jacking pipe in the invert. The 8 ft wide CPL on right requiring 2–3 longitudinal welds on 10 ft jacking pipe (3 welds are shown in the picture of 1800 mm jacking pipe)
To assure excellent fit and finish as well as the minimization of inside and outside
corner welds, accessories are key. Accessories will vary between the manufacturers
and installers and may include:
• Conductive tear-off profiles
• End-profiles for terminations away from another intersecting CPL panel
• Fabric-backed material for penetrations
• Thermoformed inside/outside corners made from smooth or anchor sheet.
Corner pieces are generally bent by hand or by heating with a torch, both of which are detrimental to the lifespan of the polymer/sheet when targeting a 100-year design service life. Corners should only be produced under controlled temperature/time conditions such as extrusion or thermoforming. Engineered Containment has developed an accessory line (included in the list above) to supplement the manufacturers' lines and aid in achieving a 100-year design service life (Figs. 3 and 4).
The use of these accessories significantly improves welding and system quality
by:
1. Keeping CPL panels aligned during concrete pours (tear-off profiles)
2. Allowing proper gaps between panels, so wrinkles do not occur (tear-off profiles)
3. Allowing proper ASTM protocol spark testing (tear-off profiles)
4. Minimizing nail/staple holes, so a single weld bead can be utilized (tear-off
profiles with Engineered Containment floating panel procedures)
The precast stage of CPL lined trunk line and manhole systems is key. The quality of each precast piece (i.e., wet cast jacking pipe, wet or dry cast open cut pipe, manhole barrels, cones, etc.) is paramount, as poor fit and finish from a precasting standpoint and/or poor in-plant finishing welding cascades into unnecessary and serious issues for the in-field welding phase and the overall quality of the project.
Quality starts with the CPL panel or tube being utilized for the precast piece. Panels are either wrapped around molds or tubes are inserted on molds, and both techniques are problematic if due care and attention are not taken in the production of the panels or tube. The production accuracy for these pieces should be aligned with finishing carpentry (i.e., ± 1/8 in.), not general formwork accuracy. Unfortunately, many precasters do not have the carpentry tools/systems or the technicians to achieve this. If the production accuracy is not ± 1/8 in., problems arise in panel sizing for wrapping molds and major problems arise in the case of CPL tube fabrication. An example of this need for precision is detailed below.
During panel production/wrapping, two cuts are required to produce a 4 ft (1.22 m) manhole barrel if using 8 ft wide 2 or 3 mm CPL. The 8 ft length/circumference cut can be a little less accurate given the overlap used for wrapping the mold, but the center cut must be precise so that two 4 ft (1.22 m) pieces are produced. (For a 1200 mm manhole, this cut is 12.66 ft (3858 mm) long.) From a practical standpoint, this is an extremely difficult cut to complete without a proper cutting system that can hold that line. Without a panel that is 4 ft (1.22 m) end-to-end, a manhole barrel with a single weld bead cannot be achieved. Instead, capstrip with two welds must be utilized, significantly increasing the cost and decreasing the containment quality of the barrel because the gap will exceed the maximum allowable 10 mm at the barrel-to-barrel joint.
In addition to panel sizing issues, panel wrapping and lacing techniques by precaster technicians must be administered with precision to prevent/reduce the risk of liner failure either during the initial QA/QC testing or during the future service life of the system. Robust, purpose-made lacing is critical to ensure the overlapped panel does not move up the mold during vibration, leak past the overlap, and/or wrinkle around the edge. If this stage is poorly administered, the result is a poor-quality precast component that requires capstrip. An overlap of 1 in. is all that should be necessary if the lacing is done correctly.
In the case of pre-made CPL tubes, dimensional stability is a serious consideration that needs to be addressed. Fixed-diameter butt-welded tubes should not be adopted because the difference between the tube fabrication temperature and the mold installation temperature creates thermal expansion and contraction; the tube is more likely to be too small or too large at the time of mold installation than to fit properly. Additionally, the use of tapered dies and multiple longitudinal butt-welds in tubes constructed from a traditional 8 ft wide sheet increases the likelihood of sizing problems. Fixed-diameter butt-welded tubes therefore create major problems at the precaster's facility with either mold fitment or wrinkles, as shown in Fig. 5.
Fig. 5 Wrinkles and panel lift-off from inadequate lacing and/or panel/tube sizing
Formwork comes in many shapes and sizes, with wood as the preferred material because it is the easiest to work with. Steel formwork requires specialized fastening techniques so that the cladded CPL sheet can expand and contract. Expansion is the typical problem, as it causes wrinkles that may or may not be acceptable from a structural or process-flow perspective.
Aside from fastening design challenges with steel formwork, this section focuses on accessory use during the cladding process. Accessories such as tear-off and end-profiles perform multiple functions and are far superior to using tape, nails, and/or staples to hold the CPL to the formwork and keep it aligned. They also seal the CPL during the pour to minimize concrete leakage and after the pour to eliminate liquid/gas transmission. The use of accessories also minimizes field welding time and increases quality by minimizing edge wrinkling and allowing for proper gapping, so only one extrusion weld bead is needed. Failure to properly align/gap the CPL requires the use of capstrip with two weld beads, and the capstrip remains unembedded. Capstrip is either 4 or 6 in. wide and is used as a bridge across large gaps and/or misaligned sheets of CPL. See Fig. 6 for an example of wrinkling due to improper gapping; it was repaired with a manufacturer-approved process of filling each depression with weld bead and then power-planing the surface to a smooth finish.
The ability to create a corner/bend on a sheet for smaller cast-in-place formwork is the best solution, as there is still only one weld, but one that is flat rather than an inferior inside/outside corner weld. Alternatively, the best approach to ensure welding occurs on a flat surface is a corner accessory piece; it creates two welds, but these are superior to an inside/outside corner weld [1, 7].
Lastly, patches are necessary for tie-rod holes and should be circular and no smaller than 6 in. in diameter, as an extrusion gun cannot effectively and continuously weld around a 90-degree corner on a square patch nor around too small a radius.
Fig. 6 Anti-friction application that was not gapped and/or attached correctly. About $250,000
repair on right
Fig. 7 Vault (left) with wall gaps too large so more capstrip and welding than necessary. Manhole
(right) with single pass weld joints and properly shaped penetration details
Extrusion welding is considered the most difficult type of weld to produce properly because the rate and consistency of the welding speed are determined entirely by the welding technician. The preheat air temperature for the substrate material and the welding rod exit temperature are variable and controlled. Other factors that affect weld quality are ambient conditions such as wind speed, air temperature, and direct/indirect sunlight [8].
The first step in preparing for successful extrusion welding is to "qualify" the welder (person and machine combined). The process involves taking strips of the material and creating a mock-up of the assembly. For example, for overlap welding, as is the case with the use of capstrip, an overlapped material sample is set up and stapled to a piece of wood near where the welding will be completed, to mimic the type of welding to be performed and the environment it will be performed in. The machine's temperatures are set to where the technician believes he/she will achieve a successful weld. The technician then runs a "qualification" weld and, after allowing 15–20 min for material cool-down, cuts 1 in. coupons with a bone cutter. These coupons are then placed in a calibrated tensiometer and tested in both shear and peel (peel only for overlap welds). If the tests meet or exceed the specification, the technician can proceed with the work until the ambient or work conditions change enough to require another qualification weld.
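The pass/fail logic behind a qualification weld can be sketched as follows in Python; the coupon width, strength thresholds, and function names here are placeholder assumptions, since the governing values come from the project specification and the CPL manufacturer rather than from this paper.

def coupon_strength_n_per_mm(break_force_n, coupon_width_mm):
    """Weld strength expressed as break force per unit width of the coupon."""
    return break_force_n / coupon_width_mm

def qualification_passes(shear_forces_n, peel_forces_n, coupon_width_mm=25.0,
                         shear_spec_n_per_mm=15.0, peel_spec_n_per_mm=10.0,
                         overlap_weld=True):
    """Return True if every coupon meets or exceeds the (assumed) specification.
    Peel results are only checked for overlap welds, mirroring the procedure above."""
    shear_ok = all(coupon_strength_n_per_mm(f, coupon_width_mm) >= shear_spec_n_per_mm
                   for f in shear_forces_n)
    if not overlap_weld:
        return shear_ok
    peel_ok = all(coupon_strength_n_per_mm(f, coupon_width_mm) >= peel_spec_n_per_mm
                  for f in peel_forces_n)
    return shear_ok and peel_ok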
The skill and experience of the welder are important and a key factor in ensuring a 100-year design service life. The technician must have a deep understanding of why the procedures exist. They must have the experience to deal with unusual situations such as stops/starts, intersecting welds, and penetrations in the structure. It should be noted that it is preferable for the same technician who clads the formwork to be the one who welds, as both skillsets are key.
Welding technician certification by the CPL manufacturers is a bare-minimum requirement and insufficient when installing a system with a 100-year design service life, as a person could conceivably pass all manufacturer testing without an in-person proctor and without knowing how to test their own welds on a tensiometer. The appropriate technician to utilize for CPL welding is one who is first certified by the International Association of Geosynthetics Installers (IAGI), the governing body for all Certified Welding Technicians (CWTs). The CPL manufacturer's certification is then the last step, as it inherently includes certification of the installation company that the technician works for, whose procedures/policies are a key element in achieving high-quality workmanship.
Returning to the discussion of inside/outside corner welds being suspect, it is very difficult to qualify those welds for two reasons. The first is that a corner mock-up structure would need to be built, which is not easy to do. The second is that if the technician built a corner mock-up, he/she would not be able to properly test the sample on a tensiometer. With respect to butt joints, where one bead is applied across two CPL panels, only a shear test is necessary, as a peel test applies only to overlap welds. For trunklines, an automatic welding unit is preferred for weld consistency and long-term worker safety. There is only one known automatic welder, and its design continues to evolve with the upcoming addition of key data logging, as is done with the auto-welders used in the steel pipeline industry (Figs. 8 and 9).
Fig. 8 Self-performed precaster in-plant welding on left and third-party in-plant welding on right. Neither piece passed QA/QC protocols
8 QA/QC Techniques
QA/QC techniques are an area of great interest to both owners and their engineering firms. As discussed in the section above, the quality of a job starts and ends with an abundance of accessories being applied with the CPL sheet by skilled, experienced, and certified technicians.
The QA/QC for a project starts with written/logged qualification welds at least once per day and ends with written/logged holiday/spark and/or vacuum tests. The use of proper ASTM protocols is key. Installers and general contractors have propagated the use of the concrete/rebar as the substrate conductor for spark testing welds, or the use of low-vacuum "geomembrane" pond vacuum-box testing methods for vacuum testing. Neither is in line with ASTM protocols, and both are appropriate only for simpler installations that do not target a 100-year design service life.
Fig. 9 1800 mm microtunnel with automatic welding of capstrip on right. Capstrip is unavoidable
in precast pipe due to gaps too large to create a single bead
As with any manufactured item that carries a long-term warranty, on-going inspection is necessary, with no exceptions. A complex system such as an automobile requires regular inspection to resolve small problems, for example fixing an oil leak with a retorque of the oil pan or a new gasket rather than waiting for a major leak that could damage the engine, a very costly repair. CPL systems are no different: a small breach or weld failure can be repaired very inexpensively, versus waiting for the breach to grow larger and the leak to cause serious structural damage to the concrete.
While vessels and trunklines are difficult to access, they are still shut down for
maintenance from time to time. If the techniques described throughout this paper are used, the interval between inspections of CPL lined systems can be two years for systems with potential for mechanical damage (e.g., from ice, sand, or grit) or five years when mechanical damage is unlikely. Inspection can be via CCTV
or in-person if appropriate safety precautions can be taken. The most likely damage
to watch for is mechanical damage within the invert area where objects such as pipe
maintenance equipment like buggies, hoses, and/or tools can deeply scratch the liner.
Chunks of ice can cause damage as can the on-going erosion from road sand and grit
in combined systems. When these types of problems are identified, not only can the
area be repaired, it should be reinforced with a thicker layer of material that can act
as a sacrificial impingement plate/barrier.
10 Conclusion
A CPL lined system can be designed for a 100-year service life, as HDPE geomembrane and CPL have a 446-year half-life at 20 °C in non-UV-exposed applications [4] as well as high resistance to H2S attack (Roward 2016). Ensuring that the design intent and reality merge relies heavily on vetting every step of the process during construction and during service. No material on its own can claim to meet the service life goals of stakeholders; it is the system outlined in this paper that will facilitate long-term infrastructure protection lasting for generations. Selecting the correct material, ensuring that it is handled by competent and capable parties including precasters, field installers, and welders, adhering to strict QA/QC protocols, and continuing with regular inspection and maintenance are all critical in achieving this lofty goal. If these considerations are implemented, the projects of today should still be serving our great, great, great grandchildren in the future.
References
Marilyn Fanjoy, Ken Johnson, Marc Lafleur, Eric Bell, and Simon Doiron
Abstract Water for the City of Iqaluit, the capital city of the Canadian territory of
Nunavut, is supplied from a nearby lake which feeds by gravity to a water treatment
plant. Treated water is stored in a two-cell reservoir prior to distribution into seven
independent water districts through an insulated and buried piping network. The
water districts are unique because, in addition to insulated piping, each district has
an independent recirculating water supply and water reheat system to provide freeze
protection. This makes the hydraulic configurations of each district more compli-
cated than a water system in warmer climates. Two of the districts are pressurized
with independent booster pumphouses, and the remaining five districts are supplied
by gravity from the reservoir. The City is advancing a project to develop a water
model of its water distribution network which will provide a detailed understanding
of the current system operations and provide a tool for system troubleshooting as well
as planning for system upgrades and expansions. Model results will also be utilized
for thermal analysis of the water districts. This will provide opportunities for opti-
mization of the heat addition to the water supply to be explored which may provide
some significant cost savings on the energy used to reheat the recirculating water.
The first phase of the project has been completed with the objective of developing a
working water model using all the system data based upon records of construction
and operational data available for each of the seven water districts. This stage of the
work has identified system deficiencies for maintenance and repairs. Phase 2 of the
project will complete hydrant flow tests throughout the distribution system required
to calibrate the water model.
K. Johnson (B)
EXP Services Inc, Edmonton, AB, Canada
e-mail: ken.johnson@exp.com
M. Fanjoy · E. Bell
EXP Services Inc, Fredericton, NB, Canada
M. Lafleur
EXP Services Inc, Ottawa, ON, Canada
S. Doiron
City of Iqaluit, Nunavut, Canada
1 Introduction
The City of Iqaluit engaged EXP Services Inc. (EXP) to develop a water model of
the City’s water distribution network to gain an understanding of the current system
operations and provide a tool for system troubleshooting, system expansion planning,
system upgrade planning, as well as for the design of expansions and upgrades. The
water model will ultimately become an essential tool for a wide variety of planning
and operational water distribution improvements.
Water for the City is supplied from Lake Geraldine to the Water Treatment Plant
and stored in a nearby reservoir prior to being distributed into seven service loops or
districts. Two districts are pressurized via separate booster stations. The remaining
five districts are supplied by gravity from the reservoir. The reservoir consists of two
equal-size tanks with a combined capacity of 4.1 million liters. An overview of the
water distribution system is provided in Fig. 1.
The objective of the project was to collect all the system data necessary to develop
a fully calibrated hydraulic and thermal model of Iqaluit’s water system. At this point
in the project, a working model has been developed and the field-testing program is
ongoing. Results from field tests along with current system operations details will
be utilized for model calibration in the next phase of the project. The calibrated
model will then be utilized for the identification and prioritization of recommended
improvements to the Iqaluit water system as well as plan for system upgrades and
future expansion potential. Other potential uses for the water model include determi-
nation of system wide operating pressures and flowrates, calculation of available fire
flows, trouble shooting water system issues, water age analysis, flushing program
development, and contaminant tracing.
2 Methodology
The first task undertaken was to gather and review all background information
provided by the City including reports, guidelines, codes, regulations, maps, draw-
ings, planning documents, flow tests, and available measured data. Of particular
interest in the background information was the data being recorded by the City
SCADA system, which provides real-time information on the water system opera-
tion. A gap analysis was then carried out to identify missing information required
for development of the water model and communicated to all parties involved in the
project.
The water model was created in the AutoCAD version of Bentley WaterGEMS
software using drawings of the water distribution system provided by the City for
the physical layout. Additional physical attributes such as pipe material, diameter,
valve status, pump curves, and operational parameters were then included in the
model from additional background information and record drawings.
The pipe lengths throughout the water model were determined by scale from the
2019 Overall Utility Atlas drawing except for buildings such as the booster stations
and reheat buildings. Record drawings were used at building locations to deter-
mine the piping layout, length, diameter, material, and installation date. Physical
attributes such as pipe diameter, material, and installation date were acquired using
the Iqaluit Water Distribution Network schematics for each district This information
also provided the locations of valves as well as the intended operational status (i.e.,
open or closed).
Elevations in the model throughout the distribution system were determined using
a combination of extracted elevations from the contour mapping in the model as well
as record drawings for water and sewer construction.
Hazen Williams coefficients for calculating head loss through each pipe section
were initially assigned according to standard published values for various pipe mate-
rials from the Material Library provided in the software. These values represent new
pipe conditions which reduce over time from the date of installation due to potential
corrosion, tuberculation, biofilm, and scaling inside the pipe. Initial Hazen Williams
head loss coefficients for the various pipe materials in the water model based on the
Bentley WaterGEMS software defaults for brand new pipe are summarized in Table
1.
The Hazen Williams head loss coefficient values will be adjusted during calibra-
tion of the model to account for variations in nominal versus actual inside pipe diam-
eter, fittings, and pipe age based on comparisons of model predicted to collected field
data. The unique service connections required in Iqaluit to provide freezing protec-
tion (i.e., two connections to the main and a recirculation pump) will increase the
head loss throughout the system more than for similar systems in warmer climates.
Therefore, lower Hazen Williams coefficients are expected for pipe materials in the
Iqaluit system than for published estimates for pipes with similar age in southern
geographic locations.
In addition, nominal pipe diameters are used throughout the water model as
opposed to the actual inside diameter and minor losses attributed to fittings and
valves are not explicitly defined. Common water modeling practice allows for adjust-
ments to the Hazen Williams coefficients to account for the head loss attributed to
these parameters as it dramatically reduces the level of effort required to define them
throughout the entire hydraulic model.
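For context, the sensitivity of a single pipe's friction loss to the C value that calibration will adjust can be shown with the standard SI form of the Hazen Williams relationship; the Python sketch below uses illustrative pipe data, not values from the Iqaluit model.

def hazen_williams_head_loss_m(flow_m3_s, length_m, diameter_m, c_value):
    """Friction head loss (m) for one pipe, SI form of the Hazen Williams equation."""
    return 10.67 * length_m * flow_m3_s ** 1.852 / (c_value ** 1.852 * diameter_m ** 4.87)

# Example: 100 m of 150 mm pipe carrying 10 L/s. Lowering C from 150 (new pipe)
# to 120 increases the predicted friction loss by roughly 50%.
for c in (150, 120):
    print(c, round(hazen_williams_head_loss_m(0.010, 100.0, 0.150, c), 3))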
Operational parameters for the system such as pump curves, operating tank levels,
and operating setpoints were acquired through a variety of sources as follows:
1. 2004 Process Operation and Maintenance Manual for the Iqaluit Water Treatment
Plant with 2018 Updates.
2. Record drawings for Boosters Stations No. 1 and 2, water treatment plant and
reservoir.
3. Shop drawings for Booster Stations No. 1 and 2 pumps and control valves.
4. Site inspections from previous work in Iqaluit.
5. Discussions with system operators.
6. SCADA data collection obtained in cooperation with the City’s SCADA
consultant.
The allocation of user demands throughout the distribution system was based on the
2020 City of Iqaluit’s Estimated Sanitary Flow data spreadsheet prepared by EXP.
The information from this flow data is considered as a conservative assumption for
domestic demand rates. The sanitary sheet also details the location throughout the
network in which demands are imposed on the system. An overall assumption is that
the average water supply throughout the City is approximately equal to the sanitary
flow collected. This approximation was used as metered data for users are not readily
available for a consistent time period. The sanitary flow also includes fixed flows at
constant bleed locations in the system which encourage flow to prevent freezing
of the watermain or sewer system. Bleed locations were provided by the City and
included within the system demands. The total estimated daily average flows based
on the sanitary data are 5.25 ML/day (1,916,250 m³/year).
Water demands in the model are based on average user consumption. In reality,
water usage varies over time and generally follows a repeating 24-h cycle referred
to as a diurnal demand pattern. Each municipality often has its own unique pattern
based on time of day, climate, user types, and usage patterns. The diurnal pattern
is useful in the water model for identifying peak times of usage and fluctuations in
flows and pressures throughout the system over time.
SCADA data of the flows being discharged from the reservoir on June 11, 2021,
were used to develop the diurnal pattern for the City of Iqaluit. The date was selected
as a recent typical weekday without any large non-typical user demands such as flow
testing or known system breaks. The discharge flow recorded in the SCADA system
at 2-min intervals was compared to the average discharge flow for that day to develop
multipliers for the pattern. These usage trends were applied to all the user demands
in the water model except for known fixed demands at bleed locations. The total
measured average flow was 3.54 ML/day (extrapolated to 1,292,100 m³/year) on
June 11, 2021.
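A minimal Python sketch of that multiplier calculation is given below; it assumes the 2-min SCADA discharge readings are already loaded into a list, and the sample values are invented rather than taken from the June 11, 2021 record.

def diurnal_multipliers(flows_l_per_s):
    """Convert one day of 2-min reservoir discharge readings into demand multipliers,
    each reading divided by the day's average discharge flow."""
    average = sum(flows_l_per_s) / len(flows_l_per_s)
    return [flow / average for flow in flows_l_per_s]

# Hypothetical readings (L/s) spanning part of a day:
readings = [22.0, 18.5, 30.1, 41.0, 55.2, 47.8]
print([round(m, 2) for m in diurnal_multipliers(readings)])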
Historical data for raw water consumption for the City of Iqaluit indicate that the water usage measured on June 11, 2021, is likely more representative than the assumed values from the 2020 sanitary flow data, which were expected to be conservative. Therefore, the user demands in the water model developed from sewer flows were adjusted by a factor of 67.0% (1,292,100 m³/year ÷ 1,916,250 m³/year) for 2021.
The City provided a list of structures within the network which are imposing
a water bleed to prevent system freezing. Water recirculation and heating are the
preferred means of freeze protection; however, certain systems’ configurations, such
as piping dead ends, require water bleeds to maintain constant flow through the
watermain. Operational bleeds are usually coupled with a flowmeter to determine
rates and quantities. The collection of operational flow rates is still in progress, and
for model advancement, a flow rate of 0.15 L/s has been assumed at each bleed
location. Flowrates will be updated in the water model as they become available.
Considerations for the impact of COVID-19 restrictions are necessary to deter-
mine the City’s true average day usage and trending for 2020 and 2021. For example,
the effect of travel restrictions imposed in Nunavut has likely changed the City’s
current water usage patterns. These trends shall be revisited later once a more typical
average day condition is imposed.
A field execution plan was developed to gather system data for calibration of the water
model via hydrant flow testing in all seven districts of the system. The flow testing
plan was adjusted in consultation with the City of Iqaluit staff as conducting fire flow
testing at the desired locations was deemed unachievable due to the identification of
broken or frozen hydrants which required repair. The total number of hydrant flow
tests recommended for the field calibration plan was 16.
Operational data retrieved through the SCADA system were also extracted for
analysis. Instrument readings from within the two booster stations, which record pres-
sure, temperature, and flow, are essential for calibration as well as troubleshooting.
Upon approval from the City and working closely with the City’s SCADA consul-
tant, data collection activities were expanded to include EXP read-only access to the
City SCADA system to assist with defining critical operational parameters needed
for model calibration.
As noted above, the calibration plan relies heavily upon determining the appropriate Hazen Williams coefficient for each pipe. Although theoretical Hazen Williams coefficient (C) value ranges are provided for each pipe type, they do not account for age and years in service, actual inside pipe diameter, or system minor losses. Appurtenances such as valves (and disc status), bends, tees, service saddles, reducers, etc. have a considerable effect on performance, especially under high-flow conditions (i.e., fire flow). Therefore, the applicable C values will vary depending on the local system schematic and details. Estimates for C value ranges from calibration are based on typical published values for different pipe material, diameter, age, and level of attack on the pipe, which compensates for changes in internal pipe diameter. Internal pipe diameter can be affected by tuberculation, build-up in the pipe, or by ice formation in cold climates.
The planned calibration strategy started with the Plateau district loop to determine
initial Hazen Williams coefficient values. This district was the ideal starting point
because of the available instrumentation within the booster pumphouse serving the
district (Booster Station 2 referred to as BS2) and the simplicity of the district piping.
The use of parameters and conditions which are verified/quantified through instru-
ments from BS2 is preferred to the alternative of relying on assumptions regarding the
City's current system operation. The details of the physical infrastructure within the Plateau network are available in relatively recent as-builts and in great detail, especially when compared to all other City districts. The level of detail incorporated within the model
with respect to pump curves and on/off status, elevations, pipe age, pipe size, pipe
length, bends, valves, etc. drastically reduces the margin of error due to assumptions.
Through SCADA, data are available from BS2 on pressure, flow, and temperature for both the supply and return piping entering/exiting BS2. Therefore, the difference between output and input parameters for both flow and pressure can be quantified, which supports the calculation of a C value that is reflective of the distribution system. The C values determined for Plateau would be considered a high-end benchmark when compared with other districts, simply based on the pipe age and years in service.
Plateau is a network entirely composed of HDPE pipe which has been in service for
approximately 10 years. This will be beneficial for comparison when analyzing other
water districts which have larger information gaps coupled with a lack of available
instrumentation to confirm operating conditions. Relying on data from the Plateau
network will enhance the accuracy of any assumptions necessary in the other districts
throughout the City. The results of recent assignments in the Plateau network have confirmed the operation of various distribution system components, and it is believed that this network is functioning as intended.
The goal for achieving an acceptable level of model calibration from field data
is based on the calibration criteria recommended in Chap. 7 of Advanced Water
Distribution Modeling and Management by Bentley Systems. It states that the model
should predict the hydraulic grade line (HGL) to within 1.5–3 m at calibration data
points for a water system with pipe diameters of 600 mm and smaller during fire flow
tests and to the accuracy of the elevation and pressure data during normal demands.
It also recommends the model be able to reproduce water tank fluctuations to
within 1–2 m for extended period scenarios (EPS) and match water treatment plant
and/or pump station flows to within 10–20%. These initial targets may be difficult to
achieve due to the lack of instrumentation throughout the water network. Therefore,
focusing on portions of Iqaluit’s network with instruments such as the Plateau district
is desired to validate the calibration results. Acceptable calibration may be achieved
and the model is deemed to be a reasonable representation of the system when the cost
of performing additional field tests exceeds the value of further calibration efforts.
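Those guideline tolerances lend themselves to a simple automated check once field data are available; the Python sketch below applies the looser end of the quoted ranges (3 m on HGL, 20% on station flows) and is illustrative only.

def hgl_within_tolerance(model_hgl_m, measured_hgl_m, tolerance_m=3.0):
    """Hydraulic grade line check against the 1.5-3 m guideline for fire flow tests
    on systems with pipes 600 mm and smaller (loose 3 m bound used here)."""
    return abs(model_hgl_m - measured_hgl_m) <= tolerance_m

def station_flow_within_tolerance(model_flow, measured_flow, fraction=0.20):
    """Pump station or WTP flow check against the 10-20% guideline (20% used here)."""
    return abs(model_flow - measured_flow) <= fraction * measured_flow

# Example: a model HGL of 101.2 m against a measured 103.9 m differs by 2.7 m,
# which is within the 3 m bound, so this point would pass.
print(hgl_within_tolerance(101.2, 103.9))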
The thermal model was created using Microsoft Excel spreadsheets for each of the
water districts in Iqaluit and flow data from the water model. Heat loss calculations
used in the thermal model development incorporate methods given in Chap. 4 of the
Cold Climate Utilities Manual (1996).
The thermal model has a “Master” spreadsheet where inputs are imported from the
water model for the entire water distribution network during the critical low flow
period where the system is most vulnerable to freezing. The spreadsheets for each
individual water district are automatically populated with inputs from the “Master”
spreadsheet, using the Excel VLOOKUP function. The data from the water model can
be simply exported from the water model and pasted into the “Master” spreadsheet
within the thermal model. The input hydraulic (flow) data required for each pipe
segment to model the thermals of the water system are summarized as follows:
• Pipe Label.
• Pipe Start Node.
• Pipe Stop Node.
• Pipe Diameter.
• Pipe Material.
• Pipe Length.
• Flow Rate in Pipe Segment.
The collection of information relating to the physical attributes of the pipes
throughout the distribution network was described in the previous section on water
model development.
Flow rates of the circulating pumps at the reheat stations are vital for the develop-
ment of the thermal model for each district. The recirculation rates, together with the
thermal capacity of the boilers, must be capable of adding sufficient tempered water
to replace the heat loss within the distribution system. Adequate hydraulic capacity
is required to ensure that water returns to the reheat station prior to cooling below
5 °C. Information on the existing circulation rates and boiler capacities of the reheat
stations was provided by the City.
Another important input required for the thermal model analysis is the location
and flow rate of existing bleeds within the distribution system. In certain water
districts, due to phasing of development or condition of existing infrastructure, bleeds
are utilized to maintain sufficient circulation during low flow periods to prevent
watermains from freezing. The City intended to provide the location and flowrates
of all existing bleeds for thermal analysis of the existing network condition. The
locations have been provided, but flow rates are still being compiled. Without this
information, the thermal model will estimate the minimum bleed flow rates required
to mitigate the risk of freeze.
The flow scenario used for analysis in the thermal model is the assumed lowest-flow period within the City's diurnal consumption pattern. Trending data indicate that this occurs around 3 am daily and would be the period when the greatest heat loss occurs within the distribution network. This is due to minimal water demands and low flow rates throughout the network. The lower velocities through the pipes correspond to longer residence times and thus greater heat loss in the distribution system before the water returns to a reheat station or reaches a water bleed location.
Several assumptions are necessary for developing estimates on heat loss. Critical
piping norms applied are:
• All pipes are wrapped in 50-mm of polyurethane insulation.
• The pipes are assumed to have no resistance to heat loss; only the insulation is
assumed to provide any thermal resistance.
• Outside pipe diameters were calculated based on schedule pipe sizes.
• Service connections typically consist of 25-mm service and recirculation lines
installed inside a 100 mm insulated HDPE duct. For heat loss estimates, the two
lines were treated as a single 50-mm diameter pipe, 20 m in length.
• The total number of service connections was estimated using City mapping for
each water district.
On an annual basis, the ground temperature below the active layer in Iqaluit varies only marginally; thus, in theory, appropriately buried pipes are encased at a steady-state temperature within the permafrost. The ambient temperature selected for heat loss calculations in Iqaluit is –10 °C. A thermal conductivity (K) value of 0.024 W/m/°C was selected from the Cold Climate Utilities Manual (1996) for typical polyurethane insulation. Greater conductivity values are used to simulate insulation deterioration, based on the year of pipe installation.
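As a rough illustration of the heat loss calculation these assumptions feed, the Python sketch below estimates steady-state heat loss through the insulation alone for one pipe segment, consistent with the assumption that the pipe wall provides no thermal resistance; the cylindrical-conduction form used here is a common approximation and is an assumption, not necessarily the exact formulation in the project spreadsheets.

import math

def segment_heat_loss_w(water_temp_c, ambient_temp_c, pipe_od_m,
                        insulation_thickness_m, length_m, k_insulation=0.024):
    """Steady-state heat loss (W) through cylindrical insulation around one segment.
    Only the insulation resists heat flow; pipe wall resistance is ignored."""
    r_inner = pipe_od_m / 2.0
    r_outer = r_inner + insulation_thickness_m
    return (2.0 * math.pi * k_insulation * length_m
            * (water_temp_c - ambient_temp_c) / math.log(r_outer / r_inner))

# Example: 100 m of 200 mm OD pipe with 50 mm of insulation, 5 degC water,
# -10 degC ambient (roughly 560 W for this hypothetical segment).
print(round(segment_heat_loss_w(5.0, -10.0, 0.200, 0.050, 100.0), 1))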
The calibration plan for the thermal model was to rely on values obtained from
existing instruments to validate the assumptions utilized for the unknowns within the
heat loss calculations. Thermal modeling started with Plateau network and analyzed
logged temperature depressions from Booster Station No. 2’s (BS2) instruments. BS2
actively logs the output (supply) as well as the input (return) temperatures from the
Plateau distribution network. Comparing these temperature depressions with output
(supply) and input (return) flow rates enables the validation of assumptions used for
heat loss calculation. Thus, with all the known details of the Plateau network, the
parameters assumed in the heat loss calculations would be appropriately adjusted.
3 Findings/Observations
It is important to understand that interpreting the results of the data gathered in the field requires considerable assumptions about holistic system functionality. Validation of these assumptions will enhance the accuracy of the model during the calibration stage. Confirming the operational functionality of key infrastructure such as normally closed valves, Pressure Reducing Valves (PRVs), pump status, system bleeds/demands, known frozen sections, etc. is essential when calibrating the system with the data collected. For example, unanticipated flow test results could be a function of a wide variety of variables such as pump failure/inefficiency, incorrect post-design system modifications, or excessive system head loss caused by a partially closed valve. To collect additional information, the fire flow testing program was modified to include additional monitoring locations along the supply line. The City's Public Works Department provided two resources to monitor pressure or pump operation during the flow tests of critical infrastructure for further analysis and interpretation of the flow test results.
A comparison of the measured reservoir discharge flow from the SCADA system to
the calculated reservoir discharge flow from the water model over a 24-h period is
shown in Fig. 2. The measured flow data for the same 24-h period that were used to
develop the diurnal pattern for the model were used in the comparison to evaluate the
difference in actual versus predicted flow from the reservoir. The graphical illustration
in Fig. 2 indicates that the discharge flow in the model is predicted to be on average
0.1 ML/day lower than measured by the SCADA on that date during low demand time
from 12 to 6 am. A possible explanation is that all the fixed flows from bleeds have
not been accounted for in the model due to inaccurate flow data for operational bleeds
or that the bleed rates are operating at significantly higher flow rates than assumed.
There are also 16 peaks in flow which coincide with the truck fills at Booster Station
No. 1. The time, duration, and flowrate for the truck fill consumption pattern were
derived directly from the measured SCADA data. It indicated that each truck is filled
at a rate of 73.6 L/s (1167 USgpm) for approximately 3 min. It should be noted that
the BS1 station was designed to have truck fill operations which flow at a rate of
approximately 25 L/s.
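A simple way to isolate those truck-fill events in the 2-min SCADA record is to flag consecutive readings well above the surrounding baseline; the Python sketch below uses an illustrative threshold and invented sample values, not the City's actual data.

def truck_fill_events(flows_l_per_s, threshold_l_per_s=60.0):
    """Return (start_index, end_index) pairs of consecutive readings at or above a
    threshold; with 2-min samples, a roughly 3-min fill at about 73.6 L/s appears
    as one or two flagged readings."""
    events, start = [], None
    for i, flow in enumerate(flows_l_per_s):
        if flow >= threshold_l_per_s and start is None:
            start = i
        elif flow < threshold_l_per_s and start is not None:
            events.append((start, i - 1))
            start = None
    if start is not None:
        events.append((start, len(flows_l_per_s) - 1))
    return events

# Hypothetical baseline demand around 30-40 L/s with one fill spike:
print(truck_fill_events([32, 35, 74, 73, 38, 31]))  # [(2, 3)]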
The measured and calculated reservoir levels for the same time period are shown in
Fig. 3. It indicates that the model predicts the initial filling of the reservoir at a rate similar to the measured data. However, the model also predicts that the reservoir will initially fill faster than the measured data and that one more fill cycle is required than was measured. This is an indication that model user demands may be too high between approximately 12 and 8 am. It is anticipated that the misalignment between model-predicted and actual operational measurements of the tank level trends is due to a combination of these factors. Thus, incorporating unidentified system bleeds operating in the field, coupled with modifications to the user demand pattern during that time period, would improve the alignment of model-calculated versus measured tank level trends.
A contractor was engaged to execute the field calibration plan throughout the Iqaluit
water network. Upon reviewing the data of the first few flow tests in March 2021,
there were concerns regarding the contractor's results, as all test results were significantly lower than anticipated in both districts tested, Plateau and Federal Road. EXP was not present during the testing and had not previously worked with this contractor; thus, there was concern for human error. As a means of quality control, the subsequent flow test was scheduled in the presence of EXP in June 2021. It was assumed that there were two potential causes for the poor flow test results experienced on March 8th, 2021 within the Plateau network: either the contractor did not properly complete the tests, or the emergency pump within Booster Station No. 2 (BS2) did not engage during the tests. By collecting and reviewing SCADA data from the period when the tests were completed, it was found that the flow experienced at the hydrants during the tests was consistent with the flows recorded through BS2's flow meters. This confirmed the in-field test results and therefore the test procedure. Data logged from within BS2 also confirmed that the City's emergency pump within BS2 did not automatically engage during the test conditions and that only the booster and jockey pumps were operational during these tests. This was immediately flagged to the City, and it is EXP's understanding that this problem has been resolved.
A subsequent field test confirmed that the fire pump (P-305) automatically engaged
and ran during the test. Review through SCADA of the flowmeter (FIT-312) imme-
diately downstream of P-305 confirmed that there was no discharge from the pump
conveyed throughout the Plateau network. After reviewing the schematics and logic control, it was suggested that the City investigate a solenoid and pressure control valve pairing (SV-306 and PCV-306) used for pressure relief downstream of the pump. Upon pump P-305 startup, PCV-306 is in the open position, and after a period of time, SV-306 is energized to close PCV-306, forcing all water through FIT-312 and out to the demand in the Plateau network. If high output pressures are experienced, PCV-306 activates to relieve pressure buildup. Figure 4 illustrates the suspected conditions experienced during the flow testing within the Plateau network: water was circulating within the fire pump's pressure relief system because SV-306 and PCV-306 may not have been functioning or set properly.
The City was also immediately notified, and it is our understanding that the City is investigating and planning to repair the problem. Upon correction of the issues noted above, the fire flow testing in Plateau may be attempted again. The current test results within the Plateau district are not valid for model calibration. Following investigation and repair, all flow tests in Plateau will need to be repeated once the fire pump and solenoid valve are confirmed to be working properly and discharging higher system flow rates to the district network.
Temperature data were collected from the SCADA system from the treated water
storage discharge at the Water Treatment Plant and BS2. Upon review, several irreg-
ularities were identified which indicated that the system is not operating as intended,
and ultimately, the data could not be relied on to validate assumptions necessary for
calibration of the thermal model. Figure 5 presents the temperature data recorded
from July 21 to July 28, 2021. The irregularities identified in the data, which suggest potential operational issues within the system, are summarized as follows:
• Water temperatures leaving BS2 are excessively high. The station's heating system is intended to operate and maintain a maximum output temperature setpoint of ± 10 °C. Water temperature leaving the Water Treatment Plant (WTP) was also increasing beyond the optimal maximum setpoint.
The water model can be further refined to more closely reflect measured data by quantifying
the fixed flow from bleeds during the low demand period between 12 and 6 am. As
noted above, EXP has requested this information.
For information purposes, the measured water temperature variation at the reser-
voir discharge location over the same 24-h period has been provided in Fig. 6. It
indicates very little variation throughout the day and an average water tempera-
ture of 7.5 °C. This average water temperature will be useful for future heat loss
calculations.
A working water model has been established for the City of Iqaluit water distribution system. However, an acceptable level of calibration of the water model, as recommended by published guidelines, has not yet been achievable because of the field test results. Operational issues were identified by comparing field test results to SCADA-collected operational data.
In addition, bleed locations throughout the system were identified by the City, but flow measurements have not been provided for each location. In the absence of these data, assumed flows must be simulated in the water model. Therefore, any available flow data at bleed locations will assist calibration efforts. Similarly, it is recommended that flow data from Sewage Lift Station No. 2 (LS#2) be captured as a means of validating the Lower Iqaluit water demands, since LS#2 only collects sewage from within Lower Iqaluit. Because LS#2 is not connected to the City's SCADA system, these flow data are requested directly from the City.
The plan moving forward is to repeat flow tests in the Plateau district once oper-
ational issues in Booster Station No. 2 are resolved. The field results will be used
to calibrate the Plateau district in the model since the physical attributes of the
infrastructure as well as operational data are known in greater detail in this district
compared to the remainder of the system. The Hazen Williams C values, which are
a function of head losses throughout the piping network, determined for Plateau
would be considered as a high-end benchmark when comparing to other districts
simply based on the pipe age and years in service. Flow testing can proceed with the
resolution of operational issues.
This paper summarizes Phase 1 of the project, in which the objective of a working model was achieved. The objective of a calibrated model remains outstanding because system operational issues need to be addressed prior to conducting the further field flow tests required for model calibration. Phase 2 will resume model calibration in conjunction with the City's efforts toward resolving the operational issues identified throughout the flow testing conducted in each district. Once acceptable calibration of the Plateau district is achieved in the water model, subsequent districts will be evaluated utilizing Plateau's C values as a guideline for calibration or field troubleshooting. It is anticipated that the majority of the flow tests will need to be conducted again. Resolution of the outstanding operational items noted in Phase 1, in addition to the flow data requested, will increase the likelihood of successfully calibrating the remaining water districts.
Continuation of thermal modeling is dependent on flow data from the cali-
brated water model along with the resolution of operational items noted in Phase
1. Therefore, thermal modeling will continue in Phase 2.
The calibrated water model will then be used to identify and prioritize potential improvements to the overall distribution system. It will also be a valuable tool for assisting the City with troubleshooting and maintenance activities as well as planning for system upgrades and future expansion potential.
References
1. Bentley Systems (2007) Advanced water distribution modeling and management, 1st edn, vol
7. Bentley Institute Press, Exton, Pennsylvania, USA, pp 287–289
2. Smith DW (1996) Cold regions utilities monograph, 3rd edn, vol 4. ASCE, New York, NY, USA,
pp 19–48
Hydrology and Water Balance Study
for the Canadian High Arctic
Community of Grise Fiord
Abstract In the Canadian Arctic, drinking water availability is scarce and vulner-
able to changes such as climate change impacts and increasing community develop-
ment. A hydrological and water balance study was completed to determine if various
surface water sources that rely on snowfall and snowmelt-generated runoff could
meet current and future 20-year water supply needs for the High Arctic Nunavut
community of Grise Fiord, the most northerly community in Canada. The study is
focused on a coarse-resolution analysis to characterize annual watershed yield versus
expected water use of the community and accounts for annual municipal water supply
usage, population growth, and potential impacts of climate change. High-Resolution
Digital Elevation Models were used to delineate potential watersheds using ArcGIS.
A water balance model was used to predict the annual water yield from each poten-
tial watershed using historical and projected future climate data. Climate scenarios
were analyzed using below-average values for precipitation rates and above-average
values for evapotranspiration rates to account for worst-case scenarios. The water-
sheds represent nival regimes which are characterized by negligible winter flows
(typically between October and early May), followed by significant flows in the
summer from ice melt and thawing snowpack. From the analysis, the existing runoff
source at Grise Fiord is not reliable or sufficient to meet the future water supply
needs of the community. It was recommended that the community use the alter-
native primary water source identified in the study (Airport River) as this source
can meet the community’s future needs. The alternative water source will require
constructing a new water intake and raw water storage infrastructure (storage tanks)
in addition to the new water treatment facility.
1 Introduction
2 Background
The Hamlet of Grise Fiord, Nunavut, is located at 76°25′ N latitude and 82°53′ W
longitude on the southern section of Ellesmere Island, and it is the most northerly
community in Canada. In Grise Fiord, the primary water source is melt water from
surface runoff which is available for about 45–50 days a year during the summer from
mid-June to the beginning of August. The runoff is collected in a small basin where
a hose runs overland to two heated water storage tanks (approximately 4,000,000
L each) that hold the community’s annual storage. In recent years, there have been
structural issues with the tanks, likely due to settlement and thawing permafrost
issues. To mitigate further settlement, tanks were being filled to only 50% capacity
and operational adjustments were made to conserve water. Remediation actions were
taken in 2020, but issues persist. The community noted that one tank was leaking
in the summer of 2021 and is currently completely empty. The community is under
direction to undertake “smart water use” practices. A fleet of two trucks distributes
water to the community. Chlorination is the only method being used for treatment/
disinfection.
The community has identified a secondary water source (Airport River), about
300 m northwest of the storage tanks, that runs through the west side of the commu-
nity. The Airport River has been reported to have noticeable flow from late June to
September. In 2020, the community requested to add Airport River as a
secondary water source to the water license. This secondary source will not be recog-
nized by public health authorities as suitable for potable water supply purposes until
complete bacteriological and chemical analysis has been completed and approved.
Historical documentation and past reports have noted that the existing water basin
is recharged by ‘glacier’ melt during a few weeks in the summer. However, upon
reviewing satellite imagery and topographic information, there is little evidence that
the runoff basin receives any glacial melt. Historical satellite imagery and topographic
information indicate that there are no remaining ice fields within the runoff basin’s
catchment area. The ice fields closest to the community drain into the Airport
River watershed. It is likely that the recharge of the existing basin is attributable solely
to snowmelt and surface runoff. An aerial view of the community is shown in Fig. 1.
2.2 Methodology
Water budgets (as volumes) were computed on an annual basis assuming steady
conditions with respect to storage within the watershed. Water volumes are removed
(losses) from the watershed through community water usage and evapotranspira-
tion (ET). Water volumes are recharged (inputs) into the watershed through annual
precipitation.
The change in annual storage volume (water balance) within a watershed is given as:

S = [(P − ET)/1000] × A_w − U,   (1)

where S is the change in annual storage volume (m³/year), P is the annual precipitation (mm/year), ET is the annual evapotranspiration (mm/year), A_w is the watershed area (m²), and U is the annual community water use (m³/year).
Table 1 Population projections and annual water use rates

Population | Water use (m³/year)
169        | 7423
200*       | 8760**

* Based on projections, the population of Grise Fiord is projected to decrease by 2043; for this study, a conservative value of 200 persons has been used as the 2043 population to account for potential growth
** Water use based on a design value of 120 lpcd
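For reference, the 2043 design demand in Table 1 follows directly from the footnoted design rate: 200 persons × 120 L/(person·day) × 365 days/year = 8,760,000 L/year = 8760 m³/year.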
If S > 0, precipitation (input) exceeds ET and water use (losses) in the watershed
and the annual net balance is positive.
If S < 0, precipitation does not exceed ET and water use in the watershed and
the annual net balance is negative.
Percolation to groundwater is assumed to be negligible because the watershed is
underlain by permafrost. The equation above assumes that all precipitation entering
the watershed is subject to evapotranspiration.
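As an illustration of how Eq. (1) is applied, the short Python sketch below evaluates the balance for a single scenario. The catchment area and 2043 water use in the example call are values reported later in this paper; the precipitation input is a placeholder rather than a value from the study's climate dataset.

```python
def annual_water_balance(precip_mm, et_mm, area_m2, water_use_m3):
    """Change in annual storage volume S (m3/year), per Eq. (1).

    precip_mm    : annual total precipitation (mm/year)
    et_mm        : annual evapotranspiration (mm/year)
    area_m2      : watershed (catchment) area (m2)
    water_use_m3 : annual community water use (m3/year)
    """
    runoff_m3 = (precip_mm - et_mm) / 1000.0 * area_m2  # net yield over the catchment
    return runoff_m3 - water_use_m3


# Example call (illustrative precipitation only): existing runoff basin, 2043 demand.
S = annual_water_balance(precip_mm=110.0,    # placeholder scenario precipitation
                         et_mm=65.0,         # average ET from Table 4
                         area_m2=262_473,    # existing runoff basin catchment
                         water_use_m3=8760)  # projected 2043 demand
print(f"S = {S:,.0f} m3/year ->", "supply sufficient" if S > 0 else "supply insufficient")
```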
Historical data between 1984 and 2020 were downloaded from the Environment and
Climate Change Canada website to calculate annual total precipitation at the closest
weather station. There is only one weather station located in Grise Fiord, near the
airport (76°25′22.040″ N, 82°54′08.020″ W, elevation = 44.50 m, Climate Station
2.2.6 Evapotranspiration
Evapotranspiration (ET) is the primary mechanism for water loss from a watershed
underlain by permafrost. However, sparse data are available regarding actual ET rates
for the study community of Grise Fiord. For this study, a literature review was
completed using past research that investigated annual ET in High Arctic environ-
ments in Nunavut (above 70°N latitude) and specifically for intermittent river and
ephemeral stream systems. These values have been listed in Table 3.
Minimum, maximum, median, average, and 3-year high annual ET values for the
water balance study have been calculated using these literature values and have been
presented in Table 4.
As a comparison, wetland treatment studies conducted by Dalhousie Univer-
sity and the Government of Nunavut—Department of Community and Government
Services (GN-CGS) in 2015–2017 estimated annual ET rates from Sanikiluaq, Cape
Dorset, and Naujaat to be 91, 63, and 65 mm/year, respectively [2–4]. All three of
these Nunavut communities are south of 70°N latitude. ET rates decrease markedly
with increasing latitude because of reduced solar irradiance and air temperature.
The annual surface irradiance in the High Arctic is approximately 2500 MJ m⁻² yr⁻¹,
almost 60% less than that in southern communities [10]. Communities further south
(below 70°N latitude) would therefore typically experience higher ET values due to
increased solar irradiance and air temperature, which indicates that the chosen ET
values for the High Arctic communities are appropriate for the conservative approach
applied in this screening study.
Table 4 Calculated ET values for water balance calculations

ET parameter   | ET (mm/year)
Minimum ET     | 27
Maximum ET     | 103
Median ET      | 61
Average ET     | 65
3-year high ET | 86
3 Results
Watersheds were delineated using ESRI ArcGIS Pro. The watershed areas for the
existing runoff source and Airport River for Grise Fiord are shown in Fig. 2.
Table 5 presents the delineated water areas for the two water sources.
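For context, watershed delineation of this kind typically follows a fill–flow direction–flow accumulation–watershed sequence in ArcGIS; a minimal arcpy (Spatial Analyst) sketch is given below. The workspace path, layer names and snap distance are hypothetical placeholders, not details taken from this study.

```python
import arcpy
from arcpy import sa  # Spatial Analyst extension

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\grise_fiord\hydrology.gdb"  # hypothetical workspace

dem = "arcticdem_clip"      # high-resolution DEM (hypothetical layer name)
intakes = "intake_points"   # candidate intake locations used as pour points

filled = sa.Fill(dem)                               # remove sinks from the DEM
flow_dir = sa.FlowDirection(filled)                 # D8 flow direction
flow_acc = sa.FlowAccumulation(flow_dir)            # flow accumulation grid
pour_pts = sa.SnapPourPoint(intakes, flow_acc, 50)  # snap intakes to high-accumulation cells
watersheds = sa.Watershed(flow_dir, pour_pts)       # delineate contributing catchments

watersheds.save("watersheds_raster")
# Convert to polygons to obtain catchment areas (m2) for the water balance.
arcpy.conversion.RasterToPolygon(watersheds, "watershed_polygons")
```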
For the water balance calculations, fifteen (15) scenarios that would result in the
lowest potential runoff for the study watersheds were analyzed. Taking this conser-
vative approach, the fifteen analyzed scenarios use below-average values for precip-
itation (inputs) and above-average values for evapotranspiration (ET) (losses). The
worst case would be represented as Scenario 1, with minimum precipitation and
maximum ET.
The results showing the amount of potential annual runoff for the community of
Grise Fiord are presented in Table 6.
As discussed previously, a review of historical satellite imagery and topographic
information indicates that the existing water basin recharge is likely attributed to
snowmelt and surface runoff (not glacier melt). As a conservative approach, only
annual precipitation values have been used as water inputs in the water balance
calculations.
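A sketch of such a conservative screening grid is shown below: it crosses a set of precipitation values with the ET statistics of Table 4 and evaluates Eq. (1) for both catchments, flooring negative net runoff at zero as in the footnote to Table 7. The catchment areas and 2043 water use are values reported in this paper, but the precipitation values and the scenario ordering are illustrative placeholders; the paper's exact 15 combinations are not reproduced here.

```python
from itertools import product

CATCHMENTS_M2 = {"Existing runoff source": 262_473,
                 "Airport River": 34_139_893}
WATER_USE_2043_M3 = 8760              # projected 2043 demand (m3/year)

ET_MM = [27, 61, 65, 86, 103]         # ET statistics from Table 4 (mm/year)
PRECIP_MM = [55, 80, 105]             # placeholder below-average precipitation values (mm/year)

def storage_change(p_mm, et_mm, area_m2, use_m3=WATER_USE_2043_M3):
    """Eq. (1): S = (P - ET)/1000 * Aw - U, with negative net runoff floored at zero."""
    runoff = max((p_mm - et_mm) / 1000.0, 0.0) * area_m2
    return runoff - use_m3

# 3 precipitation values x 5 ET values = 15 screening scenarios.
for i, (p, et) in enumerate(product(PRECIP_MM, ET_MM), start=1):
    for name, area in CATCHMENTS_M2.items():
        s = storage_change(p, et, area)
        print(f"Scenario {i:2d}  P={p} mm  ET={et} mm  {name}: "
              f"S = {s:,.0f} m3/yr ({'meets demand' if s > 0 else 'shortfall'})")
```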
The results of the water balance analysis for the two water sources for Grise Fiord
are presented in Table 7.
Table 7 Grise Fiord water balance analysis for potential water sources

Scenario no. | 2043 water use (m³/year) | Existing runoff source: runoff (m³/year) | S > 0* | Airport River: runoff (m³/year) | S > 0*
1  | 8760 | 0**    | No  | 0**       | No
2  | 8760 | 367    | No  | 47,796    | Yes
3  | 8760 | 5879   | No  | 764,734   | Yes
4  | 8760 | 5416   | No  | 704,420   | Yes
5  | 8760 | 9878   | Yes | 1,284,798 | Yes
6  | 8760 | 15,390 | Yes | 2,001,736 | Yes
7  | 8760 | 7654   | No  | 995,519   | Yes
8  | 8760 | 12,116 | Yes | 1,575,897 | Yes
9  | 8760 | 17,628 | Yes | 2,292,835 | Yes
10 | 8760 | 11,047 | Yes | 1,436,948 | Yes
11 | 8760 | 15,510 | Yes | 2,017,326 | Yes
12 | 8760 | 21,021 | Yes | 2,734,264 | Yes
13 | 8760 | 21,956 | Yes | 2,855,802 | Yes
14 | 8760 | 26,418 | Yes | 3,436,180 | Yes
15 | 8760 | 31,930 | Yes | 4,153,118 | Yes

* Change in annual storage volume is denoted as S, as per Eq. (1), which accounts for the community's water usage (noted above as 2043 water use)
** Noted as zero potential runoff since estimated ET exceeds precipitation
If the annual precipitation volume is greater than or equal to the annual losses
(annual ET plus community water use), the water supply sufficiently meets the needs
of the community. Based on the model:
• For the existing runoff source, the water source cannot meet the community’s water
supply needs in 5 out of the 15 scenarios where the community is experiencing
either minimum precipitation and/or maximum ET.
• For Airport River, the only instance where water supply needs are not met occurs
if the community experiences the worst-case scenario (Scenario 1—minimum
recorded precipitation and highest ET).
4 Discussion
Snow constitutes the majority of the total annual precipitation in the community of
Grise Fiord. Both watersheds are associated with snowfall and snowmelt-generated
runoff, which characterizes a nival regime streamflow. These nival regimes are char-
acterized by negligible or very low winter flows (typically between October and early
May), followed by significant flows from ice melt and spring thawing of snowpack.
Evapotranspiration is the main hydrological loss and is apparent for a couple of
months after snowmelt until soil moisture declines. Evapotranspiration is greatest
following the snowmelt (typically around late June) and decreases substantially
throughout the summer.
Runoff ratios (the ratio of runoff to precipitation) are typically high for
polar deserts and glacierized basins. In late spring and summer, high solar radiation
causes rapid snowmelt, and over 80–90% of the annual runoff occurs within
a period of a few weeks. The timing and duration of the melt season depend on the weather
and on end-of-winter snow conditions. After snowmelt, flow generally declines rapidly.
The presence of permafrost at shallow depths prevents infiltration.
Water Balance Scenarios for Grise Fiord were studied for the existing surface water
runoff basin and Airport River. Based on the water balance assessment, the existing
runoff source cannot meet the community’s water supply needs in 5 out of the 15
scenarios, where the community is experiencing either minimum precipitation and/
or maximum ET. This is likely due to the small catchment area for the existing runoff
basin (262,473 m2 ).
In a study investigating vulnerability levels of municipal drinking water supplies
for the communities in Nunavut, Hayward et al. [7] stated that the most influential
factor regarding water supply vulnerability threat levels appears to be the size of
the source watershed. The same study noted a high- to medium-level water supply
vulnerability threat for Grise Fiord based on the worst-case scenario assessment
(minimum precipitation and maximum ET scenario) which is consistent with the
results from this assessment (assuming using only the existing runoff basin).
For the alternative water source, Airport River, the only instance where water
supply needs are not met occurs if the community experiences the worst-case scenario
(Scenario 1—minimum recorded precipitation and highest ET). The catchment area
for Airport River is relatively large at 34,139,893 m2 . In general, any net posi-
tive annual runoff for Airport River will provide sufficient water supply for the
community’s water supply needs.
It is recommended that Grise Fiord use Airport River as either the primary or
secondary water source for the community over the next 20 years. Airport River
provides a much larger catchment area compared with the existing runoff source and
can be used as a reliable water source. The Airport River watershed also contains
visible ice fields that will provide additional water quantity inputs.
Airport River is approximately 300 m away from the existing storage tanks and
runs through the west side of the community. This secondary source has not been
recognized by public health authorities as suitable for potable water supply purposes.
Additional water quality sampling including complete bacteriological and chemical
analysis is required to confirm Airport River as a viable potable water source.
In addition to water quality samples, a field program to quantify the estimated
flow rate of Airport River is required. In Grise Fiord, the existing water source is
melt water which is available for about 45–50 days a year during the summer from
mid-June to the beginning of August. It has been reported that in some years, this
window is as short as three weeks. The Airport River has been reported to have
noticeable flow from late June to September.
As the Airport River is an ephemeral river and has a very short extraction period, it
is expected that the proposed raw water refilling operation will use a trailer-mounted
pump system that can be set up for the duration of the refilling operation and taken
down and stored over the remaining months. There is currently no developed access
to the Airport River, and thus, a new access road and truck pad will be required to
safely access the primary water extraction point.
In determining the location of a new intake to access Airport River, the proximity
of the airport needs to be considered. In the vicinity of the airport, the river widens
considerably, but the risk of industrial (fuel) contamination increases. Upstream of the
airport, the terrain for construction is more challenging but the risk of contamination
is greatly reduced.
As raw water is annually collected over a period of a few weeks, the commu-
nity requires raw water storage for the entire 12 months between refilling periods.
Currently, the community has two heated welded steel tanks with an operational
capacity of approximately 4000 m3 /tank (8000 m3 in total). One tank was built
around 1986 (Tank A) and the other in 2002 (Tank B). Remediation actions were
taken in 2020 but issues persist.
The projected 2043 annual water demand (i.e., raw water storage requirements)
for the community of Grise Fiord is 8760 m3 /year. The new water treatment plant
must incorporate this raw water storage consideration into the new design—potential
raw water storage options to assess include:
• Potential to reuse existing tanks for raw water storage—there are notable
geotechnical and structural concerns.
• Construction of a new earthen basin.
• Upgrading and increasing the storage capacity of the existing collection basin.
• Installation of new storage tanks (improving redundancy by providing three tanks
vs. two tanks).
Results from this analysis should be considered high-level and coarse resolution. This
desktop study provides a screening level assessment of the drinking water supplies
for the community of Grise Fiord with consideration to climate change, population
growth, and existing water infrastructure. This study focuses solely on water quantity
and does not provide any assessment regarding water quality.
There are a number of limitations based on poor data availability, as well as
the unknown quality of some of the datasets. If a yearly climate dataset had three
or more months of missing data, this climate dataset was omitted from the water
balance analysis.
Evapotranspiration characteristics of the studied watersheds are also extremely
limited—no field data for measured evapotranspiration rates was available. A litera-
ture review was completed to estimate evapotranspiration rates in similar High Arctic
environments, but there is still a high degree of uncertainty in the quality of this histor-
ical data. Variations in environmental conditions, plant community composition, and
microtopographical features have a significant influence on evapotranspiration rates.
There is a large spatial and temporal variability in geomorphic and climatic drivers of
evapotranspiration which makes it difficult to predict evapotranspiration rates in the
absence of any field data. As precipitation and evapotranspiration are the main sources
of water inputs and losses, any variation or error in these values could significantly
alter the results of the water modeling assessments.
Underestimation of precipitation due to snow undercatch, and water losses due to
sublimation, were not accounted for in the calculations. Actual basin snow amounts are
usually larger than measured values (at weather stations), which suffer from gauge
undercatch, and thus the use of snow gauge data was deemed a conservative
approach for this study. Estimates for snow undercatch can range from 10 to 50%
depending on gauge type and wind conditions [9]. Sublimation losses have not been
characterized. Characterization of these processes requires detailed meteorological
data.
In general, there is a lack of field studies detailing the hydrological regime and
hydrological features that affect recharge (streams, glaciers, flows through the active
layer) at all the sites. The nearby Grise Fiord glaciers and ice caps have not been
quantified and known characteristics are very limited. As such, these potential water
inputs have been omitted from the water balance analysis.
To improve the accuracy of future studies, it is recommended to conduct addi-
tional field studies to provide more complete and site-specific climate information,
evapotranspiration rates, and flow rates and water levels for major streams and
channels.
6 Conclusions
For the community of Grise Fiord, it is recommended to use Airport River as either the
primary or secondary water source. Airport River provides a much larger catchment
area compared to the existing runoff source and can be used as a reliable water source
based on the analysis performed.
Airport River has not been recognized by public health authorities as suitable
for potable water supply purposes or used in the past as a potable water source,
and thus, additional water quality sampling including comprehensive bacteriological
and chemical analysis is recommended. This includes analyzing multiple samples at
potential locations for a new water intake at both upstream and downstream of the
airport to identify and quantify the risk of industrial (fuel) contamination.
In addition to water quality samples, a field program to quantify the estimated
flow rate of Airport River is recommended for the following melt season.
Depending on the location of the new water treatment plant, upgrades to access
the river/intake and road infrastructure are likely required.
The projected 2043 annual water demand (i.e., raw storage requirements) for the
community of Grise Fiord is 8760 m3 /year.
References
1. Arktis Piusitippaa Inc. (2014) Hydrology assessment Grise Fiord, Nunavut, draft. Pond Inlet,
NU, 17 Nov 2014
2. Centre for Water Resource Studies (2017a) Wetland treatment area study in Cape Dorset,
Nunavut. Dalhousie University, Halifax, NS, Jan 2017
3. Centre for Water Resource Studies (2017b) Wetland treatment area study in Naujaat, Nunavut.
Dalhousie University, Halifax, NS, Jan 2017
4. Centre for Water Resource Studies (2017c) Wetland treatment area study in Sanikiluaq,
Nunavut. Dalhousie University, Halifax, NS, Feb 2017
5. Cold Regions Utilities Monograph (1996)
6. Good Engineering Practice for Northern Water and Sewer Systems, 2nd edn, 2017
7. Hayward J, Johnston L, Jackson A, Jamieson R (2020) Hydrological analysis of municipal
source water availability in the Canadian Arctic Territory of Nunavut. J Arctic Inst North
America 74(1):30–41
8. Kane DL, Gieck RE, Hinzman LD (1990) Evapotranspiration from a small Alaskan Arctic
watershed. Nord Hydrol 21(4–5):253–272
9. Liston GE, Sturm M (2004) The role of winter sublimation in the Arctic moisture budget.
Nordic Hydrol Hydrol Res 35(4–5):325–334
10. Wang S, Pan M, Mu Q, Shi X, Mao J, Brummer C, Jassal RS, Krishnan P, Li J, Black TA
(2015) Comparing evapotranspiration from eddy covariance measurements, water budgets,
remote sensing, and land surface models over Canada. J Hydrometeorol 16:1540–1560
11. Water Treatment Plant Design (2020) Nunavut guideline document (phase 4), Dillon Consulting
Limited, Aug 2020
12. Young KL, Lafrenière MJ, Lamoureux SF, Abnizova A, Miller EA (2015) Recent multi year
streamflow regimes and water budgets of hillslope catchments in the Canadian High Arctic:
evaluation and comparison to other small Arctic watershed studies. Hydrol Res 46(4):533–550
13. Young KL, Woo MK (2004) Queen Elizabeth Islands: water balance investigations. Northern
Res Basins Water Balance 290:152–163
Materials
Assessment of Key Imperatives
for Enhancing Precast Adoptability
in Developing Countries
Mostafa Abdelatty
Abstract Precast concrete’s modular application has proved prevalent within the
construction industry over the past decades. The road has been paved for wider
use of modern precast concrete in the construction industry. Unfortunately, use has
been limited in emerging nations despite the booming and ever-growing construction
industry. The main objective of this study is to assess the status of precast concrete
in emerging regions attempting to pinpoint both the advances and barriers associated
with its use. The Egyptian market is an example of an emerging market where a full
set of material products is considered. Data analysis, including surveys and site-visit
interviews, is used to incorporate actual market conditions and considerations into the
findings of this work. Trends in the Egyptian precast concrete industry are summarized,
with analysis of their impact on market strength and value added. Recommendations are
provided for concrete societies and the construction industry at large toward better
utilization of precast concrete within an emerging region like Egypt. The surveys and
data collected show that, in Egypt, precast construction holds roughly 5% of the
load-bearing construction market, leaving considerable room for it to gain further market
share. Modular units are the major application of precast concrete technologies according
to the surveys and feedback forms gathered for this study. A number of like-for-like cost
comparisons were made, showing that precast methods can deliver savings of up to 35%
compared with traditional construction works and helping a typical user in a developing
region understand the strengths of precast construction relative to traditional methods.
The reader will also be able to appreciate the perceptions of professionals in the market
who aim to utilize precast concrete to pursue more modern and efficient approaches to
modular building.
M. Abdelatty (B)
The American University, New Cairo, Egypt
e-mail: mo90@aucegypt.edu
1 Introduction
Precast concrete has been a prominent part of the ready-made construction industry
for decades. With regards to precast concrete, the worldwide Precast Concrete
Construction (PCC) Market is expected to reach USD 985.80 billion by 2024, from
USD 782.77 billion recorded in 2017 with a 2.6% compound annual growth rate
(CAGR) over 2017–2024, according to market research reports by precast concrete
market researcher Payal Agrawal (Agrawal 2018).
Precast concrete is usually composed of a specific mixture of cement, water,
admixtures and aggregates that is cast into particular shapes in a well-controlled
environment. The concrete is poured into a mold (form) and cured before being stripped
from the form. These elements are then transported to the construction site for erection
into the works. Precast concrete elements are reinforced in one of several ways: one
process involves conventional reinforcement bars, another combines reinforcing bars
with high-tensile-strength steel strands, and the same result can also be achieved with
high-tensile-strength strands alone, without conventional reinforcing bars. Where
strands are used, the method applied is prestressing, in which the steel strands are
pre-tensioned in the form before the precast concrete is cast. The compressive force
exerted by the strands allows precast concrete elements to span wider distances and
carry greater loads [1].
Over the last few years, Egypt has posed as a well-suited example of a quickly developing
country. Various changes and advancements have taken place in the construction
industry as well as in the manufacturing of precast concrete, including (a) quality
control, (b) quality assurance of floor slabs and (c) other products assembled with
concrete floors. Several changes have also taken place in structural designs and building
codes, and adherence to them is kept necessary and compulsory. With the continuous
advancement of the industry, inspection of materials and products manufactured from
precast concrete is required both before and after mixing. This is needed to check the
strength of the concrete and to ensure the accuracy of building works at the construction
site. Typical test specimens of about 160 mm, used for cube-strength checks, are employed
in Egypt's young, booming market to monitor the changes and working performance of
precast concrete. Precast concrete of moderate and high strength is utilized in long-term
projects such as the construction of large buildings, long bridges and high-rise
offices. In the Egyptian market, the emerging trend is to manufacture moderate and
Fig. 1 Prestressed barrel in the early ages and the general principle of prestressing applied to barrel
construction (Neupane 2020)
While the latest techniques and modern methods have increased the esthetic and
functional capabilities of concrete, they have also affected the preservation of its
natural attributes. Nontoxic materials are produced and used in the production of
concrete and throughout its service life. The controlled manufacturing of precast
concrete improves environmental performance by optimizing the materials used on the
construction site: it minimizes the material wasted when panels are created and
decreases associated wastage on the job site.
Precast concrete floor units are made up of slabs of prestressed concrete, produced
either in solid form or with longitudinal hollow cores. The floor units are available at
various depths to meet the different load and span capabilities required. The main reason
behind the widespread use of precast concrete is that it offers a number of potential
benefits over on-site casting. Much of the production of precast concrete can be
completed at ground level, which assists with reliability and safety throughout the
construction project. Precast plants also play a major role in enhancing material
quality and workmanship in comparison with construction sites [2].
In Egypt's young, booming market, precast concrete panels range from roughly $400 to
$700 per cubic meter. The panels also have different edge treatments and finishes
depending on the final use of the product. Precast concrete is classified into different
types such as PCC slabs, prestressed hollow-core slabs, prestressed solid slabs,
double-tee slabs and waffle slabs. Precast involves a variety of modular units produced
within production plants and, at times, on site. Commonly known types of these units
are listed below.
1. Slabs
2. Beams
3. Wall Panels, Segmental Tunnels and Shafts
4. House Connection Chambers
5. Precast Decorative Panels
6. Reinforced Concrete Utility Access Holes
7. Prestressed Structural Elements
8. Reinforced Concrete Pipes
9. Double Tees
10. Prestressed Concrete Cylinder Pipes
11. Reinforced Concrete Cylinder Pipes.
When discussing customary practice and market norms with modern concrete stake-
holders in developing regions such as Egypt, it was explained that most contractors
rely on two means of fixation (erection). The precast contractor can provide the service
themselves if they are well equipped and have the resources; in other cases, contractors
engage a better-equipped contractor to fix the precast units on site.
Typical bridge components produced off-site include precast piers, box girders, pier
caps, box or W-shaped beams, as well as components of parapets and deck wearing
surfaces. While footings are cast in situ most of the time, precast footings can also
be used in short-span bridges; in general, on-site fabrication is more common for
substructures than for superstructures. Other parts include full- and partial-depth
precast girders [3].
There are two options available for an engineer to decide between: (a) the elements are
manufactured in a factory and then transported to site, or (b) the elements are
prefabricated on site, adjacent to the building, if space is not constrained. A typical
sequence for precast infrastructure with prefabricated components is given by Khan
et al. [3]:
1. Designing and planning
2. Fabrication of components
3. Accelerated testing
4. Benching and shoring
5. Issues raised in erection
6. Site visits
7. Grouting and closure.
Because the available data on the precast market in Egypt was somewhat elusive, data
collection was key in directing this study and was a core element in developing the
basis of the methodology and field work. Questionnaires and interviews were conducted
with key professionals to cover the main points of what does and does not appeal to
stakeholders, as well as what prevents them from considering precast as a viable
construction technique. The field work and its direction are highlighted below for each
methodology used, covering an economic and technical overview of using precast
technology.
Upon meeting key stakeholders within the precast market, field tours were also
performed to inspect various types of precast plants, on-site precast production beds
and storage lots within ongoing construction locations. These site visits and locations
were selected based on availability and were accompanied by volunteering contractors'
representatives. To grasp the prominence of precast across Egypt, sites were visited
in the two largest cities, Alexandria and Cairo. The site visits served to inspect
precast production plants and understand their capabilities, as well as to carry out
meetings with key representatives of the precast market. These key stakeholders and
establishments were selected based on their track record of managing and executing PCC
contracts, and their ongoing PCC contracts represent a majority of the precast market
within Egypt. The site tours proved essential to forming a better understanding of the
boundaries and opportunities within the Egyptian precast market. One site visit and
verbal alignment interview took place in Saudi Arabia, which represents a more advanced
market and serves as a benchmark. The site visits and alignment interviews are
summarized in Table 1.
Interviews, both in person and online, were conducted with the main members of the
precast industry, listed below:
1. Bina Group (Saudi Arabia)
2. Cretematic Structural Elements (Egypt)
3. Osman Group (SCIC Precast Concrete) (Egypt)
Table 1 Physical site visits and interactions with key precast market representatives

Site visit/interaction # | Location of site/interview | Description of establishment | Nature of alignment interaction
1 | Saudi Arabia, Dammam Industrial Area | Precast plant covering 400,000 square meters; features a covered area for curing purposes as well as a loading station; casting beds are also fitted with 4 cranes | Physical site tour; verbal interviews with Marketing Director
2 | Cairo, Saadat City | Precast plant covering 175,000 square meters; features a covered area for curing purposes as well as a loading station | Physical site tour; verbal alignment interviews with Managing Partner
3 | Cairo, Saadat City | Precast plant covering an area of 100,000 square meters; features a covered area for curing purposes as well as a loading station | Physical site tour; verbal alignment interviews with Precast Plant Manager
4 | Cairo, 6th of October City | Precast plant covering approximately 80,000 square meters; features a covered area for curing purposes as well as a loading station; precast slab production beds vary and are changed based on demand and supply | Physical site tour; verbal interview with Precast Plant Senior Engineer; interview with Head of Marketing and Sales
5 | Cairo, Nasr City | Precast plant specialized in providing heavy infrastructure works such as the Cairo Metro Line 3 | Physical site tour; phone alignment business call with Managing Partner; alignment interview with Senior Site Erection Engineer
6 | Alexandria, Borg El Arab | On-site production of casting beds; also seen on site was a storage yard for precast bridge beams spanning up to 30 m | Physical site tour; verbal alignment interview with on-site Senior Engineer; verbal alignment phone interview with Managing Partner
7 | Cairo, New Capital | On-site production plant including casting beds for prestressed slabs and precast hollow-core components | Physical site tour; verbal alignment interview with on-site Senior Engineer; verbal alignment phone interview with Managing Partner
8 | Online call | Interview with a prominent design and supervision consultant known to provide a majority of consultancy services across Egypt and the New Capital | Recorded interview with Senior Structural Consultant
9 | Online call | Interview with a key architectural consultant known to be designing prominent locations within New Cairo and the New Capital | Alignment interview with Owner
The case studies below aim to create a comparison tailored to the Egyptian market
between traditional concrete developments and precast developments, in order to
pinpoint which factors accentuate the benefits of precast construction over the
traditional construction methods currently in majority use. Two financial case studies
were performed, combining both forecasted and actual costs of production.
For the first financial case study, various projects were collected from the New
Capital and priced with the support of local and Saudi precast vendors. Buildings of
different types were compared to allow a wide variety of comparisons of precast across
several building functions.
The second financial case study looks at a massive residential development owned
by one of Egypt’s top developers. This was an actual study done that includes
illustrations and was priced in real time by an actual Egyptian precast bidder. The
study was performed in partnership with a developer to focus on comparing precast
developments to the common practice of traditional construction.
The third and final case study was a questionnaire given only to professionals knowl-
edgeable in the uses of PCC. These professionals were a pool of engineers in the
design consultancy field, design supervision field, precast contracting field, on-site
PCC contractors, PCC marketing professionals and PCC top-level business stake-
holders; in essence, all representatives who took the survey had a track record of
working in PCC. The survey considered the perception of PCC as a prominent factor
affecting marketing and the promoted use of PCC. This case study analyzes this
perception from the above-mentioned key professionals in the field.
The first case study comprises several projects collected from the New Capital, a
prominent Cairo location having the highest number of ongoing projects simulta-
neously. With the support of local consultants serving the New Capital, data was
collected and priced in comparison with already-priced BOQs. The BOQs were
originally priced for traditional works and then further priced considering precast
built components. Different structures of varying size and complexity were assessed
to provide a good basis for analyzing precast concrete prices in comparison with more
traditional forms of building execution. The purpose of these designs is to compare
precast and traditional concrete; they were priced by contractors and designers who
requested anonymity. Considering the scarcity of precast load-bearing column members
used within Egypt, the study focused on more realistic comparison metrics, comparing
the most common PCC product, suspended precast slabs, with in situ poured slabs.
The costs mentioned below included:
1. Contractor Final Price
2. Supervision Consultant Fees
3. Design Consultant Fees.
Thirteen projects were assessed as follows. Conceptual drawings were shared for
each, and they were priced by a series of anonymous design consultants, along
with traditional and precast contractors. Projects were priced using forecast BOQs,
and completed projects' actual spend was considered to better account for the related
costs paid by an owner aiming to execute using precast concrete (Table 2).
As shown in Table 3, a cost equivalence was performed for the most common
element used in PCC within the Egyptian market, and slab costs were compared
across projects of diverse sizes.
Precast works for the 2,500-unit Madinaty Villa Project, New Cairo, Egypt. The case
study below shows the elevations and first-floor hollow core (H.C.) drawings, as shown
in Fig. 2, reflecting the four townhouses scoped for the Villa Project. About 2500 of
these units were forecasted to kick off by the end of 2021. Each unit consisted of four
townhouses interconnected into one building unit. A price list is shown below
representing the cost of building plus a reasonable net profit covering overheads;
operational profitability was accounted for to sustain the precast vendor's business.
The precast contractor was asked to price the project based on drawings to deliver
a core and shell solution. A core and shell solution refers to delivering the concrete
works only, creating suitable grounds to compare the feasibility and execution of
precast concrete in large residential projects. The study was suitable for analyzing
the construction ecosystem within Egypt, where the majority of project volume is
residential.
Assumptions made include the on-site creation of precast beds, considering that
wide-open areas are available within the New Capital area; this cost was included
within the precast slab pricing shown below in the BOQ (Table 4). The remaining
precast components were transported to site from Saadat City, located approximately
80 km away. To maintain a like-for-like comparison, load-bearing walls were used in
the study instead of precast columns; columns were eliminated because design drawings
of the units were not available. The comparison allowed contractors to include
overheads and transportation costs in their prices, considering that the New Capital
is still a remote location.
The precast contractor was asked to price the BOQ considering that the project was to
last 8 to 10 years and be delivered in phases; this allowed the contractor to account
for inflation and other increases in commodity prices. The same BOQ was given to a
residential contractor who was asked to price it based on typical market-norm
traditional methods, with bearing walls as a replacement for columns to respect the
factors of the study. The contractors were given two weeks to price the BOQs. The
terminology of BOQ items was modified by the traditional contractor so that they would
deliver the same core and shell finish set by the precast contractor. The BOQs were
then collected from both contractors, and meetings were held to further clarify price
differences and breakdowns.
Table 3 Cost comparison between cast-in-place and suspended precast floor slabs (USD)

S.N# | Project | In situ USD (A) | Precast USD (B) | Difference USD (C = A − B) | Percentage change (%)
1  | Education College 8-unit workers cottage | 348,840 | 275,583 | 73,256 | 21.0
2  | Building of 2-floor head office for Technical Education College | 364,235 | 309,600 | 54,635 | 15.0
3  | Building 8-unit workers cottage and 2 No. 12 WC lavatory amenity for educational institution | 339,984 | 306,665 | 57,618 | 9.8
4  | Building Education College amphitheater flat | 587,871 | 508,508 | 79,363 | 13.5
5  | Building of 3-floor schoolroom tower for Nurses Training Institution | 231,989 | 187,401 | 44,588 | 19.2
6  | Finishing of 3-floor management office for District Assembly | 133,468 | 187,401 | 44,588 | 15.0
7  | Building CSIR-BRRI new management tower | 338,850 | 265,997 | 72,853 | 21.5
8  | Building Secondary/Commercial School 3-floor 12-unit block | 138,489 | 94,172 | 44,317 | 32.0
9  | Building of 3-story 12-unit block for Secondary/Commercial School | 336,962 | 262,830 | 44,317 | 33.0
10 | Building of the annex of headquarters tower for Minerals Commission | 175,171 | 144,180 | 30,991 | 17.7
11 | Building Polytechnic amphitheater | 75,506 | 50,589 | 24,917 | 29.7
12 | Building District 18-unit 3-floor institute prototype | 203,001 | 142,101 | 60,901 | 30.0
13 | Building Secondary/Commercial 50 Person School new auditorium | 60,006 | 41,404 | 18,602 | 35.1
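The Table 3 columns imply the simple relationships C = A − B and percentage change = C/A × 100; the short sketch below applies them to two of the tabulated projects as a check of the column definitions.

```python
def precast_saving(in_situ_usd, precast_usd):
    """Difference C = A - B and percentage change = C/A * 100 (Table 3 column definitions)."""
    diff = in_situ_usd - precast_usd
    return diff, 100.0 * diff / in_situ_usd

# Two projects from Table 3 (values in USD).
projects = [("2-floor head office, Technical Education College", 364_235, 309_600),
            ("Secondary/Commercial School 3-floor 12-unit block", 138_489, 94_172)]
for name, a, b in projects:
    c, pct = precast_saving(a, b)
    print(f"{name}: C = {c:,} USD, change = {pct:.1f}%")  # 54,635 / 15.0% and 44,317 / 32.0%
```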
Questionnaires were prepared as the third case study, covering various markets
including Egypt, Canada and Saudi Arabia. About 48 professionals experienced
with PCC were selected to give an accurate perception of precast concrete, and
questions focused on understanding their scoring of precast concrete's different
traits. These professionals interact closely with the construction markets and were
therefore a significant source of accuracy for assessing and grading the performance
of precast concrete within the Egyptian market. PCC contractor employees and managing
staff took part in the questionnaires; these contractors cover 60 percent of
cumulative precast works within the market. The questionnaire was also shared with
professionals abroad to identify which factors are driven not by the country's
infrastructure or economic standing but by the technical feasibility of the precast
construction method itself. The questionnaire summary, collected using Google Forms,
is given below. Considering COVID-19 circumstances, questionnaires were a much easier
way of collecting data than face-to-face interviews or virtual meetings (Table 5).
Table 4 Like-for-like comparison of precast load-bearing walls vs. traditional load-bearing walls

No | Description | Quantity | Unit price (PCC) [EGP] | Amount (PCC) [EGP] | Unit price (traditional) [EGP] | Amount (traditional) [EGP]
1  | 200 mm thick insulated load-bearing wall builder works | 2,237,500 m² | 391 | 874,675,363 | 615 | 1,377,181,250
2  | 200 mm thick external solid wall panel, gray concrete mold finish | 260,000 m² | 325 | 84,618,242 | 354 | 91,998,400
3  | 200 mm thick internal solid wall panel, gray concrete mold finish | 617,500 m² | 277 | 170,749,924 | 313 | 193,166,350
4  | 150 mm thick solid wall panel, gray concrete mold finish | 495,000 m² | 238 | 117,900,070 | 273 | 134,862,750
5  | 100 mm thick parapet wall, white concrete sand blast finish @ outer face | 202,500 m² | 172 | 34,916,008 | 205 | 41,546,925
6  | Regular concrete beam, gray concrete mold finish | 20,000 m² | 2,048 | 40,961,343 | 2,896 | 57,920,000
7  | Precast concrete stair & landing, gray concrete mold finish | 75,000 m³ | 2,527 | 189,501,086 | 3,346 | 250,950,000
8  | 200 mm thick solid slab, gray concrete mold finish | 175,000 m³ | 467 | 81,665,551 | 890 | 155,750,000
9  | 150 mm thick slab | 970,000 m² | 90 | 87,076,900 | 400 | 388,232,800
10 | 200 mm thick slab | 850,000 m² | 95 | 80,707,500 | 500 | 425,552,500
Total cost | | | | 1,762,771,986 | | 3,117,160,975
Price/m² (2,635,000.00 m²) | | | | 743 EGP/m² | | 1183 EGP/m²
The data represent the responses of the various professionals to the relative index.
The sum of the ranking (R1) is obtained by summing all the indices. To calculate the
mean of the ranking (R2), one divides the sum of ranking (R1) by the total number of
indices, then finds the difference between the sum of ranking (R1) and the mean of the
ranking (R2) as [(R1) − (R2)]. Finally, one squares this difference to obtain
[(R1) − (R2)]². Entering these formulas in a spreadsheet automatically produces the
results (Table 6).
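As a sketch of this tabulation, the pandas snippet below computes R1, R2 and the squared deviations as described; the factor names and rank scores are hypothetical placeholders, since the study's raw questionnaire responses are not reproduced here.

```python
import pandas as pd

# Hypothetical relative-index scores (rows = precast merit factors, columns = respondent groups).
scores = pd.DataFrame(
    {"Quantity surveyors": [3, 1, 2, 4],
     "Architects":         [2, 1, 3, 4],
     "Civil engineers":    [3, 2, 1, 4]},
    index=["Speed of construction", "Quality", "Durability", "Life cycle cost"])

r1 = scores.sum(axis=1)              # sum of ranking (R1) for each factor
r2 = r1.mean()                       # mean of the rankings (R2)
table6 = pd.DataFrame({"R1": r1,
                       "R1 - R2": r1 - r2,
                       "(R1 - R2)^2": (r1 - r2) ** 2})
print(table6)
print("Sum of squared deviations:", table6["(R1 - R2)^2"].sum())
```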
The tackling of challenges and workarounds for PCC within developing regions has been
a subdued, less effective response in comparison with elsewhere; issues encountered in
the implementation of PCC in various countries are summarized in Table 8.
Table 8 Issues encountered in the implementation of PCC in the housing sector in different countries

Place of study | Issues faced | Source
USA | (1) Lack of stable demand for precast concrete construction; (2) low level of standardization; (3) lack of expertise in design and manufacturing; (4) higher costs due to transportation; (5) limitation in sizes of elements due to transportation | [8]
USA | (1) Incompatibility of elements from various manufacturers; (2) communication issues; (3) inability to meet challenging projects due to limitations in transportation; (4) cost of transportation | [9]
USA and Turkey | (1) Perception of poor performance of buildings; (2) lack of expertise in design | [10]
Malaysia | (1) Requirement of huge investments and significant financing in IBS, and higher costs; (2) lack of involvement of small contractors; (3) issues such as moisture penetration and leakage; (4) lack of expertise and exposure to implementation of IBS | (Rahman et al. 2006, 5–6)
Malaysia | (1) Heavy investment and lack of financing; (2) higher costs; (3) lack of expertise in design and execution; (4) lack of flexibility in payment terms; (5) logistics issues; (6) negative perceptions of past performance | (Nawi et al. 2011, 34–37)
Australia | (1) Retrospective addition of Offsite Manufacturing (OSM) to projects; (2) lack of expertise in design and higher design cost; (3) transportation and carriage cost; (4) lack of adequate skilled expertise; (5) requirements for massive financing and stringent payment terms | (Arif 2009)
Hong Kong | (1) Lack of in-house expertise; (2) limited access to site and transportation; (3) resistance to change from schematic approaches | (Jaillon et al. 2009, 239–248)
India | (1) Absence of need for mass housing developments; (2) elevated price and lack of expertise; (3) taxation issues not being addressed; (4) lack of standardization; (5) lack of quality roads for the adequate conveyance of large and heavy elements to the building location | (Das and Jha 2011, 47)
During the pricing, with contractors using only conceptual drawings, there was a large
amount of ambiguity, mainly due to the lack of detailed precast or traditional
structural drawings. Precast contractors had no difficulty pricing the BOQ, whereas
the traditional contractors kept coming back with many queries in order to accurately
price the structural components.
Precast performs well in terms of savings in residential units, but in more
standardized structures such as large utility buildings, theaters, high-ceiling
spaces and bridges, it delivers much greater cost savings in comparison with
traditional construction. The 8-unit workers cottage above, for instance, showed
only a 9.8% cost saving, compared with savings of up to 32 percent for the more
standardized small school building.
Columns and slabs are illustrated as major cost savers in the case study: fewer
columns were required when pricing precast. At times, precast contractors used steel
beams or steel structures combined with precast construction, which allowed for much
wider spans than in traditional pricing. Flat slabs were also considered in traditional
projects; however, what was saved in column numbers was spent on doubly reinforcing
the flat slabs.
The difference between the two BOQs (bills of quantities), as seen in the Results, is
explained by comparing the precast and traditional BOQs: a difference of 75% has been
noted, a stark gap between the two construction methods. This is because less concrete
is wasted when using precast concrete, and there is no risk of flash setting or false
setting. Production can be arranged on site as well as in nearby factories,
transportation can be carried out at night to save fuel and time, and the labor
requirement is lower than for cast-in-situ concrete. All these factors contribute to
the overall cost reduction of a project.
Another component contributing to the price variance was the project's location in a
wide-open desert area, which allowed ample space to lay on-site casting beds for
precast panels and diminished transportation costs to site. Another consideration was
the contractors' limited familiarity with pricing solid wall panels, which are
considered more expensive and are not a widely available product within the Egyptian
market. Given the above, traditional contractors may have overpriced the solid wall
panels, since these are rarely found in the market and are usually subcontracted. This
could increase the total price, giving the traditional contractors less of a chance to
prove cost competitiveness.
(Fig. 4 compares the level of professional agreement on the following precast merits: sound insulation; constricted tolerance; huge spans; thermal inertia diminishing lifetime energy costs; negligible maintenance; insulation offered by sandwich panels; decrease of on-site noise and commotion; lessening of on-site working hours; dimensional accuracy; robustness and durability; eminence and quality of the precast end product; swiftness of construction; lessening of on-site leftover; and modest life cycle cost rate.)
Fig. 4 Overall ranking of agreement of professionals on precast merits using Kendall's coefficient
The results from Fig. 4 disclosed that experts regard modest life cycle cost as the
chief merit of utilizing precast concrete products and consider acoustic insulation as
the least important. From Fig. 4, it can be observed that most of the professionals
agree on using precast panels and endorse their significance in the construction
industry. When asked about life cycle cost, 21% strongly agreed and 26% agreed. It can
be stated that, as a consensus, Egyptian precast users perceive the viability of
precast lifetime benefits and the long life span of such structures.
It can be seen from the table that precast concrete construction is able to reduce
wastage on site. Precast factories where robotic or automated sensor-driven machines
operate have negligible wastage, which makes precast concrete more cost-effective.
This is why almost 75% of professionals either strongly agree or agree on precast
concrete's performance in regard to waste minimization. The longevity of the precast
lifespan also contributes to this factor, largely due to the high durability of
precast concrete and its ease of maintenance. Considering the amount of demolition
waste created by traditional construction, precast longevity will reduce climate and
environmental waste in the long run.
The quality of delivery and speed of construction of precast concrete have been
recognized by the professionals: 83% agreed on the quality of delivered precast
concrete products over traditional products, and 90% of respondents agreed that the
precast method is faster. These indices indicate that precast is regarded as a
higher-quality product delivered with nearly zero error and with roughly 40 to 60
percent reduced delivery time, giving an overall higher level of service and client
satisfaction. A consensus can be reached on these two points: precast is seen by the
Egyptian construction market as a high-end product and service.
The durability of precast concrete is also endorsed by the professionals, while
dimensional stability is only partially agreed with, with a small number of
respondents disagreeing. With the general notion swaying toward agreement, one can
say that professionals expect high precision on site when using precast concrete.
Embedding one element into another requires a tolerance of just a few millimeters,
which must be maintained in precast. For high-precision projects requiring dimensional
sensitivity, precast would be the more viable option. This on-site precision, however,
requires close logistical tracking in production to ensure all dimensions align,
especially since precast tolerance allowances are considered very low.
As less labor and machinery are required for precast concrete, noise disturbance on
site is considered low; moreover, on-site time for precast members is in general only
about 15% of that for traditional construction methods. Professionals agree, swaying
toward the view that precast members decrease on-site disruption and noise pollution.
A majority of respondents swayed toward agreeing that precast provides negligible
maintenance requirements. Precast concrete is, however, a new trade within Egypt and
has not been around long enough to properly attest to its maintenance requirements
and intervals. Most precast vendors do not practice preventative maintenance
approaches for these precast systems due to a lack of know-how.
A majority of professionals agreed on the acoustic insulation capabilities of precast
concrete; this could be explained by the lack of acoustic testing and performance data
marketed within Egypt, which would give precast adopters the additional option of not
having to spend extra on sound insulation. Precast does, however, have a larger and
denser mass than other, more porous traditional concretes. This variance between the
literature and the results indicates a lack of knowledge of the capabilities of
precast concrete.
Reiterating the calculations:

Grand mean (RG) = 2.41, calculated as the grand mean of the means of ranking (R2).

Sum of squared deviations over the k groupings (i = 1 to k): Σ (R1 − RG)² = 11.34

W = 11.34/227.5 = 0.050
Evaluating W at the 95% significance level, the null hypothesis H0 is that the sets of
rankings by quantity surveyors, architects and civil engineers are unrelated; the
alternative hypothesis H1 is that the sets of rankings are related. The test statistic
is χ² = k(n − 1)W, where k is the number of groups being compared, in this case 3 (the
quantity surveyors, architects and civil engineers), giving χ² = 3(14 − 1)(0.050) =
1.95. From the chi-square distribution tables, the critical value is 1.95; since the
observed χ² value of 11.070 is greater than 1.95, the null hypothesis H0 is rejected
and the alternative hypothesis H1, that the sets of rankings by the above groups are
related, is accepted. This shows that there is a high degree of agreement between
contractors, architects and consultants on the advantages of using precast concrete
suspended slabs and columns. Kendall's coefficient of concordance also identified
robustness, speed of construction and insulation measures as the main advantages of
using precast concrete.
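As a cross-check on this procedure, Kendall's coefficient of concordance and the associated chi-square statistic can be computed directly from a rank matrix, as in the sketch below. The rank data are hypothetical placeholders (the paper's raw questionnaire rankings are not reproduced here), so the resulting W will not match the 0.050 reported above, and the standard W formula shown may differ from the paper's spreadsheet variant.

```python
import numpy as np
from scipy import stats

# Hypothetical rankings of n = 14 precast merit factors by k = 3 groups
# (quantity surveyors, architects, civil engineers); rows = groups, columns = factors.
rng = np.random.default_rng(0)
ranks = np.array([rng.permutation(np.arange(1, 15)) for _ in range(3)])
k, n = ranks.shape

rank_sums = ranks.sum(axis=0)                 # R1 for each factor
s = ((rank_sums - rank_sums.mean()) ** 2).sum()
w = 12 * s / (k**2 * (n**3 - n))              # Kendall's coefficient of concordance

chi2 = k * (n - 1) * w                        # test statistic used in the paper
critical = stats.chi2.ppf(0.95, df=n - 1)     # 95% critical value, df = n - 1
print(f"W = {w:.3f}, chi2 = {chi2:.2f}, critical (df={n-1}) = {critical:.2f}")
print("Rankings related" if chi2 > critical else "No significant agreement")
```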
4 Conclusions
The concluding section presents the results of Egyptian case studies, surveys, field
data and field alignment interviews in comparison with local and global literature
to evaluate the adoptability of precast concrete in Egyptian construction markets.
Considering the available data, methodology and other parameters associated with
this work particularly in Egypt, the following conclusions can be stated:
1. Precast concrete has become a prominent constituent of construction in the recent
decade. However, Egypt's socio-economic environment shows some resistance, combined
with a lack of awareness, regarding its full-scale adoption in the heavy construction
industry amidst technological advancements.
2. Consultants prefer traditional concrete to precast in large-scale projects due to
the limited supply of precast contractor services. Only a few prominent precast
contractors are capable of delivering complex precast load-bearing units, so demand
spikes relative to supply, leaving less room for competition; precast prices therefore
also spike in comparison with the remainder of the market.
3. The elimination of consultancy supervision fees associated with traditional
construction is not enough to give precast concrete more competitive pricing,
considering that consultancy fees in Egypt are low in comparison with global markets.
Acknowledgements I would like to thank Dr. Mohamed Nagib Abou-Zeid for his persistence in
keeping me on track and for his priceless patience with me during these difficult COVID-19-related
circumstances and my loving family who were supportive and understanding for the long hours
spent chasing this dream. This work would not have been complete without the professionals who
gave me their heartfelt advice, time and effort to help me close this thesis. They were the key
to delivering well sought out ideas, concepts and theories. Last but certainly not least, my sincere
gratitude to professionals who took time to fill in questionnaires, surveys and sat down for interviews
with no intention but to support my thesis, both those who wanted to stay anonymous and those
willing to be known contributors.
Alkali-Activated Concrete Workability
and Effect of Various Admixtures:
A Review
1 Introduction
However, achieving adequate workability is one of the main challenges for AAMs.
This is ascribed to the inconsistency of raw materials and used activators [11, 25, 29].
Moreover, most commercial chemical admixtures, successfully used with PC, were
reported to be incompatible with the high-alkali environment of AAMs [17, 25, 29]. Hence,
the rheological properties of AAMs must be comprehensively studied, and the basic factors
controlling their workability need to be investigated. This paper reviews the effects of
the nature and dosage of activators and precursors on the rheology of AAMs.
The reported data are expected to guide engineers in understanding the workability
and rheology of AAMs; hence, they can select suitable admixtures according to the
desired workability. The effects of the various ingredients on the rheological
performance of AAMs are discussed in the following sections.
Attention must also be given to the used precursor and the water-to-binder ratio (w/b)
while selecting the suitable NaOH concentration.
Another important factor is the silica modulus (Ms), defined as the SiO2/Na2O molar
ratio. Ms has a significant effect on the distribution of the different ions in the
solution and thus on the rheological properties of AAM suspensions. Adjusting the Ms
directly affects the polymerization degree of silicate oligomers and their adsorption
on the particle surfaces, and hence the rheological properties. Recent studies reported
that increasing the Ms increased the apparent viscosity and yield stress of AAMs [12].
However, in the highly ionized zone (Ms < 1.8), the viscosity of suspensions showed
insignificant responses to modulus variation.
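As a quick illustration of how Ms is obtained in practice, the sketch below computes the SiO2/Na2O molar ratio of a blended activator; the solution composition and masses are assumed values for illustration, not data from the studies cited above.

```python
# Illustrative calculation of the silica modulus Ms = n(SiO2) / n(Na2O) of a
# blended activator. All composition values below are assumptions.
M_SIO2, M_NA2O, M_NAOH = 60.08, 61.98, 40.00   # molar masses, g/mol

def silica_modulus(m_silicate, w_sio2, w_na2o, m_naoh=0.0):
    """Ms of a sodium silicate solution of mass m_silicate (g) with mass
    fractions w_sio2 and w_na2o, optionally blended with solid NaOH (g).
    Every 2 mol of NaOH contributes 1 mol of Na2O equivalent."""
    n_sio2 = m_silicate * w_sio2 / M_SIO2
    n_na2o = m_silicate * w_na2o / M_NA2O + (m_naoh / M_NAOH) / 2.0
    return n_sio2 / n_na2o

# Example: 100 g of silicate solution (29.4% SiO2, 14.7% Na2O) plus 5 g NaOH
print(round(silica_modulus(100.0, 0.294, 0.147, m_naoh=5.0), 2))  # ~1.63
```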
Hydration Products The AAMs hydration process varied based on the precursors
and can be divided into three categories. First, the low-calcium system is generated from
precursors with less than 10% calcium oxide (CaO), such as MK and Class F FA.
The main hydration product is highly cross-linked aluminosilicate geopolymeric gels
known as N-A-S-H. The second category is a high-calcium system based on calcium-
rich precursors with greater than 10% CaO, such as blast furnace slag. The formed
product is tobermorite-like C-A-S-H. Finally, the third category blends the first two
Table 1 Yield stress and plastic viscosity for various concentrations of NaOH (modified
from Zhang et al. [42])

Precursor: fresh fly-ash blended pastes
Alkali activator to precursor ratio (%)   Yield stress (Pa)   Plastic viscosity (Pa·s)
0                                         0.75                0.4
2                                         2.4                 1.3
4                                         2.25                1
6                                         1.8                 1
8                                         1.5                 0.95
10                                        1.4                 0.9
12                                        1.2                 0.9
categories [9]. The AAMs’ rheological behavior and parameters are substantially
different when different precursors are used.
Usually, the third category is used to compensate for the drawbacks associated
with the first and second categories. For instance, the high calcium content for high
calcium precursors leads to changes in the precipitated gel systems, affecting yield
stress. It was reported that the replacement of slag with fly ash reduced the yield
stress [6]. However, increasing the replacement level of slag to 70% by weight of
fly ash adversely affects the yield stress and consistency coefficient. This is
attributed to reduced flocculation caused by fine FA particles and a faster dissolution
of slag, resulting in low solid content [6]. Moreover, finer fly ash than slag enhanced
attractive inter-particle forces. Conversely, it must be considered that the introduction
of more slag means faster dissolution, a higher structure build-up rate, and the formation
of more early C-A-S-H gels, which shortens the percolation time and accelerates storage
modulus growth and flow loss [13, 22]. Another important aspect is the precursor's content,
as this will reflect the packing and filler effects on the workability. Also, it will
depend on the precursor’s particle sizes, shapes, and amount of fines. The viscosity
and yield stress of suspensions increase as the solid content increases. The following
two aspects can explain this: (1) more solid particles induce the particle-cluster
effect; (2) less added water increases the solids concentration and the inter-particle
friction, making the mortar more reluctant to flow, as shown in Table 2 [39].
Another set of admixtures that can affect AAMs’ workability and rheolog-
ical properties include shrinkage-reducing admixtures (SRA), retarders, and de-
flocculants. A previous study by Palacios and Puertas [23] showed that SRA addition
did not improve the fluidity of alkali-activated slag paste and even increased the yield
stress. Generally, the effect on fluidity will vary depending on the used dosages and
the molecular weight of used shrinkage reducing admixture such as polypropylene
glycol (PG), as shown in Fig. 2 [41]. On the other hand, citric acid as a setting retarder
admixture was found to increase AAMs fluidity.
[Fig. 2: effect of shrinkage-reducing admixture molecular weight (400, 1000, 2000) on fluidity [41]]
Adding inert mineral additions (e.g., limestone powder) is known to improve rheo-
logical properties due to optimizing the particle packing and freeing more water for
lubrication. Adding up to 20% by weight of limestone powder in sodium silicate-
activated slag-fly ash grout reduced yield stress and plastic viscosity [37]. On the
other hand, it was reported that AAM is more sensitive to changes in liquid/solid
ratio than ordinary Portland cement (OPC) concrete [2]. This will also be affected by the nature of
aggregate and the amount of fine aggregate, as increasing fine aggregate will result
in a higher surface area. Adding fiber to AAM to increase tensile strength and reduce
potential cracking has become a common practice. Results reported by [19] showed
that the aspect ratio and shape of the used fiber and contents are the most dominating
factors for the mixture fluidity, as shown in Table 3. However, its shape will have
a negligible effect at low fiber contents. Also, changing the type of fiber (i.e., steel
or polyvinyl alcohol (PA)) will have a significant effect. Increasing the PA fiber
volume raised the viscosity and yield stress of the AAS composite due to its poor
dispersibility [4]. However, producing PA fiber-reinforced AAS with high ductility,
low plastic viscosity, and yield stress was still possible.
Two critical values that control the flowability of any material are plastic viscosity
and yield stress. Plastic viscosity reflects a material resistance to flow after the mate-
rial begins to flow. Yield stress indicates the shear stress required to initiate flow.
The relationships between shear stress and shear strain rate of fresh PC have been
developed. The Bingham model, the modified Bingham model and the Herschel–Bulkley
(H–B) model are the most widely accepted for describing the rheological behavior
of PC [14]. Adding admixtures to PC leads to a deviation from the linear relationship
between shear stress and shear rate, which the modified Bingham model was proposed
to capture (Eq. 1) [39]:
τ = τ0 + μγ̇ + cγ̇²   (1)
where τ0 is the yield stress (Pa), μ is the plastic viscosity (Pa·s), and c is a
pseudoplastic constant.
Published literature revealed that the Bingham model, modified Bingham model,
and H–B model are empirically suitable for describing the rheological behavior
of AAMs too. Generally, the Bingham model is recommended for NaOH-activated
pastes [44], while the H–B model better suits the sodium silicate-activated pastes [24].
Both the Bingham model and the H–B model [35] were used in the systems activated
by Na2 CO3 (single or mixed with other activators). The modified Bingham model
describes the non-linear behavior deviating from Bingham fluid and avoids a variable
dimension parameter like in the H–B model without mathematical limitation in the
low shear rate region [18]. However, it is rarely used to describe AAMs’ rheological
behavior [43]. The applicability of the modified Bingham model in different AAM
systems needs further study.
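To make Eq. (1) concrete, the sketch below fits the modified Bingham model to a hypothetical set of rheometer readings; the data points are illustrative only, and the fit is a plain quadratic least-squares regression rather than a procedure taken from any of the cited studies.

```python
# Fit the modified Bingham model tau = tau0 + mu*gamma_dot + c*gamma_dot**2
# (Eq. 1) to illustrative shear rate / shear stress readings.
import numpy as np

gamma_dot = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # shear rate, 1/s (assumed)
tau = np.array([12.1, 14.0, 17.5, 24.2, 30.1, 35.5, 40.4])         # shear stress, Pa (assumed)

c, mu, tau0 = np.polyfit(gamma_dot, tau, deg=2)   # quadratic least squares

print(f"yield stress tau0        = {tau0:.2f} Pa")
print(f"plastic viscosity mu     = {mu:.3f} Pa*s")
print(f"pseudoplastic constant c = {c:.5f} Pa*s^2")
```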
On the other hand, the composition of the aluminosilicates also has a signifi-
cant impact on rheological behavior. Alkali-activated metakaolin systems generally
exhibit higher viscosity and apparent yield stress due to their plate-like particles and
large specific surface area [31], which means that suitable activators and adequate
water are needed to ensure good workability [1]. As reported, alkali-activated fly
ash pastes behaved like a Bingham fluid [5], while the rheological behavior of the
fly ash-slag blended systems was more consistent with the H–B model [6]. These
rheological behaviors could not be separated from the differences in particle size,
dissolution behavior, the reactivity of different precursor particles, and the initially
precipitated gels.
6 Conclusion
This study demonstrates that the rheological properties of AAMs are highly sensitive to
the chemical admixtures used, and that this sensitivity varies with the activator (type
and concentration), the precursor, and other ingredients such as mineral additives. The
following conclusions can be drawn from this review:
• There are contradictory data for the NaOH concentration effects on yield stress
and plastic viscosity of AAMs. Attention must be given to the used precursor and
the w/b while selecting the suitable NaOH concentration.
• The AAMs' rheological behavior and parameters are substantially different when
different precursor types and dosages are used.
• The amount of dissolved Ca2+ is the main factor that controls the interactions
between HRWRA admixtures and AAM.
• The Bingham model is the most representative model for the performance of
AAMs.
References
11. Hammad N, El-Nemr A, El-Deen Hasan H (2021b) The performance of fiber GGBS based
alkali-activated concrete. J Build Eng 42(March):102464. https://doi.org/10.1016/j.jobe.2021.
102464
12. Hasnaoui A, Ghorbel E, Wardeh G (2019) Optimization approach of granulated blast furnace
slag and metakaolin based geopolymer mortars. Constr Build Mater 198:10–26. https://doi.
org/10.1016/j.conbuildmat.2018.11.251
13. Jang JG, Lee NK, Lee HK (2014) Fresh and hardened properties of alkali-activated fly ash/
slag pastes with superplasticizers. Constr Build Mater 50:169–176. https://doi.org/10.1016/j.
conbuildmat.2013.09.048
14. Jiao D, Shi C, Yuan Q, An X, Liu Y, Li H (2017) Effect of constituents on rheological properties
of fresh concrete-A review. Cement Concr Compos 83:146–159. https://doi.org/10.1016/j.cem
concomp.2017.07.016
15. Kashani A, Provis JL, Qiao GG, Van Deventer JSJ (2014) The interrelationship between surface
chemistry and rheology in alkali activated slag paste. Constr Build Mater 65:583–591. https://
doi.org/10.1016/j.conbuildmat.2014.04.127
16. Kong DLY, Sanjayan JG, Sagoe-Crentsil K (2007) Comparative performance of geopolymers
made with metakaolin and fly ash after exposure to elevated temperatures. Cem Concr Res
37(12):1583–1589. https://doi.org/10.1016/j.cemconres.2007.08.021
17. Li H, Wang Z, Zhang Y, Zhang G, Zhu H (2021) Composite application of naphthalene
and melamine-based superplasticizers in alkali activated fly ash (AAFA). Constr Build Mater
297:123651. https://doi.org/10.1016/j.conbuildmat.2021.123651
18. Li L, Lu JX, Zhang B, Poon CS (2020) Rheology behavior of one-part alkali activated slag/
glass powder (AASG) pastes. Constr Build Mater 258:120381. https://doi.org/10.1016/j.con
buildmat.2020.120381
19. Liu Y, Zhang Z, Shi C, Zhu D, Li N, Deng Y (2020) Development of ultra-high performance
geopolymer concrete (UHPGC): influence of steel fiber on mechanical properties. Cem Concr
Compos 112(November 2019). https://doi.org/10.1016/j.cemconcomp.2020.103670
20. Lu C, Zhang Z, Shi C, Li N, Jiao D, Yuan Q (2021) Rheology of alkali-activated materials:
a review. Cement Concr Compos 121(April):104061. https://doi.org/10.1016/j.cemconcomp.
2021.104061
21. McLellan BC, Williams RP, Lay J, Van Riessen A, Corder GD (2011) Costs and carbon
emissions for geopolymer pastes in comparison to ordinary portland cement. J Clean Prod
19(9–10):1080–1090. https://doi.org/10.1016/j.jclepro.2011.02.010
22. Palacios M, Gismera S, Alonso MM, d’Espinose de Lacaillerie JB, Lothenbach B, Favier A,
Brumaud C, Puertas F (2021) Early reactivity of sodium silicate-activated slag pastes and its
impact on rheological properties. Cem Concr Res 140(October 2020):106302. https://doi.org/
10.1016/j.cemconres.2020.106302
23. Palacios M, Puertas F (2005) Effect of superplasticizer and shrinkage-reducing admixtures on
alkali-activated slag pastes and mortars. Cem Concr Res 35(7):1358–1367. https://doi.org/10.
1016/j.cemconres.2004.10.014
24. Palacios M, Banfill PFG, Puertas F (2008) Rheology and setting of alkali-activated slag pastes
and mortars: effect if organic admixture. ACI Mat J 105(2):140–148. https://doi.org/10.14359/
19754
25. Phoo-ngernkham T, Chindaprasirt P, Sata V, Hanjitsuwan S, Hatanaka S (2014) The effect of
adding nano-SiO2 and nano-Al2O3 on properties of high calcium fly ash geopolymer cured at
ambient temperature. Mater Des 55:58–65. https://doi.org/10.1016/j.matdes.2013.09.049
26. Poulesquen A, Frizon F, Lambertin D (2011) Rheological behavior of alkali-activated
metakaolin during geopolymerization. J Non-Cryst Solids 357(21):3565–3571. https://doi.org/
10.1016/j.jnoncrysol.2011.07.013
27. Provis JL, Duxson P, van Deventer JSJ, Lukey GC (2005) The role of mathematical modelling
and gel chemistry in advancing geopolymer technology. Chem Eng Res Des 83(7 A):853–860.
https://doi.org/10.1205/cherd.04329
28. Provis JL (2018) Alkali-activated materials. Cem Concr Res 114:40–48. https://doi.org/10.
1016/j.cemconres.2017.02.009
29. Rakngan W, Williamson T, Ferron RD, Sant G, Juenger MCG (2018) Controlling workability
in alkali-activated class C fly ash. Constr Build Mater 183:226–233. https://doi.org/10.1016/j.
conbuildmat.2018.06.174
30. Rifaai Y, Yahia A, Mostafa A, Aggoun S, Kadri EH (2019) Rheology of fly ash-based
geopolymer: effect of NaOH concentration. Constr Build Mater 223:583–594. https://doi.org/
10.1016/j.conbuildmat.2019.07.028
31. Rovnaník P, Rovnaníková P, Vyšvařil M, Grzeszczyk S, Janowska-Renkas E (2018) Rheolog-
ical properties and microstructure of binary waste red brick powder/metakaolin geopolymer.
Constr Build Mater 188:924–933. https://doi.org/10.1016/j.conbuildmat.2018.08.150
32. Roy DM, Jiang W, Silsbee MR (2000) Chloride diffusion in ordinary, blended, and alkali-
activated cement pastes and its relation to other properties. Cem Concr Res 30(12):1879–1884.
https://doi.org/10.1016/S0008-8846(00)00406-3
33. Shi C (2002) Characteristics and cementitious properties of ladle slag fines from steel
production. Cem Concr Res 32(3):459–462. https://doi.org/10.1016/S0008-8846(01)00707-4
34. Sun B, Ye G, de Schutter G (2022) A review: Reaction mechanism and strength of slag and fly
ash-based alkali-activated materials. Constr Build Mater 326(February):126843. https://doi.
org/10.1016/j.conbuildmat.2022.126843
35. Torres-Carrasco M, Rodríguez-Puertas C, Del Mar Alonso M, Puertas F (2015) Alkali activated
slag cements using waste glass as alternative activators. rheological behaviour. Boletin de La
Sociedad Espanola de Ceramica y Vidrio 54(2):45–57. https://doi.org/10.1016/j.bsecv.2015.
03.004
36. Uchikawa H, Sawaki D, Hanehara S (1995) Influence of kind and added timing of organic
admixture on the composition, structure and property of fresh cement paste. Cem Concr Res
25(2):353–364. https://doi.org/10.1016/0008-8846(95)00021-6
37. Xiang J, Liu L, Cui X, He Y, Zheng G, Shi C (2018) Effect of limestone on rheological, shrinkage
and mechanical properties of alkali—Activated slag/fly ash grouting materials. Constr Build
Mater 191:1285–1292. https://doi.org/10.1016/j.conbuildmat.2018.09.209
38. Xie J, Kayali O (2016) Effect of superplasticiser on workability enhancement of Class F and
Class C fly ash-based geopolymers. Constr Build Mater 122:36–42. https://doi.org/10.1016/j.
conbuildmat.2016.06.067
39. Yahia A, Khayat KH (2001) Analytical models for estimating yield stress of high-performance
pseudoplastic grout. Cem Concr Res 31(5):731–738. https://doi.org/10.1016/S0008-884
6(01)00476-8
40. Yang KH, Song JK, Song KI (2013) Assessment of CO2 reduction of alkali-activated concrete.
J Clean Prod 39:265–272. https://doi.org/10.1016/j.jclepro.2012.08.001
41. Ye H, Fu C, Lei A (2020) Mitigating shrinkage of alkali-activated slag by polypropylene glycol
with different molecular weights. Constr Build Mater 245:118478. https://doi.org/10.1016/j.
conbuildmat.2020.118478
42. Zhang DW, Wang DM, Lin XQ, Zhang T (2018) The study of the structure rebuilding and
yield stress of 3D printing geopolymer pastes. Constr Build Mat 184:575–580. https://doi.org/
10.1016/j.conbuildmat.2018.06.233
43. Zhang DW, Wang DM, Liu Z, Xie FZ (2018) Rheology, agglomerate structure, and particle
shape of fresh geopolymer pastes with different NaOH activators content. Constr Build Mat
187:674–680. https://doi.org/10.1016/j.conbuildmat.2018.07.205
44. Zhang DW, Zhao KF, Xie FZ, Li H, Wang DM (2020) Effect of water-binding ability of
amorphous gel on the rheology of geopolymer fresh pastes with the different NaOH content
at the early age. Constr Build Mat 261:120529. https://doi.org/10.1016/j.conbuildmat.2020.
120529
45. Zhang Z, Zhu Y, Yang T, Li L, Zhu H, Wang H (2017) Conversion of local industrial wastes
into greener cement through geopolymer technology: A case study of high-magnesium nickel
slag. J Clean Prod 141:463–471. https://doi.org/10.1016/j.jclepro.2016.09.147
Inconsistencies and False Assumptions
Related to the Determination of Design
Values for FRP Systems
Abstract Fiber reinforced polymer (FRP) materials have been used for over half
a century in the aerospace, marine and automotive industries. During the last few
decades, civil and structural engineers have begun to see the advantage of these mate-
rials as a means to retrofit and rehabilitate existing structural members in buildings,
bridges and other civil infrastructures. There are many guidelines and some codes that
help engineers to design these systems for various structural applications. Despite
the increase in acceptance of these systems there remains some major confusion
related to the most fundamental part of the design process… How do you determine
the characteristic values/design properties of the material? Some of the confusion
stems from the assumption that all fiber reinforced polymers (FRPs) use the same
method to report their design values… they do not. Or that the ASTM standards
being used by the industry provide consistent design properties… they do not. Some
engineers assume that following CSA S806, CSA S6-06 or ACI 440.2R-17 will get
them consistent design properties… it will not. These issues have created confusion,
and they have not been appropriately addressed. This paper will point out the various
inconsistencies and false assumptions in more detail while presenting a suggested
solution that has been in practice in other industries using FRP materials.
1 Introduction
The current guidelines for design and strengthening of civil infrastructure with FRP
composites lack consistency across the industry. In this paper we will use the latest
guidelines and codes from around the world to illustrate and define the source of
the confusion. The goal of this paper is to show how tensile data is evaluated, how
the methods for reducing data for design/characteristic properties vary, and how FRP
manufacturers can report different values using the same guideline. Another goal is to
provide a more complete understanding of FRP materials for practicing engineers.
This paper will focus on wet layup FRP systems with examples of unidirectional
carbon fiber composite materials provided.
2 Material Properties
ACI 440.2R-17 Guide for the Design and Construction of Externally Bonded FRP
Systems for Strengthening Concrete Structures is the design guideline for exter-
nally bonded FRP systems used primarily in buildings for structural strengthening
of concrete walls, columns, beams and slabs. This document was developed to
allow the use of alternate structural materials within the International Building
Code. ACI440.2R provides a roadmap for the design and use of composite mate-
rials, including requirements for materials qualification, structural engineering design
equations and construction requirements. The main point of discussion for this paper
is the material properties section. It is important to note that the industry currently
recognizes three different ways to define the FRP system by its cross-sectional area.
The “dry fiber” method specifies properties based on the feed fiber properties and the
rule of mixtures; the “net-fiber area” method reports tensile properties based on a
theoretical fabric thickness; and the “gross laminate area” method reports properties
based on directly measured cross-sectional areas and is the most commonly used method
for wet layup FRPs, despite contrary statements in ACI 440.2R.
Material properties and specifically the tensile behavior are defined in Section 4.3
of ACI440.2R. In order to define the material tensile properties of a composite, the
cross-sectional area of the composite must be known to calculate the tensile strength
and elastic modulus. ACI 440.2R provides general comments regarding two methods
for determining this area. The first is the net-fiber area, and the second is the gross
laminate area. ACI 440.2R defines the net-fiber area as “the known area of fiber,
neglecting the total width and thickness of the cured system; thus, resin is excluded”.
The gross laminate area is defined as the measured cross-sectional area of the cured
composite. No additional clarification or direction is provided on the exact calculation
to determine the net-fiber area other than the statement above which does not help
engineers attempting to independently calculate these values. Per ACI 440.2R, the
net-fiber area is recommended for wet layup composites due to the variation in
thickness measured in the field. The gross laminate area is recommended for pre-
cured laminates, which are manufactured under controlled manufacturing conditions.
Note that ACI 440.2R provides no guidance on the determination of the dry fiber
area or discussion on the rule of mixtures.
Although variation in thickness exists with wet layup composites, measuring
the thickness in the field is a practice that can be readily confirmed by inspectors
or engineers and compared to values stated on product data sheets. The net-fiber
thickness is a more abstract measurement provided by the manufacturer that does
not take into account resin contributions or fabric architecture. The gross laminate
area method for reporting tensile properties of wet layup composites is the preferred
method as it is more relevant and practical from a quality control standpoint. Gross
laminate measurements can also be supplemented with additional measurements to
aid in confirming the gross laminate area of a wet layup composite. This can include
measuring the dry fabric thickness and comparing it to cured composite thickness of
the witness panels made for tensile testing to establish a gross laminate thickness. Dry
fabric thickness can be measured using ASTM D1777 Standard method for Thickness
of Textile Materials. ASTM D1777 is used to determine the thickness of dry textiles
by applying a known pressure over a textile and measuring the resulting thickness.
Note that the thickness of the fabric/textile will inherently include air voids that exist
due to the fabric architecture. When epoxy is added to the fabric, witness panels
can be used to confirm the gross laminate area. Witness samples are typically made
according to published saturation ratios from the manufacturer and with appropriate
tools to create samples of uniform thicknesses. Together, the combination of dry
fabric thickness (not filaments of fiber) and cured composite thickness can aid in
establishing a consistent gross laminate area for wet layup systems. The goal here is
to have a method available for independent verification of thickness across all FRP
systems, which is currently only a myth.
Although two methods exist for defining the cross-sectional area, in practice, the
decision on which method to use is left up to the manufacturer. Major FRP manufac-
turers of wet layup systems may publish tensile properties based on one or the other,
further confusing engineers. Additionally, since a process for calculating the net-fiber
area is not actually provided in ACI 440.2R, it can be difficult to independently verify
the accuracy of the net-fiber area selected for an FRP system. Fortunately, other docu-
ments exist with clear equations for calculating the net-fiber area tensile properties
(EN 2561, 1995, and ISO 10406–2, 2015). The net-fiber thickness is calculated by
dividing the areal weight of the fabric by the density of the feed fiber. This thickness
results in a significantly smaller cross-sectional area than the gross laminate area.
Using the net-fiber area to report tensile properties will produce properties approx-
imately three-times higher than gross laminate area properties, due to the smaller area.
The gross laminate tensile strength of a wet layup carbon fiber composite can have
strengths typically between 100 and 170 ksi. The same composite evaluated using
the net-fiber area can have tensile strengths greater than 400 ksi. Similarly, a modulus
of 12 Msi using the gross laminate area can be reported as 35 Msi using the net-fiber
area. To add to this confusion, the dry fiber properties of the raw feed fiber used to
manufacture the composite fabric may also be reported. These values are typically
in the range of 550–700 ksi for tensile strength and greater than 35 Msi for elastic
modulus of carbon fiber. The values of the feed fiber are not necessarily indicative
of the final properties since the finished material undergoes weaving, stitching, and
other processes during construction (commonly referred to as the “fabric architec-
ture”) that can affect performance. Thus, attempting to estimate composite properties
using the rule of mixtures from the feed fiber and resin properties, for example, may
not yield appropriate values (FIB-90, 2019). Refer to Table 1 for a range of tensile
values for one CFRP system determined according to the net-fiber area and gross
laminate area methods and compared to the dry feed fiber properties.
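The effect of the two area definitions can be sketched numerically. In the example below, the fabric areal weight, feed-fiber density, laminate thickness, and tensile capacity per unit width are hypothetical values chosen only to show the roughly three-fold difference in reported stress; they are not data for any particular product.

```python
# Net-fiber thickness = fabric areal weight / feed-fiber density, and its
# effect on the reported tensile stress. All inputs are hypothetical.
areal_weight = 600.0        # fabric areal weight, g/m^2 (assumed)
fiber_density = 1.80e6      # carbon feed-fiber density, g/m^3 (1.80 g/cm^3)
t_gross = 1.0e-3            # measured cured laminate thickness, m (assumed)
force_per_width = 1.0e6     # tensile capacity per unit width, N/m (assumed)

t_net = areal_weight / fiber_density        # net-fiber thickness, m
sigma_gross = force_per_width / t_gross     # gross laminate stress, Pa
sigma_net = force_per_width / t_net         # net-fiber stress, Pa

print(f"net-fiber thickness   = {t_net * 1e3:.2f} mm")
print(f"gross laminate stress = {sigma_gross / 1e6:.0f} MPa")
print(f"net-fiber stress      = {sigma_net / 1e6:.0f} MPa "
      f"({sigma_net / sigma_gross:.1f}x higher)")
```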
Practicing engineers comparing literature from different manufacturers for similar
composite materials must watch for these differences in reporting tensile values, or
they risk specifying the system with the highest published properties without
understanding that the apparent improvement stems from the reporting method rather
than from better performance.
The tensile properties of composites are reported according to two main ASTM
standards: ASTM D3039 Standard Test Method for Tensile Properties of Polymer
Matrix Composite Materials and ASTM D7565 Standard Test Method for Deter-
mining Tensile Properties of Fiber Reinforced Polymer Matrix Composites Used for
Strengthening of Civil Structures. The first standard, ASTM D3039, uses conven-
tional definitions of tensile strength and elastic modulus. That is, the ultimate tensile
strength is defined as the maximum force before failure over the average measured
cross-sectional area. (In practice, the cross-sectional area is normalized for reporting
purposes to the gross laminate thickness or net-fiber thickness dictated by the FRP
manufacturer. A normalized thickness is used to validate the layer thickness with
the layer thickness used in design.) The elastic modulus is defined as the change in
applied stress between two strain points over the difference between the two strain
points. The two strain points are selected between the linear portions of the stress–
strain plot. ASTM D3039 recommends those points have a difference of 0.002 and
recommends 0.001 to 0.003 as a starting strain range. If the stress–strain curves
experience nonlinear behavior within the recommended range, a more suitable range may be selected.
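A small sketch of the chord-modulus bookkeeping described above is given below; the stress–strain points are hypothetical and perfectly linear, so the output simply illustrates the calculation, not real coupon behavior.

```python
# Chord modulus between two strain points (default 0.001 and 0.003), in the
# spirit of the ASTM D3039 definition discussed above. Data are illustrative.
import numpy as np

strain = np.array([0.0005, 0.001, 0.0015, 0.002, 0.0025, 0.003, 0.004])
stress_ksi = np.array([7.3, 14.6, 21.9, 29.2, 36.5, 43.8, 58.4])

def chord_modulus(strain, stress, eps1=0.001, eps2=0.003):
    """Stress change between eps1 and eps2 divided by the strain difference,
    interpolating the stress record at the two strain points."""
    s1 = np.interp(eps1, strain, stress)
    s2 = np.interp(eps2, strain, stress)
    return (s2 - s1) / (eps2 - eps1)

E_ksi = chord_modulus(strain, stress_ksi)
print(f"chord modulus = {E_ksi / 1000:.1f} Msi")   # 1 Msi = 1000 ksi
```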
[Figures: representative tensile stress (ksi) versus strain (%) curves for the CFRP system]
Table 2 Comparison of ASTM D3039 and ASTM D7565 tensile properties of a unidirectional
CFRP system

Property                                ASTM D3039 (gross laminate area)   ASTM D3039 (net-fiber area)   ASTM D7565
Tensile strength / force per inch       170 ksi                            439 ksi                       13.6 kips/in
Tensile modulus / stiffness per inch    14.6 Msi                           37.7 Msi                      1.170 kips/in
4 Design Properties
Once tensile data has been acquired from laboratory testing, FRP manufacturers must
reduce the data to report design values which can be used in the strengthening design
calculations. According to Section 4.3.1 of ACI440.2R, the design tensile strength
and design tensile strain are reported as the mean minus three standard deviations,
based on a minimum of 20 replicates. Using the ACI440.2R approach, the design
properties require a reduction to the measured ultimate tensile strength and measured
rupture strain by three standard deviations. Although a conservative approach, the
design modulus is left unchanged. The design modulus is to be reported as the
measured elastic modulus, calculated in accordance with the relevant standard, in
this case ASTM D3039. Thus, the design modulus is left unchanged from the mean
measured modulus. It should be understood that the most important design variable
for structural engineers is the tensile modulus, and although many engineers assume
that it is reduced, ACI 440.2R does not require it, and in many cases, it is a fallacy
to assume that the reported design modulus is conservative.
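For illustration only, the ACI 440.2R-style reduction described above can be written out as follows; the coupon strengths are hypothetical stand-ins for a qualification data set of at least 20 replicates.

```python
# Design strength per the ACI 440.2R approach described above: mean minus
# three standard deviations; the design modulus is simply the mean measured
# modulus (no statistical reduction). Strength values are hypothetical.
import statistics as st

strength_ksi = [165, 172, 158, 169, 175, 161, 170, 168, 173, 166,
                171, 159, 167, 174, 163, 170, 169, 162, 168, 172]

design_strength = st.mean(strength_ksi) - 3 * st.stdev(strength_ksi)
print(f"mean strength   = {st.mean(strength_ksi):.1f} ksi")
print(f"design strength = {design_strength:.1f} ksi")
# The modulus would be reported as the mean of the measured moduli, unreduced.
```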
In regard to rupture strain, this value is typically reported as a calculated strain
based on linear extrapolation of the stress–strain plot or derivation of Hooke’s Law.
This is due to common laboratory practice of using removable extensometers to
measure strain rather than using strain gauges or non-contacting extensometers.
ASTM D3039 allows the use of clip-on extensometers which are removed prior
to breaking. Consequently, the extrapolation for rupture strain can vary depending
on the strain region selected for the linear fit. One may derive the strain using the
rupture stress and calculated modulus (Hooke’s Law) or linearly extrapolate the strain
based on the stress–strain plot using the strain region used to determine modulus or
the entire strain region up to the removal of the extensometer. These methods may
produce similar results but not necessarily.
Such strain derivations and extrapolations may result in reported rupture strains
that are lower than the true rupture strains. Even when ultimate rupture is measured
correctly, other issues may affect the accuracy of the readings at ultimate such as
slippage in the strain measuring equipment or chattering in the strain readings due
to non-uniform tension in the fiber or partial rupture of fibers during testing. Direct
measurements of rupture strain are difficult to capture and not the default option
with laboratories. The variations in how rupture strain is determined can result in
reported strain data with larger scatter in the sample mean and a larger standard
deviation. Given these issues, a design strain based on ACI 440.2R may result in
overconservative design strains.
On the other hand, elastic modulus is given zero reduction for the design consid-
erations in ACI 440.2R. This is unusual, given that designing with FRP is primarily
based on stiffness equivalency (elastic modulus) and governed by strain compati-
bility to the host materials with typical strains of 0.4% and typically no greater than
0.6%. The focus on rupture strain and ultimate strength for reporting design prop-
erties is an incomplete approach. Additionally, using the mean modulus for design
does not allow for the expected variation in the material performance between field-
made laminates and the initial material qualification testing performed to establish
the design modulus.
It should be understood that published design values for code approved products
are based on samples made under controlled environments and ideal curing conditions
specifically for the purpose of qualification, which is performed by trusted and accred-
ited laboratories. These systems will undoubtedly show exceptional performance
during testing. Verifying the performance of field-made samples (fabricated under
changing field conditions) to these design values (fabricated under ideal conditions)
can be difficult. Add to this the chance of an inexperienced laboratory performing
field testing and meeting the design properties becomes much more difficult. Field-
prepared samples are likely to meet the design values for strength and strain, which
have been reduced by three standard deviations. However, the elastic modulus can
easily fail to meet the unreduced design modulus given that there is no room for
error.
Historical data has shown that both the tensile strength and tensile modulus of
field-made samples are lower than laboratory-made values on which the design prop-
erties are based. These differences can be seen in Table 3 which compares results
from qualification testing and field-prepared samples of the same CFRP system
tested over the last two years. For the qualification testing, over 80 specimens of the
CFRP system were tested by an accredited laboratory (rather than the 20 minimum
required in ACI440.2R) and over 270 results were collected from 16 different field
projects. The average values are shown for both sample sets. As a reference, all indi-
vidual values for tensile strength and modulus from the qualification testing and field
testing have also been plotted in Fig. 4 at the end of the document. The average tensile
strength and elastic modulus of the field-prepared samples show a drop in proper-
ties, while tensile strain remained unchanged. Tensile strength decreased by 8% and
elastic modulus by 6%. Differences between laboratory samples and field-prepared
samples demonstrate the need for an appropriate reduction to the design modulus to
account for the expected deviations in field-prepared samples. ACI440.2R and other
codes/guidelines would benefit from adopting an approach that reduces the elastic
modulus for design purposes as well. As stated earlier, this would align better with
the current design equations that are based more on the elastic modulus (stiffness
equivalency and strain compatibility).
An alternate approach for determining design properties for FRP systems which
has gained acceptance in other guidelines in the FRP industry is based on a two-
parameter Weibull distribution per ASTM D7290 Standard Practice for Evalu-
ating Material Property Characteristic Values for Polymeric Composites for Civil
Engineering Structural Applications. This standard and its reduction approach is
referenced in the following:
1. Department of Defense: Composite Materials Handbook 17
2. AASHTO Guide Specification for Design of Bonded FRP Systems for Repair
and Strengthening of Concrete Bridge Elements [1]
3. AWWA C305-18 CFRP Renewal and Strengthening of Prestressed Concrete
Cylinder Pipe (PCCP).
According to CMH-17, the material properties of fiber reinforced composite struc-
tures are best characterized by the Weibull distribution (ASTM D7290) and should be
used for design situations. For bridge applications, American Association of State
Highway and Transportation Officials (AASHTO) has developed and published a
specification for the design of FRP systems titled The Guide Specification for Design
of Bonded FRP Systems for Repair and Strengthening of Concrete Bridge Elements.
Section 1.4.3 states: “The design value of the strength or failure strain of the FRP
reinforcement used for strengthening shall be computed in accordance with ASTM
D7290 standard practice”. In the pipe rehabilitation industry, the American Water
Works Association has worked for the past ten years to develop AWWA C305 CFRP
Renewal and Strengthening of Prestressed Concrete Cylinder Pipe (PCCP). Within
this design standard, Section 3.5.2 states that “the characteristic value of strength
and modulus shall be determined in accordance with ASTM D7290”.
The characteristic value is defined in ASTM D7290 as a statistically based material
property representing the 80% lower confidence bound on the 5th-percentile value
of a specified population. The characteristic value accounts for statistical uncertainty
due to a finite sample size. According to the standard, the 5th-percentile value repre-
senting the 80% lower confidence bound is selected to produce characteristic values
comparable to other civil engineering materials that utilize a Load and Resistance
Factor Design (LRFD) approach. The Weibull reduction accounts for both the sample
size and the coefficient of variation of a sample set to determine an appropriate confi-
dence factor. Although a minimum sample size is not specifically required, ASTM
D7290 recommends 30 replicates for composite systems based on other engineered
composite materials such as wood. Additionally, ASTM D7290 does not specifically
define which properties to reduce, but it can be inferred that the reduction is applied
to key design variables.
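As an illustrative sketch only, the snippet below fits a two-parameter Weibull distribution to a hypothetical strength data set and reports its 5th-percentile value; ASTM D7290 additionally applies a data-confidence factor (a function of sample size and coefficient of variation) to arrive at the 80% lower confidence bound, and that factor is not reproduced here.

```python
# Two-parameter Weibull fit and 5th-percentile value for hypothetical tensile
# strengths. This is NOT the full ASTM D7290 procedure: the standard's
# data-confidence factor for the 80% lower confidence bound is omitted.
import numpy as np
from scipy import stats

strength_ksi = np.array([165, 172, 158, 169, 175, 161, 170, 168, 173, 166,
                         171, 159, 167, 174, 163, 170, 169, 162, 168, 172,
                         164, 176, 160, 167, 171, 165, 169, 173, 166, 170])

shape, loc, scale = stats.weibull_min.fit(strength_ksi, floc=0)   # 2-parameter fit
nominal_5th = stats.weibull_min.ppf(0.05, shape, loc=0, scale=scale)

print(f"Weibull shape = {shape:.1f}, scale = {scale:.1f} ksi")
print(f"5th-percentile (nominal) value = {nominal_5th:.1f} ksi")
```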
A comparison of the ACI 440.2R analysis and ASTM D7290 analysis was
performed on the same CFRP system above to demonstrate differences in each design
method. Following the ACI 440.2R guideline, the design tensile strength and design
strain were determined by subtracting three standard deviations from the mean values.
No reduction was applied to the modulus. For the ASTM D7290 design, the tensile
strength, elastic modulus and rupture strain were all reduced. ASTM D7290 design
properties were determined using Microsoft Excel and Origin statistical software by
OriginLab to solve for the equations defined in the standard.
The design values (ACI440.2R) and characteristic values (ASTM D7290) for the
CFRP system are presented in Table 4.
The design properties show the effect of the two reduction methods on the same
set of data. ACI 440.2R results in a slightly more conservative design strength and
strain than ASTM D7290. Both design strengths are lower than the observed histor-
ical strength from field data (157 ksi). ACI 440.2R sets unrealistic expectations for
the design modulus (14.6 Msi) compared to ASTM D7290. In practice, the field-
prepared samples do exhibit a reduction in modulus (13.7 Msi) and ASTM D7290
appropriately addresses this reduction (12.66 Msi). This is important, given that
modulus is the key design parameter in most applications. The design strain from
ACI440.2R is also slightly more conservative than ASTM D7290. The field-tested
mean strain was equal to the qualification-tested mean strain. Regard-
less, either reduction is conservative. If larger scatter is observed in strain data, an
alternative approach can be performed to derive the design strain from Hooke’s Law
using the design stress from ACI 440.2R (131 ksi) and the mean measured modulus
(14.6 Msi). This strain value is shown in the last row of Table 4 and defined as the
“derived strain”. This process bypasses the issues with measuring rupture strain and
is based on the assumption that composites are linear elastic. Once the design strain
is determined, the tensile modulus is then appropriately reduced per ASTM D7290.
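A quick numerical check of the derived strain described above, using the design stress and mean measured modulus quoted in this section:

```python
# Derived strain = ACI 440.2R design stress / mean measured modulus.
design_stress_ksi = 131.0
mean_modulus_ksi = 14.6e3                      # 14.6 Msi expressed in ksi
derived_strain = design_stress_ksi / mean_modulus_ksi
print(f"derived strain = {derived_strain:.4f} = {derived_strain * 100:.2f}%")  # ~0.90%
```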
Based on the information presented, the following properties listed in Table 5
are the recommended characteristic or design properties for the CFRP system. The
characteristic tensile strength is reported as the mean strength minus 3 standard
deviations (ACI440.2R). The characteristic tensile modulus is reported as the 5th-
percentile value representing the 80% lower confidence bound of a 2-parameter
Weibull distribution (ASTM D7290). The characteristic strain is the derived strain
from the characteristic tensile strength and the mean tensile modulus.
The approach above is based on using the more conservative of the two methods
for strength and elastic modulus. Alternative approaches or reduction methods exist
and have been used by FRP manufacturers to make it appear that the design properties
have been conservatively reduced, but engineers should pay attention to the specifics
of the published design properties. Alternate methods are likely to be used because
they produce the highest design properties rather than the conservative ones.

Table 5 Characteristic design properties of the CFRP system

Design properties
Characteristic tensile strength, ksi    131
Characteristic elastic modulus, Msi     12.66
Characteristic tensile strain, %        0.90
[Figure: histogram of tensile strength (ksi) test results, illustrating lower-bound values and a more conservative approach at the 5th percentile]
One such approach applies the 5% fractile reduction of ACI 318 to the strength and strain
and then uses the reduced values to calculate the elastic modulus, citing ACI440.2R
(but this is not what ACI440.2R states). As
previously discussed, using reduced values to calculate an elastic modulus can result
in a value that is higher than the measured elastic modulus obtained from real test
data. As an example, the 5% fractile approach is applied to the CFRP system in this
report to demonstrate the effect on the design properties in Table 6. The strength and
strain are reduced, and the elastic modulus is determined by dividing the reduced
strength by the reduced strain. The resulting design strength and strain are higher than
the design values obtained by either ACI 440.2R or ASTM D7290. More importantly,
the elastic modulus is higher than the mean measured modulus (14.6 Msi) obtained
by real data. This approach is the least conservative and completely unrealistic for
determining the elastic modulus but is currently used by major FRP manufacturers
and accepted for code-listed products (ICC-ES).
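Using the values reported in Table 6, the back-calculation issue described above can be verified directly:

```python
# Dividing the reduced strength by the reduced strain (5% fractile values from
# Table 6) back-calculates a modulus above the mean measured 14.6 Msi.
reduced_strength_ksi = 145.0
reduced_strain = 0.0095                        # 0.95%
back_calc_modulus_msi = reduced_strength_ksi / reduced_strain / 1000.0
print(f"back-calculated modulus = {back_calc_modulus_msi:.2f} Msi")  # ~15.26 > 14.6
```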
[Fig. 4: individual tensile strength (ksi) and elastic modulus (Msi) results from the qualification testing and the field testing of the CFRP system]
Table 6 Comparison of characteristic properties and ACI 318 reduction

Property                 Characteristic values (Table 5)   5% fractile (ACI 318)
Tensile strength, ksi    131                               145
Elastic modulus, Msi     12.66                             15.26
Tensile strain, %        0.90                              0.95
This is an example where a manufacturer has published design values and appro-
priately referenced the 5% fractile reduction with the illusion that the design values
are conservatively determined. In reality, the 5% fractile approach is the least conser-
vative approach of the ones discussed in this report. ACI440.2R and ASTM D7290,
both existing guides specific to FRP design properties, result in values that are much
more conservative. Engineers should question the use of alternate design approaches
when more widely accepted guidelines are available.
6 Conclusion
In this paper, we have shown various sources of confusion within the latest guide-
lines on FRP strengthening. Additionally, we have provided a summary of how
tensile data is evaluated, how the methods for reducing data for design/characteristic
properties vary, and how FRP manufacturers can report different values using the
same guidelines. Examples of design approaches for the basic material properties of
FRP were provided according to two industry accepted guidelines, ACI 440.2R and
ASTM D7290, using a dataset from one unidirectional CFRP system. One of the
biggest drawbacks with ACI 440.2R (used primarily with the International Building
Code) has to do with the vague and inconsistent language related to the FRP cross-
sectional area and a failure to provide a statistical reduction to the calculation of
the design modulus. The existing definitions and design examples in ACI440.2R
leave something to be desired for practicing engineers. Allowing the use of the
elastic modulus without any statistical reduction is both misleading and potentially
dangerous in an industry where structural designs are governed much less by ultimate
tensile strength and ultimate rupture strain and much more by the elastic modulus
and strain compatibility of FRP materials with the reinforced concrete or masonry.
Alternate methods such as ACI 318 have been used by FRP manufacturers as well
to report higher design properties, which are currently accepted by International
Code Council Evaluation Service (ICC-ES) and other authorities. In some cases,
these alternate approaches are the opposite of conservative and increase the design
modulus above the actual measured modulus. ASTM D7290 provides an alternative
statistical reduction method to the tensile properties which has been adopted by other
specifications and guidelines (AASHTO, AWWA and CMH-17). A hybrid approach
(as described above), using both methods to derive the most conservative results, is
recommended for determining appropriate design values for composite materials.
Acknowledgements We would like to give special acknowledgment to the late Dr. M.J.N. Priestley,
as his work remains the inspiration for our pursuit of a more simplified and unified approach to
structural engineering.
References
1. AASHTO (2012) Guide specifications for design of bonded FRP systems for repair and
strengthening concrete bridge elements. First Edition, Washington, DC
2. ACI 318 (2014) Building code requirements for structural concrete. American Concrete
Institute, Farmington Hills, MI
3. ACI 440.2R (2017) Guide for the design and construction of externally bonded FRP systems
for strengthening concrete structures, 2017 edition, American Concrete Institute, Farmington
Hills, MI
4. Adams DO, Adams DF (2002) Tabbing guide for composite test specimens, DOT/FAA/AR-02/
106, U.S Dep. of Transportation Federal Aviation Administration, Washington, DC
5. ASTM D3039 (2017) Standard test method for tensile properties of polymer matrix composite
materials, ASTM International, West Conshohocken, PA
6. ASTM D7565 (2009) Standard test method for determining tensile properties of fiber reinforced
polymer matrix composites used for strengthening of civil structures. ASTM International, West
Conshohocken, PA
7. ASTM D7290 (2017) Standard practice for evaluating material property characteristic values
for polymeric composites for civil engineering structural applications. ASTM International,
West Conshohocken, PA
8. AWWA C305 (2018) CFRP renewal and strengthening of prestressed concrete cylinder pipe,
First Edition. American Water Works Association, Denver, CO
9. FIB Bulletin 90 (2019) Externally applied FRP reinforcement for concrete structures.
International Federation for Structural Concrete /federation internationale du beton, Germany
10. MIL-HDBK-17–1F (2002) The composite materials handbook, volume 1, US Department of
Defense, Philadelphia, PA
11. Nigel Priestley MJ (1993) Myths and fallacies in earthquake engineering—conflicts between
design and reality, Bulletin of the New Zealand National Society for Earthquake Engineering
26(3)
12. BS EN 2561 (1995) Aerospace series—carbon fibre reinforced plastics—unidirectional lami-
nates—tensile test parallel to the fibre direction. European Committee for Standardization,
United Kingdom
13. ISO 10406–2:2015 (2015) Fibre-reinforced Polymer (FRP) reinforcement of concrete—
Test Methods—Part 2: FRP sheets. International Organization for Standardization, Geneva,
Switzerland
Design and Experimentation of Pollution
Absorbing Blocks (PABs)
Abstract Due to the worldwide increase in air pollution which is responsible for a
good percentage of diseases, indoor air quality is now a major concern to people since
there is a much higher capacity to control indoor air pollution than outdoor. Therefore,
this study aims to develop a block that can filter the air from suspended particulate
matter ranging from 2.5 to 10 µm while acting as a passive filtration system installed
as the exterior wall of the building. This block can be efficient in any type of building.
Moreover, this will enable air to enter the building through an inlet in the block facing
the outside of the building and exit the block through an outlet located inside the
building. Between the inlet and the outlet of the block, air circulates in a cyclone
movement due to the inner design of the block. The developed block induces an internal
centrifugal force on the air passing through it that can separate the particulate matter
and contribute to purer air. In that sense, the overall aim of this
work is to evaluate a simplified and effective walling system incorporating pollution
absorbing blocks (PAB) and contribute significantly to better air quality. In order
to meet this aim, a block was specifically designed. An air quality test was performed
in order to assess the effectiveness of the cyclone-movement filtering process and
ensure its efficiency. This is conducted in parallel with conventional compressive
strength tests as well as absorption tests as two of the key tests contributing to the
integrity and performance of the wall system. This work underwent different cycles of
enhancement to produce a more environmentally friendly wall system and to minimize the
drawbacks of non-purified air within the interior of buildings. In summary, the main aim
of this research is to develop a structurally functional block that is able to filter the
air from a specific range of particulate matter, consequently contributing to enhanced
indoor air quality.
1 Introduction
2 Literature Review
The current generation is facing many challenges as the human population keeps growing
rapidly, making essential resources such as food and water scarce and placing a major
burden on the environment that contributes to issues like climate change. This generation
is therefore also responsible for developing solutions, which is why pollution absorbing
blocks were chosen as the main focus here to combat the negative effect of small dust
particles entering our bodies. Pollution absorbing blocks are a new technology that can
be used to tackle those
particles by filtering air externally and providing filtered air into the internal part of a
building. Those blocks are also named ‘breathe blocks’. The most important part of
this block is the ‘cyclone filter’ that is cast inside the block which filters the air from
the pollutant particles such as dust that carry most of the bacteria and viruses on its
surface and get rid of it and provide the purified air indoors [4]. The block consists of
five main parts the inlet where the outside air goes through to the second part which
is the cyclone filter which purifies the air and divides the air into two groups: the
first group goes through the outlet which is the purified air and the drainage which
the pollutant passes through and last but not least the plastic coupler that guides the
air through the whole process (Stott 2015). This passive walling system consists of a
double-layered wall, the first wall which consists of the pollution absorbing blocks
acts as a filter, while the second wall provides insulation and ventilation of the air
(Fig. 1).
The objective of this research is to design and test pollution absorbing blocks (PAB)
aiming at enhancing air quality and mechanical properties.
Scope:
• Adjusting the current design practices from the literature to match the testing
methodology
• Measuring the effect of PAB on air quality
• Comparing the filtration and mechanical performance of the PAB to the traditional
block.
4 Experimental Work
Four concrete mixes with different cement-to-water ratios, rice straw percentages, and admixtures were tested; mix 4 was found to be optimal and was used for casting the blocks (Table 1).
4.2 Tests
The test measures the block's ability to filter particulate matter (PM) between 2.5 and 10 µm. It was conducted using an air quality monitoring particle counter, model PCE-MPC 10 (Fig. 2). The device was placed in three different positions to count the PM in the surrounding air, in the residue, and in the outlet (filtered) air.
This test was conducted to measure the block’s ability to withstand rainfall. The
block was exposed to falling water for six hours, and the weight was measured
before and after the exposure. The same test was repeated on a hollow block as a
control experiment (Fig. 3).
The test was conducted according to ASTM C67-03 to assess the quality of the PAB in terms of compressive strength, compared with a control hollow block, and to verify that it meets the standard.
The test was conducted according to ASTM C67-07 to examine and compare the water absorption resistance of the pollution absorbing block with that of the control samples, a hollow block and a sand brick (Wahid, 2015).
This test examines whether sufficient air exits through the block's outlet. Three trials were performed to increase the test's reliability. A Pitot tube measures the difference between total and static pressure, from which the flow rate Qn and then the mass flow rate are calculated. The rotameter measures the mass flow rates of the outlet and drainage air directly (Fig. 4).
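As an illustration of how a Pitot-tube reading can be turned into outlet flow figures, the short sketch below applies the standard dynamic-pressure relation; the air density, duct area, and pressure values are assumptions chosen for illustration and are not measurements from this work.

```python
# Illustrative sketch (not the authors' calculation): estimating volumetric and
# mass flow from a Pitot-tube dynamic-pressure reading, with assumed values.
import math

rho_air = 1.20      # kg/m^3, assumed air density at room temperature
duct_area = 0.005   # m^2, assumed cross-sectional area of the outlet duct
delta_p = 45.0      # Pa, assumed total-minus-static (dynamic) pressure

velocity = math.sqrt(2.0 * delta_p / rho_air)   # m/s, from dynamic pressure
q_volumetric = velocity * duct_area             # m^3/s
mass_flow = rho_air * q_volumetric              # kg/s

print(f"velocity = {velocity:.2f} m/s, Q = {q_volumetric*1000:.2f} L/s, "
      f"mass flow = {mass_flow*1000:.2f} g/s")
```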
This test was conducted using fine sand to simulate the sandy conditions that occur in areas near deserts and is crucial for examining how the PAB performs under harsh weather conditions. The block was placed in an isolated system so that its inlet and outlet were controlled, and a large quantity of sand was pushed into the block with an air compressor to create a sandstorm simulation (Fig. 5).
Compressive strength test results show that the PAB is up to standard compared with the hollow block: at 7 days they reached 18 MPa and 20 MPa, respectively, and at 28 days 22.5 MPa and 27 MPa, respectively. Between the two ages, the PAB gained about 25% in strength and the hollow block about 35% (Table 2 and Fig. 6).
Table 2 Compressive strength results
Block type                  7-day compressive strength (MPa)   28-day compressive strength (MPa)
Pollution absorbing block   18                                 22.5
Hollow block                20                                 27
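As a quick arithmetic check of the reported gains, the snippet below recomputes the percentage increase from the 7-day to the 28-day strengths listed in Table 2.

```python
# Quick check of the strength gains reported in Table 2 (values from the paper).
results = {
    "Pollution absorbing block": (18.0, 22.5),   # (7-day, 28-day) strength in MPa
    "Hollow block": (20.0, 27.0),
}

for block, (f7, f28) in results.items():
    gain = (f28 - f7) / f7 * 100.0
    print(f"{block}: {f7} -> {f28} MPa, gain = {gain:.0f}%")
# Prints about 25% for the PAB and about 35% for the hollow block.
```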
Water absorption results are presented in Table 3. The test was conducted to examine and compare the water absorption resistance of the PAB with that of the control samples, a hollow block and a sand brick (Wahid, 2015). The results show that the PAB performed best, absorbing only 2% of water, compared with 3.9% for the hollow block and 23.1% for the sand brick.
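For reference, the absorption percentage in ASTM C67-style testing is the mass gained on saturation relative to the dry mass. The sketch below shows the calculation with hypothetical masses chosen only to illustrate the formula; they are not the measured values from this study.

```python
# Sketch of the ASTM C67-style absorption calculation:
# absorption (%) = 100 * (saturated mass - dry mass) / dry mass.
# The masses below are hypothetical, chosen only to illustrate the formula.
def absorption_percent(dry_mass_g: float, saturated_mass_g: float) -> float:
    return 100.0 * (saturated_mass_g - dry_mass_g) / dry_mass_g

print(absorption_percent(12000.0, 12240.0))  # 2.0, comparable to the PAB result
print(absorption_percent(12000.0, 12468.0))  # 3.9, comparable to the hollow block
```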
The graph shows the mass of particles of different sizes present per cubic meter of air. Outdoor air contained 12 and 23 µg/m³ of particles larger than 2.5 and 10 µm, respectively, compared with 3 and 5 µg/m³, respectively, in the indoor air delivered through the block outlet (filtered). The reduction in particle load translates directly into better air quality: as the particle concentration in the air decreases, air quality increases. It can therefore be concluded that the block filters air effectively (Table 4 and Fig. 7).
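Expressed as a removal efficiency (an illustrative framing of the same numbers, not a metric reported in the paper), the concentrations quoted above correspond to reductions of roughly 75% and 78% for the two size fractions:

```python
# Illustrative removal-efficiency calculation from the concentrations reported
# in the text (micrograms per cubic metre); the "efficiency" framing is ours.
outdoor = {"PM2.5": 12.0, "PM10": 23.0}
filtered = {"PM2.5": 3.0, "PM10": 5.0}

for size in outdoor:
    eff = (outdoor[size] - filtered[size]) / outdoor[size] * 100.0
    print(f"{size}: {eff:.0f}% reduction")   # ~75% for PM2.5, ~78% for PM10
```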
A rain simulation test was conducted to determine how the blocks perform during rain, with a hollow block as the control sample. Table 5 shows that the PAB absorbed 0.3% of water during the rain simulation, compared with 0.5% absorbed by the hollow block.
Table 6 shows that, averaged over three trials of the same test, the percentage of sand that passed through the system and out with the filtered air was 3.2% of the sand the block was subjected to.
Mass flow rate test results are shown in Table 7. On average, 66% of the air exited through the outlet (filtered), while 34% exited through the drainage system.
6 Conclusions
The designed block has proven its ability to filter air effectively, showing a positive filtration trend across different particle sizes. In terms of compressive strength, the block is up to standard and able to withstand loads in various structural systems, such as traditional wall systems or load-bearing walls. The block can also reduce power consumption, since it acts as a passive filtration system, unlike electronic air purifiers or air conditioners equipped with filtration technology. Regarding material properties, the block resists water similarly to conventional blocks, as concluded from the water absorption and rain simulation tests, and it passes an adequate volume of filtered air through the outlet, as shown by the mass flow rate test results.
7 Recommendations
References
1. Air Filters for Clean Air (2017) How to fight air pollution in developing countries—air filters
for clean air. [online] Available at: https://cleanair.camfil.us/2017/10/30/air-pollution-in-develo
ping-countries/
2. Future R (2021) Alternative materials—Pollution absorbing bricks—Rethinking the future.
[online] RTF | Rethinking The Future. Available at: https://tinyurl.com/3zds8fsa
3. Rakhman R, Piliang Y (2020) Arts and Design Studies. https://doi.org/10.7176/ads/82-05. Available at: https://iiste.org/journals/index.php/ads/article/view/52906
4. US EPA (2021) Introduction to indoor air quality | US EPA. [online]. Available at: https://www.
epa.gov/indoor-air-quality-iaq/introduction-indoor-air-quality
5. Yadav S (2020) Pollution absorbing bricks—Civil wale. [online] Civil Wale. Available at: https:/
/civilwale.com/pollution-absorbing-bricks/
6. Lenschow P (2001) Some ideas about the sources of PM10. Atmos Environ 35:23–33
7. Wahid SA, Rawi SB, Desa NS (2015) Utilization of plastic bottle waste in sand bricks
1 Introduction
Health monitoring is the process of determining and tracking structural integrity and
assessing the nature of damage in a structure. SHM is a non-destructive method for
implementing a damage detection and diagnosis strategy. The SHM process entails
observing a system over time using periodically sampled dynamic response measure-
ments from an array of sensors, extracting damage-sensitive features from these
measurements, and statistically analyzing these features to determine the system’s
current state of health. The output of this process for long-term SHM is periodically
updated information about the structure’s ability to perform its intended function in
light of the inevitable aging and degradation caused by operational environments
[68].
SHM can be classified as a global health monitoring method as well as a local
health monitoring method. The global technique can only detect whether or not there
is damage throughout the structure, but it does not provide enough precise data to
assess the amount of the damage [29]. This can be classified into two categories:
dynamic and static techniques. In global dynamic approaches for damage detec-
tion, the test structure is subjected to low-frequency excitations, either harmonic or
impulse, and the resulting dynamic responses such as mode shapes, natural frequen-
cies, and damping ratio [20] are picked up at specified positions along the struc-
ture [75]. When using the local technique, damages are detected based on localized
structural interrogation using non-destructive procedures, which can detect the exact
location and extent of the damage [11]. Unlike vibration-based methods, the global
static method requires continuous monitoring of key slow-varying parameters over a
longer duration [43]. Ultrasonic techniques, impact echo testing, acoustic emission,
magnetic field analysis, eddy currents, X-ray analysis, and penetrant dye testing are
some local SHM techniques.
Another classification of SHM distinguishes passive and active methods. In passive SHM, the user examines the structure while the measured signal is produced directly by operational loads, ambient vibrations, impacts, or damage initiation, as in the acoustic emission method (Scala et al., 1988) [25, 36]. In active SHM, the studied structure is integrated with sensors and actuators; the structure is deliberately excited and its response is measured, as in the guided waves method. In this technique, operators do not merely observe the current state of the structure; rather, they introduce a controlled disturbance or agitation into the system using the actuator and then monitor the response to this agitation using the sensors [2].
In the context of concrete members, SHM refers to the detection of abnormalities
or deformities (i.e., arising from deterioration, damage, or failure) and provides infor-
mation about the structural health and integrity of concrete members for prolonged
use [3]. In order to provide early warning and redress in the event of incipient partial or
complete structural collapse, several innovative and non-destructive methodologies
have been developed in current practice to identify and monitor structural deficiencies
and cracks present in concrete structures, particularly aging infrastructure elements.
Different NDTs are developed to monitor different issues in concrete. Table 2
depicts the principle, advantages, and limitations describing NDT methods and the
parameter measured in concrete using the respective NDT. However, among all non-
destructive tests, visual testing is probably the most conventional and versatile. Even
though it can yield a plethora of information, it is limited to preliminary assessment of the structure's condition. Other traditional monitoring techniques, such as the Schmidt rebound hammer test and strain sensing, are passive and bulky, extracting
only secondary data such as load and strain, which may not lead to any direct infor-
mation about damage [75]. The use of electrochemical methods allows for a quick
and simple diagnosis. However, when using electromagnetic sensing techniques on
man-made structures, some adaptation is required to meet stringent exploration depth,
spatial resolution, and signal/noise ratio requirements [82].
Because of the relatively low frequencies involved, impact echo testing is good
only for detecting large voids and delamination but insensitive to small cracks and
discontinuities [52]. The major disadvantages of fiber optic sensors are that they require a phase control system to maintain optimum sensitivity and that their implementation cost is high [2]. The acoustic emission method can be used on loaded structures to provide continuous surveillance, but damage detection is complicated by the presence of multiple travel paths from the source to the sensors, and the quality of the emission signals is degraded by electrical interference and mechanical ambient noise
[32, 52]. Ultrasonic methods, despite their high sensitivity, cannot detect transverse
surface cracks [23] and require complex processing. Therefore, piezoelectric sensing
technology has grown in popularity in SHM due to its small size, ability to act as both
a sensor and an actuator at the same time, and ability to be embedded into structures
to form intelligent structures [78].
Table 2 (continued) Non-destructive testing (NDT) methods for concrete: parameter measured, principle, advantages, and limitations
• Chloride ion permeability test. Parameter measured: chloride ion permeability. Principle: charged ions, such as chloride (Cl-), accelerate in an electric field toward the pole of opposite charge; the ions reach terminal velocity when the frictional resistance of the surrounding media reaches equilibrium with the accelerating force. Advantages: correlates well with the 90-day chloride ponding test. Limitations: sensitive to weathering behavior; low inherent repeatability.
• Penetration resistance/Windsor probe test [13]. Parameter measured: quality, uniformity, surface and sub-surface hardness. Principle: measuring the non-penetrating length of a steel probe that partly penetrates the concrete when driven by a powder charge. Advantages: durable; requires little maintenance. Limitations: damages the concrete, leaving a hole about the probe diameter.
• Frequency response method [2]. Parameter measured: deterioration of the material's stiffness. Principle: variations in the vibrational modes are related to loss of stiffness in monitored systems; analytical models or experimentally examined data are generally used to find the exact location of damage. Advantages: fatigue cracks can be better inspected; suitable for low-frequency ranges. Limitations: unable to detect the exact location of a discontinuity in the structure.
• Sweep frequency technique. Parameter measured: moisture content changes caused by waterproof membrane defects and chloride level. Principle: signal velocity changes indicated by application of the microwave test illustrate the composition and texture of the object. Advantages: robust, rapid, non-contact. Limitations: expensive.
• Ground penetrating radar. Parameter measured: location of rebars, voids, and other defects in concrete. Principle: the greater the difference between dielectric constants at an interface between two materials, the greater the amount of electromagnetic energy reflected at the interface. Advantages: low cost, portable, effective. Limitations: complex results, difficult interpretation [77].
• Infrared (IR) thermography. Parameter measured: concrete quality, voids, cracks, defects. Principle: all objects above 0 K emit IR energy as a function of their temperature; a thermogram detects the IR radiation emitted by an object, converts it to temperature, and displays it as an image [6]. Advantages: easy interpretation, simple, safe (no radiation), rapid setup, portable. Limitations: no information about the depth or thickness of defects; results affected by environmental conditions.
• Half-cell electrical potential method. Parameter measured: corrosion. Principle: the electric potential of rebars is measured relative to a half-cell and indicates the probability of corrosion. Advantages: results in the form of equipotential contours. Limitations: the rate of corrosion is not determined.
• Open circuit potential (OCP) monitoring. Parameter measured: corrosion. Principle: the electrical potential value (in mV or V) is measured between the steel reinforcement of the RC and a reference electrode, indicating the corrosion potential of the steel inside the RC. Advantages: a single value indicates the steel condition instead of equipotential contours. Limitations: time-consuming and needs to be closed for several hours during the inspection.
• Resistivity method. Parameter measured: corrosion. Principle: the electrical potential value (in mV or V) is measured between the steel reinforcement of the RC and a reference electrode, indicating the corrosion potential of the steel inside the RC. Advantages: easy, fast, portable, and inexpensive technique for routine inspection. Limitations: reinforcement in the test region can create a short circuit and cause an erroneous reduction in the measurement.
• Polarization resistance. Parameter measured: corrosion. Principle: the change in potential during reactions (polarization) is recorded using an electrode plate on the concrete surface. Advantages: the corrosion rate is evaluated relatively instantly. Limitations: time-consuming due to the electrical capacitance at the steel and concrete interface.
• Galvanostatic pulse method (GPM). Parameter measured: corrosion. Principle: an anodic current pulse is applied galvanostatically to the steel reinforcement from a counter electrode placed on the concrete surface to determine the corrosion rate of the steel reinforcement in the RC. Advantages: displays corrosion rate, electrical resistance, and potential value simultaneously. Limitations: unstable readings due to parallel or crossing steel reinforcement, cracks, and delamination.
• Electrochemical noise (EN). Parameter measured: corrosion. Principle: EN describes the fluctuations of current and potential spontaneously generated by corrosion reactions. Advantages: no interference with the system. Limitations: corrosion of the steel reinforcement makes analysis unsuccessful.
• Fiber optics. Parameter measured: crack detection, strains, pH level, vibration, corrosion, temperature monitoring. Principle: the primary core of an optical fiber is surrounded by an annular cladding and a protective covering; the light wave is trapped in the core by total internal reflection (the cladding has a lower refractive index than the core) and travels over long distances with negligible loss. Advantages: durable, low installation costs, free of electromagnetic and radio-frequency interference. Limitations: expensive, requires precise installation, and complex to develop usable measurements.
• Laser scanners technique. Parameter measured: crack detection, substrate roughness in patch repairs. Principle: focused pulses of coherent light are transmitted to and returned from the measured object, and the system calculates the time of flight of the light (converted to distance); can be light/laser detection and ranging (LiDAR/LaDAR) systems. Advantages: collects accurate geometry data and radiometric attributes. Limitations: inadequate lighting and low reflectivity compromise the quality of the scanned image.
• Acoustic emission technique. Parameter measured: corrosion. Principle: elastic waves are generated by the rapid release of energy from a localized source within an RC structure. Advantages: cost-effective; detects and locates active defects. Limitations: passive defects cannot be effectively detected.
• Impact echo (IE). Parameter measured: corrosion. Principle: stress waves are propagated within the RC structure through vibrations and impact load. Advantages: fast and reliable method. Limitations: reliability decreases with increasing thickness.
• Ultrasonic pulse velocity (UPV). Parameter measured: corrosion. Principle: mechanical energy propagates through the concrete as stress waves and is converted into electrical energy by a second transducer. Advantages: estimates the size, shape, and nature of the concrete damage. Limitations: the evaluation requires careful data collection and expert analysis.
3 Piezoelectricity
Jacques and Pierre Curie, two French physicists, discovered piezoelectricity in the
late 1800s. Piezoelectricity is the accumulation of electric charge in materials with
non-centrosymmetric crystal structures in response to applied mechanical stress. The
piezoelectric effect is a linear interaction between a material’s mechanical and elec-
trical states that has no inversion symmetry within the crystal [9]. Direct piezoelec-
tric effect (DPE) and inverse/converse piezoelectric effect (IPE) are the two types of piezoelectric effect. The direct piezoelectric effect, discovered by Pierre and Jacques Curie in 1880, is the property of some materials to generate an electric charge when a strain is applied. Lippmann mathematically deduced
the inverse piezoelectric effect from thermodynamic principles a year later, and it
states that if an electric field is applied to a piezoelectric material, it will deform.
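For completeness, the direct and converse effects are commonly summarized by the standard strain-charge form of the linear piezoelectric constitutive equations (a textbook formulation, not reproduced from any specific reference in this review):

```latex
% Standard strain-charge form of the linear piezoelectric constitutive equations
\begin{align}
  S_{ij} &= s^{E}_{ijkl}\, T_{kl} + d_{kij}\, E_{k}
    && \text{(converse effect: applied field produces strain)} \\
  D_{i}  &= d_{ikl}\, T_{kl} + \varepsilon^{T}_{ik}\, E_{k}
    && \text{(direct effect: applied stress produces charge)}
\end{align}
```

Here S is strain, T is stress, E is the electric field, D is the electric displacement, s^E is the compliance at constant field, d contains the piezoelectric coefficients, and ε^T is the permittivity at constant stress.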
4 Electromechanical Impedance
associated PZT transducer [53, 71]. Several researchers pioneered theoretical work
on the EMI method, with Xu and Liu [1] being the first to consider the adhesive
layer in terms of the impedance model, where it was represented as a 1-D spring
mass damper system, placed in series with the structure. In terms of 2-D impedance
models, Zagrai and Giurgiutiu [81] developed a theoretical model for a circular 2-D
structure, which they then confirmed using experimental data.
Bhalla et al. [66] suggested a simplified 2-D impedance model that takes into
account shear lag generated by the adhesive bond layer between the PZT trans-
ducer and the host structure. The obtained model was compared to Bhalla and Soh’s
previous work [8], which showed the shear lag phenomenon rather well.
Recognizing the constraints of the 1-D and 2-D models (such as PZT shape and size), various authors established a new 3-D model utilizing 3-D actuation of the PZT transducer. Experiments with embedded and surface-bonded transducer
specimens were used to validate the concept. However, when modeling numerous
PZT-structure scenarios, these single PZT-structure interaction models introduced
a new problem, as earlier models had ignored the mass of the PZT transducers. To
account for the mass influence of multiple PZT transducers, a multiple PZT-structure
(MPZT-S) model was developed [48, 5].
Surface-bonded piezoceramic transducer patches, electrically activated at high frequencies on the order of kHz, were used to instrument a concrete bridge using the impedance-based approach. The damage index was created in nonpara-
metric terms utilizing the root mean square of the variation in admittance signatures
with respect to the baseline signature of the healthy state by measuring the real
component of admittance (reciprocal of impedance) [67]. The damage progression
in the structure was shown to be well correlated with this nonparametric indicator.
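A minimal sketch of such a nonparametric index, assuming the baseline and current conductance signatures are available as arrays (synthetic data here, for illustration only), is shown below:

```python
# Minimal sketch of the nonparametric RMSD damage index commonly used with
# EMI signatures: baseline vs. current real part of admittance (conductance).
import numpy as np

def rmsd_percent(baseline: np.ndarray, current: np.ndarray) -> float:
    """Root-mean-square deviation of the current signature from the baseline, in %."""
    return 100.0 * np.sqrt(np.sum((current - baseline) ** 2) / np.sum(baseline ** 2))

# Hypothetical conductance signatures over a frequency sweep (arbitrary units).
freq = np.linspace(30e3, 400e3, 500)
baseline = 1e-3 * (1.0 + 0.2 * np.sin(freq / 5e4))
damaged = baseline * (1.0 + 0.05 * np.random.default_rng(0).standard_normal(freq.size))

print(f"RMSD = {rmsd_percent(baseline, damaged):.2f}%")
```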
To detect and locate disbonds and delamination, an array of piezoelectric transducers was mounted onto a concrete structure and the transfer function of the damaged structure was compared to that of the intact structure [58]. Statistical approaches have been
used to correlate damage to changes in EM admittance signatures. Using signature
decomposition, a different method for damage diagnosis was developed based on
changes in structural mechanical impedance (SMI), which was retrieved from EM
admittance signature from the PZT sensor [47].
Shin and Oh [61] discovered that EMI signatures are extremely sensitive to
concrete strength gain during hydration. They found that when the curing time
increased, the resonance peak in the EMI response at 200 kHz shifted to the right.
A series of tests employing surface-bonded PZT patches revealed that when the
strength of concrete increases, its mechanical impedance varies, which is reflected
in the conductance signature [62]. With increasing curing time, they observed a
comparable downward and rightward shift in the resonance peak.
An aluminum enclosure was employed to protect PZT patches from moisture
in embedded conditions while monitoring the hydration of concrete; when the
concrete’s strength gain increased, the conductance values at resonance frequency
decreased [49]. Besides RMSD, various statistical metrics such as mean absolute
percentage deviation (MAPD) and correlation coefficient deviation (CCD) were
utilized to assess changes in conductance signatures during hydration of different concrete mixes.
5 Wave Propagation
The other method for PZT-based monitoring is the WP technique (Fig. 3). This
method involves the use of piezoelectric transducers embedded or attached to the
structure to actively excite and sense the structure’s vibration characteristics [37].
The sensors detect the sweep sine response after the actuator generates a sweep sine
signal. Wave propagation energy will be attenuated by cracks in the concrete struc-
ture [70]. Using amplitude and phase information as well as digitized signals from
actuators and sensors, the system detects, localizes, and estimates the severity of
structural damage, and determines when the transducer/structure bond has degraded
[58]. Wave creation can be regulated using a phased array of various shapes and distri-
butions. Transfer functions, time of flight analysis with cross-correlation envelope,
instantaneous baseline time reversal, and other signal processing methods can be
utilized for damage monitoring with the signal recorded with piezoelectric devices.
This may result in more effective structural damage localization mapping (Jaussaud
et al., n.d.).
Surface wave excitation is expected when the PZT transducer is surface-bonded
on the surface of a thick concrete structure (Rayleigh or R-wave). The R-wave prop-
agates along the concrete’s surface and rapidly drops in magnitude with depth below
the surface [44]. Simultaneously, a weaker pressure wave (P-wave) with a higher
velocity will be excited [80]. According to the direct piezoelectric effect, the mechan-
ical wave will be changed back to electrical signatures when it reaches another patch
that acts as a sensor. The propagating wave is influenced by the stiffness of the
concrete as it hardens, which can be seen in the amplitude and time of flight (TOF)
of the wave packet. The TOF is the time difference between the actuator’s wave packet
and the sensor’s wave packet. By measuring the TOF of the corresponding waves at
known actuator–sensor spacing, P-wave and R-wave velocities may be estimated.
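A minimal sketch of this TOF-based velocity estimate, using a synthetic tone burst, an assumed sampling rate, and an assumed actuator-sensor spacing (none of these values come from the cited studies), is shown below:

```python
# Sketch (synthetic, assumed values): estimating time of flight (TOF) by
# cross-correlating the actuation signal with the sensor signal, then
# converting TOF to wave velocity using the known actuator-sensor spacing.
import numpy as np

fs = 2e6                       # Hz, assumed sampling rate
t = np.arange(0, 2e-3, 1 / fs)
f0 = 120e3                     # Hz, actuation frequency (a commonly cited value)
burst = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 5e-5) / 2e-5) ** 2)

spacing = 0.15                 # m, assumed actuator-sensor spacing
true_velocity = 2200.0         # m/s, assumed R-wave velocity in concrete
delay_samples = int(round(spacing / true_velocity * fs))
received = np.roll(burst, delay_samples) * 0.3   # attenuated, delayed copy

# Peak of the cross-correlation gives the delay of the received wave packet.
lag = np.argmax(np.correlate(received, burst, mode="full")) - (len(burst) - 1)
tof = lag / fs
print(f"TOF = {tof*1e6:.1f} us, estimated velocity = {spacing / tof:.0f} m/s")
```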
Lord Rayleigh published his first wave investigations in 1885 [55]; Rayleigh waves, mechanical elastic deformation waves in solids near a free boundary, were the subject of his research. Horace Lamb analytically predicted the existence of a specific form of acoustic waves in solids in 1917, and Lamb waves were eventually named after him. Lamb waves are guided by two parallel free boundaries in thin-
plate structural components. Lamb waves can travel over long distances with minimal
amplitude damping because of the low damping imposed by free boundaries [65]. The
use of Lamb waves to detect delamination in piezoelectric materials was pioneered
by Keilers and Chang [33]. The crack detection and localization characteristics of the
WP approach were then studied in further depth. For effective damage monitoring
in thin metallic structures, researchers have investigated the modeling and tuning of
Lamb waves [24].
For Rayleigh wave (R-wave) velocity measurement in concrete, Shin et al. used
the maximum energy arrival approach to determine the wave velocity by utilizing
its continuous wavelet transform. An experimental investigation was conducted to
demonstrate the feasibility of the proposed approach [64]. A combination of Rayleigh
and longitudinal waves was used to evaluate fracture parameters prior to impregna-
tion and to determine the ultimate repair effectiveness [4]. Song et al. used a PZT
actuator–sensor system to explore Rayleigh wave propagation in concrete structures. R-waves
can be generated and received by a surface-bonded piezoelectric actuator–sensor
system, according to both numerical and experimental data [69]. FreshCon is a tech-
nology consisting of a U-shaped mold with two external piezoelectric transducers
for damage detection and fracture identification and was developed by the Université
Libre de Bruxelles in Belgium to monitor the age of concrete in its early stages by
automatically computing the velocity of the P-wave and shear wave (S-wave) [56].
Kwong et al. [35] used the PZT transducer-based WP approach to conduct
feasibility research on parametric mortar strength prediction throughout the curing
process. The wave velocity calculated from the sensor’s electrical signatures was
quantitatively related to the dynamic modulus of elasticity and mortar strength using
a semi-analytical model. This model was further improved by using both R-wave
and pressure wave (P-wave) to simultaneously analyze the dynamic modulus of elas-
ticity and the Poisson’s ratio at various phases of curing using the WP approach [41].
According to the research [38], despite varied sizes and spacing of PZT transducers,
the performance of the WP approach is reliable and constant across identical concrete
specimens. Even in adverse environments commonly faced by civil structures, the
PZT-based WP technique can perform satisfactorily for 365 days. The WP method
can also distinguish between different types of coarse aggregate.
The actuation frequency is one of the important factors in WP technique. The
frequency of the actuation signal has a substantial impact on the amplitude of
the recorded signal in transducers. Many researchers examined a variety of actuation frequencies and eventually converged on a range of frequency values [38, 69, 72]. There is not yet agreement on a single optimum actuation frequency; however, based on the available literature, 120 kHz appears to be a widely accepted choice. Many investigations have been conducted using the WP method with embedded PZT transducers, including studies of early-age monitoring in the first 24 h, strength monitoring for up to 28 days, and long-term strength monitoring. These are fully addressed in the review paper [40].
There are several studies in the literature that deal with numerical modeling of PZT-structure interactions in SHM. When it comes to the WP approach, the majority of these studies focus on Lamb waves and metallic structures, and only a limited number of publications model the WP approach on concrete structures. Song et al. [69] and Lim et al. [39], for example, used FE modeling to study the propagation of P- and R-waves in cementitious materials actuated by surface-bonded PZT patches. Although the WP methodology is still an ongoing research topic in terms of modeling, it is a proven method with advantages for early-age monitoring, long-term strength monitoring, and damage detection; nevertheless, it has some associated practical challenges.
The spacing between the PZT patches plays a significant role in perceiving the
signal from the actuator to the sensor while sending and receiving signals by surface
bonding the PZT patches. The actuator and sensor have been spaced differently
by several researchers. Depending on the purpose, the spacing ranges from 50 to
220 mm. As the actuator–sensor distance increased, the amplitude of the sensor’s
electrical signatures decreased rapidly [38]. To minimize the near-field impact, the spacing should not be less than about 33–60 mm [63]; at smaller spacings the P-wave packet overlaps the R-wave packet, making it more difficult to distinguish the TOF of the two waves.
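The effect can be illustrated with assumed P- and R-wave velocities (illustrative values, not from the cited work): the arrival-time gap between the two wave packets shrinks roughly in proportion to the spacing, and at small spacings it becomes comparable to the wave-packet duration.

```python
# Rough illustration (assumed wave speeds) of why small actuator-sensor spacing
# makes it hard to separate the P-wave and R-wave packets.
v_p, v_r = 4000.0, 2300.0          # m/s, assumed P- and R-wave velocities in concrete
for spacing_mm in (30, 60, 120, 220):
    d = spacing_mm / 1000.0
    gap_us = (d / v_r - d / v_p) * 1e6   # arrival-time gap in microseconds
    print(f"spacing {spacing_mm:3d} mm -> arrival gap {gap_us:5.1f} us")
```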
The amplitude and wave velocity of the electrical signatures depend on the roughness of the surface when the patches are surface-bonded. The amplitude of a PZT transducer
bonded to the coarser surface is smaller, indicating that energy dissipation is more
on the rough surface. In comparison with a smooth surface, the wave velocity on
a rough surface is substantially lower [38]. Future research may concentrate on the performance of the WP method with varying spacing between the piezo patches and on developing a consistent surface preparation method, focused on the area where the PZT transducer is bonded, to reduce the effect of varying surface roughness and make the WP method more viable as an NDT for concrete.
6 Conclusion
Since the invention of the EMI methodology, the majority of tests have been conducted in laboratories or have been primarily theoretical, raising concerns about their applicability in real-world situations. While structural degradation changes the
impedance signature, other factors such as temperature and the durability of PZT
transducers can also change it. The RMSD index can be influenced by mass change
and concrete structure vibration, indicating a reduction in EMI technique sensitivity.
The EMI approach has a narrow sensing zone, which necessitates the use of thou-
sands of patches to monitor major civil structures while also compromising overall
structural stability.
Despite the fact that WP experiments are far fewer than EMI research, standard
methods for preparing and installing surface-bonded PZT, as well as signal processing
and analysis protocols, have yet to be established. Some of the topics that need
more research for WP method include optimal transducer spacing and guidelines for
transducer installation on smooth/coarse concrete structures. These can be installed at
a variety of critical locations to test the structural integrity of the elements of interest,
but calibration charts for WP should be established. However, when compared to EMI
approaches, the benefits of the WP method outweigh the limitations, making it more
feasible. Overall, PZT ceramics cannot be reused, and their brittleness makes them
vulnerable to fracture; therefore, this issue cannot be neglected. There needs to be
more research conducted to develop a solution to the brittle nature of piezo ceramic.
By addressing the above-mentioned concerns in EMI and WP technique, PZT can
be proven as a viable NDT for concrete SHM.
References
8. Bhalla S, Soh CK (2004) Electromechanical impedance modeling for adhesively bonded piezo-
transducers. J Intell Mater Syst Struct 15(12):955–972. https://doi.org/10.1177/1045389X0404
6309
9. Briscoe J, Dunn S (2014) Nanostructured piezoelectric energy harvesters. Springer. https://doi.
org/10.1007/978-3-319-09632-2
10. Chandrasekaran S (2019) Structural health monitoring with application to offshore structures.
World Scientific. https://doi.org/10.1142/11302
11. Chang P, Flatau A, Liu SC (2003) Review paper: health monitoring of civil infrastructure.
Struct Health Monit 2:257–267. https://doi.org/10.1177/1475921703036169
12. Chen J, Li P, Song G, Ren Z (2016) Piezo-based wireless sensor network for early-age concrete
strength monitoring. Optik 127(5):2983–2987. https://doi.org/10.1016/j.ijleo.2015.11.170
13. Clifton JR, Carino NJ (1982) Nondestructive evaluation methods for quality acceptance of
installed building materials. J Res Natl Bur Stand 87(5):407–438. https://doi.org/10.6028/jres.
087.024
14. Cook-Chennault KA, Thambi N, Sastry AM (2008) Powering MEMS portable devices—a
review of non-regenerative and regenerative power supply systems with special emphasis on
piezoelectric energy harvesting systems 17(4):043001. https://doi.org/10.1088/0964-1726/17/
4/043001
15. Liang C, Sun FP, Rogers CA (1994) Coupled electro-mechanical analysis of adaptive material systems - determination of the actuator power consumption and system energy transfer. Retrieved February 24, 2022, from https://journals.sagepub.com/doi/10.1177/1045389X9400500102
16. Covaci C, Gontean A (2020) Piezoelectric energy harvesting solutions: a review. Sensors
20(12):3512. https://doi.org/10.3390/s20123512
17. Cracking in Concrete Topic (n.d.) Retrieved December 6, 2021, from https://www.concrete.
org/topicsinconcrete/topicdetail/Cracking%20in%20Concrete?search=Cracking%20in%20C
oncrete
18. Champiri DM, Mousavizadegan S, Moodi F (2012) A decision support system for diagnosis
of distress cause and repair in marine concrete structures. Comput Concr 9. https://doi.org/10.
12989/cac.2012.9.2.099
19. Deeny S, Stratford T, Dhakal R, Moss PJ, Buchanan AH (2008, December 1) Spalling of
concrete: Implications for structural performance in fire
20. Foti D (2013) Dynamic identification techniques to numerically detect the structural damage.
Open Constr Build Technol J 7(1). https://doi.org/10.2174/1874836801307010043
21. Gardner D, Lark R, Jefferson T, Davies R (2018) A survey on problems encountered in current
concrete construction and the potential benefits of self-healing cementitious materials. Case
Stud Constr Mat 8:238–247. https://doi.org/10.1016/j.cscm.2018.02.002
22. Gebregziabhier TT (2009) Durability problems of 20th century reinforced concrete heritage
structures and their restorations. Undefined. https://www.semanticscholar.org/paper/Durabi
lity-problems-of-20th-century-reinforced-and-Gebregziabhier/5d6b41ae43991443068a75cb
8e85a5990ad05fe8
23. Giurgiutiu DV, Giurgiutiu DV, Craig D, Rogers A. (n.d.) Title: electro-mechanical (E/M)
Impedance method for structural health monitoring and non-destructive evaluation
24. Giurgiutiu V (2005) Tuned lamb wave excitation and detection with piezoelectric wafer active
sensors for structural health monitoring. J Intell Mater Syst Struct 16(4):291–305. https://doi.
org/10.1177/1045389X05050106
25. Giurgiutiu V (2014) Chapter 1—introduction. In: Giurgiutiu V (ed) Structural health monitoring
with piezoelectric wafer active sensors (Second ed) (pp 1–19). Academic Press. https://doi.org/
10.1016/B978-0-12-418691-0.00001-0
26. Guidebook on Non-destructive Testing of Concrete Structures (2019, February 28) [Text].
IAEA. https://www.iaea.org/publications/6347/guidebook-on-non-destructive-testing-of-con
crete-structures
27. Guo F, Yu Z, Liu P, Shan Z (2016) Practical Issues related to the application of electrome-
chanical impedance-based method in concrete structural health monitoring. Res Nondestr Eval
27(1):26–33. https://doi.org/10.1080/09349847.2015.1044587
28. Hajializadeh D, Obrien EJ, O’Connor AJ (2017) Virtual structural health monitoring and
remaining life prediction of steel bridges. Can J Civ Eng 44(4):264–273. https://doi.org/10.
1139/cjce-2016-0286
29. Inaudi D, Glisic B (2008) Fibre optic methods for structural health monitoring. Wiley
30. Jaussaud G, Rebufa J, Fournier M, Logeais M, Bencheikh N, Rébillat M, Guskov M (n.d.)
Improving lamb wave detection for SHM using a dedicated LWDS electronics, p 7
31. Kaur N, Gupta N, Jain N, Bhalla S (2013, March 8) Integrated global vibration and low-cost
EMI technique for structural health monitoring of RC structures using embedded PZT patches
32. Kawiecki G (2001) Modal damping measurement for damage detection. Smart Mater Struct
10(3):466–471. https://doi.org/10.1088/0964-1726/10/3/307
33. Keilers CH, Chang F-K (1995) Identifying delamination in composite beams using built-in
piezoelectrics: part I—experiments and analysis. J Intell Mater Syst Struct 6(5):649–663. https:/
/doi.org/10.1177/1045389X9500600506
34. Kim HW, Priya S, Uchino K, Newnham RE (2005) Piezoelectric Energy harvesting under
high pre-stressed cyclic vibrations. J Electroceram 15(1):27–34. https://doi.org/10.1007/s10
832-005-0897-z
35. Kwong KZ, Lim YY, Liew WYH (2016) Non-destructive concrete strength evaluation using
PZT based surface wave propagation technique – a comparative study. MATEC Web of
Conferences 47:02014. https://doi.org/10.1051/matecconf/20164702014
36. Lehmann M, Büter A, Frankenstein B, Schubert F, Brunner B (2006, September 19) Moni-
toring system for delamination detection—qualification of structural health monitoring (SHM)
systems
37. Lichtenwalner PF, Dunne JP, Becker RS, Baumann EW (1997) Active damage interrogation
system for structural health monitoring 3044:186–194. https://doi.org/10.1117/12.274663
38. Lim YY, Kwong K, Liew W, Soh C (2017) Practical issues related to the application of piezo-
electric based wave propagation technique in monitoring of concrete curing. Constr Build
Mater 152:506–519. https://doi.org/10.1016/j.conbuildmat.2017.06.163
39. Lim YY, Kwong KZ, Liew WYH, Padilla RV, Soh CK (2018) Parametric study and modeling
of PZT based wave propagation technique related to practical issues in monitoring of concrete
curing. Constr Build Mater 176:519–530. https://doi.org/10.1016/j.conbuildmat.2018.05.074
40. Lim YY, Smith S, Soh C (2018) Wave propagation based monitoring of concrete curing using
piezoelectric materials: review and path forward. NDT and E Int 99:50–63. https://doi.org/10.
1016/j.ndteint.2018.06.002
41. Lim YY, Zee Kwong K, Liew WYH, Kiong Soh C (2016) Non-destructive concrete strength
evaluation using smart piezoelectric transducer—A comparative study. Smart Mat Struct
25:085021. https://doi.org/10.1088/0964-1726/25/8/085021
42. Liu P, Wang W, Chen Y, Feng X, Miao L (2017) Concrete damage diagnosis using electrome-
chanical impedance technique. Constr Build Mater 136:450–455. https://doi.org/10.1016/j.con
buildmat.2016.12.173
43. Lorenzoni F (2013) Integrated methodologies based on structural health monitoring for the
protection of cultural heritage buildings [PhD, University of Trento]. http://eprints-phd.biblio.
unitn.it/930/
44. Lu Y, Li J, Ye L, Wang D (2013) Guided waves for damage detection in rebar-reinforced
concrete beams. Constr Build Mater 47:370–378. https://doi.org/10.1016/j.conbuildmat.2013.
05.016
45. Mateusz R, Mateusz R, Adam M, Adam M, Tadeusz U (n.d.) An overview of electromechanical
impedance method for damage detection in mechanical structures, p 8
46. Mayeen A, Kalarikkal N (2018) 2—Development of ceramic-controlled piezoelectric devices
for biomedical applications. In: Thomas S, Balakrishnan P, Sreekala MS (eds) Fundamental
biomaterials: ceramics, pp 47–62. Woodhead Publishing. https://doi.org/10.1016/B978-0-08-
102203-0.00002-0
47. Moharana S, Bhalla S (2013, March 8) Review of Non-destructive evaluation of concrete
structures using electro-mechanical impedance technique
48. Na WS, Baek J (2018) A review of the piezoelectric electromechanical impedance based
structural health monitoring technique for engineering structures. Sensors 18(5):1307. https://
doi.org/10.3390/s18051307
49. Negi P, Chakraborty T, Kaur N, Bhalla S (2018) Investigations on effectiveness of embedded
PZT patches at varying orientations for monitoring concrete hydration using EMI technique.
Constr Build Mater 169:489–498. https://doi.org/10.1016/j.conbuildmat.2018.03.006
50. Negi P, Kaur N, Bhalla S (2015) Experimental strain sensitivity investigations on embedded
PZT patches in varying orientations. In: Advances in structural engineering: materials, Vol 3,
pp 2615–2620. Scopus. https://doi.org/10.1007/978-81-322-2187-6_203
51. Omar T, Nehdi ML (2018) Condition assessment of reinforced concrete bridges: current prac-
tice and research challenges. Infrastructures 3(3):36. https://doi.org/10.3390/infrastructures303
0036
52. Park G (2000) Assessing structural integrity using mechatronic impedance transducers with
applications in extreme environments. https://vtechworks.lib.vt.edu/handle/10919/27719
53. Park G, Sohn H, Farrar CR, Inman DJ (2003) Overview of piezoelectric impedance-based
health monitoring and path forward. The Shock and Vibration Digest 35(6):451–463. https://
doi.org/10.1177/05831024030356001
54. PI Brochure: DuraAct Piezoelectric Patch Transducers; piezocomposite transducers; adap-
tronics, commercial applications; aerospace, automotive, machinery; equipment, building and
structures, Smart Materials; P-876; P876 (n.d.) Smart Materials, 16.
55. Rayleigh L (1885) On waves propagated along the plane surface of an elastic solid. In: Proceed-
ings of the London mathematical society, s1–17(1), 4–11. https://doi.org/10.1112/plms/s1-
17.1.4
56. Reinhardt HW, Grosse CU (2004) Continuous monitoring of setting and hardening of mortar
and concrete. Constr Build Mater 18(3):145–154. https://doi.org/10.1016/j.conbuildmat.2003.
10.002
57. Rubene S, Vilnı̄tis M (2014). Use of the Schmidt rebound hammer for non destructive concrete
structure testing in field. https://doi.org/10.4467/2353737XCT.14.078.2528
58. Saafi M, Sayyah T (2001) Health monitoring of concrete structures strengthened with advanced
composite materials using piezoelectric transducers. Compos B Eng 32(4):333–342. https://
doi.org/10.1016/S1359-8368(01)00017-8
59. Sappati KK, Bhadra S (2018) Piezoelectric polymer and paper substrates: a review. Sensors
18(11):3605. https://doi.org/10.3390/s18113605
60. Scala CM, Bowles SJ, Scott IG (1988) The development of acoustic emission for structural
integrity monitoring of aircraft. Aeronautical Research Labs Melbourne (Australia). https://
apps.dtic.mil/sti/citations/ADA196264
61. Shin SW, Oh TK (2009) Application of electro-mechanical impedance sensing technique for
online monitoring of strength development in concrete using smart PZT patches. Constr Build
Mat 23(2):1185–1188. Scopus. https://doi.org/10.1016/j.conbuildmat.2008.02.017
62. Shin SW, Qureshi AR, Lee J-Y, Yun CB (2008) Piezoelectric sensor based nondestructive active
monitoring of strength gain in concrete. Smart Mater Struct 17(5):055002. https://doi.org/10.
1088/0964-1726/17/5/055002
63. Shin SW, Yun CB, Popovics JS, Kim JH (2007) Improved Rayleigh wave velocity measurement
for nondestructive early-age concrete monitoring. Res Nondestr Eval 18(1):45–68. https://doi.
org/10.1080/09349840601128762
64. Shin SW, Yun CB, Song WJ, Lee JH (2006) Modified surface wave velocity measurement
technique in concrete. Key Eng Mater 321–323:314–317. https://doi.org/10.4028/www.scient
ific.net/KEM.321-323.314
65. Silva C, Rocha B, Suleman A (2010) Guided lamb waves based structural health monitoring
through a PZT network system
66. Simplified impedance model for adhesively bonded piezo-impedance transducers. J Aerospace
Eng 22(4) (n.d.) Retrieved February 25, 2022, from https://ascelibrary.org/doi/10.1061/%28A
SCE%290893-1321%282009%2922%3A4%28373%29
67. Soh C, Tseng K, Bhalla S (2000) Performance of smart piezoceramic patches in health
monitoring of RC bridge. Smart Mater Struct 9:533. https://doi.org/10.1088/0964-1726/9/4/
317
68. Sohn H, Farrar CR, Hemez FM, Czarnecki JJ (2002, January 1) A review of structural
health review of structural health monitoring literature 1996–2001. (Article LA-UR-02–2095).
Submitted to: Third World Conference on Structural Control, Como, Italy, April 7–12, 2002;
Los Alamos National Laboratory. https://digital.library.unt.edu/ark:/67531/metadc927238/
69. Song F, Huang GL, Kim JH, Haran S (2008) On the study of surface wave propaga-
tion in concrete structures using a piezoelectric actuator/sensor system. Smart Mater Struct
17(5):055024. https://doi.org/10.1088/0964-1726/17/5/055024
70. Song G, Gu H, Mo Y-L (2008) Smart aggregates: multi-functional sensors for concrete
structures—a tutorial and a review. 17(3):033001. https://doi.org/10.1088/0964-1726/17/3/
033001
71. Sun FP, Chaudhry Z, Liang C, Rogers CA (1995) Truss structure integrity identification using
PZT sensor-actuator. J Intell Mater Syst Struct 6(1):134–139. https://doi.org/10.1177/104538
9X9500600117
72. Sun M, Staszewski W, Swamy RN, Li Z (2008). Application of low-profile piezoceramic
transducers for health monitoring of concrete structures. https://doi.org/10.1016/J.NDTEINT.
2008.06.007
73. Tae S, Baek C, Roh S (2017) Chapter 2—life cycle CO2 Evaluation on reinforced concrete
structures with high-strength concrete. In: Nazari A., Sanjayan JG (eds) Handbook of low
carbon concrete, pp 17–38. Butterworth-Heinemann. https://doi.org/10.1016/B978-0-12-804
524-4.00002-6
74. Tawie R, Lee HK (2010) Monitoring the strength development in concrete by EMI sensing
technique. Constr Build Mat 24(9):1746–1753. Scopus. https://doi.org/10.1016/j.conbuildmat.
2010.02.014
75. Thesis (n.d.) Retrieved December 18, 2021, from https://web.iitd.ac.in/~sbhalla/thesis.html
76. USACE publications—Engineer manuals (n.d.) Retrieved December 6, 2021, from https://
www.publications.usace.army.mil/USACE-Publications/Engineer-Manuals/u43544q/434543
572D4547/
77. Verma SK, Bhadauria SS, Akhtar S (2013) Review of nondestructive testing methods for
condition monitoring of concrete structures. J Constr Eng 2013:1–11. https://doi.org/10.1155/
2013/834572
78. Wang T, Tan B, Lu M, Zhang Z, Lu G (2020) Piezoelectric electro-mechanical impedance
(EMI) based structural crack monitoring. Appl Sci 10(13):4648. https://doi.org/10.3390/app
10134648
79. Yan W, Chen WQ (2010) Structural health monitoring using high-frequency electromechanical
impedance signatures. Adv Civ Eng 2010:e429148. https://doi.org/10.1155/2010/429148
80. Yu L, Giurgiutiu V (2008) In situ 2-D piezoelectric wafer active sensors arrays for guided wave
damage detection. Ultrasonics 48:117–134. https://doi.org/10.1016/j.ultras.2007.10.008
81. Zagrai AN, Giurgiutiu V (2002) Health monitoring of aging aerospace structures using the
electromechanical impedance method (T. Kundu, ed; pp 289–300). https://doi.org/10.1117/12.
469888
82. Zaki A, Chai HK, Aggelis DG, Alver N (2015) Non-destructive evaluation for corrosion
monitoring in concrete: a review and capability of acoustic emission technique. Sensors
15(8):19069–19101. https://doi.org/10.3390/s150819069
Evaluating the Performance
of Alkali-Activated Materials Containing
Phase Change Materials: A Review
1 Introduction
In recent decades, the circular economy has become an important target in many fields, and the construction and building industry is one of the main ones. After transportation, this sector accounts for the largest share of total carbon dioxide emissions, at 25% compared with 28% for transportation. Almost 25% of the energy consumed in residential and non-residential buildings is related to the production of building materials [9, 34]. In this respect, precast walls represent nearly 40% of the yearly energy consumption of the precast industry, and a large portion of this energy is lost through the steam curing system used to produce precast concrete. Ordinary Portland cement (OPC) concrete is the most widely consumed and most affordable construction material owing to its high durability and mechanical strength. However, because of environmental drawbacks such as the high CO2 emissions of its production, researchers have started to look for alternative materials that can replace OPC or improve its performance in this respect. The motivation to reduce the carbon footprint, together with the fact that OPC structures built many years ago still face deterioration problems, points out the deficiencies of OPC [3, 22]. OPC-based concrete shows a high permeability that allows water and other aggressive media to enter, leading to carbonation, corrosion, and early deterioration. These facts push researchers to look for alternatives that compensate for these defects. One of the most promising materials found is alkali-activated materials (AAMs), which are high-strength, non-cement binders with a much lower carbon footprint.
AAMs are also called sustainable cementing binder systems or, in some regions, geopolymers, and they have been widely accepted around the world. These materials are made from a vast range of aluminosilicate precursors that vary in availability, cost, reactivity, and value worldwide. AAM performance varies with the ingredients used, and these binders can meet future construction material needs in applications similar to those of OPC [40, 48]. Fly ash, metakaolin, blast furnace slag, kaolinitic clays, rice husk, red mud, and some natural pozzolans can be used as raw materials, and various alkali activators also play a major role in producing the binder. The study of these materials has increased significantly in recent years because of their broad environmental, economic, and technical advantages [8, 59]. AAMs can be applied in different ways, such as precasting and in-situ casting [5]. Many studies over the last decade have suggested that these materials are probably the best replacement for OPC in precasting or in other applications where alkaline reagents can be handled appropriately and curing can be sufficiently monitored. It is worth noting that as the population grows each year, the need for fast and responsive assembly of construction materials increases, and AAMs are strong candidates for use in precast systems: combining alkali activation with precasting can deliver high-strength materials with a low carbon footprint in a short time. However, the precast procedure comes with some issues, such as heat damage to samples (microcracks) and, most importantly, high energy consumption. Moreover, the high heat loss of buildings through exterior walls has prompted researchers to study this issue, and applying phase change materials (PCMs), which store heat within their fine particles, has been found to be a promising solution.
Phase change materials (PCMs) are materials capable of thermal storage through sensible and latent heat and of releasing the stored energy in the form of heat. When subjected to a change in temperature, these materials first act as sensible energy storage media, showing a continuous increase in temperature in the presence of heat until their phase changes at a certain point called the melting point. Above the melting point, the paraffin, the main material of the PCM particles, undergoes a phase change from the solid to the liquid state (or vice versa), and the PCM absorbs heat and acts as a latent energy storage medium [12, 47]. Moreover, the many types of PCMs allow a variety of applications in building components, such as sheets inside walls or particles inside cementitious mixtures, which provide benefits such as reducing thermal cracks. Furthermore, these materials can be microencapsulated to prevent the paraffin from mixing with the concrete and causing undesirable chemical reactions. It is the latent heat contribution that enhances the energy storage capacity of PCMs compared with sensible heat materials and processes, depending on the nature of the PCM considered. In the cooling stage, as the PCMs solidify and transform from liquid to solid, their particles release the stored heat once again and resist a change in the system's temperature until the phase transition is complete [12, 25].
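As a rough illustration of the latent-heat advantage, the sketch below estimates the heat stored per kilogram of a paraffin-type PCM over a temperature swing that crosses the melting point; all property values are assumptions chosen for illustration and are not taken from the reviewed studies.

```python
# Illustrative estimate (assumed paraffin properties, not values from the review)
# of heat stored per kilogram of PCM when the temperature rises through the
# melting point: sensible heat (solid) + latent heat + sensible heat (liquid).
cp_solid = 2.0       # kJ/(kg*K), assumed
cp_liquid = 2.2      # kJ/(kg*K), assumed
latent_heat = 180.0  # kJ/kg, assumed latent heat of fusion
t_start, t_melt, t_end = 18.0, 26.0, 32.0  # deg C, assumed temperature swing

q_total = cp_solid * (t_melt - t_start) + latent_heat + cp_liquid * (t_end - t_melt)
q_sensible_only = cp_solid * (t_end - t_start)  # same swing, sensible storage only

print(f"PCM stores ~{q_total:.0f} kJ/kg vs ~{q_sensible_only:.0f} kJ/kg sensible-only")
```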
Moreover, cracks, one of the main deterioration mechanisms of OPC and alkali-activated concrete, may develop in restrained concrete elements when volume changes related to chemical reactions and thermal or moisture fluctuations are prevented by end, base, or internal (i.e., aggregate) restraint. Wang et al. [57] studied the relation between crack characteristics and permeability, which confirmed this phenomenon. While such cracks do not typically impact structural integrity, by acting
as ingress paths for ions/moisture, they accelerate deterioration, increase mainte-
nance costs, and reduce the service life of structures (Mihashi, Leite et al. 2004).
While several strategies such as internal curing, expansive cement, and shrinkage-
reducing admixtures (SRAs) have been developed to mitigate moisture-linked
cracking, fewer options are available to mitigate thermal cracking [45]. According
to a recent study by Al-Shdafat et al. [1] on the application of MPCMs to OPC samples, a significant reduction in sample shrinkage was observed. In this respect, Fabio et al. [15] studied the feasibility of using MPCMs to mitigate thermal cracking in cementitious materials. According to their findings, MPCMs can absorb heat during hydration and during the daytime, depending on the temperature variations, and thereby delay the temperature change inside the samples; the effect depended on the MPCM dosage, the temperature change, and the thermal deformation. Moreover, they found that the samples' compressive strength and elastic modulus did not change significantly with the application of MPCMs, which could reduce crack formation by modulating the heat inside the samples (Fig. 1).
Considering the above information on the application of PCMs in cementitious and alkali-activated materials, this study presents a broad review of the topic, gathering the most recent studies to help practicing engineers make better-informed decisions when using these materials. Comparisons between OPC and AAMs containing PCM and MPCM in terms of mechanical properties and durability are provided in this review.
Fig. 1 Phase change materials heat reaction scheme and related phases through temperature change
2 Discussion
In this section, some reviews about the application of PCMs in the construction
industry, especially in the building materials such as OPC and AAMs, are provided.
OPC-based binders have been widely employed due to their accessibility and low
cost. Given stringent environmental restrictions and global carbon emission control,
one of the key goals of the OPC manufacturing and construction industry is to
develop novel cement binders and promote environmental sustainability (energy
savings and low greenhouse gas emissions). A geopolymer is a synthetic alkali
aluminosilicate created by combining solid aluminosilicate with a high-concentration
alkali hydroxide or aqueous silicate solution. In addition, OPC only slightly immo-
bilizes other metals like Hg, As, Cu, Pb, Cr, and others. AAC is one of the OPC
substitute materials that are more environmentally friendly and less expensive than
conventional OPC. These dangerous components have been demonstrated to have significantly less impact on the hydration of AAC than on that of OPC. The AAC matrix has a lower capillary porosity and gel pore volume than the OPC matrix. As a result, AAC is thought to be a good medium for immobilizing metals [56]. Using mineral additives to lower
clinker content in cement to reduce CO2 emissions is a major cause of concern
for the precast industry, as the binders produced are frequently not very reactive at
early ages. Under steam curing conditions, the compressive strength of cement-based
materials is examined utilizing composite cement (clinker + slag) or combinations
of clinker and mineral admixtures at both early (1 day) and late (28 days) ages. Based
on performance, economics, and environmental criteria, laboratory results showed
that metakaolin (MK) is a viable solution at a clinker replacement rate of 12.5–25%
by mass [10].
The need for energy is steadily increasing due to rapid economic expansion, a rising population, and rising living standards [29]. The optimal exploitation of energy resources has become critical in this circumstance. Because the buildings sector accounts for over one-third of worldwide final energy consumption and an equal share of CO2 emissions, energy-efficient buildings will play a critical role in reversing the present energy and climate course. Space cooling, heating, and ventilation account for over half of all final energy use in buildings [55]. As a result, space heating and cooling offer the best opportunity for lowering energy usage in the construction industry.
Thermal energy storage utilizing phase change materials (PCMs) has become a hot topic among researchers and experts. The PCM's high latent heat storage capacity boosts the building envelope's latent heat storage capacity when it is embedded in it [23]. Other advantages of employing PCM include the ability to store heat at a nearly constant temperature matching the phase transition temperature, the ability to store more energy per unit volume than sensible heat materials, the availability of a wide temperature range, and low cost [35, 41]. Organic PCM is favored as a thermal energy storage material over other PCMs due to advantages such as greater chemical and thermal stability, suitability for wide temperature ranges, high latent heat of fusion, lower cost, and non-corrosiveness. However, its application as a thermal energy storage material in buildings is limited by poor thermal conductivity and leakage [49].
Thermal conductivity is a crucial property of a material that specifies how it responds to heat transmission. Low thermal conductivity lengthens the time it takes for the organic PCM to charge and discharge, lowering the heat transfer rate. In addition, inadequate thermal conductivity prevents the latent heat storage capacity from being fully utilized [60]. Shape-stabilized PCMs in building products can store significant amounts of thermal energy without the large structural bulk associated with sensible heat storage. They can reduce building energy consumption while smoothing out temperature fluctuations and improving the indoor thermal environment and building efficiency [14].
According to Tahan Latibari and Sadrameli [53], shape stabilization is the most appealing way to decrease leakage and improve the thermal conductivity of organic PCM. Their review also shows that using nanoparticles as a supporting material can increase the PCM's thermal conductivity by several orders of magnitude. Rathore and co-workers concluded in a recent study that loading PCM with nanoparticles improves its thermal conductivity and that the use of nano-augmented PCM reduces the energy consumption of buildings.
No single study examines the thermophysical characteristics of organic PCM-based form-stabilized composite PCM, but porous support materials and nanoparticles can improve the thermophysical properties of organic PCM [54]. Compared to the parent organic PCM, the form-stabilized PCM has a lower latent heat storage capacity, which could be due to the non-participation of the porous supporting material and nanoparticles in the phase transition process [42].
Mohseni et al. [33] studied the application of an inorganic commercial-grade PCM with a 22 °C melting point, loaded into porous aggregates and encapsulated with a dual-layer coating, in building components, and the following outcomes can be mentioned. The compressive strength of the samples was 28–40% lower than that of the control mixture, which can be attributed to the higher porosity of the lightweight aggregates used. Moreover, the water absorption increased due to the nature of the aggregates used. The high content of PCM could mitigate the negative effect of thermal cycling; in this respect, a delay in reaching the peak temperature was seen through temperature monitoring with infrared thermography camera images. The same was true for cooling loads.
Based on Demos et al. [11], who worked with bentonite as PCM, it was shown that concrete with compressive strengths of more than 10 MPa experienced a smaller loss of mechanical resistance, while a significant increase in energy storage capacity and thermal inertia was still achieved (Fig. 2).
Haider et al. [19] created a structural–functional thermal energy storage concrete (TESC) using tetradecane, a low-temperature phase change compound. According to their SEM data, adding SF/MWCNT to the mix reinforced the concrete used to make the TESC and resulted in a denser microstructure (Fig. 3).
PCM-containing mixes were less workable, according to Mohseni's findings, and required up to 3% more water. The dry density and compressive strength of mixtures containing 90% FA and 10% CH were the lowest. When PCMs are added to AACB mortars, they cause a slight loss in density and a large loss in mechanical strength, but they nevertheless allow AACB mortars to be manufactured with minimal sodium silicate content as long as the PCM concentration does not exceed 20% [33].

Fig. 3 Variation in compressive strength with time for investigated concrete mixtures [33]
According to Hsino and Pasławski [20], using PCM in concrete minimizes thermal gradients in the concrete mix and unifies the internal temperature. This decreases the likelihood of cracks and surface defects in the concrete and delays the onset of the maximum temperature.
A simple test was performed by Gonçalves et al. [18] to examine the temperature variation within alkali-activated materials and their thermal performance. The temperature fluctuation during cooling in PCM-containing AAMs differs from that in the reference (0 wt% PCM), with the former showing a gentler drop in temperature due to the heat-release ability of the PCM. It should be emphasized that there were no discernible differences between the two runs. When heated, the temperature of the two samples (see Fig. 4c and d) begins to differ around the PCM melting region, and this difference grows over time.
Zhang et al. [63] investigated the thermal conductivity of an alkali-activated slag-based composite containing a paraffin/ceramsite shape-stabilized phase change material (SSPCM). As shown in Fig. 5, the thermal conductivity decreases as the mass fraction of SSPCM increases. The thermal conductivity of the ASTESC dropped by 10.96% and 25.17% when the mass fraction of SSPCM grew from 0 to 20% and 60%, respectively. It is worth noting that the thermal conductivity of ASTESC is highly correlated with the mass fraction of SSPCM (Table 1).
According to Pilehvar et al. [37], when the MPCM content is increased from 0 to 20%, the compressive strength of GPC and PCC decreases; nevertheless, both GPC and PCC retain sufficient compressive strength for structural purposes.
Fig. 5 Thermal conductivity development of ASTESC with different dosages of SSPCM [63]
Table 1 (continued)

Year of study | Location | PCM core material | Observations | Reference
2020 | China | Composite PCMs of Na2HPO4·12H2O and CaCl2·6H2O with expanded graphite | Energy savings of up to 58.6% and 64% were found in winter and summer regions, respectively | [51]
2020 | China | Paraffin wax | The thermal performance was enhanced by 23% by increasing the PCM layer from 6 to 12 mm | [31]
2020 | China | Microencapsulated Na2SO4·10H2O | The maximum inside temperature dropped by 3.1 °C | [58]
2020 | Turkey | Various paraffin wax PCMs | With the adoption of RT21HC, annual heating energy consumption was lowered by 6% | [39]
2020 | China | Paraffin wax | Heat flow was reduced by 73.4% during air-conditioning operation | [28]
2020 | Mexico | Paraffin wax MG29, n-eicosane, and salt hydrates | For a paraffin PCM-based roof, a 57% reduction in thermal load was found | [61]
2020 | China | Inorganic ternary mixture (CaCl2·6H2O–NH4Cl–SrCl2·6H2O) and expanded graphite | Temperature changes of 22.5–27.9 °C were observed with an air flow rate of 11.47 kg/h and a PCM layer thickness of 12 mm | [50]
2020 | China | Micronal DS 5038 X composite PCM | Peak temperature was lowered by 1 °C thanks to the artificial aggregate and PCM-based panel | [13]
2021 | India | S25, RT 26, S27, FS 28, HS 29, S30, E32 | Using the Key Response Index as a metric, the heat gain of the FS 28 and HS 29 PCMs was reduced by more than 50% | [6]
2021 | Iran | Paraffin wax | Building energy consumption was reduced by 20% | [52]
2021 | Egypt | RT10 HC, SP 24E | In the summer and winter, the highest coefficient of performance was 88% and 22%, respectively | [44]
2021 | Iraq | Paraffin wax | There was a maximum temperature drop of 9 °C in the room | (Al-Yasiri and Szabó 2021)
2021 | China | Composite PCM of sodium sulfate decahydrate/expanded vermiculite | It was found that the indoor temperature was held for an extended period of time | [27]
2021 | Mediterranean | Composite PCM of paraffin and calcium carbonate | Thermal amplitude was reduced by 1–3.5 °C, with a peak temperature delay of 2–3 h | [4]
3 Conclusion
According to the reviewed papers on the application of PCMs in OPC and alkali-activated mortars and concretes, the following points can be made. OPC concrete is one of the most widely used materials in the world and provides many benefits, although a large share of yearly energy use and carbon emissions is related to its production. AAMs can be a suitable alternative, compensating for these drawbacks in terms of durability and lowering the ecological impact of the material. Moreover, PCMs are materials with a high heat-storage capability that can be added to the mixtures of both binders to modulate the heat inside the samples. This provides benefits both during mixing, by moderating the temperature within the samples to reduce heat damage and microcracking and thereby extend their service life, and during operation, by modulating the temperature inside buildings. It should be mentioned that although the application of these materials has some impacts on the final product, such as reduced mechanical strength and increased water absorption during mixing, they can reduce the number of pores inside the samples, improve the freeze-thaw resistance, and increase the heat capacity of the samples.
References
14. Fang G, Tang F, Cao L (2014) Preparation, thermal properties and applications of shape-
stabilized thermal energy storage materials. Renew Sustain Energy Rev 40:237–259
15. Fernandes F, Manari S, Aguayo M, Santos K, Oey T, Wei Z, Falzone G, Neithalath N, Sant G
(2014) On the feasibility of using phase change materials (PCMs) to mitigate thermal cracking
in cementitious materials. Cement Concr Compos 51:14–26
16. Ferreira LFB, Costa HSS, Barata IIA, Júlio ENBS, Tiago PMN, Coelho JFJ (2014) Precast
alkali-activated concrete towards sustainable construction. Mag Concr Res 66(12):618–626
17. Gholamibozanjani G, Farid M (2020) Application of an active PCM storage system into a
building for heating/cooling load reduction. Energy 210:118572
18. Gonçalves M, Novais RM, Senff L, Carvalheiras J, Labrincha JA (2021) PCM-containing
bi-layered alkali-activated materials: a novel and sustainable route to regulate the temperature
and humidity fluctuations inside buildings. Build Environ 205:108281. https://doi.org/10.1016/
j.buildenv.2021.108281
19. Haider MZ, Jin X, Sharma R, Pei J, Hu JW (2022) Enhancing the compressive strength of
thermal energy storage concrete containing a low-temperature phase change material using
silica fume and multiwalled carbon nanotubes. Constr Build Mater 314:125659
20. Hsino M, Pasławski J (2013) Phase change materials as a modifier of ageing cement concrete
in hot and dry climate. Adv Mat Res 804:129–134
21. Jayalath A, San Nicolas R, Sofi M, Shanks R, Ngo T, Aye L, Mendis P (2016) Properties
of cementitious mortar and concrete containing micro-encapsulated phase change materials.
Constr Build Mater 120:408–417
22. Juenger MCG, Winnefeld F, Provis JL, Ideker JH (2011) Advances in alternative cementitious
binders. Cem Concr Res 41(12):1232–1243
23. Khan Z, Khan Z, Ghafoor A (2016) A review of performance enhancement of PCM based
latent heat storage system within the context of materials, thermal stability and compatibility.
Energy Convers Manage 115:132–158
24. Kheiri F (2018) A review on optimization methods applied in energy-efficient building
geometry and envelope design. Renew Sustain Energy Rev 92:897–920
25. Khudhair AM, Farid MM (2004) A review on energy conservation in building applications
with thermal storage by latent heat using phase change materials. Energy Convers Manage
45(2):263–275
26. Kishore RA, Bianchi MVA, Booten C, Vidal J, Jackson R (2020) Optimizing PCM-integrated
walls for potential energy savings in U.S. Buildings. Energy Build 226:110355
27. Liu L, Li J, Deng Y, Yang Z, Huang K, Zhao S (2021) Optimal design of multi-layer structure
composite containing inorganic hydrated salt phase change materials and cement: lab-scale
tests for buildings. Constr Build Mater 275:122125
28. Luo Z, Xu H, Lou Q, Feng L, Ni J (2020) GPU-accelerated lattice Boltzmann simulation of heat
transfer characteristics of porous brick roof filled with phase change materials. Int Commun
Heat Mass Transfer 119:104911
29. Madlener R, Sunak Y (2011) Impacts of urbanization on urban structures and energy demand:
what can we learn for urban energy planning and urbanization management? Sustain Cities
Soc 1(1):45–53
30. Mahdaoui M, Hamdaoui S, Ait Msaad A, Kousksou T, El Rhafiki T, Jamil A, Ahachad M (2021)
Building bricks with phase change material (PCM): thermal performances. Constr Build Mater
269:121315
31. Mao Q, Yang M (2020) Study on heat transfer performance of a solar double-slope PCM glazed
roof with different physical parameters. Energy Build 223:110141
32. Mihashi H, Leite JPB (2004) State-of-the-art report on control of cracking in early age concrete. J Adv Concr Technol 2(2):141–154
33. Mohseni E, Tang W, Khayat KH, Cui H (2020) Thermal performance and corrosion resistance
of structural-functional concrete made with inorganic PCM. Constr Build Mater 249:118768
34. Pacheco-Torgal F, Faria J, Jalali S (2013) Embodied energy versus operational energy: showing the shortcomings of the energy performance building directive (EPBD). Materials Science Forum, Trans Tech Publications
35. Pereira da Cunha J, Eames P (2016) Thermal energy storage for low and medium temperature
applications using phase change materials—a review. Appl Energy 177:227–238
36. Pilehvar S, Cao VD, Szczotok AM, Carmona M, Valentini L, Lanzón M, Pamies R, Kjøniksen
A-L (2018) Physical and mechanical properties of fly ash and slag geopolymer concrete
containing different types of micro-encapsulated phase change materials. Constr Build Mater
173:28–39
37. Pilehvar S, Cao VD, Szczotok AM, Valentini L, Salvioni D, Magistri M, Pamies R, Kjøniksen
A-L (2017) Mechanical properties and microscale changes of geopolymer concrete and Port-
land cement concrete containing micro-encapsulated phase change materials. Cem Concr Res
100:341–349
38. Pilehvar S, Szczotok AM, Rodríguez JF, Valentini L, Lanzón M, Pamies R, Kjøniksen A-L
(2019) Effect of freeze-thaw cycles on the mechanical behavior of geopolymer concrete and
Portland cement concrete containing micro-encapsulated phase change materials. Constr Build
Mater 200:94–103
39. Pirasaci T (2020) Investigation of phase state and heat storage form of the phase change
material (PCM) layer integrated into the exterior walls of the residential-apartment during
heating season. Energy 207:118176
40. Provis JL, Bernal SA (2014) Geopolymers and related alkali-activated materials. Annu Rev
Mater Res 44(1):299–327
41. Rathore PKS, Shukla SK (2019) Potential of macroencapsulated PCM for thermal energy
storage in buildings: a comprehensive review. Constr Build Mater 225:723–744
42. Rathore PKS, Shukla SK (2021) Enhanced thermophysical properties of organic PCM through
shape stabilization for thermal energy storage in buildings: a state of the art review. Energy
Build 236:110799
43. Ručevskis S, Akishin P, Korjakins A (2020) Parametric analysis and design optimisation of
PCM thermal energy storage system for space cooling of buildings. Energy Build 224:110288
44. Said MA, Hassan H (2021) Impact of energy storage of new hybrid system of phase change
materials combined with air-conditioner on its heating and cooling performance. J Energy
Storage 36:102400
45. Sant GN (2009) Fundamental investigations related to the mitigation of volume changes in
cement-based materials at early ages Ph.D., Purdue University
46. Šavija B, Schlangen E (2016) Use of phase change materials (PCMs) to mitigate early age
thermal cracking in concrete: Theoretical considerations. Constr Build Mater 126:332–344
47. Sharma A, Tyagi VV, Chen CR, Buddhi D (2009) Review on thermal energy storage with phase
change materials and applications. Renew Sustain Energy Rev 13(2):318–345
48. Shi C, Jiménez AF, Palomo A (2011) New cements for the 21st century: The pursuit of an
alternative to Portland cement. Cem Concr Res 41(7):750–763
49. Singh Rathore PK, Shukla SK, Gupta NK (2020) Potential of microencapsulated PCM for
energy savings in buildings: a critical review. Sustain Cities Soc 53:101884
50. Sun W, Huang R, Ling Z, Fang X, Zhang Z (2020) Numerical simulation on the thermal
performance of a PCM-containing ventilation system with a continuous change in inlet air
temperature. Renew Energy 145:1608–1619
51. Sun W, Zhang Y, Ling Z, Fang X, Zhang Z (2020) Experimental investigation on the thermal
performance of double-layer PCM radiant floor system containing two types of inorganic
composite PCMs. Energy Build 211:109806
52. Tafakkori R, Fattahi A (2021) Introducing novel configurations for double-glazed windows
with lower energy loss. Sustainable Energy Technol Assess 43:100919
53. Tahan Latibari S, Sadrameli SM (2018) Carbon based material included-shaped stabilized
phase change materials for sunlight-driven energy conversion and storage: an extensive review.
Sol Energy 170:1130–1161
54. Tariq SL, Ali HM, Akram MA, Janjua MM, Ahmadlouydarab M (2020) Nanoparticles
enhanced phase change materials (NePCMs)-A recent review. Appl Therm Eng 176:115305
55. Ürge-Vorsatz D, Cabeza LF, Serrano S, Barreneche C, Petrichenko K (2015) Heating and
cooling energy trends and drivers in buildings. Renew Sustain Energy Rev 41:85–98
56. Wang DL, Chen ML, Tsang DDCW (2020) Chapter 5: Green remediation by using low-carbon cement-based stabilization/solidification approaches. In: Hou D (ed) Sustainable remediation of contaminated soil and groundwater. Butterworth-Heinemann, pp 93–118
57. Wang K, Jansen DC, Shah SP, Karr AF (1997) Permeability study of cracked concrete. Cem
Concr Res 27(3):381–393
58. Wang Z, Qiao Y, Liu Y, Bao J, Gao Q, Chen J, Yao H, Yang L (2021) Thermal storage
performance of building envelopes for nearly-zero energy buildings during cooling season in
Western China: An experimental study. Build Environ 194:107709
59. Wongpa J, Kiattikomol K, Jaturapitakkul C, Chindaprasirt P (2010) Compressive strength,
modulus of elasticity, and water permeability of inorganic polymer concrete. Mater Des
31(10):4748–4754
60. Wu S, Yan T, Kuai Z, Pan W (2020) Thermal conductivity enhancement on phase change
materials for thermal energy storage: A review. Energy Storage Mat 25:251–295
61. Xamán J, Rodriguez-Ake A, Zavala-Guillén I, Hernández-Pérez I, Arce J, Sauceda D (2020)
Thermal performance analysis of a roof with a PCM-layer under Mexican weather conditions.
Renew Energy 149:773–785
62. Yu J, Yang Q, Ye H, Luo Y, Huang J, Xu X, Gang W, Wang J (2020) Thermal performance
evaluation and optimal design of building roof with outer-layer shape-stabilized PCM. Renew
Energy 145:2538–2549
63. Zhang Y, Sang G, Du X, Cui X, Zhang L, Zhu Y, Guo T (2021) Development of a novel alkali-
activated slag-based composite containing paraffin/ceramsite shape stabilized phase change
material for thermal energy storage. Constr Build Mat 304:124594. https://doi.org/10.1016/j.
conbuildmat.2021.124594
Experimental Investigation
on Compressive, Tensile, and Flexural
Strengths of Concrete with High Volume
of GGBS, Fly Ash, and Silica Fume
1 Introduction
Due to its low cost and superior compressive strength, concrete is the most used construction material across the globe. In applications where concrete elements are highly loaded in compression and the cross-section dimensions are limited by architectural constraints, ordinary concrete with a compressive strength on the order of 28–55 MPa becomes inapplicable. Therefore, the use of high-strength concrete, with a compressive strength higher than 55 MPa, can solve this problem. However, achieving high strength is not the sole objective of a concrete mix design. The lifecycle environmental impact of the concrete production process should also be taken into consideration: the mix should be designed to reduce this impact by lowering CO2 emissions and thus the contribution to global warming. Notably, the amount of carbon dioxide emitted by the concrete manufacturing process is 0.05–0.13 tons per ton of concrete, and 95% of this amount comes from the cement production process [18]. In order to reduce the carbon dioxide in the atmosphere and protect the environment, concrete mixes with a lower percentage of cement should be used. The cement is often replaced with supplementary materials such as fly ash, GGBS, and silica fume to produce sustainable (green) concrete. In order to achieve both a high-strength and a sustainable concrete, a mix with a high volume of GGBS, fly ash, and silica fume is introduced. The use of a high volume of GGBS can significantly improve the compressive strength of concrete. The use of fly ash in the cementitious paste enhances the properties of hardened concrete, such as the early gain of strength and the hardness of the surface [6]. Fly ash also improves the fresh concrete by reducing the heat generated during mixing [5]. On the other hand, silica fume is added in low percentages to the concrete mix in order to reduce the permeability of the concrete by acting as a filler between the aggregate particles when mixed with the cement [21].
GGBS is a by-product of pig iron manufacturing, produced by charging limestone, iron ore, and coke into a blast furnace and heating them to about 1500 °C [19]. The outputs of this process are molten slag and molten iron, which separate automatically due to their different specific gravities. The molten slag floats on the surface of the melt because it is composed of low-density constituents such as silicates, lime, and alumina, while the molten iron sinks to the bottom. After separation, the slag is chilled rapidly using a high-pressure water technique that maximizes the hydraulic properties of the material; the cooled slag breaks into particles that are often smaller than 4.75 mm. These particles are then ground and dried in a rotating ball mill to a very fine powder, which is called GGBS. The published literature on applications of GGBS in concrete has increased significantly in the last two decades. According to Khan and Ghani [10], concrete made with a higher amount of GGBS exhibited a lower initial setting time and greater workability, because the small GGBS particles exert a plasticizing effect in the concrete. In another study, it was found that by replacing 70% of the cement with GGBS, the initial and final setting times of concrete decreased from 241 to 108 min and from 318 to 182 min, respectively [21]. Oner and Akyuz [19] found that the concrete compressive strength increases as the percentage of GGBS in the binder is increased; however, beyond about 55% GGBS, further additions do not affect the concrete strength, owing to the presence of idle GGBS particles acting as a filler between the binder particles [19]. It was therefore concluded that GGBS plays a fundamental role in increasing the strength of concrete as the curing period extends. However, the challenge associated with increasing the GGBS content is a decrease in the early strength of concrete, owing to the slow pozzolanic (latent hydraulic) reaction of the GGBS particles. To address this problem, a chemical activator such as lime is added to the mixture [8].
According to Berry and Malhotra [2], fly ash is an industrial waste produced by burning coal in coal-fired power plants. The coal is first pulverized in mills to obtain a very fine powder, which is then combusted in large boilers to generate heat and, in turn, the steam used to produce power. A glassy aluminosilicate forms when the coal minerals fuse into tiny spherical particles that are suspended in the exhaust gases; this material is called fly ash. Much research has focused on the potential of fly ash to improve both the fresh and hardened concrete properties. Many variables control the strength development of cementitious materials when fly ash is used, such as particle size, chemical composition, reactivity, temperature, and other curing conditions [7]. According to De Maeijer et al. [3], when 25% of the cement paste was replaced by ultra-fine fly ash, the concrete exhibited better workability, electrical resistivity, and chloride migration resistance than concrete made without fly ash; this effect was significantly increased when ultra-fine fly ash was used in similar specimens. However, when 50% or more of the cementitious paste was replaced by fly ash, the concrete exhibited brittle behavior. In contrast with concrete made with Portland cement only, concrete made with an optimal amount of fly ash exhibited higher compressive strength at higher temperatures [4]. According to Nath and Sarker [16], when 30–40% of the binder material was replaced by Class F fly ash, the shrinkage, sorptivity, and permeability of the concrete were reduced significantly.
According to Nochaiya et al. [17], silica fume is a by-product of the smelting process in the silicon and ferrosilicon industry. This material, often known as micro-silica or condensed silica fume, is a very fine non-crystalline material used as a partial replacement for cement in concrete. According to Amin et al. [1], replacing 25% of the cementitious material with silica fume reduced the slump from 385 to 345 mm. Also, when the percentage of silica fume was increased from 5 to 15%, the compressive strength of concrete at 28 days increased from 44.93 to 54.23 MPa [24]. Furthermore, 10% silica fume in concrete contributed to enhancing the bond strength and improving the durability of concrete, especially in high-performance concrete mixes [9]. Also, since silica fume consists of very fine particles, it acts as a filler when mixed with the cement in concrete; it therefore reduces the porosity of the binder and the permeability of the concrete, which protects the concrete from corrosion-related deterioration in the long run [22]. However, replacing more than 10% of the cementitious paste could diminish this benefit or even reverse it. According to Mazloom et al. [12], silica fume has a pronounced long-term effect on the creep of concrete, especially in high-performance mixes: as the percentage of silica fume increases to 15%, the creep of concrete decreases by 20–30%.
Many studies in the literature have investigated the use of two mineral admixtures to enhance the mechanical properties of concrete. For instance, Suda and Rao [23] conducted an experimental study to investigate the optimum rates of GGBS and silica fume in high-strength concrete. Nagaraj and Himabindu [15] investigated the behavior of concrete made with different percentages of GGBS and fly ash. Yazıcı [25] investigated the effect of two mineral admixtures, fly ash and GGBS, on the mechanical properties of geopolymer concrete. Mohan and Mini [14] developed an enhanced self-compacting concrete using various percentages of GGBS and silica fume. Reddy and Rao [20] achieved concrete compressive strengths in the range of 32.24–54.22 MPa using multiple percentages of GGBS and silica fume. Zhao et al. [26] conducted an experimental program to investigate the shrinkage of concrete made with fly ash and GGBS. A few articles investigated the effect of all three admixtures, fly ash, GGBS, and silica fume, on the mechanical properties of concrete, but without blending them in one mix. Kim et al. [11] introduced three concrete mix designs, using 50% fly ash, 50% GGBS, and 25% silica fume, to improve the compressive strength of concrete. Mohamed and Najm [13] conducted an experimental study to enhance the compressive strength of concrete using 10–40% fly ash, 5–20% silica fume, or 10–80% GGBS. To the authors' knowledge, none of the studies in the literature has investigated the compressive strength, flexural strength, tensile strength, and elastic modulus of high-strength concrete using three mineral admixtures together. The current study, for the first time, proposes a high-strength concrete, on the order of 90 MPa, using three mineral admixtures in one mix, fly ash, GGBS, and silica fume, to enhance not only the compressive strength but also the flexural strength, splitting tensile strength, and elastic modulus of concrete. The proposed mix design contains 30% GGBS, 20% fly ash, and 10% silica fume. Destructive tests, using the compression testing machine, and non-destructive tests, using the Schmidt hammer, were used to validate the compressive strength results of the cubes. The pulse velocity test was then used to evaluate the quality of the mix.
2 Methodology
The methodology of this study consists of five stages, namely scope definition, concrete mix design, preparation of specimens, laboratory test setups, and analysis of results.
Fig. 1 Experimental program of the study: ordinary and high-strength (HSC) concrete specimens tested in sets of six at 14 and 28 days
The scope of this study includes investigating the compressive strength and flexural strength, as well as the splitting tensile strength and elastic modulus, of concrete prepared with a high volume of GGBS, fly ash, and silica fume. Two concrete mixes were prepared, an ordinary concrete and a high-strength concrete with target compressive strengths of 35 and 90 MPa, respectively. The ordinary concrete was produced without mineral admixtures, while the high-strength concrete was produced using GGBS, fly ash, and silica fume at weight ratios of 30%, 20%, and 10%, respectively. The experimental program consisted of destructive tests, including compressive, tensile, and flexural strength tests, as well as non-destructive tests, including Schmidt hammer and ultrasonic tests. The destructive tests of each concrete mix involve testing the compressive strength of twelve cubes (100 mm), the flexural strength of six rectangular prisms (100 mm by 100 mm by 300 mm), the splitting tensile strength of six cylinders (150 mm by 300 mm), and the elastic modulus of six cylinders (150 mm by 300 mm). The non-destructive tests involve the Schmidt hammer test to investigate the compressive strength of the cubes and the ultrasonic test to investigate the quality of the concrete for the same specimens. The experimental program of the study is illustrated in Fig. 1. A summary of the specimens prepared in this research is shown in Table 1.
Two concrete mixes were prepared in the Construction Materials Laboratory at the American University of Sharjah, using ordinary Portland cement conforming to ASTM C150.
Two coarse aggregate sizes were employed in both mixes, which are 10 mm and
20 mm. Moreover, two fine aggregate types were utilized, which are washed sand and
dune sand. The water reducing admixture utilized in these two mixes was Glenium
Sky 502. The ratio of the 20 mm to 10 mm coarse aggregate weights was 40:60 in
both mixes. The properties of coarse and fine aggregates are given in Table 2. The
properties of the superplasticizer used in both concrete mixes are given in Table 3.
The binder material is comprised of 40% cement, 30% GGBS, 20% fly ash, and
10% silica fume. The mix design sheets of the ordinary and high-strength concrete
are given in Tables 4 and 5, respectively. The constituents of the mix, including the cement, water, aggregates, and mineral admixtures, were calculated per cubic meter as presented in Tables 4 and 5. To account for material losses during mixing, around 5% extra material was added to the calculated batch volume of 0.1056 m3; therefore, the volume used to prepare the molds of each mix was taken as 0.11 m3. The weight of each constituent in the trial batch was calculated by multiplying its content per cubic meter by 0.11.
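As a quick illustration of this batching step, the sketch below multiplies per-cubic-meter quantities by the 0.11 m3 trial volume; the per-m3 values shown are placeholders only, since Tables 4 and 5 are not reproduced here, and the binder split simply follows the 40/30/20/10 proportions stated above.

```python
# Minimal sketch of the trial-batch calculation described in the text.
# The per-cubic-meter quantities are illustrative placeholders, NOT the
# actual values of Tables 4 and 5 (which are not reproduced here).
TRIAL_VOLUME_M3 = 0.11  # 0.1056 m3 plus ~5% allowance for losses

mix_per_m3_kg = {
    "cement": 180,           # 40% of an assumed 450 kg/m3 binder content
    "GGBS": 135,             # 30% of binder
    "fly_ash": 90,           # 20% of binder
    "silica_fume": 45,       # 10% of binder
    "water": 160,
    "coarse_agg_20mm": 440,  # 40:60 split of an assumed coarse aggregate total
    "coarse_agg_10mm": 660,
    "fine_agg": 800,
}

batch_kg = {name: round(q * TRIAL_VOLUME_M3, 2) for name, q in mix_per_m3_kg.items()}
for name, mass in batch_kg.items():
    print(f"{name}: {mass} kg per {TRIAL_VOLUME_M3} m3 batch")
```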
The mixing process started by placing the coarse aggregates (20 mm and 10 mm gravel) in the mechanical rotating mixer. Then, the fine aggregates (dune sand and crushed sand) were added to the mix, followed by the cement and the mineral admixtures. The water was added according to the mix design sheets in Tables 4 and 5 to create the reactions between the aggregates and the cementitious materials. The chemical admixture "Glenium Sky 502" was then added to facilitate the mixing process; it was not added all at once, in order to avoid segregation or a reduction in the workability of the mix. There was no need to add extra water during mixing because the poured water was enough to facilitate the mixing process. After mixing, the fresh concrete was tested for its slump, air content, and temperature after 10 and 60 min. The molds were then cleaned, lubricated, and placed on a flat surface, and the fresh concrete was poured into the molds in three layers, each compacted with 25 strokes of a 16 mm rod. Any extra concrete on top of the mold was removed, and the surface was smoothed out. The molds were then kept at an ambient temperature of 18–22 °C for 24 h, after which the specimens were demolded and placed in a curing tank at a temperature of 27 °C until the date of testing. Six cubes of each mix were taken out of the curing tank after 14 days, subjected to the Schmidt hammer and ultrasonic tests, and then tested in compression using the compression testing machine. The rest of the specimens, including six cubes, six beams, and twelve cylinders from each mix, were taken out of the curing tank after 28 days and subjected to the various tests explained earlier in Table 1.
The compressive strength, flexural strength, and splitting tensile strength test setups followed BS EN 12390-3:2019, BS EN 12390-5:2019, and BS EN 12390-6:2009, respectively, while the elastic modulus test was done based on BS 1881: Part 5: 1970. The Schmidt hammer and ultrasonic pulse velocity tests were done according to the standard test procedures of BS EN 12504-2:2021 and BS EN 12504-4:2021, respectively.
Twelve cubes of each mix were tested in the compression testing machine: six after 14 days and six after 28 days from molding. The ordinary and high-strength concrete cubes were centered in the testing machine and loaded at a controlled rate of 0.25 MPa/s and 0.50 MPa/s, respectively, until failure.
The flexural strength test was conducted to examine the bending capacity of the prisms of each mix. Six prisms of each mix were tested after 28 days of mixing. Prior to testing, the 500 mm long prism was positioned as molded, in the horizontal direction, and centered above two supports at a 400 mm span. The specimen was then subjected to two point loads equidistant from the center. The ordinary and high-strength concrete prisms were loaded at a controlled rate of 0.04 MPa/s and 0.08 MPa/s, respectively, until failure. The load at failure was recorded and used to calculate the flexural strength of the prism as provided in Eq. (1). The experimental flexural strength was then compared with the target flexural strength calculated based on the CSA A23.3-19 Code using Eq. (2).
$f_{r(exp)} = \dfrac{F\,l}{b\,d^{2}}$  (1)
$f_{r(exp)}$: flexural strength of concrete based on experimental results.
F: load at failure.
l: distance between the supports.
b: width of the prism.
d: depth of the prism.
$f_{r(target)} = 0.6\sqrt{f'_{c}}$  (2)
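For readers who want to reproduce these two calculations, the short sketch below evaluates Eqs. (1) and (2); the failure load in the example is hypothetical, and units of N and mm are assumed so the results come out in MPa.

```python
import math

def flexural_strength_exp(load_n, span_mm, width_mm, depth_mm):
    """Eq. (1): f_r(exp) = F*l / (b*d^2), in MPa for N and mm inputs."""
    return load_n * span_mm / (width_mm * depth_mm ** 2)

def flexural_strength_target(fc_mpa):
    """Eq. (2), CSA A23.3-19 form: f_r(target) = 0.6 * sqrt(f'c), in MPa."""
    return 0.6 * math.sqrt(fc_mpa)

print(round(flexural_strength_target(35), 2))                # ~3.55 MPa (the reported target)
print(round(flexural_strength_exp(9400, 400, 100, 100), 2))  # ~3.76 MPa for a hypothetical 9.4 kN failure load
```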
The splitting tensile strength was tested on six cylinders of each mix after 28 days of mixing. The ordinary and high-strength concrete cylinders were centered in the testing machine and loaded at a controlled rate of 0.04 MPa/s and 0.08 MPa/s, respectively, until failure. The load at failure was recorded and used to calculate the splitting tensile strength of the cylinder as provided in Eq. (3).
$f_{ct(exp)} = \dfrac{2F}{\pi\,h\,d}$  (3)

where F is the load at failure, h is the length of the cylinder, and d is its diameter.
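A corresponding helper for Eq. (3) is sketched below; the example load is hypothetical, while the 150 x 300 mm cylinder dimensions are those used in this study.

```python
import math

def splitting_tensile_strength(load_n, length_mm, diameter_mm):
    """Eq. (3): f_ct(exp) = 2F / (pi * h * d), in MPa for N and mm inputs."""
    return 2 * load_n / (math.pi * length_mm * diameter_mm)

# Hypothetical 239 kN failure load on a 150 x 300 mm cylinder:
print(round(splitting_tensile_strength(239_000, 300, 150), 2))  # ~3.38 MPa
```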
The elastic modulus test was conducted on six cylinders of each mix after 28 days of mixing. As a first step, the specimen was placed in the testing machine and aligned with the center of thrust of the lower and upper bearings. The compressometer was then arranged on the specimen and centered with the machine, and the movable part of the bearing was rotated manually to achieve uniform seating. The specimen was then pre-loaded to about 75% of its ultimate load to check its alignment with the machine and ensure that the test was properly conducted. After adjusting the alignment, a load-controlled compression at a rate of 0.25 MPa/s was applied. The load at a longitudinal strain of 0.00005 m/m was recorded, followed by the strain at 40% of the ultimate load. The strain was calculated by dividing the longitudinal deformation by the effective gauge length. The actual elastic modulus of the specimen was then calculated as per Eq. (4), and the experimental modulus of elasticity was compared with the target elastic modulus calculated based on the CSA A23.3-19 Code using Eq. (5).
$E_{c(exp)} = \dfrac{S_{2} - S_{1}}{C - 0.00005}$  (4)

where $S_{2}$ is the stress corresponding to 40% of the ultimate load, $S_{1}$ is the stress at a longitudinal strain of 0.00005, and $C$ is the longitudinal strain at $S_{2}$.
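Equation (5) itself is not reproduced in the text; based on CSA A23.3-19 and the reported targets (26,622 MPa for the 35 MPa mix and 42,691 MPa for the 90 MPa mix), it is presumably E_c = 4500*sqrt(f'c) for normal-density concrete. The sketch below evaluates Eq. (4) with hypothetical stress and strain readings together with that presumed target formula.

```python
import math

def elastic_modulus_exp(s2_mpa, s1_mpa, c_strain):
    """Eq. (4): E_c(exp) = (S2 - S1) / (C - 0.00005), in MPa."""
    return (s2_mpa - s1_mpa) / (c_strain - 0.00005)

def elastic_modulus_target(fc_mpa):
    """Presumed Eq. (5), CSA A23.3-19 simplified form: E_c = 4500 * sqrt(f'c)."""
    return 4500 * math.sqrt(fc_mpa)

# Hypothetical readings: 1.5 MPa at a strain of 0.00005, 16.7 MPa (40% of ultimate) at a strain of 0.000612.
print(round(elastic_modulus_exp(16.7, 1.5, 0.000612)))  # ~27,046 MPa
print(round(elastic_modulus_target(35)))                # ~26,622 MPa
```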
The Schmidt hammer test was used as a non-destructive test to check the compressive strength of six cube specimens from each mix after 14 days and another six after 28 days of mixing. The test was conducted with a rebound hammer, a spring-loaded instrument that, when released, strikes a steel plunger in contact with the surface of the specimen. The compressive strength is predicted from the rebound distance of the hammer from the steel plunger. A chart supplied with the instrument provides relationships between the rebound value and the compressive strength of the specimen for different hammering angles.
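If one wished to automate reading such a chart, linear interpolation over a digitized calibration curve is one option; the rebound-strength pairs below are purely hypothetical stand-ins for the manufacturer's chart at one hammering angle.

```python
# Hypothetical rebound-number -> compressive-strength calibration points (MPa),
# standing in for the instrument's chart at a single hammering angle.
CALIBRATION = [(20, 14.0), (30, 25.0), (40, 40.0), (50, 58.0)]

def strength_from_rebound(rebound):
    """Linearly interpolate compressive strength from the calibration points."""
    points = sorted(CALIBRATION)
    if rebound <= points[0][0]:
        return points[0][1]
    if rebound >= points[-1][0]:
        return points[-1][1]
    for (r0, s0), (r1, s1) in zip(points, points[1:]):
        if r0 <= rebound <= r1:
            return s0 + (s1 - s0) * (rebound - r0) / (r1 - r0)

print(round(strength_from_rebound(37), 1))  # ~35.5 MPa with these made-up points
```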
The ultrasonic pulse velocity test is a non-destructive test that was also used to check the quality of six cube specimens from each mix after 14 days and another six after 28 days of mixing. In this test, a pulse of vibrations is generated by a transducer held in contact with the surface of the specimen and is converted into an electrical signal by a second transducer. Based on the transit time of the pulse, the quality of the concrete mix was evaluated. Greater velocities indicate superior material quality and continuity, whereas slower velocities might indicate concrete with numerous cracks or voids. A specimen with a pulse velocity of more than 4000 m/s is considered of good quality. In general, the speed at which pulses travel through concrete increases with its moisture content; therefore, the pulse velocity of wet concrete might be up to 2% faster than that of the same specimen when dry.
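A trivial helper for classifying readings against these thresholds might look like the following; the "excellent" label above 4500 m/s mirrors the wording used later in the results and is an assumed boundary, not a standardized one.

```python
def classify_upv(velocity_m_s):
    """Classify concrete quality from the ultrasonic pulse velocity (m/s)."""
    # > 4500 m/s is treated as "excellent", following the results discussion (assumed label);
    # > 4000 m/s is the "good quality" threshold stated in the text.
    if velocity_m_s > 4500:
        return "excellent"
    if velocity_m_s > 4000:
        return "good"
    return "questionable (possible cracks or voids)"

print(classify_upv(5146))  # excellent
print(classify_upv(3800))  # questionable (possible cracks or voids)
```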
Before the fresh concrete was cast into the molds, it was tested for its slump. The slump test results of both mixes are given in Table 6. The results indicate a collapse slump, which means that the workability of both concrete mixes is high. The slump height as well as the slump radius of the high-strength concrete mix were found to be slightly higher than those of the ordinary concrete mix. The fresh concrete was also tested for its heat of hydration: the temperature of each mix was recorded immediately after pouring, after 10 min, and after 60 min. The heat of hydration test results are provided in Table 7. The temperature of the high-strength concrete mix was found to be higher, as expected, due to the presence of silica fume and GGBS. The fresh concrete was also tested for its density and air content using the entrapped air container; the density and air content test results are shown in Table 8.
The hardened concrete cubes were subjected to non-destructive Schmidt hammer and ultrasonic tests. In the destructive testing part of this study, the cubes were tested in compression, the prisms were tested in flexure, and the cylinders were subjected to the splitting tensile test or the elastic modulus test. The destructive compressive
strength test results of the cube specimens of both mixes are provided in Table 9. The average compressive strength of the ordinary cube specimens at 14 and 28 days was found to be 34.5 MPa and 41.7 MPa, respectively. The experimental compressive strength at 28 days was on average 19.2% higher than the target compressive strength of 35 MPa. On the other hand, the average compressive strength of the high-strength cube specimens at 14 and 28 days was found to be 85.7 MPa and 96.5 MPa, respectively. The experimental-to-theoretical compressive strength ratio of the high-strength specimens was found to be 1.073. The compressive strength of the cube specimens of both mixes obtained by the Schmidt hammer test is shown in Table 10.
The average compressive strength of the ordinary cube specimens at 14 and 28 days was found to be 31.1 MPa and 37.4 MPa, respectively. The experimental compressive strength at 28 days was on average 6.7% higher than the target compressive strength of 35 MPa. These results are 10.5% lower than the experimental results from the destructive compression test. On the other hand, the average compressive strength of the high-strength cube specimens at 14 and 28 days was found to be 81.2 MPa and 91.0 MPa, respectively. The experimental-to-theoretical compressive strength ratio of the high-strength specimens was found to be 1.011. These results are 5.8% lower than the experimental results from the destructive compression test. The ultrasonic pulse velocity test results of both mixes' specimens, after 14 and 28 days, are given in Table 11. The pulse velocity of the ordinary cube specimens at 14 and 28 days averaged 5,146 m/s and 5,085 m/s, respectively. The pulse velocity of all ordinary cube specimens was greater than 4500 m/s, which means that the quality of the ordinary concrete mix is excellent. On the other hand, the pulse velocity of the high-strength cube specimens at 14 and 28 days averaged 4,723 m/s and 4,624 m/s, respectively. The pulse velocity of all high-strength cube specimens was also greater than 4500 m/s, which means that the quality of the high-strength concrete mix is excellent as well, although somewhat lower than that of the ordinary concrete mix.
The flexural strength test results of the prism specimens of both mixes are provided in Table 12. The average flexural strength of the ordinary prisms at 28 days was found to be 3.75 MPa, on average 5.7% higher than the target flexural strength of 3.55 MPa. On the other hand, the average flexural strength of the high-strength prisms at 28 days was found to be 6.17 MPa, giving an experimental-to-target flexural strength ratio of 1.085. The splitting tensile strength test results of the specimens from both mixes are provided in Table 13. The average splitting tensile strength of the ordinary concrete cylinders at 28 days was found to be 3.38 MPa, while that of the high-strength cylinders at 28 days averaged 5.38 MPa. The splitting tensile to compressive strength ratios of the ordinary and high-strength concrete mixes were found to be 0.097 and 0.06, respectively. The elastic modulus test results of the specimens of the ordinary and high-strength concrete mixes are provided in Tables 14 and 15, respectively. The elastic modulus of the ordinary concrete cylinders at 28 days averaged 27,051 MPa, which is 1.6% higher than the target elastic modulus of the mix, 26,622 MPa. The modulus of elasticity of the high-strength concrete cylinders at 28 days averaged 43,240 MPa, which is 1.3% higher than the target elastic modulus of the mix, 42,691 MPa.
Fig. 2 Comparison between destructive and non-destructive compression test results of the ordinary
concrete cubes at 28 days
Fig. 3 Comparison between destructive and non-destructive compression test results of the high-
strength concrete cubes at 28 days
Fig. 4 Comparison between the ultrasound test results of ordinary and high-strength concrete cubes
at 28 days
Acknowledgements The experimental work of this research was conducted in the Construction
Materials Laboratory at the American University of Sharjah, under the supervision of the Senior
Laboratory Instructor, Eng. Arshi Faridi, and the help of the Laboratory technician, Eng. Mohamed
Ansari.
References
1. Amin M, Zeyad AM, Tayeh BA, Agwa IS (2022) Effect of ferrosilicon and silica fume on
mechanical, durability, and microstructure characteristics of ultra high-performance concrete.
Constr Build Mater 320:126233
2. Berry EE, Malhotra VM (1980) Fly ash for use in concrete: a critical review. J Proc 77(2):59–73
3. De Maeijer PK, Craeye B, Snellings R, Kazemi-Kamyab H, Loots M, Janssens K, Nuyts G
(2020) Effect of ultra-fine fly ash on concrete performance and durability. Constr Build Mater
263:120493
4. Hela R, Tazky M, Bodnarova L (2016) Possibilities of determination of optimal dosage of power plant fly ash for concrete. Jurnal Teknologi 78(5–3)
5. Helmuth R (1987) Fly ash in cement and concrete, SP-40. Portland Cement Association, 196 pp
6. Hemalatha T, Sasmal S (2019) Early-age strength development in fly ash blended cement
composites: Investigation through chemical activation. Mag Concr Res 71(5):260–270
7. Idorn GM, Henriksen KR (1984) State of the art for fly ash uses in concrete. Cem Concr Res
14(4):463–470
8. Imbabi MS, Carrigan C, McKenna S (2012) Trends and developments in green cement and
concrete technology. Int J Sustain Built Environ 1(2):194–216
9. Khan MI, Siddique R (2011) Utilization of silica fume in concrete: Review of durability
properties. Resour Conserv Recycl 57:30–35
10. Khan KM, Ghani U (2004) Effect of blending of Portland cement with ground granulated blast furnace slag on the properties of concrete. In: Our World in Concrete & Structures. Singapore Concrete Institute, Singapore, pp 329–334
11. Kim JE, Park WS, Cho SH, Kim DG, Noh JM (2012) An experimental study on mechanical properties for ternary high performance concrete with fly-ash, blast furnace slag, silica fume. Appl Mech Mater 204:3699–3702
12. Mazloom M, Ramezanianpour AA, Brooks JJ (2004) Effect of silica fume on mechanical
properties of high-strength concrete. Cement Concr Compos 26(4):347–357
13. Mohamed OA, Najm OF (2017) Compressive strength and stability of sustainable self-
consolidating concrete containing fly ash, silica fume, and GGBS. Front Struct Civ Eng
11(4):406–411
14. Mohan A, Mini KM (2018) Strength and durability studies of SCC incorporating silica fume
and ultra fine GGBS. Constr Build Mater 171:919–928
15. Nagaraj K, Himabindu P (2017) Experimental Investigations of High Strength Concrete Using
GGBS & Fly ash. International Journal of Research Sciences and Advanced 2(18):170–181
16. Nath P, Sarker P (2011) Effect of fly ash on the durability properties of high strength concrete.
Procedia Engineering 14:1149–1156
17. Nochaiya T, Wongkeo W, Chaipanich A (2010) Utilization of fly ash with silica fume and properties of Portland cement–fly ash–silica fume concrete. Constr Build Mater 768–774
18. Obla KH (2009) What is green concrete? The Indian Concrete Journal 24:26–28
19. Oner A, Akyuz S (2007) An experimental study on optimum usage of GGBS for the compressive strength of concrete. Cem Concr Compos 505–514
20. Reddy SVB, Rao PS (2016) Experimental studies on compressive strength of ternary blended
concretes at different levels of micro silica and ggbs. Materials Today: Proceedings 3(10):3752–
3760
21. Siddique R (2007) Waste Materials and By-Products in Concrete. Springer Science & Business
Media, Berlin
22. Siddique R (2011) Utilization of silica fume in concrete: review of hardened properties. Resour Conserv Recycl 923–932
23. Suda VR, Rao PS (2020) Experimental investigation on optimum usage of Micro silica and
GGBS for the strength characteristics of concrete. Materials Today: Proceedings 27:805–811
24. Giner VT, Ivorra S, Baeza FJ, Zornoza E, Ferrer B (2011) Silica fume admixture effect on the
dynamic properties of concrete. Constr Build Mater 25(8):3272–3277
25. Yazıcı H, Yiğiter H, Karabulut AŞ, Baradan B (2008) Utilization of fly ash and ground gran-
ulated blast furnace slag as an alternative silica source in reactive powder concrete. Fuel
87(12):2401–2407
26. Zhao Y, Gong J, Zhao S (2017) Experimental study on shrinkage of HPC containing fly ash
and ground granulated blast-furnace slag. Constr Build Mater 155:145–153
Nano-Modified Concrete Incorporating
Phase Change Material Under Cold
Temperature
1 Introduction
2 Experimental Procedure
General use (GU) cement, complying with CAN/CSA-A3001-18, was used as the
main binder. A commercially available aqueous nano-silica sol (NS) with 50% solid
content of fully dispersed SiO2 was added in all mixtures at a constant dosage of 4% as
a partial replacement of cement based on previous studies by the authors [1, 28]. The
specific gravity, mean particle size, and surface area of the nano-silica are 1.4, 35 nm,
and 80 m2/g, respectively. Furthermore, powdered nano-silica (P-NS) and montmorillonite nano-clay (halloysite) (P-NC) were added to selected mixtures at a constant dosage of 2% by mass of cement. The nano-silica powder has a purity of 99.5%, an average specific surface area of 160 m2/g, an average particle size of 20–30 nm, and a density of 2.21 g/cm3, whilst the nano-clay, which consists of a two-layered aluminosilicate with a predominantly hollow nanotubular structure, has a purity of 99%, an average specific surface area of 70 m2/g, an average particle size of 40–80 nm, and a density of 1.98 g/cm3. For all mixtures, the total binder content and water-to-binder ratio (w/b)
were fixed at 400 kg/m3 and 0.32, respectively. These proportions achieved balanced
performance in terms of fresh, hardened, and durability properties of concrete cast
and cured under normal and low freezing temperatures (23 to −20 °C) [1, 16, 28].
Natural gravel with maximum size of 9.5 mm and well-graded river sand with
fineness modulus of 2.9 were used in this study as coarse and fine aggregates. The
specific gravity and absorption were 2.65 and 2%, respectively, for the gravel, and 2.53 and 1.5%, respectively, for the sand. A high-range water-reducing admixture (HRWRA), polycarboxylic acid-based, complying with ASTM C494/C494M-19 Type F, was used.
(Figure: freezing temperature, −10 to −30 °C, versus concentration of CNAI, g/100 g water)
2.2 Procedures
According to previous studies by the authors [28], mixing and casting of all mixtures
were conducted at −5 °C in an environmental chamber, where solid constituents of concrete were stored for 24 h prior to mixing. Cold water conditioned at 5 ± 1 °C was used in mixing to mimic the conditions of tap water during winter. These
temperatures simulate minimal to low heat conditioning for the main ingredients of
concrete and formwork during winter. To ensure homogenous dispersion of mixtures’
constituents, a particular sequence of mixing, which was developed based on trial
batches conducted in previous studies by the authors [1, 28], was followed. The NS,
air-entraining admixture, and HRWRA were added to 2/3 of the mixing water (solu-
tion I) whilst the CWAS was added to the other 1/3 of the mixing water (solution II).
Both solutions were stirred for 45 s to achieve homogenous dispersion. Afterwards,
the aggregates were mixed with 1/2 of solution I for 30 s. Cement and nano-powder
saturated with PCM, if any, were then added and mixed with the aggregates for
another 30 s. The remaining 1/2 of solution I was added to the mixture and the
mixing continued for 30 s. Finally, solution II was added to the pan, and mixing
continued for another 2 min. Concrete mixtures were poured in moulds, which were
compacted by a 60 Hz vibrating table till air bubbles stopped migrating to the top
of the specimens’ surface. After casting, the specimens were immediately trans-
ferred and kept inside another environmental chamber which was set at -15 °C, using
the protection methods described below. To simulate wind conditions, the curing
chamber was equipped with a fan to circulate air at a speed of 25 km/h. After 24 h,
all specimens were demoulded and kept covered using the protection methods until
testing. This freezing temperature was selected to mimic the average and practical
winter temperature for construction in many regions in North America and Europe.
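To make the timing of this mixing sequence easier to follow, the sketch below simply encodes the stated steps and sums the in-mixer time; the step names are paraphrased from the text, and the listing is a reading aid rather than part of the original study.

```python
# Mixing sequence as described above (durations in seconds).
# Solution I = NS + air-entraining admixture + HRWRA in 2/3 of the mixing water;
# Solution II = CWAS in the remaining 1/3 of the mixing water.
steps = [
    ("stir solutions I and II separately", 45),
    ("mix aggregates with 1/2 of solution I", 30),
    ("add cement (and PCM-saturated nano-powder, if any)", 30),
    ("add the remaining 1/2 of solution I", 30),
    ("add solution II and continue mixing", 120),
]

in_mixer_s = sum(duration for _, duration in steps[1:])  # time after aggregates enter the mixer
for name, duration in steps:
    print(f"{name}: {duration} s")
print(f"total in-mixer mixing time: {in_mixer_s} s")  # 210 s
```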
Two methods of hybrid protection, external (control) and internal, were adopted
during curing under the adopted freezing temperature. For external protection, insu-
lation blankets were used to cover the specimens. The blankets were selected based on
the R-value as recommended in ACI-306R-16 to protect ordinary concrete sections
above ground level with thickness ≤200 mm for 7 days. However, a reduction factor
of 25% was applied to the R-values, based on a previous study by the authors [29];
thus, the R-value of the insulation blankets used was 6. For the internal curing
technique, the paraffin-based PCMs were integrated into the concrete through the impreg-
nation method discussed in the previous section, with powder nano-materials as
hosts, and the samples were additionally covered with insulation blankets at an R-value of
3 (a 50% reduction relative to the R-value of the insulation blanket utilized in the external
protection). In addition, a 0.3 mm reflective layer was attached to the insulation to
improve heat storage at this deep-freezing temperature for both protection methods.
2.4 Testing
The hardening rates of the different mixtures were assessed based on the initial
and final setting times, which were determined according to ASTM C403/C403M-
16. This test was performed inside the environmental chamber set at the curing
temperature and employing the protection methods described earlier. Both setting
times were determined by measuring the penetration resistance of standard needles
into 150 mm cubes of the mortar fraction of each mixture (portion passing sieve #4
[4.75 mm]). The early- and later-age compressive strength of the different mixtures
were determined by testing triplicate cylinders (100 × 200 mm), as per ASTM
C39/C39M-20, after 7 and 28 days of curing, respectively under the adopted curing
regime and protection methods. The compressive strength cylinders were stored in
lab environment at 22 ± 1 °C for 3 h prior to testing to ensure the absence of any
ice inside the specimens as well as to capture the effect of PCMs in liquid state on
concrete mechanical properties.
In addition, after 28 days, the fluid transport properties of the mixtures were
assessed based on absorption test, which was described by [27]. Triplicate concrete
As shown in Fig. 2, the initial (IST) and final (FST) setting times varied between 1.4
and 2.3 h and between 5 and 6.9 h, respectively, which coincides with the typical hardening times
(4 to 8 h) of the conventional concrete cast and cured under normal temperatures [24].
This highlights the practicality of these mixtures, considering the protection methods
implemented herein, for construction applications under freezing temperatures down
to −15 °C. The hybrid (internal curing + insulation) protection method led to faster
rates of concrete hardening compared to the conventional (insulation only) protection
method. For example, mixture P-NS50 showed 37% and 28% reductions in the initial and
final setting times, respectively, relative to those of the control mixture protected using
insulation blankets only. This was associated with the higher amounts of CH (27 to
71%) produced up to the first day of curing in mixtures protected using the hybrid
method (Fig. 3). This can be attributed to the presence of an extra 2% of powdered
nanoparticles (P-NS or P-NC) in the hybrid protected mixtures, which acted as extra
sites for hydration products to precipitate on, thus promoting higher rates of skeletal rigidity development.
The addition of P-NS as host for the PCM markedly shortened both IST and FST
compared to counterpart mixtures incorporating P-NC when cured under −15 °C.
For instance, replacing the P-NC in mixture P-NC100 with P-NS to produce mixture
P-NS100 shortened the IST and FST by 17 and 9%, respectively. This can
be attributed to the relatively smaller particle size (25 nm) and higher surface area
(160 m2/g) of the P-NS in comparison with the P-NC (60 nm; 70 m2/g); thus, P-NS
attained higher capability of providing the hydration products with nucleation sites
to precipitate on; consequently, higher hardening rates. These trends conform to the
thermal analysis results as depicted in Fig. 3, since mixtures incorporating PCM in
P-NS comprised the highest calcium hydroxide (CH) content after the first day of
curing, indicating accelerated hydration development of the paste compared to the
counterpart mixtures incorporating PCM in P-NC. For example, regardless of the
[Bar chart: IST and FST (minutes) for mixtures Control, P-NS50, P-NS100, P-NC50, and P-NC100]
Fig. 2 Initial (IST) and final (FST) setting times of the mixtures (Note Error bars represent standard
deviations)
[Line chart: calcium hydroxide content (%) versus time (days, 0–30) for mixtures Control, P-NS50, P-NS100, P-NC50, and P-NC100]
Fig. 3 Thermogravimetry results for the CH contents with time (Note The table shows exemplar
normalized ratios)
saturation level, the CH contents of P-NS mixtures were 10 to 14% higher compared
to P-NC modified mixtures after the first day of curing. This conforms to previous
studies, which reported the superior capability of nano-silica to catalyse the cement
hydration compared to nano-clay [25]. Also, regardless of the host material, the
increase in the dosage of the PCM led to a noticeable increase in the setting times of
the concrete mixtures. For instance, mixtures incorporating nanoparticles at 100%
PCM saturation level had around 16% longer setting times compared to that of the
counterpart mixtures comprising nanoparticles at 50% level of impregnation. This
can be linked to the tendency of nanoparticles to agglomerate at higher saturation level
with PCM, which might have partially impeded the nucleation effect of nanoparticles.
Also, the existence of larger amount of PCM led to higher probability of leakage,
where the leaked PCM reduced the resistance of mortar to penetration by standard
needles. This was substantiated by the lower amount of CH produced in mixtures
with 100% PCM compared to that in the counterpart mixtures with lower dosage of
PCM (50%) after 1 day of curing at -15 °C (Fig. 3).
As shown in Fig. 4, the compressive strength of the different mixtures was assessed
at early- and late-ages. The 7 and 28 days compressive strength of the mixtures
varied between 20 and 32 MPa and 32 to 47 MPa, respectively. The strength range of
the mixtures complies with the strength requirement of concrete (20 to 45 MPa) for
various construction applications such as concrete pavements, sidewalks, buildings,
bridges as well as repair [21]. These values indicate a sufficient hydration develop-
ment of the different mixtures, which can be ascribed to the nucleation, filler, and
delayed pozzolanic effects. The latter started after 7 days of curing at the adopted
freezing temperature. It is worth noting that ACI 306R-16 stipulates that the strength
of concrete cast in cold weather should achieve at least 3.5 and 24.5 MPa before
being exposed to one cycle and multiple freezing–thawing cycles, respectively. The
compressive strength of all hybrid protected mixtures surpassed these limits after
7 days of curing under -15 °C.
Changing the protection method had a noticeable influence on the strength devel-
opment of concrete mixtures cured under −15 °C. After 7 days of curing, the
compressive strength of concrete protected by nano-materials impregnated with PCM
was higher than that of similar concrete covered with conventional insulation. For
instance, mixtures incorporating nano-powder as a host to PCM at 50% saturation
level achieved average compressive strength values 63% higher than that of the
control mixture, which was 20 MPa after 7 days of curing. This complies with the
higher CH contents in the mixtures protected using the hybrid method up to 7 days of
curing (Fig. 3). For instance, the normalized CH contents of mixture P-NS50 relative
to the control mixture were 1.71, 1.67, and 1.47 after 1, 3, and 7 days of curing, respec-
tively. This can be attributed to the incorporation of the PCM, which acted
as an internal heating source that temporarily raised the internal temperature of
the concrete; thus, it contributed to boosting hydration and strength development, espe-
cially at early-age, which also contributed to higher late-age strength. For instance,
mixture P-NS50 attained 47% higher strength compared to that of the control mixture
after 28 days of curing at −15 °C. In addition to the early-age hydration develop-
ment, this could be attributed to the filler effect and delayed pozzolanic activity of
powder nanoparticles, producing more secondary calcium silicate hydrate (C-S–H)
[Bar chart: 7- and 28-day compressive strength (MPa) for mixtures Control, P-NS50, P-NS100, P-NC50, and P-NC100]
Fig. 4 Compressive strength of the mixtures at 7 and 28 days (Note Error bars represent standard
deviations)
compared to the control mixture. Indeed, the abundance of CH due to the boosted
nucleation effect of P-NS and P-NC, as mentioned previously, further activated the
pozzolanic reactivity of the added 2% nanoparticles, consequently yielding higher strength
values. This is substantiated by the TG results, as after 7 days, the hybrid protected
mixtures showed higher rates of continual consumption of CH up to 28 days. For
instance, the normalized CH contents in mixture P-NS50 relative to that in the control
mixture were 1.47, 1.24, and 0.95 after 7, 14, 28 days, respectively.
Irrespective of the level of nanoparticles saturation, the incorporation of P-NS as a
host to the PCM showed a relatively enhanced influence on the strength development
of concrete mixtures at early-age compared to the counterpart mixtures incorporating
P-NC. For instance, the incorporation of P-NS in the concrete mixtures instead of
P-NC led to an average increase of 9% in the early-age strength. This was explained
by the thermal analysis results as depicted in Fig. 3, since mixtures incorporating
P-NS comprised higher calcium hydroxide (CH) contents, indicating accelerated
hydration development of the cement paste up to 7 days of curing. For example, the
normalized CH contents of mixture P-NS50 compared to the counterpart mixtures
P-NC50 were 1.17, 1.21, and 1.11 after 1, 3, and 7 days of curing, respectively.
This can be attributed to the previously mentioned higher nucleation effect of P-NS
in comparison with the P-NC. Thus, P-NS attained higher capability of providing
the hydration products with nucleation sites to precipitate on in addition to better
distribution of the heat emitted from the PCM within nano-silica particles (P-NS).
Similar to early-age results, P-NS mixtures achieved higher late-age strength
values compared to P-NC, but to a greater extent. For instance, the addition of P-NS in
the concrete mixtures instead of P-NC led to an average increase of 13% in the 28-
day compressive strength. This coincided with the higher rates of CH consumption
3.3 Absorption
The penetrability of the different concrete mixtures was assessed based on a fluid
absorption test after 28 days of curing under the adopted curing temperature and
protection methods. The results ranged from 1.8 to 2.6% (Fig. 5). Complying with the
previous results, mixtures protected by PCM as an internal curing aid achieved lower
absorption values compared to that of the control mixture, which was protected using
an insulation blanket only. For instance, the mixture incorporating P-NC impregnated
with PCM at 50% saturation level achieved a decrease of 19% in the absorption value
compared to that of the control mixture, which was 2.6%. This can be ascribed to
the better microstructural development and strength grade of concrete protected by
the hybrid system, as discussed in the previous sections.
Similar to the compressive strength trends, regardless of the dosage of the PCM,
the utilization of P-NS as a host to the PCM resulted in better resistance to ingress
of fluids in comparison with P-NC. For instance, the replacement of nano-clay in
mixture P-NC50 with nano-silica to produce mixture P-NS50 led to lower absorption
value by 14%. This can be ascribed to the previously mentioned early-age hydration
development, and pozzolanic and filler effects of nano-silica, which led to more
densification in the microstructure of concrete, and consequently better resistance to
fluid penetration. On the other hand, the absorption of concrete increased
with increasing amounts of PCM in the nano-powders. This might be attributed
to the additional pores, which were created due to the leakage of PCM at higher
dosages, thus facilitating the ingress of fluids.
[Bar chart: 28-day absorption (%) for mixtures Control, P-NS50, P-NS100, P-NC50, and P-NC100]
Fig. 5 Absorption results of the mixtures at 28 days (Note Error bars represent standard deviations)
4 Conclusions
Acknowledgements The authors highly appreciate the financial support from Mitacs and City of
Winnipeg. The IKO Construction Materials Testing Facility at the University of Manitoba, in which
these experiments were conducted, has been instrumental to this research.
References
15. Fernandes F, Manari S, Aguayo M, Santos K, Oey T, Wei Z, Sant G (2014) On the feasibility of
using phase change materials (PCMS) to mitigate thermal cracking in cementitious materials.
Cement Concr Compos 51:14–26
16. Ghazy A, Bassuoni MT, Shalaby A (2016) Nano-modified fly ash concrete: a repair option for
concrete pavements. ACI Mater J 113(2):231–242
17. Karagol F, Demirboga R, Khushefati WH (2015) Behavior of fresh and hardened concretes
with antifreeze admixtures in deep-freeze low temperatures and exterior winter conditions.
Constr Build Mater 76:388–395
18. Korhonen CJ, Cortez ER, Charest BA (1992) Strength development of concrete cured at low
temperature. Concr Int 14(12):34–39
19. Li X, Chen H, Li H, Liu L, Lu Z, Zhang T, Duan WH (2015) Integration of form-stable paraffin/
nanosilica phase change material composites into vacuum insulation panels for thermal energy
storage. Appl Energy 159:601–609
20. Liu F, Wang J, Qian X (2017) Integrating phase change materials into concrete through
microencapsulation using cenospheres. Cement Concr Compos 80:317–325
21. MacGregor JG, Wight JK, Teng S, Irawan P (1997) Reinforced concrete: mechanics and design,
3rd edn. Prentice Hall, Upper Saddle River, NJ, USA
22. Memon SA, Cui H, Lo TY, Li Q (2015) Development of structural-functional integrated
concrete with macro-encapsulated PCM for thermal energy storage. Appl Energy 150:245–257
23. Meshgin P, Xi Y (2012) Effect of phase-change materials on properties of concrete. ACI Mater
J 109(1):72–80
24. Neville AM (2011) Properties of concrete. Prentice Hall, London, UK
25. Paul SC, Van Rooyen AS, van Zijl GP, Petrik LF (2018) Properties of cement-based composites
using nanoparticles: a comprehensive review. Constr Build Mater 189:1019–1034
26. Ratinov VB, Rozenberg TI (1996) Antifreezing admixtures. Concrete admixtures handbook,
2nd edn. William Andrew Publisher, New York, USA, pp 740–799
27. Tiznobaik M, Bassuoni MT (2017) A test protocol for evaluating absorption of joints in concrete
pavements. J Test Eval 46(4):1636–1649
28. Yasien AM, Bassuoni MT (2022) Nano-modified concrete at sub-zero temperatures: experi-
mental and statistical modelling. Mag Concr Res 74(1):2–21
29. Yasien AM, Bassuoni MT, Abayou A, Ghazy A (2021a) Nano-modified concrete as repair
material in cold weather. ACI Mater J 118(2)
30. Yasien AM, Bassuoni MT, Ghazy A (2021) Concrete incorporating nanosilica cured under
freezing temperatures using conventional and hybrid protection methods. J Mater Civ Eng
33(4):04021046
Challenges for the Development
of Artificial Intelligence Models
to Predict the Compressive Strength
of Concrete Using Non-destructive Tests:
A Review
1 Introduction
Today, in every part of the world, structures such as small and tall buildings, bridges,
dams, tunnels, canals, reservoirs, and tanks are made of reinforced concrete (RC).
The widespread use of RC in construction is due to its advantages that include
availability and low cost of materials, long-life operation compared to other building
materials such as steel or wood, high rigidity, and less costly maintenance [1, 2].
Most existing RC structures, after a period of time from their construction, need to
be evaluated for various reasons such as mistakes during the design and construction
process, exposure to harsh environmental conditions, evaluation of damage after an
earthquake, upgrading of existing structures due to development, and structure health
monitoring [3–6]. The accurate prediction of behavior requires a good estimation of
the in-place mechanical properties of concrete, one of the main concerns of engineers
and researchers. Extensive research has been conducted around the world to address
this challenge, namely that concrete is heterogeneous, made of different combinations
of constituent materials and prepared with unique features for each individual project.
The diversity and heterogeneity of concrete materials lead to an inaccurate prediction
of material properties as well as the performance and behavior of concrete members.
To evaluate RC structures, various parameters are examined, the most important
of which is the exact value of compressive strength of concrete (f’c). For this purpose,
engineers conduct various types of tests: completely non-destructive testing (NDT),
semi-destructive testing, and completely destructive testing (Table 1) [7–9]. These
methods can be used for both new concrete and also for concretes that have been in
place for a long time.
Rebound hammer test (known as Schmidt hammer test), ultrasonic pulse velocity
(UPV) test, penetration resistance test (known as the Windsor probe test), pull-out
test, and core test are the methods that are used for evaluating the compressive strength
A rebound hammer, also known as a Schmidt hammer (or Swiss hammer) designed
by Ernst Schmidt in 1945, is an instrument with a spring-loaded mass. The test
hammer hits the concrete at defined energy (Fig. 1). Its rebound is dependent on the
hardness of the concrete surface and is measured by the device. In other words, the
compressive strength of concrete is estimated indirectly based on surface hardness.
For this device, the correlation curves were developed between the compressive
strength of standard cube samples and the rebound number. However, according
to studies and reports, the correlation is not unique, and it should be checked and
calibrated according to different devices and different test conditions [10–12]. The
rebound hammer test is standardized in ASTM C805 and BS 1881: Part 202. It is
important to mention that only the test procedure is given, and no correlation curve
or equation is provided (except in the Chinese standard) [13, 14].
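As a minimal illustration of such a device- and site-specific calibration (not a curve taken from any standard), a power-law relation between rebound number R and core strength can be fitted by least squares in Python; the data points below are purely hypothetical.

import numpy as np

rebound = np.array([22, 26, 30, 34, 38, 42], dtype=float)    # rebound numbers (hypothetical)
strength = np.array([14.0, 19.0, 25.0, 31.0, 38.0, 46.0])    # core strengths in MPa (hypothetical)

# Fit f_c = a * R**b in log space; the result is only valid within the tested range.
b, log_a = np.polyfit(np.log(rebound), np.log(strength), 1)
a = np.exp(log_a)
print(f"f_c is approximately {a:.3f} * R^{b:.2f}")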
The rebound hammer test can be conducted in three different directions; vertical-
downward, horizontal, and vertical-upward (Fig. 2). Figure 3 shows an example of
the resulting calibration curves [15]. The curves shown in Fig. 3 were developed for
concrete with an age of 14 to 56 days and using 150 mm concrete cubic specimens.
The simplicity of the test and the low cost of the machine have made it one of
the most common methods of assessing the compressive strength of concrete. Over
the past few decades, however, a large body of research has shown that the test is not
always accurate and cannot be used on its own to determine the compressive strength
of concrete. The rebound hammer test only provides information about the concrete
surface within a depth of 30 to 50 mm, and these data are not representative of the
whole member [7, 10, 16–20]. In other words, considering that concrete is a heterogeneous
material and that the cover of reinforced concrete members is typically about 5 cm thick, it can be
concluded that the test indirectly estimates the compressive strength of concrete
based on the surface hardness of the cover of RC members. Several factors such
as surface finishing, aggregate proportions, maximum size of aggregate, humidity,
concrete maturity, dimension and location of the reinforcement, concrete mixture
properties, age of concrete, distance from the free edges have an influence on the
results. Table 2 gives the most important factors affecting rebound hammer tests
considers one or two influence factors in the rebound test and cannot include all of
these factors. Research has shown that the compressive strength of concrete cannot
be directly correlated by a single function [10, 21]. Therefore, a model with multiple
functions is needed. Thus, the proposed equations and curves can only be used
within the boundary conditions (influence factors) of the experiments from which they were derived.
Unlike the Schmidt hammer, which determines the strength based on the hardness
of the concrete surface, the UPV test evaluates the internal properties of the concrete
(Fig. 4). It is worth mentioning that the UPV has traditionally been used for the
quality control of materials such as concrete. UPV testing is utilized to evaluate
the integrity of structural concrete by measuring the velocity of compressive stress
waves along a specific direction to evaluate the density and elastic properties of the
material [22, 23]. As shown in Fig. 5, three configurations are used for the UPV test
on RC members; direct, semi-direct, and indirect. The pulse velocity technique is
an efficient method to evaluate the quality of the concrete because it just depends
on the elastic properties of the material and is independent of the geometry of the
specimen. In the first days after casting the concrete, when hardening occurs, it is
observed that the velocity of ultrasonic waves in concrete increases significantly.
Moreover, theoretical considerations support the existence of a direct relationship
between Young’s modulus and wave velocity in an elastic condition. Finally, between
Young’s modulus and strength, empirical relationships have long been established
[10, 24, 25]. The UPV test is standardized in ASTM C597, while the guidelines
provided by the International Atomic Energy Agency are much more comprehensive
[26].
As shown in Fig. 4, the equipment includes a transmitter, a receiver, and a device
to show the transformation time used to calculate the pulse velocity according to
Eq. 1:
V = L/t (1)
where V: pulse velocity (km/s), L: path length (mm), and t: transmit time (µs). Table
4 shows how the values of the obtained UPV can be used to classify the quality of
concrete [27].
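A one-line implementation of Eq. 1 makes the unit handling explicit; the 300 mm path length and 68 µs transit time below are arbitrary example values.

def pulse_velocity(path_length_mm: float, transit_time_us: float) -> float:
    # mm/µs is numerically equal to km/s, so Eq. 1 needs no extra conversion factor.
    return path_length_mm / transit_time_us

print(pulse_velocity(300.0, 68.0))  # about 4.4 km/s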
According to previous studies on evaluating concrete strength with the use of the
UPV test, it has been determined that several factors affect this test, which has made
it difficult to obtain the compressive strength of concrete accurately by performing
this test alone. Table 5 lists the most important influencing factors on the UPV
test. In 1981, Samarin and Meynink conducted a comprehensive study on UPV, and
they reported that “the range of variation of velocity with time is much lower when
concrete is older than 28 days: sensitivity of UPV to the rate of concrete strength
is extremely high in the first few days but considerably lower after about 5–7 days,
depending on curing conditions” [10, 28]. Moreover, another important problem is
that the reinforcing bars in the concrete cause errors in the test. Rebars are the easier
Fig. 6 a Effect of rebar’s location on UPV test b Effect of cracks and voids on UPV test
Due to the fact that the equations and models proposed to evaluate the compressive
strength of concrete based on the results of the rebound hammer or UPV are not
generally reliable and each proposed model has only considered specific boundary
conditions in the NDT-strength relationship, researchers have explored the use of
Nowadays, complex systems arising in many fields of science, such as biology,
medicine, and the humanities, can be difficult to analyze with
conventional mathematical and analytical methods, and reaching an acceptable and reliable answer
can be computationally expensive. Machine learning (ML) techniques, a
branch of artificial intelligence (AI), can approach more accurate and more rational
solutions to complex problems, with lower computational cost and reduced errors [48,
49]. In recent years, preliminary research on predicting the compressive strength of concrete from NDT
results has been underway, but no practical model has been presented
yet. However, the studies in this field show encouraging results [7, 21, 41, 50–57].
In 2015, Shih et al. proposed an AI model using the support vector machines
(SVMs) method to estimate the compressive strength of concrete. SVMs are a form
of ML techniques that utilize supervised learning models to classify data and perform
regression analyses. They developed three SVM models with different input vari-
ables: (Model.1) rebound number only, (Model.2) UPV only, and (Model.3) rebound
number + UPV. They collected 95 data points from other researchers to develop and vali-
date their models; 89 of these were randomly selected for training, and the
remaining data were used for testing each of the three different models. In Table 7,
the output results of the three different models are compared to experimental results.
They considered mean absolute percentage error (MAPE) to evaluate the accuracy
of models. According to Table 7, the combined NDT (known as SonReb) gives better
results in comparison with single input models. Also, they reported that the SVM
model improves the NDT estimation for concrete compressive strength. They did
not compare their results with the available SonReb regression models or equations
[58].
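For reference, the MAPE metric used throughout this section can be computed as follows; the strength values in the example are illustrative and are not taken from [58].

import numpy as np

def mape(actual, predicted):
    # Mean absolute percentage error between measured and predicted strengths.
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

print(mape([30.0, 42.0, 25.0], [28.5, 44.0, 26.0]))  # about 4.6% for these illustrative values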
In 2018, Wang et al. proposed an adaptive artificial neural network (ANN) model
to estimate the concrete compressive strength using the UPV and rebound hammer
test data. For this purpose, they tested 315 cylinder samples to develop and validate
the ANFIS prediction model. They reported that “with the adaption of ANFIS, the
estimation error of SONREB test can be reduced to 5.98% (measured by MAPE)”
[57]. However, it should be noted that no information was provided about the data
and their modeling procedure.
Artificial neural networks (ANNs) are learning algorithms composed of
layers of artificial neurons that are conceptually inspired by the biological neural
structures of the human brain [59]. ANFIS is an integrated system based
on neural networks that are optimized with a fuzzy inference system. In other words,
the ANFIS method is the combination of learning techniques of ANNs and the
decision-making feature of fuzzy inference systems [48, 60].
In 2020, Asteris and Mokos investigated the application of ANNs for predicting
the compressive strength of concrete in existing structures. They developed six ANN
models with different neural network architectures; two models with one input of
UPV, two models with one input of rebound hammer, and two models with two inputs
of both ultrasonic value and rebound hammer. For this purpose, they used a database
consisting of 209 datasets based on experimental results from a single study reported
in the literature that used 36 batches of cubic concrete specimens with each batch
consisting of 6 specimens. The optimum model considered both UPV and rebound
numbers. Moreover, they compared the results of the ANN model with 14 empirical
equations from other researchers. They observed that the ANN model had the highest
correlation (R2) value between the predicted outputs and the experimental results
in comparison with the results of the other equations [7].
In 2020, Poorarbabi et al. investigated two different methods using ANN and
Response Surface Methodology (RSM) to predict the compressive strength of
concrete by using the combination of three non-destructive tests: UPV, rebound
hammer, and surface electrical resistivity (SR) [21]. The SR test is an NDT test
for estimating concrete permeability. According to the research, a connection exists
between the electrical resistivity of concrete and the deterioration processes such as
an increase in permeability and corrosion of embedded steel [61]. RSM is a statistical
and mathematical method that explores the relationships between several explana-
tory variables and one or more response variables. This optimization method is useful
for analyzing experiments in which one or more response variables are affected by
multiple other parameters [11]. Poorarbabi et al. conducted tests on 180 concrete
specimens with 20 different concrete mix designs at three different ages (7, 28, and
90 days of age). They considered the age of concrete as a boundary condition in
their study. First, they developed RSM and ANN models with two inputs (UPV and
rebound number), then developed RSM and ANN models with three inputs (UPV,
rebound number, and SR). According to their results that are given in Table 8, they
reported that “the accuracy of RSM and ANN are increased by adding SR to the
combination of UPV and RN in all of the ages”. The results show that both RSM
and ANN are in good agreement with experimental data for estimating the strength
of concrete [21].
In 2021, Thapa et al. created ANN models and two multiple regression analysis
models to predict the compressive strength of concrete with different percentages
of recycled brick aggregate. They created 189 cubic samples with normal concrete
(strength range 20 to 30 MPa). After 7, 28, and 84 days of curing, the samples were
tested with rebound hammer and UPV followed by destructive testing to obtain their
compressive strength. For ANN model, they considered rebound number and UPV
as the inputs and used randomly 70%, 15%, and 15% of the total datasets (189 data)
for training, testing, and validating. Their results showed that the ANN model had
the highest correlation (R2 ) value between the predicted output and experimental
results from testing and the minimum mean square error (MSE) in comparison with
the multiple regression analysis models (bilinear and double powered) (Fig. 8). The
authors indicated that the ANN model is a reliable tool to predict compressive strength
in comparison to the multiple regression models considering physical change effects
in concrete. Also, they investigated the influence of the inputs on the results and reported
that the rebound number carried more weight than the UPV in predicting the compressive strength
[56].
In 2021, Bonagura and Nobile evaluated the performance of ANNs for predicting
concrete compressive strength by SonReb. They developed 9 ANN models with
different training parameters. Then the best one was selected and compared with four
different correlation formulations (parametric multi-variable regression models).
They reported that the ANN approach performed best in comparison with the regres-
sion models. However, no information was provided about the experimental dataset
or about how the results of the neural network model were compared with the results
of the regression models [41].
Fig. 8 Comparison between actual strength and predicted strength from different models [56]
In 2021, Ngo et al. investigated various AI methods to improve on-site NDT tests
for estimating concrete compressive strength. They created several models based on
three AI methods (ANN, SVMs, and ANFIS), then selected the best model from each
method. For this purpose, they conducted experimental tests to collect 98 datasets.
First, they measured RH and UPV on 98 non-structural beams in the basement of a
large residential complex. For each beam, 10 rebound hammer measurements were
obtained. For the UPV tests, four UPV measurements were taken at each test location.
After the RH and UPV tests, the core samples were taken at the same location. They
considered the average of 10 rebound hammer measurements and the average of 4
UPV measurements as inputs of the various models to predict concrete compressive
strength. Of the total number of 98 datasets, 71% were used for training and 28%
for testing the models (randomly selected). The models’ prediction accuracy is given
in Table 9 (RMSE is the root mean square error, and MAPE is the mean absolute
percentage error). According to the results in Table 9, the ANN, SVM, and ANFIS
models predicted the concrete strength with MAPEs of 14.69%, 10.00%,
and 10.01%, respectively. The SVM and ANFIS models therefore performed
better than the ANN model, and performed similarly to each other.
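To make the common workflow behind these studies explicit, the sketch below pairs a random train/test split with a support-vector regressor on (rebound number, UPV) inputs. It uses scikit-learn as a stand-in and placeholder data; it is not the implementation of any of the cited papers.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Each row is [average rebound number, average UPV in km/s]; y holds core strengths in MPa (placeholders).
X = np.array([[32, 4.2], [28, 3.9], [36, 4.5], [30, 4.0], [40, 4.7], [26, 3.7], [34, 4.3], [38, 4.6]])
y = np.array([31.0, 25.0, 38.0, 28.0, 44.0, 22.0, 35.0, 41.0])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(100.0 * np.mean(np.abs((y_test - pred) / y_test)))  # MAPE on the held-out set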
Table 10 Current AI-based models for evaluating the compressive strength of concrete, ranked by
accuracy
No Method By Data R2 MAPE (%)
1 SVMs 95 data (93% train, 7% test) – 6.77
2 ANFIS 98 data (71% train, 29% test) – 10.01
3 ANNs 209 data (85% train, 15% test) 0.9891 –
4 ANNs 98 data (71% train, 28% test) – 14.69
5 ANNs 180 data (70% train, 30% test) 0.9194 –
6 ANNs 189 data (70% train, 30% test) 0.825 –
Fig. 10 Boundary conditions for estimating the compressive strength of concrete with NDT tests
several different materials and environmental and physical factors also affect it over
time. In other words, to achieve an accurate AI model, a database is needed to
include all boundary conditions. The most important boundary
conditions that affect the strength of concrete, as well as the results of non-destructive
tests, are listed in Fig. 10. If just 5 subcategories are considered for each one (e.g., water to
cement ratios of 0.25, 0.35, 0.45, 0.55, 0.65, etc.), at least 100,000 data points are
required to develop a good AI model that covers a wide range of inputs.
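A quick back-of-the-envelope check of this estimate: with k boundary conditions and 5 subcategories each, full factorial coverage requires 5**k data points, which reaches the order of 100,000 once roughly seven or eight factors are considered (the exact number of factors in Fig. 10 is not reproduced here).

for k in (6, 7, 8):
    print(k, 5 ** k)  # 6 -> 15,625; 7 -> 78,125; 8 -> 390,625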
By investigating the current models, the following conclusions have been drawn:
(1) data quality and number of unique data are more important than the modeling
method for the development of an AI model;
(2) all proposed AI models are based on a specific range of data;
(3) the existing AI models can only be applied for new inputs that range between
the minimum and maximum values of the training dataset.
Needless to say, the construction industry would greatly benefit from an effi-
cient and universal artificial intelligence model to accurately estimate the compres-
sive strength of concrete based on non-destructive tests of concrete. In Canada and
throughout the world, there are countless RC structures and buildings that need to
be evaluated. The existence of an AI model not only reduces the cost and time of
evaluation of existing structures but also leads to increased confidence in the safety
of structures. This goal is not yet achieved, although available tools seem promising.
Developing an adequate database is one of the main challenges.
Regarding data quality, it is necessary that all data be obtained according to the
same standard and procedure. In other words, to ensure the quality of the training data,
the concrete cast for extracting the data should be made and tested under
the same environmental conditions and with the same or similar devices, so that
the process of obtaining data is identical for all boundary conditions.
Because casting, curing, and hardening concrete is a time-consuming process,
building such a database is difficult or impossible for an individual researcher or a single research institute. However,
such a model could be developed if some institutions and companies could conduct
experiments to collect data based on the same protocol and standard and contribute
toward an open database. It is recommended that the datasets include as much infor-
mation as possible about the concrete, including geometry and casting direction, mix
design, age, curing conditions, and non-destructive test results. The AI models could
be trained to identify the most important of these parameters and to make reasonable
or conservative estimates in cases when information is not available. Even if some
cores need to be extracted to verify the model results, this could significantly reduce
the number of cores that are required, leading to cost and time savings.
References
39. Moczko A (2009) Determination of actual in-situ compressive strength in concrete bridges. In: 10th ACI Int Conf on recent advances in concrete technology, pp 14–16
40. Pucinotti R (2015) Reinforced concrete structure: non destructive in situ strength assessment
of concrete. Constr Build Mater 75:331–341
41. Bonagura M, Nobile L (2021) Artificial Neural Network (ANN) approach for predicting
concrete compressive strength by SonReb. Struct Durab Health Monitor 15:125
42. Godinho JP, De Souza Júnior TF, Medeiros MHF, Silva MSA (2020) Factors influencing
ultrasonic pulse velocity in concrete. Revista IBRACON de Estruturas e Materiais 13:222–247
43. Alwash M, Breysse D, Sbartaï ZM (2015) Non-destructive strength evaluation of concrete:
analysis of some key factors using synthetic simulations. Constr Build Mater 99:235–245
44. Nobile L (2015) Prediction of concrete compressive strength by combined non-destructive
methods. Meccanica 50:411–417
45. Nobile L, Bonagura M (2014) Recent advances on non–destructive evaluation of concrete
compression strength. Int J Microstruct Mater Prop 9:413–421
46. Cristofaro MT, Viti S, Tanganelli M (2020) New predictive models to evaluate concrete
compressive strength using the SonReb method. J Build Eng 27:100962
47. Ngo TQL, Wang Y-R, Chiang D-L (2021) Applying artificial intelligence to improve on-site
non-destructive concrete compressive strength tests. Crystals 11:1157
48. Naderpour H, Alavi SA (2017b) A proposed model to estimate shear contribution of FRP in
strengthened RC beams in terms of adaptive neuro-fuzzy inference system. Compos Struct
170:215–227
49. Thai H-T (2022) Machine learning for structural engineering: a state-of-the-art review.
Structures 38:448–491
50. Bilgehan M, Turgut P (2010) Artificial neural network approach to predict compressive strength
of concrete through ultrasonic pulse velocity. Res Nondestr Eval 21:1–17
51. Bonagura M (2012) Nondestructive evaluation of concrete compression strength by means of
Artificial Neural Network (ANN)
52. Chopra P, Sharma RK, Kumar M (2016) Prediction of compressive strength of concrete using
artificial neural network and genetic programming. Adv Mat Sci Eng 2016
53. Subaşı S, Beycioğlu A, Sancak E, Şahin İ (2013) Rule-based Mamdani type fuzzy logic model
for the prediction of compressive strength of silica fume included concrete using non-destructive
test results. Neural Comput Appl 22:1133–1139
54. Khashman A, Akpinar P (2017) Non-destructive prediction of concrete compressive strength
using neural networks. Procedia Computer Science 108:2358–2362
55. Mishra M, Bhatia AS, Maity D (2020) Predicting the compressive strength of unreinforced
brick masonry using machine learning techniques validated on a case study of a museum
through nondestructive testing. J Civ Struct Health Monitor:1–15
56. Thapa S, Sharma RP, Halder L (2021) Developing SonReb models to predict the compressive
strength of concrete using different percentage of recycled brick aggregate. Canadian J Civ
Eng
57. Wang YR, Ngo LTQ, Shih YF, Lu YL, Chen YM (2018) Adapting ANNs in SONREB test to
estimate concrete compressive strength. Key Eng Mater 792:166–169
58. Shih Y-F, Wang Y-R, Lin K-L, Chen C-W (2015) Improving non-destructive concrete strength
tests using support vector machines. Materials 8:7169–7178
59. Naderpour H, Sharei M, Fakharian P, Heravi MA (2022) Shear strength prediction of reinforced
concrete shear wall using ANN, GMDH-NN and GEP. J Soft Comput Civ Eng:66–87
60. Jang J-SR (1993) ANFIS: adaptive-network-based fuzzy inference system. IEEE Trans Syst
Man Cyber 23:665–685
61. Azarsa P, Gupta R (2017) Electrical resistivity of concrete for durability evaluation: a review.
Advances in Materials Science and Engineering 2017
62. ASTM C597-16 (2016) Standard test method for pulse velocity through concrete. American
Society for Testing and Materials
63. ACI Committee 228 (2003) 228.1R-03: In-place methods to estimate concrete strength.
American Concrete Institute
64. Box GEP, Wilson KB (1951) On the experimental attainment of optimum conditions. J Royal
Stat Soc Series B (Methodological) 13:1–45
65. Chopra P, Sharma RK, Kumar M, Chopra T (2018) Comparison of machine learning techniques
for the prediction of compressive strength of concrete. Adv Civ Eng 2018
66. Logothetis L (1978) Combination of three non destructive methods for the determination of
the strength of concrete. National Technical University of Athens
Partial Cement Replacement in Concrete
with Gypsum Powder Recycled
from Waste Drywalls
1 Introduction
The negative impact of the construction industry on the natural environment is unde-
batable. Here, two of the most notable issues are discussed. First, the cement
manufacturing process could result in carbon dioxide (CO2 ) production, thereby
contributing to climate change and global warming. Second is the gypsum drywall
disposal in landfills, which could result in soil degradation and the contamination
of nearby water resources. Utilizing recycled gypsum as a partial replacement for
cement in concrete structures could be considered as an approach to address both
issues (Hansen and Sadeghian 2019) [1].
Cement manufacturing plants emit significant amounts of greenhouse gases
(GHG) into the atmosphere. The emissions resulting from the cement industry
account for up to 10% of artificial GHG emissions in the world [2] and [1]. Also,
the cement industry alone is responsible for 7% of global CO2 emissions (Malhotra
2010). Needless to say, CO2 could have devastating impacts on the natural envi-
ronment since it is the major contributor to air pollution and global warming [3].
Furthermore, the cement industry consumes a significant amount of energy annually
for manufacturing the product [2]. This further amplifies the GHG emissions
associated with cement manufacturing, since a considerable amount of fossil
fuel must be burned every year to supply the energy required for the manufac-
turing process [4]. Nearly 3.4 billion tons of cement are produced annually as
the major raw material for concrete manufacturing [2]. The costs of cement produc-
tion and the huge volume of cement that is produced annually convinced civil and
environmental engineers to look for replacements for cement as the cementitious
material in concrete.
Construction materials disposal at landfills is another noticeable issue in today’s
world. Each material could have different impacts on the natural environment
depending on the material’s chemical and mechanical properties. Gypsum wallboards
(Also known as drywalls) account for a significant amount of residential construc-
tion waste (nearly 20 percent) (Naik 2010). As a result of drywall disposal, huge
amounts of gypsum are accumulated in landfills, which will cause several environ-
mental issues. Gypsum is capable of releasing hydrogen sulfide gas, which could
result in soil degradation [5]. Hydrogen sulfide is also flammable and could be lethal in high
concentrations [6, 7]. It could also penetrate the soil and cause
water contamination in nearby areas [7]. Utilizing the accumulated gypsum resulting
from drywall disposal is considered as an approach to address the issues resulting
from cement manufacturing and gypsum drywall waste disposal [7].
Using recycled gypsum from waste drywalls as a partial replacement for cement
in concrete could reduce the demand for cement production, and as a result, fewer
GHGs would be emitted to the atmosphere. Also, it could be a rational approach to
eliminate gypsum from our landfills and turn it into a resource. Naik et al. [1] used the
combination of recycled gypsum powder and fly ash class C as a partial replacement
for cement in concrete. According to the results, between 30 and 60% of cement
could be replaced by a gypsum-fly ash class C mixture. More importantly, concrete
specimens containing 10% gypsum as cementitious material are shown to have the
same compressive strength as the conventional concrete after 28 days (Naik 2010).
Hansen and Sadeghian [7] attempted to replace a higher volume of cement with a
gypsum-fly ash mixture (up to 70%). Gypsum could have negative impacts on the
compressive strength of concrete in a short period after manufacturing. However, the
concrete containing gypsum alongside fly ash and cement as the cementitious paste
is proven to have higher compressive strength compared to the concrete mixture
that has only cement and fly ash as the cementitious material [7]. Therefore, the
application of recycled gypsum powder is acceptable, and utilizing this material in
concrete manufacturing is feasible.
In previous studies by Hansen and Sadeghian [7] and Naik et al. [1], only fine particles of the recycled
gypsum from waste drywalls were used. To be more specific, the particles remaining
on the sieve no. 100, sieve no. 200, and the pan during sieve analysis were separated
and used in the concrete. This proportion accounts for only 38% of a certain sample
of recycled gypsum. In other words, a considerable proportion of gypsum drywalls
would remain as waste and would be dumped in landfills again. Therefore, solu-
tions need to be introduced to make this approach more sustainable.
In this study, one more step has been taken in the domain of application of recycled
gypsum in concrete, and the whole recycled gypsum is used as a replacement for
cement in several concrete specimens. The main goal of this paper is to assess the
performance of the whole waste gypsum as a cement replacement, so that it can ultimately be used in our infrastructure.
2 Experimental Program
A total of five mix designs are considered for this study, including one control batch, two
batches that involve fine gypsum as a partial replacement for cement in different
amounts (20% and 10%), and two batches that involve the whole gypsum as partial
replacements for 10% and 20% of cement in the concrete. Fly ash accounts for half
of the cementitious material mass in all the batches; the other half consists
of cement only for the control batch and of a combination of cement and gypsum
for the other batches. The purpose of considering mix designs with fine gypsum is
to validate the results achieved by [7] and to allow an appropriate comparison with this
study. The details of the mixtures are presented in Table 1. Also, the proportion of each
component of cementitious materials is demonstrated in Table 2. The capital letter
G stands for the mixtures that involve the whole gypsum, while FG stands for those
in which only fine particles of gypsum are used. The letter C stands for control
specimens. The number in front of each letter indicates the proportion of cement
which is replaced by the corresponding gypsum (fine or whole amount). In order to
make better comparisons, the mix design considered for this study is the same as that of Hansen
and Sadeghian [7].
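As a reading aid for Tables 1 and 2, the cementitious fractions implied by the mix IDs can be reproduced with the short sketch below. It assumes the replacement percentage applies to the cement share only (i.e., to the non-fly-ash half of the cementitious mass), which is how the IDs are described above; the authoritative batch masses remain those in the tables.

def cementitious_fractions(gypsum_replacement):
    # Fly ash is fixed at 50% of the cementitious mass; the stated fraction of the
    # remaining cement share is replaced by gypsum (fine or whole).
    fly_ash = 0.50
    gypsum = (1.0 - fly_ash) * gypsum_replacement
    cement = (1.0 - fly_ash) - gypsum
    return {"cement": cement, "fly_ash": fly_ash, "gypsum": gypsum}

for mix_id, replacement in [("C", 0.0), ("FG10/G10", 0.10), ("FG20/G20", 0.20)]:
    print(mix_id, cementitious_fractions(replacement))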
Overall, three types of sand were available for this study: masonry sand, sand donated
by a local source (Casey Metro, Halifax, NS, Canada) in previous years, which was
used by Hansen and Sadeghian, and sand donated by the same source recently.
Sieve analysis revealed that the masonry sand does not fall within the ASTM
gradation limits for making concrete. While the gradation curves of the second and
third sands both lie between the ASTM upper and lower limits, they are not identical.
The third type was selected because of its availability for this study and later
research related to this topic. The coarse aggregate donated by the same source is
half-inch stone which is suitable for making concrete in terms of size distribution.
The cement considered for this study is type GU Portland cement (CRH, Canada
Group, ON, Canada). Fly ash used in the concrete was provided by local sources
(Ocean Contractor Ltd, Dartmouth, NS, Canada). The recycled gypsum recovered
from waste gypsum drywalls was provided by USA Gypsum, Denver, PA, USA,
which is the same gypsum used by Hansen and Sadeghian in the previous study.
During the gypsum sieve analysis, fiber-like particles, which were much coarser
than normal gypsum particles, were observed on most of the sieves (all sieves but
sieve no. 200 and the pan). For mix designs that contain fine gypsum (IDs starting with
FG), only the gypsum retained on sieve no. 200 and the pan was used. For the other mixes, the
whole gypsum was used without removing the fiber-like and coarser particles. Figure 3
shows the fine gypsum and whole gypsum particles (see also Figs. 1 and 4).
After conducting sieve analysis on gypsum, some coarse fiber-like particles were
witnessed on the sieve. These particles could be found on all the sieves except sieve
no. 200. In the previous research conducted by Hansen and Sadeghian (2019), these
particles did not show up on sieve no. 100. The main suspected cause of this discrepancy
was the effect of humidity. To test this hypothesis, sieve analysis was conducted on
dry gypsum as well. A specific proportion of gypsum was oven-dried for 24 h, and
afterward, the sieve analysis was conducted on the dry sample. In this case, those
fiber-like particles were no longer visible on sieve no. 100. As can be seen in Figs. 2
and 5, there are significant differences between the particle size distributions of the two
types of gypsum. It is worth mentioning that the moisture content of the gypsum turned
out to be more than 22%, calculated relative to the weight of the oven-dried sample. This amount
of moisture could affect the sieve analysis results.
[Figure legends: whole gypsum and fine gypsum particle size distributions; mixture constituents (coarse aggregate, fine aggregate, fly ash, gypsum, water, cement)]
Five batches are considered for this study. Three batches including control specimens
are considered to validate previous studies regarding the impact of fine gypsum
content as cementitious material on the compressive strength of concrete. The other
two batches are considered to assess the impact of using the whole gypsum (fine
particles and coarse particles) instead of fine gypsum in the concrete. A mini mixer
was used for mixing all the ingredients of the concrete. All the materials were added to the
mixer in a certain order, and the mixer was allowed to run until a homogenous mix was
achieved. For the mix designs that involved gypsum, dehydration was observed
in the mix. For this reason, a superplasticizer was used to keep the mix workable and
hydrated.
Fig. 5 Gypsum powder retained on sieve no. 100 after shaking. a Wet gypsum, b Dry gypsum
For each mix design, three 100 mm × 200 mm molds were prepared for each testing age
(day 7, day 28, and day 90). All the specimens were cured in the moisture
room after being demolded (Fig. 6).
Specimens were removed from the moisture room and capped using a sulfur compound
after 24 h. After another 24 h, the capped specimens were tested at the designated ages using the
compression testing machine. The output is the maximum force that each specimen
resists in pounds-force (lbf). After unit conversions and calculations, the compressive
strength is obtained in megapascals (MPa).
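The conversion mentioned above amounts to dividing the peak load, converted to newtons, by the cylinder cross-section; a minimal sketch is given below, with the 70,000 lbf load chosen only as an example.

import math

def compressive_strength_mpa(max_force_lbf, diameter_mm=100.0):
    # Peak load in pounds-force to compressive strength in MPa for a cylinder of the given diameter.
    force_n = max_force_lbf * 4.44822               # lbf to N
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2   # cross-sectional area in mm^2
    return force_n / area_mm2                       # N/mm^2 equals MPa

print(compressive_strength_mpa(70000.0))  # about 39.6 MPa for a 70,000 lbf peak load on a 100 mm cylinder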
Specimens were tested after 7, 28, and 90 days of curing in the moisture room.
For each mix design, three specimens were tested on the testing day, and the average
compressive strength was determined in MPa. The tested specimens and the compres-
sive test results corresponding to day 7, 28, and 90 tests are demonstrated in Figs. 7
and 8 and Table 3.
According to the compressive test results, the gypsum content could have a nega-
tive impact on the compressive strength of concrete. This strength reduction was more
significant at early ages and diminished at later ages. In the longer term, however,
the gypsum content had a positive impact on the compressive strength of the concrete.
Using fine gypsum could result in stronger concrete compared to the concrete that
incorporates the whole gypsum in the mix design; however, according to the sieve anal-
ysis, only about 38% of the whole recycled gypsum would be usable for making fine
gypsum concrete. On the other hand, although the whole gypsum concrete is slightly
weaker than fine gypsum concrete, the former is much more sustainable compared
to the latter. Table 4 shows the comparison between whole gypsum, fine gypsum
concrete, and control specimen in terms of compressive strength changes as a result
of using gypsum in concrete.
Hansen and Sadeghian [7] conducted a similar study in which fine gypsum concrete
was evaluated. To make a better comparison, in this study both fine gypsum and whole
gypsum concrete are considered. According to Table 3, the compressive strength of
each mix design in this study is slightly different from that of the previous one
although the same mix design and materials were used in both. Some parameters
such as humidity, temperature, and the effect of superplasticizer could result in this
difference. The impact of fine gypsum content followed a similar trend in both studies
as can be seen in Table 4.
In this study, five batches with similar mix designs and different cementitious material
compounds were considered. One control batch involved 50% cement and 50% fly
ash as cementitious material, two batches with 10 and 20% of cement replaced with
fine gypsum, and two batches in which 10 and 20% of cement were replaced with
whole gypsum. To provide fine gypsum, only the retaining gypsum particles on sieve
no. 200 and pan were used, whereas for whole gypsum batches, specific proportions
of gypsum were randomly added to the mix. Nine specimens were prepared for
each batch, which were tested on day 7, day 28, and day 90. According to the tests
on day 7 and day 28, the compressive strength of concrete reduces by increasing
the proportion of gypsum by which cement is replaced. This strength reduction is
more severe in early ages. The compressive strength even reduces to lower levels
if the whole gypsum is used instead of fine gypsum as a replacement for cement.
The promising point is that the strength reduction decreased significantly on the
day 28 test. This could indicate that the negative impact of using gypsum in the
concrete could become less noticeable or even negligible in the long-term period.
According to day 90 test results, utilizing both fine and whole gypsum could increase
the compressive strength of the concrete in the long-term period. Also, the slight
difference between the compressive strength of whole gypsum concrete and fine
gypsum concrete indicates that the whole gypsum concrete could be as practical
as fine gypsum concrete in practice, while offering higher sustainability. Some
discrepancies were observed between this study and the previous studies in
this domain, even though similar mix designs were used in both. Some important factors
could have contributed to this phenomenon including humidity rate, temperature, and
the process of mixing materials and making specimens. It is highly recommended
for further studies to consider the impact of gypsum content in concrete for a longer
time period (more than 90 days). Furthermore, it is essential to seek solutions for
addressing the issues regarding the lack of strength in the early ages of gypsum
concrete. Also, it is worth mentioning that the durability of this type of concrete
must be evaluated. For gypsum concrete to be fully functional in our infrastructure,
it should be able to survive diverse environmental conditions.
Acknowledgements The authors would like to thank Casey Metro (Halifax, NS, Canada) for their
material donation in this research. It is also worth mentioning that USA Gypsum (Denver, PA, USA)
played a key role in this research by providing the primary material, recycled gypsum.
The authors would also like to thank Dalhousie University (Halifax, NS, Canada) for
providing the initial resources in this research.
References
1. Naik TR, Kumar R, Chun YM, Kraus RN (2010) Utilization of powdered gypsum-wallboard in
concrete. In: Proc Int Conf Sustain Constr Mater Technol
2. Nguyen L, Moseson AJ, Farnam Y, Spatari S (2018) Effects of composition and transportation
logistics on environmental, energy and cost metrics for the production of alternative cementitious
binders. J Clean Prod 185:628–645
3. Meyer C (2009) The greening of the concrete industry. Cem Concr Compos 31(8):601–605
4. Ke J, McNeil M, Price L, Khanna NZ, Zhou N (2013) Estimation of CO2 emissions from China’s
cement production: methodologies and uncertainties. Energy Pol 57:172–181
5. Ahmed A, Ugai K, Kamei T (2011) Laboratory and field evaluations of recycled gypsum as a
stabilizer agent in embankment construction. Soil found 51(6):975–990
6. Chandara C, Azizli KAM, Ahmad ZA, Sakai E (2009) Use of waste gypsum to replace natural
gypsum as set retarders in Portland cement. Waste manag 29(5):1675–1679
7. Hansen S, Sadeghian P (2020) Recycled gypsum powder from waste drywalls combined with
fly ash for partial cement replacement in concrete. J Clean Prod 274:122785
8. Malhotra VM (1999) Role of supplementary cementing materials in reducing greenhouse gas
emissions. In: Infrastructure regeneration and rehabilitation improving the quality of life through
better construction: a vision for the next millennium (Sheffield, 28 June-2 July 1999), pp 27–42
Nano-modified Slag-based Cementitious
Composites Reinforced with Multi-scale
Fiber Systems
Abstract This study responds to the need for improving the overall performance
of concrete infrastructure to achieve longer service life, fewer cycles of repair, and
reduced life-cycle costs. Hence, novel high-performance fiber-reinforced cementi-
tious composites were developed using various types of nano-materials and fibers.
The composites developed in this study comprised high content (50%) slag by mass
of the base binder (700 kg/m3 ) as well as nano-silica or nano-crystalline cellulose
(produced in Canada). In addition, nano-fibrillated cellulose (NFC), produced in
Canada, and a novel form of basalt fiber strands enclosed by polymeric resins called
basalt fiber pellets (BFP), representing nano-/micro- and macro-fibers, respectively,
were incorporated in the composites. The composites were assessed in terms of early-
and late-age compressive strength, flexural performance, and resistance to freezing
and thawing cycles. Generally, the BFP reduced the compressive strength of the
composites, but the co-existence of nano-materials and NFC alleviated this trend.
Furthermore, all nano-modified composites with multi-scale fibers showed notable
improvement in terms of flexural performance (post-cracking behavior, residual
strength, and toughness) and resistance to frost action. Thus, they can be used in
a suite of infrastructural applications requiring high ductility in cold regions.
1 Introduction
2 Experimental Program
2.1 Materials
The main components of the binders were general use (GU) cement and slag, which meet the specifications of CAN/CSA-A3001 [13]. Table 1 lists the physical and chemical properties of the binders. Two different types of nanoparticles were used as additives to the binders: 6% nano-silica solution (NS) or 0.1% nano-crystalline cellulose powder (NC) by mass of the base binder. The NS solution comprises 50% solid content of SiO2 particles dispersed in an aqueous solution of water and a dispersing agent, with a specific surface area of 80 m2/g, a mean particle size of 35 nm, and a density of 1.1 g/cm3. The NC powder has a specific surface area of 430 m2/g, a width of 5–40 nm, a length of 50–800 nm, and a density of 1.5 g/cm3.
The mixtures were produced using locally available fine aggregate that had a continuous gradation of 0–600 μm and a fineness modulus of 2.9, as defined by ASTM C136/C136M [4]. The absorption and specific gravity of the aggregate are 1.5% and 2.6, respectively, according to ASTM C128 [3]. A high-range water-reducing admixture (HRWRA) based on polycarboxylic acid and complying with ASTM C494/C494M [7], Type F, was added to all mixtures to achieve a target consistency of 180 ± 20 mm, according to ASTM C230/C230M [5]. Also, an air-entraining admixture complying with ASTM C260 [6] was utilized to achieve a fresh air content of 6 ± %. Two different types and sizes of fibers were used to reinforce the mixtures: nano-fibrillated cellulose (NFC), which is produced by mechanically refining natural cellulose fibers (micro-fibers), and basalt fiber pellets (BFP), which consist of 16 μm basalt roving encased in polyamide resin, with the fiber component accounting for 60% of the pellet's composition (macro-fibers). The properties and dosages of the fibers are summarized in Table 2.
In this study, six mixtures were cast at constant w/b of 0.30 and 700 kg/m3 base
binder (50% cement and 50% slag). Nano-modified composites comprised either
6% NS or 0.1% NC by mass of base binder. Composites reinforced with a single
fiber system included 4.5% BFP (by volume of composite), while hybrid systems
comprised 4.5% BFP (by volume of composite) and 0.5% NFC (by weight of binder).
The dosages of nano-materials and fibers were selected based on trial batches and the
ranges reported in previous studies [1, 16, 17, 25] (Azzam et al. 2019b) for improving
the hardened properties of concrete/composites. An ID is given for each mixture to
identify the type of material incorporated [basalt fiber pellets (B), nano-fibrillated
cellulose (NFC), nano-crystalline cellulose (NC), and nano-silica (NS)]. Table 3 lists
the proportions of the materials used in each mixture.
A specific sequence of mixing was developed based on trial batches and previous studies [11]. Firstly, the NS solution or NC powder was dispersed directly into the mixing water using a homogenizer at 2000 rpm for 4 min. Afterward, the dry constituents (sand, cement, and slag) were mixed, followed by the gradual addition of the required mixing water (containing either the NS solution or NC) with half the amount of HRWRA, while mixing constantly for 3–5 min until a homogeneous mixture was achieved. The BFP was then added with the remaining amount of HRWRA, and the ingredients were stirred for an additional 8–10 min to ensure that the pellets were distributed evenly. Once a uniform, workable, and homogeneous mixture was achieved, the NFC was added and mixing continued for a further 3–5 min. The overall process took about 15–20 min to complete. After casting the composites in molds, all specimens were consolidated using a vibrating table. Polyethylene sheets were used to cover the surface of the specimens for 24 h. Subsequently, the specimens were demolded and cured in a standard room maintained at 22 ± 2 °C and a relative humidity of more than 95% until testing.
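For clarity, the mixing sequence described above can be summarized as a simple schedule. The sketch below (Python) is illustrative only; the step durations are the nominal ranges quoted in the text, and it is assumed that the 4-min dispersion of the NS/NC in the mixing water takes place before the main mixing clock starts.

```python
# Illustrative summary of the reported mixing sequence (durations in minutes).
# Assumption: the 4-min dispersion of NS/NC in the mixing water is done before
# the main mixing time quoted in the text (~15-20 min) begins.
mixing_steps = [
    ("Disperse NS solution or NC powder in mixing water (homogenizer, 2000 rpm)", 4, 4),
    ("Dry-mix sand, cement and slag, then gradually add water + 1/2 HRWRA", 3, 5),
    ("Add BFP with remaining HRWRA and mix until pellets are evenly distributed", 8, 10),
    ("Add NFC and continue mixing", 3, 5),
]

main_min = sum(lo for _, lo, _ in mixing_steps[1:])
main_max = sum(hi for _, _, hi in mixing_steps[1:])
print(f"Main mixing time: {main_min}-{main_max} min "
      f"(consistent with the ~15-20 min reported)")
```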
Table 3 Mixture proportions per cubic meter
Mixture ID Cement (kg) Slag (kg) Nano-silica (kg) Fine Agg. (kg) Water* (kg) BFP (kg) NFC (kg) NC (kg) Air content (%)
NC 350 350 0 1315 210 0 0 0.7 5.5
B-NFC-NC 350 350 0 1189 210 78.3 3.5 0.7 6
B-NC 350 350 0 1195 210 78.3 0 0.7 6
NS 350 350 84 1234 181 0 0 0 5.5
B-NFC-NS 350 350 84 1108 181 78.3 3.5 0 7
B-NS 350 350 84 1114 181 78.3 0 0 6
* Adjusted amount of mixing water considering the water content of NS (50% solid content) or NFC (20% solid content)
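As a check on the footnote above, the adjusted mixing water for the NS mixtures can be reproduced from the stated dosages. The sketch below is a hedged reconstruction: it interprets the 6% NS dosage as solid SiO2 by mass of the base binder (consistent with the 84 kg of 50%-solids solution in Table 3) and assumes the w/b of 0.30 is applied to the total binder (base binder plus NS solids), which matches the tabulated water contents but is not stated explicitly in the text.

```python
# Hedged reconstruction of the adjusted mixing water for the NS mixtures.
# Assumptions (not stated explicitly in the text): the 6% NS dosage refers to
# SiO2 solids, and w/b = 0.30 is applied to the total binder (700 kg/m3 + NS solids).
base_binder = 700.0                # kg/m3 (50% cement + 50% slag)
wb = 0.30                          # water-to-binder ratio

ns_solids = 0.06 * base_binder     # 42 kg of SiO2
ns_solution = ns_solids / 0.50     # 84 kg of 50%-solids solution (as in Table 3)
water_in_ns = ns_solution - ns_solids   # 42 kg of water carried by the NS solution

total_water = wb * (base_binder + ns_solids)   # ~222.6 kg
added_water = total_water - water_in_ns        # ~180.6 kg, reported as 181 kg in Table 3
print(round(added_water, 1))
```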
2.2 Testing
DF = P × N/M (1)
where:
DF = durability factor of the test specimen (%);
P = relative dynamic modulus of elasticity at N cycles (%);
N = number of cycles at which P reaches the specified minimum value for discontinuing the test, or the specified number of cycles at which the exposure is to be terminated, whichever is less; and
M = specified number of cycles at which the exposure is to be terminated.
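As an illustration of Eq. (1), the sketch below computes the durability factor for two hypothetical test outcomes under the ASTM C666 conditions cited later in this paper (M = 300 cycles, 60% relative dynamic modulus limit for stopping a specimen early). The numbers are examples only, not results from this study.

```python
def durability_factor(P, N, M=300):
    """Durability factor per Eq. (1): DF = P * N / M, with P in percent."""
    return P * N / M

# Hypothetical specimen that completes all 300 cycles at a relative dynamic modulus of 93%:
print(durability_factor(P=93, N=300))   # DF = 93.0

# Hypothetical specimen stopped early at 210 cycles after dropping to the 60% limit:
print(durability_factor(P=60, N=210))   # DF = 42.0
```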
The compressive strength results of the different mixtures are shown in Fig. 1. Despite the well-documented slow reactivity of binders incorporating slag (a latent-hydraulic binder) [22], the incorporation of either NC or NS proved capable of mitigating the adverse effect of using high-volume supplementary cementitious materials (SCMs) on strength development. Hence, all mixtures, which comprised 50% slag, achieved compressive strengths greater than 30 MPa after 3 days of curing. Also, the compressive strengths of all composites at 28 days ranged from 68 to 79 MPa, which complies with the strength requirements of various infrastructure applications (e.g., bridge/pavement overlays and patch repair).
Fig. 1 Compressive strength results for the different composites (Note: error bars represent standard deviations)
Table 4 shows the flexural (first-peak) strength, residual strengths, and toughness of the different composites at 28 days, based on the representative load–deflection curves shown in Figs. 2 and 3.
The first-peak load of the composites, which reflects the mechanical capacity of the cementitious matrices, decreased with the addition of BFP. Conforming to the compressive strength results, this can be ascribed to the propagation of microcracks at the ITZs created by the incorporation of BFP in the cementitious matrix, thus reducing the flexural capacity (based on the first-peak crack). However, all mixtures had flexural strengths in the range of 5.7–10.4 MPa, which is appropriate for different infrastructure applications, including pavements and bridges.
Similar to the compressive strength results, the first-peak strength of the composites incorporating NC was higher than that of the counterpart mixtures comprising NS. For instance, the incorporation of NC in mixture B-NC increased the first-peak load by 19% relative to that of B-NS. Also, the efficient microstructural development of the NC composites was reflected in the limited negative effect of incorporating BFP into the matrix: the addition of BFP to mixtures NC and NS reduced the first-peak load by 35 and 43%, respectively. This can be attributed to the previously mentioned short-circuit diffusion mechanism of NC, which improved the degree of cement hydration around the BFP and consequently enhanced the ITZ quality due to the better bond with the cementitious matrix. Similarly, the addition of NFC partially mitigated the reduction in strength caused by the addition of BFP. For example, mixture B-NFC-NS had a first-peak strength 19% higher than that of the counterpart mixture without NFC (B-NS). This can be linked to the aforementioned positive influence of NFC on the hydration development of cementitious matrices through its internal curing and micro-crack bridging mechanisms.
Fig. 2 Representative load–deflection curves of the NC-based composites (load [kN] versus deflection [mm])
Fig. 3 Representative load–deflection curves of the NS-based composites (load [kN] versus deflection [mm])
As shown in Figs. 2 and 3, the BFP markedly improved the post-cracking behavior of the composites. After the first-cracking stage, a sudden drop in load capacity was noted, representing the transition between matrix failure and the activation of the BFP, which restored the load-bearing capacity up to a second peak (deflection
As depicted in Fig. 4, all the mixtures survived 300 cycles of freezing and thawing
with a DF higher than 90%, surpassing the relative dynamic modulus of elasticity
limit (60%) stipulated by ASTM C666/C666M.
Conforming to the compressive and flexural results, the composites incorporating either NC or NS without fibers attained a DF of 100% due to the aforementioned short-circuit diffusion mechanism of NC and the pozzolanic/filler effects of NS.
Durability factors from the inset of Fig. 4: NC 100%; B-NFC-NC 98%; B-NC 93%; NS 99%; B-NFC-NS 96%; B-NS 90%.
Fig. 4 Relative dynamic modulus of elasticity versus number of freezing–thawing cycles of the
different composites
4 Conclusions
Based on the composites’ design and experimental program conducted herein, the
following conclusions can be made:
• The incorporation of nano-materials (NCC or NS) improved the microstructural
development of the composites, which mitigated the effect of slowly reactive
high-volume slag on strength development.
• The addition of BFP resulted in a decrease in the composites’ compressive and
flexural strengths due to the creation of additional ITZs within the cementitious
matrices.
• The amalgamation of NFC with BFP to produce a hybrid fiber system was effec-
tive at mitigating the negative effects of using BFP only as a single fiber system
owing to the short-circuit diffusion/internal curing and micro-crack bridging
mechanisms of NFC.
• BFP contributed to improving the flexural performance of the developed compos-
ites in terms of post-cracking behavior, which was reflected by the deflection-
hardening behavior with significantly improved toughness and residual strength.
• Overall, the proposed composites achieved compressive and flexural strengths of 68–79 and 5.7–10.4 MPa, respectively, after 28 days, together with high resistance to freezing and thawing cycles (DF higher than 90%), qualifying them for various infrastructural applications in cold regions. Field studies to verify these laboratory trends are recommended for future research.
Acknowledgements The authors are grateful for the financial support provided by the Natural
Sciences and Engineering Research Council of Canada (NSERC), Performance Biofilaments and
the City of Winnipeg through the NSERC Alliance Program. The IKO Construction Materials
Testing Facility at the University of Manitoba was crucial to conduct this experimental program.
References
1. Alzoubi HH, Albiss BA, Abu sini SS (2020) Performance of cementitious composites with nano
PCMs and cellulose nano fibers. Constr Build Mater 236:117483. https://doi.org/10.1016/j.con
buildmat.2019.117483
2. ASTM C39/C39M (2018) Standard test method for compressive strength of cylindrical concrete
specimens. ASTM International
3. ASTM C128 (2014) Standard test method for relative density (specific gravity) and absorption
of fine aggregate 1. ASTM International
4. ASTM C136/136M (2014) Standard test method for sieve analysis of fine and coarse aggregates.
ASTM International
5. ASTM C230/C230M (2014) Standard specification for flow table for use in tests of hydraulic
cement. ASTM International
6. ASTM C260 (2010) Standard specification for air-entraining admixtures for concrete. ASTM
International
7. ASTM C494/C494M (2017) Standard specification for chemical admixtures for concrete.
ASTM International
8. ASTM C597 (2016) Standard test method for pulse velocity through concrete1. ASTM
International
9. ASTM C666/C666M (2015) Standard test method for resistance of concrete to rapid freezing
and thawing. ASTM International
10. ASTM C1609/C1609M (2012) Standard test method for flexural performance of fiber-
reinforced concrete (Using beam with third-point loading). ASTM International
11. Azzam A, Bassuoni MT, Shalaby A (2019) Properties of high-volume fly ash and slag cemen-
titious composites incorporating nanosilica and basalt fiber pellets. Adv Civil Eng Mater
8(3):20190018. https://doi.org/10.1520/ACEM20190018
12. Balea A, Fuente E, Blanco A, Negro C (2019) Nanocelluloses: natural-based materials for
fiber-reinforced cement composites. A critical review. Polymers 11(3):518. https://doi.org/10.
3390/polym11030518
13. CAN/CSA-A3001 (2018) CAN/CSA-A3001-18 cementitious materials for use in concrete. In:
Canadian Standards Association
14. Cao Y, Zavaterri P, Youngblood J, Moon R, Weiss J (2015) The influence of cellulose
nanocrystal additions on the performance of cement paste. Cement Concr Compos 56:73–83.
https://doi.org/10.1016/j.cemconcomp.2014.11.008
15. Cao Y, Zavattieri P, Youngblood J, Moon R, Weiss J (2016) The relationship between cellulose
nanocrystal dispersion and strength. Constr Build Mater 119:71–79. https://doi.org/10.1016/j.
conbuildmat.2016.03.077
16. Ghazy A, Bassuoni M, Maguire E, O’Loan M (2016) Properties of fiber-reinforced mortars
incorporating nano-silica. Fibers 4(1):6. https://doi.org/10.3390/fib4010006
17. Hisseine OA, Wilson W, Sorelli L, Tolnai B, Tagnit-Hamou A (2019) Nanocellulose for
improved concrete performance: a macro-to-micro investigation for disclosing the effects of
cellulose filaments on strength of cement systems. Constr Build Mater 206:84–96. https://doi.
org/10.1016/j.conbuildmat.2019.02.042
18. Hsie M, Tu C, Song PS (2008) Mechanical properties of polypropylene hybrid fiber-reinforced
concrete. Mater Sci Eng A 494(1–2):153–157. https://doi.org/10.1016/j.msea.2008.05.037
19. Kang S-T, Kim J-K (2011) The relation between fiber orientation and tensile behavior in an
Ultra High Performance Fiber Reinforced Cementitious Composites (UHPFRCC). Cem Concr
Res 41(10):1001–1014. https://doi.org/10.1016/j.cemconres.2011.05.009
20. Konsta-Gdoutos MS, Metaxa ZS, Shah SP (2010) Multi-scale mechanical and fracture charac-
teristics and early-age strain capacity of high performance carbon nanotube/cement nanocom-
posites. Cement Concr Compos 32(2):110–115. https://doi.org/10.1016/j.cemconcomp.2009.
10.007
21. Lee JJ, Song J, Kim H (2014) Chemical stability of basalt fiber in alkaline solution. Fibers
Polym 15(11):2329–2334. https://doi.org/10.1007/s12221-014-2329-7
22. Mehta PK, Monteiro PJM (2014) Concrete: microstructure, properties, and materials, 4th edn.
McGraw-Hill Education
23. Mexasa ZS, Konsta-Gdoutos MS, Shah SP (2011) Crack free concrete made with nanofiber
reinforcement
24. Safiuddin M, Kaish A, Woon C-O, Raman S (2018) Early-age cracking in concrete: causes,
consequences, remedial measures, and recommendations. Appl Sci 8(10):1730. https://doi.org/
10.3390/app8101730
25. Said AM, Zeidan MS, Bassuoni MT, Tian Y (2012) Properties of concrete incorporating nano-
silica. Constr Build Mater 36:838–844. https://doi.org/10.1016/j.conbuildmat.2012.06.044
26. Sanchez F, Sobolev K (2010) Nanotechnology in concrete—a review. Constr Build Mater
24(11):2060–2071. https://doi.org/10.1016/j.conbuildmat.2010.03.014
27. Santos RF, Ribeiro JCL, Franco de Carvalho JM, Magalhães WLE, Pedroti LG, Nalon GH, de
Lima GES (2021) Nanofibrillated cellulose and its applications in cement-based composites:
a review. Constr Build Mater 288:123122. https://doi.org/10.1016/j.conbuildmat.2021.123122
28. Santos SF, Teixeira RS, Savastano Junior H (2017) Interfacial transition zone between ligno-
cellulosic fiber and matrix in cement-based composites. In: Sustainable and nonconventional
construction materials using inorganic bonded fiber composites. Elsevier, pp 27–68. https://
doi.org/10.1016/B978-0-08-102001-2.00003-6
29. Sharip NS, Ariffin H (2019) Cellulose nanofibrils for biomaterial applications. Mater Today
Proc 16:1959–1968. https://doi.org/10.1016/j.matpr.2019.06.074
30. Siddique R, Mehta A (2014) Effect of carbon nanotubes on properties of cement mortars.
Constr Build Mater 50:116–129. https://doi.org/10.1016/j.conbuildmat.2013.09.019
31. Spadea S, Farina I, Carrafiello A, Fraternali F (2015) Recycled nylon fibers as cement mortar
reinforcement. Constr Build Mater 80:200–209. https://doi.org/10.1016/j.conbuildmat.2015.
01.075
32. Tang Z, Huang R, Mei C, Sun X, Zhou D, Zhang X, Wu Q (2019) Influence of cellulose
nanoparticles on rheological behavior of oil well cement-water slurries. Materials 12(2):291.
https://doi.org/10.3390/ma12020291
33. Yi Y, Guo S, Li S, Zillur Rahman M, Zhou L, Shi C, Zhu D (2021) Effect of alkalinity on the
shear performance degradation of basalt fiber-reinforced polymer bars in simulated seawater
sea sand concrete environment. Constr Build Mater 299:123957. https://doi.org/10.1016/j.con
buildmat.2021.123957
Applications of Agro-waste
in the Construction Industry: A Review
1 Introduction
Human activities are the main driving forces for the changes in the Earth's climate, risking human health, security, biodiversity, and economic growth [1]. This risk has motivated Canada and many other countries to start taking action and to enforce regulations to control such growing environmental concerns. Hence, sustainability was set as a goal by the Canadian and other governments committed to the United Nations' 2030 Agenda for Sustainable Development, with reducing greenhouse gas (GHG) emissions prioritized among the actions needed to transition to net-zero carbon and climate-resilient operations by 2030 [2]. It is clear that reducing environmental impacts extends beyond reducing carbon emissions to include recycling and reusing various wastes and saving natural resources [3]. More attention has therefore been directed to construction materials and waste management: the embodied carbon of construction materials should be reduced by 30% by 2025, and, simultaneously, at least 75% of waste should be diverted from landfills by 2030. Therefore, many research projects have focused on reducing the environmental impacts of two major industrial sectors, construction and agriculture, which has added new challenges to both sectors [4].
The growing population has intensified agricultural and industrial activities over the last decade. This growth has resulted in a substantial increase in the quantities of wastes/residuals from both the harvestable yield and the non-harvestable biomass, which harms the environment. Many farmers tend to burn biomass residues in the open, affecting the atmosphere and surface water [5]. In numbers, burning 1 tonne of landfilled dry, ash-free wood produces 0.73 tonnes of carbon dioxide (CO2) [6]. Hence, waste/residual incineration has been restricted since 1990.
On the other hand, forestry and agriculture wastes in Asia and North America account for two-thirds of the biomass wastes arising from crop production [7]. Therefore, finding an optimum management technique for these massive amounts of waste and residuals is a challenge for the agriculture sector. Sending them to landfills was the best available way to avoid incineration; however, dwindling landfill space, new environmental regulations, and high disposal fees urge the need for alternative solutions. Several approaches have been proposed to overcome waste management challenges in the agriculture sector, including (a) use as an alternative fuel and (b) reuse and recycling in construction applications in the form of supplementary cementitious materials and alternative aggregates [8].
Recently, concrete products have become the largest sink for many industrial by-products and wastes. Many studies have reported the successful use of agricultural wastes/residuals (so-called "agro-waste") as a replacement for aggregate in structural and non-structural concrete [9–13]. Since aggregate represents around 60–80% of the concrete volume, the whole or partial replacement of natural aggregates with such agro-wastes will lead to savings in energy and natural resources, solving waste disposal problems and protecting the environment. This replacement of natural aggregate can also raise the sustainability level of concrete products by reducing the embodied energy of concrete and reusing recycled materials. Moreover, most sustainability rating systems recognize diverting agro-wastes from landfill disposal to use as aggregate for concrete production. For instance, the United States Green Building Council's (USGBC) Leadership in Energy and Environmental Design (LEED) gives points to concrete products incorporating pre-consumer recycled material [13]. Additional points in the energy category can be gained from the thermal properties of concrete containing agro-wastes under the LEED "Optimize Energy Performance" credit. Therefore, incorporating agro-wastes as aggregate in concrete will help achieve more environmentally friendly materials and structures.
Agricultural waste products have a wide range of properties; therefore, research on their utilization is still in its early stages. There is still a need for more research into the development of the mechanical properties and long-term durability of concrete incorporating such wastes. The purpose of this study is to provide an overview of some of the agricultural wastes that have been effectively used as concrete aggregate. Recognition of these resources and their usage in concrete would pave the way for additional agricultural wastes to be used in the building sector, as well as in other industries, resulting in a more ecologically friendly concrete industry.
Different types of agricultural waste materials, such as black spruce residual [14], spruce residual [9], sugarcane bagasse ash (SCBA) [15], groundnut shell [16], oyster shell [16–18], sawdust [19], wild giant reed [20], rice husk [21], cork [22], tobacco waste [13, 23], and coconut shell [24], have been used as fine or coarse aggregate replacements in concrete, as well as in other applications depending on their type and properties. The papers reviewed in this study are listed in Table 1.
3 Mechanical Properties
Fig. 1 Effect of sugarcane bagasse ash (SCBA) on compressive strength at different replacement percentages in three studies
The replacement of concrete constituents with giant reed ash (GRA) and giant reed fibres (GRF) was examined by Ismail and Jaeel [20]. An improvement in flexural strength was reported, as these fibrous agro-wastes are able to bridge cracks and restrain their propagation. Also, Brás [32] found that the flexural strength of mortar decreases as the percentage of cork in the mortar increases.
4 Thermal Conductivity
Panesar and Shindman [22] found that mixtures containing 10% and 20% cork lowered thermal conductivity by up to 30%. They also described how thermal conductivity increases with increasing density; thermal conductivity is determined not just by density but also by the size, gradation, and proportion of the cork replacement. According to Brás et al. [29, 32], thermal conductivity and density decrease linearly in mortars with a high cork granulate substitution ratio. They also found that mortar containing cork granulates shows a larger linear decrease in thermal conductivity than mortar containing expanded polystyrene. Moreira et al. [30] examined the effect of different expanded cork granulate substitution ratios in cement mortar. Like other studies, they detected a decline in the thermal conductivity coefficient and attributed it to the decreasing cement content in the mortar mixtures. The thermal conductivity of tobacco waste materials suggests that they might be employed as a covering and dividing material in concrete, as reported by Öztürk and Bayrakl [23].
5 Durability
Ghanad et al. [9] showed that with the increase of the content of spruce residual,
water absorption increased. It was related to the decreased cement concentration,
which results in fewer hydration products and, as a result, fewer pozzolanic reac-
tions. In another study [14], the addition of agro-waste was shown to increase the overall porosity and absorption rate; when the amount of agro-waste in the system climbed from 0% to 30%, the absorption rate nearly doubled. Moreover, shrinkage, as an indicator of potential cracking and of opening the microstructure to the ingress of aggressive materials, was evaluated by Ghanad et al. [9]. As the sand replacement with spruce residual increased, the restraining action of the aggregates was reduced, leading to increased shrinkage; the shrinkage increased in proportion to the replacement ratio.
6 Conclusion
References
1. Rosa EA, Rudel TK, York R, Jorgenson AK, Dietz T (2015) The human (anthropogenic) driving
forces of global climate change. Clim Change Soc Sociol Perspect 2:32–60
2. Lin B, Agyeman SD (2020) Assessing sub-Saharan Africa’s low carbon development through
the dynamics of energy-related carbon dioxide emissions. J Clean Prod 274:122676
3. Turner DA, Williams ID, Kemp S (2015) Greenhouse gas emission factors for recycling of
source-segregated waste materials. Resour Conserv Recycl 105:186–197
4. 2020–21 Departmental Results Report—Treasury Board of Canada Secretariat. https://www.
canada.ca/en/treasury-board-secretariat/corporate/reports/2020-21-departmental-results-rep
orts.html
5. Junpen A, Pansuk J, Kamnoet O, Cheewaphongphan P, Garivait S (2018) Emission of air
pollutants from rice residue open burning in Thailand, 2018. Atmosphere 9(11):449
6. Lame J (2017) Carbon negative impacts from biomass conversion. Biof Dig. http://www.bio
fuelsdigest.com/bdigest/2017/01/04/carbon-negative-impacts-from-biomass-conversion/
7. Tripathi N, Hills CD, Singh RS, Atkinson CJ (2019) Biomass waste utilisation in low-carbon
products: harnessing a major potential resource. NPJ Clim Atmos Sci 2(1):1–10
8. Sahu R, Chandrakar L, Nirmalkar S, Das B (2017) Waste Management and their Utilization.
Int J Eng Res Tech (IJERT) 06(04). https://doi.org/10.17577/IJERTV6IS040637
9. Ghanad DA, Soliman A, Godbout S, Palacios J (2020) Properties of bio-based controlled low
strength materials. Constr Build Mater 262:120742
29. Brás A, Gonçalves F, Faustino P (2014) Cork-based mortars for thermal bridges correction in
a dwelling: Thermal performance and cost evaluation. Energy Build 72:296–308. https://doi.
org/10.1016/j.enbuild.2013.12.022. https://www.sciencedirect.com/science/article/pii/S03787
78813008372
30. Moreira A, António J, Tadeu A (2014) Lightweight screed containing cork granules: mechan-
ical and hygrothermal characterization. Cem Concr Compos 49:1–8. https://doi.org/10.1016/j.
cemconcomp.2014.01.012. https://www.sciencedirect.com/science/article/pii/S09589465140
00237
31. Sales A, Lima SA (2010) Use of Brazilian sugarcane bagasse ash in concrete as sand replace-
ment. Waste Manag 30(6):1114–1122. https://doi.org/10.1016/j.wasman.2010.01.026. https://
www.sciencedirect.com/science/article/pii/S0956053X10000681
32. Brás A, Leal M, Faria P (2013) Cement-cork mortars for thermal bridges correc-
tion. Comparison with cement-EPS mortars performance. Constr Build Mater 49:315–
327. https://doi.org/10.1016/j.conbuildmat.2013.08.006. https://www.sciencedirect.com/sci
ence/article/pii/S0950061813007368
The Compressive Strength of Ultra-high
Performance Concrete at Elevated
Temperatures
1 Introduction
2 Experimental Procedure
The UHPC mix was a proprietary design provided by ceEntek. The mix design can
be seen in Table 1. The mix was prepared using an Imer360 Plus Mortarman mixer.
The Imer360 uses a combination of three paddles that rotate through the mix instead
of simply lifting the mix up and dropping it, as is the case in conventional mixers.
This mechanism allows for adequate mixing of low slump materials, such as UHPC.
The cement was first added to the mixer. The superplasticizer was diluted with potable water and then added to the mixer within the first 30 s of mixing. After two minutes of mixing, the PVA fibres, with a diameter of 0.2 mm and a length of 12 mm, were dispersed over the course of four minutes. After eight minutes of mixing, the slump was measured. A slump of 225 mm was found after 30 s of flow, indicating that proper workability had been achieved.
In total, 27 cylinders (75 × 150 mm) were cast and left to cure for 48 h in their forms. Samples were then demoulded and submerged in water for seven days to accelerate the curing process, as suggested by Ambily et al. (2013).
Cylinders were left to air-dry for approximately one week following water submersion. Cylinders were then subjected to oven-drying at 105 °C to measure moisture content; this procedure has also been shown to reduce the propensity of UHPC to spall explosively (Kodur et al. 2021). Pre-drying minimizes the effect of vapour inside the pores of the UHPC microstructure, thus reducing the additional tensile stresses that could build up and cause explosive spalling [9]. For five to six days, the cylinders were oven-dried and weighed every 24 h until the weight loss stabilized. Once weight stabilization occurred, the cylinders were kept at room temperature until an age of 28 days was reached.
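The weight-stabilization criterion used during oven-drying is not specified in the text. As a hedged illustration of the procedure, the sketch below flags stabilization once the mass change between successive 24-h weighings drops below an assumed threshold of 0.1%; the cylinder masses shown are hypothetical.

```python
# Hedged sketch of the oven-drying weight-stabilization check.
# Assumption: drying is considered stable when the 24-h mass loss falls below 0.1%
# (the actual criterion is not reported). Masses below are hypothetical, in grams.
daily_mass = [1510.0, 1498.2, 1492.5, 1490.1, 1489.4, 1489.2]

for day in range(1, len(daily_mass)):
    change = (daily_mass[day - 1] - daily_mass[day]) / daily_mass[day - 1] * 100
    print(f"Day {day}: mass loss {change:.2f}%")
    if change < 0.1:
        print(f"Weight considered stabilized after day {day}")
        break
```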
Figure 1 shows the Instron SATEC testing machine and Eurotherm 2408 heating
chamber, used for steady-state temperature tests. For steady-state testing, cylinders
were brought to a specific temperature under no load. Following the maintenance
of thermal equilibrium, cylinders were loaded in compression until failure. Three
different temperature levels were chosen (300, 400, and 500 °C) based on the capacity of the heating chamber. The samples were heated at a rate of 2 °C/min. This heating rate was chosen based on a study by Yang et al. (2019), which determined that it is slow enough to avoid explosive spalling in UHPC cylinders of similar dimensions. A high heating rate can make UHPC more prone to explosive spalling at temperatures above 200 °C; however, a rate of 2 °C/min has been shown to successfully limit explosive spalling throughout the heating process (Banerji and Kodur 2021). Once the desired temperature was reached, a soaking period of two hours commenced, allowing thermal equilibrium to be reached. The required soaking period was determined using Type-K thermocouples placed in the middle of a select number of samples. Figure 2 displays the temperature rise of the furnace, as well as the temperature rise within a cylinder subjected to 300 °C steady-state conditions. From the plot, it can be seen that it takes approximately two additional hours after the furnace reaches the desired temperature for the cylinder to reach thermal equilibrium. A thermocouple was placed in a cylinder at each temperature level, and it consistently took the cylinder an extra two hours to reach thermal equilibrium.
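From the stated heating rate and soaking period, the total conditioning time before loading can be estimated for each target temperature. The sketch below assumes an initial specimen temperature of about 20 °C, which is not given explicitly in the text.

```python
# Estimated furnace ramp + soak time for each steady-state target temperature.
# Assumption: specimens start at roughly 20 C (room temperature, not stated explicitly).
HEATING_RATE = 2.0   # deg C per minute
SOAK_MIN = 120       # 2-h soaking period to reach thermal equilibrium
START_TEMP = 20.0

for target in (300, 400, 500):
    ramp_min = (target - START_TEMP) / HEATING_RATE
    total_h = (ramp_min + SOAK_MIN) / 60
    print(f"{target} C: ramp {ramp_min:.0f} min + soak {SOAK_MIN} min = {total_h:.1f} h")
```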
Once the cylinders reached thermal equilibrium, the compressive load was applied
until failure occurred. The load was applied at a rate of 0.25 MPa/s (ASTM C39).
Scanning electron microscope (SEM) images were taken of the failed compressive samples. SEM imaging allows the determination of how increasing temperatures affect the UHPC at the microscopic level. The images were taken in high-vacuum mode on an FEI Quanta 650 FEG environmental SEM at Queen's University. This microscope collects images using a back-scatter electron (BSE) detector in eight-bit greyscale. In the final images, heavier materials appear brighter, allowing the various constituents of a UHPC sample to be differentiated. A small sample, approximately one millimetre in diameter, was taken from a failed cylinder at each temperature level.
3 Experimental Results
The experimental results from the compression steady-state tests are shown in Fig. 3.
Error bars are included to show the variability in results. Each bar is an average value
of three cylinders tested under each steady-state temperature. The error bars were
constructed using the standard deviation, determined from the three samples tested
under each steady-state exposure temperature. The highest variability in compressive
strengths occurs at 300 °C. However, as the standard deviation is determined with
the minimum required number of samples, no concrete conclusions can be made as
to why the compressive strength at 300 °C would have higher variability than at 400
and 500 °C. It is recommended that further tests be completed to obtain a better picture of the variability of compressive strength at each temperature.
The failed cylinders after exposure to steady state 300, 400, and 500 °C are
shown in Fig. 4. Following failure, samples were visually inspected for evidence
of spalling, failure mechanisms, and damage to the surrounding area. The spalling
extent and failure mechanism were similar in all steady-state conditions. Failure
occurred suddenly, with no observable warning, and in an explosive manner. Severe
fragmentation occurred in all samples, with the fragments varying in size and shape. In general, each cylinder broke into two major cone-shaped fragments. The remainder of the cylinder fragmented into numerous slivers, ranging from approximately one to
five millimetres in length. On the left, Fig. 4 shows the smaller fragments immediately
after failure for each temperature. The right of Fig. 4 shows the major cone section
from a respective sample under 300, 400, and 500 °C.
Figure 5 displays how the UHPC cylinders change with differing temperatures.
At 300 °C, the majority of the PVA fibres have melted, leaving behind small black
channels where they previously existed in a solid state. At 400 °C, it appears that
the vacant PVA channels become more noticeable and darker in colour, indicating
further degradation from 300 °C. At 500 °C, the black vacant channels have largely
disappeared. Instead, the UHPC appears to change to a more whitish-grey colour,
with white spots speckled throughout the matrix. This change in colour indicates that
between 400 and 500 °C, the UHPC undergoes a microstructural change, resulting
in the decreased visibility of the vacant PVA channels.
Fig. 4 Cylinders after failure due to steady-state conditions under compressive loading
Fig. 5 Changing UHPC microstructure after exposure to various temperatures under steady-state
conditions
Figure 6 presents SEM images taken from the failed specimens subjected to 300,
400, and 500 °C.
4 Discussion
The results from compression testing under steady-state conditions show a clear
decrease in compressive strength with increased exposure temperature. As shown
in Fig. 3, the highest compressive strength occurs at room temperature, and from
that point, there is a steady decrease with increasing temperature. Room temperature
tests show an average compressive strength of 156.5 MPa. At 300 °C, the average
compressive strength decreases to 124.1 MPa. This trend continues at 400 and 500 °C,
where compressive strength further decreases to 118.1 and 83.2 MPa, respectively.
The highest variability occurs at 300 °C.
From room temperature to 300 °C, there is a 21% decrease in average compressive
strength under steady-state high temperature exposure. In comparison, from 300 to
400 °C there is a 5% decrease in average compressive strength, and from 400 to 500 °C
there is a 30% decrease in average compressive strength. Therefore, the majority
of strength degradation occurs from 400 to 500 °C. UHPC must thus be designed
to withstand a large compressive strength decrease within this critical temperature
range.
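The percentage losses quoted above follow directly from the average strengths; a minimal check, using only the values reported in this section, is sketched below.

```python
# Relative strength loss between successive exposure temperatures,
# using the average compressive strengths reported above (MPa).
strengths = {"room": 156.5, 300: 124.1, 400: 118.1, 500: 83.2}

pairs = [("room", 300), (300, 400), (400, 500)]
for a, b in pairs:
    loss = (strengths[a] - strengths[b]) / strengths[a] * 100
    print(f"{a} -> {b} C: {loss:.0f}% decrease")   # ~21%, ~5%, ~30%
```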
The trend of steady-state compressive results is consistent with that found by [3].
The authors tested cylinders with a hybrid UHPC mixture with steel and PP fibres
in compression immediately after heating to specified temperatures. Since the fibre
types are not the same as in the present research, a direct comparison is not possible,
however, the previous results do provide a reference point. The authors found that at
The Compressive Strength of Ultra-high Performance Concrete … 903
200, 400, and 600 °C, the compressive strength decreased to 145, 121, and 78 MPa,
respectively.
As discussed previously, the failure mechanisms do not change with temperature. The cylinders tend to break into two main cone pieces, with the remaining UHPC breaking off into smaller fragments. The failures are sudden and extremely violent, characteristic of explosive spalling [7]. The UHPC cylinders did not undergo any spalling during the heating process itself, which is attributed to the low heating rate of 2 °C/min. This can also be attributed to the system of interconnected cracks that forms throughout the heating process, as seen in Fig. 6. The cracking begins at 300 °C, providing pathways for pore water pressure to dissipate from a relatively low temperature level. The fact that the cylinders did not spall during heating indicates that, at temperatures of 500 °C and below, heating alone is not sufficient to cause failure of the UHPC.
From the analysis of the SEM images shown in Fig. 6, it can be seen how heating of the samples affects the behaviour of UHPC at the microscopic level. PVA fibres have a melting point of approximately 220 °C [8]. As 300 °C is the lowest temperature level, the PVA fibres had already melted prior to loading. This is clear in Fig. 6, with long, slender vacant channels showing where the PVA fibres had been located prior to melting. At this temperature, minor cracks began to form around the PVA fibre channels. The crack formation has been largely attributed to a mismatch in the coefficients of thermal expansion (CTEs) between the fibres and the concrete matrix [11] (Zhang et al. 2020). The CTE of polymers is typically around 10⁻⁴/°C, while the CTE of UHPC is approximately 10⁻⁶/°C [1].
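To put the CTE mismatch in perspective, the sketch below estimates the differential thermal strain between the PVA fibres and the UHPC matrix for a temperature rise to 300 °C; the 20 °C reference temperature is an assumption, and the CTE values are the order-of-magnitude figures quoted above.

```python
# Rough estimate of the thermal strain mismatch between PVA fibres and the UHPC matrix.
# CTEs are the order-of-magnitude values quoted in the text; the 20 C reference
# temperature is an assumption made for illustration.
CTE_POLYMER = 1e-4   # per deg C (PVA fibres)
CTE_UHPC = 1e-6      # per deg C (UHPC matrix)

delta_T = 300 - 20
mismatch_strain = (CTE_POLYMER - CTE_UHPC) * delta_T
print(f"Differential strain at 300 C: {mismatch_strain:.3%}")  # roughly 2.8%
```

A differential strain of this order is far larger than the tensile strain capacity of a cementitious matrix (on the order of 0.01%), which is consistent with the microcracking observed around the fibre channels.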
At 400 °C, the microcracks began to widen and expand to connect vacant PVA
channels, forming the beginning of a network of interconnected cracks. At 500 °C,
the cracks further widened, and severe damage to the sample was seen. The cracks
noticeably connected the vacant PVA fibre channels, extending in numerous direc-
tions radially out of the channel. Additionally, the vacant PVA fibre channels had
a slightly darker colouration than what is seen at 300 and 400 °C. This leads to
the conclusion that further degradation of the concrete matrix occurs as temperature
continues to rise. The vacant PVA fibre channels appeared to become deeper with
increasing temperature. From 300 to 600 °C, the SEM images clearly show a deep-
ening and widening of the vacant channels with the increase in temperatures. From
400 to 600 °C, the largest decrease in compressive strength occurred, indicating that
this is also the temperature range where the largest degradation of the concrete matrix
occurred.
5 Conclusions
Acknowledgements Financial support was provided by the National Research Council of Canada (NRC), the Canadian Precast/Prestressed Concrete Institute (CPCI), and the Queen's University Civil Engineering Department. The authors would also like to thank the Government of Canada for the COVID relief funds.
References
1. Ahlborn TM, Misson DL, Peuse EJ, Gilbertson CG (2008) Durability and strength characteriza-
tion of ultra-high performance concrete under variable curing regimes. In: Fehling E, Schmidt
M, Stürwald S (eds) Proceedings of 2nd International Symposium on Ultra High Performance
Concrete. Kassel, Germany, pp 197–204
2. Banerji S, Kodur V, Solhmirzaei R (2020) Experimental behavior of ultra high performance
fiber reinforced concrete beams under fire conditions. Eng Struct 208
3. Banerji S, Kodur V (2022) Effect of temperature on mechanical properties of ultra-high
performance concrete. Fire Mater 46(1):287–301
4. Heinz D, Dehn F, Urbonas L (2004) Fire resistance of ultra high performance concrete
(UHPC)—testing of laboratory samples and columns under load. In: International symposium
on ultra high performance concrete, pp 703–716
5. Kodur V, Banerji S, Solhmirzaei R (2020) Effect of temperature on thermal properties of
ultrahigh-performance concrete. J Mater Civil Eng 32(8)
Development of Self-Energy Storing Engineered Cementitious Composites
1 Introduction
Climate change has adversely affected human health, natural ecosystems, and the
global economy. Approximately 33% of total energy consumption is expended by
the building sector, and this percentage is expected to reach 42.4% by 2030 [1,
2]. In particular, the operational phase of buildings accounts for 73% of their total
energy consumption and 64% of their CO2 emissions [3]. In addition, as a result of
insufficient energy storage capacity and resistance to thermal changes, significant amounts of energy are ultimately lost through the building envelope [4].
Therefore, it is highly beneficial to improve the envelopes and insulation capacity
of buildings, especially if new high-performance cementitious composites are also
included.
Materials that can store thermal energy, such as phase change materials (PCMs),
can be used to improve the energy storage capacity of concretes. PCMs can effectively
store solar energy owing to their very high volumetric heat capacity and small varia-
tions in temperature while changing their phases [5, 6]. Owing to their high latent heat
storage densities, PCMs can absorb thermal energy when the outside temperature is
high and release it when the outside temperature is low. Thus, incorporating PCMs
into concretes may highly improve the thermal properties of construction materials.
Recent studies have evaluated the effectiveness of incorporating PCMs into various concrete mixes to develop a self-energy-storing building material. When any new material is incorporated into concrete, the various properties of the concrete are subject to change. Pilehvar et al. [7] explored the effect of microencapsulated PCMs (MPCMs) on the fresh and mechanical properties of Portland cement concrete and geopolymer concrete. In their findings, increasing the PCM replacement (by mass of sand) from 0 to 20% resulted in a decrease in slump for both concrete mixes. This loss in workability was attributed to the smaller particle size of the PCMs compared to the sand they replace: by replacing the sand with a finer material, the water demand increases and the workability decreases. A greater percentage of solids in the concrete mix may also cause this trend [8]. The compressive strength of Portland cement concrete and
also cause this trend [8]. The compressive strength of Portland cement concrete and
geopolymer concrete also decreased with increasing MPCM content. Using SEM
imaging, the authors observed weak connections and air voids between the MPCM
and the cementitious matrix. PCMs have also been evaluated in high-performance
concrete. For example, Hunger et al. [9] incorporated MPCMs into self-compacting
concrete. A decrease in compressive strength was noticed. Furthermore, this imple-
mentation resulted in a decrease in thermal conductivity but an increase in heat
capacity that contributes to a better thermal performance for building applications.
MPCMs have been further studied in ultra-high-performance concrete (UHPC) [10].
A 35% strength reduction was observed within 3 days of steam-cured UHPC with
10 wt% incorporated MPCMs.
Engineered cementitious composites (ECCs) are a recently developed class of concrete known for its outstanding mechanical, durability, and cracking/self-healing performance. The application of ECC allows for long life cycles, lower long-term costs, and sustainability benefits compared with conventional normal-strength concrete [11]. Common ECC constituents include cement, fly ash, fine silica sand, polymeric fibers, water, and high-range water-reducing admixtures (HRWRA) [12, 13]. Regarding its mechanical properties, the compressive strength of ECC can range from 30 to 90 MPa. The absence of coarse aggregates in the mix gives ECCs relatively low elastic moduli (compared with normal concrete), in the range of 20–25 GPa. Moreover, this material is defined by its high tensile ductility, achieving strain capacities of up to 5%, far higher than those of normal concretes, and exhibiting fine multiple cracking behavior with crack openings smaller than 100 µm [13].
An approach to conserving the thermal energy in building envelopes is to increase
the thermal mass of the envelope material. However, although the thermal proper-
ties of ECC can be improved with the addition of PCMs, this can cause significant reductions in its mechanical strengths. For example, Desai et al. [14] incorporated a paraffin-based PCM into ECCs at 3% total mass replacement. The compressive
strength results indicated a reduction from 47 to 28 MPa at 28 days. Also, the presence
of PCM reduced the ultimate tensile strength of ECC by around 20%.
Tuncel and Pekmezci [15] explored glass-fiber-reinforced cement composites with
PCMs at 4% and 8% total weight replacement. This method of replacement was
shown to result in significant loss of mechanical performance due to the reduced
volume of cementitious matrix and fibers. Another technique of incorporating PCM
into ECC was also tried by Gürbüz and Erdem [16] who developed a hybrid (0.5%
steel and 1.5% PVA fiber) ECC with MPCMs at 0, 1, 2, 3, and 5% replacement of silica
sand by weight. Although mechanical properties were not highly decreased, the effect
of PCMs on the thermal performance was insufficient at these low replacement levels.
Xu and Li [17] incorporated a paraffin/diatomite composite PCM with a particle size
of up to 300 µm into ECC to improve its thermal capacity. This integration resulted in a significantly lower compressive strength, though the thermal insulation was improved.
The significant reduction in compressive strength was attributed to the clay-based nature of the PCM/diatomite composite and to its incorporation method, by total volume of silica sand [17]. Thus, it is important to consider new parameters when including
PCMs into ECC compositions for a better balance between the mechanical and
thermal performance of these composites.
The goal of this study is to investigate the inclusion of microencapsulated PCMs with particle sizes between 15 and 30 µm into ECC mixtures to improve their thermal storage capacity while optimizing their mechanical and cracking performance relative to a control mixture without MPCMs. Two PCMs with low and moderate melting temperatures of 6 °C and 18 °C, respectively, are evaluated, and their effectiveness in storing and releasing energy under fluctuating temperatures is assessed.
2 Experimental Work
2.1 Materials
The materials used for casting the ECC samples include ordinary Portland cement (PC), class-F fly ash (FA), silica sand (SS) with a maximum aggregate size of 400 µm, polyvinyl alcohol (PVA) fibers, and a polycarboxylic ether-based superplasticizer. The chemical and physical properties of the cementitious materials (PC and FA) are provided in Table 1. The microencapsulated PCMs were Nextek 6D and Nextek 18D, supplied by Microtek Laboratories Inc., with phase change temperatures of 6 °C and 18 °C, respectively. Both MPCMs are in the form of a dry powder (> 97% solids) with a mean particle size of 15–30 µm and a melamine-based capsule wall containing paraffin as the PCM. The heat of fusion for Nextek 6D and Nextek 18D is ≥ 170 J/g and ≥ 190 J/g, respectively. Figure 1 shows the visual appearance of the PVA fibers and MPCMs used in this study.
The MPCM-based mixtures were proportioned at an MPCM-to-sand ratio of 0.1 to 0.3. The HRWRA dosage was increased until the desired workability was achieved; more specifically, HRWRA amounts of 8 kg/m3, 10 kg/m3, and 12 kg/m3 were used for MPCM contents of 10%, 20%, and 30%, respectively. The seven mixes are outlined in Table 2. The produced ECCs were coded based on the MPCM type and content; for instance, ECC-6D30 is the mixture with MPCM type 6D at a content of 30%. The quantities of Nextek 6D and Nextek 18D are equal for the corresponding mixes, since the specific gravities of Nextek 6D and Nextek 18D are the same.
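As a compact summary of the dosing scheme just described, the sketch below tabulates the reported HRWRA dosages per MPCM level and converts the MPCM-to-sand ratio into an approximate MPCM mass; the sand content used for the conversion is a hypothetical placeholder, since Table 2 is not reproduced here.

```python
# Summary of the reported HRWRA dosages per MPCM replacement level, with a
# hedged conversion of the MPCM-to-sand ratio into an MPCM mass.
# NOTE: sand_content is a hypothetical placeholder (the actual value is in Table 2).
hrwra_dose = {0.10: 8.0, 0.20: 10.0, 0.30: 12.0}   # kg per m3, as reported in the text
sand_content = 450.0                               # kg per m3, hypothetical

for ratio, dose in hrwra_dose.items():
    mpcm_mass = ratio * sand_content
    print(f"MPCM/sand = {ratio:.0%}: HRWRA = {dose} kg/m3, "
          f"MPCM ~ {mpcm_mass:.0f} kg/m3 (for the assumed sand content)")
```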
All ECC mixes were cast using a planetary mixer with a volume of 120 L. The fresh properties were evaluated by targeting a flow diameter of around 20 cm for all compositions. The control and MPCM-based ECC mixtures were cast into 360 × 75 × 50 mm prism molds to evaluate the flexural strength and mid-span beam deflection capacity at 7 and 28 days. In addition, 50 mm cube specimens were cast to evaluate the compressive strength at 7 and 28 days. Lastly, 150 × 150 × 50 mm specimens were cast for the assessment of thermal performance during heating/cooling cycles. The specimens were demolded after 24 h and stored in plastic bags at 23 ± 2 °C and 95 ± 5% relative humidity until the testing time.
The thermal performance of MPCM-ECC can be analyzed by the amount of
heat that is actively released during cooling and absorbed during heating periods.
Additionally, MPCMs increase the thermal mass by increasing the specific heat
capacity of the material. These two functions of MPCMs enable a self-storing energy
system that can assist in regulating temperatures for building applications. To apply
this concept for testing thermal performance, MPCM-ECC specimens were set up
in an environmental chamber to have one side exposed to the ambient temperature,
and the other sides completely insulated. The 150 × 150 × 50 mm specimens were inserted into an insulation unit made from 3-inch XPS panels that covered five out of
six surfaces of the specimen. To ensure that the unit was completely insulated, spray
insulation foam was used to fill in all the gaps. StrainSmart data acquisition software
was used to monitor temperatures from thermocouples placed on the center of the
exposed side, the parallel sides, and the insulated side, similar to the setup presented
in D’Alessandro et al. [18]. This configuration, shown in Fig. 2, allows for monitoring
the thermal response of the insulated side, which is affected by the applied heat solely through heat conduction across the sample thickness. The environmental
chamber was programmed to undergo a heating/cooling regimen which goes 8 °C
above and below the phase change temperature [18]. For ECC-6D specimens, the
chamber was programmed to transition from 14 °C to −2 °C in 8-h segments outlined
in Fig. 3a. Similarly, the regimen for ECC-18D specimens consisted of transitioning
between 26 °C and 10 °C, as shown in Fig. 3b. These heating/cooling regimens
were chosen as an example of the change of temperature between day and night
times of two different seasons of the year. The relative humidity was kept constant
throughout testing. After removing samples from the bag at 28 days, specimens were
placed in an oven at 25 °C to dry for 14 days, minimizing the effect of water on the
thermal performance of ECCs. Three samples from each mix were subjected to the
test simultaneously.
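The heating/cooling regimens described above follow a simple rule (8 °C above and below the phase change temperature, alternating in 8-h segments). The sketch below reproduces the chamber set-points from that rule; it is an illustration rather than the actual chamber program, and starting each regimen from the high set-point is an assumption.

```python
# Chamber set-points implied by the described regimen: 8 C above and below the
# phase change temperature, alternating in 8-h segments.
# Assumption: each regimen starts at the high set-point (the starting segment is not stated).
def regimen_setpoints(phase_change_temp, n_segments=5):
    high = phase_change_temp + 8
    low = phase_change_temp - 8
    # (elapsed hours, chamber set-point) pairs, alternating every 8 h.
    return [(8 * i, high if i % 2 == 0 else low) for i in range(n_segments)]

print(regimen_setpoints(6))    # ECC-6D:  [(0, 14), (8, -2), (16, 14), (24, -2), (32, 14)]
print(regimen_setpoints(18))   # ECC-18D: [(0, 26), (8, 10), (16, 26), (24, 10), (32, 26)]
```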
The compressive strengths of the various ECC mixes at 7 and 28 days are shown in Fig. 4. It is observed that the compressive strength generally decreases as the percentage of MPCMs increases. The ECC-control mix attained a compressive strength of 54.6 MPa. For the ECC-6D mixes, the incremental MPCM replacements caused a small increase of 1.6% at 10% MPCM and decreases of 3.7% and 14.2% at 20% and 30%, respectively. ECC-18D10, ECC-18D20, and ECC-18D30 exhibited strength losses of 1.8%, 4.9%, and 17.5%, respectively. An important aspect to note is the very similar effect of MPCM 6D and MPCM 18D on the compressive strengths of the ECC mixes; this was anticipated since the two MPCMs have identical properties (density, particle size, and shell material). The strength losses for ECCs at the 10% and 20% replacement levels appear to be very small. The 10% replacement mixes show no negative effect on the compressive strength, which can be associated with the filler effect of the finer MPCMs that replace the sand particles in the microstructure of the cementitious matrix. Also, the finer MPCM particles may have acted as nucleation sites for cement hydration [19], thereby contributing to strength compensation at different ages. This contradicts the decreasing trend of compressive strength reported in the literature at this level of PCM [16], thus confirming the effectiveness of the adopted method of MPCM addition, with fine particle sizes and silica sand replacement.
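For reference, the 28-day strengths implied by the percentage changes quoted above can be recovered from the control value. The short sketch below simply applies those reported percentages and should match Fig. 4 within rounding.

```python
# 28-day compressive strengths implied by the reported control value (54.6 MPa)
# and the percentage changes quoted in the text (positive = gain, negative = loss).
control = 54.6  # MPa
pct_change = {
    "ECC-6D10": +1.6, "ECC-6D20": -3.7, "ECC-6D30": -14.2,
    "ECC-18D10": -1.8, "ECC-18D20": -4.9, "ECC-18D30": -17.5,
}

for mix, pct in pct_change.items():
    strength = control * (1 + pct / 100)
    print(f"{mix}: ~{strength:.1f} MPa")
```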
Fig. 2 a Environmental chamber and data acquisition system; b configuration for thermal
performance testing
Fig. 3 Heating/cooling regimens applied in the environmental chamber (temperature (°C) versus time (h)): a ECC-6D specimens (14 °C to −2 °C); b ECC-18D specimens (26 °C to 10 °C)
by 5.0% and 20.6% compared to ECC-6D10, respectively. This indicates that the MPCMs also helped compensate for the negative effect of reducing the PVA fiber content in ECC.
The deflection at the peak load is a strong indicator of the ductility and multiple-cracking behavior of ECC. The control mix attained a deflection of 4.01 mm at maximum loading, and this value generally increased with increasing MPCM content. For ECC-6D10 and ECC-18D10, a decrease in deflection is observed; however, the ECC-6D20 and ECC-18D20 specimens have deflection values close to that of the control mix (approximately a 9% increase for both mixes). It is important to note the significant increase in the deflection of the ECC-6D30 (42.9%) and ECC-18D30 (35.0%) specimens compared to the control ECC. The increasing deflection trend is in line with previous studies [21]. Thus, incorporating MPCMs at high replacement levels seems to provide better ductile behavior for ECC specimens with 1.5% PVA fibers than the control mixture with 2% PVA fibers.
The thermal profiles of ECC-6D and ECC-18D are shown in Figs. 6 and 7, respec-
tively. The thermal performance is analyzed by observing the heat release/absorption
at the melting/solidifying temperature of the PCM. The ECC-6D and 18D mixes expe-
rienced a temperature deflection near the corresponding phase transition temperature,
indicating heat being absorbed/released during the phase change. A greater deflec-
tion suggests greater thermal energy being absorbed/released. It is important to note
that laboratory analysis from the manufacturer suggests a phase change temperature
± 2 °C of the specified value. Moreover, a longer period of heat release/absorption
delays the conduction of heat through the thickness of the sample, which is indicative
of good thermal performance. In general, the greater the temperature disparity between
the ECC-control and MPCM-ECC specimens, the better the thermal performance.
The general trend for ECC-6D and ECC-18D is an increase in the deflection
and overall thermal inertia for increasing MPCM content. With more MPCMs, more
energy is stored and released that ultimately contributes to greater temperature deflec-
tions. In addition, the duration of the PCM effect increases with MPCM content.
During the temperature drop, ECC-6D10 specimens provided approximately 6 h of
the heat-releasing effect and remained up to 3 °C above the ECC-control. On the
temperature rise, about 7 h of heat absorption is observed reaching a 2.5 °C temper-
ature difference with the control. This effect is enhanced by the addition of more
MPCMs, as in the case of ECC-6D30. The descending ramp shows the PCM effect
lasting for 8 h with up to 4 °C difference compared to the control. In the ascending
ramp, the heat absorption effect can be seen for 9 h and a maximum of 5 °C differ-
ence with ECC-control. Regarding ECC-18D mixes, the insulation effect due to an
increased thermal mass can be seen. ECC-18D mixes have longer PCM effect dura-
tions. For instance, ECC-18D30 does not even reach the control temperature and the
effect is visible for 15 h when the temperature rises. For building applications, the
duration and temperature deviations from MPCMs can significantly save on energy
Fig. 5 Flexural strength of different ECC mixes at 28 days. a ECC-Control; b ECC-6D10; c ECC-
6D20; d ECC-6D30; e ECC-18D10; f ECC-18D20; g ECC-18D30
[Fig. 6: thermal profiles, temperature (°C) versus time (h), for ECC-6D10, ECC-6D20, ECC-6D30, ECC-Control, and the ambient temperature]
[Fig. 7: thermal profiles, temperature (°C) versus time (h), for ECC-18D10, ECC-18D20, ECC-18D30, ECC-Control, and the ambient temperature]
unreacted fly ash particles, known for their smooth and spherical shape. As per the
EDS analysis, oxygen, carbon, calcium, and silicon were the dominant elements,
consistent with the amorphous reaction products and the remaining FA constituents.
The microstructural outcome from incorporating MPCM in ECCs can be clearly
seen in Fig. 8b, which also shows a compact and homogenous microstructural
network with the presence of high contents of fly ash and MPCM constituents. These
findings are in line with the mechanical results of MPCM-based ECCs and demon-
strate that the porous system reported in the literature from the use of MPCM mate-
rials in concretes [22, 23] can be avoided by the use of the appropriate microencap-
sulated capsules and incorporation methods. In addition, the MPCM particles were
well distributed and bonded within the ECC matrix, with no signs of clumping.
The number of broken capsules in Fig. 8b was almost negligible, despite the high
risk of breakage (damage) during the mixing of MPCM reported by Figueiredo
et al. [5, 24], thus also validating the adopted mixing method as another reason
for the improved mechanical and thermal properties of the developed MPCM-ECCs.
The presence of MPCM was reflected in the EDS analysis, which showed a higher
amount of carbon, originating from the melamine-based MPCM capsule shells, and lower
concentrations of calcium and silicon in the ECC-18D30 sample compared to the
control mixture.
In Fig. 8c, large traces of reaction products were observed on the surface of some
MPCM particles, especially those in direct contact with unreacted cement and fly
ash. This supports the previous discussion about the role of high-fineness MPCM
granules in serving as nucleation sites for the hydration of cementitious materials in
the ECC matrix. It also highlights the importance of conserving the amount of
binder materials when adding MPCM into ECC in order not to disrupt its hydration
system and the related mechanical performance.
Fig. 8 SEM micrographs and EDS analyses of MPCM and control ECCs: a ECC-control,
b ECC18D30, and c ECC18D30 at higher magnification
4 Conclusion
References
1. Abergel T, Dulac J, Hamilton I, Jordan M, Pradeep A (2019) 2019 Global status report for
buildings and construction sector. UNEP—UN Environment Programme, Dec 2019. Accessed
17 Mar 2022. http://www.unep.org/resources/publication/2019-global-status-report-buildings-
and-construction-sector
2. Sarihi S, Mehdizadeh Saradj F, Faizi M (2021) A critical review of Façade Retrofit measures for
minimizing heating and cooling demand in existing buildings. Sustain Cities Soc 64:102525.
https://doi.org/10.1016/j.scs.2020.102525
3. Ma J-J, Du G, Zhang Z-K, Wang P-X, Xie B-C (2017) Life cycle analysis of energy consumption
and CO2 emissions from a typical large office building in Tianjin, China. Build Environ 117:36–
48. https://doi.org/10.1016/j.buildenv.2017.03.005
4. Pasupathy A, Velraj R, Seeniraj RV (2008) Phase change material-based building architecture
for thermal management in residential and commercial establishments. Renew Sustain Energy
Rev 12(1):39–64. https://doi.org/10.1016/j.rser.2006.05.010
5. Fernandes F et al (2014) On the feasibility of using phase change materials (PCMs) to mitigate
thermal cracking in cementitious materials. Cem Concr Compos 51:14–26. https://doi.org/10.
1016/j.cemconcomp.2014.03.003
6. Khudhair AM, Farid MM (2004) A review on energy conservation in building applications
with thermal storage by latent heat using phase change materials. Energy Convers Manag
45(2):263–275. https://doi.org/10.1016/S0196-8904(03)00131-6
7. Pilehvar S et al (2017) Mechanical properties and microscale changes of geopolymer concrete
and Portland cement concrete containing micro-encapsulated phase change materials. Cem
Concr Res 100:341–349. https://doi.org/10.1016/j.cemconres.2017.07.012
8. Adesina A, Awoyera PO, Sivakrishna A, Rajesh Kumar K, Gobinath R (2020) Phase change
materials in concrete: an overview of properties. Mater Today Proc 27:391–395. https://doi.
org/10.1016/j.matpr.2019.11.228
9. Hunger M, Entrop AG, Mandilaras I, Brouwers HJH, Founti M (2009) The behavior of self-
compacting concrete containing micro-encapsulated Phase Change Materials. Cem Concr
Compos 31(10):731–743. https://doi.org/10.1016/j.cemconcomp.2009.08.002
10. Ren M, Wen X, Gao X, Liu Y (2021) Thermal and mechanical properties of ultra-high perfor-
mance concrete incorporated with microencapsulated phase change material. Constr Build
Mater 273:121714. https://doi.org/10.1016/j.conbuildmat.2020.121714
11. Krouma A, Syed ZI (2016) A review on the use of engineered cementitious composite in bridges.
Mater Sci Forum 860:125–134. https://doi.org/10.4028/www.scientific.net/MSF.860.125
12. Siad H, Lachemi M, Sahmaran M, Mesbah HA, Anwar Hossain KM (2018) Use of recy-
cled glass powder to improve the performance properties of high volume fly ash-engineered
cementitious composites. Constr Build Mater 163:53–62
13. Siad H, Lachemi M, Sahmaran M, Anwar Hossain KM (2017) Mechanical, physical, and self-
healing behaviors of engineered cementitious composites with glass powder. J Mater Civil Eng
29(6):04017016
14. Desai D, Miller M, Lynch JP, Li VC (2014) Development of thermally adaptive engineered
cementitious composite for passive heat storage. Constr Build Mater 67:366–372. https://doi.
org/10.1016/j.conbuildmat.2013.12.104
15. Tuncel EY, Pekmezci BY (2019) Performance of glass fiber-reinforced cement composites
containing phase change materials. Environ Prog Sustain Energy 38(3):e13061. https://doi.
org/10.1002/ep.13061
16. Gürbüz E, Erdem S (2020) Development and thermo-mechanical analysis of high-performance
hybrid fibre engineered cementitious composites with microencapsulated phase change mate-
rials. Constr Build Mater 263:120139. https://doi.org/10.1016/j.conbuildmat.2020.120139
17. Xu B, Li Z (2014) Performance of novel thermal energy storage engineered cementitious
composites incorporating a paraffin/diatomite composite phase change material. Appl Energy
121:114–122. https://doi.org/10.1016/j.apenergy.2014.02.007
18. D’Alessandro A, Pisello AL, Fabiani C, Ubertini F, Cabeza LF, Cotana F (2018) Multifunc-
tional smart concretes with novel phase change materials: Mechanical and thermo-energy
investigation. Appl Energy 212:1448–1461. https://doi.org/10.1016/j.apenergy.2018.01.014
19. Jayalath A, San Nicolas R, Sofi M, Shanks R, Ngo T, Aye L, Mendis P (2016) Properties
of cementitious mortar and concrete containing micro-encapsulated phase change materials.
Constr Build Mater 120:408–417
20. Jamekhorshid A, Sadrameli SM, Farid M (2014) A review of microencapsulation methods of
phase change materials (PCMs) as a thermal energy storage (TES) medium. Renew Sustain
Energy Rev 31:531–542. https://doi.org/10.1016/j.rser.2013.12.033
21. Erdem S, Gürbüz E (2019) Influence of microencapsulated phase change materials on the
flexural behavior and micromechanical impact damage of hybrid fibre reinforced engineered
cementitious composites. Compos Part B Eng 166:633–644. https://doi.org/10.1016/j.compos
itesb.2019.02.059
22. Snehal K, Das BB (2020) Effect of phase-change materials on the hydration and mineralogy
of cement mortar. In: Proceedings of the institution of civil engineers-construction materials,
8:1-1
23. Li M, Shi J (2019) Review on micropore grade inorganic porous medium based form stable
composite phase change materials: Preparation, performance improvement and effects on the
properties of cement mortar. Constr Build Mater 10(194):287–310
24. Figueiredo A, Lapa J, Vicente R, Cardoso C (2016) Mechanical and thermal characterization of
concrete with incorporation of microencapsulated PCM for applications in thermally activated
slabs. Constr Build Mater 112:639–647. https://doi.org/10.1016/j.conbuildmat.2016.02.225
Environmental
Estimating Lake Evaporation
for the South Saskatchewan River Basin
of Alberta
Z. Islam (B)
River Engineering and Technical Services, Alberta Environment and Protected Areas, Edmonton,
AB, Canada
e-mail: Zahidul.Islam@gov.ab.ca
S. Tanzeeba
Airshed and Watershed Management, Alberta Environment and Protected Areas, Calgary, AB,
Canada
C. de la Chevrotière · P. Rokaya
Watershed Resilience and Transboundary Waters, Alberta Environment and Protected Areas,
Edmonton, AB, Canada
1 Introduction
The rapidly growing population and its increasing water demands for domestic and
industrial consumption, irrigation, and hydropower have led to significant human
interventions in natural river systems. These interventions in the form of dams,
reservoirs, diversions, and withdrawals have significantly altered the natural flow
regimes and terrestrial water cycles [23]. A new hydrological reality is that half of
the world’s rivers are regulated, forming thousands of reservoirs that can store large
volumes of water [24]. It is estimated that dams and reservoirs store more
than 20% of the global mean annual runoff [10]. In Canada alone, there are over
15,000 dams, of which 933 are considered as “large” dams under the definition of
the International Commission on Large Dams [7]. While these dams and reservoirs
enhance water availability, support food and energy security, provide flood mitiga-
tion, and offer recreational value, they also pose adverse effects on the environment,
including changes in the natural flow regime, obstruction of sediment and nutrient
transport, and loss of habitat and biodiversity [23].
Dams and reservoirs not only change the magnitude and timing of flow to downstream
reaches, but they also intensify evaporative losses by increasing the upstream
surface area of water exposed to direct sunlight and wind [17]. Globally, evapora-
tive losses from reservoirs are estimated to be greater than the combined consump-
tion from industrial and domestic water uses [26]. In the reservoir water budget,
water surface evaporation typically represents a major component. The annual water
balance of lakes Superior and Tahoe in North America shows that evaporative losses
can be as high as 40–60% of the total reservoir output [9]. In Alberta, mean annual
evaporation from reservoirs (675 mm) is about 35% higher than mean annual precip-
itation (500 mm) [11]. Reservoirs can also have significant impacts on the overall
water balance of regulated river basins. Despite these facts, reservoir evaporation
has been an inconsistently and inaccurately estimated component of the water cycle
within modern water resources management practices owing to the complexities
involved with quantifying these losses [9].
However, to better support efficient water resource management, it is essential to
incorporate accurate reservoir evaporation information into current reservoir opera-
tion rules. This is particularly important for transboundary river basins, where evap-
orative losses are accounted for in the calculation of apportionable flow (i.e., volume
to be shared based on bilateral or multi-lateral agreements). In a transboundary
context, evaporative losses are generally recognized as a diversion or use of water, in
addition to the diversion from the reservoir storage itself. The South Saskatchewan
River Basin (SSRB) in Western Canada is such an example (see Fig. 1). The river
is subjected to an interprovincial water sharing agreement, namely the 1969 Master
Agreement on Apportionment. Therefore, the main objective of this study was to
investigate the evaporative losses from reservoirs and account for these losses as a
portion of the total water use. This will aid in closing the reservoir water balance and
equitable sharing of water resources.
The SSRB originates in the Rocky Mountains of Alberta and is a major sub-basin
of the Nelson River Basin of Western Canada. The drainage basin of the SSRB
spans approximately 121,095 km2 and encompasses a diverse topography from high
mountains (elevation of 3559 m) to prairie landscapes (elevation of 244 m), see
Fig. 1. The Red Deer is the largest sub-basin (41%), but contributes only 18% of
the mean annual flow. The Bow and Oldman sub-basins account for 21% and 22%
ET = 2ETW − ETP
2.3 Data
ECCC, Alberta Environment and Protected Areas (EPA), and AGI. The interpola-
tion method for precipitation is the Hybrid Inverse Distance cubed weighting (IDW)
using a daily search radius out to 60 km, or a maximum of eight closest stations,
whichever is satisfied first. In case no stations are available within 60 km of the
township center, the interpolation process uses the nearest neighbor, regardless of
its distance from the township center [2]. For temperature, humidity, and solar radi-
ation, the interpolation process includes a linear IDW procedure with a radius of
200 km or eight closest stations whichever is satisfied first. In case no stations are
available within 200 km of the township center, the interpolation process uses the
nearest neighbor, regardless of its distance from the township center.
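As a rough illustration of this interpolation logic (a minimal sketch only, not the ACIS implementation; the station distances and values below are placeholders), an inverse-distance-cubed weighting with a search radius and a nearest-neighbour fallback can be written as:

    import numpy as np

    def idw_cubed(distances_km, values, search_radius_km=60.0, max_stations=8):
        # Inverse-distance-cubed interpolation to a township centre.
        # Stations beyond the search radius are ignored; if none qualify,
        # the nearest station's value is used (nearest-neighbour fallback).
        distances_km = np.asarray(distances_km, dtype=float)
        values = np.asarray(values, dtype=float)
        within = np.where(distances_km <= search_radius_km)[0]
        if within.size == 0:
            return float(values[np.argmin(distances_km)])
        # Keep at most the closest max_stations stations inside the radius.
        nearest = within[np.argsort(distances_km[within])][:max_stations]
        d = np.maximum(distances_km[nearest], 1e-6)  # avoid division by zero
        w = 1.0 / d**3                               # inverse-distance-cubed weights
        return float(np.sum(w * values[nearest]) / np.sum(w))

    # Example: three stations at 12, 45, and 80 km from the township centre;
    # the 80-km station falls outside the 60-km search radius and is excluded.
    print(idw_cubed([12, 45, 80], [5.2, 3.8, 6.0]))

The linear IDW used for temperature, humidity, and solar radiation would follow the same pattern with a 200-km radius and first-power weights.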
We used wind speed data from two sources: Adjusted and Homogenized Canadian
Climate Data (AHCCD) and a hybrid source of data developed by EPA. AHCCD
uses selected station data that incorporates adjustments on historical station data
[1]. These data are updated every few years (as time permits) by ECCC. Wind
speed data are available for 14 Alberta stations. The three nearest AHCCD wind
stations (Red Deer, Calgary, and Lethbridge) were used in this study. “Hybrid” is a
climate dataset generated for Alberta by EPA [8]. It consists of five high-resolution
historical station-based gridded climate datasets (ANUSPLIN, Alberta Township,
PCIC PNWNAmet, CaPA, and NARR). These datasets were used to generate the
hybrid dataset for multiple climate parameters. The REFerence Reliability Evalu-
ation System (REFRES) was used to systematically rank multiple climate datasets
to generate the hybrid climate dataset. The hybrid data were generated from two
sources: the PCIC NorthWest North America meteorological (PNWNAmet) dataset
for 1950–1970 and the ACIS dataset for 1971–2019.
2.4 Methodology
Step 1: For each reservoir, a representative meteorological data time series was gener-
ated for the 1955–2020 period. Since ACIS data provide climate data in a township
grid, average climate data were generated for a reservoir where the waterbody is
distributed over multiple township grids.
Step 2: Gross lake evaporation was estimated for fifteen reservoirs using Morton’s
shallow lake evaporation model. Although the reservoirs included in the study are
considered deep reservoirs, the total gross annual evaporation from a reservoir
estimated by the shallow lake model and the deep lake model is the same. Since deep lakes
store radiative energy in spring/early summer and release it in late summer/fall,
the deep lake evaporation for spring and summer is generally less compared to the
shallow lake evaporation and greater evaporation is observed for months in the fall.
The estimated gross shallow lake evaporation rates would need to be converted to deep
lake evaporation if these evaporative losses are to be considered at a monthly or
seasonal scale, to account for heat storage effects.
Step 3: For each reservoir, the mean and maximum annual net evaporative losses
were estimated using the following relationship:
MeanNetLoss = (E − P) × (APost − APre) / 1,000,000 dam3

MaxNetLoss = (E − P)Max × (APost − APre) / 1,000,000 dam3

where E is the mean annual gross evaporation (mm), P is the mean annual precip-
itation (mm), (E − P)Max is the maximum annual net evaporation (mm), APost is
the post-construction surface area (m2), and APre is the pre-construction surface area
(m2).
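As a minimal numerical sketch of these two relationships (the reservoir numbers below are illustrative placeholders, not values from this study), note that the 1,000,000 divisor converts mm·m2 to dam3, since 1 mm × 1 m2 = 0.001 m3 and 1 dam3 = 1,000 m3:

    def net_loss_dam3(e_minus_p_mm, area_post_m2, area_pre_m2):
        # Net evaporative loss (dam3) from a net evaporation depth (mm) applied to
        # the increase in reservoir surface area (m2).
        # mm x m2 = 1e-3 m3 and 1 dam3 = 1,000 m3, hence the 1,000,000 divisor.
        return e_minus_p_mm * (area_post_m2 - area_pre_m2) / 1_000_000

    # Illustrative values only (not taken from the study):
    mean_e_minus_p = 175.0                        # mean annual E - P, mm
    max_e_minus_p = 320.0                         # maximum annual E - P, mm
    a_pre, a_post = 3_000_000.0, 60_000_000.0     # pre-/post-construction areas, m2

    print(net_loss_dam3(mean_e_minus_p, a_post, a_pre))   # mean net loss, dam3
    print(net_loss_dam3(max_e_minus_p, a_post, a_pre))    # maximum net loss, dam3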
Moreover, statistical analysis on precipitation and gross evaporation over the
1955–2020 period revealed two important years of interest: the historical “driest”
year (the year with lowest precipitation) and the historical “highest evaporation” year
(the year with highest gross evaporation). Annual net evaporative losses for those
specific years were also estimated.
Step 4: All fifteen reservoirs were ranked according to various criteria to identify
the reservoirs with significant evaporative losses. Criteria used to rank the reservoirs
were (i) mean net evaporative losses, (ii) maximum net evaporative losses, (iii) net
evaporative losses of the “driest” year, and (iv) net evaporation losses of the “highest
evaporation” year.
Step 5: The top six reservoirs in terms of evaporative losses were selected for
further analysis. The PFRA-Meyer’s model was then used to calculate evaporation
loss for the selected six reservoirs.
Step 6: A comparative analysis was conducted in order to evaluate the differences
between the two evaporation models (i.e., Morton and PFRA-Meyer).
Step 7: According to Morton [16], the shallow lake/reservoir model does not
consider sub-surface heat storage within the water mass. Therefore, Morton’s deep
lake evaporation model was used to account for heat storage effects on the selected
six reservoirs. This step is important to estimate evaporative losses at seasonal or
monthly scales.
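As a minimal sketch of the ranking carried out in Step 4 (the reservoir names and loss values below are placeholders, not results from this study), each reservoir can be ranked independently under each of the four criteria:

    # Hypothetical net-loss estimates (dam3 per year); values are placeholders only.
    losses = {
        "Reservoir A": {"mean": 9500, "max": 18000, "driest": 15000, "highest_evap": 12000},
        "Reservoir B": {"mean": 2400, "max": 5600, "driest": 5100, "highest_evap": 3900},
        "Reservoir C": {"mean": 4800, "max": 6100, "driest": 6000, "highest_evap": 5200},
    }

    criteria = ["mean", "max", "driest", "highest_evap"]

    # Rank reservoirs (1 = largest loss) independently under each criterion.
    rankings = {}
    for crit in criteria:
        ordered = sorted(losses, key=lambda name: losses[name][crit], reverse=True)
        for rank, name in enumerate(ordered, start=1):
            rankings.setdefault(name, {})[crit] = rank

    for name, ranks in rankings.items():
        print(name, ranks)

Sorting under each criterion separately is what allows a reservoir to rank differently by mean and by maximum loss, as seen in Table 1.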
3 Results
The long-term (1955–2020) gross annual evaporation from the fifteen reservoirs
is presented in Fig. 2. The St. Mary reservoir has the highest gross evaporation,
whereas the Upper Kananaskis reservoir shows the lowest gross evaporative loss.
The whiskers of the box plot show that there is a significant year-to-year variation in
gross evaporation across the reservoirs. In the majority of the reservoirs, the highest
gross evaporation occurred in 1961, whereas the lowest evaporation was observed in
1978.
Fig. 2 Long-term average (1955–2020) gross annual evaporation from the fifteen reservoirs
The net evaporative losses (the difference between gross evaporative loss
and precipitation) were also calculated for all fifteen reservoirs. The net evaporative
losses were estimated based on increased surface area of the reservoirs (difference
in the water surface area between the post-construction and pre-construction period)
and are presented under four categories: historical mean, historical maximum, and
annual losses in 1960 and in 1961 (See Fig. 3). The year 1961 was included as the
year with the highest gross evaporation, whereas 1960 was the year with the lowest
precipitation. The results show that the annual net evaporative losses in reservoirs
follow similar patterns to that of gross evaporation rates. The St. Mary reservoir
revealed the highest annual net evaporative loss, both in terms of mean (11,896
dam3 per year) and maximum (21,581 dam3 per year), whereas the lowest mean
(− 83 dam3 per year) and maximum (659 dam3 per year) net evaporative losses were
observed for the Upper Kananaskis reservoir. The Spray, Gleniffer, Waterton, and
Twin Valley reservoirs showed net maximum evaporative losses in a range of 5000–
6500 dam3 per year. Similarly, the Ghost, Pine Coulee, and Minnewanka reservoirs
have net maximum evaporative losses in the range of 3000–5000 dam3 per year.
The maximum net evaporative losses were less than 3000 dam3 per year for the rest
of the reservoirs. Local climate was found to be the dominant factor in evaporative
losses. As a result, reservoirs in Southern Alberta with a hot and dry environment
showed relatively larger evaporative losses, whereas the reservoirs in the western
region, having wet and cooler conditions, revealed lesser evaporative losses [11].
Fig. 3 Net annual evaporative losses from the fifteen reservoirs: a historical mean, b historical
maximum, c for the year 1960, d for the year 1961
The annual maximum net evaporative losses across the reservoirs also follow a
similar pattern to the net evaporative losses in 1960 (see also Fig. 3). The results also
highlight that the year with the highest gross evaporative loss may not be the year
with the highest net evaporative loss. For instance, 1961 was found to be the year with
historically highest gross evaporative loss, but the majority of stations showed the
highest net evaporative loss in 1960. Furthermore, although the gross evaporative loss
was highest in 1961, the net evaporative loss for that year was much lower than in
other years, which highlights the large role precipitation plays in the calculation
of net evaporative losses in reservoirs.
Summing the net evaporative losses of all reservoirs shows that, on average,
about 34,000 dam3 per year (max ~ 76,000 dam3 per year) of water is lost from
these fifteen reservoirs annually. While these evaporative losses may be a significant
volume in terms of the water balance for individual reservoirs, they are less significant
compared to the average annual apportionable flow volume for the SSRB. The total
net evaporative loss was only about 0.5% (maximum net loss was ~ 1%) of the total
apportionable flow volume of the SSRB near the Alberta–Saskatchewan border (downstream
of the confluence with the Red Deer River). However, in a very dry and hot year with
low flow conditions, the evaporative losses may be higher.
Table 1 Ranking of the fifteen reservoirs based on four categories: historical mean, historical
maximum, and annual losses in 1960 and 1961
Reservoir name      Ranking based on    Ranking based on   Ranking based on   Ranking based on
                    maximum net loss    mean net loss      1960 net loss      1961 net loss
St. Mary            1                   1                  1                  1
Oldman              2                   2                  2                  2
Spray               3                   9                  3                  4
Gleniffer           4                   10                 4                  6
Waterton            5                   6                  5                  8
Twin Valley         6                   3                  6                  3
Ghost               7                   4                  7                  7
Pine Coulee         8                   5                  8                  5
Minnewanka          9                   7                  9                  9
Glenmore            10                  8                  10                 10
Bearspaw            11                  11                 12                 11
Chain               12                  12                 11                 14
Barrier             13                  13                 13                 15
Lower Kananaskis    14                  14                 14                 12
Upper Kananaskis    15                  15                 15                 13
Table 2 Comparison of mean annual gross evaporative losses as estimated by Morton and Meyer’s
models using two types of wind speed data: AHCCD and Hybrid
Rank   Reservoir     Wind speed data: hybrid (1955–2019)                   Wind speed data: AHCCD (1955–2014)
                     Morton   PFRA-Meyer   % Change (PFRA-Meyer            Morton   PFRA-Meyer   % Change (PFRA-Meyer
                     (dam3)   (dam3)       compared to Morton)             (dam3)   (dam3)       compared to Morton)
1      St. Mary      829      949          14                              829      1197         44
2      Oldman        797      997          25                              795      1187         49
3      Spray         742      803          8                               739      1033         40
4      Gleniffer     708      761          8                               705      864          23
5      Waterton      803      1017         27                              803      1189         48
6      Twin Valley   811      895          10                              808      1166         44
The gross evaporation losses from the top six reservoirs as identified in Sect. 3.2
were also computed using the PFRA-Meyer’s model. Table 2 shows a comparison of
mean annual gross evaporative losses from the selected six reservoirs as estimated by
Morton’s and the PFRA-Meyer’s models using two different wind speed data sources:
hybrid and AHCCD. The analysis reveals two distinct results. First, the evaporative
losses computed using the PFRA-Meyer’s model are consistently higher than the
values estimated using Morton’s model across all stations and by using both sources
of wind speed data. For instance, using the hybrid wind speed data, PFRA-Meyer’s
model showed 8–27% higher gross evaporation than Morton’s model, whereas using
the AHCCD wind speed data, gross evaporation losses were 23–49% higher than
Morton’s model. Secondly, while Morton’s model does not use wind speed data,
the PFRA-Meyer’s model displays a very high sensitivity to wind speed data. For
instance, using the PFRA-Meyer’s model, the difference in gross evaporation ranges
from 13 to 30% between the two sets of wind speed data. This comparison demonstrates that
the PFRA-Meyer’s model is highly sensitive to wind speed data, which is due to the
fact that wind speed is linearly related to gross evaporation in the PFRA-Meyer’s
equation.
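As a quick arithmetic check of the sensitivity range quoted above (this snippet only reproduces the Table 2 arithmetic; it is not part of the study's workflow), the percentage differences between the two wind-data-driven PFRA-Meyer estimates can be recomputed directly:

    # PFRA-Meyer gross evaporation estimates from Table 2 (hybrid vs. AHCCD wind data).
    pfra_meyer = {
        "St. Mary": (949, 1197),
        "Oldman": (997, 1187),
        "Spray": (803, 1033),
        "Gleniffer": (761, 864),
        "Waterton": (1017, 1189),
        "Twin Valley": (895, 1166),
    }

    # Percentage increase when switching from hybrid to AHCCD wind speed data.
    for name, (hybrid, ahccd) in pfra_meyer.items():
        pct = 100.0 * (ahccd - hybrid) / hybrid
        print(f"{name}: {pct:.0f}%")   # spans roughly 13-30%, as noted in the text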
The reservoirs selected in this study are considered deep reservoirs/lakes. Therefore,
for each of the reservoirs, the estimated gross shallow lake evaporation needs to be
converted to deep lake evaporation to account for heat storage effects.
Fig. 4 Comparison of long-term averaged (1955–2020) shallow (SLE) and deep (DLE) lake
evaporation for six selected reservoirs
4 Discussion
While there have been significant research advances in both direct measurement
of evaporation and in evaporation models, not all of these advances have translated into
operational practice due to data and resource constraints. In the Canadian prairies,
evaporation models, such as Morton or Meyer, with some modifications have been
used widely in operational estimates of evaporation. For instance, EPA and AGI use
Morton’s model to estimate evaporative loss in Alberta, whereas in Manitoba and
Saskatchewan, the PFRA-Meyer’s model is more commonly used.
Previous studies have also compared different evaporation models for the prairie
landscape. An older study by Prairie Provinces Water Board [18] could not conclu-
sively identify whether Morton’s or Meyer’s model is more appropriate for the calculation of
evaporation loss. However, they noted that annual evaporation losses calculated using
the Meyer model are somewhat larger, with greater variability, than the estimates from
Morton’s model, which aligns with the current study. EPA [3] also compared lake
evaporation values estimated using Morton’s model for three stations in Alberta
with the results obtained from the Granger and Hedstrom and Meyer’s models, and found
that Morton’s model compares favorably with the other methods. Another previous study
by Prairie Provinces Water Board [20] found Morton’s model to be suitable if wind
speed data are missing and the PFRA-Meyer’s model when several meteorological
data (e.g., air temperature, wind speed, and dew point temperature) are available.
However, due to lack of observed data, it is difficult to identify the most appropriate
model or even to claim which model is better suited for which region. Most of the
above-mentioned studies, including this study, can only provide comparative results
based on these methods that only tell a generic story. A recent study by Attema [5]
used observation-based eddy covariance measurements for open water evaporation
in prairie landscapes and compared the results with other estimation-based methods.
The results showed that Morton’s shallow lake estimates of weekly and monthly
evaporative losses were considerably closer in magnitude to the eddy covariance-measured
losses than the other estimation methods were, albeit with some seasonal biases.
Using Morton’s model, the net evaporation was found to range between − 0.12 mm
per day and 1.13 mm per day (with a mean of 0.52 mm per day) across the fifteen
reservoirs. In the Canadian prairies, a prior study that investigated the evaporation
loss from Lake Abraham and Brazeau Reservoir in the North Saskatchewan River
reported a net evaporation of 0.13 and 0.26 mm per day, respectively [19]. Similarly,
in Val Marie reservoir in the Frenchman River basin, Saskatchewan, evaporation was
found to average between 3 and 5 mm per day, whereas for Shellmouth Reservoir
in the Assiniboine River basin, Manitoba, it was comparatively lower (< 1 mm per
day) [5]. In warmer climatic zones across the world, significantly higher evaporative
losses have been reported. For instance, in a small water reservoir in Northern Israel,
Tanny et al. [21] reported an evaporation loss of about 5.48 mm per day. Similarly,
in another small and shallow reservoir of Midmar Dam in KwaZulu-Natal, South
Africa, Mengistu and Savage [14] reported an evaporation loss from 3 to 4 mm
per day. Although individual reservoirs may have diverse evaporative losses, their
cumulative effects can be large. It is estimated that about 7.53 million dam3 /year
water is lost due to evaporation from 3415 reservoirs in Texas, USA [25]. Likewise,
a long-term annual evaporation loss of about 33.33 million dam3 /year was reported
from 721 reservoirs across USA by Zhao and Gao [26].
5 Conclusion
Evaporation from an open water surface is a critical and continuous process in the
water cycle and largely influences the water balance of a reservoir. Evaporative losses
from reservoirs are also recognized in some transboundary agreements as significant
losses to apportionable water. In this study, we calculated the annual gross and net
evaporation losses from fifteen reservoirs of the SSRB for the 1955–2020 period and
identified six reservoirs with significant annual losses (i.e., > 5000 dam3 /year). Our
results show that on average, 34,000 dam3 (max ~ 76,000 dam3 ) of water volume is
lost from these fifteen reservoirs each year. We also compared two different models
and data sources and showed that the PFRA-Meyer’s model is highly sensitive to wind
speed data. Further evaluation of Morton’s shallow and deep lake models highlights
that monthly variabilities in evaporation losses are better captured by the deep lake
model.
The current study only focuses on the estimation of reservoir evaporative
losses based on climatic factors. However, watershed characteristics, surface runoff,
groundwater fluctuations, and precipitation interception by the surrounding environment
might also affect the evaporation process; these factors were beyond the scope of this study.
We recommend a full water balance study for specific reservoirs to estimate the
evaporative losses and validate the modeling results.
Acknowledgements We would like to thank Dr. Anthony Liu of Environment and Climate Change
Canada, Dr. Hyung Eum of Alberta Environment and Protected Areas, and Mr. Ralph Wright of
Alberta Agriculture and Irrigation for providing necessary data and model for this study. Mr. Michael
Seneka from Alberta Environment and Protected Areas also provided some initial reviews in this
study. We are grateful to the Prairie Province Water Board (PPWB) Secretariat and the PPWB
Committee on Hydrology for providing valuable comments and suggestions at the early stages of
this study.
(continued)
Reservoir name   Lat      Lon        River basin      Starting year   Elevation (m)   Surface area (m2),   Surface area (m2),    Live storage (m3)
                                                                                      pre-construction     post-construction
Ghost            51.227   −114.769   Bow River        1932            1393            11,600,000           70,925,000
Bearspaw         51.139   −114.211   Bow River        1955            1175            1,950,000            13,815,000
Glenmore         50.964   −114.071   Elbow River      1933            1060            3,840,000            17,762,000
Gleniffer        52.013   −114.216   Red Deer River   1983            946             17,600,000           278,000,000
References
1. Adjusted and Homogenized Canadian Climate Data (AHCCD) (2021) Retrieved from https://
open.canada.ca/data/en/dataset/9c4ebc00-3ea4-4fe0-8bf2-66cfe1cddd1d
2. Alberta Climate Information Services (2020) Retrieved from https://acis.alberta.ca/acis/tow
nship-data-viewer.jsp
3. Alberta Environment and Protected Areas (2013) Evaporation and Evapotranspiration in
Alberta Edmonton, Alberta, Canada
4. Alberta Township Survey System (2021) Retrieved from https://www.alberta.ca/alberta-tow
nship-survey-system.aspx
5. Attema JB (2020) Using Eddy Covariance and Over-Lake measurements from two prairie
reservoirs to inform future evaporation measurements. University of Saskatchewan
6. Axelson JN, Sauchyn DJ, Barichivich J (2009) New reconstructions of streamflow variability
in the South Saskatchewan River Basin from a network of tree ring chronologies, Alberta,
Canada. Water Resour Res 45(9)
7. Canadian Dam Association (2017) Dams in Canada. Retrieved 02 Aug 2017 from https://
www.cda.ca/EN/Dams_in_Canada/Dams_in_Canada/EN/Dams_In_Canada.aspx?hkey=972
32972-53dc-4deb-87bd-dc023e0a0bfd
8. Eum H, Gupta A (2019) Hybrid climate datasets from a climate data evaluation system and
their impacts on hydrologic simulations for the Athabasca River basin in Canada. Hydrol Earth
Syst Sci 23:5151–5173
9. Friedrich K, Grossman RL, Huntington J, Blanken PD, Lenters J, Holman KD, … Skeie E
(2018) Reservoir evaporation in the Western United States: current science, challenges, and
future needs. Bull Am Meteorol Soc 99(1):167–187
10. Hanasaki N, Kanae S, Oki T (2006) A reservoir operation scheme for global river routing
models. J Hydrol 327(1–2):22–41
11. Islam Z (2013) Evaporation and evapotranspiration: methods and application in Alberta:
Government of Alberta
12. Islam Z, Gan TY (2015) Potential combined hydrologic impacts of climate change and El Niño
Southern oscillation to South Saskatchewan River basin. J Hydrol 523:34–48
13. Liu A, Taylor N, Kiyani A, Mooney C (2014) Evaluation of lake evaporation in the North
Saskatchewan River Basin: PPWB
14. Mengistu M, Savage M (2010) Open water evaporation estimation for a small shallow reservoir
in winter using surface renewal. J Hydrol 380(1–2):27–35
15. Morton FI (1983) Operational estimates of areal evapotranspiration and their significance to
the science and practice of hydrology. J Hydrol 66(61–64):61–76
16. Morton FI (1986) Practical estimates of lake evaporation. J Clim Appl Meteorol 25:371–387
17. Pokhrel Y, Hanasaki N, Koirala S, Cho J, Yeh PJ-F, Kim H, … Oki T (2012) Incorporating
anthropogenic water regulation modules into a land surface model. J Hydrometeorol 13(1):255–
269
18. Prairie Provinces Water Board (2003) Status report on studies and research to address “Guide-
lines for evaporation estimates required by the Prairie Provinces Water Board”. Prairie
Provinces Water Board, Regina, Saskatchewan, Canada
19. Prairie Provinces Water Board (2014) Evaluation of lake evaporation in the North Saskatchewan
River Basin (vol 171). Prairie Provinces Water Board, Regina, Saskatchewan, Canada
20. Prairie Provinces Water Board (2015) Basin review calculation of apportionable flow for
the North Saskatchewan river at the Alberta/Saskatchewan interprovincial boundary. Prairie
Provinces Water Board, Regina, Saskatchewan, Canada
21. Tanny J, Cohen S, Assouline S, Lange F, Grava A, Berger D, …, Parlange M (2008) Evaporation
from a small water reservoir: direct measurements and estimates. J Hydrol 351(1–2):218–229
22. Tanzeeba S, Gan TY (2012) Potential impact of climate change on the water availability of
South Saskatchewan River Basin. Clim Change 112(2):355–386
23. Vorosmarty CJ, McIntyre PB, Gessner MO, Dudgeon D, Prusevich A, Green P,
… Davies PM (2010) Global threats to human water security and river biodiver-
sity. Nature, 467(7315):555–561. http://www.nature.com/nature/journal/v467/n7315/abs/nat
ure09440.html#supplementary-information
24. World Commission on Dams (2000) Dams and development: a new framework for decision-
making: the report of the World Commission on Dams: Earthscan
25. Wurbs RA, Ayala RA (2014) Reservoir evaporation in Texas, USA. J Hydrol 510:1–9
26. Zhao G, Gao H (2019) Estimating reservoir evaporation losses for the United States: fusing
remote sensing and modeling approaches. Remote Sens Environ 226:109–124
Unbox Your Ideas—the Benefits
of Integrated Design
Quin MacKenzie
Abstract Traditional project delivery in North America is often fraught with design
changes, budget and schedule overages, and adversarial relationships between the
design and construction teams. Rather than the traditional approach of distinct plan-
ning, design, and construction teams, each of whom operate in a “black box” passing
a project along from phase to phase, the concept of an Integrated Design Process
(IDP) is to have one integrated team work together to consider risks and opportunities
across project phases.
Integrated design or integrated planning and design is the concept of having a multi-
disciplinary team work together from project onset to collaborate and co-create a
project that is more inclusive and holistic and that considers a variety of risks up front.
Starting at the project onset, this can require more energy, with the intent that this energy
is reduced further along the project lifecycle, particularly by reducing the amount
of rework that would have to occur during key transitions between project phases.
An example of this is the hand off that typically occurs upon design completion from
an engineering team to a contractor, who would then conduct a constructability and
phasing review to determine the feasibility of the design in actual practice and how
best to phase project construction.
In the design process for buildings and infrastructure, the most cost-effective time
to implement sustainable design principles, discover synergies, and accommodate
stakeholder and end-user needs is early on during project planning and concept
development. Leveraging an IDP can require more energy up front, with the intent
that it is reduced further along in the project lifecycle through the reduction of
rework (redesign or retrofit) that would have to occur later. As a result, during early
Q. MacKenzie (B)
Luuceo Consulting Inc, Delta, BC, Canada
e-mail: quin@luuceo.com
Traditional project delivery in North America for civil infrastructure projects has
typically followed a Design-Bid-Build approach. In this approach, planning and
design services are conducted either by an external consultant or in-house by the
project owner, then a contract for construction services is awarded and planning and
design information is handed off to a contractor to build [3].
Although this is one of the most common project delivery methods used in North
America, there are problems associated with the approach, which can often be char-
acterized as siloed, where each project phase is a distinct stage gate, delivered within
a “black box” (see Fig. 1). This scenario, where project team members are separated
by distinct contracts that do not overlap, can lead to a less effective project, more
rework, and budget and schedule overruns. Additionally, these silos can give rise to
an adversarial relationship between the design team and the contractor, especially if
constructability reviews and phasing plans lead to redesign.
The approach to project sustainability and resilience can also be limited under
traditional project delivery. Ideally, sustainability and resilience goals and objectives
are created collaboratively, and in consideration of the entire project lifecycle. This
provides an opportunity for buy-in from all project team members for project goals,
rather than have team members “inherit” the responsibility of goal achievement
further along in the project lifecycle. It also allows for a comprehensive approach to
forming strategies to achieve sustainability and resilience goals and to manage risks
associated with the project. For example, discussing feasibility of water reduction
during construction with the contractor in the room allows them an opportunity
to provide experiential knowledge and recommendations that the project owner or
design engineer may not have.
Fig. 1 Diagram of traditional “black box” approach to infrastructure project delivery. Project
phases and their associated teams are siloed with little overlap and communication between design,
construction and operations, and maintenance team members
Projects often experience failures such as budget overruns, schedule delays, and
personnel turnover due to a lack of communication or soft skills. Working in collabo-
ration can foster camaraderie and trust, thus reducing the risk of failure and providing
opportunities to create a better project.
2.1 Overview
Integrated Design Process (IDP) is an approach to planning and design where repre-
sentatives from the owner, planning, design, and contractor organizations form a
multidisciplinary team. Beginning early in the project, this team works together in a
collaborative fashion, to create a project that is more robust, sustainable, resilient and
considers a comprehensive set of risks up front. Rather than the traditional approach
of distinct planning, design, and construction teams, each of whom operate in a
“black box” passing a project along from phase to phase, through stage gates, the
concept is to have one integrated team work together to consider risks and oppor-
tunities throughout the project lifecycle, with iterative coordination between team
members across all phases, design, construction, and operations and maintenance
(see Fig. 2). This allows risks and opportunities related to the entire project lifecycle
to be considered upfront, reducing the need for rework or significant design changes
in later project phases.
IDP originated in the green buildings industry almost 30 years ago. It was formally
adopted on high-profile sustainable building projects including Natural Resource
Canada’s C-2000 pilot program in the early 1990s [2]. Another example, mentioned
in the Integrated Design Process Guide for the Canada Mortgage and Housing Corpo-
ration, is Mountain Equipment Co-op’s green buildings policy which requires design
teams to implement IDP for each new store to facilitate the achievement of high
sustainable performance targets [4].
Fig. 2 An integrated design process includes iterative collaboration and coordination between
disciplines and teams across the project lifecycle
In many ways, the need for IDP on civil infrastructure projects is more critical.
Often, these projects are more complex, with additional challenges related to their
size, scale, timeframe, the diversity of expertise required to deliver them, and the nature of
stakeholder impacts. In short, the application of IDP on infrastructure projects can be
more challenging, but the benefits of using an integrated design process from project
onset can also be greater.
In the design process for buildings and infrastructure, the most cost-effective time
to implement sustainable design principles, discover synergies, and accommodate
stakeholder and end-user needs is during preliminary and schematic design [4]. This
means that leveraging an IDP can require more energy up front, with the intent that
it is reduced further along in the project lifecycle through the reduction of rework
(redesign or retrofit) that would have to occur later. Once a project has progressed
to the final stages of design and construction and eventually operations it becomes
challenging to make changes and the cost of rework can be prohibitive. As a result,
during early design, there is the largest opportunity to influence project performance,
including sustainability.
An example of this is the hand off that typically occurs upon design completion
from an engineering team to a contractor, who would then conduct a constructability
and phasing review to determine the feasibility of the design in actual practice and
how best to phase it. Conducting this review process as part of a separate project
phase, following design completion, can lead to changes in the design after a project
budget has been created, or changes in design when construction has already begun.
Both situations can have impacts on project cost and schedule. Another example
could be the equipment selection for a project. Engaging the appropriate operations
and maintenance personnel during equipment selection can provide for valuable input
based on experience, training time, functionality, and maintenance costs.
2.2 Methodology
It is important to consider the synergies between IDP and sustainable design frame-
works because one of the key purposes of IDP has always been to enhance sustainable
performance, originally for buildings, and now more broadly applied to civil infras-
tructure. Sustainable design frameworks can be leveraged to set goals, track progress,
and report on metrics, in alignment with IDP. A sustainable design framework is a
set of best practices and approaches to managing different sustainability indicators
on a project.
In North America, one of the most common sustainable design frameworks
for infrastructure projects is the Institute for Sustainable Infrastructure, Envision
framework. Envision has 64 credits, across five categories:
• Quality of Life
• Leadership
• Resource Allocation
• Natural World
• Climate and Resilience
Each credit can be considered an indicator of sustainability performance of a
project. Credit information includes performance metrics associated with different
levels of achievement, as well as best practices for iterative improvement. It is also
intended that project performance across the selected credits be tracked, managed,
and documented throughout the project lifecycle.
Going through each of the 64 credits during early project workshopping can
provide a framework to set project goals, benchmark performance, and identify
gaps and opportunities to improve project sustainability and resilience. Envision
also rewards the collaborative efforts of an IDP, with several credits referencing the
concepts of partnering, integration, and risk sharing.
3 Discussion
3.1 Results
An IDP was employed on a $50 million project, renewing the underground utilities
and above ground hardscape and landscape features at a post-secondary institution
in North America. As part of this, early contractor involvement was leveraged during
project planning and conceptual design. Through multiple interdisciplinary working
sessions, involving the design team, contractors, cost consultant, stakeholders, and
operations and maintenance personnel, the project was able to plan for end-user
needs and constructability and phasing considerations upfront.
Some of the results of leveraging IDP early on this project included:
• Enabled the team to be responsive and adaptable to change and additional scope
inclusions, per feedback from stakeholders and end-users.
• Project was able to stay on track through regular team collaboration, despite the
need to address challenging site conditions.
• Alignment between IDP and reporting on project funding requirements.
• Project completed ahead of schedule.
• Early cost certainty and project completed on budget.
• Project exceeded sustainability and resilience goals with performance verified
through an Envision award.
This project was such a success that the client has elected to implement IDP on
future large-scale projects on campus.
3.2 Challenges
are used. It is recommended that the benefits of using a CCDC 30 contract be explored
further once a significant number of projects have leveraged it. Examining the
sustainability benefits of these projects, compared to projects utilizing other CCDC
contracts, should be included in such a study.
Turnover is another barrier to implementing successful IDP. Even if the entire
project team can be engaged early on, most multiyear projects will face issues with
personnel turnover. It is unrealistic to expect that a project team will stay consistent
from project planning through to construction completion. To avoid “project memory
loss”, documenting meetings and decisions is critical. This provides the context
required to onboard new project team members. Additionally, having ongoing team-
building events throughout a project can facilitate stronger team dynamics, reinforce
project goals and objectives identified in initial partnering, and create an inclusive
environment for new personnel.
Co-location and setting up a project office to facilitate the ability for team members
from different organizations to work from a common physical space is an important
component of IDP. But the COVID-19 pandemic has changed the way we collaborate
on projects. Although in-person collaboration has begun to resume, there is still a
desire for virtual collaboration on many projects. Additionally, there will always
be situations where team members or subject matter experts are in a different
geographical location from the rest of the team and a virtual collaboration method
is required. Having a professional facilitator can be even more valuable in virtual
situations. They will know which platforms and tools to leverage and keep team
members engaged throughout a virtual working session.
4 Conclusion
The early implementation of IDP can benefit civil infrastructure projects. Some key
components of the IDP methodology include partnering and chartering, structured
facilitation, and leveraging design frameworks. Whether the project follows a tradi-
tional design-bid-build or an alternative project delivery approach, integrated design
techniques can be leveraged to involve the contractor, stakeholders, end-users, and
operations and maintenance personnel early on. This contributes constructability and
phasing knowledge, which supports the feasibility of planning and design decisions
and can facilitate better estimation of project costs in advance.
There are challenges associated with the implementation of IDP, regardless of the
project delivery method being leveraged. There are contractual barriers to risk and
reward sharing that can inhibit a comprehensive and integrated team from working
together beginning in early planning and continuing through construction. Addi-
tionally, most infrastructure projects are faced with issues of personnel retention
due to the long timeframes of these projects. Personnel turnover can be reduced
by improving project culture and working environment but cannot be eliminated
completely. Having mechanisms for recording project decision making can facilitate
the onboarding of new project staff, as necessary. Lastly, the COVID-19 pandemic
has impacted the ability of project teams to work together in-person, making co-
location of project team staff challenging, leading to a reliance on virtual platforms
and tools to facilitate integration.
When applied properly and consistently throughout the project lifecycle, IDP can
lead to the following outcomes:
• Project sustainability goals and objectives are met or exceeded (through measur-
able and trackable mechanisms, such as the Envision framework).
• Project stakeholders are satisfied with the project.
• Project is completed on time and on budget.
To facilitate the successful implementation of IDP, the following should be
considered:
• Planning and application of the IDP methodology.
• Selection of project delivery method.
• Identification of roles and responsibilities prior to project onset.
• Approach to ongoing tracking, reporting, and measurement mechanisms for
project goals and objectives.
Taking an integrated approach to civil infrastructure projects can reduce silos and
pull ideas out of a “black box”, allowing all project team members to contribute to
early planning and design. This can lead to more innovative, sustainable, and resilient
projects that are less likely to go over budget and have schedule delays. Additionally,
early integration can facilitate stakeholder buy-in and smoother transitions between
project phases.
References
1 Introduction
Waterborne illnesses are one of the most important health concerns for displaced
people living in refugee and internally displaced persons (IDP) settlements. These
illnesses are among the largest contributors to infectious disease burden in these
settlements and are a leading cause of excess morbidity and mortality [3, 10]. Water
users in these settings typically collect water from public water distribution points,
which they then transport to and store in the dwelling. Preventing contamination of
drinking water during the post-distribution period of transport and household storage
is especially important as contamination of previously-safe drinking water has been
identified as a contributing factor in outbreaks of cholera, hepatitis E, and shigel-
losis in refugee and IDP settlements in Kenya [14, 27], Malawi [30], Sudan [35],
South Sudan [1, 15], and Uganda [19, 29]. Residual chlorine protects water against
recontamination during the post-distribution period, and ensuring that stored water
has at least 0.2 mg/L of residual chlorine is often sufficient to protect against household recontamination
[13, 21, 25, 28]. However, current drinking water quality guidelines for humani-
tarian response only specify this minimum FRC at the point-of-distribution, not at the
point-of-consumption. This fails to account for the loss of residual protection due
to post-distribution FRC decay which can lead to previously-safe water becoming
vulnerable to contamination during the post-distribution period. To ensure that water
remains safe against contamination during the post-distribution period, water system
operators must select an FRC concentration at the point-of-distribution that ensures
that there is still 0.2 mg/L of FRC hours later at the point-of-consumption. To select
an appropriate FRC concentration, water system operators in humanitarian response
settings need models of post-distribution chlorine decay that can accurately fore-
cast the point-of-consumption chlorine concentration based on data collected at the
point-of-distribution.
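As a minimal illustration of this back-calculation (the first-order decay form, rate constant, and storage time below are assumptions made for the example, not values or models from this study), the required point-of-distribution concentration can be obtained by inverting the decay model:

    import math

    def required_pod_frc(target_poc_frc=0.2, storage_hours=12.0, k_per_hour=0.05):
        # Back-calculate the point-of-distribution FRC (mg/L) needed so that an
        # assumed first-order decay, C(t) = C0 * exp(-k * t), still leaves the
        # target FRC at the point-of-consumption after the storage period.
        return target_poc_frc * math.exp(k_per_hour * storage_hours)

    # Example: 12 h of household storage with an assumed decay rate of 0.05 per hour.
    print(round(required_pod_frc(), 2))   # roughly 0.36 mg/L at the tapstand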
While FRC decay in piped water distribution systems is well understood (e.g. [4,
8, 26, 33]), comparatively, few studies have investigated FRC decay in household
stored water. Ali et al. [1, 2] used routine water quality monitoring data from refugee
settlements in South Sudan, Jordan, and Rwanda to fit empirical chlorine decay
models that specified chlorination guidance for different temperature and environ-
mental hygiene conditions. Wu and Dorea [36] investigated seven different empirical
chlorine decay equations using datasets from a variety of water sources, including
one dataset from a humanitarian response setting. One of the main limitations of these
studies is that they used deterministic modelling, which generates point predictions
of the point-of-consumption FRC concentration. Post-distribution FRC decay is a
highly uncertain process, and a single set of conditions at the point-of-distribution
can produce a wide range of FRC concentrations at the point-of-consumption. In
this context, deterministic models which produce point-estimates of FRC decay are
inadequate as they fail to quantify the uncertainty in post-distribution FRC decay.
The uncertainty that characterizes post-distribution FRC decay is apparent when
considering water stored in the household is essentially an open system (as opposed
to water in a pipe which is essentially a closed system). In this context, FRC decay
can be influenced by a range of quantifiable and unquantifiable factors, ranging from
reactant concentrations in the treated water matrix, to chlorine demand introduced
through contamination in the container (either settled solids or biofilms), or to chlo-
rine demand exerted by contamination introduced through user interactions during
the storage period. These factors contribute to the substantial uncertainty in the actual
process of post-distribution FRC decay in refugee and IDP settlement settings. Based
on the taxonomy discussed by Thiboult et al. [32], this uncertainty can be consid-
ered in terms of both the structural uncertainty and parameter uncertainty. Structural
uncertainty, uncertainty in the selection of an appropriate FRC decay model, arises due to the limited previous research into the appropriateness of FRC decay models for characterizing the special case of post-distribution decay. Parameter uncertainty,
uncertainty in the selection of model parameter values, for example, the rate of FRC
decay, arises from the variability of factors influencing FRC decay at a given site.
This study seeks to address these two sources of uncertainty through the use of
process-based ensemble modelling.
This study developed an ensemble forecasting system to quantify uncertainty in
post-distribution FRC decay. Ensemble forecasting systems group predictions from
many individual models (ensemble members) into a probability distribution and use
this distribution as the basis for the forecast [17]. Ensemble forecasting systems
are commonly used in many high-uncertainty tasks such as weather forecasting and
hydrology [6]; however, they have recently also been adopted for forecasting post-
distribution water quality. De Santi et al. [11] developed an ensemble forecasting
system using artificial neural networks, a type of machine learning model, to forecast
post-distribution FRC. This probabilistic forecasting system was able to capture some
of the uncertainty in the post-distribution FRC decay and produced accurate post-
distribution FRC targets. However, there is, as of yet, no ensemble forecasting system
to forecast post-distribution FRC decay using process-based models. Considering the effectiveness of process-based chlorine decay models, this is a clear gap in the
existing literature and a potential solution to providing improved FRC guidance in
humanitarian response.
This study had two main objectives. The first objective was to minimize structural
uncertainty by identifying the FRC decay model that best fits routine water quality
monitoring data from humanitarian response settings. The second objective was to
compare approaches for forming ensembles using FRC decay models based on their
ability to quantify the parameter uncertainty in the post-distribution FRC decay.
Achieving these objectives will improve the quality of FRC decay forecasts in refugee
and IDP settlements, producing better chlorination guidance, and ultimately helping
provide safer drinking water in humanitarian response settings.
2 Methods
The data used for this study was collected between June and December, 2019
from Camp 1 of the Kutupalong-Balukhali Expansion Site refugee settlement in
Bangladesh. This site hosts over 83,000 Rohingya refugees from Myanmar who fled
persecution in that country beginning in 2017. The drinking water system for Camp
1 delivers water to 190 tapstands from 10 water distribution networks drawing water
from 14 boreholes with inline chlorination. The water treatment and distribution
systems were operated by Médecins Sans Frontières (MSF) and by the Bangladesh
Rural Advancement Committee (BRAC).
Data collected for this study included paired samples of FRC measured at the
point-of-distribution (tapstand) immediately prior to collection by a water user
and then again from the same unit of water at the point-of-consumption after a
follow-up period ranging from 4 to 19 h. Both the point-of-distribution and point-of-
consumption measurements were timestamped, and these timestamps were used to calcu-
late the elapsed time between distribution and consumption. To develop and evaluate
the ensembles, 90% of the data was randomly sampled from the overall dataset
to form a calibration dataset, and the remaining 10% of the data was used as an
independent testing dataset. This testing dataset was used to evaluate model performance on previously unseen data and to assess the models' ability to generalize to new observations.
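As a minimal sketch of this 90/10 split (the file and column names below are hypothetical, not taken from the study), the calibration and testing sets could be drawn as follows:

```python
import numpy as np
import pandas as pd

def split_calibration_test(df, test_fraction=0.10, seed=42):
    """Randomly hold out a fraction of the paired FRC samples as an independent test set."""
    rng = np.random.default_rng(seed)
    test_idx = rng.choice(df.index, size=int(len(df) * test_fraction), replace=False)
    return df.drop(index=test_idx), df.loc[test_idx]

# Hypothetical usage; columns: frc_dist, frc_consume, elapsed_h
# data = pd.read_csv("paired_frc_samples.csv")
# calibration, testing = split_calibration_test(data)
```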
2.2 Ethics
Data collection for this study received approval from Human Participants Review
Committee, Office of Research Ethics at York University (Certificate #: 2019–186).
This study also received approval from the MSF Ethical Review Board (ID #: 1932),
and the Centre for Injury Prevention and Research Bangladesh (Memo #: CIPRB/
Admin/2019/168). All water quality samples were collected only when informed
consent was provided by the water user.
The first objective of this study was to minimize structural uncertainty in post-
distribution chlorine decay modelling by comparing different chlorine decay models
to post-distribution chlorine decay data from humanitarian response settings. Due to
limited research into the mechanics of post-distribution chlorine decay, this decay was
modelled using empirical reaction kinetic models. These models fit decay parameters
based on observed changes in FRC concentration as opposed to attempting to quan-
tify individual underlying processes contributing to decay. Thus, uncertainty about
the complex and varying processes influencing post-distribution chlorine decay can
be addressed directly in the bulk decay parameters.
Several empirical reaction kinetic models exist with varying degrees of
complexity. Feben and Taras [12] proposed an empirical model of chlorine demand
(D), shown in Eq. 1, where k and n are adjustable decay parameters specific to the
chlorinated water.
D = kt^n (1)
The Feben and Taras model of chlorine demand can then be written in terms of
the initial chlorine dose, C0, to model the chlorine residual (C) as a function of time:

C = C0 − kt^n (2)
The main limitation of the Feben and Taras model is that it does not easily translate
to a rate law that accounts for chlorine dose in the decay rate [16]. In response to this, Haas and Karra [16] tested five potential decay equations, described in Table 1. This study
included all five of the models tested by Haas and Karra [16] as well as the empirical
relationship determined previously by Feben and Taras [12] to evaluate which model
best fits observed FRC decay data from the post-distribution period in humanitarian
response settings.
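To make the candidate models concrete, the sketch below writes out a few of them in their standard textbook forms (the Feben and Taras model from Eq. 2, plus first-order, limited first-order, and parallel first-order decay); the exact parameterization used in the study's Table 1 is not reproduced here, so treat these as assumed forms.

```python
import numpy as np

def feben_taras(t, c0, k, n):
    """Feben and Taras empirical model (Eq. 2): C = C0 - k * t**n."""
    return c0 - k * np.power(t, n)

def first_order(t, c0, k):
    """Single first-order decay: C = C0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

def limited_first_order(t, c0, k, c_star):
    """First-order decay towards a stable, non-decaying residual C*."""
    return c_star + (c0 - c_star) * np.exp(-k * t)

def parallel_first_order(t, c0, k1, k2, w):
    """Two parallel first-order reactions; w is the fast-decaying fraction of FRC."""
    return c0 * (w * np.exp(-k1 * t) + (1.0 - w) * np.exp(-k2 * t))
```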
Ensembles can be formed either by combining the predictions of different models or by varying the parameters of the same model [17]. Since this study sought
specifically to address parameter uncertainty, ensembles were formed using the latter
approach by varying the decay parameters of the ensemble members (see Table 1 for
the list of parameters included in each model).
The model parameters for the empirical reaction kinetic models were estimated
during model calibration by using numerical solvers to minimize a loss function.
Two approaches were investigated for forming ensembles through model calibra-
tion, described in the sections below. The same numerical solvers were used for
both approaches. The first-order and power decay models were calibrated using the "Powell" solver from the SciPy Python package [34], a derivative-free conjugate direction method that performs successive one-dimensional minimizations along updated search directions to find the overall function minimum [23]. These two models were optimized as unbounded optimization problems. Due to physical constraints on the C* and w model parameters, the remaining models
were calibrated using the Truncated Newton Conjugate (TNC) solver from the SciPy
Python package [34]. This solver uses a two-loop system that integrates the Newton
method with conjugate gradients; however, instead of solving to completion, the
Newton method is only applied for a fixed number of steps (hence, it is truncated),
and the steepest gradient used during that iteration is used to update the search direc-
tion until the optimal solution is identified [22]. For the limited FRC decay models
(limited first order, limited power decay), the C * parameter, which represents the
stable component of FRC that does not decay, was bound to the higher value of
either the minimum observed point-of-consumption FRC in the calibration dataset
or 0.03 mg/L, which is the measurement error of the Palintest PTH 7091 compact
chlorometer with Palintest DPD1 reagents (Palintest Ltd., Tyne & Wear, UK) which
was used to collect the FRC measurements. For the parallel first-order FRC decay
model, the w term was bound between 0 and 1 as this represents the proportion of
the FRC which follows the fast-decay reaction (as opposed to the slow) and as such
cannot exceed 1.
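As an illustration of this calibration step, a minimal sketch using SciPy's TNC solver for the parallel first-order model is shown below, with the physical bound 0 ≤ w ≤ 1 applied; the data values, starting guesses, and loss are illustrative assumptions rather than the study's actual code.

```python
import numpy as np
from scipy.optimize import minimize

def parallel_first_order(t, c0, k1, k2, w):
    return c0 * (w * np.exp(-k1 * t) + (1.0 - w) * np.exp(-k2 * t))

def sse_loss(params, t, c0, c_obs):
    k1, k2, w = params
    return np.sum((c_obs - parallel_first_order(t, c0, k1, k2, w)) ** 2)

# Illustrative paired observations: elapsed time (h), FRC at distribution and consumption (mg/L)
t = np.array([4.0, 8.0, 12.0, 19.0])
c0 = np.array([1.0, 1.0, 0.8, 0.9])
c_obs = np.array([0.55, 0.35, 0.20, 0.10])

# TNC handles the bound on w; the decay constants are simply kept non-negative here.
result = minimize(sse_loss, x0=[0.5, 0.05, 0.5], args=(t, c0, c_obs),
                  method="TNC", bounds=[(0, None), (0, None), (0.0, 1.0)])
k1_fit, k2_fit, w_fit = result.x
```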
The first ensemble formation approach resampled data from the calibration set to
obtain a unique training dataset for each model. Thus, each ensemble member is
trained using a unique training dataset, resulting in each model having a unique set
of decay parameters. Ideally, this results in the ensemble as a whole containing a
representative sample of model parameters to produce a forecast that matches the
distribution of the underlying dataset. In this first approach, model parameters were
optimized to minimize the sum of squared error (SSE—Eq. 8). The ensembles using
this first approach contained 100 unique members.
SSE = Σ (ytrue − ypred)^2 (8)
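A sketch of the first formation approach, assuming each of the 100 members is calibrated on a bootstrap resample of the calibration data with the SSE loss of Eq. 8 (the simple first-order model is used here only to keep the example short):

```python
import numpy as np
from scipy.optimize import minimize

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

def sse(params, t, c0, c_obs):
    return np.sum((c_obs - first_order(t, c0, params[0])) ** 2)

def resampling_ensemble(t, c0, c_obs, n_members=100, seed=0):
    """One decay constant per member, each fit to a bootstrap resample (Eq. 8 loss)."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(t), size=len(t))   # resample with replacement
        res = minimize(sse, x0=[0.05], args=(t[idx], c0[idx], c_obs[idx]), method="Powell")
        members.append(res.x[0])
    return np.array(members)

# The ensemble forecast for a new tapstand sample is then the spread of
# first_order(elapsed_time, frc_at_distribution, members) across all members.
```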
The second ensemble formation approach was based on the principle of quantile
regression (QR), an extension of ordinary least-squares regression (OLSR). In OLSR,
regression coefficients are selected to minimize squared error, and thus, the regression
converges to the mean of the distribution of observations. In QR, the regression
coefficients are selected to minimize a quantile error (QE) function (Eq. 9), and thus
instead of regressing to the mean, a QR model regresses to a selected quantile (α) of
the distribution of observations.
QE = (1/N) Σ_{i=1}^{N} max{ (α/100)·(ytrue − ypred) if (ytrue − ypred) > 0; (α/100 − 1)·(ytrue − ypred) if (ytrue − ypred) < 0 } (9)
To incorporate the principle of QR into this study, the QE function was used as
the loss function to optimize the FRC decay model parameters. 99 individual FRC
decay models were trained, with one model for each quantile from 1 to 99. The
first ensemble member was optimized using Eq. 9 with α = 1, and the second was
optimized with α = 2 and so on up to 99. Thus, parameter uncertainty is explicitly
addressed by optimizing each ensemble member to a specific quantile of the observed
distribution. Since this approach addresses parameter uncertainty through the loss
function, no randomization of the training set was included, and each ensemble
member was calibrated using the full training dataset.
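A sketch of the quantile-error loss of Eq. 9 and the corresponding 99-member ensemble, again assuming the simple first-order model for brevity:

```python
import numpy as np
from scipy.optimize import minimize

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

def quantile_error(params, t, c0, c_obs, alpha):
    """Quantile (pinball) loss of Eq. 9 for quantile alpha, expressed in percent."""
    resid = c_obs - first_order(t, c0, params[0])
    q = alpha / 100.0
    return np.mean(np.where(resid > 0, q * resid, (q - 1.0) * resid))

def quantile_ensemble(t, c0, c_obs):
    """One member per quantile from 1 to 99, each trained on the full calibration set."""
    ks = []
    for alpha in range(1, 100):
        res = minimize(quantile_error, x0=[0.05], args=(t, c0, c_obs, alpha), method="Powell")
        ks.append(res.x[0])
    return np.array(ks)
```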
Unlike deterministic models where performance can be evaluated based on the differ-
ence between a set of point predictions and the observation corresponding with each
prediction (e.g. mean squared error), the probabilistic performance of ensemble fore-
casting systems is typically evaluated based on two qualities of the resulting forecast:
reliability (or calibration) and sharpness (or resolution) [5]. The reliability of the fore-
cast measures the similarity between forecast probability distributions and the under-
lying distribution of the observations, with an ideal ensemble forecasting the same
distribution as the underlying data. Sharpness measures the spread of the forecast
around each observation, with an ideal forecast having the narrowest possible spread
around each observation. A good forecast is one that maximizes the sharpness subject
to reliability–meaning that the ideal forecast will be as sharp as possible without sacri-
ficing reliability. Four ensemble verification metrics were used to evaluate these two
criteria: percent capture, confidence interval reliability diagram, the rank histogram,
and the continuous ranked probability score. Throughout the following section, O
refers to the full set of observed point-of-consumption FRC concentrations, and o_i refers to the i-th observation, where there are I total observations. F refers to the full set of point-of-consumption FRC concentrations forecasted by the ensembles, where f_i^m is the prediction by the m-th member of the ensemble for the i-th observation, and F_i refers to the ensemble forecast for the i-th observation. Thus, for each observation there is a corresponding probabilistic forecast; together, these are referred to as a forecast-observation pair. For the following metrics, it is assumed that the predictions of the ensemble members are sorted from low to high for each observation such that f_i^m ≤ f_i^(m+1) from m = 0 to m = M.
Percent capture (PC) measures the percentage of observations where the observed
point-of-consumption FRC concentration was within the limits of the ensembles’
forecast. While this does not fully measure reliability, it does provide an indication
of whether the ensemble forecasts are at least as dispersed as the observations. PC is
a positively oriented score (higher score indicates better performance), with an upper
limit of 100% and a lower limit of 0%. To calculate PC, observation o_i is considered captured if f_i^0 ≤ o_i ≤ f_i^M [20]. When evaluating the ensemble performance, both the PC for the overall dataset and the percentage of captured observations with point-of-consumption FRC below 0.2 mg/L (PC<0.2) were considered. The latter
provides an indication of how well the ensemble forecasting system can capture
high-risk observations.
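A minimal sketch of the PC calculation, where `forecasts` is an I × M array of member predictions and `obs` holds the I observed point-of-consumption FRC values (array names assumed):

```python
import numpy as np

def percent_capture(forecasts, obs):
    """Share of observations lying within the ensemble envelope [f_i^0, f_i^M]."""
    captured = (obs >= forecasts.min(axis=1)) & (obs <= forecasts.max(axis=1))
    return 100.0 * captured.mean()

def percent_capture_below(forecasts, obs, threshold=0.2):
    """Percent capture restricted to high-risk observations (FRC below 0.2 mg/L)."""
    mask = obs < threshold
    return percent_capture(forecasts[mask], obs[mask]) if mask.any() else np.nan
```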
The confidence interval (CI) reliability diagram is a visual tool for assessing forecast
reliability based on an adaptation of the reliability diagram proposed by Boucher
et al. [5]. The CI reliability diagram compares the frequency of observed values with
the corresponding CI of the ensemble, where the ensemble CIs are derived from the
sorted ensemble forecast (for example, the ensemble 90% CI would include all of
the forecasts between f^(0.05M) and f^(0.95M)). This was extended further in the present
study by plotting the PC of each CI within the ensemble against the CI level, from the
10% CI to the 100% CI levels at 10% intervals. Based on this visual tool, De Santi
et al. [11] developed a numerical score (CI_score) for the CI reliability diagram which
calculated the squared distance between the proportion of values captured within
each CI and the ideal capture for that CI, which is equal to the 1:1 line on the CI
reliability diagram. This distance was calculated for each CI threshold, k, from 10 to
100% in 10% increments as shown in Eq. 10. This score is negatively oriented with
a minimum value of 0.
CI Reliability Score = Σ_{k=0.1}^{1} (k − Percent Capture in CI_k)^2 (10)
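A sketch of the CI reliability score of Eq. 10, computing the capture fraction inside each central confidence interval of the ensemble (an illustrative implementation, not the published code):

```python
import numpy as np

def ci_reliability_score(forecasts, obs):
    """Squared distance between the capture within each central CI and the 1:1 line (Eq. 10)."""
    score = 0.0
    for ci in np.linspace(0.1, 1.0, 10):                  # 10%, 20%, ..., 100% CIs
        lower = np.quantile(forecasts, 0.5 - ci / 2.0, axis=1)
        upper = np.quantile(forecasts, 0.5 + ci / 2.0, axis=1)
        captured = np.mean((obs >= lower) & (obs <= upper))
        score += (ci - captured) ** 2
    return score
```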
The rank histogram (RH) is another visual tool used to assess the reliability of
ensemble forecasts. To construct the RH, for each forecast-observation pair, the
observation o_i is added to the sorted vector of forecast values F_i, with the new vector having M + 1 members. The observed value is assigned a rank based on its position in
the vector. This is then repeated for each forecast-observation pair, and the histogram
of these ranks forms the RH. If the forecast and observed probabilities are the same,
then any observation is equally likely to occur in any of the M + 1 ranks, which would
result in a flat rank histogram. If the forecasted and observed probability distributions
are different, then the rank histogram will not be flat and may be, for example, U-shaped (indicating an underdispersed forecast) or dome-shaped (indicating an overdispersed forecast).
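A sketch of the rank histogram construction (ties between the observation and member predictions are ignored for brevity):

```python
import numpy as np

def rank_histogram(forecasts, obs):
    """Counts of observation ranks within each ensemble (M members give M + 1 possible ranks)."""
    n_members = forecasts.shape[1]
    ranks = np.sum(forecasts < obs[:, None], axis=1)   # members strictly below each observation
    return np.bincount(ranks, minlength=n_members + 1)
```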
The continuously ranked probability score (CRPS) measures the area between the
forecast cumulative distribution function (cdf) and the observed cdf for each forecast-
observation pairing. The CRPS simultaneously measures reliability and sharpness
as well as uncertainty [18]. For a given forecast-observation pair, the cdf of the
forecast is calculated from the ensemble forecast distribution. The observed cdf is
represented with the Heaviside function H{x ≥ x_a}, a stepwise function which is
0 for all concentrations of point-of-consumption FRC below the observed FRC and
1 for all concentrations of point-of-consumption FRC above the observed concen-
tration. To calculate the average CRPS across all observations, Hersbach [18] derived
a calculation of CRPS for ensemble forecasts that treats the forecast cdf as a step-
wise continuous function with N = M + 1 bins where each bin is bounded at two
ensemble forecasts, and the value in each bin is the cumulative probability. CRPS is
calculated using g_n, the average width of bin n (the average difference in FRC concentration between forecast values m and m + 1), and o_n, the likelihood of the observed
value being in bin n. Using these values, the CRPS for an ensemble can be calculated
as
CRPS = Σ_{n=1}^{N} g_n [(1 − o_n) p_n^2 + o_n (1 − p_n)^2] (11)
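As an illustrative sketch, the same quantity can be computed without the bin-by-bin bookkeeping by using the standard energy-form identity for the CRPS of an empirical cdf, which is mathematically equivalent to integrating the squared difference between the forecast cdf and the observation's Heaviside function:

```python
import numpy as np

def crps_ensemble(forecast, obs):
    """CRPS of one ensemble forecast (1-D array of member predictions) against one observation."""
    forecast = np.asarray(forecast, dtype=float)
    term1 = np.mean(np.abs(forecast - obs))
    term2 = 0.5 * np.mean(np.abs(forecast[:, None] - forecast[None, :]))
    return term1 - term2

def mean_crps(forecasts, obs):
    """Average CRPS over all forecast-observation pairs."""
    return float(np.mean([crps_ensemble(f, o) for f, o in zip(forecasts, obs)]))
```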
Figure 1 shows the training and testing values of the loss function used for the two
ensemble formation approaches described in Sect. 2.4 for each of the chlorine decay
equations used. These models show that for both ensemble formation approaches,
the Feben and Taras model produced the highest training and testing error. This
largely agrees with the findings of Wu and Dorea [36] who also found that the Feben
and Taras model performed worse than other chlorine decay models on a variety of
water sources, including one household water treatment dataset from a humanitarian response setting. A possible explanation for this might be that the decay term in this model is in terms of time only and does not include the chlorine concentration, whereas the remaining models include the initial chlorine concentration in the calculation of the demand.

Fig. 1 Training and testing error for both ensemble formation approaches. a Average SSE for the ensemble forecasting systems calibrated using resampling (ensemble formation approach 1); b Sum of QE across quantiles for ensemble forecasting systems calibrated using quantile regression (ensemble formation approach 2). The highest SSE and QE were produced in both training and testing by the Feben and Taras model, while the lowest SSE and QE were produced by the parallel first-order model
Figure 1 also shows that the ensemble forecasting systems for the remaining
FRC decay models perform similarly, with relatively small changes in performance
between the different models. Interestingly, in the single reaction models, the differ-
ence in performance between first-order and power decay ensemble forecasting
systems is very small, indicating that adding the reaction order as an additional
degree of freedom in the power decay model did not substantially improve the fit
of the model to the data. This is similar to the findings of Powell et al. [24] who
found that adding additional decay terms beyond the first-order decay equation only
produces marginal improvements. Additionally, while there was not a substantial
change in performance, the parallel first-order model produced the lowest value of
the loss function (SSE in Fig. 1a, QE in Fig. 1b) in both training and testing, which
is consistent with the original findings of Haas and Karra [16]. What these findings
show is that, other than the Feben and Taras model, all of the considered models are
able to accurately fit the observed data, with the parallel first-order model producing
the best performance. Note that there is a much larger difference between the training
and testing SSE in Fig. 1a as opposed to the QE in Fig. 1b. This is because SSE takes
the sum of the error over all observations, and there are more training observations
than testing observations, whereas the QE is taken as a sum over each ensemble
member, representing each quantile, and in both training and testing the ensembles
had the same number of ensemble members.
Fig. 2 Parallel first-order ensemble model forecasts. a Ensemble calibrated with resampling with
least-squares regression; b ensemble calibrated with quantile regression. The use of SSE in the
calibration of the ensemble members yields underdispersed forecasts, whereas quantile regression
yields forecasts that match the underlying distribution of the data
4 Conclusions
This study presented the first use of ensemble forecasting systems to predict point-
of-consumption FRC in refugee and IDP settlements using process-based models of
post-distribution FRC decay. Ensemble forecasting systems were formed using six
different FRC decay models. Many of the resulting ensemble forecasting systems
were able to capture the full range of observed decay rates, though the best perfor-
mance was obtained using the parallel first-order decay model. Forming the ensemble
forecasting systems by calibrating the ensemble members to different quantiles produced forecasts that better matched the underlying distribution of the observed point-of-consumption FRC.
Acknowledgements We would like to gratefully acknowledge our field data collection staff in
Bangladesh: Mohammed Areshart, Mohammed Hares, Mohammed Osman, Omar Faruk, Abdu
Rohman, Hashim Ullah, Mohammed Yahaya, Mohammed Sayed, and Azaz Ullah. We would also
like to extend our gratitude to the MSF mission in Bangladesh and headquarters staff in Amsterdam
for their support in implementing this study. We are also grateful to Mike Spendlove, James E.
Brown, Mohamed Moselhy, Ngqabutho Zondo, and Nayeem Munier for their contributions on
developing the SWOT Web tool. We would also like to express our gratitude to Syed Saad Ali for
his work on the original process-based analytics for the SWOT. Funding: Field data collection and
reporting were supported by the Achmea Foundation, and further research funding was provided by
the Natural Science and Engineering Research Council and York University. The Safe Water Opti-
mization Tool Project is supported by Creating Hope in Conflict: A Humanitarian Grand Challenge;
a partnership of USAID, The UK Government, the Ministry of Foreign Affairs of the Netherlands,
and Global Affairs Canada, with support from Grand Challenges Canada and ELRHA. The views,
opinions, and policies expressed do not necessarily reflect the views, opinions, and policies of
funding partners.
References
1. Ali SI, Ali SS, Fesselet J-F (2015) Effectiveness of emergency water treatment practices in
refugee camps in South Sudan. Bull World Health Organ 93(8):550–558
2. Ali SI, Ali SS, Fesselet J (2021) Evidence-based chlorination targets for household water safety
in humanitarian settings: recommendations from a multi-site study in refugee camps in South
Sudan, Jordan, and Rwanda. Water Res 189:116642
3. Altare C, Kahi V, Ngwa M, Goldsmith A, Hering H, Burton A, Spiegel P (2019) Infectious
disease epidemics in refugee camps: a retrospective analysis of UNHCR data (2009–2017). J
Global Health Rep 3:e2019064
4. Biswas P, Lu C, Clark RM (1993) A model for chlorine concentration decay in pipes. Water
Res 27(12):1715–1724
5. Boucher MA, Perreault L, Anctil F (2009) Tools for the assessment of hydrological ensemble
forecasts obtained by neural networks. J Hydroinf 11(3–4):297–307
6. Boucher MA, Perreault L, Anctil F, Favre A-C (2015) Exploratory analysis of statistical post-
processing methods for hydrological ensemble forecasts. Hydrol Process 29:1141–1155
7. Bröcker J (2012) Evaluating raw ensembles with the continuous ranked probability score. Q J
R Meteorol Soc 138(667):1611–1617
8. Clark RM, Sivaganesan M (2002) Predicting chlorine residuals in drinking water: second order
model. J Water Resour Plan Manag 128(2):152–161
9. Candille G, Talagrand O (2005) Evaluation of probabilistic prediction systems for a scalar
variable. Q J R Meteorol Soc 131(609):2131–2150
10. Cronin AA, Shrestha D, Cornier N, Abdalla F, Ezard N, Aramburu C (2008) A review of water
and sanitation provision in refugee camps in association with selected health and nutrition
indicators—the need for integrated service provision. J Water Health 6(1):1–13
11. De Santi M, Khan UT, Arnold M, Fesselet J-F, Ali SI (2021) Forecasting point-of-consumption
chlorine residual in refugee settlements using ensembles of artificial neural networks. npj Clean
Water, 4:35
12. Feben D, Taras MJ (1950) Chlorine demand constants of Detroit's water supply. J Am Water
Works Assoc 42(5):453–461
13. Girones R, Carratalà A, Calgua B, Calvo M, Rodriguez-Manzano J, Emerson S (2014) Chlorine
inactivation of hepatitis E virus and human adenovirus 2 in water. J Water Health 12(3):436–442
14. Golicha Q, Shetty S, Nasiblov O, Hussein A, Wainaina E, Obonyo M, Macharia D, Musyoka
RN, Abdille H, Ope M, Joseph R, Kabugi W, Kiogora J, Said M, Boru W, Galgalo T, Lowther
SA, Juma B, Mugoh R, … Burton JW (2018) Cholera outbreak in Dadaab Refugee camp,
Kenya—November 2015–June 2016. Morb Mortal Wkly Rep 67(34):958–961
15. Guerrero-Latorre L, Hundesa A, Girones R (2016) Transmission sources of waterborne viruses
in South Sudan refugee camps. Clean: Soil, Air, Water 44(7):775–780
16. Haas CN, Karra SB (1984) Kinetics of wastewater chlorine demand exertion. J Water Pollut
Control Fed 56(2):170–173
17. Hamill TM (2001) Interpretation of rank histograms for verifying ensemble forecasts. Mon
Weather Rev 129(3):550–560
18. Hersbach H (2000) Decomposition of the continuous ranked probability score for ensemble
prediction systems. Weather Forecast 15(5):559–570
19. Howard CM, Handzel T, Hill VR, Grytdal SP, Blanton C, Kamili S, Drobeniuc J, Hu D, Teshale
E (2010) Novel risk factors associated with Hepatitis E virus infection in a large outbreak in
Northern Uganda: results from a case-control study and environmental analysis. Am J Trop
Med Hyg 83(5):1170–1173
20. Khan UT, Valeo C (2017) Comparing a Bayesian and fuzzy number approach to uncertainty
quantification in short-term dissolved oxygen prediction. J Environ Inf 30(1):1–16
21. Lantagne DS (2008) Sodium hypochlorite dosage for household and emergency water
treatment. J Am Water Works Assoc 100(8):106–114
22. Nash SG (1984) Newton-Type minimization via the Lanczos method. SIAM J Numer Anal
21(4):770–788
23. Powell MJD (1964) An efficient method for finding the minimum of a function of several
variables without calculating derivatives. Comput J 7(2):155–162
24. Powell JC, West JR, Hallam NB, Forster CF, Simms J (2000) Performance of various kinetic
models for chlorine decay. J Water Resour Plan Manag 126(1):13–20
25. Rashid M-U, George CM, Monira S, Mahmud T, Rahman Z, Mustafiz M, Parvin T, Bhuyian
SI, Zohura F, Begum F, Biswas SK, Akhter S, Zhang X, Sack D, Sack RB, Alam M (2016)
Chlorination of household drinking water among cholera patients’ households to prevent trans-
mission of toxigenic vibrio cholerae in Dhaka, Bangladesh: CHoBI7 Trial. Am J Trop Med
Hyg 95(6):1299–1304
26. Rossman LA, Clark RM, Grayman WM (1994) Modeling chlorine residuals in drinking-water
distribution systems. J Environ Eng 120(4):803–820
27. Shultz A, Omollo JO, Burke H, Qassim M, Ochieng JB, Weinberg M, Feikin DR, Breiman RF
(2009) Cholera outbreak in Kenyan refugee camp: risk factors for illness and importance of
sanitation. Am J Trop Med Hyg 80(4):640–645
28. Sikder M, String G, Kamal Y, Farrington M, Rahman AS, Lantagne D (2020) Effectiveness
of water chlorination programs along the emergency-transition-post-emergency continuum:
evaluations of bucket, in-line, and piped water chlorination programs in Cox’s Bazar. Water
Res 170(115854):1–10
29. Steele A, Clarke B, Watkins O (2008) Impact of jerry can disinfection in a camp environment—
experiences in an IDP camp in Northern Uganda. J Water Health 6(4):559–564
30. Swerdlow DL, Malenga G, Begkoyian G, Nyangulu D, Toole M, Waldman RJ, Puhr DND,
Tauxe RV (1997) Epidemic cholera among refugees in Malawi, Africa: treatment and
transmission. Epidemiol Infect 118(3):207–214
31. Talagrand O, Vautard R, Strauss B (1997) Evaluation of probabilistic prediction systems.
ECMWF Workshop on Predictability. ECMWF, Shinfield Park, Reading, England, pp 1–25
32. Thiboult A, Anctil F, Boucher M-A (2016) Accounting for three sources of uncertainty in
ensemble hydrological forecasting. Hydrol Earth Syst Sci 20:1809–1825
33. Vasconcelos JJ, Rossman LA, Grayman WM, Boulos PF, Clark RM (1997) Kinetics of chlorine
decay. J Am Water Works Assoc 89(7):54–65
34. Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E,
Peterson P, Weckesser W, Bright J, van der Walt SJ, Brett M, Wilson J, Millman KJ, Mayorov
N, Nelson ARJ, Jones E, Kern R, Larson E, … van Mulbregt P, SciPy 1.0 Contributors (2020)
SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods 17:261–272
35. Walden VM, Lamond EA, Field SA (2005) Container contamination as a possible source of a
diarrhoea outbreak in Abou Shouk camp, Darfur province, Sudan. Disasters 29(3):213–221
36. Wu H, Dorea C (2021) Evaluation and application of chlorine decay models for humanitarian
emergency water supply contexts. Environ Technol 1–10
Sustainable Management of CO2
Generated by a Wastewater Treatment
Plant
1 Introduction
2 Methodology
To achieve the main objective of mitigating and converting the carbon dioxide emitted by WWTP facilities, a novel device has been developed based on the electrochemical reduction of CO2 to methanol. Methanol is a clean fuel with a high energy density, and it is a strong alternative to fossil fuels or to electric energy produced by fuel cell (FC) technology [1, 23, 24], particularly when the methanol can be used in situ at the WWTP.
The proposed electrochemical device, comprising a series of electrodes and a
simple electrolyte, was exposed to a CO2 flow. Under adequate conditions, the reduction of CO2 to methanol at the cathode occurred following Eqs. (1–3).
Anode reaction:
Cathode reaction:
Overall reaction:
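The reaction equations themselves did not survive extraction. For orientation only, a commonly cited set of half-reactions for aqueous electrochemical reduction of CO2 to methanol, which may differ in detail from the authors' Eqs. (1–3), is:

```latex
% Assumed half-reactions, not necessarily the authors' exact Eqs. (1-3)
\begin{align}
  \text{Anode:}   \quad & 3\,\mathrm{H_2O} \rightarrow \tfrac{3}{2}\,\mathrm{O_2} + 6\,\mathrm{H^+} + 6\,e^- \\
  \text{Cathode:} \quad & \mathrm{CO_2} + 6\,\mathrm{H^+} + 6\,e^- \rightarrow \mathrm{CH_3OH} + \mathrm{H_2O} \\
  \text{Overall:} \quad & \mathrm{CO_2} + 2\,\mathrm{H_2O} \rightarrow \mathrm{CH_3OH} + \tfrac{3}{2}\,\mathrm{O_2}
\end{align}
```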
Fig. 1 Schematic configuration of the electrochemical device for CO2 sequestration and its conversion to fuel at a WWTP where the outlet by-product is
methanol
Table 1 Characteristics of electrolyte (tap water)

Component           Average values (ppm)
Cations   Na+               4.19
          K+                5.99
          Ca2+              9.21
          Mg2+             11.44
Anions    Cl−              25.52
          Br−               0.41
          PO4 3−           21.96
          NO3 −             0.93
The device voltage is a key variable since it affects energy consumption and current
efficiency. Also, the reaction mechanism of CO2 electro-reduction is strongly affected
by cathodic potential value. To investigate the effect of cathodic potential on produced
methanol volume, several experiments were carried out at different applied cathodic
potentials (from 8.3 to 110 V). Figure 2 displays the impact of electrical potential on
methanol production in the electrochemical device.
The results showed that the concentration of produced methanol increased with increasing cathodic potential. The maximum methanol concentration at ambient temperature was found to be 116.4 ppm after 20 min of CO2 injection at a rate of 0.8 m3 h−1. The results of this experiment revealed that the optimum cathodic potential for the reduction of carbon dioxide was 80.9 DCV.
It was found that by increasing voltage (from 8 to 80 V), the methanol production
significantly improved in the device. However, the methanol concentration dropped when the cathodic potential was increased from 80 to 110 DCV. It was speculated
that higher hydrocarbons like ethanol, propanol, etc. were formed if the number of
protons available at the cathode increased. Thus, the equilibrium potential required
for the reduction process also increased. Another reason for methanol decline might
be adsorption of carbon monoxide on the electrode. Carbon monoxide formed from
CO2 in the cathode might adsorb on the anode (Cu). This adsorption would interfere
with hydrogen formation. However, electrochemical reduction of CO2 depends on
the concentration of H2 or adsorbed hydrogen atoms at the electrode surface and
the formation of a thin layer of copper oxide on the anode surface [2, 5, 18]. Thus,
the charge passed through the electrochemical device would be used for hydrogen
evolution rather than carbon dioxide reduction. Moreover, with increasing potential up to 110 V, the temperature of the electrolyte might also increase, and the solubility of carbon dioxide is reduced at higher temperatures (about 30–38 °C) [2].
In order to investigate the effect of the gas input duration on the methanol production,
several tests were carried out at different influx periods (10, 20, and 30 min).
The results showed that the maximum methanol concentration (116.4 ppm) was obtained after 20 min (Fig. 3). Increasing the input duration from 10 to 20 min promoted methanol generation, with the optimum period found to be 20 min; however, methanol production was reduced over the 30 min period.
This phenomenon can be attributed to the blocking of active sites on the electrode surface by CO [11]; adsorbed carbon monoxide may hinder fresh carbon dioxide from contacting the surface of the cathode [5].
Consequently, for improving methanol production, it is necessary to increase the
active sites on the electrode surface. Reaction (Eq. 4) illustrates the formation of CO
from the reduction of CO2 [11].
In order to study the input CO2 flowrate’s effect on methanol generation, seven
flowrates (from 0.5 to 2.5 m3 h−1 ) were tested. The investigations were conducted
during the optimal period of 20 min at ambient temperature, with the optimal potential of 80.6 DCV applied.
The results indicate that CO2 flowrate higher than 2 m3 h−1 has an inhibitory
effect on the formation of methanol (Fig. 5).
However, raising the flowrate from 0.5 to 1 m3 h−1 increased the methanol production from 65 to 118 ppm. This rise in methanol generation indicates that, with increasing flowrate, the CO2 mass transfer between the liquid and gas phases also increases; in addition, the collision of CO2 with the electrodes is enhanced and, consequently, CO2 reduction increases [17].
Furthermore, the lower methanol production rate at higher flowrates might also be related to a reduction in methanol solubility in the electrolyte. Indeed, the reduction
of methanol solubility may be caused by the enhanced turbulence in the liquid phase
at higher gas flowrate [17].
4 Conclusion
The electrochemical conversion of CO2 into a green fuel such as methanol is a promising technique for mitigating GHG emissions at WWTPs and, consequently, for helping to prevent climate change. This paper presents a novel electrochemical device that is able to reduce CO2 and synthesize methanol in situ.
The impact of operation parameters (gas input time, electrical potential, temperature,
and inlet gas flowrate) on methanol generation was investigated to define the best
conditions for CO2 electrochemical conversion at a WWTP. Thus, optimal operation parameters for the novel 1.25 L electrochemical CO2 converter were determined.
The results showed that during 20 min of continuous gas flow at 0.8 m3 h−1, at an ambient temperature of 24 °C, and under a voltage gradient of 8 DCV/cm, 60% of the CO2 was sequestered and methanol was simultaneously produced at a concentration of 116 ppm.
Acknowledgements The authors acknowledge the financial support of NSERC Discovery Grant
awarded to Dr. M. Elektorowicz. They also are thankful to the Gina Cody School at Concordia
University for FRS bursary dedicated to Dr. S. Abedini’s doctoral research.
References
1. Achmad F, Kamarudin SK, Daud WRW, Majlan EH (2011) Passive direct methanol fuel cells
for portable electronic devices. Appl Energy 88(5):1681–1689
2. Al-Juboori O, Sher F, Hazafa A, Khan MK, Chen GZ (2020) The effect of variable operating
parameters for hydrocarbon fuel formation from CO2 by molten salts electrolysis. J CO2
Utilization 40:101–193
3. Ashrafi O (2012) Estimation of greenhouse gas emissions in wastewater treatment plant of
pulp & paper industry (Doctoral dissertation, Concordia University)
4. Arnarson L, Schmidt PS, Pandey M, Bagger A, Thygesen KS, Stephens IE, Rossmeisl J (2018)
Fundamental limitation of electrocatalytic methane conversion to methanol. Phys Chem Chem
Phys 20(16):11152–11159
5. Chaplin RPS, Wragg AA (2003) Effects of process conditions and electrode material on reaction
pathways for carbon dioxide electroreduction with particular reference to formate formation.
J Appl Electrochem 33(12):1107–1123
6. Fagnani E, Melios CB, Pezza L, Pezza HR (2003) Chromotropic acid–formaldehyde reaction in
strongly acidic media. The role of dissolved oxygen and replacement of concentrated sulphuric
acid. Talanta 60(1):171–176
7. Franchini M, Mannucci PM (2015) Impact on human health of climate changes. Eur J Intern
Med 26(1):1–5
8. Glasgow Climate Change Conference—October-November 2021 | UNFCCC (13 Nov 2021).
https://unfccc.int/conference/glasgow-climate-change-conference-october-november-2021
9. Hazarika J, Manna MS (2019) Electrochemical reduction of CO2 to methanol with synthesized
Cu2O nanocatalyst: Study of the selectivity. Electrochim Acta 328:1350–1353
10. He Z, Qian Q, Ma J, Meng Q, Zhou H, Song J, Liu Z, Han B (2016) Water-enhanced synthesis
of higher alcohols from CO2 hydrogenation over a Pt/Co3O4 catalyst under Milder conditions.
Angew Chem Int Ed 55(2):737–741
11. Hori YI (2008) Electrochemical CO2 reduction on metal electrodes. In: Modern aspects of
electrochemistry. Springer, New York, NY
12. Kampschreur MJ, Temmink H, Kleerebezem R, Jetten MS, van Loosdrecht MC (2009) Nitrous
oxide emission during wastewater treatment. Water Res 43(17):4093–4103
13. Kavuma C (2013) Variation of methane and carbon dioxide yield in a biogas plant, Master
science Thesis, Royal Institute of Technology Stockholm
14. Lee B, Hibino T (2011) Efficient and selective formation of methanol from methane in a fuel
cell-type reactor. J Catal 279(2):233–240
15. Ma M, Jin BJ, Li P, Jung MS, Kim JI, Cho Y, Kim S, Moon JH, Park JH (2017) Ultrahigh
electrocatalytic conversion of methane at room temperature. Adv Sci 4(12):1700379
16. Ma Q, Tipping RH (1998) The distribution of density matrices over potential-energy surfaces:
application to the calculation of the far-wing line shapes for CO2. J Chem Phys 108(9):3386–
3399
17. Mustafa NFA, Mohd Shariff A, Tay WH, Abdul Halim HN, Mhd Yusof SM (2020) Mass transfer
performance study for CO2 absorption into non-precipitated potassium carbonate promoted
with glycine using packed absorption column. Sustainability 12(9):3873
18. Nogami G, Itagaki H, Shiratsuchi R (1994) Pulsed electroreduction of CO2 on copper
electrodes-II. J Electrochem Soc 141(5):1138
19. Rafizadeh A, Poormohammad L, Shariati SH, Mirzajani E (2011) Introducing a colorimetric
method for the detection of methanol in several types of drinks. J Mazandaran Univ Med Sci
21(84):150–152
20. Ritchie H, Roser M (2020) CO2 and greenhouse gas emissions. Our world in data
21. Wasmus S, Küver A (1999) Methanol oxidation and direct methanol fuel cells: a selective
review. J Electroanal Chem 461(1–2):14–31
22. Wellinger A, Murphy JD, Baxter D (eds) (2013) The biogas handbook: science, production
and applications. Elsevier
23. Wiebe R, Gaddy VL (1940) The solubility of carbon dioxide in water at various temperatures
from 12 to 40 and at pressures to 500 atmospheres. Critical phenomena. J Am Chem Soc
62(4):815–817
24. Zakaria Z, Kamarudin SK (2016) Direct conversion technologies of methane to methanol: an
overview. Renew Sustain Energy Rev 1(65):250–261
Residual Methane Generation Capacity
of Waste Residue in a Landfill
Bioreactor: Case Study of Calgary Biocell
Abstract With the aim of evaluating the environmental economic value of Biocell
technology, in this paper, excavated waste residue (EWR) from the Calgary Biocell
was characterized for the first time. The objective was to find the carbon offset of the
Calgary Biocell project through biochemical methane potential (BMP) assay. The
EWR samples were collected from different locations within the Calgary Biocell,
and the physical and chemical characteristics of the individual EWR samples were
evaluated following the standard methods. The BMP assays were conducted using the
modified solid-phase method that represents landfill conditions more accurately. A composite sample, which was a mixture of all EWR samples, was prepared to represent
the entire Biocell condition. Laboratory batch experiments were conducted using the
biodegradable fraction of EWR to evaluate the methane (CH4 ) generation potential
(Lo ) and the first-order rate coefficient (k) values of Calgary Biocell. Based on the
BMP assays, it was shown that the excavated waste from the Calgary Biocell has
gone through more degradation (i.e., with a production of 16.81 ± 2.9 mL CH4 /g
TS) compared with the other studies on excavated waste, emphasizing the effectiveness of Biocell technology in terms of waste degradation.
1 Introduction
Landfills contribute significantly to climate change by emitting greenhouse gases (GHGs) into the atmosphere. Anaerobic digestion (AD) of the organic fraction of the waste
material leads to the production of landfill gas (LFG) [23] which consists of methane
(CH4) and carbon dioxide (CO2) [41]. According to the Intergovernmental Panel on Climate Change (IPCC), both CO2 and CH4 are GHGs; however, CH4 from non-fossil sources is 34 times more harmful to the environment than CO2 over a 100-year period because of its higher global warming potential (GWP) (IPCC 2014).
If the GHGs from landfills are released directly into the atmosphere, they pose potential climate change impacts. In 2019, the GHGs from the waste sector contributed 28 Mt
CO2 -equivalents to the total national emissions in Canada (Environment and Climate
Change Canada 2021). Overall, the waste sector represents 3.8% of total Canadian
GHG emissions (Environment and Climate Change Canada 2021). In this regard,
municipal solid waste (MSW) disposed of in landfills comprises 82% of the waste
sector emissions in Canada (Environment and Climate Change Canada 2021).
Among the options to minimize CH4 emissions from landfills, one common approach is to collect the LFG and use it to produce energy. However, this option is only viable
when enough LFG is produced within the landfill to make the process economical.
LFG from a landfill should have a minimum CH4 concentration of 20% (v/v) to be
suitable for energy recovery purposes [27]. One method available to achieve this
objective is to operate a landfill as a bioreactor landfill. The foremost idea of a
bioreactor landfill is the recirculation of landfill leachate to accelerate waste decomposition [29, 39, 40]. This results in an enhanced gas generation that
allows the recovery of LFG to produce energy. Furthermore, in a bioreactor landfill,
the waste volume decreases faster, which results in rapid space and resource recovery
[19].
Conventional bioreactor landfills could be operated in aerobic or anaerobic modes
depending on the desired outcome [34]. However, the Biocell is a variation of the
landfill bioreactor approach that goes through successive anaerobic and aerobic
stages [18]. The anaerobic stage improves CH4 production, whereas the aerobic
stage enhances waste decomposition to a point where it could be mined in the last
stage to recover excavated waste residue (EWR) for beneficial uses.
The award-winning Calgary Biocell, located in Calgary, Alberta, Canada, is the
first of its kind where the three phases, such as anaerobic bioreactor, aerobic biore-
actor, and mining, are utilized in one landfill cell. Unlike a traditional landfill or a
bioreactor landfill, the Calgary Biocell is not a permanent facility. The contents of the
Biocell are currently being removed to recover fully- or partly-stabilized waste for
beneficial purposes. The EWR from the Calgary Biocell is used to estimate experi-
mental residual CH4 generation potential to evaluate the overall carbon offset of the
project.
Biochemical methane potential (BMP) assay is the most common experimental
approach conducted in the laboratory to determine the degradability of waste. Conse-
quently, one specific aim of this study is to evaluate the experimental landfill CH4
generation parameter from BMP assays to provide information on the state of EWR
for environmental economic value of Biocell technology. The objectives of the
research were to determine:
• The composition and characteristics of EWR samples recovered from Calgary
Biocell at locations distributed spatially and at different depths
• The CH4 generation parameters, namely L o and k, of EWR for the entire Biocell
and EWR samples recovered from locations distributed spatially and at different
depths
• The carbon offset potential of Biocell using the evaluated k and L o values
Excavated waste residue (EWR) samples were collected from Calgary Biocell during
borehole drilling in October 2020 using the grab sampling method. Three pilot bore-
holes were drilled, and the spatial location of the borehole drilling points was chosen
based on the distance from the edge of the Biocell to determine the effect of spatial
change in degradation. The ground surface elevation was another deciding factor in
choosing a borehole drilling point since it shows the level of waste degradation at that
location during Biocell operation. A lower ground surface elevation indicates more
degradation at that point. The EWR samples were collected at different locations
along the borehole. In total, 8 EWR samples were collected in 10 L containers. The
EWR samples were identified as P1-1 , P1-2, etc., based on their locations. Biocell was
initially constructed in three waste lifts, and each lift was separated with a 500 mm
thick intermediate cover layer. The upper layer was called lift 1, and the bottom layer
was named lift 3. For instance, P1-1 is the sample from point 1 and the first lift. Table
1 provides the location details of the EWR samples. Attempts were made to collect
one sample from each lift of an individual borehole point. However, since point 1
had a smaller total depth, only two samples were collected from this point. Figure 1
illustrates the vertical position of samples in the Biocell. Collected samples were
transferred to a lab at the University of Calgary for further analysis.
2.2 Inoculum
For conducting BMP assays, an inoculum was used to enhance the biodegrada-
tion process within BMP reactor bottles. Stabilized biosolid is one of the best
options as inoculum for the BMP assay since it is easy to obtain and does not
have excessive background methane [50]. In addition, biosolid is free of disease-
causing pathogens compared with un-stabilized sludge samples. Required biosolid
samples were obtained in May 2021 from the Bonnybrook wastewater treatment
plant (WWTP) located in Calgary. The biosolid samples were collected from the
anaerobic digester where primary and activated sludge are treated for over 25 days
at 35 °C.
Batch experiments were carried out by adding a certain amount of solid into a solution
with a specific solid/liquid (S/L) ratio [49]. These mixtures are vigorously stirred or
shaken during the entire reaction time [49]. In this research, a batch BMP assay was
conducted to measure the CH4 production potential of the EWR from the Calgary
Biocell. A mixture of EWR grab samples was prepared to represent the entire Biocell
condition and was called a “composite sample”. The composite sample was tested
in triplicate to find the experimental errors and assess the variability in experimental
measurements. The BMP assays were conducted in a mesophilic condition with a
temperature of 35 °C (±2 °C).
The BMP experimental setup is shown in Fig. 2. 2L VWR storage bottles with
GL screw caps were used as the reactor. The lids of the bottles were drilled and
resealed with 33 mm chlorobutyl septa purchased from VWR. 250 mL VWR storage
bottles with GL screw caps were used for the biogas collection and displaced water
collection bottles.
Saturated NaCl solution has been shown to be the best barrier solution to prevent
the dissolution of CO2 and CH4 in water [48]. However, in the study by Pearse [35],
the use of saturated NaCl posed serious threats to the estimation of biogas volumes.
In that study, salt crystals precipitating out of the solution clogged the valves and tubing [35]. To avoid this issue, an acidified 75% saturated NaCl solution
(DI water containing 36.7% NaCl) that was proposed by Walker et al. [48] was used
instead to minimize the salt crystallization in and on the experimental setup.
The gas produced from the EWR sample in the 2L reactor transferred an equal
volume of solution from the biogas collection bottle to the displaced water collection
bottle. Masterflex E-Lab Tygon tubing was used to connect the three bottles: the 2L
reactor to the biogas collection bottle and the biogas collection bottle to the displaced
water collection bottle. In each bottle, the tubing was passed through a hole drilled
into the cap and was connected to a needle. The tubing end in the gas collection
bottles was sealed with a Cole-Palmer one-way Luer check valve with a silicone
diaphragm using a Cole-Palmer male Luer adapter to prevent the movement of gas
in a direction different from what is shown in Fig. 3. To prevent any short circuiting
and escape of gas from the biogas collection bottle, an Air-tite hypodermic needle
was used. Also, the Cole-Palmer male Luer adapter was used for all connections.
[Figure: vertical position of the EWR samples at borehole points P1, P2, and P3 within the second and third lifts of the Biocell]
The bottles were finally sealed off using a Play-Doh-like duct seal. The displaced
water collection bottle was left open to maintain atmospheric pressure throughout
the setup and prevent any pressure build-up within the setup.
Methods proposed by Angelidaki et al. [2] and Holliger et al. [15] were used as a
reference with modifications proposed by Pearse [35] for the solid-phase BMP assay
to represent landfill conditions. The methodologies have undergone peer reviews and
inter-laboratory studies by BMP researchers [13, 16, 24, 25]. The required amount of
the EWR sample was weighed out and added to the 2L reactor. All weight measure-
ments were obtained using a Mettler AT 250 (Fisher Scientific Canada) analytical
scale. The inoculum was weighed out and mixed with the waste using a blender.
The mixture was distributed into EWR reactor, and the headspace of the reactor was
sparged and flushed with a mixture of 80% N2 and 20% CO2 gas for 10 min to ensure anaerobic conditions in the reactor. One reactor was filled with inoculum
alone that was called “blank” to calculate the amount of CH4 generated from the
inoculum. The bottles were transferred to a Fisher Scientific upright fridge to be
incubated at 35 °C.
The principle of water displacement was used to measure the amount of biogas
produced from EWR reactors. To find the composition of generated biogas, a gas
sample was obtained from each EWR reactor’s headspace using SGE 008,770, 5 mL
syringes with valves fitted with Trajan Scientific needles. The needle had a side hole
dome tip point style. Biogas composition in terms of CH4 , CO2 , N2, and O2 was
then determined using an SRI gas chromatograph (GC) with a thermal conductivity
detector (TCD). The CH4 volume was then determined by multiplying the percentage
of CH4 in the headspace biogas composition and the volume of displaced water. When
the displaced water collection bottles were on the point of overfilling with water, the
bottles were emptied, and the biogas collection bottles were refilled with the barrier
solution and were resealed again.
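As a small worked illustration of this calculation (the numbers are hypothetical, not measured values from the study):

```python
# Hypothetical conversion of a GC-TCD reading and displaced-water volume to a CH4 volume
displaced_water_ml = 250.0   # total biogas volume inferred from the displaced barrier solution
ch4_fraction = 0.55          # CH4 share of the headspace biogas from the GC-TCD analysis

ch4_volume_ml = ch4_fraction * displaced_water_ml   # 0.55 * 250 = 137.5 mL CH4
```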
[Figure: cumulative gas production (mL CH4/g VS waste) versus time (day) for the individual EWR grab samples]
During the BMP assays, the final cumulative CH4 volume was taken as the
biochemical methane potential for the EWR samples. The modified Gompertz model (Eq. 1), a common approach in the literature for modelling anaerobic digestion data, was used [7, 8, 21, 31].
Bt = P × exp{−exp[(Rm × e / P) × (λ − t) + 1]} (1)
where Bt is the cumulative methane production at the final day (day 100) (mL CH4/g VS), P is the methane production potential (mL CH4/g VS), Rm is the maximum methane production rate (mL CH4/g VS day−1), λ is the lag phase (days), t is time in days, e is Euler's number, and k is the first-order rate constant, expressed in reciprocal of time (day−1).
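A minimal sketch of fitting Eq. 1 to cumulative CH4 data with SciPy is shown below; the data points and starting values are illustrative only and are not the study's measurements or fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, p, rm, lam):
    """Modified Gompertz model: Bt = P * exp(-exp((Rm * e / P) * (lam - t) + 1))."""
    return p * np.exp(-np.exp((rm * np.e / p) * (lam - t) + 1.0))

# Illustrative cumulative CH4 data (mL CH4 / g VS) over a 100-day assay
t_days = np.array([0, 10, 20, 40, 60, 80, 100], dtype=float)
b_obs = np.array([0.0, 5.0, 14.0, 26.0, 31.0, 33.5, 34.5])

popt, _ = curve_fit(modified_gompertz, t_days, b_obs, p0=[35.0, 1.0, 5.0], maxfev=10000)
p_fit, rm_fit, lam_fit = popt   # methane potential, maximum production rate, lag phase
```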
The physical and chemical characteristics of the EWR grab samples were evaluated
using standard methods. The results are summarized in Table 2.
It can be seen from Table 2 that the volatile solids (VS) fraction of the waste is
lower in EWR samples from the bottom layer of the Biocell (i.e., P1-2 , P2-3, and P3-3 )
compared with samples from the upper layer (i.e., P1-1 , P2-1, and P3-1 ). VS fraction
of the waste has been identified as an indicator of the effectiveness of stabilization
strategies for landfills [4]. Therefore, VS data implies that waste in the deeper parts
of the Biocell has gone through more degradation. This finding is in line with the
visual observations during borehole drilling. Waste from deeper parts had a darker
color, and it was more difficult to recognize materials in the waste. In addition, the VS
values fit within the ranges reported by the other researchers [12, 28]. VS contents
below 25% of dry solids (DS) had been suggested as a stability indicator for the
biodegradable fraction of MSW [4]. Based on this observation, it can be concluded that the biodegradable fraction of the EWR still has the potential to degrade.
The average lignin content of the EWR samples was determined as 17.1%. Lignin
content higher than 19% has been reported to restrict carbohydrate conversion during
hydrolysis and inhibit CH4 generation [44]. Consequently, the lignin content of the
excavated waste is expected to be non-inhibiting.
Alkalinity, volatile fatty acid (VFA), and pH are three parameters that play a signif-
icant role in the buffering capacity of a system to reach a stable state. The buffering
capacity in this study showed better results compared to the capacity reported in the
literature for 22 countries [47]. The pH values of all EWR samples were found to be
in the same range reported by Mönkäre et al. [30] and Pecorini and Iannelli [37, 38]
for waste excavated from landfills.
Phosphorus concentrations in the excavated waste were found to be lower than
those reported in the literature for two possible reasons. The solid components in
extracted liquid were filtered out in this study leading to the determination of only the
dissolved phosphorus contents and not total phosphorus. Also, MSW differs between
geographical regions due to variations in composition, weather, economic activities
as well as nutritional habits. High variability of phosphorus, N, and lignin values was observed across different cities [6].
The moisture content (MC) is one of the key factors affecting the rate of waste
degradation. The MC of the EWR grab samples from Biocell was determined, and
the results are presented in Table 3.
Figure 4 shows an increase in MC of the EWR grab samples as sample collection
depth increases. As mentioned earlier, more leachate was found in the EWR grab
samples from the deeper parts of the Biocell (i.e., P1-2 , P2-3, and P3-3 ) with an average
MC of 45.95% while the average MC for the shallower parts of the Biocell (i.e., P1-1 ,
P2-1, and P3-1 ) was 41.79%. The spatial variation (in a horizontal direction) of the
MC was less significant compared with the vertical variation. As an example, the
standard deviation of the MC for EWR grab samples from the first lift of all three
borehole locations (i.e., P1-1 , P2-1, and P3-1 ) was determined to be 1.80, whereas the
standard deviation of the MC from different lifts of point 3 (i.e., P3-1 , P3-2, and P3-3 )
was determined as 4.27.
The sudden increase in the MC of P2-2 can be related to the fact that P2-2 was located above an intermediate layer; leachate blockage at the intermediate layer might have occurred at that point, resulting in this sample containing more leachate compared with the other EWR grab samples.

Fig. 4 Moisture content variation vertically and horizontally in the Biocell
Comparing the MC of the Biocell, ranging from 41 to 48%, with other landfill mining case studies around the world shows that the Biocell carries 5 to 10% more moisture than the average MC reported in the literature [17, 37, 38, 51], emphasizing the impact of the leachate recirculation system utilized in the Biocell.
When conducting the BMP assay, the amount of displaced water collected in the
displaced water collection bottle was related to the amount of total biogas gener-
ated in the reactor during anaerobic degradation. The composite sample described earlier was tested in triplicate to determine the experimental error. The replicates were named CS-1, CS-2, and CS-3 for ease of identification, and the calculated error was also applied to the grab sample results. Figure 5 represents the cumulative
amount of displaced water in the displaced water collection bottles. This amount is
considered the total biogas generation from the composite sample reactors.
The EWR composite sample reactors each produced approximately the same amount of biogas, about 5000 mL, after 100 days. The blank reactor was filled with only inoculum to measure the amount of biogas generated from the inoculum alone. Measuring biogas generation from the inoculum was needed because the amount of CH4 generated from the inoculum must be deducted when calculating landfill parameters such as the reaction rate (k) and the CH4 generation potential (Lo).
The average biogas yield from the EWR composite sample reactors was 75.34 mL/
g VS. Pecorini et al. [36] also conducted BMP assays on excavated waste from a
Fig. 5 Biogas generation in replicate reactors with EWR composite sample and blank
conventional sanitary landfill in Italy. The total biogas generation from the excavated
waste in their study after 90 days was measured as 340 mL/g VS with an inoculum
to substrate ratio (ISR) of 1.5 [36]. The differences between values can be explained
in part by the fact that more degradation has taken place in the Calgary Biocell
compared with a conventional sanitary landfill. The Biocell had higher moisture
content (roughly 10% higher) than conventional sanitary landfills due to having a
leachate recirculation system, thus it provided a better condition for waste biodegra-
dation. Therefore, the residual potential to produce biogas from the excavated mate-
rials in the Biocell was lower than the biogas production potential from conventional
sanitary landfill materials that have undergone less biodegradation. In addition, a
higher ISR value results in a higher biogas yield [35]. In the current study on EWR from the Calgary Biocell, the ISR was chosen as 0.4; consequently, the lower biogas yield compared with the literature could also be associated with the use of a smaller ISR.
Time-dependent CH4 yields of the replicates with the EWR composite sample are
shown in Fig. 6.
The cumulative CH4 generation was divided by the grams of volatile solids of the EWR in each reactor. CS-3 showed a higher final CH4 production after 100 days (BMP100), 36.27 mL CH4/g VS, which is about 11% higher than that of CS-1, 32.70 mL CH4/g VS. The difference in the final CH4 values among the replicates can be related to measurement errors and to the inaccuracy of the quartering method used to divide the composite sample into three replicates. Some replicates could contain more un-degraded material than others. For instance, CS-3 might have had more un-degraded materials in its composition, resulting
Fig. 6 Cumulative CH4 generation in replicate reactors with EWR composite sample
in a higher CH4 yield. The solid-phase BMP method used in this study also involves a lower moisture content than the standard slurry-phase method. Pearse [35] showed that the CH4 yield from slurry-phase reactors was 30% and 120% higher in the 100 g and 200 g treatments, respectively, and that they also had a shorter lag phase than their solid-phase counterparts.
Furthermore, different types of materials have different CH4 generation potentials; thereby, the BMPf value differs between studies. For instance, Pecorini et al. [36] found that the paper and cardboard category has the highest BMPf value among the excavated waste categories. The green waste and wood fraction also produced a significant amount of biogas and thus had a high BMPf value, while the textile category presented a low contribution [36].
The modified Gompertz model (1) was fitted to the CH4 yield curves presented in Fig. 6. The Excel Solver was used to find the values of the CH4 production potential (P), specific rate constant (Rm), and lag phase (λ) parameters of the modified Gompertz model by minimizing the sum of squared differences between the experimental and calculated values of cumulative CH4 production (BMP) at each time. The Lo value, which is the CH4 production potential of the waste, was also calculated. The difference between P and Lo lies in their normalization basis: P indicates the CH4 generation potential per gram of volatile solids (VS), whereas Lo expresses the same quantity per gram of total solids (TS).
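As an illustration of the curve-fitting step described above, the short sketch below fits the commonly used form of the modified Gompertz model to a cumulative CH4 yield series with a least-squares routine standing in for the Excel Solver; the time-series values are hypothetical placeholders, not the measured CS-1 to CS-3 data, and the function name and initial guesses are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rm, lam):
    """Cumulative CH4 yield (mL CH4/g VS) at time t (days) for the modified Gompertz model:
    P = CH4 production potential, Rm = maximum production rate, lam = lag phase (days)."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Hypothetical cumulative CH4 yield data standing in for one replicate (day, mL CH4/g VS)
t_obs = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
y_obs = np.array([0.0, 0.5, 1.5, 4.0, 9.0, 16.0, 23.0, 28.0, 31.0, 32.5, 33.0])

# Minimize the sum of squared differences between experimental and modelled BMP values,
# the same objective that was given to the Excel Solver.
p0 = [y_obs.max(), 1.0, 30.0]  # initial guesses for P, Rm, lambda
(P, Rm, lam), _ = curve_fit(modified_gompertz, t_obs, y_obs, p0=p0, maxfev=10000)
print(f"P = {P:.2f} mL CH4/g VS, Rm = {Rm:.3f} mL/(g VS*day), lag = {lam:.1f} days")
```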
Thereafter, Eq. (2) was used to find the first-order rate constant (k) for the EWR sample using the results of the BMP assays, with the P value obtained from Eq. (1) and the cumulative CH4 production on the final day of the experiment (BMP100) for each sample. The k value was calculated using the Excel Solver to solve one equation with one unknown.
Table 4 includes the modified Gompertz model parameters obtained from the time-dependent cumulative CH4 yields in the BMP assay, as well as the k values for the EWR composite sample replicates.
The average Lo value for the EWR composite sample was calculated as 16.81 ± 2.90 mL CH4/g TS. The Lo value determined in the current study is higher than the Lo value reported for a 24- to 40-year-old Finnish landfill [30]: Mönkäre et al. [30] stated the Lo value of the excavated waste as 5.8 ± 3.4 mL CH4/g TS. It can be concluded that the Biocell, owing to its leachate recirculation system, provides conditions for the waste to degrade faster than in a conventional sanitary landfill. Although waste composition is a significant factor in the Lo value, the Biocell after 15 years of operation showed almost the same level of degradation as a sanitary landfill after 40 years. In contrast, previous studies on excavated waste from conventional sanitary landfills have demonstrated higher Lo values. Pecorini and Iannelli [37, 38] estimated the Lo value as 52.2 ± 28.7 mL CH4/g TS and stated that the excavated waste was not degraded based on this value. Pecorini et al. [36] reported 115 mL CH4/g TS and 47 mL CH4/g TS for waste excavated from 2001 and 2007 deposits, respectively. Sandoval-Cobo et al. [21] reported Lo values for excavated waste of 151 and 135 mL CH4/g TS for ISR ratios of 2.0 and 1.0, respectively. Sormunen et al. [43] tested excavated waste from 17- and 48-year-old landfills. The Lo values estimated from BMP assays on the top and middle layers were 68 mL CH4/g TS and 44 mL CH4/g TS for the 17- and 48-year-old landfills, respectively, whereas the Lo values from the bottom-layer BMP assays were 21 mL CH4/g TS and 8 mL CH4/g TS, respectively [43].
In lysimeter studies and large-scale digesters, where the reactors have a higher solids-to-liquid ratio, the obtained Lo values agreed more closely with the results of the current experiment and were much lower than those from slurry-phase BMP assays [5, 26]. Furthermore, studies based on field data reported Lo values in the range of 8–200 mL CH4/g TS, with a significant number below 60 mL CH4/g TS, which agrees with the values obtained from the BMP assays in this research [11].
The k values were determined from the BMP100 data obtained from time-
dependent cumulative CH4 production after 100 days. The average k value is
0.0196 day−1 . Lag phase (λ) values were also longer than the values reported by other
researchers [11, 21]. The longer lag phase can be related to using the solid-phase
method in the BMP assay. The previous studies were conducted in the slurry-phase,
while the slurry treatments are known to have a shorter lag phase due to the dilution
of toxic effects by the excess liquid content [3]. On the other hand, the BMP assays
conducted in the solid phase with the same ISR by Pearse [35] showed an even longer lag phase of 81 days, compared with the average lag phase of 38.32 days for the composite sample in the current study.
The biodegradable fraction of EWR consisted of paper, yard waste (wood), and
unidentifiable materials. Since the origin of unidentifiable materials was not clear, it
was assumed that 50% of the unidentifiable materials are made of inert materials that
do not produce LFG. The rest of the unidentifiable materials category was assumed to
act like a woody waste after degradation. Therefore, the ultimate analysis of woody
waste, mixed paper, and wood was used for estimation of the EWR molecular formula
and is reported in Table 5.
Based on the composition of the composite EWR sample, the percentage of
unidentifiable, paper/cardboard, and wood in the composition was determined as
61.4%, 15.2%, and 5%, respectively. Since it is assumed that only 50% of the
unidentifiable materials produce LFG and act as woody waste, the percentage of
woody waste was considered 30.7%.
Based on the atomic weight of each element, the total moles of each element in the biodegradable fraction of EWR were calculated and normalized based on nitrogen. Table 4 presents the number of moles of each element in the biodegradable fraction of EWR. The molecular formula for the biodegradable volatile solids (BVS) fraction of EWR was estimated as C207H314O141N. Sekman et al. [42] determined the molecular formula of the organic matter of landfilled solid waste as C4H6O3, and Themelis and Kim [46] estimated the molecular formula as C6H10O4. Pearse [35] evaluated the molecular formula of fresh waste as C39H70O30N. The Lo-theoretical was determined using the Buswell equation for the BVS fraction [26, 33]:
$$\mathrm{C}_n\mathrm{H}_a\mathrm{O}_b\mathrm{N}_c + \left(n - \frac{a}{4} - \frac{b}{2} + \frac{3c}{4}\right)\mathrm{H_2O} \rightarrow \left(\frac{n}{2} + \frac{a}{8} - \frac{b}{4} - \frac{3c}{8}\right)\mathrm{CH_4} + \left(\frac{n}{2} - \frac{a}{8} + \frac{b}{4} + \frac{3c}{8}\right)\mathrm{CO_2} + c\,\mathrm{NH_3} \quad (3)$$
Since the waste was already degraded and the ultimate analysis of the fresh materials was used, it was assumed that the Lo-theoretical of the EWR is 50% of the calculated value, giving an Lo-theoretical of 73.45 mL CH4/g TS. However, the average experimental result is 16.81 mL CH4/g TS. The difference between the values could be the result of several factors. First, the waste has been degraded, so the CH4 potential has decreased, while the theoretical value is calculated from the molecular formula of fresh organic materials, including paper and woody waste, modified for excavated waste only by an assumption. Second, although experimenting in the solid phase is a better representation of landfill conditions, the experimental values are lower than those of the slurry-phase method [21, 26].
Table 5 Ultimate analysis of woody waste, mixed paper, and wood [45]
Component | Carbon | Hydrogen | Oxygen | Nitrogen (percent by weight, dry basis)
Woody waste | 49.5 | 6.0 | 42.7 | 0.2
Mixed paper | 43.5 | 6.0 | 44.0 | 0.3
Wood | 47.8 | 6.0 | 38.0 | 3.4
Therefore, the experimental value obtained from the BMP assay would have been
higher if the slurry-phase method was used.
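As a rough illustration of Eq. (3), the sketch below computes the stoichiometric CH4 yield per gram of the estimated BVS formula C207H314O141N. It stops at the per-gram-of-BVS value because converting to an Lo-theoretical per gram of TS additionally requires the composition fractions and the 50% degradation assumption described above; the function name and the use of the ideal-gas molar volume are assumptions of this example.

```python
# Stoichiometric CH4 yield from the Buswell equation, Eq. (3), for CnHaObNc
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007}
MOLAR_VOLUME_ML = 22_414.0  # mL of ideal gas per mole at standard conditions

def buswell_ch4_ml_per_g(n, a, b, c):
    """CH4 yield (mL per gram of substrate) for the formula CnHaObNc."""
    ch4_moles = n / 2 + a / 8 - b / 4 - 3 * c / 8  # moles of CH4 per mole of substrate
    molar_mass = (n * ATOMIC_MASS["C"] + a * ATOMIC_MASS["H"]
                  + b * ATOMIC_MASS["O"] + c * ATOMIC_MASS["N"])
    return ch4_moles * MOLAR_VOLUME_ML / molar_mass

# Estimated BVS formula of the EWR reported in the text: C207 H314 O141 N
print(f"{buswell_ch4_ml_per_g(207, 314, 141, 1):.0f} mL CH4 per g of BVS (stoichiometric)")
```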
Several models are available for estimating the CH4 generation rate in landfills using
site-specific input parameters. The LandGEM model is one of the most popular
models and was developed by the US Environmental Protection Agency (US EPA) to
estimate landfill gas generation [10]. The US EPA’s LandGEM model uses the Scholl
Canyon model to predict annual CH4 emissions from a landfill. The Scholl Canyon
Model uses first-order rate kinetics to determine gas generation during anaerobic
degradation of waste within a landfill and is written as [10]:
$$Q_{\mathrm{methane}} = \sum_{i=1}^{n} k\, L_o\, M_i\, e^{-k t_i} \quad (4)$$
where Qmethane is the total CH4 production rate (m3/yr), k is the gas generation rate constant (yr−1), Lo is the ultimate CH4 generation potential (m3/ton), and Mi is the sub-mass placed in the cell in year i (tons).
This research found the experimental Lo value for the entire Biocell to be 16.81 mL CH4/g TS, which is equal to 16.81 m3 CH4/ton of waste, and the average k value to be 0.02. Based on the historical data of the Biocell project, roughly 45,860 tons of waste were deposited initially; however, it is assumed that about 30,000 tons of waste remain in the Biocell after 15 years of operation and degradation. Using the LandGEM model with the experimental Lo and k values results in an estimated 42,754 m3 of CH4 emitted into the atmosphere over the next five years. Considering a CH4 GWP of 34 (IPCC 2014), 1,453,636 m3 of CO2-equivalents will be emitted into the atmosphere from the Calgary Biocell if the waste remains in place over the next five years. The amount of LFG emissions would be much higher over a longer period under the business-as-usual scenario.
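The following sketch applies the first-order Scholl Canyon sum of Eq. (4) to the experimental inputs quoted above (Lo = 16.81 m3 CH4/ton, k = 0.02 yr−1, about 30,000 tons remaining). It is not the LandGEM spreadsheet itself, and the printed total depends on how the elapsed years are counted, so it is only indicative of the reported 5-year figure.

```python
import math

def scholl_canyon_annual(k, Lo, mass_tons, years):
    """Annual CH4 generation (m3/yr) from a single waste mass, Q_t = k * Lo * M * exp(-k * t)."""
    return [k * Lo * mass_tons * math.exp(-k * t) for t in years]

k, Lo, mass = 0.02, 16.81, 30_000  # inputs reported in the text

# One possible convention: count the next five years from the present (t = 1..5)
annual = scholl_canyon_annual(k, Lo, mass, years=range(1, 6))
total_ch4 = sum(annual)
total_co2e = total_ch4 * 34  # CH4 GWP of 34, as assumed in the text (IPCC 2014)
print(f"5-yr CH4 ~ {total_ch4:,.0f} m3, CO2-eq ~ {total_co2e:,.0f} m3")
```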
In the Paris Agreement that was adopted in December 2015, Canada and 195 other
countries agreed to keep “the increase in the global average temperature well below
2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase
to 1.5 °C above pre-industrial levels acknowledging that this would considerably
decrease the hazards and effects of climate change” [32]. Under this Agreement,
Canada committed to lowering GHG emissions by 30% from levels in 2005 by the
year 2030 (Environment and Climate Change Canada, 2021). To reach this emission reduction goal, Canada has followed various strategies; for example, industries are charged per ton of their emissions. In 2021, the Alberta government charged $40 per ton of emissions, and the price is stated to rise to $50 per ton in 2022 [1]. Considering the carbon tax in Alberta, excavating the Biocell would create revenue of about $108,730 over the next 5 years.
4 Conclusion
The Lo values of EWR estimated from BMP assays were lower than the Lo values reported by other studies conducted on excavated waste materials, highlighting the efficiency of the Biocell in degrading materials. The average Lo value of the EWR (16.81 mL CH4/g TS) was approximately 30% of the Lo value for fresh waste (57.23 mL CH4/g TS) tested using a similar solid-phase method and ISR by Pearse [35]. The observed decrease in the Lo value can be attributed to the degradation of the EWR in the Biocell.
Acknowledgements The authors wish to acknowledge the funding received from the Natural
Sciences and Engineering Research Council (NSERC) of Canada; CEERE (Centre for Environ-
mental Engineering Research and Education) at the University of Calgary.
References
27. La H, Hettiaratchi JPA, Achari G, Dunfield PF (2018) Biofiltration of methane. Biores Technol
268:759–772. https://doi.org/10.1016/J.BIORTECH.2018.07.043
28. Machado SL, Carvalho MF, Gourc JP, Vilar OM, do Nascimento JCF (2009) Methane gener-
ation in tropical landfills: simplified methods and field results. Waste Manag 29(1):153–161.
https://doi.org/10.1016/J.WASMAN.2008.02.017
29. McBean EA, Syed-Ritchie S, Rovers FA (2007) Performance results from the Tucumán solid
waste bioreactor. Waste Manage. https://doi.org/10.1016/j.wasman.2006.12.017
30. Mönkäre TJ, Palmroth MRT, Rintala JA (2016) Characterization of fine fraction mined from two
Finnish landfills. Waste Manage 47:34–39. https://doi.org/10.1016/J.WASMAN.2015.02.034
31. Nielfa A, Cano R, Fdz-Polanco M (2015) Theoretical methane production generated by the
co-digestion of organic fraction municipal solid waste and biological sludge. Biotechnol Rep.
https://doi.org/10.1016/j.btre.2014.10.005
32. Paris Agreement. (n.d.). Retrieved 10 Feb 2022, from https://ec.europa.eu/clima/eu-action/int
ernational-action-climate-change/climate-negotiations/paris-agreement_en
33. Parra-Orobio BA, Donoso-Bravo A, Ruiz-Sánchez JC, Valencia-Molina KJ, Torres-Lozada
P (2018) Effect of inoculum on the anaerobic digestion of food waste accounting for the
concentration of trace elements. Waste Manage 71:342–349. https://doi.org/10.1016/J.WAS
MAN.2017.09.040
34. Patil BS, Singh DN (2017) Simulation of municipal solid waste degradation in aerobic and
anaerobic bioreactor landfills. Waste Manage Res 35(3):301–312
35. Pearse L (2019) Biochemical methane potential of landfilled municipal solid waste using a
non-slurry approach. http://hdl.handle.net/1880/110224
36. Pecorini I, Albini E, Rossi E, Iannelli R, Raco B, Lippo G (2018) Landfill mining: a case study
on characterization of excavated waste. Procedia Environ Sci Eng Manag 5:153–158
37. Pecorini I, Iannelli R (2020a) Characterization of excavated waste of different ages in view of
multiple resource recovery in landfill mining. Sustainability (Switzerland) 12(5). https://doi.
org/10.3390/su12051780
38. Pecorini I, Iannelli R (2020) Landfill GHG reduction through different microbial methane
oxidation biocovers. Processes 8(5):591. https://doi.org/10.3390/pr8050591
39. Pohland FG (1975) Accelerated solid waste stabilization and leachate treatment by leachate
recycle through sanitary landfills. Prog Water Technol
40. Reinhart DR (1996) Full-scale experiences with leachate recirculating landfills: case studies.
Waste Manage Res. https://doi.org/10.1006/wmre.1996.0036
41. Sabour MR, Mohamedifard A, Kamalan H (2007) A mathematical model to predict the
composition and generation of hospital wastes in Iran. Waste Manage 27(4):584–587
42. Sekman E, Akkaya GK, Demir A, Yıldız Ş, Balahorli V, Aykut NO, Bilgili MS (2019) Investiga-
tion of solid waste characteristics in field-scale landfill test cells. Global NEST J 21(2):153–162.
https://doi.org/10.30955/gnj.002982
43. Sormunen K, Ettala M, Rintala J (2008) Detailed internal characterisation of two Finnish land-
fills by waste sampling. Waste Manage 28(1):151–163. https://doi.org/10.1016/J.WASMAN.
2007.01.003
44. Steffen F, Requejo A, Ewald C, Janzon R, Saake B (2016) Anaerobic digestion of fines
from recovered paper processing—influence of fiber source, lignin and ash content on biogas
potential. Biores Technol 200:506–513. https://doi.org/10.1016/J.BIORTECH.2015.10.014
45. Tchobanoglous G, Theisen H, Vigil SA (1993) Integrated solid waste management engineering
principles and management issues (Issue 628 T3). McGraw-Hill
46. Themelis NJ, Kim YH (2002) Material and energy balances in a large-scale aerobic
bioconversion cell. Waste Manage Res 20(3):234–242
47. Tyagi VK, Fdez-Güelfo LA, Zhou Y, Álvarez-Gallego CJ, Garcia LIR, Ng WJ (2018) Anaerobic
co-digestion of organic fraction of municipal solid waste (OFMSW): Progress and challenges.
Renew Sustain Energy Rev 93:380–399. https://doi.org/10.1016/J.RSER.2018.05.051
48. Walker M, Zhang Y, Heaven S, Banks C (2009) Potential errors in the quantitative evaluation
of biogas production in anaerobic digestion processes. Biores Technol 100(24):6339–6346.
https://doi.org/10.1016/J.BIORTECH.2009.07.018
49. Wang TH, Li MH, Teng SP (2009) Bridging the gap between batch and column experiments:
a case study of Cs adsorption on granite. J Hazard Mater 161(1):409–415. https://doi.org/10.
1016/J.JHAZMAT.2008.03.112
50. Weaver JE (2013) Effect of inoculum source on the rate and extent of anaerobic biodegradation
51. Ximenes FA, Cowie AL, Barlaz MA (2018) The decay of engineered wood products and paper
excavated from landfills in Australia. Waste Manage 74:312–322. https://doi.org/10.1016/j.
wasman.2017.11.035
Decarbonized Natural Gas Supply Chain
with Low-Carbon Gaseous Fuels: A Life
Cycle Environmental and Economic
Assessment
Abstract Continuous growth in the economy has caused an increasing demand for
energy resulting in numerous environmental concerns. Despite the popularity gained
by renewable energy, certain economic activities still require fossil fuels. Among
existing fossil fuels, natural gas (NG) plays a critical role in ensuring Canada’s energy
security. However, the Canadian oil and gas sector is a major contributor to national
greenhouse gas emissions. Therefore, rigorous actions are required within the NG
industry to ensure sustainability in its operations. Hydrogen and renewable natural
gas (RNG) are identified as low-carbon gaseous fuels capable of decarbonizing the
NG supply chain. RNG has already been used in the market, whereas hydrogen is
gaining increased attention from utilities due to its ability to produce in higher capac-
ities than RNG. Moreover, hydrogen blending into NG systems is piloted worldwide
as an effort to reduce emissions from building heating and other carbon-intensive
applications in the energy sector. However, the feasibility of different NG supply
chain configurations coupled with low-carbon gaseous fuels is still under question
due to multiple economic and environmental factors. Therefore, this study attempts
to conduct a cradle-to-grave life cycle environmental and economic assessment of
different NG supply chain configurations coupled with hydrogen and RNG. A life
cycle thinking-based methodological framework is proposed to evaluate and compare
the different supply chain configurations. The framework is presented with a case
study for BC’s natural gas sector with six supply chain configurations for the Cana-
dian NG industry. The life cycle environmental and economic performance of the six configurations was evaluated using life cycle assessment and life cycle costing, and the results were integrated using an eco-efficiency analysis. According to the study results, replacing NG with RNG is shown to be the most desirable option. However, hydrogen blending with natural gas is still costly. Furthermore,
the costs and environmental impacts of hydrogen production vary with its produc-
tion method. Hydrogen production with electrolysis has lower impacts compared
to hydrogen production with steam methane reforming (SMR). The findings from
this study are geared toward enabling decision-makers and investors to gain a more
holistic view of investment decisions related to green energy initiatives in the NG
sector.
Keywords Life cycle assessment · Life cycle cost · Low-carbon gaseous fuels ·
Hydrogen · Renewable natural gas · Decarbonization
1 Introduction
Continuous economic growth has caused an increasing demand for energy resulting
in numerous environmental concerns. Therefore, nations worldwide have pledged to
work toward zero carbon economies. As a leader in cleaner energy, Canada has set
national and provincial emission targets to reduce carbon emissions by 40% below
2007 levels by 2030, with the plan to achieve net zero by 2050 [3]. In addition,
the Pan-Canadian Framework on Clean Growth and Climate Change has set some
key actions to achieve these emission goals. These actions are applicable to six key
economic sectors: electricity, buildings, transport, heavy industry, agriculture, and oil
and gas. Among them, the Canadian oil and gas sector emits the most GHGs, being responsible for 191 million tonnes of CO2 eq in 2019 alone [8, 26]. Therefore,
the oil and gas sector demands a special focus on national decarbonizing efforts.
Natural gas (NG), in particular, plays and will continue to play a crucial part in the Canadian energy sector as a cleaner fossil fuel option. However, NG still
emits significant volumes of GHG from its operations spanning from upstream to
downstream. Therefore, the industry faces financial, social, and political pressures
in cutting down its supply chain emissions. Hence, a significant shift in NG business
operations is required to achieve industrial emission targets and remain competitive
in the energy market.
The NG industry has already identified and applied some avenues for transi-
tioning to sustainable supply chain operations. New investments in NG systems are
increasing in the industry due to the urgency of reducing GHG emission levels. There-
fore, enterprises invest heavily in diversifying their portfolio with cleaner gaseous
fuels such as hydrogen and renewable natural gas (RNG). For example, one of the
low-carbon energy projects of Enbridge Gas involves blending a small amount of
hydrogen gas into their existing NG network [16]. Moreover, FortisBC Inc. offers
the option to use RNG instead of conventional NG [18]. According to the FortisBC
Clean Growth Pathway to 2050, the NG industry decarbonization efforts include
decarbonizing pipeline gas with RNG, hydrogen, and integrating carbon capture and
storage into the supply chain [6]. However, implementing these new technologies
and strategies will require massive changes to the industry’s infrastructure [9].
The feasibility and sustenance of the above-mentioned decarbonization pathways
depend on various macro and micro-environmental factors. The macro-economic
factors that affect the industry’s competitive landscape come from socio-economic,
political, technical, and environmental pillars. Moreover, any investment decision
may carry a certain level of business risks and is often subjected to uncertainty.
Therefore, decision-making related to the decarbonization of the industry is a
complex task that requires comprehensive assessments to determine the feasibility
and sustainability of an investment decision.
Several studies in the published literature evaluate the GHG emission impacts of
the NG supply chain from production to end-use with a life cycle perspective [20,
22, 29]. However, none of the referred studies has conducted a comprehensive, integrated performance comparison of new business avenues, such as blending or replacing natural gas with cleaner gaseous fuels, against improved natural gas supply chain pathways. Environmental impact assessments of NG supply chains should not be limited to emissions, yet the referred literature has rarely looked beyond GHG emissions. Moreover, most supply chain cost assessments have concentrated only on operational costs, whereas it is essential to conduct economic evaluations from a life cycle perspective to determine the true costs and benefits.
The aim of the current study is to quantify the life cycle economic and environ-
mental impacts of new decarbonization avenues of the natural gas supply chain with
hydrogen and renewable natural gas (RNG). Scenario-based life cycle economic and environmental assessments were conducted. The eco-efficiencies of new busi-
ness avenues were evaluated by integrating the life cycle environmental and economic
performances. Based on the study results, the cost of emission reduction in the defined
gaseous fuel supply chain pathways is discussed. The findings will be useful for
municipalities, utility providers, and investors in making investment decisions for
cleaner gaseous fuel supply chains.
2 Methodology
The study assesses the life cycle economic and environmental benefits of different
supply chain configurations that replace natural gas supply with cleaner gaseous
fuels such as renewable natural gas (RNG) and hydrogen. Six gaseous fuel supply
chain configurations to decarbonize the conventional natural gas supply chain were
defined initially. Each decarbonization option's life cycle economic and environmental impacts were evaluated using life cycle assessment (LCA) and life cycle
cost (LCC) tools. The economic and environmental impacts were further integrated
to determine the eco-efficiency of the proposed gaseous fuel supply chain path-
ways. Figure 1 provides the methodological framework proposed to assess the life
cycle environmental and economic performance of the gaseous fuel supply chain
configurations.
The study considered six scenarios, including the business-as-usual (BAU) scenario, to analyze and compare the costs and benefits of replacing NG with RNG and hydrogen. The BAU (scenario 1) is the conventional production of natural gas.
Scenario 2 represents renewable natural gas (RNG) production. Scenario 3 indicates
hydrogen production with steam methane reforming (SMR), while scenario 4 is the
production of green hydrogen with electrolysis using grid electricity. Scenarios 5 and
6 are hythane production, a blend of natural gas and hydrogen. The blending limit, 85% natural gas and 15% hydrogen, was determined from the literature [18]. The hydrogen in
scenarios 5 and 6 is produced using the SMR and electrolysis methods, respectively
(Table 1).
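A side note on the hythane scenarios: when the blending limit is specified by volume, as is typical for pipeline blending limits, 15% hydrogen supplies a much smaller share of the delivered energy because hydrogen's volumetric heating value is roughly one-third that of natural gas. The short sketch below makes that arithmetic explicit using approximate, assumed heating values rather than figures taken from this study.

```python
# Approximate higher heating values (MJ per normal cubic metre); assumed typical values, not study data
HHV_NG_MJ_M3 = 38.0
HHV_H2_MJ_M3 = 12.7

def hydrogen_energy_share(vol_fraction_h2: float) -> float:
    """Energy share of hydrogen in an NG/H2 blend specified by volume fraction."""
    e_h2 = vol_fraction_h2 * HHV_H2_MJ_M3
    e_ng = (1.0 - vol_fraction_h2) * HHV_NG_MJ_M3
    return e_h2 / (e_h2 + e_ng)

print(f"15 vol% H2 supplies only ~{hydrogen_energy_share(0.15):.1%} of the blend's energy")
```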
A supply chain is deemed to have good environmental performance if its resources are efficiently utilized, waste is reduced and recycled, and emissions and pollution are mitigated [1]. The LCA tool is used extensively in gaseous fuel industries to evaluate their upstream and downstream operations [14, 27, 28].
The net GHG emission reduction of replacing the conventional natural gas heating
system with renewable natural gas (RNG) was assessed through the life cycle assess-
ment (LCA) tool. The software used was SimaPro. Similarly, the LCA was applied
to the remaining scenarios for comparison. The functional unit was 1 MJ of energy
generated over 40 years. The life cycle inventory data requirement is given in Table
2. The ReCiPe midpoint method was used for the impact assessment. The supply
chain stages up to fuel distribution were considered in the assessment to determine
the impacts that need to be borne by the utility providers. The considered supply
chain stages are shown in Fig. 2.
The economic impact assessment was conducted considering the overall life cycle.
The life cycle cost (LCC) of each option was calculated by adding the present values
of each cost element. Equation (1) was used to calculate the LCC [15],
$$\mathrm{LCC} = \mathrm{IC} + \sum_{i=1}^{n} \mathrm{FOC} + \sum_{i=1}^{n} \mathrm{VOC} + \mathrm{COD} \quad (1)$$
where IC is the investment cost, FOC is the fixed operating cost, VOC is the variable operating cost, and COD is the cost of disposal. The cost of disposal was assumed to be negligible compared to the investment costs for scenarios 1 and 2.
The present value of one-time cash flow was calculated using Eq. (2) [12];
$$\mathrm{PV} = \frac{FV}{(1+r)^{t}} \quad (2)$$
where FV is the future cash flow, r is the discount rate, and t is the time period in years.
The present value of recurring cash flows was calculated using Eq. (3) [12];
$$\mathrm{PV_A} = \frac{A\left[(1+r)^{t} - 1\right]}{r(1+r)^{t}} \quad (3)$$
where A is the annually recurring cash flow.
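A minimal sketch of how Eqs. (1)–(3) combine into a life cycle cost is given below; the function names and the cost inputs are hypothetical placeholders, not the Table 3 or Table 4 values used in the study.

```python
def pv_single(fv: float, r: float, t: float) -> float:
    """Present value of a one-time future cash flow, Eq. (2)."""
    return fv / (1.0 + r) ** t

def pv_annuity(a: float, r: float, t: float) -> float:
    """Present value of an annually recurring cash flow, Eq. (3)."""
    return a * ((1.0 + r) ** t - 1.0) / (r * (1.0 + r) ** t)

def life_cycle_cost(ic, foc_annual, voc_annual, cod, r, years):
    """Eq. (1): LCC = IC + PV(fixed O&M) + PV(variable O&M) + PV(cost of disposal)."""
    return (ic
            + pv_annuity(foc_annual, r, years)
            + pv_annuity(voc_annual, r, years)
            + pv_single(cod, r, years))

# Hypothetical inputs: $10 M capital, $0.2 M/yr fixed O&M, $0.5 M/yr variable O&M,
# $1 M disposal at year 40, 6% discount rate -- placeholders only.
print(f"LCC = ${life_cycle_cost(10e6, 0.2e6, 0.5e6, 1e6, 0.06, 40):,.0f}")
```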
The data for the economic assessment were obtained from the literature [2, 21, 25]. The investment costs (IC) include the cost of the equipment and its installation. The IC and fixed O&M costs are given in Table 3.
Additionally, hydrogen production by electrolysis was assumed to be powered by grid electricity. The revenue data for gaseous fuel sales were obtained from utility providers [5]. Renewable natural gas costs were calculated based on FortisBC rates. The selected RNG blend was assumed to be 100%, with an annual premium of $629.92 for an average usage of 90 GJ per year [5].
2.4 Eco-efficiency
$$\frac{E}{C}\ \mathrm{ratio} = \frac{\mathrm{LCC_{score}}}{\mathrm{LCA_{score}}} \quad (4)$$
The current study used the eco-efficiency parameter to evaluate the NG supply
chain decarbonization scenarios based on combined economic and environmental
performance.
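Read this way, Eq. (4) is simply the economic (LCC-based) score divided by the environmental (LCA-based) score; the snippet below shows the calculation for made-up normalized scores rather than the study's scenario results.

```python
def eco_efficiency_ratio(lcc_score: float, lca_score: float) -> float:
    """Eq. (4): economic score divided by environmental (LCA) score; higher means
    more economic value delivered per unit of environmental impact."""
    return lcc_score / lca_score

# Made-up normalized scores for two hypothetical options (not the study's scenarios)
options = {"option A": (1.2, 0.8), "option B": (0.9, 1.1)}
for name, (lcc, lca) in options.items():
    print(f"{name}: eco-efficiency ratio = {eco_efficiency_ratio(lcc, lca):.2f}")
```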
This section discusses in detail the results obtained for the decarbonization scenarios {scenario 2 (100% RNG), 3 (100% SMR hydrogen), 4 (100% electrolysis hydrogen), 5 (85% NG and 15% SMR hydrogen), and 6 (85% NG and 15% electrolysis hydrogen)} in comparison with the business-as-usual case.
The major focus among the stakeholders is to reduce the life cycle emissions of
the industrial operations. Figure 3 shows the upstream industrial life cycle emis-
sions of the business-as-usual (BAU) case (conventional natural gas: scenario 1)
along with overall emissions of the proposed decarbonization strategies. Moreover,
the chart provides the impact on global warming with the other decarbonization
scenarios compared to the BAU. Scenario 2, where NG is replaced with RNG, has
the lowest impact on climate change. This is because of the lower emissions during
the fuel production process compared to conventional natural gas production. The
second lowest came from 100% hydrogen production with mercury electrolyzers
(scenario 4), which is powered by the BC grid. The highest emissions came from
scenario 3, where 100% hydrogen fuel is produced from steam methane reforming.
That is because the overall life cycle emissions include emissions from natural gas
production, which is a raw material for the process.
The environmental performance assessment with the LCA tool provides further
information on life cycle impacts in addition to the global warming potential. Figure 4
depicts the environmental performance of the scenarios compared to the BAU,
with impact categories, namely global warming potential, damage to the ecosystem,
resource depletion, and risk to human health. Scenario 2 (RNG) performs the best
with the normalized value, with a lower environmental impact under all the categories
compared to the BAU. Scenarios 4 and 6, which are hydrogen with electrolysis, and
a blend of natural gas and hydrogen with electrolysis, have a lower impact on global
warming compared to the BAU. However, both scenarios show higher impacts than the BAU in all other categories. This is because the electrolyzer modeled is a mercury-type electrolyzer, which requires more resources and affects human health through the waste
streams it generates. Hydrogen production with SMR shows higher global warming
potential. Yet, it performs better with regard to the damage to the ecosystem and
human health.
The overall life cycle environmental performance of each of the scenarios is given
in Table 6 in Appendix.
In addition to the life cycle environmental performance assessments, the finan-
cial performance of new business ventures is an essential factor to consider in
Fig. 4 Life cycle performance of decarbonization options compared to the BAU case (scenario 1)
decision-making. Hence, a further life cycle cost assessment was conducted on the
six scenarios.
Life cycle cost integrates the following cost elements, namely initial capital,
decommissioning cost, salvage values at the end of life, and fixed and variable oper-
ating costs. Table 4 provides each cost element’s present value and the life cycle cost.
The life cycle cost of hydrogen production is the highest, largely because the technology is still novel. However, once the technology matures, these cost values are expected to drop.
For comparison, the above cost figures in Table 4 were compared against BAU
to determine the cost of emission reduction with the defined NG supply chain
decarbonization scenarios. The results are given in Table 5 and Fig. 5.
According to Fig. 5, life cycle emission savings can be obtained from scenarios
2, 4, and 6. However, when the upstream emission of hythane (SMR hydrogen with
Fig. 5 Life cycle emission reduction and incremental LCC of NG supply chain decarbonization strategies
natural gas) and 100% hydrogen are considered, they do not guarantee any emission
reduction. According to the literature, an emission reduction with hydrogen is obtained only at the end-use stage. Moreover, the literature indicates that hythane transport emissions are three times higher than those of 100% natural gas transmission [17].
When it comes to life cycle costs, every decarbonization strategy comes with
an additional financial burden to the gaseous fuel utility provider. The highest of all
comes if the industry replaces natural gas with 100% hydrogen produced from SMR.
This is followed by 100% hydrogen production with electrolysis powered by a grid.
The lowest incremental LCC is with replacing NG with RNG.
For further comparison, the environmental and economic performance of the
decarbonization scenarios against conventional natural gas were combined into a
single performance parameter, namely life cycle cost of emission reduction. As
given in Table 5, the cost of emission reduction is highest for hydrogen produc-
tion with SMR. It is followed by hythane, where hydrogen produced from SMR
(15%) is blended with natural gas (85%). The lowest cost of emission reduction can
be achieved if NG is replaced with 100% RNG. However, the feasibility of RNG
production can be questionable with the lower availability of biomass required for
RNG production. In that regard, hydrogen produced from electrolysis can be shown
as a desirable option when blended with conventional natural gas.
Moreover, electrolysis is a novel technology, which results in higher costs at
the current state. However, when it reaches commercial maturity, these costs are
expected to go below the current levels. Yet, the high level of impact on the ecosystem
and human health makes it undesirable compared with the other options. There-
fore, further improvements are currently under research in electrolysis technology
development [7].
The cost of emission reduction only integrates the life cycle emissions with the life
cycle cost performance. However, for a further holistic overview, the eco-efficiency
ratio of the decarbonization strategies was evaluated. To calculate the eco-efficiency
Fig. 6 Eco-efficiency ratios of the six scenarios
ratio, the revenue from the sale of gaseous fuels was included in the life cycle cost
figure to calculate the life cycle revenue. The eco-efficiency is depicted in Fig. 6.
According to the figure, the best-performing option is scenario 2, where natural
gas is 100% replaced with renewable natural gas. This is because RNG is sold at a
premium price to cover its costs; thus, it generates positive revenue over the life cycle, since its costs are comparatively lower than those of the other decarbonization strategies.
Moreover, as depicted in Fig. 4, the scenario is the best-performing option for envi-
ronmental performance. Hence, it is clear that RNG is a better option among the
considered strategies. However, the results may change with the blending propor-
tions of RNG with NG. In addition, it is important to assess the resource availability
for RNG production since biomass may not be enough to cater to the energy demand
of a region. From that, the optimum blending percentages should be determined in
future research. On the other hand, hydrogen still requires considerable research and
development to improve its cost and environmental performance.
In addition to the considered decarbonization options, a few other methods have been proposed in the literature to further reduce emissions from hydrogen production. One is integrating carbon capture with SMR; another is using renewable energy to power the electrolysis process. In the current study, electrolysis is powered by the BC grid, which has a comparatively green grid mix. If the electrolysis process were powered by a grid with a higher fossil fuel composition, the emissions could be higher than those obtained in the current analysis. Hence, powering electrolysis with renewable energy could provide further environmental benefits.
Another business avenue for the Canadian natural gas industry would be the export
of liquefied natural gas to generate internationally transferred mitigation outcomes
(ITMO). An ITMO is defined as the reduction of the absolute tons of CO2 equivalents
from the emission account of a particular party and the addition of that equivalent
amount to the emissions account of another party [10]. This decarbonization option
was ruled out of the current scenario analysis since it has additional elements limiting
the option from being compared with the scenarios assessed in the current study. Yet,
generating ITMOs by exporting liquefied natural gas (LNG) to Asian markets that
rely heavily on coal-based power enables Canada to gain emission credits to achieve
national emission targets [11].
According to the evaluations, RNG can reduce GHG emissions in the natural gas
supply chain. However, RNG system costs and resource availability prevent it from becoming the preferred option under current conditions. Therefore, further research should focus on determining the optimum blend of RNG and NG to cater to regional energy needs with the available resources. Furthermore, at the current technology status, 100% hydrogen cannot be used in most end-uses other than the transport sector. Therefore, hythane is the more technically feasible option for using hydrogen in common energy sector applications such as residential energy needs. However, the green premium attached to it, the lack of awareness, and the lack of infrastructure prevent consumers from widely utilizing hydrogen gas. These concerns can be alleviated by economic incentives, such as carbon credit mechanisms and tax credits, that assign a financial value to the reduced emissions, and by raising public awareness. Moreover, since RNG and hydrogen are emerging commodities,
there is a lack of well-established tariff structures for RNG and hydrogen project
investments. Therefore, future research should focus on devising policy and regula-
tory measures to reduce greener gaseous fuel costs and define economically viable
gas rates.
4 Conclusion
The NG industry has secured a prominent place in the energy sector to ensure energy
security due to its comparatively lower emission intensity. However, certain supply
chain operations are highly energy and resource-intensive. Therefore, the industry has
explored decarbonizing opportunities such as integrating methane abatement tech-
nologies, hydrogen gas blending and replacing NG use with RNG and hydrogen fuel.
However, implementing these new technologies and strategies will require massive
changes to the industry’s infrastructure [9]. Therefore, it is important to assess the
feasibility and sustainability of decarbonization strategies in gaseous fuel industries. Ensuring a sustainable supply chain for gaseous fuel industries such as natural gas, RNG, and hydrogen is challenging; therefore, the right assessment tools are essential. The suitability of a tool for assessing the supply chain depends on multiple factors, including [19]:
• the ability to analyze sustainability impacts,
• the ability to deliver a quantitative assessment of impacts to the greatest extent possible,
• the simplicity and transparency of the method.
The current study employed LCA, LCC, and eco-efficiency assessment tools to
assess the life cycle environmental and economic performance of different decar-
bonized natural gas supply chain configurations. From the results, it is clear that
the analyzed decarbonization options have the potential to reduce life cycle GHG emissions along the gaseous fuel supply chains. Among the assessed options, RNG has been shown to be a worthwhile option to move forward with; it has an untapped potential to reduce emissions from present natural gas dependence. Moreover, hydrogen and hydrogen blending were also shown to reduce greenhouse gas emissions in the natural gas supply chain. However, technological advancements and clear incentives and regulatory mechanisms are required to reduce their financial costs. The findings from the
study would help utility providers, building designers, and owners to make informed
decisions related to building energy infrastructure.
Appendix
See Table 6.
Table 6 (continued)
Impact category | Unit | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6
Freshwater ecotoxicity | kg 1,4-DCB | 1.3E-04 | 6.0E-05 | 5.8E-05 | 1.5E-03 | 1.2E-04 | 3.3E-04
Marine ecotoxicity | kg 1,4-DCB | 1.7E-04 | 8.3E-05 | 7.4E-05 | 2.0E-03 | 1.5E-04 | 4.5E-04
Human carcinogenic toxicity | kg 1,4-DCB | 5.9E-05 | 8.6E-05 | 3.0E-05 | 3.5E-03 | 5.4E-05 | 5.7E-04
Human non-carcinogenic toxicity | kg 1,4-DCB | 3.8E-03 | 1.3E-03 | 1.7E-03 | 3.3E-02 | 3.5E-03 | 8.2E-03
Land use | m2a crop eq | 8.5E-06 | 7.1E-05 | 8.3E-06 | 5.5E-04 | 8.5E-06 | 9.0E-05
Mineral resource scarcity | kg Cu eq | 4.4E-06 | 6.5E-06 | 2.0E-06 | 5.9E-05 | 4.1E-06 | 1.3E-05
Fossil resource scarcity | kg oil eq | 1.7E-02 | 9.7E-04 | 7.0E-03 | 9.7E-03 | 1.5E-02 | 1.6E-02
Water consumption | m3 | 4.2E-06 | 4.2E-05 | 2.2E-03 | 1.2E-03 | 3.3E-04 | 1.8E-04
References
1. Al-Haidous S, Al-Ansari T (2020) Sustainable liquefied natural gas supply chain management:
a review of quantitative models. Sustainability (Switzerland) 12(1):1–23
2. Biswas WK, Engelbrecht D, Rosano M (2013) Carbon footprint assessment of Western Australian LNG production and export to the Chinese market. Int J Prod Lifecycle Manag
6(4):339–356
3. CleanBC (2020) 2020 climate change accountability report
4. FortisBC (2019) Clean growth pathway to 2050. https://www.cdn.fortisbc.com/libraries/docs/
default-source/about-us-documents/clean-growth-pathway-brochure.pdf?sfvrsn=1a4b811f_2
5. FortisBC (2020) Renewable natural gas rates. https://www.fortisbc.com/services/sustainable-
energy-options/renewable-natural-gas/renewable-natural-gas-rates (March 3, 2021)
6. FORTISBC (2019) The determination of change in GHG emissions resulting from LNG export
from FortisBC, Tilbury Plant, to Industries in China Prepared For. 01:1–35
7. Global CCS Institute (2017) CCS: a critical technology for saving our environ-
ment. http://www.globalccsinstitute.com/sites/www.globalccsinstitute.com/files/uploads/glo
bal-status/1-0_4529_CCS_Global_Status_Book_layout-WAW_spreads.pdf
8. Greenhouse Gas Emissions. https://www.canada.ca/en/environment-climate-change/services/
environmental-indicators/greenhouse-gas-emissions.html (July 29, 2019)
9. Guidehouse (2020) Pathways for British Columbia to achieve its GHG reduction
goals. (August):31. https://www.cdn.fortisbc.com/libraries/docs/default-source/about-us-doc
uments/guidehouse-report.pdf?sfvrsn=dbb70958_4
10. International Emissions Trading Association (IETA) (2017) Article 6 of the Paris agreement
implementation guidance an IETA “Straw Proposal”
11. International Institute for Sustainable Development (2020) Current status of article 6 of the
Paris agreement: internationally transferred mitigation outcomes (ITMOs)
12. Karunathilake H et al (2018) Renewable energy integration into community energy systems: a
case study of new urban residential development. J Clean Prod 173:292–307. https://doi.org/
10.1016/j.jclepro.2016.10.067
13. Kong Z, Dong X, Liu G (2016) Coal-based synthetic natural gas vs. imported natural gas in
China: a net energy perspective. J Clean Prod 131:690–701. https://doi.org/10.1016/j.jclepro.
2016.04.111
14. Koroneos C, Dompros A, Roumbas G, Moussiopoulos N (2004) Life cycle assessment of
hydrogen fuel production processes. Int J Hydrogen Energy 29(14):1443–1450
15. Kotagodahetti RK (2020) Community level emission reduction with carbon capturing: a life
cycle thinking based approach. The University of British Columbia (Okanagan Campus)
16. Low-Carbon Energy Project | Enbridge Gas. https://www.enbridgegas.com/about-enbridge-
gas/projects/low-carbon-energy
17. Di Lullo G, Oni AO, Kumar A (2021) Blending blue hydrogen with natural gas for direct
consumption: examining the effect of hydrogen concentration on transportation and well-to-
combustion greenhouse gas emissions. Int J Hydrogen Energy 46(36):19202–19216. https://
doi.org/10.1016/j.ijhydene.2021.03.062
18. Melaina MW, Antonia O, Penev M (2013) Blending hydrogen into natural gas pipelines
networks: a review of key issues. Technical Report NREL/TP-500–51995 (March)
19. National Research Council (2011) Sustainability and the U.S. EPA Sustainability and the U.S.
EPA. The National Academic Press, Washington, DC
20. Nie Y et al (2020) Greenhouse-gas emissions of Canadian liquefied natural gas for use in
China: comparison and synthesis of three independent life cycle assessments. J Clean Prod
258:120701. https://doi.org/10.1016/j.jclepro.2020.120701
21. NREL. Data and Tools. https://www.nrel.gov/research/data-tools.html
22. Okamura T, Furukawa M, Ishitani H (2007) Future forecast for life-cycle greenhouse gas
emissions of LNG and city gas 13A. Appl Energy 84(11):1136–1149
23. Perera P, Hewage K, Shahria Alarti M, Sadiq R (2017) Eco-efficiency analysis of recycled
material for residential construction: a case study of Okanagan (BC). In: 6th CSCE-CRC
international construction specialty conference 2017—held as part of the annual conference and general meeting 2017, vol 1, pp 525–534
24. Prabatha T (2022) A life cycle thinking-based energy retrofit planning approach for existing
residential buildings. The University of British Columbia
25. Raj R et al (2016) A techno-economic assessment of the liquefied natural gas (LNG) production
facilities in western Canada. Sustainable Energy Technol Assess 18:140–152. https://doi.org/
10.1016/j.seta.2016.10.005
26. RBC (2021) The $ 2 Trillion transition: Canada’s road to net zero the $ 2 trillion transition:
Canada’s road to net zero, pp 1–26
27. Sharafian A, Blomerus P, Mérida W (2019) Natural gas as a ship fuel: assessment of greenhouse
gas and air pollutant reduction potential. Energy Policy 131(x):332–346. https://doi.org/10.
1016/j.enpol.2019.05.015
28. Weber CL, Clavin C (2012) Life cycle carbon footprint of shale gas: review of evidence and
implications. Environ Sci Technol 46(11):5688–5695
29. Zarei J, Amin-Naseri MR, Khorasani AHF, Kashan AH (2020) A sustainable multi-objective
framework for designing and planning the supply chain of natural gas components. J Clean
Prod 259:120649. https://doi.org/10.1016/j.jclepro.2020.120649
An Optimization Model for Selecting
Combinations of Crops that Maximize
the Return Inside Self-sustainable
Greenhouses
Abstract The main purpose of this research is to outline the development of an opti-
mization model for selecting a combination of crops that have the maximum equiva-
lent annual cash flow (EACF) inside a self-sustainable greenhouse. A self-sustainable greenhouse must provide the total energy required for irrigation (pumping energy) for the selected combination of crops. Photovoltaic (PV) energy is chosen as the renewable and sustainable energy source. The research was conducted by establishing the required database, generating different scenarios, assessing the economic performance over the project life, formulating the optimization model using a genetic algorithm (GA), and implementing the model in Microsoft Excel. The database includes two main parts: data on the crops' requirements and
data on the PV pumping system. Typically, irrigation experts provide the required pumping energy, and the designer then sizes the PV system based on the given value and the available solar energy per m2. This leads to overdesign of the system and, consequently, higher cost. In this research, the impact factors, i.e., location, soil type, water source, crop types and their planting season, and PV water pumping (PVWP) system characteristics, were considered in developing the optimized model. As the PVWP system depends on the available solar energy, which fluctuates over time, its output (water) also fluctuates. Therefore, optimizing the relation between the output (pumping energy required) and the input (solar energy available) will meet the irrigation requirements in the best manner possible. This study has led to the development of a general model for optimally sizing a ground-mounted PV system that meets the energy needs for irrigation of a combination of crops that maximizes the return.
1 Introduction
Greenhouse production has many advantages over open-field production, such as better monitoring of climate conditions and proper maintenance techniques, which allow off-season, year-round production and higher yield. A greenhouse can produce up to 15 times more yield per unit area than open-field production [43]. Greenhouses are categorized according to the technology applied inside them. During the last 25 years, Egypt has applied new double-span greenhouse structures that have better ventilation and easier management. In the mid-1990s, investigators in Egypt started to use multi-span greenhouses, as double-span ones do not suit large-scale farming. During the last 15 years, farmers have adapted the Parron system to greenhouses by using taller wooden structures with more ventilation. They implement this technique to obtain better production of vegetables and tropical fruits inside greenhouses. Using wooden structures instead of steel structures reduces the capital cost of the project, motivating more investors to adopt protected farming [1]. In the 1980s, low-technology greenhouses started to shift to modern-technology greenhouses to reduce environmental hazards such as soil fertility degradation and groundwater pollution, which occur because of the excessive use of pesticides and chemicals inside low-technology greenhouses [2].
In addition, adequate and suitable irrigation water resources are essential for growing food crops; indeed, the reclamation of large areas of desert land faces barriers such as the lack of suitable water resources and the high cost of providing water. Although surface irrigation water is insufficient on a global scale to meet demand, there is an abundance of salt water in the Earth's seas and oceans, which cover two-thirds of the planet, as well as brackish water on land. Both may be adequately desalinated using techniques such as reverse osmosis, provided that sufficient energy is available. With sufficient energy, water can also be drawn from deep aquifers to the surface to irrigate crops, and its salinity reduced if need be. Hence, long-term development requires a sustainable source of energy to fill the gap in available irrigation water resources.
The agricultural sector's dependency on energy resources has increased, as in other sectors [7]. The reason is the modernization of many agricultural production operations, such as irrigation using new techniques to save time and reduce water loss [29]. Globally, irrigation pumps consume about 62,000,000 MWh of electricity annually [57]. Some areas do not have access to the grid and therefore have to find an alternative way to obtain electricity. Most of these areas depend mainly on diesel, gasoline, and kerosene pumps to pump water for irrigation, with diesel pumps being the most used for crop irrigation [33]. However, this might not be a proper solution, especially for remote areas, as the transportation and maintenance costs for pumps are relatively high [3]. Some experts estimate that the remaining fossil fuel will be fully used within 50 years, as the annual increase in energy consumption is assumed to be 1.4% from 2008 to 2035 [34]. This raises the importance of using renewable energy
sources. In particular, solar energy is expected to have an effective role in renewable
and sustainable energy development, especially in remote areas [26]. In Egypt, the annual global solar radiation ranges between 2000 and 3200 kWh m−2 from north to south, which gives great potential for applying PV systems [50]. According to the New and Renewable Energy Authority (NREA), the total installed capacities in Egypt
and Renewable Energy Authority (NREA), the total installed capacities in Egypt
are around 15 MW, and most of them are stand-alone systems to power different
applications such as lighting systems (27%), cell phone networks (25%), adver-
tising lighting systems (16%), communication systems (13%), cathodic protection
systems (7%), water pumping systems (6%), rural electrification, refrigeration, and
others (6%) [12]. Generally, the most abundant source of renewable energy in the areas of the world that suffer most from a shortage of irrigation water, such as the Sahara Desert region, is solar energy. This also raises the importance of installing PV systems to generate electricity to drive deep submersible pumps that draw water from boreholes. PV systems are characterized by durability, low carbon emissions, and easy, relatively low-cost maintenance [40]. The output of PV installations is continuously variable and depends on both the characteristics of the PV modules and their components and the instantaneous atmospheric conditions.
On the demand side, the amount of daily water requirements for irrigation fluc-
tuates widely with location, climate, season, soil type, water type, and crop type.
Hence if PV system is used to meet the irrigation needs of one single crop, it may
be dormant most of the time if this crop is seasonal, and highly oversized the rest
of the time if the summer and winter demands vary widely as the case with most
crops. Hence, this situation will be highly uneconomical and is also waste of energy
resources.
In this paper, an optimization model was developed to aid investigators in the agricultural industry in finding the combination of crops inside greenhouses that has the maximum equivalent annual net cash flow (EACF).
2 Literature Review
Optimizing the annual return of greenhouse farming is achieved through proper planning, resource allocation, and optimized use of resources. Due to the complexity of the sector, most researchers use a linear programming approach to determine the optimal crop combinations and the feasibility of the decision variables [8]. Linear programming starts from a feasible solution and uses pivot operations to maintain feasibility while improving the value of the objective function. Pap [45] established a mathematical model of an optimal agricultural production plan, applied to a small farm in western Vojvodina, using linear programming to maximize the total return by adopting a crop rotation policy. The results indicate that applying a rotation policy yields higher revenues compared with traditional cropping practices [45]. The Food and Agriculture Organization estimates that
food production will need to increase by 70% to feed the population in 2050 [28]. This leads researchers to focus not only on crop yield but also on sustainable growth.
Sarker and Quaddus [53] compared two versions of the crop planning problem: a single-objective linear programming approach and a multi-objective approach with conflicting objectives. The linear programming approach yields one optimal solution, whereas the multi-objective technique offers a set of possible solutions and thus provides a better vision for the crop planning program, with the user choosing the appropriate solution [53]. This raises the importance of optimizing the net benefit together with environmental factors such as water and energy to reach the optimum solution.
Optimizing water use helps increase the annual return as well as the efficiency of the resources used; therefore, the authors of [36] developed a multi-objective water allocation model to optimize the economic, social, and environmental benefits of an area. The results showed that the model provides a sustainable approach to allocating water resources in regions suffering from water scarcity [36]. However, this requires accurate estimation of both the irrigation water requirement and the available water in order to calculate the energy required to draw water from the source. Studying the regional climate is essential for accurate calculation of the irrigation water requirements.
Most of the previous models were developed using a linear programming approach; however, Hosny et al. [27] introduced a comprehensive database and a multi-objective optimization model using a genetic algorithm (GA) that helps the user find an efficient and economic methodology for the best utilization of land for either open-field or greenhouse farming. In that model, however, the greenhouse farming option depends on data that must be entered by the user [27], and greenhouse farming also implies higher energy consumption, i.e., a higher cost of production. Therefore, developing a comprehensive database and balancing crop planning, resource use, production yield, and energy consumption is essential to reach the maximum net benefit.
Solar energy is collected for free, but the appropriate equipment is needed to obtain an optimized system, and various methods exist for optimizing stand-alone PV systems [23, 31, 32, 46]. Previous studies analyzed PV system performance based on two important factors: the regional climate (i.e., geographical and meteorological factors) and the characteristics of the available area (i.e., on-site installation factors and the available area) [4, 22, 31]. Some studies maximized PV performance by considering only the azimuth angle of the installed panels (ψ) and concluded that a southwest orientation gives the best year-round performance in the Northern hemisphere [51]. Other studies maximized the average year-round performance by optimizing the slope of the installed panels (β) and revealed that the optimal
angle for the installed panels equals the site latitude [54]. Gopinathan et al. [24] developed a model to maximize performance by introducing more than one factor: the solar irradiance was estimated based on ψ (−90° to 90°) and β (0°–90°), and the results showed that the optimal β changed as ψ moved away from due south (0°) [24]. However, most of the previous studies ignore the effect of ambient temperature on PV performance.
Several studies have identified photovoltaic water pumping (PVWP) systems as an attractive application of sustainable energy for irrigation. Hamidat et al. [25] established a program to test the performance of PVWP systems used to irrigate regions in the Sahara and concluded that PVWP systems are suitable for crop irrigation in small-scale applications [25]. Cuadros et al. [15] illustrated how to design a PVWP system with drip irrigation to provide an effective irrigation water system for an olive orchard in Spain [15]. Recent research on PVWP systems has focused mainly on system modeling, improvements in the system components (PV modules, controllers, inverters), performance of PVWP systems under various operating heads, and their effect on the environment [35, 38, 41]. Glasnovic and Margeta [22] proposed a model for optimizing PVWP size taking into consideration the required irrigation water and the available water; they concluded that dynamic models give more realistic results, although the economic performance via lifecycle cost analysis (LCCA) was not covered [22]. Kelley et al. studied the technical and economic feasibility of implementing PVWP systems for irrigation and concluded that the only barrier to applying them is the high price of the PV modules [31]. Therefore, an LCCA is required to demonstrate the viability of the PVWP system.
The proposed model targets the best utilization of the land by maximizing the return, which is achieved by balancing the crops' profit, the required water, and the size of the PVWP system. Once the user enters the necessary inputs, the model searches for the best combination of crops, the starting date for each crop, the area occupied by each crop, and the type of panel (ToP). The purpose of the model is to find the maximum equivalent annual net cash flow (EACF) through a dynamic simulation of the PVWP system, solar irradiance, required irrigation water, and crop yield response to season. In addition, the economic aspects of the PVWP systems are investigated: IIC and LCCA are used to compare different water pumping techniques, and the EACF is used to evaluate the net benefit gained, as detailed in the following sections.
The developed model consists of five modules: (1) inputs module, (2) database module, (3) calculations module, (4) optimization module, and (5) output module. First, the user enters all the required project information through the inputs module. Then, the database module is filtered according to the input data. Next, the calculations module uses the filtered database and the entered inputs to compute the parameters needed to calculate the crops' net benefits and to size the PVWP system. After that, the optimization module uses these data to maximize the objective function, the EACF, by adjusting the model variables, and the results are generated in the output module. A conceptual sketch of this flow is given below.
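To make the data flow among the five modules concrete, the following Python sketch mirrors the sequence just described. The actual model is implemented in Microsoft Excel with the Evolver add-in; every function name and data value below is a hypothetical illustration rather than the authors' implementation.

```python
# Hypothetical sketch of the five-module flow; toy data only.

def inputs_module():
    # (1) user-supplied project information
    return {"governorate": "Luxor", "area_feddan": 5, "budget_egp": 4_700_000}

def database_module(inputs):
    # (2) crop/PVWP records, filtered by the user inputs in the real model
    return [{"name": "Tomato I"}, {"name": "Mango"}]

def calculations_module(inputs, crops):
    # (3) net benefits and PVWP sizing; a flat placeholder value here
    return {c["name"]: 1000.0 for c in crops}

def optimization_module(net_benefits):
    # (4) Evolver's GA searches crop mixes, dates, areas and panel type;
    #     picking the single best entry stands in for that search here
    return max(net_benefits, key=net_benefits.get)

# (5) output module: report the selected solution
inputs = inputs_module()
crops = database_module(inputs)
print("Selected (toy) crop:", optimization_module(calculations_module(inputs, crops)))
```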
The model allows the user to enter the required data, which the model then reads and uses to optimize the EACF and the PVWP system for the combination of crops it selects. The model has been developed using Microsoft Excel and Evolver 8.2, an Excel add-in [44], and consists of several worksheets. To insert the inputs, the user opens the worksheet named "model inputs" and fills in the required data as indicated in Table 1; the input prompts guide the user to place the correct data in the right cells. The model reads the input data entered by the user and shares them among the other worksheets to estimate the essential parameters for the optimization, such as the reference evapotranspiration (ETo) and the hourly average solar radiation (I).
The database module consists of two main parts: data on crops' requirements and data on the PVWP system.
Data on Crops' Requirements
The first part of the crops' requirements database contains data about the crops and trees that can be grown inside greenhouses in Egypt. The total growing duration for a pool of crops according to planting season was collected [13, 27, 48, 56]. In addition, the crop coefficient (KC) value for each crop during the different growing stages was identified from CROPWAT, which is freely distributed by the Food and Agriculture Organization (FAO) of the United Nations [30]. The crops' costs were also established according to the Egyptian market; they include the cost of making rows, seeds, fixed and temporary workers and engineers, composts, fertilizers, and pesticides. The second part includes data about the greenhouses and their costs. In this study, the analysis was based on single-span, medium-technology greenhouses.
Data on PVWP Systems
This part includes two main sections: data required for the PV system and data on the water pumping system. The PV system database involves two main parts. The first covers the regional meteorological and geographical information for 27 governorates in Egypt collected from the National Aeronautics and Space Administration (NASA). The sunshine hours and global solar radiation values for these 27 locations, covering almost all regions of Egypt, are calculated and presented in the "database module." The yearly average sunshine duration is 12 h per day, and the global solar radiation in Egypt is found to be around 6.0 kWh m−2 day−1, which gives great potential for applying PV systems there.
The second part deals with the specifications of the PV modules. They are categorized into six groups according to their type (monocrystalline or polycrystalline) and number of cells (36, 60, or 72) as indicated in Table 1. The physical information for the panels, i.e., capacity, efficiency, dimensions, losses, average cost, temperature coefficient of power, PV transmittance, rated voltage, rated current, open circuit voltage, and short circuit current, was collected according to the Egyptian market.
Regarding the water pumping system, the selection of a proper pump in a solar water pumping system depends on the required discharge (Q) and the total head losses (HTE) [19]. Each pump type has its own efficiency, as presented in Table 2 [6]. Usually, the pump controller includes a maximum power point tracker (MPPT) to ensure that the solar array delivers its maximum power, and an AC-powered pump also has an inverter to convert the DC power to AC.
The calculation module consists of three models. The first model calculates the hydraulic energy (EH) by identifying the required discharge (Q) for the combination of crops and the total head losses (HTE). The second model sizes the PV system and gives the power output from the PV array, based on the location (latitude), slope (β), type of panels (ToP), azimuth of the panels (ψ), and ambient temperature (Ta), needed to power the water irrigation system. The third model analyzes the economic performance of the PVWP system using the EACF approach.
Hydraulic energy (EH): The hydraulic energy demand (EH) is determined from the plants' water requirements, the irrigation method, and the total head losses according to Eq. (1) [22, 55]:

$$E_H = \frac{\rho\, g\, Q_d\, H_{TE}}{36 \times 10^5\, \eta_F\, \eta_N} \tag{1}$$
where Qd is the total water demand (m3 day−1), EH is the hydraulic energy (kWh), ρ is the density of water (1000 kg m−3), g is the acceleration due to gravity (9.81 m s−2), ηF is the efficiency of the irrigation system (fractional losses), and ηN is the efficiency of the irrigation method (open channel, drip, trickle, or flood).
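As an illustration of Eq. (1), the short sketch below computes the daily hydraulic energy demand; the discharge, head, and efficiency values are assumed for illustration and are not results from this study.

```python
# Hydraulic energy demand per Eq. (1), returned in kWh.

RHO_W = 1000.0   # density of water, kg m^-3
G = 9.81         # acceleration due to gravity, m s^-2

def hydraulic_energy_kwh(q_d_m3, h_te_m, eta_f, eta_n):
    """E_H = rho * g * Q_d * H_TE / (36e5 * eta_F * eta_N)."""
    return RHO_W * G * q_d_m3 * h_te_m / (36e5 * eta_f * eta_n)

# Example: 60 m3/day lifted through 55 m with eta_F = 0.95 and eta_N = 0.90 (assumed)
print(round(hydraulic_energy_kwh(60.0, 55.0, 0.95, 0.90), 2), "kWh/day")
```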
The total water demand is the sum of the crop evapotranspiration (ETc), the leaching requirement (LR), and the irrigation water applied for soil disinfection and pre-transplant irrigation, i.e., 50 mm for soil disinfection (done every 1–2 years) and 20 mm for pre-transplant irrigation [9, 14, 30]. To calculate ETc using Eq. (2) [47], the reference evapotranspiration (ETo) was calculated according to the Almeria radiation method using Eq. (3) [39], and the KC values were extracted from CROPWAT.

$$ET_o = \begin{cases} (0.288 + 0.0019\,n)\, R_o\, \tau & \text{Julian day } n \le 220 \\ (1.339 - 0.00288\,n)\, R_o\, \tau & \text{Julian day } n > 220 \end{cases} \tag{3}$$
The leaching requirement (LR): the leaching requirement is the amount of water needed to remove excess salts that would otherwise cause a crop yield decrement. It was calculated using Eq. (4) [14], where ECiw is the electrical conductivity of the applied irrigation water and ECe is the soil salinity tolerated by the crop, measured on the soil saturation extract.

$$LR = \frac{EC_{iw}}{5\,EC_e - EC_{iw}} \tag{4}$$
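The water demand terms can be sketched as follows. Eq. (2) for ETc is not reproduced in this excerpt, so the standard FAO relation ETc = KC · ETo is assumed here; the Julian day, Ro, τ, and KC inputs are placeholders, while the conductivities follow the Luxor reference values quoted later in Table 3.

```python
# Reference ET via the Almeria radiation method (Eq. 3), crop ET via the
# assumed FAO relation ET_c = K_c * ET_o, and leaching requirement (Eq. 4).

def eto_almeria(n, r_o, tau):
    """ET_o for Julian day n; R_o and tau as defined for the method in [39]."""
    if n <= 220:
        return (0.288 + 0.0019 * n) * r_o * tau
    return (1.339 - 0.00288 * n) * r_o * tau

def leaching_requirement(ec_iw, ec_e):
    """LR = EC_iw / (5 * EC_e - EC_iw)."""
    return ec_iw / (5.0 * ec_e - ec_iw)

eto = eto_almeria(n=198, r_o=40.0, tau=0.18)     # placeholder inputs
etc = 1.15 * eto                                 # K_c = 1.15 assumed for this stage
lr = leaching_requirement(ec_iw=0.8, ec_e=1.1)   # dS/m values as in Table 3
print(round(etc, 2), "crop ET (units of R_o * tau);", round(lr, 3), "LR fraction")
```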
Moving to the total head, the typical head consists of the static head (HST), the kinetic head (HDT), and the friction head (HF). According to Ghoneim, the static head is the elevation head from the water surface level to the point of discharge, and the kinetic head represents the kinetic energy of the fluid (HDT = V²/2g), where V is the velocity and g is the acceleration due to gravity (9.81 m s−2). In addition, there is a friction loss that can be reduced by enlarging the pipe diameter, eliminating bends, and reducing the flow rate [21]. Therefore, the total head from the borehole, HTE, can be expressed by Eq. (5) [22], where the increment i takes the values i = 1 to N (N is the total number of time stages, decades) and HTE(i) is the total head lift (m).
Friction losses are calculated according to Eq. (6) [16], where f is the Darcy friction factor and l is the pipe length (m); the Darcy friction factor can be identified from the Moody chart. The minor friction losses are assumed to be 10%, according to the clean energy project analysis of 2005 [49].

$$H_{F(i)} = \frac{8\, f\, l\, Q^2}{\pi^2\, g\, d^5} \tag{6}$$
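A minimal sketch of the head calculation follows, combining Eq. (6) with the static and kinetic terms described above; the Darcy friction factor and the decision to apply the 10% minor-loss allowance to the friction term are assumptions for illustration.

```python
import math

G = 9.81  # m s^-2

def friction_head_m(f, l_m, q_m3s, d_m):
    """H_F = 8 f l Q^2 / (pi^2 g d^5), Eq. (6)."""
    return 8.0 * f * l_m * q_m3s**2 / (math.pi**2 * G * d_m**5)

def total_head_m(h_static_m, v_ms, f, l_m, q_m3s, d_m):
    h_kinetic = v_ms**2 / (2.0 * G)                           # V^2 / 2g
    h_friction = 1.10 * friction_head_m(f, l_m, q_m3s, d_m)   # +10% minor losses (assumed)
    return h_static_m + h_kinetic + h_friction

# Example with the Luxor reference pipe: 60 m long, 0.12 m diameter, v = 0.9 m/s
q = 0.9 * math.pi * 0.12**2 / 4.0          # discharge from velocity and pipe area
print(round(total_head_m(50.0, 0.9, f=0.02, l_m=60.0, q_m3s=q, d_m=0.12), 2), "m")
```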
Nominal power capacity of the PV system (Pel): the nominal capacity or peak power of the PV generator (Pel) under standard conditions is given by Eq. (7) [18, 22], considering the effect of the ambient temperature (Ta), where Pel is the nominal capacity (W), αP is the temperature coefficient of the PV module (/°C), TC is the cell temperature (°C), TC,STM is the cell temperature at standard test conditions (°C), ηS is the efficiency of the subsystem (inverter, motor, pump), and IT is the daily solar irradiation on the PV surface (kWh m−2).

$$P_{el} = \frac{1000}{1 - \alpha_P\,(T_C - T_{C,STM})} \times \frac{E_H}{\eta_S\, I_T} \tag{7}$$
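Eq. (7) can be sketched as below; the cell temperature, temperature coefficient, and irradiation values are illustrative assumptions, and αP is treated as a positive coefficient magnitude consistent with the reconstructed form of the equation.

```python
# Nominal PV capacity per Eq. (7).

def nominal_pv_power_w(e_h_kwh, i_t_kwh_m2, eta_s, alpha_p, t_c, t_c_stm=25.0):
    derate = 1.0 - alpha_p * (t_c - t_c_stm)   # temperature effect on the PV module
    return 1000.0 / derate * e_h_kwh / (eta_s * i_t_kwh_m2)

# Example: 10.5 kWh/day hydraulic demand, 7 kWh/m2/day on the tilted plane,
# subsystem efficiency = 0.43 (motor-pump) * 0.90 (inverter), cell temperature 55 degC
eta_s = 0.43 * 0.90
print(round(nominal_pv_power_w(10.5, 7.0, eta_s, alpha_p=0.004, t_c=55.0)), "Wp")
```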
Next, the Erbs correlation is used to calculate the diffuse fraction using Eq. (9) [20]. Then, the direct hourly solar radiation (Ib) is calculated by subtracting the diffuse hourly solar radiation (Id) from the total hourly solar radiation (I). After that, the HDKR model is used to estimate the collected solar radiation on the tilted surface (IT) using Eq. (10) [17]. Figure 1 presents a sample developed by the model of the total, diffuse, and direct solar radiation on a horizontal surface and the estimated collected solar radiation on a tilted PV surface (β = 25°) at Aswan, which lies
at latitude 24.09011, on July 17.
$$\frac{I_d}{I} = \begin{cases} 1.0 - 0.09\,k_T & k_T \le 0.22 \\ 0.9511 - 0.1604\,k_T + 4.388\,k_T^2 - 16.638\,k_T^3 + 12.336\,k_T^4 & 0.22 < k_T \le 0.8 \\ 0.165 & k_T > 0.8 \end{cases} \tag{9}$$
$$I_T = (I_b + I_d A_i)\,R_b + I_d\,(1 - A_i)\,\frac{1 + \cos\beta}{2}\left[1 + f \sin^3\!\frac{\beta}{2}\right] + I\,\rho_g\,\frac{1 - \cos\beta}{2} \tag{10}$$
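The following sketch implements Eqs. (9) and (10). The anisotropy index Ai (Ib/Io in the full HDKR formulation), the beam ratio Rb, and the ground reflectance ρg are supplied as assumed values rather than derived from solar geometry.

```python
import math

def erbs_diffuse_fraction(k_t):
    """Hourly diffuse fraction I_d / I as a function of the clearness index k_T, Eq. (9)."""
    if k_t <= 0.22:
        return 1.0 - 0.09 * k_t
    if k_t <= 0.8:
        return (0.9511 - 0.1604 * k_t + 4.388 * k_t**2
                - 16.638 * k_t**3 + 12.336 * k_t**4)
    return 0.165

def hdkr_tilted(i, i_b, i_d, a_i, r_b, beta_deg, rho_g=0.2):
    """Radiation collected on a surface tilted at beta degrees, Eq. (10)."""
    beta = math.radians(beta_deg)
    f = math.sqrt(i_b / i) if i > 0 else 0.0
    sky = i_d * (1.0 - a_i) * (1.0 + math.cos(beta)) / 2.0 * (1.0 + f * math.sin(beta / 2.0)**3)
    ground = i * rho_g * (1.0 - math.cos(beta)) / 2.0
    return (i_b + i_d * a_i) * r_b + sky + ground

# Example hour: I = 0.85 kWh/m2 with k_T = 0.7, panel slope 25 deg (as in Fig. 1)
i = 0.85
i_d = erbs_diffuse_fraction(0.7) * i
i_b = i - i_d
print(round(hdkr_tilted(i, i_b, i_d, a_i=0.5, r_b=0.95, beta_deg=25.0), 3), "kWh/m2")
```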
Fig. 1 Total radiation, diffuse irradiance, and direct (beam) solar radiation on a horizontal surface, and the estimated collected solar radiation (kWh/m2), plotted against time of day (h)
$$EACF = \frac{NPV}{A_{t,i}} \tag{14}$$

$$A_{t,i} = \frac{1 - \dfrac{1}{(1+i)^t}}{i} \tag{15}$$
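Eqs. (14) and (15) reduce to a short annuity calculation, sketched below with illustrative figures; the NPV, discount rate, and lifetime are not values from this study.

```python
# EACF = NPV / A_{t,i}, with A_{t,i} the present-worth annuity factor (Eq. 15).

def annuity_factor(i_rate, t_years):
    return (1.0 - 1.0 / (1.0 + i_rate)**t_years) / i_rate

def eacf(npv, i_rate, t_years):
    return npv / annuity_factor(i_rate, t_years)

# Example: NPV of 3,000,000 (currency units), 8.75% discount rate, 25-year lifetime
print(round(eacf(3_000_000, 0.0875, 25)))
```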
The output module provides the user with the maximum EACF and identifies the combination of crops, the starting date for each crop, the area occupied by each crop, and the ToP to be used to meet the objective function, in addition to other economic parameters such as the NPV, the IIC, the estimated quantity of water used per year, and the percentage of maximum land utilization.
The development of the model depends on integrating two main models to reach the required objective, which is selecting the combination of crops that has the maximum EACF while being irrigated by an optimized PVWP system. The first model selects the combination of crops with the maximum equivalent annual return, and the second optimizes the size of the PVWP system. Integrating these two models, with some modifications, leads to a new model that has not been developed before. The model formulation is described in Fig. 2.
4 Model Validation
To outline the extent to which the developed optimization model describes the system, and to confirm that it produces appropriate results, verification of the model is essential. To verify the model, two locations in Egypt with different climatic and environmental conditions, Luxor and Giza, were selected. Since all system parameters are variable, reference values must first be determined, from which selected variables are then changed and their effect on the calculated EACF and other parameters is observed.
The reference parameters for the Luxor region have been selected as indicated in Table 3.

Table 3 Reference parameters for the Luxor region

PV pumping system     Nominal efficiency of motor-pump unit      43%
                      Nominal efficiency of inverter             90%
Climate region        Latitude                                   25.70231
Borehole              Water conductivity (ECiw)                  0.8 dS m−1
                      Static level                               50 m
                      Velocity                                   0.9 m s−1
                      Diameter                                   0.12 m
                      Length of pipe                             60 m
                      Head lift from the ground surface (HOT)    0
                      Maximum discharge capacity (Qmax)          72 m3
Irrigation area       Total area                                 5 feddans
                      Type of soil                               Clay soil
                      Soil salinity (ECe)                        1.1 dS m−1
Irrigation method     Irrigation method                          Surface drip
                      Efficiency of drip method (ηN)             90%
Financial parameters  Allowable budget                           4,700,000 EGP
                      Future discount rate                       8.75%
This study sets the irrigation frequency to twice a week in summer and once a week in winter, which is more realistic in practice in view of water and energy consumption, as there is no significant change in the moisture of the active soil layer from which the crops take up water.
The optimization results for the combination of crops, the area and starting date for each crop, and the ToP are shown in Table 4, which represents the log of progress steps in the optimization process of the developed model. The EACF was maximized to 318,620 US$ when the combination of crops was Tomato I, Tomato IV, Sweet Pepper II, and Mango, and the corresponding areas were 2, 1,
(Figure: Luxor — difference between the daily produced energy and the daily required hydraulic energy, EAi − EHi (kWh), over the days of the year)
For Giza, the same reference parameters were selected as for the Luxor location, except for the climate parameters and the soil type. Giza is located at a latitude of 30.00951, resulting in different meteorological data than the Luxor area, and the soil type is assumed to be sand.
The optimization results for the combination of crops, the area and starting date for each crop, and the ToP are shown in Table 6, which represents the log of progress steps in the optimization process of the developed model. The EACF was maximized to 397,064 US$ when the combination of crops was Sweet Pepper, Green Beans II, and Mango, with corresponding areas of 3, 1, and 1 feddan, respectively, as illustrated in Table 7; Sweet Pepper has three different planting dates, as indicated in Table 7. The nominal maximum power required to satisfy the irrigation system requirements is 72,486 Wp, which required 191 panels of type 6.
The difference between the daily energy produced and the daily required hydraulic energy is shown in Fig. 4. Positive values indicate that the system is sufficient, since EAi − EHi ≥ 0, while an efficient system should also satisfy max(EAi) − max(EHi) ≈ 0. The critical period occurs on day 209, where the difference EAi − EHi is 7.77 kWh, as indicated in Fig. 4. A small sketch of this sufficiency check is given below.
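A minimal sketch of that check, using toy daily series rather than the model's output:

```python
# The system is sufficient if E_Ai - E_Hi >= 0 every day; the critical period
# is the day with the smallest margin.

def sufficiency(e_a_daily, e_h_daily):
    margins = [ea - eh for ea, eh in zip(e_a_daily, e_h_daily)]
    critical_day = min(range(len(margins)), key=margins.__getitem__) + 1
    return all(m >= 0 for m in margins), critical_day, min(margins)

e_a = [120.0, 118.0, 95.0, 110.0]   # produced energy, kWh (toy data)
e_h = [80.0, 90.0, 87.0, 60.0]      # required hydraulic energy, kWh (toy data)
print(sufficiency(e_a, e_h))        # -> (True, 3, 8.0)
```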
Table 6 Log of progress steps in the optimization process for Giza

Trial   Result    Adjustable cells
                  D23  L49  L50  L51  L52  L53  L54  L55  L56  L57–L82  L83  L84
1       275,806   6    0    0    0    0    0    1    1    0    …        0    0
86      356,657   6    0    0    0    0    0    1    1    1    …        0    0
149     377,051   6    0    0    0    0    0    1    1    1    …        0    0
1544    377,492   6    0    0    0    0    0    1    1    1    …        0    0
3917    389,428   6    0    0    0    0    0    1    1    1    …        0    0
6665    389,536   6    0    0    0    0    0    1    1    1    …        0    0
25991   390,509   6    0    0    0    0    0    1    1    1    …        0    0
48675   397,064   6    0    0    0    0    0    1    1    1    …        0    0

Trial   Result    Adjustable cells (continued)
                  L85  K49  K50  K51  K52  K53  K54  K55  K56  K57–K82  K83   K84   K85
1       275,806   0    8.5  9.0  2.0  3.0  10.0 10.5 1.9  3.2  …        19.1  15.5  15.2
86      356,657   0    8.5  9.0  2.0  3.0  10.0 10.5 1.9  3.2  …        19.1  15.5  15.2
149     377,051   0    8.5  9.0  2.0  3.0  9.8  10.5 1.9  3.2  …        19.1  15.5  15.2
1544    377,492   0    8.5  9.0  2.0  3.0  9.8  10.5 1.9  3.2  …        19.1  15.5  15.2
3917    389,428   0    8.5  9.0  2.0  3.0  9.8  12.0 1.9  3.2  …        19.1  15.5  15.2
6665    389,536   0    8.5  9.0  2.0  3.0  9.8  12.0 1.9  3.0  …        19.1  15.5  15.2
25991   390,509   0    8.5  9.0  2.0  3.0  9.8  12.0 1.9  3.0  …        18.9  15.5  15.2
48675   397,064   0    8.5  9.0  2.0  3.0  9.8  12.0 1.9  3.0  …        18.8  15.5  15.2
(Figure 4: Giza — difference between the daily produced energy and the daily required hydraulic energy, EAi − EHi (kWh), over the days of the year)
The LCCA indicated that it is more worthwhile to invest in Giza than in Luxor, as Giza has a higher NPV, a higher estimated revenue for the first year, and a higher return per 1 m3 of water within the allowable budget, as indicated in Table 8.
Although the average daily water requirements for both locations are almost the same, there is a large difference in the number of required panels: 134 panels are required for Luxor and 191 for Giza, although the same ToP is used in both locations. This can be justified by Luxor having a higher estimated solar irradiance than Giza, as indicated in Fig. 5, considering the effect of Ta on the performance of the PV modules.

(Figure 5: Estimated solar radiation on the PV surface (kW/m2) for Luxor and Giza, by month)
Although the same reference parameters were used for both locations except for the climate values and the soil type, a noticeable difference in the EACF is obtained. This can be justified by Luxor having a lower net benefit from crops because of transportation costs. In addition, the soil salinity (ECe) for Luxor (1.1 dS m−1) is lower than for Giza (1.3 dS m−1), resulting in a higher leaching requirement for Luxor. The higher solar irradiance in Luxor also leads to a higher ETo and hence a higher ETc; therefore, the total water requirements for Luxor are greater than for Giza. From the variation of the different elements in the optimization output, it can be concluded that the model responds to any change in the reference parameters.
6 Conclusion
The model presented in this study introduces a new approach to the problem by considering almost all the relevant factors affecting both the PV system and the irrigation system, such as location, PVWP system, local climate, soil type, water source, total head losses, the pool of different crops, and the irrigation system. Accurate estimation of the irrigation water requirements and solar irradiance was carried out. In addition, the PV energy output was calculated taking into consideration the slope of the installed panels (β), the modules' azimuth (ψ), and the effect of the ambient temperature (Ta) on the performance of the PV modules. The model was developed based on the GA technique and was then verified by comparing the model outputs for two locations in Egypt (Luxor and Giza); it was shown that the model describes the subject system very well, as several useful outcomes were achieved. The model was run to maximize the EACF by considering the net benefit from the combination of crops and the PVWP system costs. Accordingly, the ToP, combination of crops, starting date for each crop, and area occupied by each crop were adjusted to meet the objective function. The model is characterized by a flexible database module that can be edited by the user if needed. The model outputs were then generated, including the maximum EACF, the economic analysis, and the optimized PVWP
system size. By comparing the optimization results for the two locations (Luxor and Giza), it was also observed that the total irrigation water requirement and the available solar irradiance have significant effects on the EACF. Although Luxor is characterized by high solar irradiance, which gives it an advantage over other locations in optimizing the PV system size, a higher quantity of irrigation water is required because of the higher ETo resulting from that high irradiance. Giza showed a better economic outcome than Luxor, with a higher EACF (397,064 US$ compared with 318,620 US$) and a higher return per 1 m3 of water (20.15 US$ compared with 16.26 US$). In both locations, the PVWP system was considered sufficient since EAi − EHi > 0.
The percentage of total energy losses in both locations exceeded 50%, which highlights the importance of reducing these losses; therefore, further investigation into other options for energy storage systems is greatly needed. To sum up, the optimization model presented in this study is an example of how technology and sustainability can be integrated into the agricultural sector to help farmers and investigators find efficient and economic practices for the best utilization of land for greenhouse farming, while avoiding the overestimation of irrigation water and the consequent oversizing of the PVWP system.
References
1. Abdrabbo M, Negm A, Fath H, Javadi A (2019) Greenhouse Manag Best Pract Egypt 11:369–
381
2. Abou-Hadid AF, Medany MA, El-Beltagy AS (1993) Productivity and pre-economical studies
on the use of drip irrigated low tunnels [for tomatoes] in Egypt. Egypt J Hortic (Egypt)
3. Abu-Aligah M (2011) Design of photovoltaic water pumping system and compare it with diesel
powered pump. Jordan J Mech Indust Eng 5:273–280
4. Al-Badi A, Yousef H, Al Mahmoudi T, Al-Shammaki M, Al-Abri A, Al-Hinai A (2018) Sizing
and modelling of photovoltaic water pumping system. Int J Sustain Energ 37(5):415–427
5. Aliyu M, Hassan G, Said SA, Siddiqui MU, Alawami AT, Elamin IM (2018) A review of
solar-powered water pumping systems. Renew Sustain Energy Rev 87:61–76. https://doi.org/
10.1016/j.rser.2018.02.010
6. Bakelli Y, Hadj Arab A, Azoui B (2011) Optimal sizing of photovoltaic pumping system with
water tank storage using LPSP concept. Sol Energy 85(2):288–294
7. Bekhet H, Azlina A (2010) Energy use in agriculture sector: input-output analyses. Int Bus
Res 3:111–120
8. Bhatia M, Rana A (2020) A mathematical approach to optimize crop allocation—a linear
programming model. Int J Des Nat Ecodyn 15(2):245–252
9. Bonachela S, González AM, Fernández MD (2006) Irrigation scheduling of plastic greenhouse
vegetable crops based on historical weather data. Irrig Sci 25(1):53
10. Chandel SS, Naik MN, Chandel R (2015) Review of solar photovoltaic water pumping system
technology for irrigation and community drinking water supplies. Renew Sustain Energy Rev
49:1084–1099
11. Collares-Pereira M, Rabl A (1979) The average distribution of solar radiation-correlations
between diffuse and hemispherical and between daily and hourly insolation values. Sol Energy
22(2):155–164. https://doi.org/10.1016/0038-092X(79)90100-2
12. Comsan MNH (2010) Solar Energy Perspect Egypt
39. Melsen LA, Lanen HAJ, Wanders N, Huijgevoort MHJ van, Weedon GP (2011) Reference
evapotranspiration with radiation-based and temperature-based method—impact on hydrolog-
ical drought using WATCH Forcing Data. In: EU-WATCH Technical Report (Issue 39).
WATCH. https://dspace.library.uu.nl/handle/1874/221990
40. Mohamed SA, Abd El Sattar M (2019) A comparative study of P&O and INC maximum power
point tracking techniques for grid-connected PV systems. SN Appl Sci 1(2):174
41. Mozaffari Niapour SAKH, Danyali S, Sharifian MBB, Feyzi MR (2011) Brushless DC motor
drives supplied by PV power system based on Z-source inverter and FL-IC MPPT controller.
Energy Convers Manage 52(8):3043–3059
42. Odeh I, Yohanis YG, Norton B (2006) Economic viability of photovoltaic water pumping
systems. Sol Energy 80(7):850–860
43. Padmanabhan P, Cheema A, Paliyath G (2016) Solanaceous fruits including tomato, eggplant,
and peppers. In: Caballero B, Finglas PM, Toldrá F (eds) Encyclopedia of food and health.
Academic Press, pp 24–32
44. Palisade (2021) Evolver—sophisticated optimization for spreadsheets. Palisade. Retrieved 20
Jan 2022
45. Pap Z (2008) Crop rotation constraints in agricultural production planning
46. Qoaider L, Steinbrecht D (2010) Photovoltaic systems: a cost competitive option to supply
energy to off-grid agricultural communities in arid regions. Appl Energy 87(2):427–435
47. Rauff KO, Shittu SA (2015) Determination of evapotranspiration and water use efficiency in
crop production. Agric Sci 06(09):1058–1067
48. Retamales JB, Hancock JF (2012) Blueberries. CABI
49. RETScreen International Clean Energy Decision Support Centre (Canada), United Nations
Environment Programme, United States, & CANMET Energy Technology Centre (Canada)
(eds) (2005) Clean energy project analysis, RETScreen engineering & cases textbook (3rd ed)
[Electronic resource]. CANMET Energy Technology Centre
50. Rezk H, El-Sayed AHM (2013) Sizing of a stand alone concentrated photovoltaic system in
Egyptian site. Int J Electr Power Energy Syst 45(1):325–330
51. Sánchez E, Izard J (2015) Performance of photovoltaics in non-optimal orientations: an
experimental study. Energy Build 87:211–219
52. Sánchez-Carbajal S, Rodrigo PM (2019) Optimum array spacing in grid-connected photo-
voltaic systems considering technical and economic factors. Int J Photoenergy 2019:e1486749
53. Sarker RA, Quaddus MA (2002) Modelling a nationwide crop planning problem using a
multiple criteria decision making tool. Comput Ind Eng 42(2):541–553
54. Sharma MK, Kumar D, Dhundhara S, Gaur D, Verma YP (2020) Optimal tilt angle
determination for PV panels using real time data acquisition. Global Chall 4(8):1900109
55. Sharma R, Sharma S, Tiwari S (2020) Design optimization of solar PV water pumping system.
Mater Today: Proc 21:1673–1679
56. Shukla S, Shrestha N, Jaber F, Srivastava S, Obreza T, Boman B (2014) Evapotranspiration
and crop coefficient for watermelon grown under plastic mulched conditions in sub-tropical
Florida. Agric Water Manag 132:1–9
57. Solar pumping for irrigation: improving livelihoods and sustainability (2016) Publications/
2016/Jun/Solar-Pumping-for-Irrigation-Improving-Livelihoods-and-Sustainability. Retrieved
28 Dec 2021
58. Wazed SM, Hughes BR, O’Connor D, Calautit JK (2017) A review of sustainable solar irrigation
systems for Sub-Saharan Africa. Renew Sustain Energy Rev 81
Sustainability Assessment of Applying
Circular Economy to Urban Water
Systems
1 Introduction
Water, energy, and nutrient resources are in high demand by communities, and their consumption is affected by population growth, urbanization, changes in diet, and economic development [21]. By 2050, water demand will increase by 20–30% [2], energy consumption for thermal heating will increase by 30% [6], and phosphorus consumption will double [11].
Moreover, water scarcity has become one of the major threats to sustainable development [8]. Increasing energy demand and fossil fuel shortage have set off a search for new energy sources that are clean and renewable [18]. Phosphorus (P), meanwhile, is a non-renewable mineral resource, and its demand increases as the demand for fertilizers grows [10].
In the context of increasing demand and resource scarcity, new paradigms such as the circular economy (CE) become highly important for resource management and recovery. The CE paradigm has three main objectives: (1) regenerate natural capital, (2) keep extracted resources in use, and (3) decrease waste production [12]. Furthermore, the application of CE to urban water systems (UWS) is an opportunity to recover water, energy, and nutrients, as well as to meet sustainable development goals [16].
However, to implement CE in UWS, new processes need to be used, as conventional processes do not bring products to the expected quality. For instance, water recovery requires the inclusion of water disinfection and tertiary treatment, depending on the intended use [13]. Energy recovery requires biogas purification and the installation of a generator, and biosolids recovery requires sludge digestion and lime application [14]. Furthermore, the resource recovery processes demand chemical use and the construction of new facilities, which present social, economic, and environmental impacts. Depending on the energy matrix, chemical use, transportation, and construction become highly important determinants of the environmental and economic impacts [14, 15]. Moreover, few papers have assessed the sustainability aspects (social, economic, or environmental) of CE implementation in UWS considering resource recovery strategies [1, 3, 7, 9, 19, 20]. Likewise, no works have accounted for the LCA and LCC impacts of UWS considering both the construction and operational phases.
In this context, this paper aims to analyze the environmental and economic impacts of the Urban Water System located in Kelowna, British Columbia, Canada, covering the stormwater, drinking water, and wastewater facilities; the collection and distribution pipes are also included within the boundaries. The study develops a sustainability index to assist decision-making on the sustainability of UWS. Five different alternatives were evaluated: business as usual (BAU), water recovery, energy recovery, biosolids recovery, and the water–energy–nutrient nexus.
2 Methodology
Kelowna is a city located in British Columbia, Canada. The total population served by the UWS is 143,000 people [4]. The main water source in Kelowna is Okanagan Lake, which is also used as a leisure space by citizens [5].
The Urban Water System of Kelowna consists of water extraction, water treatment, water distribution, wastewater collection, wastewater treatment, and stormwater collection. Water is collected from five points in Kelowna: (1) Polar Point, (2) Eldorado, (3) Cedar Creek, (4) Swick, and (5) GEID. Water is treated through two simple processes: at points 1–4 the water is only disinfected, while point 5 (GEID) requires filtration and disinfection. The stormwater is collected using pipes and discharged into Okanagan Lake without any treatment. Finally, the wastewater treatment consists of a screening phase, a clarifier, a Bardenpho system, a second clarifier, and a filter, while the sludge from wastewater goes through a thickening process. The UWS of Kelowna has several important goals: (1) guarantee multiple water uses, (2) guarantee minimum water quality, and (3) mitigate environmental, social, and economic impacts.
Five alternative configurations were assessed: (1) the conventional system, referred to as business as usual, (2) the water recovery system, (3) the energy recovery system, (4) the biosolids recovery system, and (5) the water–energy–nutrients nexus system, which recovers all three resources.
Furthermore, this study was conducted as an attributional analysis, and two different substitution methods were used for allocation. For water recovery, the system considered that 60% of the wastewater was reused; this value was subtracted from the total consumption, reducing freshwater withdrawal proportionately. For energy and biosolids recovery, however, a "comparative" flux was considered, which was added to all alternatives that did not include these resource recovery processes. These methods were applied to ensure that all systems would deliver the same functions.
Both background and foreground data were used to conduct the Life Cycle Assessment. Foreground data were obtained directly from the Kelowna facility, while background data included both literature data and data from Ecoinvent 3.3. This study uses a data quality pedigree matrix, as defined by [22], to assess the quality of the data used and to make better-informed choices during the data-gathering phase. Given the data availability, the following criteria were used for the data quality analysis; a small scoring sketch follows the list.
A. Data from 2019.
B. A year of data should be considered, to account for seasonal variability.
C. Data would come from Kelowna, preferably.
D. Technology needs to be equivalent to the scenario definition.
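A hypothetical scoring sketch for these four criteria is shown below; the scoring scale and the record fields are assumptions for illustration, not the pedigree matrix of [22].

```python
# Assign a simple quality score (1 = best) to a data record against criteria A-D.

def quality_score(record):
    return {
        "A_year": 1 if record.get("year") == 2019 else 3,
        "B_coverage": 1 if record.get("months_covered", 0) >= 12 else 3,
        "C_geography": {"Kelowna": 1, "British Columbia": 2, "Canada": 3}.get(
            record.get("region"), 4),
        "D_technology": 1 if record.get("tech_equivalent", False) else 4,
    }

print(quality_score({"year": 2019, "months_covered": 12,
                     "region": "Kelowna", "tech_equivalent": True}))
```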
Ecoinvent 3.3 was used as one of the background data sources. In Ecoinvent, the processes were chosen as Def (default allocation) and unit processes, and processes from British Columbia were preferred. If no processes from British Columbia were available, processes from Canada or other provinces were applied, moving to Rest-of-the-World (ROW) and Global (GLO) processes as a last resort. This method was used to ensure geographical relevance as much as possible. The data used in the analysis are shown in the supplemental materials; the table presents the processes chosen in the database, the value, the unit, and the data quality assessment.
The Life Cycle Impact Analysis was conducted considering midpoint and endpoint
categories. The following categories and methodologies were selected: (1) Global
Warming Potential (IPCC 2013), (2) Cumulative Energy Demand—CED (CED—
Ecoinvent), (3) Water footprint (Aware), (4) Ozone depletion, Human toxicity, Photo-
chemical oxidant formation, Particulate matter formation, Ionizing radiation, Climate
change ecosystems, Terrestrial acidification, Freshwater eutrophication, Terrestrial
ecotoxicity, Freshwater ecotoxicity, Marine ecotoxicity, Agricultural land occupa-
tion, Urban land occupation, Natural land transformation, Metal depletion, Fossil
depletion (ReCiPe Endpoint (H) V1.13/World ReCiPe H/H).
The environmental index was calculated in two steps: normalization and weighting. Normalization was conducted using Eq. 1:

$$\bar{x}_{ij} = \frac{x_{ij}}{x_J}, \tag{1}$$

E index is the environmental index, w j is the weight for each impact category, and x ij is the value calculated in SimaPro; y j is the highest weighted sum of the elements. The weights used in this work are shown in Table 1.
The same boundaries and goals were used to conduct the LCC analysis, and a data quality analysis was also conducted. The LCC inventory is displayed in the supplemental materials. The LCC was calculated using the Net Present Value method (Eq. 3):

$$NPV = \sum_{t=0}^{T} \frac{C_t}{(1+r)^t}, \tag{3}$$

where C t is the estimated cost in year t, r is the discount rate, and T is the period analyzed in years.
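Eq. (3) can be sketched as follows; the cost stream is illustrative, while the discount rate is the 4.5% adopted in this work.

```python
# Net Present Value: NPV = sum_{t=0}^{T} C_t / (1 + r)^t.

def npv(costs_by_year, r):
    return sum(c / (1.0 + r)**t for t, c in enumerate(costs_by_year))

costs = [5_000_000] + [250_000] * 5   # year 0 construction + 5 years of operation (toy data)
print(round(npv(costs, 0.045)))
```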
In this work, the discount rate considered was 4.5%. The cost index was obtained
through the normalization of the NPV using Eq. 1.
$$S_{index} = \frac{E_{index} + C_{index}}{2}. \tag{4}$$
S index is the sustainability index for each scenario, E index is the environmental index and E w the environmental weight, calculated with Eq. 2, and C index is the cost index and C w the cost weight, normalized from the Life Cycle Costing calculations.
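A sketch of the aggregation under the equal-weights case follows; interpreting x_J as the largest value across alternatives, and all numerical scores, are assumptions for illustration, and the category weighting of Eq. (2) is omitted since that equation is not reproduced here.

```python
# Normalize each score by the largest value across alternatives (Eq. 1 as
# interpreted here), then average the environmental and cost indices (Eq. 4).

def normalize(values):
    top = max(values.values())
    return {k: v / top for k, v in values.items()}

e_index = normalize({"BAU": 420.0, "Water": 400.0, "Energy": 360.0})    # toy impact scores
c_index = normalize({"BAU": 6.1e6, "Water": 5.9e6, "Energy": 6.0e6})    # toy NPVs
s_index = {k: (e_index[k] + c_index[k]) / 2.0 for k in e_index}
print({k: round(v, 3) for k, v in s_index.items()})
```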
3 Results
According to ISO 14040:2006, characterization results are mandatory for all environmental impact analyses conducted with Life Cycle Assessment. In this context,
Tables 2 and 3 show the characterization results for all categories.
While normalization is not mandatory for Life Cycle Assessment studies (ISO
14040:2006), it facilitates the comparison among different alternatives. Figure 2
shows the normalized scores for the environmental impacts.
For the categories of Global Warming Potential, Water Footprinting, and Cumu-
lative Energy Demand, the water–energy–nutrients nexus Urban Water System
presented the best environmental performance. On the other hand, energy recovery
presented a better environmental performance in most ReCiPe categories, followed
by biosolids recovery and the nexus urban water treatment.
For the conventional system, the highest environmental impact with respect to the water footprint is due to the construction of the wastewater treatment plant (28% of the total impact). The second highest impacting process is the construction of the wastewater collection system (24%). Furthermore, Global Warming Potential is highly impacted by
the construction of the wastewater collection system and the stormwater collection
system (33 and 23%, respectively). This is due to the material choice in the process
selected in SimaPro, which uses concrete as the main source of materials for the pipes.
Additionally, Kelowna is a large city with most of its population spread out, which increases piping extension and pumping needs. Likewise, the collection of wastewater and stormwater is responsible for 33% and 23% of the total CED impact, respectively. This result demonstrates that, for this case study, energy use and Global Warming are highly connected. The ReCiPe categories followed the same behavior. The comparative fluxes did not strongly affect the environmental performance of the conventional system.

Fig. 2 Life Cycle Assessment normalized categories. *GWP 100 y: Global Warming Potential 100 years; WF: Water footprint; OD: Ozone depletion; HT: Human toxicity; POF: Photochemical ozone formation; PMF: Particulate matter formation; IR: Ionizing radiation; CCE: Climate change ecosystems; TA: Terrestrial acidification; FEU: Freshwater eutrophication; TEC: Terrestrial ecotoxicity; FEC: Freshwater ecotoxicity; MEC: Marine ecotoxicity; ALO: Agricultural land occupation; ULO: Urban land occupation; NLT: Natural land transformation; MD: Metal depletion; FD: Fossil depletion
For the water recovery system, a similar behavior was identified: wastewater treatment represented 33% of the total water footprint, followed by wastewater collection (25%). The same pattern was seen for Cumulative Energy Demand, with 34% for wastewater treatment and 23% for wastewater collection, and likewise for the ReCiPe and IPCC categories.
In the biosolids recovery system, wastewater treatment represents 32% of the water footprint, followed by wastewater collection (25%) and water distribution (17%). For Cumulative Energy Demand, 33% of the impact is caused by wastewater collection, followed by stormwater and water distribution (23% each). The same behavior was identified for the IPCC categories: 34% wastewater collection, 23% stormwater, and 23% water distribution. The ReCiPe categories had over 40% of their contribution coming from wastewater collection and over 30% from stormwater in most categories.
For the water–energy–nutrients nexus, 33% of the water footprint is due to wastewater treatment, 26% to wastewater collection, and 18% to the water distribution system. The CED impact category followed the same structure: 34% wastewater collection, 23% stormwater, and 23% water distribution. For the IPCC method, the shares were 34% wastewater collection, 24% stormwater, and 23% water distribution. Finally, the ReCiPe categories had over 30% of the impact coming from wastewater collection and over 20% from stormwater.
Table 4 Environmental index results by alternative (environmental index = Σ_j Σ_i w_j · x̄_ij)
Table 4 shows the environmental index results. The alternative with the lowest score presents the best overall environmental performance (the energy recovery scenario).
Table 5 shows the estimated Life Cycle Cost of each scenario and its normalization, calculated with Eq. 1 considering one year of operation plus the construction of the UWS. Regarding cost, the water–energy–nutrients nexus scenario presented the lowest economic score, followed closely by biosolids recovery and water recovery. The worst alternatives are energy recovery and the conventional scenario, which presents the highest economic impact. The main contribution to the final result came from the construction phase, and the biosolids comparative flux played an important role in differentiating the alternatives.
After aggregating the LCA and LCC results for all alternatives, the sustainability index showed that the conventional system is the worst-case scenario, and all recovery systems presented a better sustainability performance than the conventional system. The best sustainability performance was achieved by the biosolids recovery scenario, followed by energy recovery under the equal-weights and environmental weighting scenarios, and by the water–energy–nutrients nexus system under the economic weighting scenario. These results are due to the low environmental impact identified for the biosolids recovery scenario, since the comparative flux used for energy draws on a clean energy mix (Table 6).
4 Discussion
The sustainability index results for the different options show that the biosolids recovery system has the best sustainability performance. However, the lowest Life Cycle Cost was identified for the nexus scenario, due to the cost–benefit of the recovered energy. Furthermore, this work identified construction as the main contributor to both environmental and economic impacts, contrasting with the current literature. The energy scenario represented the best alternative in the environmental index, closely followed by the biosolids recovery scenario.
Tjandraatmadja et al. [20] also evaluated recovery alternatives, considering the Life Cycle Assessment and Life Cycle Cost of a conventional wastewater system, common effluent drainage, and a small-scale treatment plant with various technologies (lagoon, living machine, membrane biological reactor, rotating reactor, extended aeration, among others). Similarly to this work, the wastewater system made an important contribution to the final impacts, and wastewater pumping was the main process for most alternatives. In the Life Cycle Assessment part, wastewater separation with no reuse was the scenario with the highest Global Warming Potential, showing that these technologies need to incorporate resource recovery to yield environmental and economic benefits.
Furthermore, Bonoli et al. [1] analyzed six alternatives including greywater reuse, different energy matrices, and a traditional water supply. In their work, the greywater alternatives presented the best environmental performance, which corroborates the water recovery system results obtained in the present contribution. Finally, Rebello et al. [14] evaluated the environmental impacts associated with the operation of wastewater treatment systems and found that the nexus wastewater treatment presented advantages regarding environmental performance. These results also corroborate the current contribution.
Table 6 Sustainability index final aggregation
Scenario Equal weights Economic scenario Environmental scenario
Conventional 1.0000 1 1
Water recovery 0.9602 0.9599 0.9603
Energy recovery 0.9162 0.9663 0.8660
Biosolids recovery 0.8974 0.9298 0.8650
Nexus wastewater 0.9528 0.9517 0.9538
* In italic: lowest sustainability index
The main limitations of this study are: (1) the use of background data for some alternatives; (2) the lack of uncertainty and sensitivity analyses, which can be implemented in future work; and (3) the exclusion of the water use stage, due to lack of information. Future research should address these limitations and evaluate combinations of the alternatives presented in this study. Furthermore, studies should consider database and Life Cycle Inventory creation, one of the main challenges in current Life Cycle Assessment studies [15]. Frugal technologies should also be evaluated for the Canadian context, as construction represented a large share of the final cost and of the environmental impact of all systems.
5 Conclusion
This work analyzed the sustainability of the Urban Water System of Kelowna by combining a Life Cycle Assessment and a Life Cycle Costing Assessment. Five alternatives were tested, considering different levels of material recovery: the conventional system, water recovery, energy recovery, biosolids recovery, and the water–energy–nutrients nexus UWS.
Results showed that the best environmental performance was found in the energy recovery system, while the best economic performance was identified in the nexus system. However, the combination of the environmental and economic indices resulted in a better overall performance for the biosolids recovery system, owing to how close the environmental results of energy and biosolids recovery were.
Contrary to what the literature states, in this work construction represented an important environmental and economic impact, being responsible for over 90% of the total environmental impact in all alternatives and 90% of the total economic impact in the conventional system. This result implies that new studies should also consider the construction phase within their boundaries, especially when considering short time horizons.
References
Abstract Silver nanoparticles (AgNPs) embedded within ceramic water filters have gained popularity as a methodology for improved disinfection over filtration alone, though concerns exist regarding their cost, health, and ecological impacts. Research into the replacement and/or supplementation of AgNPs with less expensive NP alternatives is therefore of interest. The influence of NP and microbial contaminant concentrations on disinfection performance, however, requires further exploration and elucidation. This research thus investigates the potential synergistic disinfection of AgNP and zinc oxide (ZnO) at various co-application concentrations when challenged by three levels of Escherichia coli contamination. In this study, 1 L of water with dissolved organics, micronutrients, and E. coli in concentrations of 10², 10³, or 10⁵ CFU/mL was treated with AgNPs and/or ZnO. AgNP concentrations ranged from 0.5 to 20 ppb, and ZnO concentrations ranged from 50 to 1000 ppb under dark conditions. E. coli samples were enumerated over a 72-h period. The results demonstrate that both NP and bacterial concentration significantly influenced outcomes. For example, 10 ppb AgNP alone did not achieve any disinfection under any bacterial loading over 72 h. However, log removal values (LRVs) of −0.86, −1.06, and +0.01 were measured in water with 10², 10³, and 10⁵ CFU/mL, respectively, when combined with 75 ppb ZnO, and of −2.62, −2.17, and −3.06 when combined with 1000 ppb ZnO, respectively; a negative LRV indicates bacterial inhibition, while a positive LRV indicates bacterial growth. This study therefore illustrates that co-application of AgNP-ZnO holds potential for implementation in the design of low-cost water treatment solutions that utilize nanoparticles for disinfection if influent bacterial concentrations are managed appropriately.
1 Introduction
Significant research attention has recently emerged around metallic nanoparticles (MNPs) as novel solutions for treating microbiological water contamination. MNPs are stabilized metals with diameters between 1 and 100 nm that can kill microbes by penetrating cell membranes and bonding with DNA, pitting cell walls to promote cytoplasmic leakage, and/or reacting with the aquatic environment to promote oxidative killing [11]. Specific interest has been directed toward silver nanoparticles (AgNPs) due to their superior bactericidal effectiveness compared with other MNPs, particularly as additives to filtration technologies like sand, activated carbon, or ceramic water filters [7, 8, 14]. However, concerns exist regarding their cost, health, and ecological implications, as well as their efficacy at low concentrations [1, 8]. For instance, silver consumed long term at concentrations above 1 mg/L may bioaccumulate and subsequently lead to conditions such as argyria or DNA damage if more than 10 g are consumed over a 70-year lifetime [3, 16]. Meanwhile, Garza-Cervantes et al. [4] demonstrated that the minimum inhibitory concentration (MIC) of silver ions against 10⁴ CFU/mL Escherichia coli (E. coli) after 24 h is approximately 6.5 mg/L. Additionally, AgNP cost is subject to the volatility of international markets and can reach as high as $20/g [9, 10]. Technology manufacturers, especially in lower-income countries, are therefore required to delicately balance disinfection efficacy with health and price considerations.
Interest in alternative metal nanoparticles to AgNPs has responsively grown, with zinc oxide (ZnO) receiving particular attention. For instance, Venis and Basu [15] found that 1 mg/L of AgNP achieved a log removal value (LRV) of 1.4 after 5 h in water with an initial E. coli concentration of 10⁵ CFU/mL, whereas 0.67 mg/L AgNP combined with 0.33 mg/L ZnO achieved an LRV of 3.2, demonstrating clear Ag–Zn synergy. Likewise, Garza-Cervantes et al. [4] found that 3.2 mg/L Ag⁺ achieved an LRV of 1.0 after 1 h in water with 10⁴ CFU/mL, whereas 32.7 mg/L Zn²⁺ and 3.2 mg/L Ag⁺ synergistically achieved an LRV of 3.8. ZnO also does not have an international guideline, as it has no proven negative health impacts [16], and its price is reported as low as $0.1/g [9, 10]. Significant opportunity therefore exists to simultaneously reduce the required quantity of silver and maintain or improve disinfection efficacy.
Previous work has shown that while silver disinfection is most effective at concentrations above 1 mg/L, some filtration technologies are reported to typically release between 0.001 and 0.5 mg/L [1, 14]. The effectiveness of silver in this low range therefore remains unelucidated, and how such low concentrations perform in combination with ZnO requires further research. While the interaction of MNPs with cells is generally well understood [11], limited research has investigated the role that microbial concentration plays in disinfection efficacy. In other words, how does bactericidal efficacy change when MNP–bacterial ratios change?
This research investigates AgNP disinfection with ZnO against three concentra-
tion levels of E. coli. Metals are challenged in batch-phase disinfection experiments
that are run constantly over 72 h after initial MNP injection. The objective of this
work is thus to identify how MNP-bacterial ratios impact disinfection, and further,
how those impacts differ when AgNPs are supplemented with ZnO.
Complete characterization of the silver and zinc oxide nanoparticles used in this research
may be found in [15]. Briefly, citrate-capped AgNPs were provided by Argenol
Laboratories (Spain), and ZnO was purchased as puriss powder from Sigma Aldrich
[9, 10]. As measured with Scanning Electron Microscopy, AgNP size ranged from
34 to 58 nm while ZnO ranged from 30 to 680 nm. Some agglomeration was also
observed among both types of MNPs in an aqueous environment. When added in
combination, ZnO appeared to agglomerate around AgNPs to form new particulates
within the same size range of ZnO in isolation.
Escherichia coli K-12 Migula (ATCC® 29947) Castellani and Chalmers was
purchased from Cedarlane Laboratories as a dry pellet, which was rehydrated with
1 mL of No. 3 Nutrient Broth and centrifuged for 10 s. 9800 µL of broth was then
injected with 200 µL of E. coli solution (isometrically) and incubated for 1 h at
37 °C. 1 mL of glycerol was then added to the stock solution and incubated for 3 h,
after which it was divided into 200 µL aliquots and stored at − 20 °C. For testing
purposes, 100 µL of frozen stock was thawed, added to 9900 µL of No. 3 Nutrient
Broth (0.1% solution), and incubated at 37 °C for 19 h to the stationary growth
phase, yielding a 10⁸ CFU/mL culture.
Influent and effluent bacterial concentrations were measured as per USEPA
Method 1604 with the membrane filtration method [12, 13]. 1 mL samples were
drawn, which were appropriately serially diluted with a buffer solution (0.5% MgCl₂
and 0.125% KH₂PO₄). Diluted samples were spread over a 0.45 µm filter paper
(25 cm; Millipore) and vacuum filtered. Samples were placed upon MI agar (Thomas
Scientific) nutrient pads and incubated at 37 °C for 24 h. Colonies were counted by
hand under black light.
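As a companion to the plate-count procedure above, the sketch below shows the standard back-calculation from colony counts to a sample concentration; the helper name and the example dilution are illustrative assumptions rather than values reported in this study.

```python
# Minimal sketch (assumed helper, not from the paper): back-calculating a sample
# concentration from membrane-filtration colony counts.
def cfu_per_ml(colonies: int, volume_filtered_ml: float, dilution_factor: float) -> float:
    """Concentration of the undiluted sample in CFU/mL.

    dilution_factor is the fold-dilution applied before filtering,
    e.g. 1e3 for a 1:1000 serial dilution.
    """
    return colonies * dilution_factor / volume_filtered_ml

# Example: 87 colonies from 1 mL of a 1:1000 dilution -> 8.7e4 CFU/mL.
print(cfu_per_ml(colonies=87, volume_filtered_ml=1.0, dilution_factor=1e3))
```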
MNPs were challenged by soft water of low alkalinity as per Table 7 in [12, 13].
1 L of synthetic water was injected with dissolved organics and micronutrients
to increase its organic carbon content. Bacterial stock was added to form the desired
initial concentrations of approximately 10², 10³, or 10⁵ CFU/mL. Actual bacterial
concentrations can be found in Table 1.
Aside from bacteria, all other water quality parameters were consistent between
test conditions. pH was adjusted with 1N solutions of HCl (acid) and NaOH (base).
These water quality parameters were further measured before metal injection and
may be found in Table 2.
MNPs were injected directly into challenge water already contaminated with E. coli
to initiate each experiment. AgNPs were added in concentrations of 0, 0.5, 1, 5, 10,
and 20 ppb, which were combined with 0, 50, 75, 200, and 1000 ppb ZnO. Bacterial
concentrations were measured before metallic injection, followed by 5, 24, 48, and
72 h thereafter. Each sample was taken in duplicate, and each MNP combination
was evaluated twice (i.e., n = 4). The intention of this experimental procedure is to
measure MNP efficacy during long-term water storage.
Note that 0.5, 1, and 5 ppb AgNPs were not evaluated against 50, 75, or 200 ppb
ZnO in challenge water containing 10³ CFU/mL or 10⁵ CFU/mL due to no observable
impact on bacterial growth when challenged by water with 10² CFU/mL (see Results
and Discussion).
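To show the factorial design described above at a glance, the short sketch below enumerates the tested AgNP–ZnO combinations and sampling times; the exclusion rule mirrors the note above, and the structure is an illustrative assumption rather than the authors' laboratory scheduling code.

```python
from itertools import product

# Nominal dosing levels and sampling schedule described in the text (ppb and hours).
AGNP_PPB = [0, 0.5, 1, 5, 10, 20]
ZNO_PPB = [0, 50, 75, 200, 1000]
SAMPLE_HOURS = [0, 5, 24, 48, 72]            # 0 h = measurement before MNP injection
CHALLENGES_CFU_PER_ML = [1e2, 1e3, 1e5]

def tested(agnp: float, zno: float, challenge: float) -> bool:
    """Apply the stated exclusion: low AgNP with mid-range ZnO was only run at 10^2 CFU/mL."""
    if agnp in (0.5, 1, 5) and zno in (50, 75, 200) and challenge > 1e2:
        return False
    return True

conditions = [(a, z, c) for a, z, c in product(AGNP_PPB, ZNO_PPB, CHALLENGES_CFU_PER_ML)
              if tested(a, z, c)]
print(len(conditions), "MNP-challenge combinations, each sampled at", SAMPLE_HOURS, "h")
```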
3 Results and Discussion
The disinfection results of all the various AgNP and ZnO combinations with the
three E. coli testing concentrations are shown in Fig. 1. Several important trends may
be observed. First, as the initial concentration of bacteria increases, less disinfection
is observed at the same respective MNP concentrations overall. In addition, much
higher ZnO concentrations are required to achieve disinfection than AgNPs. For
instance, 20 ppb AgNP and 1000 ppb ZnO achieved log removal values (LRVs) of
− 2.7 ± 0.01, − 2.8 ± 0.17, and − 2.1 ± 0.10 when challenged by water with 10²,
10³, and 10⁵ CFU/mL, respectively, after 24 h (Note: LRV > 0 is growth; LRV < 0
is disinfection). Meanwhile, 5 ppb AgNP and 1000 ppb ZnO achieved LRVs of −
2.6 ± 0.05, − 1.1 ± 0.10, and − 1.4 ± 0.15 at 10², 10³, and 10⁵ CFU/mL, respectively,
after 24 h, of which the latter two observations were significantly different from
the former (p < 0.01) but statistically similar to each other (p > 0.05). LRVs from
both MNP combinations were thus demonstrably worse with increasing bacterial
challenges, though the observed changes were greater with 5 ppb than 20 ppb AgNP.
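For clarity, the sign convention quoted above can be written explicitly; the relation below is a standard log-removal definition inferred from that convention rather than an equation reproduced from the paper, with C₀ and C_t denoting the bacterial concentrations (CFU/mL) before MNP injection and after storage time t.

```latex
\mathrm{LRV}(t) \;=\; \log_{10}\!\left(\frac{C_t}{C_0}\right),
\qquad
\mathrm{LRV}<0 \;\Rightarrow\; \text{net disinfection (e.g., } \mathrm{LRV}=-3 \text{ is a 99.9\% reduction)},
\qquad
\mathrm{LRV}>0 \;\Rightarrow\; \text{net regrowth during storage.}
```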
The impact of MNP concentration on disinfection was also critically impacted
by time. That is, when MNPs were in lower concentrations, bacterial regrowth was
observed. However, when more plentiful, disinfection was maintained. For example,
1 ppb AgNP and 1000 ppb ZnO achieved LRVs of − 2.0 ± 0.11, − 0.8 ± 0.08, and −
0.9 ± 0.11 at 10², 10³, and 10⁵ CFU/mL, respectively, after 24 h, and − 2.7 ± 0.03, +
0.7 ± 0.16, and + 0.2 ± 0.20 after 72 h, respectively. Conversely, 10 ppb AgNP with
1000 ppb ZnO achieved LRVs of − 2.7 ± 0.03, − 2.2 ± 0.07, and − 2.2 ± 0.52 at 10²,
10³, and 10⁵ CFU/mL, respectively, after 24 h, and − 2.6 ± 0.03, − 2.2 ± 0.07, and −
3.0 ± 0.02 after 72 h, respectively. The higher AgNP concentration thus maintained
or improved bacterial disinfection and prevented regrowth in all challenge waters.
Similarly, ZnO was most impactful as storage time increased. For instance, 0.5 ppb
AgNPs in water with an initial bacterial concentration of 10² CFU/mL achieved LRVs
of + 1.2 ± 0.19, + 1.1 ± 0.02, and − 2.3 ± 0.22 with 0, 50, and 1000 ppb ZnO after
24 h, respectively, compared with + 2.1 ± 0.17, + 1.2 ± 0.12, and − 2.7 ± 0.01 after
72 h, respectively. Some bacterial regrowth was thus observed in all cases except for
when combined with 1000 ppb ZnO, with less growth observed as ZnO concentration
increased. When compared with the higher silver concentrations, 20 ppb AgNPs in
water with an initial bacterial concentration of 10⁵ CFU/mL achieved LRVs of +
Fig. 1 Escherichia coli disinfection (log removal value) of AgNPs and ZnO when combined in
various concentrations and when challenged by three levels of bacterial concentration. Removal
occurs when values are below zero, while growth occurs when values are greater than zero
0.5 ± 0.13, + 0.3 ± 0.34, and − 2.1 ± 0.10 with 0, 50, and 1000 ppb ZnO after 24 h,
respectively, compared with + 0.8 ± 0.30, + 0.7 ± 0.11, and − 3.4 ± 0.06 after
72 h, respectively. As ZnO concentration increased, the initial observed disinfection
significantly improved, and the amount of bacterial regrowth significantly reduced
or did not occur (p < 0.01). While ZnO therefore does indeed improve disinfection,
its impacts appear greatest when AgNP concentration is higher, ZnO concentration
is higher, and time increases.
These impacts may be best understood as a function of ZnO disinfection mecha-
nisms. As is detailed by Venis and Basu [15], ZnO does not exhibit strong toxicity
in low concentrations, but rather influences the cell’s gene transcription processes
by “shocking” cells into upregulating the zinc exporting proteins. This upregula-
tion then results in the production of intracellular cysteine, which increases cell
membrane permeability and transiently traps excess intracellular zinc within the
cytoplasm, leading to cell degradation [5, 17]. ZnO therefore weakens E. coli cells,
making it easier for AgNPs to exhibit their toxic effect. As such, the observed disin-
fection impacts were synergistic; as ZnO concentrations increased, more cells were
sufficiently weakened and disinfection impacts by AgNPs within the matrix were
more easily facilitated.
This increasing disinfection with increasing ZnO and AgNP concentrations more-
over results from greater ZnO availability for cell weakening and greater AgNP
availability for exhibiting toxic effects. Namely, when in lower concentrations, fewer
Fig. 2 Contour plots displaying %Change in LRV relative to no-metal control as a function of Ag
and Zn concentrations when challenged by 10² CFU/mL after a 5 h, b 24 h, c 48 h, d 72 h, when
challenged by 10³ CFU/mL after e 5 h, f 24 h, g 48 h, h 72 h, and when challenged by 10⁵ CFU/
mL after i 5 h, j 24 h, k 48 h, and l 72 h
water with 10² CFU/mL, with particularly significant impacts when AgNP concen-
trations were above 10 ppb. Robust multi-stage water treatment, like that
utilized by ceramic water filters, is therefore recommended to ensure MNP efficacy.
For instance, if high bacterial masses pass through a filter matrix, it is possible that
the eluted MNP concentrations will be insufficient to meet the challenge water needs
[14]. Similarly, if filtration is not employed, such as in MNP-impregnated tablets
or cubes [2, 6], this research suggests that achieved disinfection with eluted metal
concentrations may be inadequate; this concern is particularly important if water is
stored for long time periods. Reductions in bacterial concentrations to 10² CFU/mL
or lower will thus provide the highest likelihood of strong MNP efficacy over time.
A final important implication of this work is that ZnO appears to be a very
useful additive for improving AgNP disinfection efficacy. Given its very low relative
cost, the inclusion of ZnO within MNP-based technologies may further result in
improved microbe deactivation while simultaneously reducing technological prices.
More research is, however, required. Future work must investigate ZnO impregna-
tion into technologies including AgNPs and evaluate the amount of each species
that is released during use. Research must also evaluate the relationship between
impregnation and elution to maximize the synergistic MNP effects for the least cost.
4 Conclusions
Acknowledgements We wish to thank the Basu research group, Wine to Water East Africa, and
Wine to Water America for their ongoing and continued support. This research was supported by the
Natural Sciences and Engineering Research Council of Canada, the Queen Elizabeth Scholarship, and Carleton
University.
References
filter care illustrating how locally relevant education may facilitate water technology
uptake, and that regular interfacing between implementers and participants is critical
to addressing long-term water access.
1 Introduction
2 Methods
This study was based in the Kimokouwa ward of the Longido District in the Arusha
Region of northern Tanzania (see Fig. 1). The total population is estimated at approxi-
mately 10,000 inhabitants with a density of about 24/km² [4, 30]. The vast majority of
inhabitants identify as Maasai and practice modes of economic activity such as live-
stock keeping and beadwork. They typically live in Maasai homesteads called bomas,
which are widely dispersed across the ward in sub-villages. Note that bomas are
grouped living quarters with multiple circularly oriented homes within an enclosed
area.
Approximately 350 mm of precipitation falls in this area per year, the vast majority
of which occurs between February and April. Water is predominantly collected from
village taps fed by basins atop Mount Longido, of which Kimokouwa sits at the foot.
During the rainy season, some community members will also collect water directly
from streams fed by mountain runoff, from direct rainwater, or from a shallow well.
Conversely, water is commonly sourced from engineered dams or dugout wells when
the weather is dry.
Fig. 1 Map of Tanzania and Longido district. Data provided by Humanitarian Data Exchange [13]
2.2 Consultations
Like consultations, the program development procedure was highly inclusive with
explicit controls to maintain local ownership of the initiative. Namely, key objec-
tives, participant interaction methods, interview questions, and all other factors were
Fifty Maasai women attending TEMBO’s Kimokouwa Adult Literacy Program were
recruited into the study in June 2019. This program utilizes weekly meetings to
provide participants with lessons on how to read and write in Kiswahili, the Tanza-
nian national language, as well as on other topics such as human rights, hygiene,
and financial literacy. The present research was integrated into this existing struc-
ture. Participants were interviewed before intervention, after which all received a
CWF for use in their home. Seven WaSH education lessons were then provided
twice consecutively for 14 weeks, after which participants were interviewed again
(3-month interviews). Lessons were subsequently paused for another three months
before a second follow-up interview cycle (6-month interviews). Due to disruptions
caused by the COVID-19 pandemic, all programming was paused again for three
months, after which the 14-week WaSH education program was restarted. No inter-
views were conducted at this time (9-months) due to restrictions on home visits. As
such, the third set of interviews was conducted after completing the second educa-
tion session (12-month interviews). Note too that gathering restrictions resulted in a
significantly reduced group of participants being available for this interview cycle.
Furthermore, another three-month pause was then taken, after which a fourth set of
interviews was conducted (15-month interviews). Finally, a third education session
was completed, after which the final interviews were taken (18-month interviews).
Participation increased significantly at 15- and 18-month interviews.
All data was collected using the mWater smartphone app, under a secure license
procured by project partners. Education sessions and interviews were conducted by
Fig. 2 Ceramic water filters. From left to right, a ceramic water filter with water from local source,
ceramic water filters within self-contained units upon delivery, a ceramic water filter in use by a
study participant, clean filter effluent being expelled from a spigot
three research assistants, all of whom are Maasai women themselves who were born
and/or educated within the community. Data was downloaded and stored on a secure
hard drive, which was subsequently analyzed with RStudio. This study received
approval from the Carleton University Research Ethics Board B, as well as from the
Longido District Council and Tanzanian Commission on Science and Technology
(COSTECH).
Note that CWFs are low-cost, home-based, and self-contained water treatment
technologies manufactured less than 70 km from the study community (see Fig. 2).
They have been identified as among the most easily used and desirable point-
of-use water treatment solutions and are thus also among those most commonly
implemented [18, 24]. Details on CWF technical efficacy may also be found in [27].
3 Results
Shown in Fig. 3 are the self-reported diarrheal health outcomes among the partic-
ipating women over the 18-month intervention period. At baseline, 38% reported
having diarrhea with some frequency, which is lower than previous estimates but still
more than three times the Tanzanian national average [12, 20]. Subsequently, reported
diarrhea decreased to 8% of the group at the 3-month interviews and increased to 32%
at the 6-month interviews. A clear relationship was thus observed between education
provision and diarrheal outcomes. That is, when education was provided (between
baseline and 3-month interviews), diarrhea significantly decreased. When program-
ming was paused for three months (3- to 6-month interviews), diarrhea significantly
increased to near-baseline levels. Filter usage and WaSH behavior compliance is
thus believed to have changed during these two consecutive periods, resulting in
differences in observed health.
Similar observations are further made when the program resumed after the
COVID-19 disruptions at the 9-month timepoint. More specifically, 0% of the group
Fig. 3 Self-reported diarrheal health outcomes among participants between baseline and 18-month
interviews
remained engaged over time were more protective of their filters. This prediction is
supported by the observation that participants with broken filters attended an average
of 5.0 ± 2.6 of the original 14 education classes, compared with 9.3 ± 3.6 among
those whose filters remained intact (p < 0.01). Similarly, the overall monthly breakage
rate of 0.67% observed herein was significantly lower than the 2.11% (214/477 filters
over 44 months), 2.79% (25/101 filters over 11 months), and 13.0% (9/50 filters over
6 weeks) monthly breakage rates observed by Brown [5], Roberts [23], and Lemons
et al. [15], respectively.
4 Discussion
The presented results suggest that this intervention translated to good comparative
technological care and significant improvements in associated diarrheal health. For
instance, the decrease in diarrhea immediately after the intervention (i.e., at the 3-
month interview cycle) among those with working filters indicates that CWFs did
indeed reduce contaminant loads substantially enough to reduce the prevalence of
illness. Correspondingly, these observations suggest that participant CWF usage was
directly impacted by education programming. That is, as the weekly meetings were
structured around participatory knowledge communication, locally oriented topic
discussions, community building, and group behavior change support, participants
are believed to have adopted technology usage via commonly referenced persuasion
factors such as (1) attitude, (2) knowledge, (3) perceived benefit, (4) ability, (5)
social influence, and (6) self-persuasion, as defined by Kraemer and Mosler [14].
The situation of programming within an existing group setting focused on learning
and self-improvement is further believed to have translated to participants helping
each other use the filters and achieve improved health outcomes.
5 Conclusions
This study highlights that working closely with community members and tailoring
programming to address specifically identified needs in a manner that matches local
social and cultural institutions is a valuable approach for developing WaSH inter-
ventions. This research investigated methods for improving program outcomes by
implementing a slow research approach, which utilized intensive consultations and
inclusive program development to promote local ownership and management of
the initiative. Fifty Maasai women were subsequently provided with CWFs for
use in their homes in combination with a multi-month participatory WaSH educa-
tion repeated three times over an 18-month evaluation period. Filter breakage and
associated diarrheal health outcomes were measured.
Filter breakage and class attendance were highly correlated, with fewer filters
breaking among those who were more engaged in the program. With this said, filter
Acknowledgements We wish to thank the Basu research group, Tanzanian Education and Micro
Business Opportunity, Wine to Water East Africa, and Wine to Water America for their ongoing
and continued support. We would also like to specifically thank Virginia Taylor, Paulina Sumayani,
Paulina Laizer, Tepeyani Issai, Marie Laiser, Hoyce Mshida, Shane Hillman, and Messiaki Kimrei.
This research was supported by the Natural Sciences and Engineering Research Council of Canada, the Queen
Elizabeth Scholarship, and Carleton University.
References
1. Alaerts GJ (2019) Financing for water—water for financing: a global review of policy and
practice. Sustainability 11(821):1–25
2. Bates B, Kundzewicz ZW, Wu S, Palutikof J (2008) Climate change and water: technical paper
of the intergovernmental panel on climate change, IPCC. IPCC Secretariat, Geneva
3. Bishoge OK (2021) Challenges facing sustainable water supply, sanitation and hygiene
achievement in urban areas in sub-Saharan Africa. Local Environ 26(7):893–907
4. Brinkhoff T (2012) Longido population—with Wards. [Online] Available at: https://www.cit
ypopulation.de/en/tanzania/northern/admin/0207_longido/. Accessed 04 2020
5. Brown JM (2007) Ceramic filters for point of use drinking water treatment in rural Cambodia:
independent appraisal of interventions from 2002–2005. In: Effectiveness of ceramic filtration
for drinking water treatment in Cambodia (PhD thesis). ProQuest Information and Learning
Company; University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, pp 169–246
6. CAWST (2020) WaSH education and training resources. [Online] Available at: https://resour
ces.cawst.org/. Accessed March 2020
7. Chouinard JA (2016) Decolonizing international development evaluation. Canadian J Program
Evaluat 30(3):237–247
8. Clemens B, Douglas TJ (2012) To what degree can potable water foster international economic
development and sustainability? what role does health play? Organiz Manage J 9(2):83–89
9. CMWG (2010) Best practice recommendations for local manufacturing of ceramic pot filters
for household water treatment. Ceramics Manufacturing Working Group, Seattle
10. Dias AP et al (2018) Assessing the influence of water management and rainfall seasonality on
water quality and intestinal parasitism in rural Northeastern Brazil. J Trop Med. https://doi.
org/10.1155/2018/8159354, p 10
11. Dreibelbis R et al (2013) The integrated behavioural model for water, sanitation, and hygiene:
a systematic review of behavioural models and a framework for designing and evalu-
ating behaviour change interventions in infrastructure-restricted settings. BMC Public Health
13:1015
12. Edwin P, Azage M (2019) Geographical variations and factors associated with childhood
diarrhea in Tanzania: a national population based survey 2015–2016. Ethiopian J Health Sci
29(4):513–524
13. Humanitarian Data Exchange (2018) Tanzania administrative level 0–3 boundaries. [Online]
Available at: https://data.humdata.org/dataset/tanzania-administrative-boundaries-level-1-to-
3-regions-districts-and-wards-with-2012-population. Accessed 04 2020
14. Kraemer SM, Mosler H-J (2010) Persuasion factors influencing the decision to use sustainable
household water treatment. Int J Environ Health Res 20(1):61–79
15. Lemons A et al (2016) Assessment of the quality, effectiveness, and acceptability of ceramic
water filters in Tanzania. J Water, Sanit Hygiene Dev 6(2):194–204
16. Liodakis G (2010) Political economy, capitalism and sustainable development. Sustainability
2:2601–2616
17. Marshall L, Kaminsky J (2016) When behaviour change fails: evidence for building WaSH
strategies on existing motivations. J Water Sanit Hygiene Dev 6(2):287–298
18. Martin NA et al (2018) Sustained adoption of water, sanitation and hygiene interventions: a
systematic review. Tropical Med Int Health 23(2):122–135
19. Middleton KR, Anton SD, Perri MG (2013) Long-Term adherence to health behavior change.
Am J Lifestyle Med 395–404
20. Mshida HA, Kassim N, Kimanya ME, Mpolya E (2017) Influence of water, sanitation, and
hygiene practices on common infections among under-five children in Longido and Monduli
Districts of Arusha, Tanzania. J Environ Public Health 9235168:8
21. Njoh AJ, Akiwumi FA (2011) The impact of colonization on access to improved water and
sanitation facilities in African cities. Cities 452–460
22. Ray I, Smith KR (2021) Towards safe drinking water and clean cooking for all. Lancet Global
Health 9:e361–e365
23. Roberts M (2004) Field test of a silver-impregnated ceramic water filter: people-centred
approaches to water and environmental sanitation. Vientiane, Lao, 30th WEDC International
Conference
24. Santos J, Pagsuyoin SA, Latayan J (2016) A multi-criteria decision analysis framework for
evaluating point-of-use water treatment technologies. Clean Technol Environ Policy 18:1263–
1279
25. UN (2021) Sustainable development goals report, Geneva: United Nations
26. UNICEF (2016) Strategy for WaSH 2016–2030. United Nations Children’s Fund, Washington,
DC
27. Venis R, Basu O (2020) Mechanisms and efficacy of disinfection in ceramic water filters: a
critical review. Crit Rev Environ Sci Technol 1–41. Online. https://doi.org/10.1080/10643389.
2020.1806685
28. WHO (2020) Water sanitation hygiene. [Online] Available at: https://www.who.int/water_san
itation_health/en/. Accessed March 2020
29. World Bank (2021) World development indicators. World Bank, Washington, DC
30. Worldometer (2020) Tanzanian yearly population growth rate. [Online] Available at: https://
www.worldometers.info/world-population/tanzania-population. Accessed on 04 2020
Integration of Remote Sensing, MCDM,
and GIS Network Analysis to Better
Locate Waste Treatment and Processing
Facilities in Saskatchewan, Canada,
at a Regional Level
N. Karimi (B) · K. T. W. Ng
Environmental Systems Engineering, University of Regina, Regina, Canada
e-mail: nkg797@uregina.ca
K. T. W. Ng
e-mail: Kelvin.Ng@uregina.ca
Solid waste management and treatment has been recognized as one of the major
global challenges of recent years due to notable population growth and rapid urban
development [5, 17, 46]. Projections reported by the World Bank indicate that,
comparing 2050 with 2016, almost 100 million tonnes of waste will be added to
annual waste generation in every region (the Middle East and North Africa, Sub-
Saharan Africa, Latin America and the Caribbean, North America, South Asia,
Europe and Central Asia, and East Asia and Pacific) [48].
Despite the fact that some waste management plans such as 3R (Reduce, Reuse,
and Recycle) have been practiced to mitigate the amount of waste ending up in landfills [22,
23, 45], landfilling remains the most popular solid waste treatment method in both
developing [1, 44, 47] and developed [26, 27, 40] countries.
Thus, finding suitable sites for future landfills where adverse environmental foot-
prints are minimized is of practical significance. In this regard, multicriteria decision-
making (MCDM) tools are widely practiced to incorporate all decisive variables in
finding suitable sites [18, 32, 39]. For example, MCDM and fuzzy membership
in the Geographic Information System (GIS) environment were adopted in a study
using environmental and socioeconomic parameters, which identified around 1% of
the study area (Shiraz, Iran) as suitable for future landfill sites. Similarly, Karimi et al.
[28] used AHP as an MCDM tool together with remote sensing satellite imagery in Regina,
Canada, and found that potential sites for future landfilling should be located far
from water bodies and protected lands.
Unlike previous studies that only focused on a particular area for siting suitability
analysis, the current study integrated remote communities and their access to suitable
sites into the final selection procedure using NASA Black Marble nighttime light (NTL)
images and GIS network analysis. It is believed that the absence of landfill sites in
remote communities results in adverse consequences including, but not limited to,
environmental degradation such as the presence of odors and leachates, potential effects
on communities' cultural health, and the presence of stacked waste piles [2, 4, 37]. For
example, a child death was reported in a small northern community of Canada in 2016
as a consequence of harvesting valuable wastes, such as lumber and construction
materials, from illegal disposal sites [29].
In addition, remote and small communities incorporated into a site suitability
analysis might benefit from a regional waste management plan that pools financial
resources from neighboring municipalities and local communities [33] and shares
management responsibilities [39].
In the current study, a framework is developed using remote sensing imagery and
GIS network analysis to prioritize potential sites for future landfills at a regional
level where population coverage is optimized and adverse environmental footprints
are minimized. NASA NTL imagery has frequently been used in recent years to trace
and evaluate anthropogenic effects on the environment [9, 24, 41]. Likewise, NASA
NTL imagery is adopted in the current study to detect population centers in remote
rural communities.
Saskatchewan, Canada is selected as the study area to further investigate the results
of the developed method. Saskatchewan's remote communities might benefit the most
from the outcomes of the current study, as they suffer from either the absence of, or a
lack of accessibility to, suitable disposal sites [3, 8, 38].
In the current study, the Province of Saskatchewan, with a total area of 6.5 ×
10⁵ km², is evaluated for site suitability for future landfills. Site suitability analysis
for the province of Saskatchewan is well aligned with the province's goals in devel-
oping innovative solid waste management systems. For example, Saskatchewan
released a solid waste management strategy in January 2020 addressing six goals,
in which enhancing education and awareness of best waste management practices,
developing sustainable waste management solutions, and embracing regional
collaboration to reduce costs for solid waste infrastructure are comprehensively
addressed [21]. The study area along with land use classification is shown
in Fig. 1.
in Fig. 1.
The workflow of the current study is shown in Fig. 2. Potentially suitable sites for land-
fills are identified based on different constraints, including relative distance from
protected areas (forest lands), water bodies, and urban areas (Fig. 2). For example,
lower fuzzified membership grades are assigned to sites that are located in the
vicinity of protected areas. The list of satellite imagery, sources of acquisition, and
their applications is included in Table 1.
Once all remote sensing imagery was collected, the layers were normalized based on
their suitability within the range of 0–1 using the ArcGIS fuzzification method [10].
Then, all layers were integrated to produce the suitability map for future landfills using
the Simple Additive Weighting (SAW) method. SAW has frequently been used to solve
spatial decision-making problems involving multiple variables [25, 28, 30]. The
integrated suitability map for future landfills was classified into five classes
using the quantile method, in which the total area of each class is almost the same [11].
Majority and boundary clean filters were then applied to the rasterized, classified map
of potentially suitable sites to reduce the number of island pixels and to smooth the
edges of the classes, respectively [12, 13].
The most suitable sites in the classified map were converted to polygons, and
their centroid locations were used as candidate points for future landfills. Population
centers were also identified using NTL imagery (Table 1) by converting the brighter
areas to polygons and then to points.
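As a rough illustration of how bright NTL pixels can be turned into population-center points, the sketch below thresholds a radiance array and takes connected-component centroids; the threshold and the toy array are placeholders, and the sketch is a stand-in for the GIS conversion described above rather than the study's workflow.

```python
import numpy as np
from scipy import ndimage

def population_centers(ntl: np.ndarray, threshold: float) -> list[tuple[float, float]]:
    """Return (row, col) centroids of contiguous bright regions in an NTL radiance grid."""
    bright = ntl > threshold           # brighter areas -> candidate settlements
    labels, n = ndimage.label(bright)  # connected components
    return ndimage.center_of_mass(bright, labels, range(1, n + 1))

# Toy radiance grid with two bright clusters.
ntl = np.zeros((6, 6))
ntl[1:3, 1:3] = 50.0
ntl[4, 4] = 80.0
print(population_centers(ntl, threshold=10.0))
```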
For evaluation of population center coverage and accessibility by candidate points
for future landfills, a network dataset was constructed using the Saskatchewan road
vector file, which originated from the Canadian road network [19]. In order to estimate
the total travel time in the network dataset, the speed limit was set to 80 and 50 km/h
for highways and urban streets, respectively, following Saskatchewan's Driver's
Handbook [43].
Fig. 1 Current study area and land cover extracted from Canada Natural Resources [19, 20]
Table 1 Remote sensing imagery, date of acquisition, and details for applications
Satellite imagery | Date/source of acquisition | Details of application
MODIS land cover product for Canada | 2000–2011 (average annual) / [20] | Suitable sites for landfills are distant locations from protected areas, water bodies, and urban lands to mitigate environmental footprints [28, 31, 32]
MODIS land surface temperature | August 2021 (average monthly) / [34] | Suitable sites for landfills might have higher surface temperature to ease material handling [7, 28, 32]
NASA NTL product | 2016 (average annual) / [35] | Population centers and anthropogenic activities can be identified using NTL imagery [36, 42, 49]
Global turns and delays, in seconds, were adopted at the intersections of road
edges [14].
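A minimal sketch of the travel-time weighting step, assuming segment lengths in kilometres and the two road classes named above; the field names and segment lengths are hypothetical rather than taken from the Canadian road network file.

```python
# Assumed road classes and speed limits (km/h) from the text; segment lengths are hypothetical.
SPEED_KMH = {"highway": 80, "urban": 50}

def travel_time_min(length_km: float, road_class: str) -> float:
    """Travel time in minutes for one road segment."""
    return length_km / SPEED_KMH[road_class] * 60

segments = [("highway", 12.0), ("urban", 3.5), ("highway", 40.0)]
print(sum(travel_time_min(length, cls) for cls, length in segments), "min along the route")
```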
After the construction of the network dataset, candidate points for suitable landfill
sites and population centers were assigned as "facilities" and "demand points,"
respectively. The purpose of the network dataset was to detect the candidate points
for suitable landfills that cover the population centers using "minimize facilities" in
the location-allocation tools [15]. The maximum allowed travel time between facilities
and demand points can be specified by the user and is assumed to be 77 min in the
current study.
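Conceptually, "minimize facilities" selects the fewest candidate sites such that every reachable demand point lies within the travel-time cutoff; the greedy set-cover sketch below illustrates that idea under assumed inputs and is not the ArcGIS solver itself.

```python
def minimize_facilities(coverage: dict[str, set[str]], demands: set[str]) -> list[str]:
    """Greedy set cover: pick candidate sites until all coverable demand points are served.

    coverage maps each candidate site to the demand points reachable within the
    travel-time cutoff (e.g., 77 min); demand points outside every set stay uncovered.
    """
    coverable = demands & set().union(*coverage.values())
    chosen, covered = [], set()
    while covered != coverable:
        best = max(coverage, key=lambda site: len(coverage[site] - covered))
        if not coverage[best] - covered:
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen

# Hypothetical example: 3 candidate landfill sites covering 4 population centers.
coverage = {"site_A": {"p1", "p2"}, "site_B": {"p2", "p3"}, "site_C": {"p4"}}
print(minimize_facilities(coverage, {"p1", "p2", "p3", "p4"}))
```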
Figure 3a, b shows the "classified potential suitable sites" and "final suitable sites"
maps, respectively. As shown in Fig. 3a, suitability is divided into five classes, where
1 and 5 are the least and most suitable regions, respectively. The majority of suitable
sites (the 5th class only) are located in the east-central and southern part of the province.
This could be due to the presence of water bodies and protected areas in the central
and northern part of Saskatchewan (Fig. 1). However, there are also some suitable
island regions in the north-western part of the province, probably due to the presence
of temperate grassland (Fig. 1). It seems that the presence of urban areas might not be
a decisive factor in identifying suitable sites. This could be due to the smaller size
of urban areas compared to the study area. The identified isolated suitable areas in the
central part of the province make it possible to locate landfills in the neighborhood
of remote communities, which can consequently mitigate the adverse consequences
of illegal disposal sites. The second and third most suitable classes (the 4th and 3rd
classes) are drawn on the outer edges of the 5th class, reflecting the distance-based
suitability analysis of the current study. Therefore, if any additional constraints such
as socioeconomic factors come into play, these areas can also be considered as
alternative potential sites for future landfills.
(a) (b)
Fig. 3 a Classified potential and b final suitable sites for future landfills in Saskatchewan, Canada
The final suitable sites (Fig. 3b) were obtained from the 5th class in Fig. 3a after
converting the suitable areas to candidate points (41 points). Based on travel time from
candidate points to population centers, only 18 points were selected; these 18 points
are in the vicinity of population centers (the chosen sites in Fig. 3b). Connections and
the number of population centers covered are shown as "connection lines" in Fig. 3b.
Northern chosen sites cover only one population center each, while central and southern
chosen sites cover more population centers. This might be due to the heterogeneous
population distribution in the province, where higher population density can be seen in
the southern regions. Similarly, Ghosh and Ng [16] reported that the
lower population density in northern areas of the province, the skewed distribution of
population, and lack of regulatory plans in remote areas introduced new challenges
to the solid waste management system in Saskatchewan.
Evaluating population coverage is useful not only to prioritize the suitable sites
for future landfills but also to identify which population centers are
not covered by the proposed potential sites. For example, there are some population
centers (blue points in Fig. 3b) in the central and northern regions of the province that
do not have access to any of the identified candidate sites (green and red points).
Likewise, Burns et al. [6] reported that arctic cities might face unique solid waste
management issues due to their smaller communities and remote locations. This is
where the presence of illegal disposal sites and their adverse consequences, such as
land degradation and groundwater pollution, come into play in northern
areas [3, 38, 50]. Thus, provincial solid waste managers might employ regional
approaches where remote communities are integrated [21]. Less suitable areas
(the 4th, 3rd, and 2nd classes in Fig. 3a) might also be used to locate a landfill in the
vicinity of these remote population centers. Larger containers, moved by longer-distance
trucks that link remote communities with central and southern landfills in the province
with less frequent trips, might also be proposed as another alternative
solution.
4 Conclusion
The current study developed a framework for landfill site suitability analysis by aggre-
gating remote sensing imagery, MCDM tools, and GIS network analysis. The method's
applicability was further investigated in Saskatchewan, Canada. Preliminary results
show that the southern and east-central regions of Saskatchewan are suitable for the
development of future landfills, while the northern and central regions might not be
potential candidates. This might be due to the presence of northern forest lands and water
bodies. Less suitable classes extend from the edges of the most suitable sites, reflecting the
linear fuzzified nature of the integrated layer. Population accessibility to landfills
and their coverage implied that only 43.9% of candidate sites are suitable for future
landfills. However, some population centers in the northern and central part of the
province do not have access to neighboring landfills. Locating landfills in less suitable
sites in the vicinity of remote communities and employing long-distance traveling
trucks with frequent waste collection schedules might also be alternative plans to
overcome population center coverage issues. The integrated approach to identifying
potential sites for future landfills showed that population coverage, particularly in
remote communities, should be addressed along with environmental and socioe-
conomic variables. However, there are other constraints, especially in northern areas,
such as the presence of permafrost and different soil conditions, that should also be
incorporated in future studies for accurate selection of landfills.
Acknowledgements The research reported in this paper was supported by a grant from the Natural
Sciences and Engineering Research Council of Canada (RGPIN-2019-06154) to the second author
(K.T.W. Ng), using computing equipment funded by FEROF at the University of Regina. The
authors are grateful for their support. The views expressed herein are those of the writers and not
necessarily those of our research and funding partners.
References
10. ESRI (2022a) Fuzzy membership (spatial analyst), geoprocessing tools. https://pro.arc
gis.com/en/pro-app/2.8/tool-reference/spatial-analyst/fuzzy-membership.htm. Accessed on 22
Feb 2022
11. ESRI (2022b) Quantile classification, data classification methods. https://pro.arcgis.com/en/
pro-app/latest/help/mapping/layer-properties/data-classification-methods.htm. Accessed on
22 Feb 2022
12. ESRI (2022c) Majority filter. Spatial analyst. https://pro.arcgis.com/en/pro-app/2.8/tool-refere
nce/spatial-analyst/majority-filter.htm. Accessed on 22 Feb 2022
13. ESRI (2022d) Boundary clean filter. Spatial analyst. https://pro.arcgis.com/en/pro-app/2.8/
tool-reference/spatial-analyst/boundary-clean.htm#:~:text=The%20Boundary%20Clean%20t
ool%20generalizes,smoothing%20that%20will%20be%20applied. Accessed on 22 Feb 2022
14. ESRI (2022e) Global turns, building and editing the network dataset, network analyst.
https://desktop.arcgis.com/en/arcmap/latest/extensions/network-analyst/global-turns-about-
global-turns.htm#:~:text=Global%20turns%20are%20implicitly%20present,or%20restric
ted%20by%20turn%20features. Accessed on 22 Feb 2022
15. ESRI (2022f) Location allocation analysis, network analysis layers https://desktop.arcgis.com/
en/arcmap/latest/extensions/network-analyst/location-allocation.htm. Accessed on 22 Feb
2022
16. Ghosh A, Ng KTW (2021) Temporal and spatial distributions of waste facilities and solid waste
management strategies in rural and urban Saskatchewan, Canada. Sustainability 13(12):6887.
https://doi.org/10.3390/su13126887
17. Giannis A, Chen M, Yin K, Tong H, Veksha A (2017) Application of system dynamics modeling
for evaluation of different recycling scenarios in Singapore. J Mater Cycles Waste Manage
19(3):1177–1185. https://doi.org/10.1007/s10163-016-0503-2
18. Gorsevski PV, Donevska KR, Mitrovski CD, Frizado JP (2012) Integrating multi-criteria eval-
uation techniques with geographic information systems for landfill site selection: a case study
using ordered weighted average. Waste Manage 32(2):287–296. https://doi.org/10.1016/j.was
man.2011.09.023
19. Government of Canada (2022a) Road network file including road types and directions.
https://open.canada.ca/data/en/dataset/82efb454-3241-4440-a5d4-8b03a42f4df8. Accessed
on 22 Feb 2022
20. Government of Canada (2022b) Studying Canada’s Land Cover from Space. Land cover.
https://www.nrcan.gc.ca/maps-tools-publications/satellite-imagery-air-photos/application-
development/land-cover/21755. Retrieved from 10 Feb 2022
21. Government of Saskatchewan (2020) Solid waste management strategy, Saskatchewan
waste management. https://www.saskatchewan.ca/residents/environment-public-health-and-
safety/saskatchewan-waste-management/solid-waste-management-strategy. Accessed on 20
Feb 2022
22. Hadidi LA, Ghaithan A, Mohammed A, Al-Ofi K (2020) Deploying municipal solid waste
management 3R-WTE framework in Saudi Arabia: challenges and future. Sustainability
12(14):5711. https://doi.org/10.3390/su12145711
23. Inaba R, Tasaki T, Kawai K, Nakanishi S, Yokoo Y, Takagi S (2022) National and subnational
outcomes of waste management policies for 1718 municipalities in Japan: development of a
bottom-up waste flow model and its application to a declining population through 2030. J Mater
Cycles Waste Manage 24(1):155–165. https://doi.org/10.1007/s10163-021-01303-7
24. Karang IW, Astawa G, Ceria A, Lynham J (2021) Detecting religion from space: Nyepi Day in
Bali. Remote Sens Appl: Soc Environ 24:100608. https://doi.org/10.1016/j.rsase.2021.100608
25. Karimi N, Ng KTW (2022) Mapping and prioritizing potential illegal dump sites using
geographic information system network analysis and multiple remote sensing indices. Earth
3(4):1123–1137. https://doi.org/10.3390/earth3040065
26. Karimi N, Ng KTW, Richter A (2021) Prediction of fugitive landfill gas hotspots using a random
forest algorithm and Sentinel-2 data. Sustain Cities Soc 73:103097. https://doi.org/10.1016/j.
scs.2021.103097
27. Karimi N, Ng KTW, Richter A, Williams J, Ibrahim H (2021) Thermal heterogeneity in the
proximity of municipal solid waste landfills on forest and agricultural lands. J Environ Manage
287:112320. https://doi.org/10.1016/j.jenvman.2021.112320
28. Karimi N, Richter A, Ng KTW (2020) Siting and ranking municipal landfill sites in regional
scale using nighttime satellite imagery. J Environ Manage 256:109942. https://doi.org/10.1016/
j.jenvman.2019.109942
29. Keske CM, Mills M, Godfrey T, Tanguay L, Dicker J (2018) Waste management in remote
rural communities across the Canadian North: challenges and opportunities. Detritus 2(1):63.
https://doi.org/10.31025/2611-4135/2018.13641
30. Kontos TD, Komilis DP, Halvadakis CP (2005) Siting MSW landfills with a spatial multiple
criteria analysis methodology. Waste Manage 25(8):818–832. https://doi.org/10.1016/j.was
man.2005.04.002
31. Mallick J (2021) Municipal solid waste landfill site selection based on fuzzy-AHP and geoin-
formation techniques in Asir Region Saudi Arabia. Sustainability 13(3):1538. https://doi.org/
10.3390/su13031538
32. Moeinaddini M, Khorasani N, Danehkar A, Darvishsefat AA, Zienalyan M (2010) Siting MSW
landfill using weighted linear combination and analytical hierarchy process (AHP) method-
ology in GIS environment (case study: Karaj). Waste Manage 30(5):912–920. https://doi.org/
10.1016/j.wasman.2010.01.015
33. Mohit S, Srivastava RK (2021) Regionalization of solid waste management: a proposal for
divisional landfill in the state of Uttar Pradesh, India. J Hazardous, Toxic, Radioactive Waste
25(2):05020006. https://doi.org/10.1061/(ASCE)HZ.2153-5515.0000573
34. NASA (2022a) MODIS Land Surface Temperature and Emissivity (MOD11). https://modis.
gsfc.nasa.gov/data/dataprod/mod11.php. Accessed on 11 Feb 2022
35. NASA (2022b) Backgrounder on VIIRS day/night band and IRS applications. Accessed on
https://www.earthdata.nasa.gov/learn/backgrounders/nighttimelights
36. Oda T, Román MO, Wang Z, Stokes EC, Sun Q, Shrestha RM, Feng S, Lauvaux T, Bun R,
Maksyutov S, Chakraborty S (2021) US cities in the dark: mapping man-made carbon dioxide
emissions over the contiguous US using NASA’s black marble nighttime lights product. Urban
Remote Sens Monit Synthes Model Urban Environ 337–367. https://doi.org/10.1002/978111
9625865.ch16
37. Oyegunle A, Thompson S (2018) Wasting indigenous communities: a case study with garden
hill and Wasagamack First Nations in Northern Manitoba, Canada. J Solid Waste Technol
Manage 44(3):232–247. https://doi.org/10.5276/JSWTM.2018.232
38. Patrick RJ (2018) Adapting to climate change through source water protection: case studies
from Alberta and Saskatchewan, Canada. Int Indigen Policy J 9(3). https://doi.org/10.18584/
iipj.2018.9.3.1
39. Richter A, Ng KTW, Karimi N (2019) A data driven technique applying GIS, and remote sensing
to rank locations for waste disposal site expansion. Resour Conserv Recycl 149:352–362.
https://doi.org/10.1016/j.resconrec.2019.06.013
40. Richter A, Ng KTW, Karimi N, Li RYM (2021) An iterative tessellation-based analytical
approach to the design and planning of waste management regions. Comput Environ Urban
Syst 88:101652. https://doi.org/10.1016/j.compenvurbsys.2021.101652
41. Román MO, Wang Z, Sun Q, Kalb V, Miller SD, Molthan A, Schultz L, Bell J, Stokes EC,
Pandey B, Seto KC, Hall D, Oda T, Wolfe RE, Lin G, Golpayegani N, Devadiga S, Davidson
C, Sarkar S, Masuoka EJ (2018a) NASA’s black marble nighttime lights product suite. Remote
Sens Enviro 210:113–143. https://doi.org/10.1016/j.rse.2018.03.017
42. Román MO, Wang Z, Sun Q, Kalb V, Miller SD, Molthan A, Schultz L, Bell J, Stokes EC,
Pandey B, Seto KC, Hall D, Oda T, Wolfe RE, Lin G, Golpayegani N, Devadiga S, Davidson
C, Sarkar S, Masuoka EJ et al. (2018b) NASA’s Black Marble nighttime lights product suite.
Remote Sens Environ 210:113–143. https://doi.org/10.1016/j.rse.2018.03.017
43. SGI (2022) Speed, Saskatchewan driver’s licensing and vehicle registration. https://www.sgi.
sk.ca/handbook/-/knowledge_base/drivers/speed. Accessed on 22 Feb 2022
Abstract Illegal dumping activities have been identified as a major global issue in
recent years due to their adverse environmental and health impacts. The current study
developed an illegal disposal site (IDS) identification algorithm by aggregating night-
time light (NTL) satellite imagery, remote sensing (RS) indices, and geographical
information system (GIS) network analysis. This algorithm was applied in Division
No. 6 in the province of Saskatchewan, Canada. Among all four detected potential
IDS, three of them were located in the western edges of the city of Regina with an
intensified road network and no landfills in the neighborhood. Travel time evaluation
showed that no populated points are in the proximity of 15 min from the centroids
of potential IDS. All IDS lay along railway routes connecting the city
of Regina with the northwestern, northeastern, and western areas. Potential IDS were
also evaluated with respect to the location of reserve lands. Analysis of the IDS locations
relative to neighboring populated points and reserve lands indicates that IDS 3
should be carefully monitored due to its potential environmental and health impacts.
It is believed that higher travel times from populated points to IDS might reduce the
risk of illegal dumping activities. Results of this study indicate that not only the detection
of IDS but also their accessibility and coverage by neighboring populated
areas should be carefully considered.
N. Karimi (B) · K. T. W. Ng
Environmental Systems Engineering, University of Regina, Regina, Canada
e-mail: nkg797@uregina.ca
K. T. W. Ng
e-mail: Kelvin.Ng@uregina.ca
Illegal disposal site (IDS) identification, evaluation, and management are major
concerns of every solid waste management system in both developing [16] and
developed [51] countries. For example, an infectious disease with a high poten-
tial spreading risk was detected in the neighborhood of IDS in central Tunisia
[3]. Similarly, the presence of around 6000 IDS was identified as a significant
environmental challenge in Slovakia [47].
Adverse environmental impacts of IDS include but are not limited to changes in
local animal habitat pattern, soil and land cover degradation, and water resources
contamination [4, 25]. Additionally, a substantial proportion of the waste manage-
ment budget is dedicated to controlling IDS. For example, over 100 million pounds
of collected taxes in the UK were spent annually on the identification and cleaning
of IDS [5]. The USA allocated around 30% of its solid waste management budget to the
reclamation of IDS [24]. Similarly, Glanville and Chang [17] reported an accumu-
lated cost of approximately 17 million USD in 2013 for managing IDS in Queensland,
Australia. Therefore, the identification of IDS has great practical significance.
The literature proposes different methods for determining potential IDS [7, 55]. For
example, frequent dumping activities have been recognized by image classification and
deep learning algorithms in the USA as an alternative to community-based manual IDS
reporting, camera monitoring, and evaluation [6]. Similarly, video surveil-
lance of potential IDS has been adopted using a classification method called "SCPSR"
to recognize illegal dump trucks [54]. In that method, a four-stage detection
algorithm identified the foreground, wheels, cab, and hopper zones, which were
compared to common dump trucks [54]. Furthermore, Lu [31] introduced a compre-
hensive framework consisting of three steps: identification of a set of red-flag
indicators, development of an analytical model, and evaluation and calibration of the
analytical model to highlight the main determinants associated with IDS.
Among all methods for mapping potential IDS, remote sensing (RS) and
geographical information system (GIS) analysis are increasingly popular tools due
to the availability of low cost and frequent satellite imagery [9, 26, 30]. For example,
a multi-criteria decision-making (MCDM) tool in GIS is used to integrate decisive
factors for potential IDS in Venice lagoon, Italy, considering the location of landfills,
former contaminated sites, quarries, populated areas, and accessibility to roads [2].
The disturbed spectral signature of degraded vegetation cover over IDS was evalu-
ated using IKONOS satellite imagery to identify similar patterns for IDS mapping
[50].
However, the presence of smaller communities in distant locations and the inac-
cessibility of landfills are not fully addressed in many published studies. This is
especially important due to the opaque nature of IDS and illegal anthropogenic
activities [25, 44]. For example, First Nation communities are frequently impacted by
the presence of IDS and its adverse consequences in prairie lands [1, 40]. According
to Oyegunle and Thompson [39], the absence of sanitary landfills, recycling depots,
and waste treatment facilities in Manitoba First Nation communities may promote
open dumping activities, which affect soil and water quality. Reserve lands are more
vulnerable due to the lack of enforcement of environmental regulations and fewer
site remediation projects [23]. Therefore, mapping potential IDS with respect to
populated centers will help to safeguard the health of residents.
The current study aggregated RS indices, vector files, and GIS network analysis
to (i) map and classify potential IDS and (ii) evaluate the accessibility of potential
IDS by neighboring populated points and reserve lands. Nighttime light (NTL)
imagery is used to pinpoint human activities and settlements [52, 56]. For example,
Karimi et al. [28] used NTL imagery in Regina, Canada, to detect populated
areas. These areas were then used to evaluate the suitability of future landfills and the
proximity of landfills to populated areas [28]. It is believed that populated areas
derived from NTL images are important for evaluating the true impacts of IDS [29]. This
study adopted NASA Black Marble NTL imagery to identify populated
regions and assess their impacts.
2 Methods
Table 1 Details of RS imagery and vector file acquisition, parameters' association with IDSs, and type of fuzzification method applied to each parameter
Satellite imagery or vector data | Date | Source | Details of association with IDSs | Fuzzification
MODIS vegetation index (NDVI) | 2021–2022 (average monthly) | [35] | Disturbed vegetation cover (lower NDVI) is associated with the presence of IDS [27, 29, 32] | Inverse linear
MODIS land surface temperature (LST) | 2021–2022 (average monthly) | [36] | Elevated land surface temperature is observed in the vicinity of IDS or disposal sites due to the presence of biodegradable materials [27, 29, 49, 53] | Linear
Location of landfills | 2022 | [22] | Absence of landfills might be associated with frequent occurrence of IDS [33, 34] | Linear
Canada railways and highways | 2021 | [18, 19] | Higher potential of illegal dumping activities in the vicinity of roads and railroads [25, 48] | Linear
Canada reserve lands | 2021 | [20] | Distance of potential IDS from First Nation reserve lands can be further evaluated to protect their communities [39] | N/A
NASA NTL product | 2016 (average annual) | [37] | Human activities, including illegal dumping, and settlements can be identified by NTL satellite images [28, 29, 41, 43] | –
of potential IDS might help to improve waste management [4] or mitigate adverse
impacts on human health [8].
Network analysis is used to further investigate the distance between populated
points, reserve lands, and IDS. Network datasets for the study area were created
using road network files including highways and grid roads (Table 1). Speed limits
were assumed to be 80 and 50 km/h for highways and urban areas, respectively
[46]. The "location allocation" tool in network analysis enables the user to assign
different travel times from any presumed facilities and to evaluate the coverage of the
neighboring area [15]. For this purpose, potential IDS points were modeled as facilities,
and three different travel times (15, 30, and 45 min) were considered.
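A travel-time service area of this kind can be approximated on any weighted road graph with a single-source shortest-path search and a cutoff; the networkx sketch below illustrates the idea on a toy graph whose edge weights are travel times in minutes, as an illustrative stand-in for the ArcGIS location-allocation workflow rather than the study's actual computation.

```python
import networkx as nx

# Toy road graph: nodes are junctions, edge weights are travel times in minutes (assumed values).
G = nx.Graph()
G.add_weighted_edges_from([
    ("ids_centroid", "a", 10), ("a", "b", 8), ("b", "town_1", 5),
    ("a", "town_2", 40), ("b", "town_3", 30),
])

def reachable_within(graph: nx.Graph, source: str, cutoff_min: float) -> set[str]:
    """Nodes reachable from the source within the travel-time cutoff (Dijkstra with cutoff)."""
    lengths = nx.single_source_dijkstra_path_length(graph, source, cutoff=cutoff_min, weight="weight")
    return set(lengths) - {source}

for cutoff in (15, 30, 45):
    print(cutoff, "min:", sorted(reachable_within(G, "ids_centroid", cutoff)))
```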
Figure 3 shows the four potential IDS and the neighboring populated points and
reserve lands (RLs). IDS 2, located in the north, comprises two suspected sites. Three
different travel times from the centroid point of each IDS were used to develop the area
Fig. 3 Final potential IDS and their coverage areas for three different travel times equal to 15,
30, and 45 min
coverage by roads diffusing from the centroid points of the IDSs. Thus, the shaded areas
of IDS 1–4 are not consistent. A larger shaded area is obtained when an IDS comprises
multiple sites and is located within an intensified road network, such as IDS 2.
As shown in Fig. 3, a total of 75% of the sites (IDS 1, IDS 2, IDS 4) were located in the
western part of the city of Regina, probably due to (i) the presence of intensified highways
and railways and (ii) the absence of waste disposal facilities. It is noteworthy
that all four IDS are located in close proximity to the railways (Fig. 3). This is probably
due to the elevated anthropogenic activities in these areas. Similarly, Glanville and
Chang [17] indicated that the pattern of IDS distribution can be partially influenced
by the direction and intensification of road and railroad networks in Queensland,
Australia.
The location of the IDS is further evaluated in relation to populated points (red circles),
landfills (black circles), and RLs (pink regions). None of the populated points fall
within the 0–15 min travel time toward the IDS centroid points, which might mitigate
the adverse health effects of these IDS. There are five populated points located at the
southern edges of IDS 1 with no landfills in this neighborhood. More waste facilities
are recommended. There are a number of scattered populated points in the vicinity
of IDS 2; however, this area is surrounded by three landfills at its western, southern,
and eastern edges. Nissim et al. [38] showed that the presence of regulated landfills
with a well-managed collection plan is important to avoid illegal dumping activities.
IDS 3, located on the east side of Regina, should be monitored carefully. In addition
to the presence of populated points at its southern and eastern edges, the
northern edge of this area is entirely covered by major reserve lands (RL 1).
There are a number of landfills at the eastern edge of IDS 3; however, the travel time
generally exceeds 45 min. Given the close proximity of populated areas and RLs,
closer monitoring of IDS 3 is recommended to protect the public. Results suggest
that RL 2 and RL 3 are less vulnerable to adverse impacts associated with illegal
dumping activities.
Two landfills are located at the southeastern edge of IDS 4, with travel times of 30 and
45 min, respectively, so the accessibility to waste disposal near IDS 4 should be accept-
able. Travel times from IDS 4 to any adjacent populated point well exceed 45 min,
suggesting the lowest immediate risk to human receptors; the risk associated with IDS 4
is therefore likely lower than that of the other sites. However, the long-term effects on
human receptors and the adverse impacts on the surrounding environment should still
be examined. A study on municipal waste collection service in Saskatchewan also
showed a negative association between the distance from waste generation sites to
drop-off locations and the frequency of trips [45].
The cumulative shaded areas across all IDS for travel times of 15, 30, and 45 min
are approximately 1155 km² (6% of the study area), 5433 km² (30% of the study
area), and 10,438 km² (58% of the study area), respectively. Thus, the proposed
method may be used as a screening tool to map the risk of potential IDS. In
addition, the size of the shaded areas reflects the density of road networks
and the level of accessibility by neighboring populations. For the 45 min travel
time, IDS 2 has the greatest shaded area (2954 km²), closely followed by IDS
1 (2898 km²), whereas IDS 3 occupies the smallest shaded area (2070 km²).
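As a quick arithmetic consistency check using only the figures reported above, the study area implied by the 58% figure can be back-calculated and the other two shares reproduced from it; this sketch introduces no additional data.

```python
# Consistency check on the reported cumulative coverage figures.
cumulative_km2 = {15: 1155.0, 30: 5433.0, 45: 10438.0}   # values from the text above
study_area_km2 = cumulative_km2[45] / 0.58                # implied by the 58% figure

for minutes, area in cumulative_km2.items():
    share = 100.0 * area / study_area_km2
    print(f"{minutes} min: {area:,.0f} km^2 ≈ {share:.0f}% of the study area")
# Prints roughly 6%, 30%, and 58%, matching the reported shares.
```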
4 Conclusion
The current study integrates RS imagery, vector files, and a GIS network tool to map poten-
tial IDS and to further investigate their locations in relation to neighboring populated
points and reserve lands. Division No. 6 in Saskatchewan is selected to demonstrate the
applicability of the proposed method, given the provincial movement toward a regional
solid waste management plan. Four potential IDSs are identified. Results indicate
that IDS are more likely to be located near dense road and railroad networks
and in areas without landfills. None of the detected populated points fall
within the 15 min travel-time zone from the centroid location of any IDS, suggesting
a lower immediate risk to human receptors. The long-term effects on humans and the
effects of IDS on other environmental receptors are, however, not assessed. The southern
and northern edges of IDS 3 are in close proximity to populated points and reserve
lands, respectively; as such, a more stringent monitoring program is recommended
to protect public health. Results of this study suggest that not only should the potential
IDS be mapped and evaluated, but their locations relative to populated points and
reserve lands should also be considered.
Acknowledgements The research reported in this paper was supported by a grant from the Natural
Sciences and Engineering Research Council of Canada (RGPIN-2019-06154) to the second author
(K.T.W. Ng), using computing equipment funded by FEROF at the University of Regina. The
authors are grateful for their support. The views expressed herein are those of the writers and not
necessarily those of our research and funding partners.
References
1. Baijius W, Patrick RJ (2019) “We don’t drink the water here”: the reproduction of undrinkable
water for First Nations in Canada. Water 11(5):1079. https://doi.org/10.3390/w11051079
2. Biotto G, Silvestri S, Gobbo L, Furlan E, Valenti S, Rosselli R (2009) GIS, multi-criteria and
multi-factor spatial analysis for the probability assessment of the existence of illegal landfills.
Int J Geogr Inf Sci 23(10):1233–1244. https://doi.org/10.1080/13658810802112128
3. Chelbi I, Mathlouthi O, Zhioua S, Fares W, Boujaama A, Cherni S, Barhoumi W, Dachraoui K,
Derbali M, Abbass M, Zhioua E (2021) The impact of illegal waste sites on the transmission of
zoonotic cutaneous leishmaniasis in Central Tunisia. Int J Environ Res Public Health 18(1):66.
https://doi.org/10.3390/ijerph18010066
4. Chu AM (2021) Illegal waste dumping under a municipal solid waste charging scheme: appli-
cation of the neutralization theory. Sustainability 13(16):9279. https://doi.org/10.3390/su1316
9279
5. CIWM (Chartered Institution of Wastes Management) (2022) Fly Tipping (illegal dumping
of waste). https://www.ciwm.co.uk/ciwm/knowledge/fly-tipping.aspx. Accessed on 1 March
2022
6. Dabholkar A, Muthiyan B, Srinivasan S, Ravi S, Jeon H, Gao J (2017) Smart illegal dumping
detection. Paper presented at the 2017 IEEE third international conference on big data
computing service and applications (BigDataService), pp 255–260
7. Di Fiore V, Cavuoto G, Punzo M, Tarallo D, Casazza M, Guarriello SM, Lega M (2017) Inte-
grated hierarchical geo-environmental survey strategy applied to the detection and investigation
of an illegal landfill: a case study in the Campania Region (Southern Italy). Forensic Sci Int
279:96–105. https://doi.org/10.1016/j.forsciint.2017.08.016
8. Duh D, Hasic S, Buzan E (2017) The impact of illegal waste sites on a transmission of zoonotic
viruses. Virol J 14(1):134. https://doi.org/10.1186/s12985-017-0798-1
9. Esposito G, Matano F, Sacchi M (2018) Detection and geometrical characterization of a buried
landfill site by integrating land use historical analysis, digital photogrammetry and airborne
Lidar data. Geosciences 8(9):348. https://doi.org/10.3390/geosciences8090348
10. ESRI (2022a) Fuzzy membership (Spatial Analyst), Geoprocessing tools. https://pro.arc
gis.com/en/pro-app/2.8/tool-reference/spatial-analyst/fuzzy-membership.htm. Accessed on 3
March 2022
11. ESRI (2022b) Data classification methods, symbolize feature layers. Maps and
scenes. https://pro.arcgis.com/en/pro-app/latest/help/mapping/layer-properties/data-classific
ation-methods.htm. Accessed on 3 March 2022
12. ESRI (2022c) Majority filter (spatial analyst). Geoprocessing tools. https://pro.arcgis.com/en/
pro-app/2.8/tool-reference/spatial-analyst/majority-filter.htm. Accessed on 3 March 2022
13. ESRI (2022d) Boundary clean (spatial analyst). Geoprocessing tools. https://pro.arcgis.com/
en/pro-app/2.8/tool-reference/spatial-analyst/boundary-clean.htm. Accessed on 3 March 2022
14. ESRI (2022e) Converting polygons to points. Production mapping. https://desktop.arcgis.
com/en/arcmap/latest/extensions/production-mapping/converting-polygons-to-points.htm#:~:
text=The%20Production%20Polygon%20To%20Point,values%20for%20all%20common%
20attributes. Accessed on 18 Feb 2022
15. ESRI (2022f) Location-allocation analysis layer. Network analyst. Analysis and geopro-
cessing. https://pro.arcgis.com/en/pro-app/2.8/help/analysis/networks/location-allocation-ana
lysis-layer.htm. Accessed on 15 Feb 2022
16. Ferronato N, Torretta V (2019) Waste mismanagement in developing countries: a review of
global issues. Int J Environ Res Public Health 16(6):1060. https://doi.org/10.3390/ijerph160
61060
17. Glanville K, Chang HC (2015) Mapping illegal domestic waste disposal potential to support
waste management efforts in Queensland, Australia. Int J Geogr Inf Sci 29(6):1042–1058.
https://doi.org/10.1080/13658816.2015.1008002
18. Government of Canada (2022a) Road network files. https://www12.statcan.gc.ca/census-rec
ensement/2011/geo/RNF-FRR/index-eng.cfm. Accessed on 25 Feb 2022
19. Government of Canada (2022b) National railway network—NRWN—GeoBase series.
https://open.canada.ca/data/en/dataset/ac26807e-a1e8-49fa-87bf-451175a859b8/
resource/f57bfbee-e8c6-45a9-87a6-fcfb072f5d00. Accessed on 25 Feb 2022
20. Government of Canada (2022c) Aboriginal lands of Canada legislative boundaries. https:/
/open.canada.ca/data/en/dataset/522b07b9-78e2-4819-b736-ad9208eb1067?activity_id=564
231bf-2dad-444c-9ec1-843740d376b1. Accessed on 25 Feb 2022
21. Government of Canada (2022d) Indigenous peoples in Saskatchewan. Saskatchewan region.
https://www.sac-isc.gc.ca/eng/1601920834259/1601920861675. Accessed on 18 Apr 2022
22. Government of Saskatchewan (2022) Location of landfills. Saskatchewan solid waste manage-
ment. https://geohub.saskatchewan.ca/datasets/saskatchewan-solid-waste-management/exp
lore?location=39.533390%2C105.572473%2C2.95. Accessed on 25 Feb 2022
23. Hykin JB (2016) Contaminated sites on first nation lands. http://www.woodwardandcomp
any.com/wp-content/uploads/pdfs/2016-09-20-Contaminated_Sites_on_First_Nation_Lands-
Final.pdf. Accessed on 18 Apr 2022
24. Ichinose D, Yamamoto M (2011) On the relationship between the provision of waste manage-
ment service and illegal dumping. Resour Energy Econom 33(1):79–93. https://doi.org/10.
1016/j.reseneeco.2010.01.002
25. Jakiel M, Bernatek-Jakiel A, Gajda A, Filiks M, Pufelska M (2019) Spatial and temporal
distribution of illegal dumping sites in the nature protected area: the Ojców National Park,
Poland. J Environ Planning Manage 62(2):286–305. https://doi.org/10.1080/09640568.2017.
1412941
26. Karimi N, Ng KTW (2022) Mapping and prioritizing potential illegal dump sites using
geographic information system network analysis and multiple remote sensing indices. Earth
3(4):1123–1137. https://doi.org/10.3390/earth3040065
27. Karimi N, Ng KTW, Richter A, Williams J, Ibrahim H (2021) Thermal heterogeneity in the
proximity of municipal solid waste landfills on forest and agricultural lands. J Environ Manage
287:112320. https://doi.org/10.1016/j.jenvman.2021.112320
28. Karimi N, Richter A, Ng KTW (2020) Siting and ranking municipal landfill sites in regional
scale using nighttime satellite imagery. J Environ Manage 256:109942. https://doi.org/10.1016/
j.jenvman.2019.109942
29. Karimi N, Ng KTW, Richter A (2022) Development and application of an analytical framework
for mapping probable illegal dumping sites using nighttime light imagery and various remote
sensing indices. Waste Manage 143:195–205. https://doi.org/10.1016/j.wasman.2022.02.031
30. Krtalić A, Poslončec-Petrić V, Vrgoč S (2018) The concept of detecting illegal waste landfills
in the Zagreb area using the remote sensing methods. Geodetski List 72(1):37–54
31. Lu W (2019) Big data analytics to identify illegal construction waste dumping: a Hong Kong
study. Resour Conserv Recycl 141:264–272. https://doi.org/10.1016/j.resconrec.2018.10.039
32. Mahmood K, Batool A, Faizi F, Chaudhry MN, Ul-Haq Z, Rana AD, Tariq S (2017) Bio-thermal
effects of open dumps on surroundings detected by remote sensing—influence of geographical
conditions. Ecol Ind 82:131–142. https://doi.org/10.1016/j.ecolind.2017.06.042
33. Matos J, Oštir K, Kranjc J (2012) Attractiveness of roads for illegal dumping with regard to
regional differences in Slovenia. Acta Geogr Slov 52(2):431–451. https://doi.org/10.3986/AGS
52207
34. Matsumoto S, Takeuchi K (2011) The effect of community characteristics on the frequency
of illegal dumping. Environ Econ Policy Stud 13(3):177–193. https://doi.org/10.1007/s10018-
011-0011-5
35. NASA (2022a) Timely averaged normalized difference vegetation index (NDVI) with a spatial
resolution of 0.05 degrees over the study area and duration between 2021–Jan to 2022–Jan,
product name: MOD13C2 v006. https://giovanni.gsfc.nasa.gov/giovanni/#service=TmA
vMp&starttime=2021-01-01T00:00:00Z&endtime=2022-01-31T23:59:59Z&bbox=-120.
498,43.6084,-95.3613,65.7568&data=MOD13C2_006_CMG_0_05_Deg_Monthly_NDVI.
Accessed on 22 Feb 2022
36. NASA (2022b) Timely averaged land surface temperature (LST) with a spatial resolution
of 0.05 degrees over the study area and duration between 2021-Jan to 2022-Jan, product
name: MOD11C3 v006. https://giovanni.gsfc.nasa.gov/giovanni/#service=TmAvMp&startt
ime=2021-01-01T00:00:00Z&endtime=2022-01-31T23:59:59Z&bbox=-120.498,43.6084,-
95.3613,65.7568&data=MOD11C3_006_LST_Day_CMG&variableFacets=dataFieldMeasur
ement%3ASurface%20Temperature%3B. Accessed on 22 Feb 2022
37. NASA (2022c) NASA black marble satellite imagery for 2016. https://earthobservatory.nasa.
gov/features/NightLights. Accessed on 18 Feb 2022
38. Nissim I, Shohat T, Inbar Y (2005) From dumping to sanitary landfills—solid waste
management in Israel. Waste Manage 25(3):323–327. https://doi.org/10.1016/j.wasman.2004.
06.004
39. Oyegunle A, Thompson S (2018) Wasting indigenous communities: a case study with garden
hill and Wasagamack First Nations in Northern Manitoba, Canada. J Solid Waste Technol
Manage 44(3):232–247. https://doi.org/10.5276/JSWTM.2018.232
40. Patrick RJ (2018) Adapting to climate change through source water protection: case studies
from Alberta and Saskatchewan, Canada. Int Indigenous Policy J 9(3). https://doi.org/10.18584/
iipj.2018.9.3.1
41. Richter A, Ng KTW, Karimi N (2019) A data driven technique applying GIS and remote sensing
to rank locations for waste disposal site expansion. Resour Conserv Recycl 149:352–362. https:/
/doi.org/10.1016/j.resconrec.2019.06.013
42. Richter A, Ng KTW, Karimi N (2021) Meshing Centroidal Voronoi Tessellation with spatial
statistics to optimize waste management regions. J Clean Prod 295:126465. https://doi.org/10.
1016/j.jclepro.2021.126465
43. Román MO, Stokes EC (2015) Holidays in lights: tracking cultural patterns in demand for
energy services. Earth’s Future 3(6):182–205. https://doi.org/10.1002/2014EF000285
44. Ruffell A, Dawson L (2009) Forensic geology in environmental crime: Illegal waste movement
and burial in Northern Ireland. Environ Forens 10(3):208–213. https://doi.org/10.1080/152759
20903140346
45. Salkie FJ, Adamowicz WL, Luckert MK (2001) Household response to the loss of publicly
provided waste removal: a Saskatchewan case study. Resour Conserv Recycl 33(1):23–36.
https://doi.org/10.1016/S0921-3449(01)00055-6
46. Saskatchewan Government Insurance (SGI) (2022) Speed. Saskatchewan Driver’s Hand-
book. https://www.sgi.sk.ca/handbook/-/knowledge_base/drivers/speed#:~:text=In%20the%
20absence%20of%20signs,can%20travel%20under%20ideal%20conditions. Accessed on 17
Feb 2022
47. Šedová B (2016) On causes of illegal waste dumping in Slovakia. J Environ Planning Manage
59(7):1277–1303. https://doi.org/10.1080/09640568.2015.1072505
48. Seror N, Portnov BA (2018) Identifying areas under potential risk of illegal construction and
demolition waste dumping using GIS tools. Waste Manage 75:22–29. https://doi.org/10.1016/
j.wasman.2018.01.027
49. Shaker A, Faisal K, El-Ashmawy N, Yan WY (2010) Effectiveness of using remote sensing
techniques in monitoring landfill sites using multi-temporal Landsat satellite data. Al-Azhar
Univ Eng J 5(1):542–551
50. Silvestri S, Omri M (2008) A method for the remote sensing identification of uncontrolled
landfills: formulation and validation. Int J Remote Sens 29(4):975–989. https://doi.org/10.
1080/01431160701311317
51. Vitali M, Castellani F, Fragassi G, Mascitelli A, Martellucci C, Diletti G, Scamosci E, Astolfi
ML, Fabiani L, Mastrantonio R, Protano C, Spica VR, Manzoli L (2021) Environmental status
of an Italian site highly polluted by illegal dumping of industrial wastes: the situation 15 years
after the judicial intervention. Sci Total Environ 762:144100. https://doi.org/10.1016/j.scitot
env.2020.144100
52. Wang J, Lu F (2021) Modeling the electricity consumption by combining land use types and
landscape patterns with nighttime light imagery. Energy 234:121305. https://doi.org/10.1016/
j.energy.2021.121305
53. Yan WY, Mahendrarajah P, Shaker A, Faisal K, Luong R, Al-Ahmad M (2014) Analysis of
multi-temporal landsat satellite images for monitoring land surface temperature of municipal
solid waste disposal sites. Environ Monit Assess 186(12):8161–8173. https://doi.org/10.1007/
s10661-014-3995-z
54. Yang W, Hu X, Gao R, Liao Q (2016) Dump truck recognition based on SCPSR in videos.
Paper presented at the Chinese conference on pattern recognition, pp 325–333.
55. Zhang R, Yuan F, Cheng E (2018) A network-based approach on detecting dredgers’ illegal
behavior of dumping dredged sediments. Int J Distrib Sens Netw 14(12):1550147718818400.
https://doi.org/10.1177/1550147718818400
56. Zhao N, Liu Y, Hsu F, Samson EL, Letu H, Liang D, Cao G (2020) Time series analysis of
VIIRS-DNB nighttime lights imagery for change detection in urban areas: a case study of
devastation in Puerto Rico from hurricanes Irma and Maria. Appl Geogr 120:102222. https://
doi.org/10.1016/j.apgeog.2020.102222