
FashionQ: An AI-Driven Creativity Support Tool for Facilitating Ideation in Fashion Design

Youngseung Jeon, Ajou University, Republic of Korea, jeonyoungs@ajou.ac.kr
Seungwan Jin, Ajou University, Republic of Korea, jin6491@ajou.ac.kr
Patrick C. Shih, Indiana University Bloomington, United States, patshih@indiana.edu
Kyungsik Han (corresponding author), Ajou University, Republic of Korea, kyungsikhan@ajou.ac.kr

ABSTRACT
Recent research on creativity support tools (CST) adopts artificial intelligence (AI) that leverages big data and computational capabilities to facilitate creative work. Our work aims to articulate the role of AI in supporting creativity with a case study of an AI-based CST in fashion design based on theoretical groundings. We developed AI models by externalizing three cognitive operations (extending, constraining, and blending) that are associated with divergent and convergent thinking. We present FashionQ, an AI-based CST that has three interactive visualization tools (StyleQ, TrendQ, and MergeQ). Through interviews and a user study with 20 fashion design professionals (10 participants for the interviews and 10 for the user study), we demonstrate the effectiveness of FashionQ in facilitating divergent and convergent thinking and identify opportunities and challenges of incorporating AI in the ideation process. Our findings highlight the role and use of AI in each cognitive operation based on professionals' expertise and suggest future implications for AI-based CST development.

CCS CONCEPTS
• Human-centered computing → Interactive systems and tools; • Computing methodologies → Artificial intelligence.

KEYWORDS
creativity support tool, artificial intelligence utilization, fashion design, ideation process, cognitive operation

ACM Reference Format:
Youngseung Jeon, Seungwan Jin, Patrick C. Shih, and Kyungsik Han. 2021. FashionQ: An AI-Driven Creativity Support Tool for Facilitating Ideation in Fashion Design. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 18 pages. https://doi.org/10.1145/3411764.3445093

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
CHI '21, May 8–13, 2021, Yokohama, Japan. © 2021 Association for Computing Machinery. ACM ISBN 978-1-4503-8096-6/21/05, $15.00. https://doi.org/10.1145/3411764.3445093

1 INTRODUCTION
Creativity is the ability to generate and refine ideas. It involves coming up with new approaches to problems, original resolutions to conflicts, or fresh insights from datasets. Furthermore, creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible idea that is both novel and useful as defined within a social context [56]. Organizations consider creativity an important skill that helps identify potential opportunities and enables innovation. Creativity relates to design thinking, one of the core concepts that define human-computer interaction (HCI) and what HCI aims to support. A 2018 survey of creativity-related literature in the ACM Digital Library indicates that HCI is almost exclusively responsible for creativity-oriented publications [25].

Displays of creativity or creative thinking vary depending on the individual, job, or environment. In the case of fashion design, in which artistic creativity plays a significant role in making design task outcomes successful, creativity is highly associated with the number of new ideas that design professionals can generate for a given design task [69]. Importantly, there are also barriers to creativity. For example, during a design task, designers often encounter design fixation, which is an obstacle to the successful completion of a problem [37]. Here divergent thinking and convergent thinking come into play [59, 60]. Divergent thinking develops new ideas by referring to various materials with the aim of expanding or transforming problems of existing ideas, and convergent thinking progressively delimits one's research space and supports finding a design solution that is both new and adapted to various constraints [9, 21, 53]. Research has suggested ways of supporting divergent and convergent thinking based on the following three cognitive operations: (1) extending the notion of concepts [77], (2) constraining concepts [7, 9], and (3) blending two or more concepts [22].

One of the directions taken in creativity research in HCI is to elicit design elements or requirements of creativity support and/or to develop creativity support tools (CST) using computer techniques to facilitate creative thinking [19, 26, 27, 41, 55, 74].

Recently, a growing body of CST research has been adopting artificial intelligence (AI) and focusing on AI-based interface development to model large-scale datasets and provide analytic insights to users in many design domains, such as app interfaces [19, 71], graphics [47], and fashion [39, 74]. Our research shares the same goal as that of prior research in AI-based CST, and we extend previous efforts by (1) identifying and applying AI capabilities to facilitate cognitive operations that could overcome design fixations based on theoretical groundings in divergent and convergent thinking, (2) empirically investigating the role of AI through the development of an AI-based CST and a user study, and (3) discussing directions for the effective use of AI in creativity support. Our research takes the form of a case study of human-AI research.

In this paper, we present an AI-based CST, FashionQ. With the availability of AI in the fashion domain [1, 38, 42], the development of FashionQ was carried out in collaboration with fashion design professionals. Based on interviews with 10 fashion design professionals, we identified three phases of the fashion ideation process (i.e., recognizing a brand, understanding trends, and setting design directions) and externalized three cognitive operations representing the design phases using AI. Based on large-scale runway image data (302,772 images), we developed AI models with capabilities that include fashion attribute detection, style clustering, style forecasting, and style merging, all of which were integrated into FashionQ through three analytical interfaces (StyleQ, TrendQ, and MergeQ).

We conducted a user study with 10 additional fashion design professionals who did not participate in the interview study to evaluate FashionQ. We examined the perceived effectiveness of FashionQ at each design step by means of a comparison analysis between the use and the nonuse of FashionQ. The results indicate that participants found FashionQ to be significantly more effective not only in each of the design steps but also in the overall evaluation of the design task outcomes. Participants responded that they were able to expand the concept of a specific style using the results of attribute-based style groups (StyleQ) and popularity changes over the years (TrendQ) through visualizations; moreover, they noted their ability to access many design directions for potential use from the merged information of fashion styles and trends (MergeQ). We also observed limitations of AI (e.g., accuracy issues, black-box algorithms, limited explanations) that the participants perceived during the design tasks. In particular, the study results highlight the role and use of AI in each cognitive operation based on professionals' expertise. The participants were open and receptive to the AI results when the results could be used as additional fashion information in the ideation process of recognizing a brand and understanding trends. However, the participants applied high and critical standards to the AI results when the results intervened in their area of expertise, namely generating new ideas. In this regard, participants asked for more detailed and controllable functionalities to allow them to interact with AI, in hopes of making AI more customizable, explainable, and interpretable. These results indicate that the utilization of AI or its results should be considered along with user or domain characteristics and the application of human-AI methods, such as human-in-the-loop or crowdsourcing; furthermore, interface types for supporting such methods should be carefully considered in the ideation process.

The following are our research contributions:
• We articulated how AI can be used to help externalize three cognitive operations through the lenses of divergent and convergent thinking.
• We developed an AI-based CST, FashionQ, which leverages AI capabilities to support fashion design professionals' creativity and decision-making.
• We discussed challenges of AI use and possible directions and design implications for reliable AI use in creativity support.

Our research findings and contributions not only extend current CST research by applying AI informed by theoretical perspectives, but also provide insights that can be applied to other domains, such as product design, interior design, and interface design, which are highly dependent on image data of prior design work and case studies for inspiration.

2 RELATED WORK

2.1 Cognitive operations for supporting creativity
Finke et al. [23] identified that restructuring or reorganizing existing concepts provides new understandings in tasks related to creativity. Engaging in both divergent and convergent thinking is one effective approach for people who undertake creative activities [21, 29, 59]. Divergent thinking refers to coming up with new ideas and unexpected solutions in a creative process [59, 60]. Contrary to divergent thinking, convergent thinking refers to the mode of human cognition that strives for the deductive generation of a single, concrete, accurate, and effective solution [29]. Eysenck [21] emphasizes that the support of both divergent and convergent thinking is essential for creativity support. Woodman et al. [80] noted that in order for a creative person to produce socially useful products, his/her divergent thinking must be accompanied by effective convergent thinking.

There are three main cognitive operations that support divergent and convergent thinking. The first operation is extending, for divergent thinking. Ward et al. [78] stated that extending the concepts of instances in conceptual design is helpful for divergent thinking. Bonnardel [8] noted that extending the boundary of instances causes an expansion to a new conceptual design, which can entail creative design solutions. Similarly, Srinivasan and Chakrabarti [69] demonstrated that increasing the number of instances in a conceptual design has a significantly positive relationship with the novelty of design ideas.

The second operation is constraining, for convergent thinking. Constraining means the construction of a "constrained cognitive environment," which delimits the space of research on the basis of different kinds of constraints in order to reach in-depth levels of understanding. Bonnardel [9] highlighted "management of constraints," which delimits designers' research space and helps them evaluate ideas or solutions. These constraints can consist of constructed constraints, which depend on the designers' expertise, or deduced constraints, which depend on the current state of problem solving as well as on previously defined constraints [7]. Constraints provide the designer an opportunity to define, develop, and delimit his/her design space to make it auspicious for creative performance, such as focusing on the direction of designing [53, 70].

[Figure 1: Summary of fashion design development processes (adapted from [44]). Yellow boxes indicate the ideation phases. DeJonge [18]: request made, situation explored, problem perceived, specs described, criteria established, prototype development, evaluation. Watkins [79]: problem acceptance, problem analysis, problem definition, idea generation, solution selection, implementation, evaluation. Lamb & Kallal [45]: problem identification, preliminary ideas, design refinement, prototype development, evaluation, implementation.]
The third operation is blending, for both divergent and convergent thinking. Fauconnier and Turner [22] highlighted that the blending of two or more instances in a conceptual design is indispensable for both divergent and convergent thinking. Louridas [50] argued that much design is bricolage, which refers to a construction or creation of a work from a diverse range of things that are available. Through blending, designers have an opportunity to develop a brand new concept (convergence), and the concept becomes another instance that can broaden designers' insights (divergence) [53].
In this paper, we articulate how we employed these three cognitive operations for divergent and convergent thinking support in the context of fashion design. We present the development of AI models, externalizing the three cognitive operations in a CST with AI capabilities for the purpose of supporting creative thinking.

2.2 Creativity support tool (CST) research
Frich et al. [24] presented a tentative synthesis definition of a CST: a CST runs on one or more digital systems, encompasses creativity-focused features, and is employed to positively influence users of varying expertise in one or more distinct phases of the creative process. Shneiderman [65] proposed a framework to support the development of digital-interactive tools for creative problem-solving. To enhance creativity with a CST, HCI research emphasizes not only applying creative cognition when developing CSTs [17] but also understanding the creative process in the domain [24].
Davis et al. [17] used cognitive theories to explain how CSTs can address the needs in creative tasks. They employed theories of embodied cognition, situated cognition, and distributed cognition for creativity support. Embodied cognition helps make users' ideas more concrete and interactive through interaction between users and embodiments [73]. Situated cognition describes a continuum of competency that shows how tools can support users in creative expression rather than consciously controlling tools [3, 66]. Distributed cognition describes how automating technical skills can support creative engagement and motivation and reduce the barrier to entry [34]. Benedetti et al. [4] implemented a digital painting system, Painting with Bob, considering a concept that reflects novices' unique process of developing creative ideas.
In addition, CST research primarily focuses on three creative processes: ideation, implementation, and evaluation. CSTs for ideation provide cultural and conceptual diversity for collaborative brainstorming settings and additional ideas [61, 62, 75, 76], whereas CSTs for implementation support collaborative digital sketching to improve artistic skills [16, 51, 63]. Furthermore, CSTs for evaluation provide feedback on users' work to offer opportunities to revise the work in a creative way [64, 67]. The central point here is that it is not necessary to include all three processes in the design of a CST; focusing on a single process is also of decisive importance [24].
In this work, we focused on the ideation process, especially in the fashion design domain (Figure 1), considering three cognitive operations (extending, constraining, and blending). Laamanen and Seitamaa-Hakkarainen [43] explained that, during the ideation phase, designers use supporting practices (e.g., collecting, sketching, experimenting) and triggers (e.g., sources of inspiration, mental image, primary generator) for framing the design directions. Previous work on fashion design development processes [18, 45, 79] indicated that designers define problems and generate ideas prior to implementation (Figure 1). We adopted these insights and guidelines when conducting interviews with fashion design professionals, which allowed us to identify detailed processes and challenges in the ideation phases. We identify and discuss potential solutions based on the three cognitive operations for creativity in each process.

2.3 Computer-based support for creativity in the design domain
Much research has investigated ways of using computer technologies to support creativity. Our literature review indicates two main approaches in CST research: crowdsourcing and AI.
A crowdsourcing-based CST helps users expand the boundaries of their thought by providing crowdsourced opinions. Voyant [81] is a CST that allows users to receive feedback on their design work from a selected "crowd." Based on multiple elements of design evaluation, such as first notice, impressions, goals, and guidelines, Voyant offers feedback with coordinated views. Decipher [83] provides designers with feedback through various computer-based functions, such as categorizing crowdsourced feedback, identifying valuable feedback, and prioritizing which feedback to incorporate in a revision. Designers can recognize the strengths and weaknesses of various aspects of their design work and compare the feedback of different providers. However, a crowdsourcing-based CST has some limitations. There may be an issue related to the lack of expertise of the crowd [41]. Conversely, a (novice or young) designer could experience design fixation because they overemphasize information provided by experts, which may inhibit divergent or convergent thinking [15, 52].

[Figure 2: Schematic overview of our supporting design ideation phases based on divergent and convergent thinking (adapted from design funnels [11, 57]). Within the ideation phase that precedes implementation, StyleQ supports divergent thinking, TrendQ supports convergent thinking, and MergeQ supports both divergent and convergent thinking.]

An AI-based CST helps users extend their ideas by applying various modeling and visualization techniques to analyze big data. Rico [19] supports designing a UI layout for mobile applications. It has functionalities to analyze the visual, textual, structural, and interactive design properties of 72,000 popular designs (based on Google Play Store star ratings) with an autoencoder deep learning model [5]. Rico supports the setting of a design direction in various ways. Vaccaro et al. [74] analyzed text and image data related to fashion design on social networking services (SNS). They used latent Dirichlet allocation (LDA) [6] for clustering fashion style topics (25 groups). Based on the results, they built a CST that provided fashion design professionals with design ideas that take TPO (time, place, and occasion) into consideration. RecipeScape is an interactive system for browsing and analyzing the hundreds of recipes of a single dish available online [12]. Based on similarity metrics of the recipe data from natural language processing and human annotation, it used hierarchical clustering to generate recipe clusters.
FashionQ is an AI-based CST that is designed to support divergent and convergent thinking in the ideation process through three interactive visualization interfaces – StyleQ, TrendQ, and MergeQ (Figure 2). It allows the insights obtained from analyzing large-scale fashion image data (302,772 images) to be effectively used. With deep learning models designed for fashion attribute detection, style clustering, and popularity forecasting, FashionQ provides users with the results of AI-based data analyses through visualizations as well as the ability to interact with the results.

3 RESEARCH PROCEDURE
This study primarily comprises three stages: (1) Formative study: the design stage of the AI-based CST for fashion ideation, (2) FashionQ: the development stage of the AI models and the CST interface, and (3) User study & discussion: the evaluation stage conducted through a user study. Figure 3 illustrates the overall research procedure.
In the first stage (Section 4), we interviewed 10 fashion design professionals to obtain an understanding of the fashion design ideation phases, the challenges of each phase, and solutions to the challenges. Based on the results of the interviews, we applied three cognitive operations (i.e., extending, constraining, blending) to support divergent and convergent thinking.
In the second stage (Section 5), we developed AI models that aimed to externalize the three cognitive operations for divergent and convergent thinking. We built FashionQ, an AI-based CST for fashion ideation. FashionQ has three main interactive visualizations: StyleQ, TrendQ, and MergeQ. Each visualization was developed using a single AI model or multiple AI models. These visualizations support the three cognitive operations for creativity (Section 2).
In the third stage (Sections 6 and 7), we evaluated the effectiveness of FashionQ in supporting divergent and convergent thinking, practical usability, and ideation for fashion design. This was achieved by giving the same set of design tasks to two conditions — experimental (use of FashionQ) and control (nonuse of FashionQ but based on current work practice) — and comparing the results of user experience in completing each design task. Study results from the survey and interviews demonstrated that FashionQ effectively supported the ideation process. We discuss insights gleaned from the study, such as strengths, weaknesses, and solutions regarding AI application to creativity support, as well as design implications for the development of an AI-based CST.

4 FORMATIVE STUDY
We conducted interviews with 10 fashion design professionals to understand the ideation process for fashion design, the challenges that interfere with ideation, and solutions to address these challenges using AI-based cognitive operations.

4.1 Interviews with fashion design professionals
All 10 fashion design professionals (8 females and 2 males) majored in fashion design and work in a fashion design company. Their work experience ranges from 3 to 15 years (mean=7.5, SD=3.1). The interviews were conducted in a lab seminar room on a university campus between October 1-15, 2019. Each interview took approximately 60 minutes. Two researchers (the first and second authors) conducted the interviews. The interviews were audio-recorded and transcribed for later analysis.

[Figure 3: Overview of the study procedure. Formative study (Section 4): study fashion design professionals' experience and needs (ideation process, challenges, solutions) and identify solutions with cognitive operations (extending, constraining, blending). FashionQ (Section 5): externalize the cognitive mechanisms with AI (object detection, clustering, forecasting) and develop a CST for ideation (StyleQ, TrendQ, MergeQ). User study & discussion (Sections 6 and 7): demonstrate effectiveness in supporting creativity (survey, interview) and discuss the future of AI-based CSTs (strengths of AI, challenges of AI, design implications).]


During the interview, we asked about (1) their current practices of fashion design ideation, (2) barriers and challenges that interfere with their creative tasks, and (3) potential solutions and coping strategies to address these challenges.
After the interviews were completed, we applied thematic analysis and iterative open coding [14] to analyze the interview transcripts. Two researchers coded and analyzed the transcripts for emerging themes, and the findings were discussed among the co-authors iteratively until consensus was reached.

4.2 Results
Below, we summarize the fashion design ideation process and the current challenges and possible solutions that emerged from the interviews. When reporting interview quotes, we use PfX to denote participant number X in the formative study.

4.2.1 Fashion design ideation process. Our findings identified three phases of the fashion design ideation process:
• Recognizing one's brand: designers individually analyze the style of the past fashion designs of their brand to recognize their brand style.
• Understanding trends: designers identify fashion trends by analyzing the fashion designs that appear at popular major fashion shows through websites and trend reports from a fashion trend analysis company.
• Setting design directions: designers establish the direction of design development based on their understanding of their brand style and trends.
In the recognizing one's brand phase, designers try to identify the brand identity based on their understanding of the brand's past design results. Generally, the fashion style that has been used consistently for a long time is understood as the identity of the brand. Designers identify design attributes (e.g., type of clothes, colors, detailed attributes) that are continuously used or that have high production or sales volumes. Based on this understanding of past design results, designers finally grasp their brand identity. Note that at this stage, designers work individually rather than together.
In the understanding trends phase, designers try to analyze major fashion shows (e.g., "fashion weeks"), which is the most efficient and accurate method to identify current fashion style trends [68]. A fashion week is a fashion industry event lasting approximately one week, during which fashion designers, brands, or houses showcase their latest collections in runway fashion shows to buyers and the media. These events influence trends for the current and upcoming seasons. The most prominent fashion weeks are held in the fashion capitals of the world: New York, London, Milan, and Paris. These so-called "Big Four" receive the majority of press coverage. Designers individually analyze the styles at fashion weeks on websites that provide fashion image data, such as U.S. Vogue,1 where designers can explore the entire range of major fashion weeks from 1993 to now. In addition, designers can exploit the trend reports published by trend analysis companies, such as WGSN,2 which allow them to access fashion design professionals' trend analyses of the styles at major fashion weeks. In this way, designers share opinions with co-workers at meetings in order to define fashion style trends for their company.

1 https://www.vogue.com/fashion-shows
2 https://www.wgsn.com/en/products/fashion



In the setting design directions phase, designers establish design directions by mixing the style of their brand with those in trend reports. In other words, ideation means combining their style with the characteristics of trends in order to redesign their style to be more fashionable, attractive, and valuable for sale. Usually, at this phase, designers need to consider the combinations of representative fashion attributes (e.g., types of clothes, dominant colors, detailed attributes) of their style and the trend styles by roughly sketching clothes. For example, if the representative cloth type of their style is an ankle-length maxi-skirt and the representative detail of the trend styles is beads, a designer might set the direction to include designing a maxi-skirt adorned with beads. Designers try to make as many combinations as possible to extend variations of designs and then establish a direction among the combinations, which normally takes a significant amount of time and effort.

4.2.2 Challenges and possible solutions. We also identified fashion design professionals' thoughts on the challenges that interfere with their ideation during each phase, as well as possible solutions to address these challenges.
First, in the recognizing one's brand phase, designers have difficulty defining fashion styles in the absence of quantitative standards. Since designers tend to define the style of their brands by themselves (relying on experience or intuition), they might recognize a particular style differently. One designer with 15 years of experience noted, "The difference of recognizing styles could be resolved by having a meeting. However, the gray area still exists" (Pf5, which denotes the 5th participant from the formative study). Another designer with eight years of experience remarked, "If we could define a style with some quantitative standard, it would have been very useful for me to extend the boundary to understand a style" (Pf9). Designers experience difficulties that come from defining a style with ambiguous standards. This reveals the potential usefulness of a quantitative metric as a design guide for defining and understanding design styles and boundaries.
In the understanding trends phase, designers also face challenges in the absence of quantitative standards. Designers use trend reports regularly published by third-party fashion companies to understand style trends. However, since these trend reports focus on identifying trends within a single season, it is difficult for designers to obtain a holistic and comprehensive overview of style trends over multiple seasons. Understanding longitudinal style trends is useful for gaining insights into overall style trends. One designer with four years of experience observed, "In the trend meeting, designers tend to infer style trends based on their experience and intuition rather than quantitative data. For example, they might say that I have seen a recent trend of minimalist styles on the street and on social media" (Pf2). The lack of explicit criteria in collecting and analyzing style trends hinders the ability of fashion designers to quickly and accurately grasp fashion trends and set design directions, which also means narrowing down the boundary of selecting trends. Conversely, to facilitate creative thinking, designers strongly wanted to have quantitative, multi-year, large-scale trend information that helps them figure out the quantitative popularity of each trend.
In the setting design directions phase, we observed two challenges. The first is that the task of idea combination is highly time-consuming. Making fashionable combinations between two styles requires expertise. The lack of expertise interferes with a designer's ability to differentiate common and popular style trends from unique design elements that could highlight the designer's brand and are worthy of being introduced in design combination prototypes. A designer with seven years of experience noted, "By trying various combinations, designers try to find valuable and fashionable combinations. Less experienced designers will likely spend a large amount of time making combinations much more than experts" (Pf8). The second challenge is having too many designs to consider. According to the Guardian,3 there are more than 300 fashion shows a season in New York Fashion Week, one of the four major fashion weeks. Given that designers need to analyze designs of multiple fashion shows spanning multiple seasons, the amount of designs is simply out of any individual's control. Designers can only reasonably analyze trends in a limited range (e.g., fashion shows, seasons, cloth types), resulting in an information overload problem. Having limited time to cover such a wide amount of information poses a significant challenge for designers, and one designer with 15 years of experience expressed the following: "It was inefficient to spend a great deal of time on this design task, but a bigger problem is that I could not find more diverse design sources in a limited time" (Pf1). These limitations of personal ability may be a fundamental factor preventing creative thinking. Designers wanted to efficiently combine valuable and fashionable designs (i.e., convergent thinking) while considering various design materials to the greatest extent possible (i.e., divergent thinking) within a relatively short period of time.
In summary, the fashion design professionals responded that the key challenges preventing creative thinking are ambiguous and volatile qualitative criteria in defining a style and limitations to large-scale data access and analysis. To address such challenges, the professionals suggested (and fervently requested) a tool for analyzing a large number of designs across multiple fashion shows and time periods and identifying style trends quantitatively, while also suggesting data-driven style combinations.

4.3 System design goals
Based on the interview results, we derived three major goals for the design of an AI-based CST (Table 1): Goal 1 provides attribute information on designs and style clustering based on the attributes for divergent thinking; Goal 2 provides visualizations for the popularity analysis of a particular style over the seasons for convergent thinking; and Goal 3 combines designs based on attributes of the user's style with a trend style and provides additional fashion show data for ideation, for both convergent and divergent thinking.

5 FASHIONQ
Based on the three design goals, FashionQ supports creativity through divergent and convergent thinking in the ideation process (Figure 4). FashionQ provides three main visualizations: StyleQ, TrendQ, and MergeQ (Figure 5). With FashionQ, fashion design professionals can recognize their style quickly and analytically in a quantitative way, identify fashion trends across seasons, and broaden the extent of ideation with a combination of styles.

3 https://www.theguardian.com/fashion/fashion-blog/2011/sep/16/new-york-fashion-week-numbers

Table 1: Ideation goals of fashion design (formative study)

Ideation phase | Requirement | Goal | Interactive visualization | Cognitive operation
Recognizing one's brand | Style classification | Provide attribute information on designs and style clustering based on the attributes. This helps extend the boundaries of understanding styles (divergent thinking). | StyleQ | Extending
Understanding trends | Style comparison | Provide visualizations for the popularity analysis of particular styles over the seasons. This helps narrow down the boundary of trend styles needed in ideation (convergent thinking). | TrendQ | Constraining
Setting design directions | Style combination | Combine designs based on attributes of the user's style with a trend style and additional fashion show data for ideation. This helps merge into design directions (convergent thinking) and extend boundaries to facilitate new directions in design development (divergent thinking). | MergeQ | Blending

[Figure 4: Supporting creativity through divergent and convergent thinking support. FashionQ was designed to support three cognitive operations – extending, constraining, and blending – by providing StyleQ, TrendQ, and MergeQ, with the support of AI. Possible solutions to the ideation challenges (defining design boundaries, analyzing quantitative trends, combining styles in efficient ways) map onto the three ideation phases (recognizing one's brand, understanding trends, setting design directions). The visualizations are backed by AI models for object detection (RetinaNet) with one-hot encoding of fashion attributes, style clustering (NMF), and trend forecasting (ARIMA): StyleQ shows attribute-based style clustering, TrendQ shows style popularity over seasons, and MergeQ shows intersection style information.]

5.1 StyleQ: Attribute-based quantitative style recommendation (Goal 1)
StyleQ provides clustered styles based on quantitative fashion attributes to extend the boundary of the concept of a particular style by recognizing differences in the criteria of individual designers (extending). This is expected to increase divergent thinking possibilities by allowing designers to think about styles that they have not considered before during the early design process.
For more accurate attribute identification, StyleQ allows a user to choose appropriate attributes among the detected ones. Only user-selected attributes among the attributes found by the object detection model will be retained. The user can take a closer look at the attributes used as criteria for quantitative style clustering. StyleQ then calculates the similarity between the user-selected attributes (A) and the representative attributes (B) of each of the 25 clustered styles by using Jaccard similarity [35]. These 25 styles were derived from 302,772 fashion show images with attribute information (this will be explained in the following section).
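As a concrete illustration of this matching step, the following minimal Python sketch ranks candidate styles by Jaccard similarity between attribute sets. The function names, attribute labels, and style identifiers are illustrative only and are not taken from the FashionQ implementation.

```python
# Sketch of StyleQ's attribute-to-style matching, assuming each style is
# represented by a set of attribute labels. All names here are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A n B| / |A u B| between two attribute sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend_styles(user_attributes, style_representatives, top_k=3):
    """Rank the clustered styles by Jaccard similarity to the
    user-selected attributes and return the top_k matches."""
    scored = [
        (style_id, jaccard(set(user_attributes), set(rep_attributes)))
        for style_id, rep_attributes in style_representatives.items()
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Example usage with made-up attribute labels.
user_attrs = {"jacket_biker", "black", "zipper", "lapel"}
styles = {
    "style_06": {"jacket_biker", "black", "stud", "lapel"},
    "style_10": {"dress_basic_midi", "floral", "lace"},
    "style_14": {"jacket_blouson", "gray", "zipper"},
}
print(recommend_styles(user_attrs, styles))
```

In practice the dictionary of representative attributes would hold one entry per clustered style (25 in FashionQ), and only the attributes the user kept after reviewing the detector output would enter the user set.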

[Figure 5: The FashionQ system with its three main interactive visualization interfaces – StyleQ, TrendQ, and MergeQ. StyleQ (S1-S3): the user uploads a design image; the detected attributes (type of clothes, dominant colors, garment parts, textile pattern, decorations, finishing) can be added to or excluded; the system then suggests three of the 25 fashion clusters based on the similarity between the user-selected attributes and each cluster's representative attributes, and shows 15 representative looks for the chosen style. TrendQ (T1-T2): four trend groups (trending, declining, upcoming, steady), each with three styles showing forecasted popularity changes from 2010 to 2020 and six representative looks. MergeQ (M1-M3): styles that combine whole and part attributes from the two selected styles (intersection attributes), representative looks from the fashion shows where the merged styles were presented (intersection looks), and links to the fashion shows in which many representative looks appeared (intersection shows).]

StyleQ presents the similarity search results as the top three styles, with 15 representative fashion images for each style, to help the user understand the style characteristics and relationships with other styles.
In summary, StyleQ is used in the following order.
• Image upload allows the user to upload his/her own design image (Figure 5-S1).
• Attribute detection allows the user to check the image's attributes and add or exclude them (Figure 5-S2).
• Style recommendation provides the user with three styles (Figure 5-S3) based on the Jaccard similarity between the user-selected attributes and the representative attributes of the 25 styles. When the user chooses a style, FashionQ displays 15 representative looks and information about the attributes of each look.
In the following subsections, we explain the algorithms used for the StyleQ implementation.4

5.1.1 Attribute detection modeling. We defined 146 fashion attributes used in fashion design in the interviews conducted with 10 fashion design professionals. The attributes comprise the type of clothes (60 attributes), dominant colors (14), garment parts (27), textile patterns (21), decorations (15), and textile finishing (9). We collected a total of 25,470 fashion images from the Fashion14 dataset (12,190) [72] and by web crawling Google Images (13,280). We worked on labeling with three fashion design students for a month, and the labeling results were double-checked by a fashion design professional. After the labeling work, we developed a model that detects the 146 defined fashion attributes in fashion images using RetinaNet [48], which shows the best performance in object detection tasks. Our model yielded good performance (Precision=0.47, Recall=0.47, F1-score=0.45) over the baseline performance (Precision=0.32, Recall=0.46, F1-score=0.36) of Faster R-CNN [58], which is widely used for object detection tasks in the computer vision domain.

4 Our work that details the deep-learning algorithms and model performance is currently under review at a different venue.
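To connect the detector output to the clustering step described next, the following sketch shows one plausible way to turn per-image attribute detections into the binary attribute-by-image matrix used for style clustering. The attribute list, confidence threshold, and detection tuples are assumptions for illustration, not details reported in the paper.

```python
import numpy as np

# Hedged sketch: encode each image's detected attributes as a binary vector,
# then stack the vectors column-wise into V (attributes x images).
# ATTRIBUTES stands in for the 146 labels defined with the designers.

ATTRIBUTES = ["jacket_biker", "black", "stripe", "lapel", "dress_basic_midi"]
ATTR_INDEX = {name: i for i, name in enumerate(ATTRIBUTES)}

def one_hot_encode(detections, threshold=0.5):
    """detections: list of (attribute_label, confidence) pairs for one image."""
    vec = np.zeros(len(ATTRIBUTES), dtype=np.float32)
    for label, confidence in detections:
        if confidence >= threshold and label in ATTR_INDEX:
            vec[ATTR_INDEX[label]] = 1.0
    return vec

images = [
    [("jacket_biker", 0.91), ("black", 0.84), ("lapel", 0.62)],
    [("dress_basic_midi", 0.88), ("stripe", 0.47)],  # low-confidence stripe dropped
]
V = np.stack([one_hot_encode(d) for d in images], axis=1)
print(V.shape)  # (5, 2) in this toy example; (146, 302772) at full scale
```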
5.1.2 Style clustering. In order to further populate the fashion image dataset labeled with attributes, we crawled a total of 302,772 images from 8,121 fashion shows from U.S. Vogue between 2010 and 2019. The fashion images cover 987 brand names ranging from mega couture (e.g., Gucci, Chanel) to high street brands (e.g., J.Crew, Topshop) [30]. We labeled attributes in each image using our attribute detection model. Finally, we obtained 302,772 images with 146 attributes.
We used a non-negative matrix factorization (NMF) algorithm [82] for style clustering. It has benefits because each axis in the space derived by NMF has a straightforward correspondence with a document cluster, and document clustering results can be directly derived without additional clustering operations.
In NMF, the entire data matrix V is decomposed into a weight matrix W and a feature matrix H [46]. Data V consists of i rows, the number of attributes (146), and u columns, the number of images (302,772) (Equation 1).

V ≈ WH, where V ∈ R^(i×u), W ∈ R^(i×a), H ∈ R^(a×u)   (1)

V is decomposed with rank r (the number of styles) into W, consisting of attributes i and styles a, and the feature matrix H, consisting of styles a and images u (Equation 2).

V_iu ≈ (WH)_iu = Σ_{a=1}^{r} W_ia H_au   (2)

In the feature matrix H, the style group is labeled according to the attributes of a particular image, and the specific attributes and their importance that make up that style can be seen in W.
In collaboration with fashion design professionals, we identified the appropriate number of groups (clusters) and verified that each style group accurately represents a fashion style. We narrowed down the number of clusters by merging similar style clusters into one cluster. The process consists of the following steps.
• Range identification: In the interviews, the fashion design professionals suggested that the proper number of clusters lies between 25 and 40.5
• Creation of the first style groups: Initially, 40 style clusters, based on the maximum of that range, were created with NMF.
• Selection of representative images: We took 25 images per group, ranked in descending order by how many of the group's top 10 attributes they contain, and prepared a 1,000-sample dataset.
• Merging styles: We asked three fashion design professionals who participated in the previous interviews to merge similar style clusters. As a result, we obtained 25 style clusters. This number seemed quite appropriate given the results of previous studies (14 groups [72]; 30 groups [1]), so we finally extracted 25 style groups using the NMF results.

5 We used some algorithmic methods to set the number of clusters, such as distance-based criteria or the elbow test to remove clusters in PCA based on eigenvalues. However, those algorithms generated 5-10 clusters, which were not in the range that the professionals suggested; thus, we employed NMF instead.

5.2 TrendQ: Quantitative definition of trends for 25 styles (Goal 2)
TrendQ defines "popularity" based on the ratio of the number of style frequencies in the four fashion cities over a 10-year period (2010-2019). The 25 styles were grouped into four categories (Trending, Declining, Comeback, and Steady) depending on the changes in popularity. Using the group selection button, selecting a specific popularity group presents three representative styles of that group. The y-axis refers to the percentage of a certain style's frequency in a given year, and the x-axis refers to the season of the fashion show.

5.2.1 Trend. A fashion style trend can be defined as a change in popularity of a particular style over time [36]. The fashion trend index indicates how many times a style is shown in a given year. As the number of images shown on runways varies across years, we used the relative frequency of each style in a given year (y_st). The number of images I_st of a particular style s, divided by the total number of images Q_t in a given year t, was used as the fashion style trend indicator (Equation 3).

y_st = I_st / Q_t   (3)
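The factorization of Equations 1-2 and the trend indicator of Equation 3 can be sketched with scikit-learn's NMF as follows. The toy matrix, the hard style assignment via an argmax over H, and all variable names are illustrative assumptions; the paper's own pipeline may differ in these details.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hedged sketch of Equations 1-3: factorize the attribute-by-image matrix V
# into W (attribute x style) and H (style x image), assign each image to the
# style with the largest H value, then compute the yearly trend indicator
# y_st = I_st / Q_t. The data below are random placeholders.

rng = np.random.default_rng(0)
n_attributes, n_images, n_styles = 146, 1000, 25

V = (rng.random((n_attributes, n_images)) < 0.1).astype(float)  # V in R^(i x u)
years = rng.integers(2010, 2020, size=n_images)                 # year per image

model = NMF(n_components=n_styles, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)          # W in R^(i x a): attribute importance per style
H = model.components_               # H in R^(a x u): style activation per image
style_of_image = H.argmax(axis=0)   # hard style assignment per image

def trend_indicator(style, year):
    """y_st: share of images of a given style among all images of a year."""
    in_year = years == year
    q_t = in_year.sum()
    i_st = np.logical_and(in_year, style_of_image == style).sum()
    return i_st / q_t if q_t else 0.0

print(round(trend_indicator(style=3, year=2015), 4))
```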

[Figure 6: The procedure of merging two styles in MergeQ. In this example, a designer chose Style 3 from the images that he/she uploaded (StyleQ) and Style 22 from the style trend interface (TrendQ). A style consists of whole attributes and part attributes; here, Style 3's whole attributes include maxi and midi basic dresses and its part attributes include lace, applique, and floral, while Style 22's whole attributes include midi basic and sheath dresses and its part attributes include black, solid, and U-neck. When merging two styles, the whole attributes from one style and the part attributes from the other (and vice versa) are mixed, and the most relevant style for those attributes is suggested (MergeQ).]
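The merging procedure illustrated in Figure 6 (and described in Section 5.3) can be sketched as a small set operation over whole and part attributes, followed by a similarity lookup against the clustered styles. The helper names and attribute sets below are illustrative assumptions, not the FashionQ implementation.

```python
# Hedged sketch of MergeQ's attribute crossover: take the "whole" attributes
# of one style and the "part" attributes of the other (and vice versa), then
# rank the clustered styles by Jaccard similarity to each blended set.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def blend(style_a: dict, style_b: dict):
    """Each style is {'whole': set, 'part': set}; returns the two blends."""
    version_1 = style_a["whole"] | style_b["part"]
    version_2 = style_b["whole"] | style_a["part"]
    return version_1, version_2

def best_match(blended: set, style_representatives: dict):
    return max(style_representatives.items(),
               key=lambda item: jaccard(blended, item[1]))

user_style = {"whole": {"dress_basic_maxi", "dress_basic_midi"},
              "part": {"lace", "applique", "floral"}}
trend_style = {"whole": {"dress_basic_midi", "dress_sheath_midi"},
               "part": {"black", "solid", "u_neck"}}

style_reps = {"style_03": {"dress_basic_maxi", "lace", "floral"},
              "style_22": {"dress_sheath_midi", "black", "solid"}}

for blended in blend(user_style, trend_style):
    print(best_match(blended, style_reps))
```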

For the purpose of developing the forecasting model, we used the data between 2010 and 2017 for training, the data in 2018 for validation, and the data in 2019 for testing. We used an auto-regressive integrated moving average (ARIMA) model [10], which has convincing performance in prediction tasks with a relatively simple structure; the mean absolute error (MAE) of our model is 0.0254. Given the number of samples, this is a reasonable performance considering prior work [1] that predicted style popularity with ARIMA on a much larger number of samples (MAE=0.0186).6

6 Since the sample size in our work is not large enough to be used with a more advanced deep learning model such as LSTM [32], we used another popularly used algorithm, ARIMA [10], which is more appropriate for analyzing our dataset.
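A minimal sketch of this forecasting step with statsmodels is shown below, assuming one yearly trend series per style. The series values, the simplified train/test split, and the ARIMA order are placeholders, since the paper does not report the exact configuration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hedged sketch of the TrendQ forecasting step: fit an ARIMA model to the
# yearly trend indicator y_st of one style and score it with MAE on a
# held-out year. The numbers and the (p, d, q) order are illustrative only.

y = np.array([0.021, 0.025, 0.028, 0.030, 0.034, 0.033, 0.037, 0.041, 0.044, 0.047])
train, test = y[:-1], y[-1:]          # earlier years for fitting, last year held out

model = ARIMA(train, order=(1, 1, 0)).fit()
forecast = model.forecast(steps=1)    # predicted popularity for the next year

mae = float(np.abs(forecast - test).mean())
print(f"forecast={forecast[0]:.4f}, mae={mae:.4f}")
```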
In summary, TrendQ is used in the following order.
• Trend group selection (Figure 5-T1) provides four trend groups based on the frequency of each style's appearance at the four major fashion shows by year.
• Style selection (Figure 5-T2) provides three styles for each trend group. Each style includes information about trend changes and six representative looks. The two styles chosen from StyleQ and TrendQ are considered in MergeQ.
5.3 MergeQ: Style combinations (Goal 3)
MergeQ proposes to the user a style that contains the two styles that the user selected from StyleQ and TrendQ. The purpose of this function is to support the creation of a new combination of attributes by providing a proper combination of the attributes of the two selected styles. By suggesting a style that the designer had not thought of before, MergeQ is expected to expose a designer to more design possibilities and facilitate more divergent and convergent thinking opportunities in the fashion design ideation process.
For style combinations, the 146 attributes were divided into two groups: "Whole" (60 attributes), representing the form of the garment, and "Part" (86 attributes), representing the details of the garment. Then, the Whole of one style and a Part of another style are mixed, and vice versa. Through this process, two types of combinations are generated. For example (Figure 6, MergeQ-Style Version 1), if the Part attributes of the StyleQ style are lace, applique, and floral and the representative Whole attributes of the TrendQ style are midi basic dress and short sheath dress, then a midi basic dress and a short sheath dress decorated with laces, appliques, and a floral pattern become the representative intersection attributes. Furthermore, the opposite case is also produced (Figure 6, MergeQ-Style Version 2). We can clearly see the different images suggested between the two style versions.
MergeQ offers 10 representative looks related to the combination of attribute information, as well as a link to a fashion show with designs that include many combinations of the fashion attributes and detailed explanations of each style. In this way, we hoped to support designers in effectively developing and expanding upon their ideas. MergeQ uses the information about whole and part attributes from the object detection model and about styles from the clustering model to suggest a style with the best match.
In summary, MergeQ is used in the following order.
• Intersection attributes (Figure 5-M1) presents styles that combine the whole attributes from one style and the part attributes from another style (or vice versa).
• Intersection looks (Figure 5-M2) shows 10 representative looks from the fashion shows.
• Intersection shows (Figure 5-M3) provides web links to the fashion shows. A user can check more representative looks from the shows.

6 USER STUDY
The goal of our user study was to determine whether FashionQ supports divergent and convergent thinking, practical usability, and ideation for fashion design.

6.1 Participants
We conducted a user study with 10 fashion design professionals (7 females and 3 males) who are currently working in the fashion design industry. Note that these participants were newly recruited for the FashionQ evaluation study and were different from those in the formative study. The career experience of the professionals ranged from 2 to 11 years (mean=7.0, SD=3.3). Each participant was invited to a university laboratory for the study. Our study was approved by the Institutional Review Board (IRB), and the consent of the participants was sought before the study. Each participant was given a $30 gift certificate after the study.

6.2 Study procedure
We asked the participants to assume that they were a designer at a famous brand and to conduct ideation for fashion design by performing the following three tasks: (1) identify the brand style that represents their own, (2) examine a style trend to reflect, and (3) generate new fashion design ideas. We selected 15 popular brands (e.g., Burberry, Prada, Chanel) that are highly influential in the fashion industry (these brands participated in all four major fashion shows between 2010 and 2019). We prepared 15 fashion show images for each brand's latest fashion show (2020FW). None of the participants worked at nor had close business relationships with those 15 fashion brands, so we were able to minimize the effect of any understanding or experience that they might have of those brands in performing the user study tasks.
We conducted a within-subjects study. Participants were asked to be in both the experimental and control groups, and the order was randomly assigned. Participants in the experimental group were asked to use FashionQ to complete three tasks as follows: (1) identify the brand style, (2) identify a style trend to reflect, and (3) generate a new fashion design by combining the brand style and the trend style. The control group relied on the participants' own experience and ability without using FashionQ. This is considered to be the appropriate baseline condition based on the findings of our formative study. The tasks to complete were the same as those in the experimental group: (1) brand style identification (select a picture of their favorite brand from among 15 pictures and determine the style of the picture), (2) style trend identification (proceed with the search work, such as by using social media, fashion magazine webpages, and fashion blogs, in the way they usually would), and (3) new fashion design generation (they were allowed to refer to the U.S. VOGUE homepage, which has information about fashion show collections, and then select fashion images and make a simple sketch).
The study proceeded as follows:
• Step 1: The participants provided demographic (age, gender) and background (length in years of career as a fashion design professional) information.
• Step 2: The participants were randomly assigned to either the experimental or control group and asked to complete the task in 30 minutes. Participants in the experimental group were instructed for 10 minutes on how to use FashionQ. After the task, they were asked to answer the survey questions.
• Step 3: After a five-minute break, the participants switched groups (from experimental to control or vice versa) and completed the task again for 30 minutes in accordance with the instructions for their new group. They then completed the same survey as in Step 2.
• Step 4: The participants were interviewed by the researchers about the degree of divergent and convergent thinking support offered by FashionQ and its practical use.

6.3 Survey questions
The survey questions consisted of two themes. The first focused on divergent and convergent thinking support and was used only for the evaluation of FashionQ. In the first question set, we used 7-point Likert scales for all questions (1: Strongly Disagree; 7: Strongly Agree). The questions are as follows.
• Q1-1: Did StyleQ help you explore more fashion designs in a particular style? (extending—divergent thinking)
• Q1-2: Did TrendQ help you learn about popular styles from current and historical fashion trends? (constraining—convergent thinking)
• Q1-3: Did MergeQ help you think of other styles through attribute combinations? (blending—divergent thinking)
• Q1-4: Did MergeQ help you consider possible future design directions through attribute combinations? (blending—convergent thinking)
The second theme refers to one's perceived confidence in AI-based results. This question set was used in both the experimental and control groups. In the second question set, we used 7-point Likert scales for all questions (1: Not confident at all; 7: Very confident). The questions are as follows.
• Q2-1: How confident were you about the style you have labeled?
• Q2-2: How confident were you about the trend information you found?
• Q2-3: How confident were you about the ideation results of the style and trend you combined?
• Q2-4: How confident were you about the overall design process?

6.4 Statistical analysis
For the questions about divergent and convergent thinking, we computed descriptive summary statistics. For the questions about perceived confidence in the AI results, we used a paired-samples t-test to determine the statistical significance of the survey responses between the two groups.
questions (p < 0.05; Figure 8). This indicates that FashionQ supported the fashion design ideation outcomes quite well.
6.5 Interview analysis
During the interview, the participants explained in detail how AI externalized the three cognitive operations and influenced ways of divergent and convergent thinking. Table 2 summarizes the interview results and implications. When reporting interview quotes, we use PuX to denote participant number X in the user study.
6.5.1 Fashion design ideation with AI-based CST. All participants answered that FashionQ provided sufficient support in promoting divergent and convergent thinking.
StyleQ provides attribute-based style clustering information to support the extending cognitive operation in divergent thinking. This helped designers determine and expand the range of styles in their repertoire. Participants mentioned that they were able to expand the scope of concepts or increase the number of concepts in a specific style by means of StyleQ. For example: “In the representative photo of the style, there was a design that I hadn’t usually thought of, but it was a recommendation in an understandable range, which expanded the range of the style I was thinking of” (Pu1). “The style I chose is a sexy style. Looking at the attribute information, such as the representative blazer and boxy sweater, which are far from the sexy style standard I thought of. It seemed necessary to distinguish the sexy style boundary that I already knew in greater detail. I was able to come up with a style group that I hadn’t thought of” (Pu4).
Participants answered that the attributes in FashionQ are all used by designers in the field and seem adequate for use in analysis. They mentioned that creating fashion styles based on those attributes seems accurate, for example: “Since the 146 attributes used in FashionQ are quite essential in fashion design work, I think there is a high possibility of covering all styles” (Pu7). In addition, all participants appreciated the number of fashion images (302,772) used in modeling because accessing and analyzing such a large number of images individually is almost impossible. “The fact that it was centered on 300,000 images of the four major fashion shows over 10 years gave us great confidence in the system” (Pu1).
Figure 7: Support for divergent and convergent thinking (average 7-point ratings for Q1-1 through Q1-4).

Figure 8: Fashion design ideation outcomes between the no-FashionQ (C = Control) and FashionQ (E = Experimental) conditions (*p < 0.05, **p < 0.01). Q1: t(19) = -2.74, p = 0.0147*; Q2: t(19) = -2.21, p = 0.0424*; Q3: t(19) = -2.92, p = 0.0107*; Q4: t(19) = -3.04, p = 0.009**.
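To make this comparison easy to reproduce, the sketch below runs the same kind of paired-samples t-test on within-subjects ratings. It is only an illustration: the ratings, variable names, and sample values are our own placeholders, not the study data.

import numpy as np
from scipy import stats

# Hypothetical 7-point ratings from the same 20 participants under both
# conditions (C = without FashionQ, E = with FashionQ); not the study data.
control = np.array([4, 5, 3, 4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4])
experimental = np.array([6, 6, 5, 5, 6, 5, 5, 6, 5, 6, 6, 5, 5, 6, 6, 5, 5, 6, 6, 5])

# Descriptive summary statistics per condition.
print("mean C:", control.mean(), "mean E:", experimental.mean())

# Paired-samples t-test: the two measurements come from the same person.
t, p = stats.ttest_rel(control, experimental)
print(f"t({len(control) - 1}) = {t:.2f}, p = {p:.4f}")

A paired (rather than independent-samples) test is used because the within-subjects design gives each participant a rating under both conditions.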
Table 2: Strengths and weaknesses of AI output. Strengths highlight efficiency and objectivity of data analysis. Weaknesses highlight accuracy, explainability, and interpretability issues. Possible solutions through human-in-the-loop, crowdsourcing, and initiative provision were proposed to address the weaknesses.

Attribute detection — Strengths: AI detects fashion attributes from the image with high accuracy. Weaknesses: Inaccurate prediction of AI decreases designers' trust of the results. Possible solutions: Allow designers to revise the data used in detection modeling (e.g., attribute re-labeling).
Style clustering — Strengths: AI employs fashion attributes as criteria to cluster styles quantitatively and provides results from large-scale data analysis. Weaknesses: Unclear explanation of clustering results (e.g., number of the clusters, meaning of the cluster). Possible solutions: Allow designers to make new clusters based on combinations of attributes they want to see and to compare them with others' clusters to find reasonable combinations.
Trend prediction — Strengths: AI provides results from multi-year, large-scale data analysis and helps designers understand fashion trends. Weaknesses: Unclear explanation of AI decreases designers' trust of the results. Possible solutions: Provide fashion designers with other designers' evaluations of prediction results or allow them to see the forecast accuracy of the same model in different timeframes of past years.
Style merging — Strengths: AI provides style intersections and helps designers understand the style merging process. Weaknesses: Inaccurate and uninterpretable prediction of AI that conflicts with designers' expertise and experience decreases their trust of the results. Possible solutions: Provide an opportunity for designers to reflect their intention toward AI (e.g., attribute weight manipulation, interdomain materials).
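The style merging row describes AI that surfaces the intersection of two styles. As a minimal, purely illustrative sketch of that idea, the snippet below intersects two hypothetical sets of detected attributes and reports a Jaccard-style overlap; the attribute names are invented and this is not FashionQ's actual merging algorithm.

# Hypothetical detected attributes for two styles (illustrative only).
style_a = {"blazer", "boxy sweater", "plaid pattern", "neutral tone"}
style_b = {"slip dress", "floral pattern", "neutral tone", "blazer"}

shared = style_a & style_b          # attributes both styles exhibit
union = style_a | style_b
jaccard = len(shared) / len(union)  # overlap score in [0, 1]

print("shared attributes:", sorted(shared))
print(f"Jaccard overlap: {jaccard:.2f}")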

“The data from the four major fashion shows are familiar to designers, who always refer them in the ideation process. The familiar data gave me a sense of confidence in the overall results for AI” (Pu10).
TrendQ was designed to help fashion design professionals recognize the particular styles that designers should consider for ideation by providing information on the popularity of styles. This supports the constraining cognitive operation in convergent thinking. Participants stated that they were able to focus on the salient styles based on the types of temporal variations and compare their existing knowledge with the changes in long-term trends, for example: “Based on the number-based popularity trend information for each style, I was able to identify six styles that I should consider for ideation. Personally, I tend to find many trends for ideation work, but by using StyleQ, I was able to materialize the particular styles to refer to” (Pu5). “It was helpful to show the style that is currently trending in a trend group and an upcoming group. I think the ideas can be focused more. Also, I can exclude styles in the declining group during ideation” (Pu8).
Participants mentioned that the trend forecasting model in TrendQ gave them a sense of confidence derived from the number-based trend information. Their knowledge or idea of historical information for a certain fashion style was somewhat vague, but TrendQ helped them shape style concepts. For example: “This was the first time I encountered trend data based on frequency of 10 years! The popularity of the style in 2020, which was predicted based on the number of changes in the popularity of a particular style over the years, was also very meaningful” (Pu2). “The 10-year data covers all the designs we need to refer to” (Pu3).
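The quotes above describe number-based popularity trends and trending, upcoming, declining, and steady style groups. The sketch below shows one simple way such groups could be derived from yearly style counts (a least-squares slope with thresholds); the data, thresholds, and grouping rule are our assumptions rather than the forecasting model used in TrendQ.

import numpy as np

# Hypothetical counts of fashion-show looks per style and year (2010-2019).
years = np.arange(2010, 2020)
style_counts = {
    "minimal": np.array([30, 32, 35, 40, 44, 50, 55, 61, 66, 72]),
    "grunge":  np.array([80, 76, 70, 66, 60, 55, 50, 44, 40, 36]),
    "classic": np.array([50, 51, 49, 50, 52, 50, 49, 51, 50, 50]),
}

def trend_group(counts, threshold=1.0):
    # Slope of a least-squares line over the years: positive = rising popularity.
    slope = np.polyfit(years, counts, deg=1)[0]
    if slope > threshold:
        return "trending/upcoming"
    if slope < -threshold:
        return "declining"
    return "steady"

for style, counts in style_counts.items():
    print(style, "->", trend_group(counts))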
MergeQ was designed to support the blending cognitive operation in both divergent and convergent thinking. By providing information on the intersection of two styles, MergeQ shows users new style information and style combinations that have not been tried before. This helps the users think of new ideation methods related to design applications and gives them an opportunity to use forgotten old designs as ideation materials. The following responses were collected: “The suggested merged styles were beyond my expectation. These styles are quite interesting. I need to take a closer look” (Pu1). “When I combined style 1 and style 4, the system suggested a look that partially used the pattern of style 4 on the bottom. I personally like to use the pattern on tops or all over the clothes, but looking at the suggested results, it was interesting to see that I had the opportunity to try different ideations and compared with the existing styles that I am interested in” (Pu3). “MergeQ recommended to me the 2019 Missoni show and the 2010 Rodarte show based on my selection in style merging. The Missoni brand itself is famous, and I personally remember several designs because they were recently announced. Rodarte is a brand I’ve never heard of, and was announced 10 years ago. It was old and unfamiliar, but I found quite a few interesting points to refer to that could blend with my own design. I see the possibility of a new ideation method in a forgotten design” (Pu2).
6.5.2 Weaknesses of AI in CST. One of the critical aspects in AI is its accuracy. We asked the participants about their concerns or reservations when accessing the AI results. First, inaccurate results from the attribute detection model made certain results in TrendQ and MergeQ somewhat questionable. Second, some participants were not sure about the number of styles used in clustering and whether this number covers all fashion styles. Third, when there was a conflict between the prediction of style trends and the participants’ expectations, they were not sure whether they could trust the prediction results. Lastly, when the suggested results in MergeQ were completely different from what was expected, the participants found the results confusing. Overall, it is important to note that all of these cases pertain to the accuracy, explainability, and interpretability of AI models.
6.5.3 Different evaluation criteria. Our interview results highlight one interesting aspect. Although the participants mentioned their perceived issues with the AI models and results, the evaluation criteria were different for each of the ideation phases, and this aspect was quite salient among the participants. In other words, the level of tolerance toward AI was different in different phases.
First, the participants were quite flexible in accepting the results of StyleQ (extending) and TrendQ (constraining). They mentioned that they often exchange many design ideas or opinions at work, which is helpful in coming up with new design ideas, refining ideas, or making design decisions. Thus, for them, the StyleQ and TrendQ results provided additional design information or insights that could facilitate design discussions and assist in making better design decisions. “There were times when the accuracy of StyleQ and TrendQ was low, but that was not a big problem. At least we could get some additional insights” (Pu4). “Design work requires a great deal of collaboration, so communication with others is very important. I feel like FashionQ is another collaborator who gives quantitative insights. Our team doesn’t have such a person” (Pu7).
Second, on the other hand, the participants exhibited high standards when evaluating MergeQ (blending) outcomes. Participants responded that, compared to StyleQ and TrendQ, involvement based on their own experience was necessary in the process of deriving results from MergeQ. “Unlike recognizing brand style and understanding trends, the ideation stage is very important for designers because it is a process that requires creating a brand new direction” (Pu5). We noted that seven participants indicated a high standard or stricter judgment over the MergeQ outcomes. For example, “It seems like MergeQ has more creativity components, but my trust in it is a bit low. If I can select the attribute I want to emphasize in MergeQ, I might be able to trust it more” (Pu4). “Analyzing big data that humans cannot cover is very impressive and meaningful, but the creative ability is difficult to trust. I still think people are more creative than AI” (Pu2). Therefore, our participants were more satisfied with MergeQ’s ability to generate novel style combinations for expanding the design possibilities (blending—divergent thinking), but they had reservations about using MergeQ outcomes for setting future design directions (blending—convergent thinking) compared to the style combinations that they could generate on their own. This explains the relatively lower rating of survey item Q1-4 compared to the other cognitive operations that FashionQ aims to facilitate.

7 DISCUSSION
In this section, we summarize the findings of the study and discuss their implications. We also report the limitations of the study and our plans for future work.

7.1 CST for creativity support
Both the quantitative and qualitative results of our user study confirmed the possibility of externalizing cognitive operations (i.e., extending, constraining, and blending) to support divergent and convergent thinking using an AI-based CST. FashionQ supports extending the designer’s idea space by revealing styles, attributes, and popularity variations (StyleQ), constraining styles based on their trend and popularity information, and blending two styles by presenting the possibility of new style designs through style combinations.
FashionQ presented various paths that could support creative tasks. Participants responded that they discovered the possibility of creative ideation work through divergent thinking supported by our CST. Accessing declining styles in TrendQ helped the participants focus on other trendy styles by excluding those declining ones. Participants also chose steady styles to ideate styles that could be generally acceptable to the public for a long period of time.

7.2 Human and AI interaction for creativity support
One of our study results highlighted the application of AI to cognitive operations, based on the characteristics of tasks, and participants’ evaluation of it. This means that in CST, it is necessary to carefully consider design implications to increase the user-perceived accuracy, explainability, and interpretability of AI. In the following subsections, we discuss the implications of these findings and how to better use AI results in the context of creativity support.

7.2.1 High level of tolerance for inaccuracy in extending and constraining tasks. In recognizing brand style (StyleQ) and understanding trends (TrendQ), users’ tolerance for AI inaccuracy was quite high. When developing an AI model that supports tasks at ideation phases that require extending and constraining, focusing on perfect prediction accuracy may not be entirely necessary. This may be because the primary objectives of the divergent thinking and constraint discovery ideation processes are to maximize fashion designers’ exposure to diverse styles and attributes, and to market trend and popularity constraints, that could help them explore, shape, and redefine design possibilities in order to generate more creative design ideas. In our user study, participants placed high value on being presented with out-of-the-box styles and trends that they were unfamiliar with or had not previously thought of without the assistance of FashionQ. This means that slightly inaccurate predictions of style or trend outcomes will not negatively impact the ideation process, and in some cases might even benefit it, because the bewildering predictions could sometimes inspire creative outcomes. Even if the predictions are completely inaccurate, the designers could quickly discard those ideas and move on to the next concepts.

7.2.2 Demand for high customizability in blending tasks to support convergent thinking. On the other hand, when setting design directions (MergeQ), participants demanded a high level of customizability when they engaged in style combinations, which involve the blending cognitive operation that supports both divergent and convergent thinking. Designers wanted to take the lead in the ideation work of customizing combination attributes when creating new style concepts. Regarding the attribute interaction information in MergeQ, this would mean providing the designers with the authority to manipulate the weight of an attribute for a fashion image and its corresponding style. For example, if a user wants to see intersection information in which the flower pattern is more emphasized, there could be an added feature that allows the designer to increase the attribute weight of the flower pattern or tweak the weights of other pattern attributes. Research has emphasized the importance of granting humans the initiative or control to generate creative outcomes when they co-work with AI. For example, in the case of collaborative drawing with AI, previous research has found that giving humans more control over a major portion of the figure and allowing AI to
Figure 9: Examples of interdomain materials suggested by fashion design professionals. (A)-1 and (A)-2 are cases of inspiration derived from focusing on a single feature. (B)-1 and (B)-2 are cases of inspiration derived from focusing on multiple features (source: Pinterest, http://www.pinterest.com/).

supplement the rest will lead to significantly higher usability than the case without such considerations [54].
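To make the weight-manipulation idea from Section 7.2.2 concrete, the sketch below blends the attribute scores of two styles with per-attribute user weights (for example, boosting a flower pattern). The attribute names, scores, and blending rule are illustrative assumptions, not MergeQ's implementation.

# Hypothetical attribute scores (0-1) for two styles to be merged.
style_a = {"flower pattern": 0.2, "oversized fit": 0.8, "pastel color": 0.6}
style_b = {"flower pattern": 0.7, "oversized fit": 0.1, "pastel color": 0.5}

# Per-attribute user weights; >1 emphasizes, <1 de-emphasizes an attribute.
user_weights = {"flower pattern": 1.5, "oversized fit": 1.0, "pastel color": 0.8}

def blend(a, b, weights, mix=0.5):
    # Convex combination of the two styles, rescaled by the user's weights.
    merged = {}
    for attr in a.keys() | b.keys():
        base = mix * a.get(attr, 0.0) + (1 - mix) * b.get(attr, 0.0)
        merged[attr] = min(1.0, base * weights.get(attr, 1.0))
    return merged

print(blend(style_a, style_b, user_weights))

Exposing the weights dictionary in the interface is one way to give designers the initiative over the blend instead of accepting a fixed intersection.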
7.2.3 Provide non-fashion-related material in blending tasks to support divergent thinking. The fashion designers also wanted a proactive and fundamental approach to divergent thinking beyond manipulating the weight of the attributes in MergeQ. “For divergent thinking to be more active, it would be great if we could provide not only fashion product photos but also material information that is not a fashion product that could offer additional inspirations. It would be nice to present a picture of a building that is similar in shape to a fashion product picture, or a picture with a similar color combination used in fashion products” (Pu2). Some participants recommended pictures used in ideation (Figure 9) for this purpose. They wanted to be provided with various interdomain materials (non-fashion) that represent possible designs of the selected style combinations. Depending on the feature(s) of interdomain materials, fashion design professionals find opportunities for ideation from various sources. In Figure 9, (A)-1 and (A)-2 are cases of inspiration derived from the silhouette of a work of architecture (a single feature), and (B)-1 and (B)-2 are cases of inspiration derived from various features, such as mood, color, fabric, and pattern (multiple features). Providing interdomain materials corresponds to moving. Bonnardel and Marmèche [9] found that when supporting the ideation of a furniture designer, supporting interdomain materials plays an important role in creativity support.
In summary, our study findings indicate that the participants found AI-based CST to be highly valuable for supporting divergent thinking and constraint discovery, but demanded additional customizability features to support convergent thinking and a further expansion of creative non-fashion interdomain source materials to facilitate divergent thinking. This can be explained by the fact that the objective of divergent thinking is to be exposed to as many ideas as possible (whether they are good or bad), whereas in convergent thinking the goal is to filter down to the “best” ideas, and therefore the requirement of customizability is important in refining the ideas into more desirable ones. And given the highly iterative nature of the ideation process in blending, designers further require additional source materials to help them expand the design possibilities when combining multiple styles, attributes, materials, and colors into design ideas. In the following section, we discuss some of the challenges in developing AI models for AI-based CST.

7.3 Challenges in developing AI-based CST
To help researchers, practitioners, and designers in a variety of domains that engage in highly creative processes by utilizing the AI-based CST presented in our study, we discuss challenges and key lessons that need to be considered in future AI-based CST research and development.
During our design and development of FashionQ, we worked closely with fashion design professionals to articulate and define the design processes and fashion attributes. We also asked fashion design students to annotate a dataset of 25,470 images for constructing FashionQ’s object-detection and clustering models. These steps are highly time-consuming and labor-intensive and took us over two months. Recruiting and securing fashion design professionals and students for this work was not easy, as they still faced high workload demands while they assisted us with this study. For this reason, we propose utilizing algorithm-generated attributes in future development of AI-based CST. For example, Banaei et al. [2] summarized 1,104 attributes used in an interior CAD program. Liu et al. [49] used the naming data of the online fashion market and obtained 1,000 attributes. Given that designers expressed a high level of tolerance of AI prediction outcomes in extending and constraining tasks, an AI-based CST that incorporates algorithm-generated attributes should not drastically lower user experience. However, image annotation may vary highly depending on the design domain of inquiry. Therefore, plans for data collection and annotation should be made carefully.
There is another challenge when determining the number of conceptual instances (in our case, the number of style clusters). This was also highlighted by some of the participants, who were
not sure whether 25 is representative enough. The number of styles varied drastically in prior fashion design research (e.g., 30 styles [1], 14 styles [72], 5 styles [40]). In this work, we initially applied a clustering algorithm that automatically determines the number of clusters (i.e., using the elbow test to remove clusters in principal component analysis based on eigenvalues) and generates a small number of clusters, and then worked with fashion designers in an iterative design process that ultimately arrived at a desired number of 25 clusters. Other design domains may have different design constraints, and other clustering methods can be considered and employed in the AI-based CST depending on the specific context. Adhering to the expressed desire for high customizability by our study participants and the general spirit of promoting transparent AI that could improve explainability and interpretability, future AI-based CST could consider preparing the results by different cluster counts and giving users the option to navigate the cluster results and select a cluster number that is deemed appropriate to their use and context.
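As a rough sketch of the workflow described above (an eigenvalue/elbow-style screening followed by user-navigable cluster counts), the snippet below uses PCA and k-means on placeholder data. It is a simplified stand-in under our own assumptions, not the exact clustering pipeline behind FashionQ.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Placeholder attribute vectors for fashion images (e.g., 146-dimensional).
rng = np.random.default_rng(0)
X = rng.random((500, 146))

# Eigenvalue-style screening: keep the components that explain most variance.
pca = PCA().fit(X)
explained = pca.explained_variance_ratio_
k_suggested = int(np.argmax(np.cumsum(explained) >= 0.9)) + 1  # crude elbow proxy

# Let the user navigate alternative cluster counts around the suggestion.
for k in (k_suggested, 14, 25, 30):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: first cluster sizes -> {np.bincount(labels)[:5]} ...")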
7.4 Limitations and future work
Although our study results provide many insights, there still exist some limitations that we plan to address in future studies.
First, the attributes used in our study did not include all possible attributes. We intentionally excluded some of the attributes, such as fabric type, due to an attribute detection accuracy issue of the model. Since the fashion design professionals in our study exhibited a high tolerance for receiving suggestions based on a wide range of AI prediction accuracy when performing extending (StyleQ) and constraining (TrendQ) tasks, it would be reasonable for us to include some of the challenging attributes at the tradeoff of further increasing their exposure to more design possibilities in order to facilitate divergent thinking. In addition, future AI-model development could include other types of time series data for additional analytical insights in trend forecasting. For example, Al-Halah et al. [1] expanded the range of use of forecasting data by combining Amazon sales and style concepts. FashionQ can also be expanded using these data, which can be useful for fashion design professionals when they perform the constraining tasks to come up with new and creative design ideas.
Second, our user study was limited to 30-minute ideation tasks comparing the ideation outcomes and user experiences between the use and nonuse of FashionQ conditions. A more realistic experiment would be a randomized controlled, longitudinal field trial with real fashion designer teams throughout an actual ideation design life cycle, which could span a period of several months. A longitudinal field-based study will allow us to further understand how fashion designers perceive, adopt, and incorporate FashionQ into their existing workflow. Despite the experimental nature of the study, we believe that our study results clearly contributed to a better understanding of AI-based tools for creativity support, which also advances the body of knowledge in human-AI research.
Third, although our research has been framed based on close work with 20 fashion professionals (10 participants for the interviews and 10 for the user study), their insights may not represent the perspectives of all professionals nor all the dynamics of the fashion industry. Thus, the attributes or styles derived from our study may not be applicable to some cases. In addition, the study results could be influenced by the carry-over effect derived from the within-subjects design [31].
As future work, we plan to conduct future AI-based CST research on idea implementation, which is the step that follows ideation (Figure 2). Research has demonstrated that utilizing a model that provides design suggestions [33] through attribute conversion based on generative adversarial networks (GAN) [28] can support rapid prototyping [20]. Applying the FashionQ framework could further contribute to creative research in the idea implementation phase. In addition, we will consider applying the CSI (Creativity Support Index) [13] to assess the overall creativity outcomes and usability of FashionQ in future studies. Finally, we plan to apply the FashionQ framework to other design domains beyond the fashion design industry.

8 CONCLUSION
Modern AI is constantly developing and expanding. Its value and importance are increasing as it is applied to many environments for various purposes. This paper aims to investigate how AI can support creativity and to uncover salient aspects that need to be considered in designing AI-based CST in the context of ideation in the fashion design domain. Creativity is a subjective concept that is applied differently depending on people and environments. In this work, we engaged fashion design professionals to understand their current design practices, goals, and challenges. Through an iterative process with the fashion designers, we carefully designed and developed an AI-based CST that externalizes three cognitive operations — extending, constraining, and blending — in overcoming design fixation during the fashion design ideation process. Our user study showed many promising results and important insights for improving future designs of AI-based CST. We propose future work that could improve FashionQ AI models and interactive features to further support divergent thinking, constraint discovery, and convergent thinking creative processes, apply FashionQ to the idea implementation phase and longitudinal field deployment studies, and expand the FashionQ framework to other creative domains beyond the fashion design industry.

ACKNOWLEDGMENTS
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the program (2019-0-01584, 2020-0-01523) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation) and the program (2020R1F1A1076129) by the NRF (National Research Foundation).

REFERENCES
[1] Ziad Al-Halah, Rainer Stiefelhagen, and Kristen Grauman. 2017. Fashion forward: Forecasting visual style in fashion. In Proceedings of the IEEE International Conference on Computer Vision. 388–397.
[2] Maryam Banaei, Ali Ahmadi, and Abbas Yazdanfar. 2017. Application of AI methods in the clustering of architecture interior forms. Frontiers of Architectural Research 6, 3 (2017), 360–373.
[3] Randall D Beer. 2014. Dynamical systems and embedded cognition. The Cambridge handbook of artificial intelligence 812 (2014), 856–873.
[4] Luca Benedetti, Holger Winnemöller, Massimiliano Corsini, and Roberto Scopigno. 2014. Painting with Bob: assisted creativity for novices. In Proceedings
of the 27th annual ACM symposium on User interface software and technology. 419–428.
[5] Yoshua Bengio. 2009. Learning deep architectures for AI. Now Publishers Inc.
[6] David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research 3, Jan (2003), 993–1022.
[7] Nathalie Bonnardel. 1993. Comparison of Evaluation Processes in Design Activities and Critiquing Systems: A Way to Improve Design Support Systems; CU-CS-681-93. (1993).
[8] Nathalie Bonnardel. 2000. Towards understanding and supporting creativity in design: analogies in a constrained cognitive environment. Knowledge-Based Systems 13, 7-8 (2000), 505–513.
[9] Nathalie Bonnardel and Evelyne Marmèche. 2004. Evocation processes by novice and expert designers: Towards stimulating analogical thinking. Creativity and Innovation Management 13, 3 (2004), 176–186.
[10] George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. 2015. Time series analysis: forecasting and control. John Wiley & Sons.
[11] Bill Buxton. 2010. Sketching user experiences: getting the design right and the right design. Morgan Kaufmann.
[12] Minsuk Chang, Léonore V Guillain, Hyeungshik Jung, Vivian M Hare, Juho Kim, and Maneesh Agrawala. 2018. RecipeScape: An interactive tool for analyzing cooking instructions at scale. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–12.
[13] Erin Cherry and Celine Latulipe. 2014. Quantifying the creativity support of digital tools through the creativity support index. ACM Transactions on Computer-Human Interaction (TOCHI) 21, 4 (2014), 1–25.
[14] Victoria Clarke, Virginia Braun, and Nikki Hayfield. 2015. Thematic analysis. Qualitative Psychology: A Practical Guide to Research Methods (2015), 222–248.
[15] William H Cooper, R Brent Gallupe, Sandra Pollard, and Jana Cadsby. 1998. Some liberating effects of anonymous electronic brainstorming. Small Group Research 29, 2 (1998), 147–178.
[16] Nicholas Davis, Chih-Pin Hsiao, Kunwar Yashraj Singh, Lisa Li, Sanat Moningi, and Brian Magerko. 2015. Drawing apprentice: An enactive co-creative agent for artistic collaboration. In Proceedings of the 2015 C&C Conference on Creativity and Cognition. 185–186.
[17] Nicholas Davis, Holger Winnemöller, Mira Dontcheva, and Ellen Yi-Luen Do. 2013. Toward a cognitive theory of creativity support. In Proceedings of the 9th ACM Conference on Creativity & Cognition. 13–22.
[18] JO DeJonge. 1984. Forward: The design process. Clothing: The portable environment (1984), 40–48.
[19] Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hibschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar. 2017. Rico: A mobile app dataset for building data-driven design applications. In Proceedings of the 30th Annual Symposium on User Interface Software and Technology. 845–854.
[20] Claudia Eckert and Phillip Delamore. 2009. Innovation the Long Way Round: Transferring Rapid Prototyping Technology into Fashion Design. In DS 58-1: Proceedings of ICED 09, the 17th International Conference on Engineering Design, Vol. 1, Design Processes, Palo Alto, CA, USA, 24.-27.08.2009.
[21] HJ Eysenck. 2003. Creativity, personality and the convergent-divergent continuum. Hampton Press.
[22] Gilles Fauconnier and Mark Turner. 1998. Conceptual integration networks. Cognitive Science 22, 2 (1998), 133–187.
[23] Ronald A Finke, Thomas B Ward, and Steven M Smith. 1992. Creative cognition: Theory, research, and applications. (1992).
[24] Jonas Frich, Lindsay MacDonald Vermeulen, Christian Remy, Michael Mose Biskjaer, and Peter Dalsgaard. 2019. Mapping the landscape of creativity support tools in HCI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–18.
[25] Jonas Frich, Michael Mose Biskjaer, and Peter Dalsgaard. 2018. Twenty Years of Creativity Research in Human-Computer Interaction: Current State and Future Directions. In Proceedings of the 2018 Conference on Designing Interactive Systems. 1235–1257.
[26] Karni Gilon, Joel Chan, Felicia Y Ng, Hila Lifshitz-Assaf, Aniket Kittur, and Dafna Shahaf. 2018. Analogy mining for specific design needs. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–11.
[27] Victor Girotto, Erin Walker, and Winslow Burleson. 2017. The effect of peripheral micro-tasks on crowd ideation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 1843–1854.
[28] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. 2672–2680.
[29] Joy Paul Guilford. 1967. The nature of human intelligence. McGraw-Hill.
[30] Yu-I Ha, Sejeong Kwon, Meeyoung Cha, and Jungseock Joo. 2017. Fashion conversation data on Instagram. (2017). arXiv:1704.04137
[31] Carrie Heeter. 1992. Being there: The subjective experience of presence. Presence: Teleoperators & Virtual Environments 1, 2 (1992), 262–271.
[32] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9, 8 (1997), 1735–1780.
[33] Wei-Lin Hsiao, Isay Katsman, Chao-Yuan Wu, Devi Parikh, and Kristen Grauman. 2019. Fashion++: Minimal edits for outfit improvement. In Proceedings of the IEEE International Conference on Computer Vision. 5047–5056.
[34] E Hutchins. 1995. Cognition in the Wild. MIT Press, Cambridge, MA (1995).
[35] Paul Jaccard. 1901. Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines. Bull Soc Vaudoise Sci Nat 37 (1901), 241–272.
[36] Tim Jackson. 2007. The process of trend development leading to a fashion season. In Fashion Marketing. Routledge, 192–211.
[37] David G Jansson and Steven M Smith. 1991. Design fixation. Design Studies 12, 1 (1991), 3–11.
[38] Youngseung Jeon, Seung Gon Jeon, and Kyungsik Han. 2020. Better targeting of consumers: Modeling multifactorial gender and biological sex from Instagram posts. User Modeling and User-Adapted Interaction (2020), 1–34.
[39] Youngseung Jeon, Seungwan Jin, Bogoan Kim, and Kyungsik Han. 2020. FashionQ: An Interactive Tool for Analyzing Fashion Style Trend with Quantitative Criteria. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1–7.
[40] M Hadi Kiapour, Kota Yamaguchi, Alexander C Berg, and Tamara L Berg. 2014. Hipster wars: Discovering elements of fashion styles. In Proceedings of the European Conference on Computer Vision. Springer, 472–488.
[41] Joy Kim, Mira Dontcheva, Wilmot Li, Michael S Bernstein, and Daniela Steinsapir. 2015. Motif: Supporting novice creativity through expert patterns. In Proceedings of the 2015 CHI Conference on Human Factors in Computing Systems. 1211–1220.
[42] Hyokmin Kwon, Jaeho Han, and Kyungsik Han. 2020. ART (Attractive Recommendation Tailor): How the Diversity of Product Recommendations Affects Customer Purchase Preference in Fashion Industry?. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2573–2580.
[43] Tarja-Kaarina Laamanen and Pirita Seitamaa-Hakkarainen. 2014. Interview Study of Professional Designers' Ideation Approaches. The Design Journal 17, 2 (2014), 194–217.
[44] Karen L LaBat and Susan L Sokolowski. 1999. A three-stage design process applied to an industry-university textile product design project. Clothing and Textiles Research Journal 17, 1 (1999), 11–20.
[45] Jane M Lamb and M Jo Kallal. 1992. A conceptual framework for apparel design. Clothing and Textiles Research Journal 10, 2 (1992), 42–47.
[46] Daniel D Lee and H Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature 401, 6755 (1999), 788–791.
[47] Jianan Li, Jimei Yang, Jianming Zhang, Chang Liu, Christina Wang, and Tingfa Xu. 2020. Attribute-conditioned Layout GAN for Automatic Graphic Design. IEEE Transactions on Visualization and Computer Graphics (2020).
[48] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision. 2980–2988.
[49] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. 2016. DeepFashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1096–1104.
[50] Panagiotis Louridas. 1999. Design as bricolage: anthropology meets design thinking. Design Studies 20, 6 (1999), 517–535.
[51] Pedro Lucas and Carlos Martinho. 2017. Stay Awhile and Listen to 3Buddy, a Co-creative Level Design Support Tool. In ICCC. 205–212.
[52] Tanushree Mitra, Clayton J Hutto, and Eric Gilbert. 2015. Comparing person- and process-centric strategies for obtaining quality data on Amazon Mechanical Turk. In Proceedings of the 2015 CHI Conference on Human Factors in Computing Systems. 1345–1354.
[53] Michael Mose Biskjaer, Peter Dalsgaard, and Kim Halskov. 2017. Understanding creativity methods in design. In Proceedings of the 2017 Conference on Designing Interactive Systems. 839–851.
[54] Changhoon Oh, Jungwoo Song, Jinhan Choi, Seonghyeon Kim, Sungwoo Lee, and Bongwon Suh. 2018. I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–13.
[55] Cecil Piya, Vinayak, Senthil Chandrasegaran, Niklas Elmqvist, and Karthik Ramani. 2017. Co-3Deator: A team-first collaborative 3D design ideation tool. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 6581–6592.
[56] Jonathan A Plucker, Ronald A Beghetto, and Gayle T Dow. 2004. Why isn't creativity more important to educational psychologists? Potentials, pitfalls, and future directions in creativity research. Educational Psychologist 39, 2 (2004), 83–96.
[57] Stuart Pugh. 1991. Total design: integrated methods for successful product engineering. Addison-Wesley.
[58] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems. 91–99.
[59] Mark A Runco. 2014. Creativity: Theories and themes: Research, development, and practice. Elsevier.
[60] R Keith Sawyer. 2011. Explaining creativity: The science of human innovation. Oxford University Press.
[61] Jana Schumann, Patrick C Shih, David F Redmiles, and Graham Horton. 2012. Supporting initial trust in distributed idea generation and idea evaluation. In Proceedings of the ACM 2012 International Conference on Supporting Group Work. 199–208.
[62] Patrick C Shih, David H Nguyen, Sen H Hirano, David F Redmiles, and Gillian R Hayes. 2009. GroupMind: supporting idea generation through a collaborative mind-mapping tool. In Proceedings of the ACM 2009 International Conference on Supporting Group Work. 139–148.
[63] Patrick C Shih and Gary M Olson. 2009. Using visualization to support idea generation in context. In Proceedings of the 2009 C&C Conference on Creativity and Cognition in Engineering Design.
[64] Patrick C Shih, Gina Venolia, and Gary M Olson. 2011. Brainstorming under constraints: why software developers brainstorm in groups. (2011).
[65] Ben Shneiderman. 2001. Supporting creativity with advanced information-abundant user interfaces. In Frontiers of Human-centered Computing, Online Communities and Virtual Environments. Springer, 469–480.
[66] Ben Shneiderman, Gerhard Fischer, Mary Czerwinski, Mitch Resnick, Brad Myers, Linda Candy, Ernest Edmonds, Mike Eisenberg, Elisa Giaccardi, Tom Hewett, et al. 2006. Creativity support tools: Report from a US National Science Foundation sponsored workshop. International Journal of Human-Computer Interaction 20, 2 (2006), 61–77.
[67] Vikash Singh, Celine Latulipe, Erin Carroll, and Danielle Lottridge. 2011. The choreographer's notebook: a video annotation system for dancers and choreographers. In Proceedings of the 2011 C&C Conference on Creativity and Cognition. 197–206.
[68] Lise Skov. 2006. The role of trade fairs in the global fashion business. Current Sociology 54, 5 (2006), 764–783.
[69] V Srinivasan and Amaresh Chakrabarti. 2010. Investigating novelty–outcome relationships in engineering design. AI EDAM 24, 2 (2010), 161–178.
[70] Martin Stacey and Claudia Eckert. 2010. Reshaping the box: creative designing as constraint management. International Journal of Product Development 11, 3-4 (2010), 241–255.
[71] Amanda Swearngin and Yang Li. 2019. Modeling Mobile Interface Tappability Using Crowdsourcing and Deep Learning. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–11.
[72] Moeko Takagi, Edgar Simo-Serra, Satoshi Iizuka, and Hiroshi Ishikawa. 2017. What makes a style: Experimental analysis of fashion prediction. In Proceedings of the IEEE International Conference on Computer Vision Workshops. 2247–2253.
[73] Paul Thagard. 2005. Mind: Introduction to cognitive science. Vol. 17. MIT Press, Cambridge, MA.
[74] Kristen Vaccaro, Sunaya Shivakumar, Ziqiao Ding, Karrie Karahalios, and Ranjitha Kumar. 2016. The elements of fashion style. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology. 777–785.
[75] Hao-Chuan Wang, Dan Cosley, and Susan R Fussell. 2010. Idea expander: supporting group brainstorming with conversationally triggered visual thinking stimuli. In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work. 103–106.
[76] Hao-Chuan Wang, Susan R Fussell, and Dan Cosley. 2011. From diversity to creativity: Stimulating group brainstorming with cultural differences and conversationally-retrieved pictures. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work. 265–274.
[77] Thomas B Ward and Cynthia M Sifonis. 1997. Task demands and generative thinking: What changes and what remains the same? The Journal of Creative Behavior 31, 4 (1997), 245–259.
[78] Thomas B Ward, Steven M Smith, and Jyotsna Vaid. 1997. Conceptual structures and processes in creative thought. American Psychological Association.
[79] Susan M Watkins. 1988. Using the design process to teach functional apparel design. Clothing and Textiles Research Journal 7, 1 (1988), 10–14.
[80] Richard W Woodman, John E Sawyer, and Ricky W Griffin. 1993. Toward a theory of organizational creativity. Academy of Management Review 18, 2 (1993), 293–321.
[81] Anbang Xu, Shih-Wen Huang, and Brian Bailey. 2014. Voyant: generating structured feedback on visual designs using a crowd of non-experts. In Proceedings of the ACM 2014 Conference on Computer Supported Cooperative Work & Social Computing. 1433–1444.
[82] Wei Xu, Xin Liu, and Yihong Gong. 2003. Document clustering based on non-negative matrix factorization. In Proceedings of the 26th annual international ACM SIGIR Conference on Research and Development in Information Retrieval. 267–273.
[83] Yu-Chun Grace Yen, Joy O Kim, and Brian P Bailey. 2020. Decipher: An Interactive Visualization Tool for Interpreting Unstructured Design Feedback from Multiple Providers. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
