Abstract

Decision-making assistance from artificial intelligence (AI) during design is only effective when human designers properly utilize the AI's input. However, designers often misjudge the AI's ability, their own, or both, leading to erroneous reliance on the AI and, in turn, to poor designs. To avoid such outcomes, it is crucial to understand how designers' confidence in their AI teammate(s) and in themselves evolves during AI-assisted decision-making. This work therefore conducts a cognitive study exploring how experiencing various AI performance levels that change without notice, together with feedback, affects these confidences and, consequently, the decision to accept or reject AI suggestions. The results first reveal that designers' confidence in the AI agent changes in response to poor, but not good, AI performance in this work. Interestingly, designers' self-confidence is initially unaffected by AI accuracy; when the accuracy changes, however, self-confidence decreases regardless of the direction of the change. Moreover, this work finds that designers tend to infer flawed information from feedback, resulting in inappropriate levels of confidence in both the AI and themselves. Confidence in the AI and self-confidence are also shown to affect designers' probability of accepting AI input in opposite directions in this study. Finally, results uniquely applicable to design are identified by comparing these findings to those from a similar study conducted with a non-design task. Overall, this work offers valuable insights that may enable the detection of designers' dynamic confidence and their consequent misuse of AI input in design.
