Abstract

Understanding design processes and behaviors is important for building more effective design outcomes. During design tasks, teams exhibit sequences of actions that form strategies. This article investigates patterns of design actions in a paired parameter design experiment to discover design strategies that influence outcomes. The analysis uses secondary data from a design experiment in which each pair completes a series of simplified cooperative parameter design tasks to minimize completion time. Analysis of 192 task observations uses exploratory factor analysis to identify design strategies and regression analysis to evaluate their impacts on performance outcomes. The article finds that large actions and high action size variability significantly increase completion times, leading to poor performance outcomes. In contrast, frequently switching the manipulated input parameter and alternating control between designers significantly reduces completion times, leading to better performance outcomes. The discussion argues that larger actions can introduce unexpected errors, while smaller and more consistent actions enhance designers’ understanding of the effects of each action, aiding in better planning for subsequent steps. Frequent controller switching reflects effective communication and shared understanding within design teams, which is crucial for cooperative tasks.

1 Introduction

In today’s world, engineering design teams deal with complex problems. Design behaviors and strategies shape design outcomes, making design processes vital to achieving desired outcomes. During design processes, teams explore, communicate, and conduct decision-making processes that determine their actions and strategies. Understanding design behaviors and identifying strategies that lead to desired outcomes can create more efficient design processes.

Designers exhibit different actions based on their experience level and the complexity of design tasks [1]. Within a narrowly scoped design task, micro strategies are defined as sequences of actions that designers perform to reach expected outcomes in a design process [1]. Identifying successful design strategies by grouping observed actions during a design task can inform future studies and help industry enhance design processes.

Complex collaborative engineering systems, such as an aircraft or spacecraft design, consist of multiple interdependent subsystems. Changes made in one subsystem interact with the other subsystems, and all subsystems need to be in harmony for the entire design to work once integrated. For instance, in the case of an aircraft, if the wings and fuselage subsystems do not meet each other’s requirements, the aircraft would not be able to function.

As an example of system design activities during preliminary concept selection, architectural trade space exploration searches for efficient solutions within a large set of alternatives [2,3]. Typically, a morphological matrix identifies a set of design features (parameters) and their possible values, producing a large combinatorial trade space of alternatives [3]. Evaluating a preference for each alternative can be practically difficult or computationally time-intensive, limiting direct use of design optimization methods [3]. Instead, interactive trade space exploration uses humans to “steer” evaluations toward efficient solutions [4–6]. Furthermore, complex systems may require multiple participants to work together in a shared trade space exploration activity. Multi-stakeholder trade space exploration assigns control over design parameters or preference attributes among multiple decision-makers [7,8].

Trade space exploration can be viewed as an instance of a parameter design task that searches a well-defined parameterized design space for a preferred solution [9]. Research using abstract parameter design tasks can eliminate domain-specific complexities and focus on general design parameters and designer behaviors. Using parameter design tasks as an experimental procedure gives control over the technical complexity of tasks [10] and reduces external complexities [9,11]. These features of parameter design experiments provide a more concentrated way to investigate specific research purposes in design settings.

For instance, Thekinen and Grogan introduce a domain-specific parameter design experiment for an aircraft design problem [12]. Aircraft design is a complex process with interconnected subsystems that require careful coordination. For example, the propulsion subsystem’s thrust requirements depend on the aerodynamic properties of the fuselage and airfoils. Aircraft design involves three phases: conceptual, preliminary, and detailed design. Their experiment focuses on the preliminary phase, which selects design parameters for a chosen concept through an iterative process. Thekinen and Grogan developed a parameter aircraft design task with 12 design parameters and four participants working on different subsystems: fuselage, payload (battery), propulsion (motor and propeller), and airfoil (wing and tail). The integrated system only works if every design parameter meets the requirements of all subsystems.

This article uses a parameter design problem with the same logic but eliminates the domain specificity and reduces the number of participants to two, decreasing social and technical complexity and providing a more focused way to investigate the specific research purpose. In the experimental procedure, each participant can be considered to be working on a different subsystem with N design parameters, trying to meet the system-level requirements together with their partner.

This article studies design processes in an abstract parameter design problem that enables the identification of design strategies applicable to a wide range of design problems. Identifying design strategies in abstract design problems can enable interventions that enhance design processes without being specific to any particular design problem or domain. This article defines design strategy as a similar set of actions designers follow that are generalizable over broad design problems and investigates how groups of designer actions form strategies to better understand the design process and its effects on performance outcomes in paired parameter design tasks.

The analysis uses secondary data from a human parameter design experiment consisting of tasks with different levels of complexity, yielding a total of 192 activity logs. The experiment consisted of 48 participants and 24 pair teams. The analysis first identifies observable design actions from the experimental log, next performs exploratory factor analysis (EFA) to identify design strategies exhibited during the experiment, and finally performs regression analysis to evaluate the significance of design strategies on performance. Results show that a design strategy with large-magnitude and variably-sized changes to design parameters increases completion times. In contrast, a strategy with frequent parameter and designer switching reduces completion times in the parameter design experiment.

2 Literature Review

2.1 Design Actions and Strategies.

Gero defines design as a goal-oriented, constrained, exploration, decision-making, and learning activity [13] with sequences of actions where designers perform micro strategies [1]. Micro strategies are self-sustaining actions focusing on the current state of the design process. Identifying, tracking, and grouping similar actions observed in the design process reveals the specific micro strategies that designers choose. Gero and Mc Neill also note that the designer’s experience level and the complexity level of the task impact the number of different micro strategies found in the design process [1]. Following this definition, this article focuses on the designer’s decision-making and strategy-building process by identifying designers’ actions and grouping similar actions to differentiate successful and unsuccessful strategies in paired parameter design tasks.

Literature includes various studies that identify relationships between the design process and different focuses. McComb et al. conducted a human experiment with a truss design problem to identify successful problem-solving strategies [14]. Their findings indicate that proficient teams employ distinct problem-solving methods, opting for simpler designs and concentrating their search efforts on specific regions within the design space. In a later study, McComb et al. employ data-mining techniques to quantitatively analyze the problem-solving processes utilized by designers when addressing configuration design problems [15]. The findings reveal that designers progress through four distinct procedural stages while working on configuration design problems, transitioning from topology design to shape and parameter design. High-performing designers stand out due to their adeptness at adjusting parameters early in the process, facilitating a more effective and nuanced search for solutions.

Raina et al. used the term design strategy as a designer’s approach, plan, or heuristic process for ordering the steps involved in solving a design problem [16]. They investigate the significance of design strategies in guiding the resolution of configuration design problems, employing a team of cognitive agents that mimic human behavior. Results indicate that human design heuristics were successfully represented through probabilistic models, establishing a common foundation between human designers and computational agents for representing design strategies. Later, Raina et al. introduced design strategy network, a data-driven method that learns from historical trajectory data and swiftly generates an action probability distribution based on the input state [17].

Rahman et al. developed a framework for clustering designers with similar sequential design patterns by characterizing designers’ action sequences [18]. They identify a network-based clustering approach for identifying behavioral design patterns. Jablokow et al. investigated whether cognitive styles and team interaction behaviors affect team design outcomes [19]. Their results indicate that certain team interaction behaviors are associated with generating more unique and varied ideas, which vary significantly across different teams. Additionally, their findings reveal that interaction sequences tended to be diverse rather than following specific patterns. Mirabito and Goucher-Lambert investigate factors that predict improved performance during concept generation in early-stage design settings, referring to performance as idea fluency and the overall output of exceptional ideas [20].

Austin-Breneman et al. studied team behavior in distributed complex system design tasks to identify factors affecting subsystem decision-making processes and their influence on the overall system [21]. Their findings indicate that design teams prefer global rather than local searches, optimize individual design parameters separately, and favor sequential rather than concurrent optimization strategies.

2.2 Parameter Design Problems.

Parameter design tasks present a set of input variables (design parameters) to designers that influence a set of output variables [22]. Parameter design generalizes the trade space exploration activities performed during early-stage system concept selection. Yu and Gero defined parametric design as a dynamic, rule-based process controlled by variations and parameters, in which multiple design solutions can be developed in parallel [23]. Using parameter design tasks to study designer behaviors helps control the effects of external factors, such as domain knowledge [11], and supports the creation and organization of complex digital models [24]. Parameter design tasks can be coupled or uncoupled: uncoupled tasks have a one-to-one mapping between inputs and outputs, whereas tasks lacking this condition are coupled [22].

Hirschi and Frey conducted one of the first parameter design experiments on human subjects [22]. They used a computer user interface and assigned participants tasks ranging from two-input, two-output parameters to five-input, five-output parameters. Results show the task completion time grows linearly with the number of parameters for uncoupled tasks but geometrically for coupled tasks. Later, Grogan and de Weck performed a human parameter design experiment following the principles introduced by Hirschi and Frey but adding collaborative tasks [10]. They gave participants coupled and uncoupled parameter design tasks with varying technical and social complexity levels. Their results show that increasing technical complexity negatively impacts performance outcomes: as the number of variables (parameters) in a task increases, the completion time increases with a power-law relationship. Their other significant conclusion was that as team size grows, the completion times of design teams increase significantly due to increased social complexity.

Alelyani et al. used secondary data from Grogan and de Weck [10] to investigate factors contributing to designers’ behavior in parameter design tasks [11]. To quantify the relationship among design features, they identified three behavioral characteristics: the number of design actions, performance outcomes, and experienced error. Yu et al. conducted a human parameter design experiment where participants engaged with simulated design processes involving seawater reverse osmosis plants [9]. Their goal was to investigate the relationship between behavior and performance. Their findings showed that the best-performing strategy resembled a simulated annealing optimization algorithm, while the worst-performing strategy resembled a pseudo random-search strategy with lower performance outcomes.

Avşar and Grogan adopt the parameter design problem experiment from the study by Grogan and de Weck [10] to investigate the effects of the locus of control (LOC) personality trait on performance outcomes [25]. Their findings show a statistically significant relationship between LOC and the performance of pairs in parameter design tasks.

Wöhr et al. built on the parameter design framework from the study by Grogan and de Weck [26]. They conducted a human parameter design experiment to investigate the effect of varying the time interval between each integration and verification. Their findings show that varying the frequency of integration and verification significantly impacts performance outcomes: shorter time intervals between integration and verification events improve designer performance outcomes by decreasing the completion times of tasks.

2.3 Teamwork and Design Process.

Teamwork has been the subject of extensive study in various fields because of its wide usage and advantages. Teamwork can provide greater productivity and competitiveness [27], and literature shows that design teams can achieve higher quality than individuals in product development [28]. Teamwork brings a wider range of knowledge and expertise [29], enabling decomposition and allocation of design decisions and actions among team members to apply specialized knowledge [30]. However, interactions between design actors generate iteration loops and rework that may outweigh potential benefits [31].

Through cooperation, teams can achieve better productivity and performance outcomes, but distributed cognition and communication among different members make the process challenging [32]. As team effectiveness impacts outcomes in design settings and depends on various factors [33–35], this article focuses on how design team processes affect design outcomes.

2.4 Literature Gap.

Literature offers various insights investigating design behaviors and strategies with different focuses. This article aims to contribute to design strategy literature by studying the effects of different design strategies in parameter design tasks. Although identifying specific design strategies in parameter design settings does not provide certain recipes for domain-specific design problems, it enables identifying more generalizable strategies that can be implemented across different domains of design problems. This article aims to identify generalizable strategies that can be applied to various design problems and situations instead of recommending specific behaviors or strategies for selected design problems. Accordingly, the article defines design strategy as a similar set of actions designers follow that are generalizable over broad design problems.

Literature shows that parameter design tasks provide a controlled environment to study design processes [10,22–24]. Parameter design problems involve abstract design activities and eliminate domain-specific complexity, providing complete control over technical variables. As a result, they offer a suitable environment for investigating more broadly applicable design strategies. This article aims to conduct an initial study toward identifying design strategies that can be applied to engineering design tasks across multiple domains. These strategies should be broadly applicable and not specific to any particular domain.

2.5 Research Objective.

The objective of the article is to fill the literature gap by studying design strategies in a parameter design problem to identify generalizable successful and unsuccessful strategies. Identifying and differentiating successful and unsuccessful strategies used by design teams in parameter design tasks can help future studies and industries to bring interventions that direct teams towards successful design strategies.

This article uses secondary data from a human parameter design experiment originally adapted from Grogan and de Weck’s parameter design work to explore the effects of personality traits on team performance outcomes in design tasks [25]. The human experiment consists of cooperative paired parameter design tasks. The parameter design problem in the experiment represents an abstract level collaborative design problem without any domain-specific knowledge. During the design tasks, each designer in a pair can be thought of as representing a subsystem of a complex collaborative engineering design product.

This article investigates the relationship between process variables and task outcomes; the experiment design process is illustrated in Fig. 1. The analysis seeks to identify successful and unsuccessful design team strategies by identifying action types and grouping them with the EFA technique to differentiate strategies. For this purpose, this article investigates the following hypothesis: Teams follow distinct design strategies that affect their performance outcomes in parameter design tasks.

Fig. 1
The design system consists of the parameter design task with two designers who iteratively make actions following a revealed design strategy. Inputs include social and demographic factors. Outputs measure performance via task efficiency.

3 Methodology

This article analyzes secondary data from a parameter design experiment that originally studied the effect of LOC on design behavior described in Ref. [25]. Secondary analysis further investigates how designer behaviors influence outcomes for cooperative pair design tasks irrespective of LOC. The design experiment uses the same parameter design features from the framework of Ref. [10] with an updated software platform.2 The following sections review the methodology (design task, protocol, instruments, and data) of the source experiment.

3.1 Design Task.

The underlying parameter design task defines a column vector of N scalar input variables (design parameters) x = [x_1, …, x_N]^T, each taking values on the interval x_i ∈ [−1, 1], and a column vector of N scalar output variables (performance attributes) y = [y_1, …, y_N]^T associated with functional requirements. Although abstracted from this experiment, example inputs in an aircraft design task include wingspan and mean chord length; example outputs include lift and drag. An N × N system matrix M = [m_ij] relates inputs to outputs as a linear system of equations y = Mx, where element m_ij represents the sensitivity of output y_i to input x_j. In other words, M is the system model that evaluates the multidimensional performance of a design configuration given by selected parameter values. Starting from an initial zero input vector (x_i = 0 for all i), the task objective is to choose input variables x to achieve a target output vector y* with a maximum allowable error |y_i − y*_i| < ε = 0.05 in each output variable. The task duration measures the time required to meet all requirements.

Coupled task instances with m_ij ≠ 0 for all i, j are generated as follows to achieve certain invariant conditions. First, generate M as the orthonormal basis of a random N × N matrix with elements sampled from a uniform (0, 1) distribution. Next, generate a candidate y* as the orthonormal basis of a random N × 1 column vector with elements sampled from a uniform (−1, 1) distribution. Compute the task solution as x* = M^T y* and, if any solution variable is close to the initial design point x_i = 0 (∃ i : |x*_i − x_i| ≤ 0.2), generate a new target (repeating as necessary). Resulting tasks preserve identical unit Euclidean distance from initial to final inputs/outputs irrespective of N, i.e., ‖x*‖ = ‖y*‖ = 1, to control for distance scales in larger design problems.
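The following minimal sketch illustrates this generation procedure in Python with numpy, using QR factorization as the orthonormalization step. The function names and structure are illustrative reconstructions of the published procedure, not code from the experiment’s open-source platform.

```python
import numpy as np

def generate_task(n, rng=None):
    """Generate a coupled parameter design task of size n.

    Returns (M, y_star, x_star) with ||x_star|| = ||y_star|| = 1.
    A sketch of the published procedure; names are illustrative.
    """
    rng = rng or np.random.default_rng()
    # Orthonormal basis of a random n x n matrix with uniform (0, 1) elements.
    M, _ = np.linalg.qr(rng.uniform(0, 1, (n, n)))
    while True:
        # Unit-norm target from a random vector with uniform (-1, 1) elements.
        y_star = rng.uniform(-1, 1, n)
        y_star /= np.linalg.norm(y_star)
        # Because M is orthonormal, the exact solution is x* = M^T y*.
        x_star = M.T @ y_star
        # Reject targets with any solution variable near the start x_i = 0.
        if np.all(np.abs(x_star) > 0.2):
            return M, y_star, x_star

def is_complete(M, x, y_star, eps=0.05):
    """Check whether all outputs are within the allowable error of targets."""
    return bool(np.all(np.abs(M @ x - y_star) < eps))
```

Because M is orthonormal, M^T inverts the linear system exactly, which guarantees every generated target is reachable from the initial design point.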

The design tasks are adapted to collaborative design problems by assigning control over input variables and visibility over output variables to n individual design actors. A binary n × N control matrix C = [c_ij] assigns designer i control of input variable j. A binary n × N visibility matrix V = [v_ij] assigns designer i visibility of output variable j. Each input and output variable is assigned to exactly one designer.
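For concreteness, a hypothetical encoding of the assignment shown later in Fig. 2 (N = 3 variables, n = 2 designers, with designer 1 holding two inputs/outputs and designer 2 holding one) could look like the following sketch.

```python
import numpy as np

# c_ij = 1 if designer i controls input j; v_ij = 1 if designer i sees
# output j. Values mirror the Fig. 2 example and are illustrative only.
C = np.array([[1, 1, 0],
              [0, 0, 1]])
V = np.array([[1, 1, 0],
              [0, 0, 1]])

# Every input and output variable is assigned to exactly one designer.
assert (C.sum(axis=0) == 1).all() and (V.sum(axis=0) == 1).all()
```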

Designers interact with design tasks in a graphical, rather than numerical, format. The browser-based user interface in Fig. 2 illustrates the task from each of the two designers’ perspectives. Vertical sliders ranging between −1 and 1 represent controlled input variables (x_i), and horizontal sliders with target regions between black bars display output variables (y_i) and target requirements (y*_i ± ε). Quantitative information is hidden to prevent mathematical solutions. Designers are limited to visual feedback on their own interface and face-to-face communication with teammates. Designers modify inputs by dragging the slider thumb up and down (using the touch-pad or touchscreen), and inputs update once released. Designers may also use arrow keys on the vertical sliders to change the input by 0.1 or 0.01 units.

Fig. 2
Example interfaces for an N = 3 design task with two inputs/outputs assigned to designer 1 (a) and one input/output assigned to designer 2 (b). Input parameters appear as editable vertical sliders, and output requirements appear as uneditable horizontal sliders with black bars marking the target region. A red cross marks unsatisfied requirements, while a green check marks satisfied requirements. Outputs update in response to input changes by either designer. A timer counts down from the maximum duration allowed for each task.

Designers attempt to finish each task as quickly as possible. A timer visible in the interface counts down from the maximum duration allowed for each task. Individual tasks require the designer to meet the target region of all horizontal sliders, changing the signal icon from a red cross to a green check mark. Pair tasks require both partners to meet the target regions of all horizontal sliders at the same time. Completed tasks award points to all participating designers based on relative efficiency (one point per second remaining). Cumulative points earned throughout an experiment determine rankings for monetary incentives.

3.2 Experiment Protocol.

The source experiment follows a between-subjects design with replication at group and task units to study the effect of LOC on design processes. LOC is a personality trait that characterizes an individual’s perception of control on two extremes: internal and external [36]. Individuals with external LOC believe their life is guided by fate, luck, or other external circumstances they cannot control. In contrast, people with internal LOC believe their decisions and efforts influence the events around them and create their own outcomes. The experiment controls group factors pairing LOC types (I: internal or E: external) in design pairs as I-I, I-E, or E-E. Each pair works on a sequence of design tasks of varying size to yield multiple observations of process and outcome variables. The protocol was approved by the Institutional Review Board at Stevens Institute of Technology (#2019-025).

The study includes two distinct cohorts, each consisting of four replications of each group factor (I-I, I-E, and E-E) across six sessions (total: 24 pairs). The cohorts were separated in time by several months and followed different task sequences described below; however, both cohorts used the same experiment rooms, computers, instructions, and overall procedures. Participants were recruited from adult on-campus student populations via email and flyers.

All sessions were conducted in university classrooms using a standard room layout with assigned seats. Paired participants sit face to face with each team at a separate table. Tables are arranged such that each computer display is only visible to the seated individual. Pairs may communicate face to face but not share any computer displays. Each session evaluates two pairs in parallel, both working on equivalent tasks and scheduled based on mutual availability. The two teams in each session may not communicate with each other.

Sessions consist of five training tasks and ten experimental tasks described in Table 1. Training tasks introduce the task objectives and computer interface and take about 20 min to complete. The remaining ten experimental tasks take about 40 min to complete. Sessions in cohort 1 include both individual and pair tasks administered in a fixed order, of which this article only considers the six pair tasks. Sessions in cohort 2 include ten pair tasks administered in a randomized order subject to the constraint that tasks with four variables must take place in the second half of the experiment.

Table 1

Training and experimental design tasks for cohort 1 and cohort 2

Training tasks (fixed order):
Type   | Size | Repl. | Time (s)
Indiv. | 1    | 1     | 90
Indiv. | 2    | 1     | 120
Pair   | 2(b) | 1     | 270
Pair   | 2    | 1     | 270
Pair   | 3    | 1     | 540

Cohort 1 tasks (fixed order):
Type   | Size | Repl. | Time (s)
Indiv. | 2    | 2     | 120
Indiv. | 3    | 2     | 240
Pair   | 2    | 3     | 180
Pair   | 3    | 3     | 360

Cohort 2 tasks (random order(a)):
Type | Size | Repl. | Time (s)
Pair | 2    | 4     | 180
Pair | 3    | 4     | 360
Pair | 4    | 2     | 720

(a) Size 4 tasks cannot appear within the first five tasks.

(b) Uses an identity coupling matrix M to simplify training.

To incentivize efficiency, participants earn 1 point per second a task is finished ahead of the maximum time and 0 points for an incomplete task. At the end of a session, participants are ranked based on total accumulated points and privately paid in gift cards ranging from a minimum of $8 to a maximum of $15 based on their ranks. Aggregated scores are only released at the end of a session to limit strategic behavior, including end-of-session boundary effects.

3.3 Experiment Instruments.

Prior to working on tasks, participants complete a demographics survey with six items including age (years), gender (male, female, or other), postsecondary education (years), professional work experience (years), native language, and English proficiency. English proficiency is measured on a scale with five levels: fluent/native, high (TOEFL ≥ 95 or IELTS > 7), medium-high (TOEFL 85–94 or IELTS 6.5–7), medium-low (TOEFL 60–84 or IELTS 6), or low (TOEFL < 60 or IELTS < 6). Analysis assigns numerical values on a 1 to 5 scale to English language ability (1: low; 5: fluent/native).

During a design task, an automated log records all design actions (i.e., input slider movements) as time-stamped events. Postprocessing computes the time to complete each task (task efficiency) as the timestamp difference of the first and last design action. Each design task requires the input sliders to move a total of 1.0 units from the initial state to reach the target solution, regardless of the problem size N; however, design strategies may produce different patterns of size, timing, and sequence of design actions.

3.4 Experiment Data.

A total of 48 subjects (20 women and 28 men) participated in the experiment. Subjects ranged from 20 to 40 years of age with a mean of 26.7. All participants either previously completed or were in their last year of STEM undergraduate studies, more than half were currently pursuing a graduate engineering degree, and the mean educational experience was 6.7 years. Thirty-nine participants listed one of 19 different languages other than English as their native language. Twenty-one subjects claimed to be fluent English speakers, 19 reported TOEFL scores equal to or above 95 (IELTS > 7.0), 6 between 85 and 94 (IELTS 6.5–7.0), and 2 between 60 and 84 (IELTS 6.0) prior to starting their studies.

The experimental design yields observations from 192 design tasks (12 × 6 = 72 from cohort 1 and 12 × 10 = 120 from cohort 2) summarized in Table 2 by task size and median, first, and third quartile task completion times. Approximately 17% (32/192) of the tasks were not solved in the given maximum time limit and were assigned the maximum completion time in Table 1 as a conservative assumption for subsequent analysis. Here, conservative is used in a statistical sense (rather than related to task efficiency) in that it reduces apparent differences between conditions.

Table 2

Summary of design completion time by task size

Task size (N) | No. samples | No. incomplete tasks | Median completion time (s), T̃ | First quartile (s), Q1(T) | Third quartile (s), Q3(T)
2   | 84  | 5  | 46.5  | 29.3  | 86.9
3   | 84  | 18 | 160.7 | 76.1  | 310.7
4   | 24  | 9  | 492.1 | 202.6 | 720.0
All | 192 | 32 | 220.4 | 44.8  | 220.4

4 Analysis and Results

To address the hypothesis that differential design strategies affect performance outcomes in parameter design tasks, the analysis first performs EFA to reduce the dimensionality of process factors (PFs) and identify the underlying relationships (strategies) between measured variables. Regression analysis then investigates whether process factors (strategies) have a significant effect on task performance. While preliminary analysis includes demographic factors, they are removed from further analysis because their effects are not as practically significant as the preserved factors and their impacts might be due to other uncontrolled variables outside the scope of analysis relating design strategies to outcomes.

4.1 Exploratory Factor Analysis for Process Variables.

Postprocessing of the experimental log computes nine candidate process-oriented metrics in five categories, described below, based on samples from each design action; a computation sketch follows the list. Action size and action time variables consider the first (mean), second (standard deviation), and third (skew) moments to capture distribution shape.

  1. Action size (mean, standard deviation, skew): distance traveled by the input slider for a single action. User interface buttons permit action sizes of 0.1 and 0.01 and moving the slider thumb permits arbitrary action sizes.

  2. Action time (mean, standard deviation, skew): elapsed time between successive actions.

  3. Input delta (mean): indicator variable for changes in input parameter modified between successive actions; each action (after the first) encodes a sample of 0 (same input parameter changed) or 1 (different input parameter changed). A mean value of 1.0 indicates a different parameter for each successive input, and a value of 0.0 indicates all actions modify the same parameter.

  4. Designer delta (mean): indicator variable for changes in input controller (designer) between successive actions; each action (after the first) encodes a sample of 0 (same designer action) or 1 (different designer action). A mean value of 1.0 indicates alternating actions between designers, and a mean value of 0.0 indicates sequential actions from only one designer.

  5. Designer share (mean): indicator variable for the input controller (designer) for each action; each action encodes a sample of 0 (minority-acting designer) or 1 (majority-acting designer). A mean value of 0.5 indicates equal numbers of actions among both designers, and a mean value of 1.0 indicates actions by only one designer.
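A minimal sketch of this postprocessing, assuming per-action arrays extracted from the time-stamped event log, is shown below; the source postprocessing scripts are not published with the article, so the function and key names are illustrative. The sketch also computes the completion time described in Sec. 3.3 as the timestamp difference of the first and last actions.

```python
import numpy as np
from collections import Counter
from scipy import stats

def process_metrics(times, sizes, inputs, designers):
    """Compute the nine candidate process metrics for one task log.

    Arguments are per-action arrays: timestamps (s), slider movement
    distances, modified input parameter ids, and acting designer ids.
    """
    times, sizes = np.asarray(times), np.asarray(sizes)
    inputs, designers = np.asarray(inputs), np.asarray(designers)
    dt = np.diff(times)  # elapsed time between successive actions
    # Indicator samples for each action after the first (items 3 and 4).
    input_delta = (inputs[1:] != inputs[:-1]).astype(float)
    designer_delta = (designers[1:] != designers[:-1]).astype(float)
    # Designer share: 1 for actions by the majority-acting designer (item 5).
    majority = Counter(designers.tolist()).most_common(1)[0][0]
    share = (designers == majority).astype(float)
    return {
        "task_time": times[-1] - times[0],  # completion time (Sec. 3.3)
        "action_size_mean": sizes.mean(),
        "action_size_std": sizes.std(),
        "action_size_skew": stats.skew(sizes),
        "action_time_mean": dt.mean(),
        "action_time_std": dt.std(),
        "action_time_skew": stats.skew(dt),
        "input_delta_mean": input_delta.mean(),
        "designer_delta_mean": designer_delta.mean(),
        "designer_share_mean": share.mean(),
    }
```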

Figure 3 visualizes the Pearson correlation matrix (p-values in parentheses) to illustrate correlation and multicollinearity among the process variables.

Fig. 3
Pearson correlation matrix (p-values in parentheses) for the nine identified design process features, confirming the presence of significant multicollinearity.

Next, analysis uses the FactorAnalyzer class from the Python library factor_analyzer (version 0.5.0) to run EFA on the nine identified process variables. Bartlett’s test of sphericity confirms the presence of significant correlation (χ² = 1026, p < 1 × 10⁻¹⁰⁰). The Kaiser–Meyer–Olkin test suggests that the data are marginally acceptable for factor analysis (KMO = 0.552). The relatively low KMO score is not unexpected, as a behavioral factor like design strategy is not expected to exhibit high predictive power for recorded process metrics. EFA employs a varimax factor rotation, the minimum residual (minres) solution technique, and the Kaiser criterion, which selects the number of factors with eigenvalues greater than one (three in this case).
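A sketch of this EFA workflow with the factor_analyzer library follows; the input file name and column layout are assumptions about the postprocessed data rather than published artifacts.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# X: 192 x 9 table of process metrics, one row per task observation.
X = pd.read_csv("process_metrics.csv")  # hypothetical file name

chi2, p = calculate_bartlett_sphericity(X)      # tests for significant correlation
kmo_per_variable, kmo_total = calculate_kmo(X)  # sampling adequacy

# Kaiser criterion: count eigenvalues greater than one on an unrotated fit.
fa0 = FactorAnalyzer(rotation=None)
fa0.fit(X)
eigenvalues, _ = fa0.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())  # three for these data

# Final EFA with varimax rotation and minimum residual (minres) extraction.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="minres")
fa.fit(X)
loadings = pd.DataFrame(fa.loadings_, index=X.columns)
print(loadings.round(2))  # compare with the loadings plotted in Fig. 4
```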

The radar plot in Fig. 4 visualizes the resulting three PFs. Distinguishing characteristics include:

  1. PF1: High input delta mean and high designer delta mean (i.e., frequent switching between parameters and designers).

  2. PF2: High action size mean and standard deviation (large-magnitude and variably-sized parameter changes).

  3. PF3: High action time standard deviation and skew (high variation in time in between actions with a long distribution “tail”).

Table 3 shows mean process factor values for each task size, reinforcing that PF3 is primarily associated with the most complex parameter design tasks.

Fig. 4
Exploratory factor analysis loadings of process variables. PF1 shows frequent switching of inputs and designers between actions. PF2 shows large average action size and variation in size. PF3 shows large variation and skew in action time.
Table 3

Summary of mean process factor values observed for tasks of variable size

Task size (N) | Mean PF1 | Mean PF2 | Mean PF3
2 | 0.07  | 0.03  | −0.16
3 | −0.06 | −0.03 | −0.03
4 | −0.05 | 0.01  | 0.83

4.2 Regression Analysis.

The research question investigates the effect of process variables (designer behavior) on task completion time while controlling for differences in LOC and task structure. The analysis proposes a linear model with the same transformations used in Ref. [10]. Because its distribution is skewed, task completion time receives a logarithmic transformation (ln T). Task size (N, the number of parameters in a task) is expected to have a power-law relationship with task completion time (ln T ∝ N²). Task order (O), the ordered task number in a session (ranging between 1 and 10 for the 10-task sessions), quantifies learning effects accumulated in sequential task ordering with a geometric relationship based on Henderson’s law for learning curves (ln T ∝ ln O) [37].

Analysis also considers an input factor for the experimentally controlled conditions from the primary source experiment. The categorical variable (LOC) denotes six levels of cohort-specific LOC: II1, II2, IE1, IE2, EE1, and EE2. For example, II1 represents internal–internal pairs from cohort 1. II1 serves as the reference condition against which others are evaluated.

Analysis constructs an ordinary least-squares regression model to investigate the effects of the experimental control (LOC), all process variables (PF1, PF2, and PF3), task order (O), and task size (N) on completion times. Summary results find that PF1 (t(181) = −2.152, p = 0.033) and PF2 (t(181) = 4.575, p = 8.81 × 10⁻⁶) have a significant effect on task completion times, but PF3 has no significant effect (t(181) = −0.897, p = 0.371). Subsequent analysis eliminates PF3 and only considers the statistically significant process factors (PF1 and PF2). Additional analysis using the least absolute shrinkage and selection operator (LASSO) confirms the significant factors.

Equation (1) presents the resulting linear model with process factors as drivers of task completion time:

ln T = β_0 + β_1 ln O + β_2 N² + β_3 PF1 + β_4 PF2 + Σ_k β_k LOC_k + ε    (1)

where the LOC_k terms are indicator variables for the five non-reference LOC conditions.
Analysis of the Eq. (1) model performs both ordinary least-squares (OLS) regression and mixed effects models (more suitable for repeated observations), finding that both yield substantially similar results, with easier interpretation for OLS. Table 4 shows OLS regression results using the Python library statsmodels (version 0.12.2) function ols. Visualization of model residuals via a quantile–quantile plot verifies normality assumptions. Results indicate that the expression of PF1 behaviors significantly decreases task completion time (t(182) = −2.261, p = 0.025), and the expression of PF2 significantly increases task completion times (t(182) = 4.576, p = 8.76 × 10⁻⁶). Aligning with the literature, both task size (t(182) = 12.527, p = 5.72 × 10⁻³⁰) and task order (t(182) = −3.787, p = 1.29 × 10⁻⁵) are statistically significant factors for task completion times. Analysis also indicates a significant performance difference for IE1 and II2 pairs with reference to II1 pairs (p = 0.017 and p = 0.001, respectively).
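A sketch of this regression with statsmodels follows; the dataset file and column names are illustrative assumptions about the postprocessed observations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per task with completion time T (s), task order O, task
# size N, PF1/PF2 factor scores, LOC pair label, and a pair identifier.
df = pd.read_csv("task_observations.csv")  # hypothetical file name
df["lnT"] = np.log(df["T"])
df["lnO"] = np.log(df["O"])
df["N2"] = df["N"] ** 2

# II1 serves as the reference condition for the categorical LOC variable.
formula = "lnT ~ lnO + N2 + PF1 + PF2 + C(LOC, Treatment(reference='II1'))"
ols_fit = smf.ols(formula, data=df).fit()
print(ols_fit.summary())  # coefficients correspond to Table 4

# Mixed effects variant with a random intercept per pair, reported to
# yield substantially similar results for the repeated observations.
mlm_fit = smf.mixedlm(formula, data=df, groups=df["pair_id"]).fit()
print(mlm_fit.summary())
```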
Table 4

Regression of the effect of process variables on time

Factor | Coefficient | Std. Err. | t-stat. | p-Value
Intercept | 3.490 | 0.237 | 14.727 | 7.80 × 10⁻³³
ln(O) | −0.359 | 0.095 | −3.787 | 2.07 × 10⁻⁴
N² | 0.177 | 0.014 | 12.527 | 2.30 × 10⁻²⁶
PF1 | −0.131 | 0.058 | −2.261 | 0.025
PF2 | 0.233 | 0.051 | 4.576 | 8.76 × 10⁻⁶
LOC (EE1) | 0.260 | 0.208 | 1.252 | 0.212
LOC (EE2) | 0.201 | 0.194 | 1.034 | 0.302
LOC (IE1) | 0.503 | 0.208 | 2.420 | 0.017
LOC (IE2) | 0.308 | 0.196 | 1.568 | 0.119
LOC (II2) | 0.661 | 0.202 | 3.274 | 0.001

As this study uses secondary data that originally focused on investigating the effects of LOC on design team performance outcomes, analysis preserves LOC in the final regression model in Eq. (1). Table 4 indicates that process factors have a significant effect on completion time even after considering the previously controlled variable (LOC) in the analysis.

4.3 Summary of Analysis Results.

Postprocessing of event logs produces nine candidate process-oriented metrics in five categories from the paired parameter design experiment: (1) action size as the distance traveled by the input slider, (2) action time as the elapsed time between successive actions, (3) input delta as the indicator variable for input parameter changes between successive actions, (4) designer delta as the indicator variable for input controller (designer) changes between successive actions, and (5) designer share as the indicator variable for whether each action is made by the majority-acting designer. EFA combines co-observed process variables into three behavioral design strategies identified as PF1: frequent switching between inputs and designers, PF2: large average action size and variation, and PF3: large action time standard deviation and skew. PF3 is noted to only be associated with the most complex design tasks having four parameters. Analysis of the effects of these design strategies on pair performance outcomes in the parameter design experiment finds that PF1 significantly reduces completion times and PF2 significantly increases completion times.

5 Discussion

5.1 Research Reflection.

The hypothesis investigates the effects of design strategies on team performance outcomes in paired parameter design tasks. Results show that statistically significant variation in performance outcomes can be traced to differential designer behavior. Analysis indicates that the PF1 strategy, which describes frequent switching between inputs and designers (frequently changing input parameters and changing control over inputs within a pair), has a statistically significant effect on task completion times. As the PF1 strategy significantly lowers task completion times, this strategy significantly improves pair performance outcomes in the parameter design task. Frequent switching of the input parameter can help designers understand its effects on outputs, leading to more purposeful actions. Frequent switching between designers can indicate high levels of communication and shared understanding. As the experiment is a cooperative design task, the inputs of each designer affect the outputs of their partner. Accordingly, designers not only need to understand the impacts of their actions on their own outcomes but also on their partners’ outcomes to achieve high performance.

The analysis also indicates that PF2 significantly increases completion times, meaning that the PF2 strategy significantly lowers pair performance outcomes in the parameter design experiment. The PF2 strategy refers to large action sizes and high variation in action sizes. The strategy of using large-sized actions can lead to unexpected errors and less understanding of the relationship between inputs and outputs. The PF2 strategy can also be associated with random actions because actors might not understand how inputs influence outputs. This finding also suggests that smaller and more consistent design actions can significantly reduce task completion times. Teams following the strategy of taking small actions with lower variation in action size may have more informed next steps, leading to consistently effective actions and successful design outcomes.

The analysis also shows that task order and the number of variables in a task significantly affect the completion times of pairs, aligning with the existing literature [10]. Results suggest that completion times decrease as the task order increases, an indication of learning effects. Later in a task sequence, designers leverage their experience and understanding of tasks, leading to better performance outcomes. Analysis also shows a significant super-linear relationship between the number of variables in a task and completion times. This article supports the findings of Grogan and de Weck, suggesting that an increase in the number of variables in a task increases its technical complexity level, leading to lower performance outcomes [10].

5.2 Connecting Findings to Design Practice.

As in the aircraft design example in Sec. 1, each designer can be considered as representing a different subsystem (e.g., fuselage) with different design parameters and requirements. Designers work with partners who represent a different subsystem (e.g., airfoil) to create a product that functions when every subsystem is integrated. The results of this article emphasize that, in real-world design problems, certain strategies can lead to more efficient design processes and higher performance outcomes. This article identifies some generalizable design strategy recipes applicable over broad design practices:

  1. Avoiding major/large changes in a design parameter to prevent unexpected errors.

  2. Making small changes in the design parameters to understand the effects of each action and building efficient next steps.

  3. Frequently switching among design parameters, rather than concentrating on one, to better understand how each design parameter impacts the other requirements.

  4. Switching control frequently among team members to better understand the effects between actions and outcomes controlled by others to successfully integrate all design parameters and subsystems.

5.3 Limitations.

Results from this article are subject to several limitations. First, it uses secondary data from an experiment on the effect of the LOC personality trait on team performance outcomes in parameter design tasks [25]. However, as the analysis preserves the controlled LOC factor and still shows significant impacts of process factors on the completion times, secondary data are suitable for the main investigation of this article. Furthermore, no experimental control was exerted over the identified process factors, so it is possible that a confounding factor influences both the observed strategies and outcomes.

The experiment uses a highly simplified parameter design task representative of cooperative design only at an abstract level. Although using a parameter design framework helps understand the design process, it also greatly simplifies the design tasks by neglecting factors such as domain knowledge and creativity. The parameter design task should be considered a component of design, for example, searching over a trade space of alternatives in early-stage system concept selection rather than a holistic representation of end-to-end design.

Constraints on session duration limited the number of pair tasks to keep the total experiment time under 1 h and retain participant attention. Additionally, experimental resources only allowed for 12 sessions, limiting the amount of data collected. Finally, experimental tasks consider interactions between two participants at a time, take place over a short time period (minutes), have a small number of design variables without any domain-specific design context, and incentivize behavior using a financial reward tied to relative ranking in a design session. These limitations indicate that results of this experiment might vary with larger team sizes or with domain-specific design tasks.

6 Conclusion

Identifying successful design strategies for design teams is important for creating more efficient design processes and achieving more successful design outcomes. This article analyzes secondary data from a pair parameter design task experiment to find specific groups of actions that comprise design strategies, which, in turn, are associated with performance outcomes. Results show that EFA can help identify specific design strategies on a design task by combining observed action groups during design processes.

Results show that design strategies with larger action sizes and higher variation in action size lead to higher completion times and worse performance outcomes. In contrast, smaller and more consistent actions can lead to lower completion times and more successful design outcomes. Analysis also shows that frequent switching of inputs by a designer and between designers within a team significantly lowers completion times and increases design team performance outcomes. The discussion explains that larger actions can cause unexpected errors, whereas smaller and more consistent actions can lead to a better understanding of each action’s effects, helping designers plan their next steps. Also, frequent switching of controllers can indicate frequent communication and better understanding between designers in a team. In a cooperative task, comprehending how a teammate’s actions influence the outcomes is crucial for building successful strategies. Findings also align with the literature showing a negative relationship between the number of variables in a task and performance outcomes and a positive relationship between task order and performance outcomes in parameter design tasks [10].

In summary, this article recommends certain design strategy recipes: (1) avoiding major/large changes in a design parameter to prevent unexpected errors, (2) making small changes in the design parameters to understand the effects of each action, (3) not concentrating on one parameter but frequently switching among all design parameters to better understand how each design parameter impacts the other requirements, and (4) switching control frequently among team members to better understand the influence of actions on outcomes controlled by others. The generalizable design strategy recipes that this article suggests can help real-world design problems achieve more efficient design processes, higher design team performance outcomes, and fewer unexpected errors during design integration. Future studies can introduce interventions before parameter design tasks to help designers exhibit preferred design strategies. Future studies should also investigate how successful design strategies for design teams vary in domain-specific tasks.

Footnote

2. Available under an open-source license at https://github.com/code-lab-org/collab-web

Acknowledgment

This work was supported in part by a Provost’s Doctoral Fellowship from Stevens Institute of Technology and the National Science Foundation (Grant No. 1943433).

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1. Gero, J. S., and Mc Neill, T., 1998, “An Approach to the Analysis of Design Protocols,” Design Stud., 19(1), pp. 21–61.
2. Ross, A. M., Hastings, D. E., Warmkessel, J. M., and Diller, N. P., 2004, “Multi-Attribute Tradespace Exploration as Front End for Effective Space Systems Design,” J. Spacecraft Rockets, 41(1), pp. 20–28.
3. Turner, C. J., Masoudi, N., Stewart, H., Daniels, J., Gorsich, D., Rizzo, D., Hartman, G., Agusti, R., Skowronska, A., Castanier, M., and Rapp, S. H., 2022, “A Synthetic Tradespace Model for Tradespace Analysis and Exploration,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 2: 42nd Computers and Information in Engineering Conference, St. Louis, MO, Aug. 14–17, ASME, p. V002T02A080.
4. Wolf, D., Hyland, J., Simpson, T. W., and Zhang, X., 2011, “The Importance of Training for Interactive Trade Space Exploration: A Study of Novice and Expert Users,” ASME J. Comput. Inf. Sci. Eng., 11(3), p. 031009.
5. Simpson, T. W., Carlsen, D., Malone, M., and Kollat, J., 2011, “Trade Space Exploration: Assessing the Benefits of Putting Designers ‘Back-in-the-Loop’ During Engineering Optimization,” Human-in-the-Loop Simulations, L. Rothrock and S. Narayanan, eds., Springer, London, pp. 131–152.
6. Miller, S. W., Simpson, T. W., Yukish, M. A., Bennitt, L. A., Lego, S. E., and Stump, G. M., 2013, “Preference Construction, Sequential Decision Making, and Trade Space Exploration,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 3A: 39th Design Automation Conference, Portland, OR, Aug. 4–7, ASME, p. V03AT03A014.
7. Ross, A. M., McManus, H. L., Rhodes, D. H., and Hastings, D. E., 2010, “Role for Interactive Tradespace Exploration in Multi-stakeholder Negotiations,” AIAA SPACE 2010 Conference & Exposition, Anaheim, CA, Aug. 30–Sept. 2.
8. Fitzgerald, M. E., and Ross, A. M., 2015, “Effects of Enhanced Multi-Party Tradespace Visualization on a Two-Person Negotiation,” Procedia Comput. Sci., 44, pp. 466–475.
9. Yu, B. Y., Honda, T., Sharqawy, M., and Yang, M., 2016, “Human Behavior and Domain Knowledge in Parameter Design of Complex Systems,” Des. Stud., 45(B), pp. 242–267.
10. Grogan, P. T., and de Weck, O. L., 2016, “Collaboration and Complexity: An Experiment on the Effect of Multi-Actor Coupled Design,” Res. Eng. Des., 27(3), pp. 221–235.
11. Alelyani, T., Yang, Y., and Grogan, P. T., 2017, “Understanding Designers Behavior in Parameter Design Activities,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 7: 29th International Conference on Design Theory and Methodology, Cleveland, OH, Aug. 6–9, ASME, p. V007T06A030.
12. Thekinen, J., and Grogan, P. T., 2021, “Information Exchange Patterns in Digital Engineering: An Observational Study Using Web-Based Virtual Design Studio,” ASME J. Comput. Inf. Sci. Eng., 21(4), p. 041012.
13. Gero, J. S., 1990, “Design Prototypes: A Knowledge Representation Schema for Design,” AI Mag., 11(4), p. 26.
14. McComb, C., Cagan, J., and Kotovsky, K., 2015, “Studying Human Design Teams Via Computational Teams of Simulated Annealing Agents,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 7: 27th International Conference on Design Theory and Methodology, Boston, MA, Aug. 2–5, ASME, p. V007T06A030.
15. McComb, C., Cagan, J., and Kotovsky, K., 2017, “Mining Process Heuristics From Designer Action Data Via Hidden Markov Models,” ASME J. Mech. Des., 139(11), p. 111412.
16. Raina, A., Cagan, J., and McComb, C., 2019, “Transferring Design Strategies From Human to Computer and Across Design Problems,” ASME J. Mech. Des., 141(11), p. 114501.
17. Raina, A., Cagan, J., and McComb, C., 2022, “Design Strategy Network: A Deep Hierarchical Framework to Represent Generative Design Strategies in Complex Action Spaces,” ASME J. Mech. Des., 144(2), p. 021404.
18. Rahman, M. H., Gashler, M., Xie, C., and Sha, Z., 2018, “Automatic Clustering of Sequential Design Behaviors,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 1B: 38th Computers and Information in Engineering Conference, Quebec City, Quebec, Canada, Aug. 26–29, ASME, p. V01BT02A041.
19. Jablokow, K. W., Sonalkar, N., Edelman, J., Mabogunje, A., and Leifer, L., 2019, “Investigating the Influence of Designers’ Cognitive Characteristics and Interaction Behaviors in Design Concept Generation,” ASME J. Mech. Des., 141(9), p. 091101.
20. Mirabito, Y., and Goucher-Lambert, K., 2022, “Factors Impacting Highly Innovative Designs: Idea Fluency, Timing, and Order,” ASME J. Mech. Des., 144(1), p. 011401.
21. Austin-Breneman, J., Honda, T., and Yang, M. C., 2012, “A Study of Student Design Team Behaviors in Complex System Design,” ASME J. Mech. Des., 134(12), p. 124504.
22. Hirschi, N., and Frey, D., 2002, “Cognition and Complexity: An Experiment on the Effect of Coupling in Parameter Design,” Res. Eng. Des., 13(3), pp. 123–131.
23. Yu, R., and Gero, J., 2016, “An Empirical Foundation for Design Patterns in Parametric Design,” Int. J. Archit. Comput., 14(3), pp. 289–302.
24. Woodbury, R., 2010, Elements of Parametric Design, Routledge, New York.
25. Avşar, A. Z., and Grogan, P. T., 2020, “Effects of Locus of Control Personality Trait on Team Performance in Cooperative Engineering Design Tasks,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 8: 32nd International Conference on Design Theory and Methodology (DTM), Virtual, Online, Aug. 17–19, ASME, p. V008T08A036.
26. Wöhr, F., Uhri, E., Königs, S., Trauer, J., Stanglmeier, M., and Zimmermann, M., 2023, “Coordination and Complexity: An Experiment on the Effect of Integration and Verification in Distributed Design Processes,” Des. Sci., 9(e1), pp. 1–31.
27. Hackman, J. R., and Morris, C. G., 1975, “Group Tasks, Group Interaction Process, and Group Performance Effectiveness: A Review and Proposed Integration,” Adv. Exp. Soc. Psychol., 8, pp. 45–99.
28. Gibbs, G., 1995, Assessing Student Centred Courses, Oxford Centre for Staff Development, Oxford, UK.
29. Dunne, E., and Rawlins, M., 2000, “Bridging the Gap Between Industry and Higher Education: Training Academics to Promote Student Teamwork,” Innov. Educ. Train. Int., 37(4), pp. 361–371.
30. Smith, R. P., and Eppinger, S. D., 1998, “Deciding Between Sequential and Concurrent Tasks in Engineering Design,” Concurrent Eng., 6(1), pp. 15–25.
31. Smith, R. P., and Eppinger, S. D., 1997, “Identifying Controlling Features of Engineering Design Iteration,” Manag. Sci., 43(3), pp. 257–402.
32. Steen, M., 2013, “Co-design as a Process of Joint Inquiry and Imagination,” Design Issues, 29(2), pp. 16–28.
33. Thurston, D. L., 2001, “Real and Misconceived Limitations to Decision Based Design With Utility Analysis,” ASME J. Mech. Des., 123(2), pp. 176–182.
34. Tucker, R., Abbasi, N., Thorpe, G., Ostwald, M., Williams, A., and Wallis, L., 2014, “Enhancing and Assessing Group and Team Learning in Architecture and Related Design Contexts,” Final Report, Office for Learning and Teaching, Department of Education, Sydney, Australia.
35. Takai, S., and Esterman, M., 2017, “Towards a Better Design Team Formation: A Review of Team Effectiveness Models and Possible Measurements of Design-Team Inputs, Processes, and Outputs,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Vol. 3: 19th International Conference on Advanced Vehicle Technologies; 14th International Conference on Design Education; 10th Frontiers in Biomedical Devices, Cleveland, OH, Aug. 6–9, ASME, p. V003T04A018.
36. Fournier, G., and Jeanrie, C., 2003, “Locus of Control: Back to Basics,” Positive Psychological Assessment: A Handbook of Models and Measures, S. J. Lopez and C. R. Snyder, eds., American Psychological Association, Washington, DC, pp. 139–154.
37. Boston Consulting Group, 1968, Perspectives on Experience, The Boston Consulting Group, Inc., Boston, MA.