Abstract
An important challenge in particle-based modeling of electron–phonon interactions is the large difference in the statistical weight of the particles in the two simulated populations. Each change in the state of a simulated phonon during scattering is statistically representative of an interaction with multiple simulated electrons, which results in a large numerical burden to accurately represent both populations. We developed two stochastic approaches to mitigate this numerical problem. The first approach is based on Poisson modeling of the scattering processes coupled with a thinning algorithm; it works effectively at steady-state but is prone to statistical errors in the energy during the transient regime. The second approach is based on point process (PP) modeling of the scattering, allowing a stochastic form of book-keeping that corrects the energy error. Here, we present a mathematical description of the problem and of the two stochastic approaches, along with the numerical results we obtained for the synchronous transient simulation of the electron and phonon populations.
1 Introduction
The increasing density and operation frequency of active and passive electron devices drive the need for more sophisticated heat management techniques. As a result, effective thermal management and controlled heat dissipation play a crucial role in device engineering because of their effects on both device performance and reliability.
The fundamental equation governing semiclassical transport of electrons is Boltzmann's transport equation (BTE) [1], while the heat in a crystal lattice can be modeled in terms of phonons [2] described by a formulation of the BTE derived by Peierls [3]. These two equations describe the interaction of free charge carriers with the crystal in terms of absorption and emission of phonons [4].
There are a number of examples in current electronics and photonics in which both the electron and phonon populations are strongly coupled and driven far from equilibrium, necessitating a full solution of the BTE. Power dissipation in integrated circuit technology has already slowed Moore's law in terms of the scaling of clock speeds. For small-dimension transistors and/or high-power applications, the driving fields in the device are quite high, approaching the MV/cm scale for wide-bandgap technologies such as GaN, where electronic transport is quasi-ballistic and highly nonstationary. Such nonequilibrium electrons dissipate their energy on the drain side, creating a phonon hot spot [5], where a localized, far-from-equilibrium population of optical phonons is established; diffusive transport of these nonequilibrium optical phonons, as well as their anharmonic decay to acoustic phonons, is required to dissipate heat away from the device and to the environment. A second example is that of quantum cascade lasers based on intersubband emission of light in multiquantum-well semiconductor structures, in which optical phonon emission plays a crucial role in the performance of such technologies. It is increasingly recognized that the build-up of a nonequilibrium population of phonons in the active lasing region of III-V material-based quantum cascade lasers leads to detrimental effects in the lasing performance [6,7].
In general, a closed-form solution of the BTE cannot be obtained without severe approximations being made on the nature of the collision integral (e.g., the relaxation time approximation [8]) or on the distribution function itself (e.g., the near-equilibrium approximation). For this reason, both deterministic and stochastic integration schemes [5,9–18] have been developed to numerically determine the distribution function. Deterministic approaches are often based on the lower-order terms of the moment expansion of the distribution function [19], with approximations required to obtain closure. Furthermore, a full numerical representation of the energy dispersion cannot be included in such models, which have a relatively low computational cost. However, they usually require reliable closure relations in terms of phenomenological transport parameters [20,21], with their scope being limited [22] by the accuracy of such parameters. Alternatively, the particle-based ensemble Monte Carlo (EMC) method [23] provides a space-time stochastic solution of the full BTE, which is exact in statistical terms [9].
Previous coupled EMC approaches treated electrothermal interactions by employing the net energy lost via electron scattering to calculate a heat generation rate [24]. This rate was then used to obtain a temperature map by solving the heat transport equation either analytically [24,25] or stochastically [26,27]. This work expands on the particle-based EMC approach for phonon transport presented in Ref. [18] by directly modeling phonon transport within a particle-based framework. Both charge and heat are modeled by simulating a statistically significant ensemble of particles, with each simulated particle representing an arbitrary number of physical particles [28]. In particular, each simulated charge carrier represents Wc physical electrons while each simulated phonon represents Wp physical phonons. Therefore, each simulated electron scattering represents the scattering of Wc physical electrons, and each simulated phonon scattering represents the scattering of Wp physical phonons. In this context, the direct modeling of the emission and absorption processes corresponds to the addition and removal of a simulated phonon from the simulation domain. Therefore, in the ideal case Wp = Wc, one simulated phonon is either added or removed with each scattering event. Unfortunately, this approach is not feasible since, in a typical semiconductor, the density of phonons is orders of magnitude higher than the density of free charges.
For example, at room temperature, the phonon density is typically comparable to the density of atoms, while in a highly doped semiconductor, the electron density is orders of magnitude lower. This means that a particle-based simulation with Wp = Wc requires at least a thousand simulated phonons for each simulated electron. Under these conditions, a Monte Carlo simulation requiring tens of thousands of simulated electrons would require on the order of $10\times10^{6}$ simulated phonons. Moreover, the number of simulated electrons is typically in the tens (or hundreds) of thousands, since it must be high enough to accurately represent the charge dynamics of the device; thus, this number cannot be arbitrarily reduced. It is then obvious that imposing Wp = Wc is usually not a practical approach.
The next logical choice of allowing Wp > Wc complicates the numerical simulation of interactions between them. Since one physical electron interacts with one physical phonon, the scattering of one simulated electron represents the interaction of Wc physical electrons with Wc physical phonons. However, each simulated phonon represents Wp physical phonons; therefore, each scattering of a simulated electron must be treated as a scattering with a fraction of a simulated phonon.
In Sec. 2, we present a stochastic solution to this problem based on the traditional (for Monte Carlo simulation) premise of dealing with processes that obey Poisson statistics. We first start with a simple assumption of stationarity to describe the phonon emission (absorption) process with homogeneous Poisson statistics. This assumption allows us to monitor the creation (destruction) of simulated phonons to statistically represent the correct rate of emission (absorption) of physical phonons by employing a technique based on the so-called thinning algorithm [29], discussed in Sec. 3.
While this first approach results in a statistical error in the conservation of the system energy, this error averages to zero during steady-state simulation. However, the equivalent approach (based on a nonhomogeneous Poisson process) for the transient, nonstationary regime results in a net error in the conservation of the system energy. For this reason, we developed another approach, based on the Hawkes point process (PP), which is presented in Sec. 4 of the paper.
This second approach uses the point process formalism to address the statistical error resulting from the simulated electron–phonon (Markovian) interaction during a transient by introducing a non-Markovian treatment of the error. The goal of this approach is to closely replicate the Poisson behavior in the stationary regime, while employing a stochastic form of book-keeping capable of preserving (in statistical terms) the distribution of the generated and absorbed phonons, with negligible computational and memory requirements.
Section 5 presents numerical tests of the implementation of the presented algorithms for the case of a semiconductor system consisting of electrons and phonons. In particular, we show how the point process approach results in a much smaller error in the conservation of the system energy.
2 The Monte Carlo Procedure
The particle-based Monte Carlo approach solves the semiclassical Boltzmann transport equation stochastically by modeling complicated transport phenomena as elementary interactions. In this approach, the many-body system is modeled by representing a statistically significant sample of its components as individual particles carrying physical quantities in phase space and time. Elementary interactions are treated as collisions, where the duration of scattering events is assumed much shorter than the particle mean free flight times. If the probability of the scattering event follows a Poisson distribution, the collision integral can be solved using the stochastic Monte Carlo procedure, which requires only the average time between scattering events. In this approach, the properties of the many-body system are obtained from the average properties of the simulated distribution.
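In the notation defined below, the coupled system takes the standard semiclassical form (reconstructed here for readability; the detailed collision functionals are those of Refs. [18] and [30]):

$$\frac{\partial f_c}{\partial t} + \mathbf{v}(\mathbf{k})\cdot\nabla_{\mathbf{r}} f_c + \frac{\mathbf{F}}{\hbar}\cdot\nabla_{\mathbf{k}} f_c = \left.\frac{\partial f_c}{\partial t}\right|_{\mathrm{coll}}\!\left[f_c, f_p\right], \qquad \frac{\partial f_p}{\partial t} + \mathbf{v}_g(\mathbf{q})\cdot\nabla_{\mathbf{r}} f_p = \left.\frac{\partial f_p}{\partial t}\right|_{\mathrm{coll}}\!\left[f_p, f_c\right]$$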
where $f_c$ and $f_p$ are the distributions of electrons and phonons, respectively, $\mathbf{r}$ is the position-space vector, while $|j,\mathbf{k}\rangle$ represents a state in the $j$th band with momentum $\mathbf{k}$. The terms on the right side of the equations represent the change of states due to collisions.
The reciprocal dependencies of each distribution function on the other have been explicitly written to highlight the coupled nature of these transport phenomena. Finally, for the sake of notation clarity, from here on the band index will be dropped, and the simple momentum notation will be used to represent both a state and the corresponding momentum vector.
2.1 Scattering Algorithm.
The cellular Monte Carlo (CMC) method, extended via the acceptance/rejection technique, is used in this work to reduce the number of calculations required by each scattering event while maintaining the accuracy of the EMC technique. In this approach, the scattering procedure [18] can be separated into four steps. First, a transition is selected from a precomputed look-up table. Then, new occupation numbers are obtained by sampling the run-time distribution of the simulated particles. A new scattering probability is calculated using the new occupation numbers. Finally, the acceptance/rejection technique is used to sample the new transition probability from the precomputed one.
where $M$ is the matrix element, $E(\mathbf{k}_0)$ is the energy of the state $\mathbf{k}_0$, and DOS is the density of states in the cell. Explicit selection rules are not necessary since they arise from the interplay of the dispersion/band structure with momentum and energy conservation. Momentum conservation is explicitly enforced in the matrix element, while energy conservation is implicitly accounted for in the DOS operator.
With these considerations, the momentum space can be subdivided into a finite number of cells. In this space, the allowed transitions from the states of each cell to all other cells can be computed and stored in the look-up table. The scattering procedure can then simply pick one particular transition from the precomputed table, which is loaded at the start of a simulation. This approach trades fast-access memory (RAM) usage for the computations each EMC scattering event would otherwise require: searching for all the possible final states, calculating all the scattering rates, and computing the scattering partial sums.
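As an illustration of these four steps, a minimal Python sketch might look as follows; the names (Transition, occupation, rate_with_occupation) are hypothetical placeholders and not the code of Refs. [18] and [30]:

```python
import random
from dataclasses import dataclass

@dataclass
class Transition:
    final_cell: int     # index of the final momentum-space cell
    max_rate: float     # rate precomputed with the worst-case occupation

def cmc_scatter(transitions, occupation, rate_with_occupation,
                rng=random.random):
    """One scattering event of the extended CMC algorithm (sketch).

    transitions          : precomputed look-up table entries for one cell
    occupation           : callable, cell index -> run-time occupation number
    rate_with_occupation : callable, (Transition, n) -> updated rate
    Returns the accepted Transition, or None if the event is rejected.
    """
    # Step 1: select a candidate transition from the precomputed table.
    t = random.choice(transitions)
    # Step 2: sample the run-time particle distribution for the occupation.
    n = occupation(t.final_cell)
    # Step 3: recompute the scattering rate with the new occupation number.
    rate = rate_with_occupation(t, n)
    # Step 4: acceptance/rejection against the precomputed maximum rate.
    return t if rng() < rate / t.max_rate else None
```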
The simulations presented in this paper employ full-band electron band structures computed via the empirical pseudo-potential method, and a phonon dispersion computed via a shell model. Details regarding the matrix elements can be found in Refs. [18] and [30].
The second step requires computing the occupation number, that is, the distribution function of a particular state. This task is achieved by setting the occupation number of a state equal to its average value around that state. In other words, the occupation number at the position $\mathbf{r}$ of the state $\mathbf{k}$ is calculated as the ratio of the number of simulated particles in a phase-space volume centered around $(\mathbf{r}, \mathbf{k})$ and the number of states in the same volume. The sampling volume itself is defined by a component in position space, $\Delta\mathbf{r}$, centered around $\mathbf{r}$, and a component in momentum space, $\Delta\mathbf{k}$, centered around $\mathbf{k}$.
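For concreteness, the sampling described above can be sketched as follows (vectorized Python with hypothetical array arguments; the inclusion of the weight $W_p$, which converts the simulated count into physical phonons, is our assumption based on Sec. 1):

```python
import numpy as np

def occupation_number(positions, momenta, r0, q0, dr, dq,
                      w_p, states_in_volume):
    """Run-time occupation number around the phase-space point (r0, q0).

    positions, momenta : (N, 3) arrays for the simulated phonons
    dr, dq             : half-widths of the sampling-volume components
    w_p                : physical phonons per simulated phonon
    states_in_volume   : number of phonon states inside the sampling volume
    """
    in_r = np.all(np.abs(positions - r0) < dr, axis=1)   # position window
    in_q = np.all(np.abs(momenta - q0) < dq, axis=1)     # momentum window
    n_sim = np.count_nonzero(in_r & in_q)  # linear search over the particles
    # statistical weight converts the simulated count into physical phonons
    return w_p * n_sim / states_in_volume
```

This linear search over the particle list is also the source of the $O(N_p)$ cost per scattering event discussed in Sec. 2.2.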
The final step employs the acceptance/rejection method, which is a basic statistical technique used to generate a distribution, difficult to obtain directly, by rejection sampling from a different, more convenient, distribution. In this approach, the acceptance/rejection method provides the flexibility and the accuracy of the EMC method while keeping the computational demands comparable to that of the CMC method.
2.2 Numerical Considerations.
Compared to a more standard CMC method [30], there are two additional aspects impacting the computational demands of this approach: the acceptance/rejection technique and the run-time calculation of the occupation number. In this work, the latter is the largest contributor to the computational complexity. Indeed, each occupation number requires searching over a portion of the simulated particles, resulting in a computational complexity of $O(N_p)$, where $N_p$ is the number of simulated phonons. Therefore, computing a new occupation number with each scattering event gives a total computational complexity for the scattering algorithm of the order of $O(N \cdot N_p)$, where $N$ is the total number of simulated particles. While this method is computationally intensive, and further optimizations may be used, it allows altering each scattering rate according to the local run-time phase-space distribution.
In contrast, the simple acceptance/rejection method requires a constant number of calculations for each scattering event. When using less demanding methods to obtain the occupation number, e.g., an analytic distribution, this technique results in an increase of the wall-clock simulation time of a few percent.
Finally, in position-space/drift simulations, the use of a position-space component of the sampling volume, $\Delta\mathbf{r}$, requires a tradeoff between $\Delta\mathbf{r}$ itself and $N$. A smaller $\Delta\mathbf{r}$ requires an increased number of particles to reduce the statistical noise of the occupation number. A larger $\Delta\mathbf{r}$ limits the spatial resolution and may result in nonphysical interactions between particles too distant from each other. The values of $\Delta\mathbf{r}$ and $N$ will depend on the phenomena of interest and the material properties. For example, phenomena strongly localized in phase space will require a smaller $\Delta\mathbf{r}$ and a larger number of particles, while phenomena requiring low spatial resolution may use a larger $\Delta\mathbf{r}$ and a smaller $N$.
3 Phonon Absorption and Emission as a Poisson Distribution
3.1 Poisson Model.
When the statistics of the electron–phonon emission process are stationary, we can make the following assumptions:
The number of emissions occurring during any time interval is an integer.
The emissions occur independently.
The scattering rate is not time-dependent.
The number of scattering events during any time interval is proportional to the length of the interval.
The probability of two emission events happening at the exact same time is vanishingly small.
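Under these assumptions, the number of emissions into a given phonon state over an interval $\Delta t$ obeys the homogeneous Poisson law, written here in its standard form, consistent with the definitions that follow:

$$P\!\left[\eta^{e}(t, t+\Delta t) = k\right] = \frac{\left(\lambda^{e}\,\Delta t\right)^{k}}{k!}\, e^{-\lambda^{e}\,\Delta t}, \qquad k = 0, 1, 2, \ldots$$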
where $P$ represents a probability, the superscript $e$ indicates an emission process, $\eta^{e}$ is the total number of (physical) charges emitting a phonon into the state $\mathbf{q}$ during the time interval $\Delta t$, $k$ is an integer number, and the parameter $\lambda^{e}$ is called the rate or intensity of the process, which, since we assumed a homogeneous Poisson distribution, is constant in time.
where the symbol $E$ represents the expectation operator [31], such that the expected value $E[x]$ of the random variable $x$ is the mean value of $x$; $\eta_c$ represents the total number of (physical) charged particles (either holes or electrons); and the remaining factor is the charge–phonon emission rate.
where $N$ is used to represent the number of simulated particles and the superscript $c$ indicates charged particles. Consequently, $N^{c}$ represents the number of simulated charge carriers, each one representing $W_c$ physical particles undergoing phonon emission scattering. Finally, $\lambda^{ce}$ is the intensity of the Poisson process modeling the simulated scattering.
where the superscript $p$ indicates a simulated phonon representing $W_p$ physical particles, and $\lambda^{pe}$ is the intensity of the Poisson process modeling the emission (creation) of simulated phonons.
Consequently, $\lambda^{ce} > \lambda^{pe}$, since $W_p > W_c$. Therefore, creating one simulated phonon with each simulated-electron emission event would generate a number of simulated phonons larger than the expected value calculated in Eq. (7). A similar treatment in the case of phonon absorption leads to an equivalent conclusion; consequently, this simple approach cannot be used.
An intuitive solution to this problem is to create a new simulated phonon as a result of only a fraction of the scattering events, while preserving the Poisson nature of the phonon generation process. Indeed, this goal can be achieved by employing a technique called thinning [29]. A theoretical overview and the implementation details of this approach are presented in Secs. 3.2 and 3.3.
3.2 Thinning Algorithm.
where $\lambda(t)$ is the intensity of the target distribution. Events are generated from a dominating distribution with intensity $\bar{\lambda} \ge \lambda(t)$ and are then retained with probability $\lambda(t)/\bar{\lambda}$ or rejected with probability $1 - \lambda(t)/\bar{\lambda}$. Finally, $\bar{\lambda}$ is typically chosen to be a constant in order to simplify the algorithm that generates the sampling distribution. In the specific case of phonon emission, $\lambda^{pe}$ represents the intensity of the target distribution, and the intensity of the dominating distribution is chosen to be $\lambda^{ce}$.
where $E[\cdot \mid \cdot]$ is the conditional expectation operator, such that $E[A \mid B]$ is the expected value of the event $A$ computed with respect to the conditional probability distribution function [31]. Equation (11) shows that the average number of simulated phonons created by the thinning algorithm corresponds to the expected value as calculated in Eq. (7).
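A minimal sketch of this thinning scheme for simulated-phonon creation, assuming a constant (homogeneous) dominating intensity, is shown below; all names are ours:

```python
import math
import random

def thinned_phonon_emissions(lam_ce, w_c, w_p, t_end, rng=random.random):
    """Sample simulated-phonon creation times by thinning (sketch).

    Candidates are drawn from the dominating homogeneous process with the
    simulated-electron emission intensity lam_ce; each candidate is kept
    with probability lam_pe / lam_ce = w_c / w_p, so the accepted events
    form a Poisson process with the target intensity (w_c / w_p) * lam_ce.
    """
    accept_p = w_c / w_p          # thinning (retention) probability
    t, events = 0.0, []
    while True:
        # exponential waiting time of the dominating Poisson process
        t += -math.log(1.0 - rng()) / lam_ce
        if t > t_end:
            return events
        if rng() < accept_p:      # retain: create one simulated phonon
            events.append(t)
```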
3.3 Transient Regime.
where the product is the expectation, or the average value, of the number of emission events by the simulated charge carriers.
where $\lambda^{pe}$ is defined in Eq. (7), and, as usual, $W_p$ is the number of physical phonons represented by each simulated phonon. Finally, the product is the expectation of the number of simulated phonons created.
Since the thinning algorithm requires that $\lambda^{pe} = (W_c/W_p)\,\lambda^{ce}$, it is clear that the average energy lost by the population of simulated charge carriers equals the energy gained by the population of simulated phonons.
A similar approach can be applied to analyze the thinning algorithm for the absorption process. In this case, the equations show that the average energy gained by the population of simulated charge carriers via absorption equals the energy lost through the removal of simulated phonons.
Since $W_p > W_c$ (typically by orders of magnitude), the variance of the energy gained by the simulated phonon population is larger than the variance of the energy lost by the simulated charge population. In the case of phonon absorption, the same argument leads to the same conclusion.
In the steady-state regime, instantaneous differences between the energy gained by the simulated phonon population and the energy lost by the simulated charge carriers have no lasting effect since, from Eq. (11), both quantities have the same average value. The total energy of the charge–phonon system is therefore conserved on average, and the difference in variance merely results in larger noise in the energy of the simulated phonon population as compared to the simulated charge population. As a consequence, any difference between the energies of the two populations existing at the end of a transient is preserved in the steady-state regime. While the conditions present at the end of a transient should indeed carry over, the same holds for any random fluctuation of the energy difference accumulated during the transient. This creates an irremovable discrepancy between the energy gained by the simulated crystal and the energy lost by the simulated charged particles: the larger the fluctuation, the larger the error in the conservation of energy. In other words, the thinning algorithm does not respond fast enough to ensure the conservation of the system energy in the presence of fast transients.
It is, in principle, possible to employ a simple book-keeping scheme that accounts for the error in the conservation of the energy exchanged between the two populations. This approach would compensate for any error by adding or removing the appropriate number of simulated phonons whenever the energy discrepancy justifies it. However, a simple book-keeping scheme requires somewhat arbitrary decisions about which simulated phonon should be removed and in which state a new simulated phonon should be created. In order to avoid such arbitrary decisions, a technique based on the probabilistic model called the point process [35] is used in place of the Poisson approach presented in Sec. 3.1. The proposed framework allows modeling the creation and removal of simulated phonons in a way that statistically enforces the conservation of the system energy. The mathematical basis for the modeling of absorption and emission processes and the algorithm employed in this work are discussed in Sec. 4.
4 Phonon Absorption and Emission as Point Processes
4.1 Point Process.
In general terms, a PP [36] can be described as a sequence of points randomly located in some space. However, in the context of this work, the only type of point process considered is defined on the one-dimensional time space, represented by the letter $t$. In this simple case, a PP can be thought of as a collection of points (in our case, the scattering events) randomly placed on the time line, and the specific location pattern of these points constitutes the process itself. In other words, the term process will denote here a collection of events occurring in time, and the term stochastic process will identify a process that develops in time according to some probabilistic rule. A point process is commonly described in terms of its counting measure [35,37,38], which represents the cumulative count of events in the process. This means that for two points in time $t_0$ and $t_1$, where $t_0 < t_1$, the value $N(t_0, t_1)$ is a nonnegative integer representing the number of events occurring in the time interval $(t_0, t_1]$. In particular, $N$ is called a random counting measure when the counted events are part of a stochastic process.
- $N(t_0, t_1)$ is a nonnegative integer.
- $N(0, t)$ is a nondecreasing, right-continuous step function of $t$ with jumps of finite size.
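In terms of the event times, the counting measure admits the standard representation

$$N(t_0, t_1) = \sum_{i} \mathbf{1}\!\left\{ t_0 < t_i \le t_1 \right\}$$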
where $t_i$ is the time of the $i$th event.
Figure 2 shows a graphical representation of a PP described by Eq. (18). The dots on the time line mark the corresponding events, while the solid line represents the PP intensity as a function of time. A shorter (average) distance between events corresponds to a higher probability of a new event happening. Since Eq. (19) states that the probability and the intensity are proportional to each other, it follows that a higher density of events results in a higher intensity, while a sparser distribution of events results in a lower intensity. In other words, the average distance between events, centered around a specific time $t$, has an inverse relationship with the corresponding intensity, $\lambda(t)$. This particular case shows the alternation between high and low intensity and the corresponding clustering of events centered around the peaks of intensity.

Example of a one-dimensional PP. The solid line represents the intensity of the PP as a function of time, while each dot on the time line marks an event. A higher clustering of events corresponds to a higher intensity. In this particular example the events are obtained with a stochastic procedure from the assumed sinusoidal intensity.

The point process framework provides extensive flexibility in the definition of the intensity λ(t) of Eq. (18). When λ(t) is a deterministic function, the number of events in an interval dt obeys a Poisson distribution with intensity λ(t). In other words, the quantity $N(t, t+dt)$ is a nonhomogeneous Poisson random variable with intensity λ(t) [31,35], and the PP is referred to as a nonhomogeneous Poisson point process. Obviously, if λ(t) is a constant, the quantity is a homogeneous Poisson random variable, and the PP is referred to as a homogeneous Poisson point process. As one would expect, this definition of the Poisson PP in terms of a counting measure allows the derivation of all the properties of the Poisson distribution [31,32]. If λ(t) is not a deterministic function but another independent Poisson random variable, the PP is referred to as a Cox (doubly stochastic Poisson) process [32,39,40], which is a generalization of the Poisson process in which the intensity itself is a random variable. The term Cox process is also commonly used in the literature to describe any PP with a Markovian [35] intensity λ(t). In the case of a non-Markovian intensity, the PP is commonly referred to as a Hawkes process [32,35,39]. In the same way that the Cox process can be considered a generalization of the Poisson process, the Hawkes process can be thought of as a non-Markovian extension of the Poisson PP: in a sense, the Hawkes PP is a Poisson PP that accounts for the effect of past events. For this reason, an approach based on the Hawkes PP is a natural extension of the model based on the Poisson distribution presented in Sec. 3.2.

It may seem unusual to use such a non-Markovian approach in the context of a stochastic method that solves the collision integral of the BTE under the assumption of Markovian interactions. The reason can be understood by comparing the PP-based approach with a basic book-keeping scheme. Both keep track of the error, but while the book-keeping scheme adds or removes particles outside of the scattering process, the PP approach transfers this correction directly into the scattering algorithm by altering the probability of generating or removing particles according to the value of the tracked error. As such, the PP approach can be thought of as a statistical form of book-keeping. More explicitly, since the statistical weight of the simulated phonon is much higher than that of the simulated electron, simulated phonons cannot simply be created or removed with each scattering event, and a choice must be made on when to alter their number. In this work, this choice is made by a non-Markovian PP that compensates for the error in the energy conservation. In other words, it is not the Markovian electron–phonon interaction that is modeled via a non-Markovian PP, but the way simulated phonons are created and removed as a result of scattering.
4.2 Hawkes Model.
The Hawkes [39] point process is typically used to model systems where each new event impacts the whole stochastic process. More explicitly, the chance of a new event depends on the sequence of events that preceded it. This type of process is typically employed to model systems where the observed events tend to cluster in time. For example, seismic events present short-term clustering: an earthquake main shock is both preceded and followed by intermediate-sized events [41]. Hawkes processes have also been employed to model criminal activity such as property crime, gang violence, and terrorism. Each initial event increases the likelihood of more events, as a successful offender tends to replicate the crime in a nearby location in the following weeks [42]; or, in the case of gang violence, each violent event tends to ignite retaliation [43].
A PP is called self-exciting when $\mathrm{cov}\!\left(N(t_0, t_1), N(t_1, t_2)\right) > 0$ for any $t_0 < t_1 < t_2$, where cov denotes the covariance of the two quantities. When the covariance is negative, the PP is called self-correcting or self-regulating; in that case, each event makes the occurrence of the following events less probable for some period of time. Since the energy error can be both positive and negative, the employed PP must simultaneously model two such opposite behaviors: it can be neither purely self-exciting nor purely self-correcting, but should account for both positive and negative covariance.
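The prototypical self-exciting example is the Hawkes intensity with an exponential kernel, written here in its standard form, where $\lambda_0$ is the background intensity, $A$ the jump amplitude, and $\beta$ the decay constant:

$$\lambda(t) = \lambda_0 + \sum_{t_i < t} A\, e^{-\beta (t - t_i)}$$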
It is now clear that each event instantaneously increases the arrival intensity by A, and that this influence decays exponentially over time.
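More generally, the conditional intensity of a Hawkes process can be written in terms of an arbitrary exciting function $h$ (again in standard form):

$$\lambda(t \mid \mathcal{H}_t) = \lambda_0(t) + \sum_{t_i < t} h(t - t_i)$$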
This expression clearly shows the relationship between the exciting function, past events, and the intensity. More explicitly, each past event influences the future intensity through its weight $h$.
4.3 Multivariate Hawkes Point Process.
In this work, the creation and removal of simulated phonons are modeled in terms of a multivariate [44] Hawkes PP in order to minimize the effects of the statistical fluctuations on the energy conservation. The definition of the PP must ensure that the variation in energy of the phonon population closely mirrors that of the population of simulated charges, in spite of the difference between the statistical weights of the particles in the two populations. In other words, in order to correctly model transients, the model must alter the rate of generation or removal of simulated phonons to achieve conservation of the system energy.
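In a multivariate Hawkes process, each variate carries its own intensity, coupled to the others through a matrix of exciting functions; the standard form, reconstructed here to match the definitions below, is

$$\lambda_i(t) = \lambda_{0,i}(t) + \sum_{j=1}^{N_v} \int_{-\infty}^{t} h_{ij}(t - s)\, \mathrm{d}N_j(s)$$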
where $N_v$ is the number of variates, $h_{ij}$ defines how the $j$th variate affects the $i$th variate, and $N_j$ is the random counting measure of the $j$th variate.
The electron–phonon interactions are modeled by employing four types of variates to represent all the possible energy exchanges happening between the simulated populations. In particular, each phonon state with momentum $\mathbf{q}$ is connected to four variates:
- Electron emission of a phonon in $\mathbf{q}$, represented by the superscript $ce$.
- Electron absorption of a phonon in $\mathbf{q}$, represented by the superscript $ca$.
- Phonon in $\mathbf{q}$ created by phonon emission, represented by the superscript $pe$.
- Phonon in $\mathbf{q}$ removed via phonon absorption, represented by the superscript $pa$.
The time integral of the resulting net energy-exchange rate is a measure of the error in the conservation of energy of the simulated interacting populations.
where $\lambda^{ce}$ and $\lambda^{ca}$ are the intensities corresponding to the scattering of simulated charged particles, as defined in Sec. 3.3. The quantities $\lambda^{pa}$ and $\lambda^{pe}$ are the intensities corresponding to the removal and generation of simulated phonons, respectively. A simple algebraic substitution shows that the thinning algorithm presented in Sec. 3.2 produces an expected value of zero for this error at steady-state.
When the creation and the removal of simulated phonons are modeled as Hawkes processes, the intensities $\lambda^{pa}$ and $\lambda^{pe}$ are designed to minimize the error in the conservation of energy between the simulated populations, which is represented by the time integral above. In particular, the desired properties for the intensity $\lambda^{pa}$ are as follows:
- $\lambda^{pa}$ has a background intensity of $(W_c/W_p)\,\lambda^{ca}$.
- $\lambda^{pa}$ has an inverse dependence on the energy error.
- $\lambda^{pa}$ has a maximum intensity of $\lambda^{ca}$.
The first property ensures that, in the absence of an error, the process behaves like the Poisson model described in Sec. 3.2; the second property imposes a negative feedback on the value of the error; and the final property allows a simplification of the algorithm by ensuring that each simulated charged particle interacts with no more than one simulated phonon.
In a similar fashion, the desired properties for the intensity $\lambda^{pe}$ are as follows:
- $\lambda^{pe}$ has a background intensity of $(W_c/W_p)\,\lambda^{ce}$.
- $\lambda^{pe}$ has a forward dependence on the energy error.
- $\lambda^{pe}$ has a maximum intensity of $\lambda^{ce}$.
This approach does not burden the algorithm, and it reproduces the desired Poisson-like behavior for $\lambda^{pe}$. However, there is no effect on the intensity until the error becomes sufficiently large, and this can result in a long delay in the algorithm's response to the error. This delay produces a statistical difference between the momentum distribution of the created phonons and that of the phonon states involved in the scattering events which created the error. In other words, the momentum distribution of the generated phonons does not correspond to the variation in the momentum of the scattered charged particles.
where the proportionality constant $a$ determines the sensitivity of the intensity. Small values of $a$ closely reproduce the Poisson-like behavior presented in Sec. 3.2 but allow for large values of the error. In contrast, while large values of $a$ limit the magnitude of the error, the behavior of the intensity largely differs from the Poisson case presented in Sec. 3.2.
This work employs a log-linear model to balance the efficiency of the algorithm, the need for an instantaneous response, and the need to limit the error. This approach provides an immediate yet modulated response to the error: for small errors, the effect is minimal, and the rate of removal and creation of simulated phonons does not diverge much from the Poisson model presented in Sec. 3.2; as the error grows, the effect on the intensity becomes more pronounced, ensuring the desired small delay between the increase of the error and the algorithm's response.
4.4 Log-Linear Model of the Intensity.
where the prefactor is assumed constant in $\mathbf{q}$, the summation and integration order have been reversed, and the exciting function is the same as in Eq. (30).
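For the emission intensity, one plausible reconstruction of the log-linear model, consistent with the properties listed in Sec. 4.3 (a background intensity of $(W_c/W_p)\lambda^{ce}$, an exponential dependence on the accumulated energy error $E(t)$, and a cap at $\lambda^{ce}$), reads

$$\lambda^{pe}(t) = \min\!\left( \lambda^{ce}(t),\; \frac{W_c}{W_p}\,\lambda^{ce}(t)\, e^{\,a E(t)} \right)$$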
where the minimum function accounts for the limit imposed on the maximum intensity. The factor $a$ has the dimensions of an inverse energy; intuitively, this factor determines the sensitivity of the intensity to changes in the accumulated energy error.
where $t_i$ is the time of the $i$th event. A simple substitution shows that the intensity reduces to the Poisson model of Sec. 3.2 in the special case of $E_i = 0$. At the start of the simulation we can assume no error in the energy exchange between the simulated populations, i.e., we impose $E_0 = 0$. Therefore, $E_i = 0$ is expected for each $i$, and the expected value of the time integral of the error must be zero for each $t$. Since this integral is a measure of the error in the conservation of energy in scattering between the simulated populations, the energy gained by the simulated phonon population must equal the energy lost by the population of simulated charged particles.
where the superscripts max and min denote the maximum and minimum values of $x$, respectively.
where $\hbar\omega^{\max}$ is the maximum phonon energy.
This equation confirms the intuitive notion that the variance depends on the ratio of Wp and Wc. Moreover, the equation provides a practical approach for the selection of the parameter a by choosing the magnitude of the variance.
4.5 Hawkes Model Implementation.
The literature proposes various sampling algorithms [47–50] to generate samples of point processes. This work employs a technique based on Ogata's [49] thinning approach.
Ogata frames the thinning algorithm in terms of a target and a sampling PP when both PPs are simple, as defined in Sec. 4.1. In this context, the rejection probability is expressed as a ratio of the PP intensities [35,49]. If the intensities of both PPs are deterministic, as in a Poisson PP, the thinning procedure follows the basic thinning algorithm [29]. However, when at least one of the intensities is a random variable, as in a Hawkes process, the algorithm must directly account [35] for the inequality in Eq. (9); that is, at any time the intensity of the sampling PP must be larger than the intensity of the target PP. Ogata achieves this goal by employing a sequential variant of the thinning algorithm, in which the inequality is ensured by recomputing the intensity of the sampling PP after each event, alongside the value of the intensity of the target PP.
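A compact sketch of this sequential thinning scheme (our own simplified rendering of Ogata's algorithm, not the implementation used in this work) is:

```python
import math
import random

def ogata_sample(intensity, upper_bound, t_end, rng=random.random):
    """Sequential (Ogata-style) thinning for a point process (sketch).

    intensity(t, events)   : conditional intensity of the target PP; for a
                             Hawkes process it depends on the past events.
    upper_bound(t, events) : local bound recomputed after every candidate;
                             it must dominate the intensity until the next
                             candidate time for the sample to be exact.
    """
    t, events = 0.0, []
    while True:
        lam_bar = upper_bound(t, events)          # dominating intensity
        t += -math.log(1.0 - rng()) / lam_bar     # candidate waiting time
        if t > t_end:
            return events
        # retain the candidate with probability intensity / lam_bar <= 1
        if rng() * lam_bar < intensity(t, events):
            events.append(t)
```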
5 Simulation Results
In this section, we provide a validation of the approach described in Sec. 4. These simulations employ the full-band Monte Carlo method presented in Refs. [18] and [30] with the addition of the techniques presented in this work for the creation and removal of simulated phonons as a result of electron–phonon scattering. Moreover, both types of particles are simulated in a synchronous heterogeneous ensemble. This approach employs the numerically calculated electron band structure (and the full dispersion for phonons) to calculate all the possible interactions and their respective scattering rates. All these data are precomputed and stored in a compressed (but rather large) look-up table that is loaded at the start of the simulation. This approach allows one to simply choose a transition from the look-up table without the need to search for all the possible final states with each scattering event. Since more than one phonon can occupy a state, the absorption (emission) rate for a phonon state depends on the state occupation number. This effect is accounted for by employing a stochastic acceptance/rejection technique to adapt the scattering rate to the run-time simulated distributions. This technique requires first storing the expected maximum scattering rate in the look-up table; the scattering algorithm then extracts the state occupation number from the local phonon distribution at the time of scattering. Finally, the occupation number is used in a stochastic procedure to decide whether or not to proceed with the scattering event.
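The occupancy-dependent acceptance test can be sketched as follows (the Bose factors $n_q + 1$ for emission and $n_q$ for absorption are the standard physical dependence; the names and interface are our own):

```python
import random

def accept_phonon_event(n_q, n_max, emission=True, rng=random.random):
    """Occupancy-dependent acceptance/rejection test (sketch).

    The look-up table stores the maximum rate, evaluated with the largest
    expected occupation n_max; at run time the rate is rescaled with the
    occupation extracted from the local phonon distribution, and the event
    proceeds with the corresponding probability ratio.
    """
    factor = n_q + 1.0 if emission else n_q
    max_factor = n_max + 1.0 if emission else n_max
    return rng() < factor / max_factor
```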
5.1 Conservation of Energy.
This first set of simulations tested the influence of the scattering algorithm and the weight of the simulated particles on the conservation of energy in the system. It can be inferred from Eqs. (15) and (51) that the error in the energy conservation strongly depends on the values of Wc and Wp. Moreover, these values impact the simulation performance since smaller values require a higher number of particles. It is clear that there must be a tradeoff between the computational burden of the simulation and the error in the conservation of energy. As explained in the introduction, the value of Wc is determined by the characteristics of the electron simulation; consequently, it is convenient to think about the computational performance and noise level of an electrothermal simulation in terms of Wc/Wp. When Wc/Wp = 1, scattering happens one to one and dE always equals zero. Since typical Monte Carlo simulations have tens of thousands of simulated electrons, imposing Wc/Wp = 1 would require a number of simulated phonons of the order of $10\times10^{6}$. Even with modern high performance computing (HPC) systems, simulating such a large number of particles is extremely impractical. Decreasing the value of Wc/Wp reduces the number of simulated particles while increasing the value of dE; hence, some tradeoff must be made between the value of dE and the computational demands of the simulation.
The simulated system is an isolated Si cube with an electron–hole pair population initialized with a Maxwellian distribution shifted in energy, and a simulated phonon population that is initially in thermal equilibrium at 100 K, i.e., a Bose–Einstein distribution. At time t = 0, the system is allowed to evolve according to the simulation dynamics as described at the start of this section and in Ref. [18]. During the relaxation process, the instantaneous particle distribution is recorded at intervals of 0.1 fs. Figures 3–5 show the time evolution of the system in terms of the total energy lost by the charges (dashed line) and the energy gained by the simulated phonons (solid line) in three different cases. Since scattering events are supposed to conserve energy, the two lines should superimpose perfectly in each case. The ratios Wc/Wp are obtained from the simulation output after calculating both Wc and Wp as the ratio between the calculated number of physical particles and the imposed number of simulated particles.

Evolution of the energy balance of the charge and phonon populations in an isolated cube of Si. The quantities are expressed in terms of the total kinetic energy lost by the simulated holes and electrons (dashed line) and the total energy gained by the simulated phonons (solid line). Here, phonon emission and absorption are treated in terms of a Poisson probability distribution. Despite the relatively large ratio Wc/Wp, this approach performs much worse than the case in Fig. 4.


Evolution of the energy balance of the charge (both holes and electrons) and phonon populations in an isolated cube of Si. The quantities are expressed in terms of the total energy lost by the simulated charges (dashed line) and the energy gained by the simulated phonons (solid line). Here, the phonon emission and absorption processes are treated as Hawkes processes.

Figure 3 shows what happens when the charge–phonon transient is simply treated in terms of the Poisson probability distribution, as described in Sec. 3.2. This simulation employs a rather large ratio Wc/Wp, yet the difference between the energy gained by the simulated phonons and the energy lost by the simulated charges is a sizable fraction of the total energy exchange.
In Figs. 4 and 5, the electron–phonon interaction is treated as a Hawkes process with different Wc/Wp ratios, both smaller than in the case presented in Fig. 3. In particular, the simulation plotted in Fig. 4 has a ratio Wc/Wp that is a fifth of that in Fig. 3 and half of that in Fig. 5. The importance of this parameter is shown in Eq. (51), which suggests that the average distance between the two curves should scale with the inverse square of Wc/Wp; simulations with a larger Wc/Wp ratio should therefore perform better. Indeed, compared to the case of Fig. 4, the plot in Fig. 5 shows a much closer agreement between the energy gained by the simulated phonons and the energy lost by the simulated charges.
The case in Fig. 3 presents a ratio Wc/Wp at least twice as large as in the cases modeled as Hawkes processes in Figs. 4 and 5, yet it shows an overall larger error. Indeed, this is the behavior predicted in Sec. 4.3. These plots show the importance of using an appropriate scattering algorithm together with a sufficiently large ratio Wc/Wp.
5.2 Electron Decay.
The full algorithm for the coupled charge–phonon dynamics has been tested by simulating the thermalization process of electrons and holes in an isolated cube of Si that is initialized out of equilibrium. In particular, the simulated electron–hole pairs are initialized with a Maxwellian distribution that is shifted in energy, and the simulated phonons are initialized at 100 K thermal equilibrium before the system is left to relax. The focus of this test is to observe the energy exchange between charges and phonons when modeling absorption and emission in terms of a Hawkes process, as discussed in Sec. 4.4.
Figures 6 and 7 show the evolution of the energy distribution of holes and electrons at times 0.1 fs and 30.1 fs, respectively. The ratio Wc/Wp used is the one that showed good energy conservation performance in Fig. 5. These plots represent the evolution of the charge population in the early transient period of the simulation. In Fig. 6, few carriers have scattered, and the majority of the charges are still near the initial energy distribution. It is interesting to note that secondary peaks are already visible for both holes and electrons.

Distribution of holes and electrons in the isolated cube of Si at time t = 0.1 fs. The populations are concentrated at the initialization energy.

Distribution of holes and electrons in an isolated cube of Si at t = 30.1 fs; the zero of the horizontal axis is placed at the top of the valence band. This picture clearly shows that the relaxation process of the charges results in regularly spaced peaks in the energy distribution.
The distributions in Fig. 7 show a clear pattern in the carrier decay. In particular, the picture shows evenly spaced density peaks for both electrons and holes. In order to better observe this behavior, Fig. 8 shows an enlarged representation of the electron distribution where the distance between the minor vertical grid lines is set to 0.06 eV, which is roughly the average energy of an optical phonon. From this picture, it is clear that the density peaks are evenly spaced, with the distance between consecutive peaks corresponding to the optical phonon energy. Indeed, this behavior is expected since the emission of optical phonons is the dominant scattering mechanism, and hence the energy relaxation process, for high energy carriers out of equilibrium.

Enlarged picture of the electron population depicted in Fig. 7. In this graph, the distance between the vertical grid lines has been set to 0.06 eV, which is roughly the average energy of an optical phonon. It is clear that optical emission is the dominant scattering mechanism, since the distance between density peaks corresponds to one optical phonon energy.

5.3 Electron Thermalization.
In the final set of tests, all the simulated electrons in the isolated Si cube are initialized with a delta-function distribution with a kinetic energy of 20 meV, while the simulated phonons are initialized at 300 K. This setup tests the code's capability to correctly thermalize the simulated population of particles by both absorption and emission for different ratios Wc/Wp. Figure 9 compares an equilibrium distribution of simulated electrons (circles), used as a reference, against distributions obtained after 0.6 fs (dashed line) and 300 fs (solid line) with a ratio Wc/Wp = 5.42 × 10⁻³. At the 0.6 fs mark, few electrons have scattered, and the majority of the carriers are still close to the initial energy distribution (dashed line). From the picture, it is clear that the distribution after 300 fs corresponds to the reference distribution, while the initial population was out of equilibrium. In Sec. 5.2, optical phonons were the predominant scattering mechanism for high-energy electrons; in the present simulation, however, acoustic phonons play the critical role in defining the final shape of the electron distribution function.

Thermalization of electrons in Si via electron–phonon scattering. The reference equilibrium distribution of the simulated electrons (circles) is compared with simulated distributions obtained at times 0.6 fs (dashed line) and 300 fs (solid line) with a ratio Wc/Wp = 5.42 × 10⁻³. The 300 fs distribution closely matches the reference distribution.

Figure 10 compares the time evolution of the electron distribution for the simulation in Fig. 9 (dashed line) and for two other ratios of Wc/Wp (solid and dotted lines). In this picture, the distributions are expressed in terms of their distance from the reference distribution by employing the Jensen–Shannon divergence [51]. We can see in this figure small differences in the transient regime: all three cases reach steady-state at the same time, around the 150 fs mark. After this point, the divergence value is dominated by the statistical noise.
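For reference, the Jensen–Shannon divergence between two sampled distributions can be computed as in the sketch below (our own implementation, not necessarily the one used to produce Fig. 10):

```python
import numpy as np

def jensen_shannon_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)           # mixture distribution
    def kl(a, b):
        mask = a > 0            # convention: 0 * log 0 = 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```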
Time evolution of the charge distribution in the isolated Si cube. The quantities are represented in terms of their distance from the reference distribution: the vertical axis shows the logarithm of the divergence (difference) from the ideal distribution function, calculated via the Jensen–Shannon divergence [51], a measure typically employed to compare probability distributions. The three lines correspond to simulations with different ratios Wc/Wp. All three cases reach steady-state around the 150 fs mark; after that point, the divergence shows the magnitude of the statistical noise. The black square corresponds to the divergence for the case of Wc/Wp = 5.42 × 10⁻³ at 300 fs shown in Fig. 9.
While all three cases reach steady-state at the same time, one can note that the solid and dashed lines are clearly less noisy during the 150 fs transient. This is due to the fact that the three simulations have the same Wc but different Wp. Since all simulations have the same starting temperature, a larger Wp corresponds to fewer simulated phonons, which causes a difference in the statistical noise when computing the momentum-dependent occupation number.
6 Conclusion
This work presented two stochastic approaches to tackle the challenges of modeling electrothermal phenomena within an all-particle-based Monte Carlo method. The first approach models the electron–phonon interactions in terms of Poisson processes. This method excels in the steady-state regime, but it results in energy conservation issues in the transient regime. This statistical error in the energy conservation is addressed by modeling the electron–phonon interactions via the point process formalism. This formalism effectively implements a form of statistical book-keeping, which was found to mitigate the energy conservation issues. Compared to a full book-keeping scheme, this approach presents negligible computational and memory requirements, while being capable of preserving (in statistical terms) the distribution of the generated and absorbed phonons. We implemented these algorithms within the framework of the particle-based full-band cellular Monte Carlo and provided numerical results obtained from the simulation of synchronous and interacting populations of electrons and phonons.
Funding Data
Air Force Office of Scientific Research (AFOSR) (Grant No. FA9550-16-1-0406; Funder ID: 10.13039/100000181).
Air Force Research Laboratory (AFRL) (Grant No. FA8650-14-1-7418; Funder ID: 10.13039/100006602).
Nomenclature
- a = a constant number
- BZ1 = first Brillouin zone
- = energy
- E = expectation
- f = distribution function
- h = the collection of previously occurred events of a process
- = the collection of previously occurred events of a system
- kB = Boltzmann constant
- = counting measure
- N = number of simulated particles
- = random number obtained from the uniform distribution
- p = probability distribution function
- P = Poisson statistic
- P = probability
- ℝ = set of real numbers
- S = step function
- t = time
- Var = variance
- W = number of physical particles represented by a simulated particle
- x = random variable
- = scattering rate
- ζ = real function
- η = number of physical particles
- λ = intensity of a point process
- σ = thermal conductivity
- = function from ℝ to ℝ⁺
- ω = frequency

Superscripts
- a = value for absorption
- c = value for charges (electrons)
- e = value for emission
- p = value for phonons
Footnotes
In measure theory, this collection is called a filtration or history [35], that is, an increasing sequence of σ-algebras on the measurable space.
$P(A \mid B)$ is the probability of the event $A$ happening given that the event $B$ has already occurred.