Giorgio Tosti Balducci
Aerospace Structure and Materials
Delft University of Technology
Kluyverweg 1, 2629HS,
Delft, The Netherlands
Boyang Chen
Aerospace Structure and Materials
Delft University of Technology
Kluyverweg 1, 2629HS,
Delft, The Netherlands
Matthias Möller
Applied Mathematics
Delft University of Technology
Mekelweg 4, 2628CD,
Delft, The Netherlands
Marc Gerritsma
Flow Physics and Technology
Delft University of Technology
Kluyverweg 1, 2629HS,
Delft, The Netherlands
Roeland De Breuker
Aerospace Structure and Materials
Delft University of Technology
Kluyverweg 1, 2629HS,
Delft, The Netherlands
Corresponding author. Email: b.chen-2@tudelft.nl
Abstract
Modeling open hole failure of composites is a complex task, consisting of a highly nonlinear response with interacting failure modes. Numerical modeling of this phenomenon has traditionally been based on the finite element method, but requires a tradeoff between high fidelity and computational cost. To mitigate this shortcoming, recent work has leveraged machine learning to predict the strength of open hole composite specimens. Here, we also propose using data-based models, but to tackle open hole composite failure from a classification point of view. More specifically, we show how to train surrogate models to learn the ultimate failure envelope of an open hole composite plate under in-plane loading. To achieve this, we solve the classification problem via support vector machine (SVM) and test different classifiers by changing the SVM kernel function. The flexibility of kernel-based SVM also allows us to integrate the recently developed quantum kernels in our algorithm and compare them with the standard radial basis function (RBF) kernel. Finally, thanks to kernel-target alignment optimization, we tune the free parameters of all kernels to best separate safe and failure-inducing loading states. The results show classification accuracies higher than 90% for RBF, especially after alignment, followed closely by the quantum kernel classifiers.
Keywords: Composites · Support Vector Machines · Quantum Machine Learning
1 Introduction
The modern aviation industry makes wide use of composite materials, thanks to their light weight and favorable mechanical properties. Aeronautical structural elements are often not textbook flat composite panels, but tailored components with complex mechanical responses. For instance, composite panels frequently feature cutouts to allow fastening, to lighten the structure, or to allow the passage of wiring or cables. However, the presence of holes in a composite plate induces stress concentrations that can initiate damage, which can propagate into intricate failure mechanisms involving different modes.
Models for open hole composite failure have developed in different directions. On the one hand, semi-empirical models were proposed to predict the allowables of these structures, such as ultimate strength, and their statistical distribution with respect to hole geometry, loading conditions, stacking sequence, ply thickness, etc. Early attempts required experimental properties from testing both the unnotched and notched laminate [1], while later models removed the need to directly test the open hole laminate [2, 3] or just required the ply properties [4]. Despite being fast to evaluate and suitable for preliminary design, semi-empirical models can make large errors when extensive delaminations propagate from the notch, as happens with ply-scaled laminates.
Finite Element (FE) simulations allow for improved modeling of open hole laminate failure. Open hole tension (OHT) has been extensively studied numerically, both for capturing the in-plane [5] and thickness size effects [6, 7, 8] on the ultimate strength and for reproducing the different failure modes and their interactions [9, 10] with increasing detail. Furthermore, FE simulations have managed to predict open-hole compression (OHC) quite accurately, even though they still struggle to predict the precise kink band formation [11, 12, 13]. However, the accuracy offered by FE models generally comes at the price of high computational costs, possibly making them unfeasible when many design iterations are required.
Therefore, there is a practical need for computationally efficient yet accurate models that can simulate open hole composite laminates. A possibility is offered by machine learning surrogates, which have been employed in composite design and optimization [14, 15, 16], constitutive law modeling and multiscale analyses (see [17] for a comprehensive review) and damage characterisation [18, 19]. Concerning open-hole composite failure, Furtado et al. proposed a methodology to define allowables using four different machine learning models [20]. Their methodology was applied to open-hole tensile strength prediction for different dimensions, layups and material properties. While their methods were demonstrated on data generated analytically [4], the authors suggest using high fidelity finite element analyses for training, potentially providing accurate data-based models.
Similarly, in this work we propose a machine learning surrogate for open hole composites, which is accurate and efficient in inference. Differently from [20], however, the approach we suggest is not to build a fast allowables generator, but a classifier for ultimate failure of open hole composite laminates. More precisely, our trained model takes a loading state as input, such as the far field homogenized plane strain components, and returns a binary label ($\pm 1$) as output, depending on whether the load applied is lower or higher than the notched laminate strength. In this sense, the surrogate acts as a data-based generalized failure criterion which predicts at the structural component level, rather than at the material level.
This paper also aims at comparing classical and quantum computation for a classification problem in composite mechanics. To do this, we train the machine learning surrogate using kernel-based support vector machines (SVMs) [21], where the kernel function can be computed both in classical and quantum logic. As will become clear in the next sections, quantum computation offers a way to encode information into exponentially large Hilbert spaces and to define an inner product in these spaces, effectively generating a kernel. This allows us to explore the generalization potential of quantum machine learning, while leaving the SVM optimization to well-established classical quadratic optimization algorithms.
The rest of the paper is structured as follows. Section 2 describes the machine learning problem, by defining the input, the data sampling strategy and the labeling criterion. Section 3 briefly introduces the SVM dual problem, the RBF kernel and the quantum kernels. More details about these methods are available in the appendices following the main body of the manuscript. Finally, Section 4 presents the classification results for all kernels and Section 5 outlines conclusions and future work.
All data and code used in this work are made publicly available (see [22], [23] respectively).
2 Machine learning problem
Our method was applied to predict failure of an open hole composite specimen similar in geometry and material properties to the one experimentally tested in [24]. The specimen was modeled and meshed with the Abaqus finite element code [25]; it was loaded with different combinations of axial and shear strains and constrained with periodic boundary conditions. All the details of the specimen properties and of the finite element analyses are left to Appendix A.
The inputs of our surrogate models are the homogenized far-field strains $\bar{\varepsilon}_{xx}$, $\bar{\varepsilon}_{yy}$, $\bar{\varepsilon}_{xy}$, which derive from enforcing periodic boundary conditions on opposite faces of the plate. The displacements of the left/right and top/bottom faces respectively can be linked through some reference degrees of freedom $\mathbf{u}^{\mathrm{ref},x}$ and $\mathbf{u}^{\mathrm{ref},y}$,

$$\mathbf{u}^{\mathrm{right}} - \mathbf{u}^{\mathrm{left}} = \mathbf{u}^{\mathrm{ref},x}, \qquad \mathbf{u}^{\mathrm{top}} - \mathbf{u}^{\mathrm{bottom}} = \mathbf{u}^{\mathrm{ref},y} \qquad (1)$$

where directions $x$ and $y$ are the horizontal and vertical directions in Figure 1. The homogenized strains are then obtained as

$$\bar{\varepsilon}_{xx} = \frac{u_x^{\mathrm{ref},x}}{L_x}, \qquad \bar{\varepsilon}_{yy} = \frac{u_y^{\mathrm{ref},y}}{L_y}, \qquad \bar{\varepsilon}_{xy} = \frac{1}{2}\left(\frac{u_y^{\mathrm{ref},x}}{L_x} + \frac{u_x^{\mathrm{ref},y}}{L_y}\right) \qquad (2)$$

where $L_x$ and $L_y$ are the planar dimensions of the plate.
As mentioned, the input space was sampled through nonlinear incremental-iterative finite element analyses. Figure 2 illustrates the sampling strategy used in this work in the simplified case of two-dimensional input. We refer to this technique as radial sampling, due to the fact that the design of experiments (DoE) does not directly prescribe all the points in input space, but only the ones on the boundary. All the intermediate points are instead generated internally by the FE solver, and they correspond to the homogenized strain values at every time increment (of course, the user maintains a certain control over the inner sample values through the choice of initial, minimum and maximum time steps). For this work, we chose the sampling space to be a hypercube, meaning that all three components of the applied strain vector have the same bounds.
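The boundary points of such a radial DoE can be generated by projecting random points onto the surface of the sampling hypercube; the interior samples then fall on the rays from the origin to each boundary point, at the time fractions chosen by the incremental solver. The sketch below is a plain NumPy illustration of this idea; the bound value and the fixed time fractions are hypothetical stand-ins, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
bound = 4000.0  # hypothetical strain bound (microstrain), not the paper's value

def radial_boundary_samples(n, bound, rng):
    """Sample DoE points on the surface of the cube [-bound, bound]^3.

    Each point defines a radial loading direction; the FE solver's
    incremental solution then yields the interior samples for free.
    """
    pts = rng.uniform(-bound, bound, size=(n, 3))
    # Project each point onto the cube surface by rescaling with its
    # largest absolute component.
    scale = bound / np.abs(pts).max(axis=1, keepdims=True)
    return pts * scale

boundary = radial_boundary_samples(50, bound, rng)

# Interior samples lie on the ray from the origin to each boundary point,
# here at fixed (pseudo-)time fractions standing in for solver increments.
time_fracs = np.linspace(0.1, 1.0, 10)
interior = boundary[:, None, :] * time_fracs[None, :, None]
```

Each of the 50 simulated analyses thus contributes 10 strain samples along its loading ray, which is what makes the radial strategy cheap relative to sampling the interior directly.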
2.1 Labeling criterion
Each strain sample was assigned a label based on an ultimate failure criterion. In particular, we defined failure as the loss of stiffness of the laminate beyond a user-defined threshold.
From the results of the FE analyses with periodic boundary conditions, one obtains the reaction forces $F_x^{\mathrm{ref},x}$, $F_y^{\mathrm{ref},y}$ and $F_y^{\mathrm{ref},x}$ conjugate to the reference degrees of freedom in Equation 1. These provide the homogenized stresses, which can then be derived via the Hill-Mandel principle of energy balance as

$$\bar{\sigma}_{xx} = \frac{F_x^{\mathrm{ref},x}}{L_y\, t}, \qquad \bar{\sigma}_{yy} = \frac{F_y^{\mathrm{ref},y}}{L_x\, t}, \qquad \bar{\sigma}_{xy} = \frac{F_y^{\mathrm{ref},x}}{L_y\, t} \qquad (3)$$

where $t$ is the thickness of the plate.
The laminate stiffness in the two axial directions and in shear can thus be defined at every time step as

$$K_i = \frac{\bar{\sigma}_i}{\bar{\varepsilon}_i}, \qquad i \in \{xx,\, yy,\, xy\} \qquad (4)$$

The stiffness degradation $d$ is defined as the minimum ratio between the instantaneous stiffness and the corresponding stiffness measure in the linear elastic region,

$$d = \min_{i \in \{xx,\, yy,\, xy\}} \frac{K_i}{K_i^{\mathrm{el}}} \qquad (5)$$
Therefore, given the total number of samples $N$, every sample $\mathbf{x}_i$ ($i = 1, \ldots, N$) is assigned a label $y_i = +1$ if $d \geq d_{\mathrm{th}}$, with $d_{\mathrm{th}}$ the user-defined threshold, and $y_i = -1$ otherwise.
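The labeling rule above can be sketched in a few lines of NumPy. This is a minimal interpretation for illustration, in which the stiffnesses are secant ratios of homogenized stress to strain and the elastic reference is simply the first increment of the analysis; the paper's exact implementation may differ.

```python
import numpy as np

def labels_from_histories(strain_hist, stress_hist, threshold=0.9):
    """Assign +1 (non-failed) / -1 (failed) labels to each increment.

    strain_hist, stress_hist: arrays of shape (T, 3) holding the
    homogenized strain and stress components at T increments of one
    radial analysis. The instantaneous secant stiffnesses are compared
    against their values in the (assumed linear) first increment.
    """
    eps = 1e-12                                   # guard against division by zero
    K = stress_hist / (strain_hist + eps)         # (T, 3) secant stiffnesses
    d = (K / K[0]).min(axis=1)                    # stiffness degradation
    return np.where(d >= threshold, 1, -1)
```

For a softening analysis, the early increments come out as +1 and the late ones, where any one stiffness has dropped by more than 10%, as -1.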
3 Methodology
As already mentioned, we solve the ultimate failure binary classification problem using the SVM algorithm [21]. This consists of the following quadratic optimization problem in dual form

$$\begin{aligned} \max_{\boldsymbol{\alpha}} \quad & \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j\, k(\mathbf{x}_i, \mathbf{x}_j) \\ \text{s.t.} \quad & 0 \leq \alpha_i \leq C, \quad i = 1, \ldots, N \\ & \sum_{i=1}^{N} \alpha_i y_i = 0 \end{aligned} \qquad (6)$$

where $y_i \in \{+1, -1\}$ are the labels, respectively non-failed and failed, $\alpha_i$ are the Lagrange multipliers and $C$ is the slack penalty. The kernel function $k(\mathbf{x}_i, \mathbf{x}_j)$ is a similarity metric between two samples in a higher-dimensional feature space. More details on the SVM algorithm are left to Appendix B.
The performance of the dual SVM depends on the choice of its hyperparameters, namely the kernel function and the slack penalty $C$. To restrict the search space, the kernel function is generally parametrized via one or more parameters $\boldsymbol{\theta}$, and the standard practice is to perform a grid-search cross-validation procedure in the $(\boldsymbol{\theta}, C)$ space. In this work, we use instead a mixed procedure, where the kernel function is determined by optimizing the kernel-target alignment (KTA) [26] and the slack penalty is found by grid-search cross-validation. The overall methodology is illustrated in Figure 3, where we refer to the two steps as kernel training and SVM selection. Once the SVM has been fully determined, it can be trained by solving Equation 6 and its learning ability can be measured as the accuracy on unseen test data, for different training dataset sizes.
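The two-step procedure can be sketched with Scikit-Learn's precomputed-kernel interface, which is convenient when the kernel is evaluated outside the SVM (classically or on a quantum simulator). The data, the fixed $\gamma$, and the $C$ grid below are synthetic stand-ins, not the paper's values.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 3))             # stand-in for strain samples
y = np.where(X.sum(axis=1) > 0, 1, -1)   # stand-in for failure labels

def rbf_gram(XA, XB, gamma):
    """Gram matrix of the RBF kernel between two sample sets."""
    sq = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Step 1 (kernel training) would fix gamma by KTA maximization;
# here we simply assume a value.
gamma = 0.5
K = rbf_gram(X, X, gamma)

# Step 2 (SVM selection): pick the slack penalty C by grid-search
# cross-validation on the precomputed Gram matrix.
search = GridSearchCV(SVC(kernel="precomputed"), {"C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(K, y)
best_C = search.best_params_["C"]
```

At inference time, the classifier only needs the rectangular Gram matrix between test and training samples, `rbf_gram(X_test, X, gamma)`.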
We compare one classical and two quantum kernels. The classical kernel is the radial basis function (RBF) kernel, defined as

$$k_{\mathrm{RBF}}(\mathbf{x}_1, \mathbf{x}_2) = \exp\!\left(-\gamma \lVert \mathbf{x}_1 - \mathbf{x}_2 \rVert^2\right) \qquad (7)$$

RBF is a powerful kernel which corresponds to a feature map into an infinite-dimensional feature space [27]. It induces a Gaussian similarity function, whose width is controlled by the hyperparameter $\gamma$.
On the other hand, the quantum kernel is defined via a quantum embedding, which is constructed via data-dependent unitary transformations that prepare the quantum state

$$|\phi(\mathbf{x})\rangle = U(\mathbf{x})\, |0\rangle^{\otimes n} \qquad (8)$$
Given two samples $\mathbf{x}_1$ and $\mathbf{x}_2$, the quantum kernel is simply the squared overlap

$$k_q(\mathbf{x}_1, \mathbf{x}_2) = \left|\langle \phi(\mathbf{x}_2) | \phi(\mathbf{x}_1) \rangle\right|^2 \qquad (9)$$
Figure 4 shows the generic quantum embedding and the two specific ones used in this work, namely the hardware efficient embedding (HE2) [28] and the instantaneous quantum polynomial (IQP) [29] one. To obtain a more expressive feature map, either the width or the depth of the quantum embedding can be increased. The width is the number of qubits, which can even exceed the number of features in the dataset by cyclically re-encoding the features, generating a highly nonlinear and potentially better separable feature space. Meanwhile, the embedding's depth can be increased by repeating a base data-encoding block, such as IQP or HE2. In this case too, re-encoding of the features may lead to a higher expressivity of the overall feature map [30]. For a short summary of relevant quantum computing concepts, we refer the reader to Appendix C.
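For a few qubits, a quantum kernel of this type can be simulated classically with a statevector. The sketch below uses plain NumPy and a deliberately simple product embedding of one RY rotation per feature — a stand-in for illustration, not the HE2 or IQP circuits used in the paper — to evaluate the squared overlap of Equation 9.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def embed(x):
    """Toy product embedding U(x) = RY(x_0) (x) RY(x_1) (x) ...
    One qubit per feature and no entangling layer; a stand-in for the
    HE2/IQP embeddings of the paper."""
    U = np.array([[1.0]])
    for xi in x:
        U = np.kron(U, ry(xi))
    return U

def quantum_kernel(x1, x2):
    """k(x1, x2) = |<0...0| U(x2)^dag U(x1) |0...0>|^2."""
    n = len(x1)
    zero = np.zeros(2 ** n)
    zero[0] = 1.0  # |0...0> statevector
    amp = zero @ embed(x2).conj().T @ embed(x1) @ zero
    return abs(amp) ** 2
```

By construction the kernel is symmetric, bounded in [0, 1], and equal to 1 on the diagonal, which are the sanity checks one would also run against a PennyLane implementation.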
4 Results
We tested our machine learning models on a dataset of 1960 labelled strain vectors, which we obtained by uniformly sampling the homogenized strain/stress pairs from the FE simulations of the open-hole composite specimen. The input homogenized strains in both normal and shear directions were varied between the same lower and upper bounds in microstrains, and a stiffness degradation threshold of 0.9 was used to discriminate non-failed and failed loading states.
Both classical- and quantum-kernel SVMs were implemented using different Python libraries. We used PyTorch for training the RBF kernel and PennyLane for the quantum kernels. These libraries implement automatic differentiation (AD), which allows the KTA to be optimized with gradient-based methods. We also used JAX together with PennyLane to just-in-time compile the quantum kernel functions. Concerning the classification problem, we employed the SVM and grid-search cross-validation routines available from the Scikit-Learn Python package.
The KTA of both the RBF and quantum kernels was maximized using stochastic gradient descent with Adam parameter updates [31]. Figure 5 shows the kernel alignment training of the RBF kernel. Figure 6 presents instead the KTAs before and after training for nine different quantum kernels with HE2 embedding. It can be seen that increasing the width and depth of these kernels generally improves their KTA. A higher number of qubits means that the strain features are mapped into a higher-dimensional space, which can favor separability of the classes. On the other hand, increasing the depth benefits the kernel alignment, since it results in more expressive feature maps. Also, every additional layer of the HE2 embedding doubles the number of free parameters, explaining why optimization of deeper kernels mostly leads to higher gains in KTA. However, the advantage of increasing these quantum encoding resources does not scale uniformly. Already with 6 qubits and 3 HE2 layers, the optimization only modestly improves the KTA, likely due to the vanishing KTA gradients [32].
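The alignment objective itself is cheap to compute. Since the RBF kernel has a single free parameter, a coarse scan over $\gamma$ already illustrates kernel training without automatic differentiation (the paper uses Adam instead); the dataset below is a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))
y = np.where(np.linalg.norm(X, axis=1) < 1.5, 1.0, -1.0)

def kta_rbf(gamma, X, y):
    """Kernel-target alignment of the RBF kernel with target y y^T."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    T = np.outer(y, y)  # target kernel matrix
    return (K * T).sum() / (np.linalg.norm(K) * np.linalg.norm(T))

# Coarse logarithmic scan standing in for the paper's Adam updates:
# pick the gamma whose kernel best aligns with the labels.
gammas = np.logspace(-2, 2, 30)
ktas = [kta_rbf(g, X, y) for g in gammas]
best_gamma = gammas[int(np.argmax(ktas))]
```

The selected `best_gamma` would then be fixed, and only the slack penalty $C$ left to the grid-search cross-validation stage.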
To find the hyperparameter $C$ that guarantees the highest off-training accuracy of the SVM algorithm, we used grid-search cross-validation for the kernels considered. The validation accuracy values are reported in Figure 7 for multiple values of $\gamma$ and $C$. We observe that the better-aligned kernels achieve the highest scores, with the highest-KTA kernel scoring first for the whole range of $C$ values. Furthermore, the accuracy of the maximally-aligned RBF kernel increases monotonically with $C$, which suggests not only the usefulness of maximizing the KTA, but also that the class boundary in this feature space is densely populated and still requires a tight margin.
The same analysis was performed for all the quantum kernels considered, where we wanted to take into account the effect on accuracy of the different embeddings and of maximizing the kernel-target alignment. The results are reported in Figure 8, which shows accuracies roughly between 67% and 87% for all embeddings with different values of $C$. Except for the IQP case, increasing $C$ leads to higher accuracies, hinting at the need of a tight margin when mapping with these embeddings, similar to the RBF kernel. Unfortunately, for the largest values of $C$, the optimization of the dual SVM failed to converge for the quantum kernels, likely due to numerical ill-conditioning, presumably preventing them from reaching higher accuracies. In fact, we observe that the accuracies of both untrained and trained HE2 kernels monotonically increase with $C$. For the trained HE2 case, this is true regardless of the number of qubits and depth. Furthermore, Figure 8 also shows that increasing the embedding resources, especially the number of qubits, pays off more when also optimizing the KTA.
Classical and quantum kernels are finally compared in Figure 9, which shows how 5 different models classify a test set of strain loading data when fitted on progressively larger training sets. A similar comparison on additional classification metrics can be found in Appendix D. The RBF kernel achieves 80% accuracy with just 10% of the total training set size, and it reaches over 90% with just half the training points. In comparison, all quantum kernel classifiers are at least 5% less accurate than the best RBF-kernel SVM. However, especially for the HE2 embeddings, the scores are similar to the RBF case, suggesting that the RBF and HE2 kernels separate the non-failed and failed classes to a similar extent. Changing the embedding from HE2 to IQP, there is a drop in accuracy for small training set sizes, while the performance is similar when more than half the training set is used. On the other hand, the effect of training the kernel is less visible at this stage, reflecting the fact that the accuracies obtained during grid-search cross-validation are alike for untrained and trained HE2.
5 Conclusion
In this paper, we proposed a methodology to build a binary classifier from finite element analyses data for the particular case of an open hole composite specimen. We studied the case of in-plane strain loading of the specimen where the objective is to correctly label strain combinations that lead to ultimate failure.
From a design of experiments point of view, we demonstrated a radial sampling technique, where the choice of which simulations to run to cover the input space takes into account the incremental-iterative nature of the nonlinear FE method. We then proposed a labelling criterion for homogenized strain-stress pairs based on residual in-plane stiffness.
For classification of the labelled data, we used kernel-based SVMs, which also allowed us to compare the performance of the recently proposed quantum kernels against the more traditional RBF. Furthermore, we employed kernel-target alignment to improve class separability of both the RBF and the HE2 embedding kernels.
For all the kernels examined, the corresponding SVMs separate non-failed and failed loading states with good accuracy. The RBF-based model classifies more accurately than its quantum counterparts, although this likely happens due to numerical ill-conditioning in the current quantum SVM implementation. These numerical issues can likely be fixed by studying the dual SVM problem for the problematic instances, which will be the subject of future work.
Regarding kernel alignment, optimizing the KTA is shown to be powerful for RBF, since the SVM for the trained kernel outperforms the other RBF-based models in terms of accuracy. Aligning quantum kernels for this dataset also helps them to better separate the two classes, but for simple architectures the improvement is moderate, while more complex embeddings only reach the scores of the simpler ones after they have been aligned. Furthermore, one should remember that optimizing quantum kernels is almost always more computationally involved than optimizing RBF, as the former can have highly parametrized embeddings, while RBF is completely defined by the single parameter $\gamma$.
Extensions of this work can go in many directions. From the point of view of the problem, it would be interesting to increase the number of degrees of freedom, by allowing the notch radius or the lamination sequence to also change. The latter could be written in terms of lamination parameters [33] to obtain a continuous representation.
In terms of algorithms, both classical and quantum kernels can be explored further. RBF is the most popular choice for classical kernels, but certainly not the only one. Due to Mercer’s condition, any function which defines a positive semi-definite kernel matrix is a valid kernel function [27]. Obviously, the design space is vast, but automated procedures help reduce the search for instance by exploring combinations of only a fixed set of standard kernel functions.
On the other hand, the freedom in designing and parametrizing the quantum embedding circuit also makes the choice of a quantum kernel nontrivial. Within the limits of classical simulation of quantum circuits, one could experiment with an increasing number of qubits or different layering strategies, for instance the one proposed in [34] for the task of satellite image classification. From an optimization point of view, a recent technique has been proposed to maximize the quantum kernel-target alignment and solve the SVM in a single optimization loop [35], which would of course greatly reduce the computational cost. Nevertheless, to truly understand the potential competitiveness of quantum kernels, it is probably most important to remove layers of simulation and study the effects of statistical and hardware noise on SVM convergence and accuracy.
Appendix A Open hole specimen features and finite element model details
A.1 Geometry and material properties
The plate's hole has a 6 mm diameter and the in-plane dimensions are both 5 times the hole diameter. The ply material is IM7/8552 prepreg (carbon fibres and epoxy matrix) and each ply has a 0.125 mm thickness. We considered a lamination sequence with a total of 8 plies and 1 mm laminate thickness.
A.2 Details of the FE models
All finite element models were built using the Abaqus finite element code [25], and Python scripting was used to automatically generate a different FE model for each of the strain loading combinations [36, 37].
The meshed part is illustrated in Figure 10, which shows that a radial mesh was obtained by seeding the hole edge 4 times as finely as the outer edges. Since no delaminations were expected due to the absence of ply blocks, the elements were chosen to be S4 shell elements of the Abaqus Standard Element Library [38], whose in-plane and bending behaviour is described by the classical lamination theory (CLT), once the stacking sequence and ply thicknesses are specified.
Damage initiation was modeled with the Hashin criterion, while damage evolution was represented in a smeared crack fashion. For this purpose, the cohesive law available in Abaqus [38] was employed to model the stiffness degradation due to matrix and fiber tensile and compressive failure.
Appendix B Support vector machines, kernel methods and kernel-target alignment
B.1 Primal SVM
The SVM is the linear decision model

$$f(\mathbf{x}) = \mathbf{w} \cdot \mathbf{x} + b \qquad (10)$$

which assigns labels through the sign function

$$\hat{y} = \operatorname{sign}\!\left(f(\mathbf{x})\right) \qquad (11)$$

In Equations 10 and 11, $\mathbf{w}$ is the vector normal to the decision hyperplane and $b$ is the intercept.
The optimal hyperplane is found by maximizing the geometric margin of the dataset, which can be proved to be

$$\rho = \frac{2}{\lVert \mathbf{w} \rVert} \qquad (12)$$
By minimizing the squared norm $\lVert \mathbf{w} \rVert^2$, one obtains the primal optimization problem of the SVM,

$$\begin{aligned} \min_{\mathbf{w},\, b} \quad & \frac{1}{2} \lVert \mathbf{w} \rVert^2 \\ \text{s.t.} \quad & y_i \left(\mathbf{w} \cdot \mathbf{x}_i + b\right) \geq 1, \quad i = 1, \ldots, N \end{aligned} \qquad (13)$$

where $i$ identifies the sample and $N$ is the total number of training samples.
Equation 13 enforces exact separability, which can lead to overfitting. A way to improve generalization is the so-called soft-margin SVM, which modifies Equation 13 by introducing the slack variables $\xi_i$ and the penalty constant $C$,

$$\begin{aligned} \min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \quad & \frac{1}{2} \lVert \mathbf{w} \rVert^2 + C \sum_{i=1}^{N} \xi_i \\ \text{s.t.} \quad & y_i \left(\mathbf{w} \cdot \mathbf{x}_i + b\right) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad i = 1, \ldots, N \end{aligned} \qquad (14)$$
B.2 Dual SVM and kernels
By introducing the Lagrange multipliers $\alpha_i \geq 0$ and $\mu_i \geq 0$, one can write the Lagrangian of the SVM optimization problem,

$$\mathcal{L} = \frac{1}{2} \lVert \mathbf{w} \rVert^2 + C \sum_{i=1}^{N} \xi_i - \sum_{i=1}^{N} \alpha_i \left[ y_i \left(\mathbf{w} \cdot \mathbf{x}_i + b\right) - 1 + \xi_i \right] - \sum_{i=1}^{N} \mu_i \xi_i \qquad (15)$$

The dual soft-margin SVM is obtained by setting all the derivatives of the Lagrangian in Equation 15 equal to zero,

$$\begin{aligned} \max_{\boldsymbol{\alpha}} \quad & \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j\, \mathbf{x}_i \cdot \mathbf{x}_j \\ \text{s.t.} \quad & 0 \leq \alpha_i \leq C, \quad i = 1, \ldots, N \\ & \sum_{i=1}^{N} \alpha_i y_i = 0 \end{aligned} \qquad (16)$$
Equation 16 is still a linear model in the original feature space. However, by introducing a feature map

$$\phi: \mathbb{R}^d \to \mathcal{F}, \qquad \mathbf{x} \mapsto \phi(\mathbf{x}) \qquad (17)$$

we can map the features nonlinearly and potentially to a manifold where they are more easily separable. Furthermore, replacing $\mathbf{x}$ with $\phi(\mathbf{x})$ in Equation 16, we see that the mapped features only appear in the inner product

$$k(\mathbf{x}_1, \mathbf{x}_2) = \langle \phi(\mathbf{x}_1), \phi(\mathbf{x}_2) \rangle \qquad (18)$$

which is known as the kernel of the feature map. The advantage of having only inner products of features (the kernel trick) is the possibility of classifying in nonlinear feature spaces without having to compute the feature map explicitly.
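The kernel trick can be verified numerically: for the degree-2 polynomial kernel in two dimensions, the explicit 6-dimensional feature map yields exactly the same value as the kernel evaluated in the original input space, without ever forming the mapped vectors when only the kernel is needed.

```python
import numpy as np

def poly_kernel(x, z):
    """Degree-2 polynomial kernel k(x, z) = (x . z + 1)^2."""
    return (x @ z + 1.0) ** 2

def phi(x):
    """Explicit feature map of the degree-2 polynomial kernel in 2D:
    (x . z + 1)^2 = <phi(x), phi(z)> with the 6 features below."""
    x1, x2 = x
    r2 = np.sqrt(2.0)
    return np.array([1.0, r2 * x1, r2 * x2, x1 ** 2, x2 ** 2, r2 * x1 * x2])

x = np.array([0.5, -1.0])
z = np.array([2.0, 0.3])
# The 6-dimensional inner product matches the 2-dimensional kernel value.
assert np.isclose(phi(x) @ phi(z), poly_kernel(x, z))
```

For the RBF kernel the corresponding feature space is infinite-dimensional, so the explicit map is never available and the trick is not merely a convenience but a necessity.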
The kernels most used in machine learning are the polynomial, Gaussian and sigmoid kernels,

$$k_{\mathrm{poly}}(\mathbf{x}_1, \mathbf{x}_2) = \left(\mathbf{x}_1 \cdot \mathbf{x}_2 + c\right)^p, \qquad k_{\mathrm{RBF}}(\mathbf{x}_1, \mathbf{x}_2) = \exp\!\left(-\gamma \lVert \mathbf{x}_1 - \mathbf{x}_2 \rVert^2\right), \qquad k_{\mathrm{sig}}(\mathbf{x}_1, \mathbf{x}_2) = \tanh\!\left(\kappa\, \mathbf{x}_1 \cdot \mathbf{x}_2 + c\right) \qquad (19)$$
B.3 Kernel-target alignment
The alignment between two kernels is defined as

$$A(K_1, K_2) = \frac{\langle K_1, K_2 \rangle_F}{\sqrt{\langle K_1, K_1 \rangle_F\, \langle K_2, K_2 \rangle_F}} \qquad (20)$$

where $K$ is the kernel matrix, obtained by evaluating the kernel on all pairs of samples, and $\langle \cdot, \cdot \rangle_F$ denotes the Frobenius inner product. The alignment between two kernels is always less than or equal to 1, where 1 corresponds to perfect alignment.
Assume a kernel $k_{\boldsymbol{\theta}}$, parametrized by $\boldsymbol{\theta}$, and define the target kernel matrix as

$$K^* = \mathbf{y}\, \mathbf{y}^T \qquad (21)$$

The kernel-target alignment (KTA) of $k_{\boldsymbol{\theta}}$ is the alignment between the chosen kernel and the target,

$$\mathrm{KTA}(\boldsymbol{\theta}) = A(K_{\boldsymbol{\theta}}, K^*) = \frac{\langle K_{\boldsymbol{\theta}}, \mathbf{y}\, \mathbf{y}^T \rangle_F}{N \sqrt{\langle K_{\boldsymbol{\theta}}, K_{\boldsymbol{\theta}} \rangle_F}} \qquad (22)$$

where $K_{\boldsymbol{\theta}}$ is the kernel matrix of $k_{\boldsymbol{\theta}}$.
The KTA enjoys theoretical properties such as concentration around its expected value and generalisation [26] and therefore it is indicative of the ability of a kernel to separate classes of data.
Appendix C Quantum computing notions
C.1 Quantum states
The basic logical unit in quantum computing is the qubit. Mathematically speaking, this is a unit-norm vector in the complex 2-dimensional space, defined as a linear combination of two orthogonal basis states, $|0\rangle$ and $|1\rangle$, i.e.

$$|\psi\rangle = \alpha\, |0\rangle + \beta\, |1\rangle, \qquad \alpha, \beta \in \mathbb{C} \qquad (23)$$

where the notation $|\cdot\rangle$ is used to indicate unit vectors.

As opposed to classical bits, Equation 23 shows that a single qubit can be in any complex superposition of the two basis states. However, reading a quantum state can only happen through a measurement, which makes the qubit collapse to one of the two basis states, $|0\rangle$ or $|1\rangle$. More specifically, the qubit is measured as $|0\rangle$ with probability $|\alpha|^2$ and as $|1\rangle$ with probability $|\beta|^2$. Since these are the only two possible outcomes, it must be that $|\alpha|^2 + |\beta|^2 = 1$, which explains the unit norm of the qubit.
Similarly, a state of $n$ qubits is defined as a superposition of the $2^n$ basis states that correspond to the $n$-bit bitstrings, that is

$$|\psi\rangle = \sum_{i=0}^{2^n - 1} c_i\, |i\rangle \qquad (24)$$

where $c_i \in \mathbb{C}$.
The exponential relation between the number of qubits and the number of possible bitstrings speaks for the potential advantage of quantum superposition, which allows multiple classical information states to be processed simultaneously through a quantum algorithm. Quantum superposition is at the heart of fundamental algorithms with proven complexity improvements, such as quantum integer factoring [39] and quantum database search [40].
Nevertheless, the quantum state is inaccessible as readable information, and measurement will collapse the wavefunction to only one of the basis states. Similarly to the single-qubit case, the basis state $|i\rangle$ has probability $|c_i|^2$ of being measured and

$$\sum_{i=0}^{2^n - 1} |c_i|^2 = 1 \qquad (25)$$
Quantum states can be prepared by applying unitary transformations to a reference state, such as the all-zero state,

$$|\psi\rangle = U\, |0\rangle^{\otimes n} \qquad (26)$$

where $U$ is the generic unitary transformation.
Figure 11 shows a unitary operation as a quantum circuit, i.e. a sequence of single- and two-qubit operations. Here, Hadamard gates are first applied to every qubit, where

$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \qquad (27)$$

This first layer of Hadamard gates creates the uniform superposition state

$$H^{\otimes n}\, |0\rangle^{\otimes n} = \frac{1}{\sqrt{2^n}} \sum_{i=0}^{2^n - 1} |i\rangle \qquad (28)$$

where each basis state can be sampled with the equal probability $1/2^n$. This is often the starting state in many quantum algorithms.
Next, CNOT gates act between neighbouring couples of qubits as

$$\mathrm{CNOT}\, |c\rangle |t\rangle = |c\rangle\, |t \oplus c\rangle, \qquad \mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} \qquad (29)$$

CNOT gates are used to set qubits in an entangled state, a condition in which any operation on any of the qubits also affects the rest of the state. In particular, the series of CNOT gates creates one of the maximally entangled Greenberger-Horne-Zeilinger (GHZ) states [41], specifically

$$|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle^{\otimes n} + |1\rangle^{\otimes n} \right) \qquad (30)$$
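The circuit described above can be reproduced with a small statevector simulation. The sketch below builds the 3-qubit GHZ state with plain NumPy Kronecker products, applying a Hadamard to the first qubit and then chaining CNOTs down the register.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# 3-qubit circuit: H on qubit 0, then CNOT(0 -> 1), then CNOT(1 -> 2).
# Basis ordering: index i encodes the bitstring |q0 q1 q2>.
state = np.zeros(8)
state[0] = 1.0                                  # |000>
state = np.kron(H, np.kron(I2, I2)) @ state     # (|000> + |100>)/sqrt(2)
state = np.kron(CNOT, I2) @ state               # (|000> + |110>)/sqrt(2)
state = np.kron(I2, CNOT) @ state               # (|000> + |111>)/sqrt(2)
```

Only the amplitudes of |000> and |111> are nonzero at the end, each equal to 1/sqrt(2), which is precisely the GHZ state of Equation 30 for n = 3.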
C.2 Quantum embedding
State preparation can be used to embed classical data into quantum states, by mapping the features to a unitary transformation,

$$\mathbf{x} \mapsto U(\mathbf{x}) \qquad (31)$$
A complete review of the different types of quantum embeddings is beyond the current scope, and the interested reader is pointed to [42] for a critical overview.
C.3 Quantum kernels
Quantum embeddings are effectively feature maps into the Hilbert space $\mathcal{H}$. The kernel associated with an embedding computes the overlap between quantum feature vectors in $\mathcal{H}$, that is

$$k_q(\mathbf{x}_1, \mathbf{x}_2) = \left|\langle \phi(\mathbf{x}_2) | \phi(\mathbf{x}_1) \rangle\right|^2 \qquad (32)$$

where the braket notation indicates the inner product between two vectors in $\mathcal{H}$.

By introducing Equation 31 in Equation 32, the quantum kernel can be rewritten as

$$k_q(\mathbf{x}_1, \mathbf{x}_2) = \left| \langle 0|^{\otimes n}\, U^\dagger(\mathbf{x}_2)\, U(\mathbf{x}_1)\, |0\rangle^{\otimes n} \right|^2 \qquad (33)$$

which shows that the quantum kernel can be computed as the probability of measuring the all-zeros state after applying the direct embedding for $\mathbf{x}_1$ and the reversed embedding for $\mathbf{x}_2$.
Appendix D Classical and quantum SVM comparison on different classification metrics
RBF kernel

| Training set size | Accuracy | Jaccard Index | Precision | Recall | Specificity |
|---|---|---|---|---|---|
| 156 | 0.694 | 0.627 | 0.784 | 0.759 | 0.795 |
| 313 | 0.750 | 0.708 | 0.835 | 0.824 | 0.838 |
| 470 | 0.788 | 0.751 | 0.886 | 0.832 | 0.895 |
| 627 | 0.788 | 0.750 | 0.893 | 0.825 | 0.903 |
| 784 | 0.838 | 0.813 | 0.939 | 0.859 | 0.944 |
| 940 | 0.824 | 0.797 | 0.915 | 0.861 | 0.921 |
| 1097 | 0.848 | 0.827 | 0.938 | 0.874 | 0.943 |
| 1254 | 0.878 | 0.866 | 0.943 | 0.913 | 0.945 |
| 1411 | 0.873 | 0.860 | 0.937 | 0.912 | 0.939 |
| 1568 | 0.882 | 0.869 | 0.955 | 0.906 | 0.958 |

HE2W6D3 kernel

| Training set size | Accuracy | Jaccard Index | Precision | Recall | Specificity |
|---|---|---|---|---|---|
| 156 | 0.699 | 0.626 | 0.804 | 0.739 | 0.823 |
| 313 | 0.731 | 0.675 | 0.835 | 0.780 | 0.846 |
| 470 | 0.756 | 0.708 | 0.858 | 0.802 | 0.869 |
| 627 | 0.779 | 0.741 | 0.878 | 0.826 | 0.887 |
| 784 | 0.795 | 0.762 | 0.892 | 0.839 | 0.900 |
| 940 | 0.794 | 0.757 | 0.897 | 0.829 | 0.907 |
| 1097 | 0.805 | 0.773 | 0.902 | 0.844 | 0.910 |
| 1254 | 0.797 | 0.765 | 0.884 | 0.849 | 0.891 |
| 1411 | 0.814 | 0.785 | 0.911 | 0.851 | 0.917 |
| 1568 | 0.818 | 0.790 | 0.912 | 0.855 | 0.919 |
References
- [1]J.M. Whitney and R.J. Nuismer.Stress fracture criteria for laminated composites containing stress concentrations.Journal of Composite Materials, 8(3):253–265, July 1974.
- [2] P.P. Camanho, G.H. Erçin, G. Catalanotti, S. Mahdi, and P. Linde. A finite fracture mechanics model for the prediction of the open-hole strength of composite laminates. Composites Part A: Applied Science and Manufacturing, 43(8):1219–1225, August 2012.
- [3] Giuseppe Catalanotti, Rui M. Salgado, and Pedro P. Camanho. On the stress intensity factor of cracks emanating from circular and elliptical holes in orthotropic plates. Engineering Fracture Mechanics, 252:107805, July 2021.
- [4] C. Furtado, A. Arteiro, M.A. Bessa, B.L. Wardle, and P.P. Camanho. Prediction of size effects in open-hole laminates using only the Young's modulus, the strength, and the R-curve of the 0° ply. Composites Part A: Applied Science and Manufacturing, 101:306–317, 2017.
- [5] P.P. Camanho, P. Maimí, and C.G. Dávila. Prediction of size effects in notched laminates using continuum damage mechanics. Composites Science and Technology, 67(13):2715–2727, October 2007.
- [6] S.R. Hallett, B.G. Green, W.G. Jiang, and M.R. Wisnom. An experimental and numerical investigation into the damage mechanisms in notched composites. Composites Part A: Applied Science and Manufacturing, 40(5):613–624, May 2009.
- [7] F.P. van der Meer, L.J. Sluys, S.R. Hallett, and M.R. Wisnom. Computational modeling of complex failure mechanisms in laminates. Journal of Composite Materials, 46(5):603–623, September 2011.
- [8] B.Y. Chen, T.E. Tay, P.M. Baiz, and S.T. Pinho. Numerical analysis of size effects on open-hole tensile composite laminates. Composites Part A: Applied Science and Manufacturing, 47:52–62, April 2013.
- [9] F.P. van der Meer, C. Oliver, and L.J. Sluys. Computational analysis of progressive failure in a notched laminate including shear nonlinearity and fiber failure. Composites Science and Technology, 70(4):692–700, April 2010.
- [10] B.Y. Chen, T.E. Tay, S.T. Pinho, and V.B.C. Tan. Modelling the tensile failure of composites with the floating node method. Computer Methods in Applied Mechanics and Engineering, 308:414–442, August 2016.
- [11] C. Soutis, N.A. Fleck, and P.A. Smith. Failure prediction technique for compression loaded carbon fibre-epoxy laminate with open holes. Journal of Composite Materials, 25(11):1476–1498, November 1991.
- [12] Z.C. Su, T.E. Tay, M. Ridha, and B.Y. Chen. Progressive damage modeling of open-hole composite laminates under compression. Composite Structures, 122:507–517, 2015.
- [13] R. Higuchi, S. Warabi, A. Yoshimura, T. Nagashima, T. Yokozeki, and T. Okabe. Experimental and numerical study on progressive damage and failure in composite laminates during open-hole compression tests. Composites Part A: Applied Science and Manufacturing, 145:106300, 2021.
- [14] C. Bisagni and L. Lanzi. Post-buckling optimisation of composite stiffened panels using neural networks. Composite Structures, 58(2):237–247, November 2002.
- [15] M.A. Bessa and S. Pellegrino. Design of ultra-thin shell structures in the stochastic post-buckling range using Bayesian machine learning and optimization. International Journal of Solids and Structures, 139-140:174–188, May 2018.
- [16] Zilan Zhang, Zhizhou Zhang, Francesco Di Caprio, and Grace X. Gu. Machine learning for accelerating the design process of double-double composite structures. Composite Structures, 285:115233, April 2022.
- [17] Xin Liu, Su Tian, Fei Tao, and Wenbin Yu. A review of artificial neural networks in the constitutive modeling of composite materials. Composites Part B: Engineering, 224:109152, November 2021.
- [18] Navid Zobeiry, Johannes Reiner, and Reza Vaziri. Theory-guided machine learning for damage characterization of composites. Composite Structures, 246:112407, August 2020.
- [19] Johannes Reiner, Reza Vaziri, and Navid Zobeiry. Machine learning assisted characterisation and simulation of compressive damage in composite laminates. Composite Structures, 273:114290, October 2021.
- [20] C. Furtado, L.F. Pereira, R.P. Tavares, M. Salgado, F. Otero, G. Catalanotti, A. Arteiro, M.A. Bessa, and P.P. Camanho. A methodology to generate design allowables of composite laminates using machine learning. International Journal of Solids and Structures, 233:111095, December 2021.
- [21] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, September 1995.
- [22] Boyang Chen and Giorgio Tosti Balducci. Nonlinear responses of metals and composites. https://doi.org/10.5281/zenodo.7409612, December 2022.
- [23] Giorgio Tosti Balducci. oh-comp-kernels: A Python code for the prediction of the open-hole tensile strength of composite laminates using kernel methods. https://github.com/debrevitatevitae/oh-comp-kernels, 2023.
- [24] B.G. Green, M.R. Wisnom, and S.R. Hallett. An experimental investigation into the tensile strength scaling of notched composites. Composites Part A: Applied Science and Manufacturing, 38(3):867–878, March 2007.
- [25] Dassault Systèmes. ABAQUS Finite Element Analysis Software, 2022.
- [26] Tinghua Wang, Dongyan Zhao, and Shengfeng Tian. An overview of kernel alignment and its applications. Artificial Intelligence Review, 43(2):179–192, February 2015.
- [27] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels. Adaptive Computation and Machine Learning Series. MIT Press, London, England, June 2019.
- [28] Thomas Hubregtsen, David Wierichs, Elies Gil-Fuster, Peter-Jan H.S. Derks, Paul K. Faehrmann, and Johannes Jakob Meyer. Training quantum embedding kernels on near-term quantum computers. Physical Review A, 106(4):042431, October 2022.
- [29] Oleksandr Kyriienko and Einar B. Magnusson. Unsupervised quantum machine learning for fraud detection. August 2022.
- [30] Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power of variational quantum-machine-learning models. Physical Review A, 103(3):032430, March 2021.
- [31] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, December 2014.
- [32] Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9, December 2018.
- [33] Shahriar Setoodeh, Mostafa M. Abdalla, and Zafer Gürdal. Design of variable-stiffness laminates using lamination parameters. Composites Part B: Engineering, 37:301–309, June 2006.
- [34] Artur Miroszewski, Jakub Mielczarek, Grzegorz Czelusta, Filip Szczepanek, Bartosz Grabowski, Bertrand Le Saux, and Jakub Nalepa. Detecting clouds in multispectral satellite images using quantum-kernel support vector machines. arXiv:2302.08270, February 2023.
- [35] Gian Gentinetta, David Sutter, Christa Zoufal, Bryce Fuller, and Stefan Woerner. Quantum kernel alignment with stochastic gradient descent. In 2023 IEEE International Conference on Quantum Computing and Engineering (QCE), volume 01, pages 256–262, 2023.
- [36] Tom Gulikers. Computational framework to implement an artificial neural network based constitutive model in ABAQUS for mesh coarsening. https://github.com/tgulikers/ABAQUS_ANN_constitutive_model, 2018.
- [37] Boyang Chen. Training ANNs with FEM data on open hole composite plate - for summer school 2022 in Delft. https://github.com/BoyangChenFEM/Summer2022, 2022.
- [38] Dassault Systèmes. ABAQUS Analysis User Manual. Dassault Systèmes, Providence, RI, version 2022 edition, 2022.
- [39] P.W. Shor. Algorithms for quantum computation: discrete logarithms and factoring. In Proceedings 35th Annual Symposium on Foundations of Computer Science, SFCS-94. IEEE Computer Society Press, 1994.
- [40] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC '96, pages 212–219, New York, NY, USA, 1996. Association for Computing Machinery.
- [41] Daniel M. Greenberger, Michael A. Horne, and Anton Zeilinger. Going Beyond Bell's Theorem, pages 69–72. Springer Netherlands, Dordrecht, 1989.
- [42] Maria Schuld and Francesco Petruccione. Machine Learning with Quantum Computers. Springer International Publishing, Cham, Switzerland, 2021.