One-Page Review


1 Summary

This paper puts forward a general machine-learning-based topology optimization framework that greatly accelerates the design process for large-scale problems without sacrificing accuracy. Four design examples are used to demonstrate the effectiveness of the proposed framework.
Table 1. The objectives of the examples

Design Type — Aims
2D MBB beam design — Illustrate the proposed framework
Benchmark 3D cantilever beam design — Demonstrate the scalability and accuracy of the proposed framework for meshes of different sizes
3D MBB beam design — Study the impact of various choices of NB and Pdrop
3D bridge design — Showcase the potential to accurately capture non-designable regions and the influence of various choices of DNN hyper-parameters

2 Discussion
2.1 Strengths
1) The study proposed a machine learning approach for topology optimization that does not require a pre-collected dataset. The work adopts an online training and update strategy in which the history data of the topology optimization are used as training data. This is why the framework can be applied universally.
2) To ensure the scalability of the approach, the authors introduced a two-resolution setup consisting of a coarse-resolution mesh and a fine-resolution mesh. A void-sample dropout scheme is also devised to improve training efficiency and prediction accuracy. The proposed approach demonstrates a speedup of up to ∼10 times for design problems with roughly one million design variables.
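The online-training strategy in point 1) and the void-sample dropout in point 2) can be sketched together in a few lines. All names here (`OnlineSurrogate`, `drop_void_samples`, the `p_drop` and `threshold` values) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical sketch: history data from the optimization are accumulated
# as training samples, and void (near-zero density) samples are dropped
# with probability p_drop before each model update.
def drop_void_samples(densities, targets, p_drop=0.8, threshold=1e-3, rng=None):
    """Keep all solid samples; keep each void sample with prob 1 - p_drop."""
    rng = np.random.default_rng(rng)
    void = densities < threshold
    keep = ~void | (rng.random(len(densities)) >= p_drop)
    return densities[keep], targets[keep]

class OnlineSurrogate:
    """Accumulates (density, response) pairs during optimization."""
    def __init__(self):
        self.x, self.y = [], []

    def collect(self, densities, responses):
        # One optimization step's fine-mesh data becomes training data,
        # so no pre-collected dataset is required.
        x, y = drop_void_samples(np.asarray(densities), np.asarray(responses))
        self.x.append(x)
        self.y.append(y)

    def training_set(self):
        # A real implementation would fit the DNN on this set here.
        return np.concatenate(self.x), np.concatenate(self.y)

surrogate = OnlineSurrogate()
rho = np.array([0.0, 0.9, 0.0, 0.5, 1.0])   # mock element densities
surrogate.collect(rho, rho * 2.0)           # mock element responses
X, y = surrogate.training_set()
print(len(X) <= len(rho))                   # void samples may be dropped -> True
```

The dropout step is what keeps the training set from being dominated by uninformative void elements as the design converges.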

2.2 Weaknesses (possible future directions)


1) The reason for choosing a DNN needs further discussion. Most previous approaches were based on CNNs or GANs, which have been proved effective. Future work could explore other machine learning and deep learning models and compare them.
2) This study uses NI and NF to select the data used for training, which can be considered a form of passive learning. Further research could try active learning to enhance the training process.
3) The performance of the model is significantly influenced by factors such as the depth of the network, the number of neurons, and the choice of activation functions. Further research should explore how to tune these hyper-parameters to improve model performance.
4) The online training and update strategy hinders achieving higher speedup in larger problems.
This limitation arises from two factors: (1) Training the model during topology optimization can
be computationally expensive for very large-scale problems. (2) The solution of the state
equation on the fine-resolution mesh is still needed at the optimization steps where the training
data are collected, which restricts the size of problems that can be handled when given limited
hardware resources.
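The active-learning direction suggested in point 2) could, for instance, replace fixed NI/NF selection intervals with uncertainty-based selection, keeping only the optimization steps where an ensemble of surrogate models disagrees most. A minimal sketch, with all names and the ensemble construction being illustrative assumptions:

```python
import numpy as np

# Hypothetical active-learning selector: rank candidate samples by the
# spread of an ensemble's predictions and keep the most uncertain ones.
def select_by_disagreement(candidates, models, n_select):
    preds = np.stack([m(candidates) for m in models])  # (n_models, n_cand)
    uncertainty = preds.std(axis=0)                    # ensemble spread
    return np.argsort(uncertainty)[-n_select:]         # most uncertain indices

# Toy ensemble: linear models with different slopes, so disagreement
# grows with |x|.
models = [lambda x, a=a: a * x for a in (1.0, 1.5, 2.0)]
x = np.array([0.0, 1.0, 2.0, 3.0])
idx = select_by_disagreement(x, models, n_select=2)
print(sorted(idx.tolist()))  # [2, 3]
```

In a topology optimization loop, this would direct the expensive fine-mesh solves toward the steps that most improve the surrogate, rather than sampling at fixed intervals.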
