PAALS Tutorial 2014
Glen Hansen edited this page Feb 8, 2019
[Recursive Inertial Bisection (RIB)]: http://www.cs.sandia.gov/Zoltan/ug_html/ug_alg_rib.html
[PAALS]: http://www.fastmath-scidac.org/research/parallel-albany-adaptive-loop-scorec.html
[Albany]: https://SNLComputation.github.io/Albany/
[Trilinos]: http://trilinos.sandia.gov/
[MeshAdapt]: http://www.fastmath-scidac.org/research/meshadapt-unstructured-high-order-mesh-adaptation-procedure.html
- Demonstrates
- Parallel unstructured mesh generation of a 13 million element mesh on a complex geometric model, from mesh controls defined in the GUI (shown above).
- Software
- Algorithms
- Source code: `/projects/FASTMath/ATPESC-2014/install/scorec/src/generate.cc`
- Example data files
  - Executable: `/projects/FASTMath/ATPESC-2014/install/scorec/bin/generate`
  - Input files: `/projects/FASTMath/ATPESC-2014/examples/paals/ex1/upright.smd`
  - Output files: `/projects/FASTMath/ATPESC-2014/examples/paals/ex1/out/`
- Executing the example
- Execution time: ~6 minutes
- Number of cores: 128
- Number of nodes: 16
- Setup
      mkdir $HOME/paals
      cd $HOME/paals
      cp -r /projects/FASTMath/ATPESC-2014/examples/paals/ex1/ .
      cd ex1
- Run the mesh generation tools and write SCOREC mesh files and ParaView files.

      qsub -A ATPESC2014 -q Q.ATPESC -O generate -n 16 --mode c8 --proccount 128 -t 10 ./generate upright.smd 13M
- Examining results
- Follow the output as the example runs:

      tail -f generate.output

  Hold Ctrl and press c to exit.
- Compare `generate.output` to `out/out.txt`.
. - The partition used by parallel mesh generation is balanced for its procedures. The image below shows 16 of the 128 parts.
  Parts 0-15 of the initial 128-part partitioning of the 13 million element mesh of the upright model.
- In Exercise 3 the mesh will be re-partitioned and the resulting change in partition quality computed.
- Optional - Follow Exercise 2 to visualize the partition.
- Download ParaView
- Transfer the `*vtu` directory to your machine.
- Load the mesh:
  - Select the folder icon in the top left, then browse to your home directory and select one of the `*vtu/m.pvtu` files.
  - Click 'Apply' to render.
- Select 'Surface With Edges' as shown.

  Enabling edge rendering in ParaView.
- Select 'apf_parts' to color the mesh elements by their part id.

  Enabling part id coloring in ParaView.
- Click 'Apply' to render.
- Source code: `/projects/FASTMath/ATPESC-2014/install/scorec/src/zsplit.cc`
- Example data files
  - Executable: `/projects/FASTMath/ATPESC-2014/installs/paals/zsplit`
  - Input files: `/projects/FASTMath/ATPESC-2014/examples/paals/ex3`
  - Output files: `/projects/FASTMath/ATPESC-2014/examples/paals/ex3/out/`
- Executing the example
  - Execution time: ~5 mins
  - Number of cores: 512
  - Number of nodes: 16
- Setup

      cd $HOME/paals
      cp -r /projects/FASTMath/ATPESC-2014/examples/paals/ex3 .
      cd ex3
      tar xzf 13M.tar.gz

- Run the partitioning tools and write ParaView (`*vtu`) files.

      cd $HOME/paals/ex3
      mkdir vtu
      qsub -A ATPESC2014 -q Q.ATPESC -O zsplit -n 16 --mode c32 --proccount 512 -t 20 ./zsplit upright.smd 13M/ 13M512/ vtu/13M512p 4

- Examining results
  - Compare `zsplit.output` to `out/out.txt`.
  - Partition quality statistics are output for the initial and final partitions, marked 'Pre' and 'Post' respectively. The number of disconnected components, neighbors, and shared vertices all decrease with the Zoltan partition. Also note the reduction of the element imbalance from 175% (2.75) to 5% (1.05).
  - _Optional_ - Transfer the `*vtu` files as was done in Exercise 2. Note the differences in shape and connectedness between the initial partition and the Zoltan/ParMETIS partition.
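The element imbalance quoted above is the largest per-part element count divided by the mean count over all parts, so an imbalance of 2.75 means the heaviest part holds 175% more elements than the average. A minimal sketch of that metric follows; the 4-part counts are hypothetical, chosen only to reproduce the 'Pre' and 'Post' values, and are not taken from the tutorial output:

```python
# Element imbalance: max per-part element count divided by the mean count.
# 1.0 is a perfectly balanced partition; 2.75 means the heaviest part
# carries 175% more elements than the average part.

def imbalance(counts):
    """Return the max/avg element imbalance for a list of per-part counts."""
    return max(counts) / (sum(counts) / len(counts))

# Hypothetical 4-part counts (average 100 elements per part):
pre = [275, 45, 40, 40]    # one overloaded part, as in the 'Pre' statistics
post = [105, 100, 98, 97]  # near-balanced, as in the 'Post' statistics

print(imbalance(pre))   # 2.75
print(imbalance(post))  # 1.05
```

Partitioners such as Zoltan/ParMETIS drive this ratio toward 1.0 while also reducing inter-part communication (neighbors and shared vertices).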
- Motivational exercise: predict (computationally) the peak strength of the welded joint,
and the drop in the load-bearing capacity of the joint as the weld fails.
- The inset pictures show the state of the welded joint at various locations on the load-displacement diagram.
- Note that none of the simulations match the experimental data beyond a displacement of 0.4 mm.
- The simulation without synthetic voids (blue curve) shows that the simulated joint is significantly stronger than the actual welded joint. The presence of porosity in the actual weld makes it weaker.
- We then attempt to increase the fidelity of the simulation by adding synthetic voids that approximate the porosity present in
the actual weld.
- All simulations containing voids, whether fine or coarse mesh, fail to predict the load shedding behavior as the joint begins to fail in the ligament (the area shown in red in the "onset of necking" diagram).
- Even more problematic, the linear solver in the simulation code fails to converge near the apex of the loading curve, ending the simulation prematurely.
- What might be the cause of this inability to predict the load shedding behavior in the simulations with voids?
- What might be responsible for the lack of convergence of the linear solver?
- How might these issues be mitigated?
- Problem Description
- Upright geometry
- Boundary conditions:
- Max y face: Zero x,y, and z displacement
- Min y face: Applied y displacement over 5 load steps
- u_y = {-.001,-.002,-.003,-.004,-.005}
- Maximum strain of 2.5% in y-direction
- J2 Plasticity Model (isotropic hardening)
- Elastic Modulus: 1 GPa
- Poisson's Ratio: 0.3
- Yield Strength: 50 MPa
- Hardening Modulus: 100 MPa
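A one-dimensional radial-return update illustrates how J2 plasticity with linear isotropic hardening evolves stress from strain. This is only a sketch using the material constants listed above; Albany's actual model is three-dimensional and tensorial, and the strain values below are illustrative, not the tutorial's load steps:

```python
# 1D elastic-predictor / plastic-corrector sketch of J2 plasticity with
# linear isotropic hardening, monotonic tension only.

E = 1.0e9    # Elastic modulus: 1 GPa (tutorial value)
Y = 50.0e6   # Yield strength: 50 MPa
H = 100.0e6  # Hardening modulus: 100 MPa

def stress(total_strain, plastic_strain=0.0):
    """Return (stress, updated plastic strain) for monotonic tension."""
    trial = E * (total_strain - plastic_strain)   # elastic predictor
    f = trial - (Y + H * plastic_strain)          # yield function
    if f <= 0.0:
        return trial, plastic_strain              # elastic step
    dgamma = f / (E + H)                          # return maps f to zero
    plastic_strain += dgamma
    return E * (total_strain - plastic_strain), plastic_strain

# At 2.5% strain the trial stress (25 MPa) is below the 50 MPa yield
# strength, so the response is elastic; an 8% strain yields and hardens.
s_el, _ = stress(0.025)
s_pl, ep = stress(0.08)
print(s_el / 1e6)                 # 25.0 (MPa, elastic)
print(s_pl / 1e6, ep)             # hardened stress equals Y + H*ep
```

After a plastic step the returned stress sits exactly on the hardened yield surface, which is the consistency condition the corrector enforces.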
- Demonstrates
- Example data files
  - Executable: `/projects/FASTMath/ATPESC-2014/install/paals/Albany.exe`
  - Input files: `/projects/FASTMath/ATPESC-2014/examples/paals/ex5/`
  - Output files: `/projects/FASTMath/ATPESC-2014/examples/paals/ex5/out/`
- Executing the example
- Execution time: 20 mins
- Number of cores: 1024
- Number of nodes: 64
- Setup
      cd $HOME/paals
      cp -r /projects/FASTMath/ATPESC-2014/examples/paals/ex5/ .
      cd ex5
      tar xzf mesh.tar.gz
- Run PAALS
      qsub -A ATPESC2014 -q Q.ATPESC -O albany --proccount 1024 -n 64 -t 30 --mode c16 ./Albany.exe
- Examining results
  The yy component of the Cauchy stress tensor at the first five load steps.
- Weld Failure Presentation
- J. Foulk, M. Veilleux, J. Emery, J. Madison, H. Jin, J. Ostien, A. Mota, Resolving the evolution of pore structures in 304-L laser welds, USNCCM 13, San Diego, July 30, 2015, SAND2015-9289C
- SCOREC tools
  - M. Zhou, O. Sahni, T. Xie, M.S. Shephard and K.E. Jansen, Unstructured Mesh Partition Improvement for Implicit Finite Element at Extreme Scale, Journal of Supercomputing, 59(3):1218-1228, 2012. DOI: 10.1007/s11227-010-0521-0
  - M. Zhou, T. Xie, S. Seol, M.S. Shephard, O. Sahni and K.E. Jansen, Tools to Support Mesh Adaptation on Massively Parallel Computers, Engineering with Computers, 28(3):287-301, 2012. DOI: 10.1007/s00366-011-0218-x
  - M. Zhou, O. Sahni, M.S. Shephard, K.D. Devine and K.E. Jansen, Controlling unstructured mesh partitions for massively parallel simulations, SIAM J. Sci. Comp., 32(6):3201-3227, 2010. DOI: 10.1137/090777323
  - M. Zhou, O. Sahni, H.J. Kim, C.A. Figueroa, C.A. Taylor, M.S. Shephard, and K.E. Jansen, Cardiovascular Flow Simulation at Extreme Scale, Computational Mechanics, 46:71-82, 2010. DOI: 10.1007/s00466-009-0450-z
- Mesh data and geometry interactions
  - Seol, E.S. and Shephard, M.S., Efficient distributed mesh data structure for parallel automated adaptive analysis, Engineering with Computers, 22(3-4):197-213, 2006. DOI: 10.1007/s00366-006-0048-4
  - Beall, M.W., Walsh, J. and Shephard, M.S., A comparison of techniques for geometry access related to mesh generation, Engineering with Computers, 20(3):210-221, 2004. DOI: 10.1007/s00366-004-0289-z
  - Beall, M.W. and Shephard, M.S., A general topology-based mesh data structure, Int. J. Numer. Meth. Engng., 40(9):1573-1596, 1997. DOI: 10.1002/(SICI)1097-0207(19970515)40:9<1573::AID-NME128>3.0.CO;2-9
- Adaptivity
  - [Aleksandr Ovcharenko, Parallel Anisotropic Mesh Adaptation with Boundary Layers, Ph.D. Dissertation, RPI, 2012](http://www.scorec.rpi.edu/REPORTS/2012-20.pdf)
  - Q. Lu, M.S. Shephard, S. Tendulkar and M.W. Beall, Parallel Curved Mesh Adaptation for Large Scale High-Order Finite Element Simulations, Proc. 21st International Meshing Roundtable, Springer, NY, pp. 419-436, 2012. DOI: 10.1007/978-3-642-33573-0
  - A. Ovcharenko, K. Chitale, O. Sahni, K.E. Jansen, M.S. Shephard, S. Tendulkar and M.W. Beall, Parallel Adaptive Boundary Layer Meshing for CFD Analysis, Proc. 21st International Meshing Roundtable, Springer, NY, pp. 437-455, 2012. DOI: 10.1007/978-3-642-33573-0
- X.-J. Luo, M.S. Shephard, L.-Q. Lee and C. Ng, Moving Curved Mesh Adaption for Higher Order Finite Element Simulations, Engineering with Computers, 27(1):41-50, 2011. DOI: 10.1007/s00366-010-0179-5
- O. Sahni, X.J. Luo, K.E. Jansen, M.S. Shephard, Curved Boundary Layer Meshing for Adaptive Viscous Flow Simulations, Finite Elements in Analysis and Design, 46:132-139, 2010. DOI: 10.1007/s00366-008-0095-0
  - Alauzet, F., Li, X., Seol, E.S. and Shephard, M.S., Parallel Anisotropic 3D Mesh Adaptation by Mesh Modification, Engineering with Computers, 21(3):247-258, 2006. DOI: 10.1007/s00366-005-0009-3
- Li, X., Shephard, M.S. and Beall, M.W., 3-D Anisotropic Mesh Adaptation by Mesh Modifications, Comp. Meth. Appl. Mech. Engng., 194(48-49):4915-4950, 2005, doi:10.1016/j.cma.2004.11.019
- Li, X., Shephard, M.S. and Beall, M.W., Accounting for curved domains in mesh adaptation, International Journal for Numerical Methods in Engineering, 58:246-276, 2003, DOI: 10.1002/nme.772
- Albany
- M. Gee, C. Siefert, J. Hu, R. Tuminaro, and M. Sala. ML 5.0 Smoothed Aggregation Users Guide. Technical Report SAND2006-2649, Sandia National Laboratories, 2006. http://trilinos.sandia.gov/packages/ml/mlguide5.pdf
- Qiushi Chen, Jakob T. Ostien, Glen Hansen. Development of a Used Fuel Cladding Damage Model Incorporating Circumferential and Radial Hydride Responses. Journal of Nuclear Materials, 447(1-3):292-303, 2014. http://dx.doi.org/10.1016/j.jnucmat.2014.01.001
- Michael A. Heroux, Roscoe A. Bartlett, Vicki E. Howle, Robert J. Hoekstra, Jonathan J. Hu, Tamara G. Kolda, Richard B. Lehoucq, Kevin R. Long, Roger P. Pawlowski, Eric T. Phipps, Andrew G. Salinger, Heidi K. Thornquist, Ray S. Tuminaro, James M. Willenbring, Alan Williams, and Kendall S. Stanley. An Overview of the Trilinos Package. ACM Trans. Math. Softw., 31(3):397–423, 2005. http://trilinos.sandia.gov
- Roger P. Pawlowski, Eric T. Phipps, and Andrew G. Salinger. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-based Generic Programming. Scientific Programming, 20(2):197–219, 2012. http://dx.doi.org/10.3233/SPR-2012-0350
- Roger P. Pawlowski, Eric T. Phipps, Andrew G. Salinger, Steven J. Owen, Christopher M. Siefert, and Matthew L. Staten. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part II: Application to Partial Differential Equations. Scientific Programming, 20(3):327–345, 2012. http://dx.doi.org/10.3233/SPR-2012-0351
- Eric Phipps. A Path Forward to Embedded Sensitivity Analysis, Uncertainty Quantification and Optimization. https://cfwebprod.sandia.gov/cfdocs/CompResearch/docs/Phipps_Embedded_SA_UQ_Opt_web.pdf
- Eric Phipps and Roger Pawlowski. Efficient Expression Templates for Operator Overloading-based Automatic Differentiation. Preprint, 2012. http://arxiv.org/abs/1205.3506v1
- Eric Phipps, H. Carter Edwards, Jonathan Hu, and Jakob T. Ostien. Exploring Emerging Manycore Architectures for Uncertainty Quantification through Embedded Stochastic Galerkin Methods. International Journal of Computer Mathematics, 91(4):707-729, 2014. http://www.tandfonline.com/doi/abs/10.1080/00207160.2013.840722
- A. Salinger et al. Albany website. http://SNLComputation.github.io/Albany