From being computed by biological neurons to being simulated by software to running on artificial neurons – Computational Fluid Dynamics has come full circle.

 

Aravinth C K, a 5th-year Dual Degree student in the Department of Aerospace Engineering, gives us a glimpse into his Dual Degree Project – the development of a hybrid algorithm rooted in Computational Fluid Dynamics and Machine Learning to predict flow parameters and characterise ‘turbulence’ – a strategy that could potentially revolutionize the way we model turbulence today.

‘We actually study rocket science’ – you must’ve heard countless Aerospace Engineers (both students and graduates) drop the mic using this line.

Well, it’s a joke that never seems to get old. I would be lying if I said I didn’t belong to this group of people who try to glorify their Aerospace degree. I must, however, confess that the research I am currently pursuing does not quite fall under the classification of ‘rocket science’. Nonetheless, I find it unique and very exciting to work on.

My work revolves around a word that most of us who have travelled by aeroplane have surely come across at some point in our journeys.

Have you ever experienced the occasional disturbance when you are travelling in an aeroplane?

It surely sends a chill down the spine when a pilot asks us to fasten our seat-belts in almost instantaneous response to such a disturbance. More often than not, the pilot invokes the term ‘turbulence’ to explain the discomfort. My work is primarily concerned with this very term – turbulence.

I deal with the modelling of turbulent flows with the help of simulations accelerated by data-driven techniques.

I did not have a clue what the word ‘turbulence’ meant before a formal engineering education, and I do not completely understand what it means even now, after spending several months working in this field! This sentiment is quite common among researchers in the area.

Turbulence remains an unresolved problem in physics even after several decades of dedicated research in this area.

Please fasten your seatbelts – we’re expecting some Turbulence.


Laminar and turbulent flows are the two regimes of fluid flow: the first corresponds to streamlined motion, the second to chaotic motion. Like everything else in science, these flows are governed by a set of equations – in fact, the very same conservation equations of fluid mechanics, specifically the Navier–Stokes equations, that help us study such flows.

My research entails the examination of such flows in very simple geometries, such as flow inside a channel [Figure 1] – one of the most extensively investigated flows in fluid mechanics. Conducting simulations for such geometries and collecting relevant data for later use forms the bulk of my research focus. In addition, I employ tools from Machine Learning to correct and refine these simulations, with the aim of bringing their solutions as close to the ground truth as possible.

 

Figure 1: Mean velocity field variation inside a channel flow investigated using RANS simulations.

 

The Machine Learning Bandwagon is here – and Computational Fluid Dynamics has jumped aboard!

Machine Learning (ML), which forms an integral part of my work, is quite the buzzword these days. The craze for ML stems from its application-oriented nature and its ability to extract patterns and information from data. From text translation to face recognition, it has found use in a multitude of applications and research areas. It isn’t surprising that ML has found use cases in the field of Computational Fluid Dynamics (CFD) as well.

It is quite intriguing to note that the first papers and articles on ML trace back to as early as the 1950s and 60s, and yet it is only recently that it has become such an attractive field in science and engineering.

What, then, took ML so long to attain the popularity it enjoys today? What could have changed so drastically to help ML flourish the way it does now? In my view, the reasons are two-fold: (much) improved processing power, and the ability to generate large amounts of data and store it compactly.

It is estimated that computational performance has increased roughly a trillion-fold over the last five decades! To get a feel for the numbers, digest this:

The best Apollo-era computers (Apollo being the US space programme that landed humans on the Moon) executed instructions millions of times more slowly than a modern smartphone!

 

The command module computer that the astronauts used to control the spacecraft during the lunar missions had processing power comparable to a pair of Nintendo consoles!

While such comparisons from two different generations may not always be appropriate and meaningful, they do help us visualize technological advancements over the years with ease.

The Project

 

Coming back to the project: my research is thoroughly inter-disciplinary in its approach, as it requires the integration of ideas from both CFD and Machine Learning.

The basic idea is to consider two simulation techniques: Direct Numerical Simulation (DNS), far superior but inordinately expensive, and the cheaper but less accurate Reynolds-Averaged Navier–Stokes (RANS) models. The first step involves collecting data for the specific quantity under investigation – the Reynolds stress tensor – using both approaches. A Machine Learning model then ‘learns’ the complexities and correlations in this data and uses them to predict the stress tensor starting from the inferior technique.
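The workflow just described can be sketched in a few lines of Python. This is purely illustrative: random placeholder arrays stand in for real OpenFOAM (RANS) output and published DNS data, and the feature and output counts are my assumptions, not the project's actual choices.

```python
import numpy as np

# Hypothetical placeholder data: in the real project these would come from
# OpenFOAM (RANS) simulations and published DNS datasets.
n_points = 1000    # grid points sampled from the channel
n_features = 9     # e.g. components of the mean velocity gradient (assumed)
n_outputs = 6      # independent components of the symmetric Reynolds stress tensor

rans_features = np.random.rand(n_points, n_features)  # inputs from RANS
dns_stresses = np.random.rand(n_points, n_outputs)    # ground truth from DNS

# Pair each RANS feature vector with the DNS stress at the same location,
# then split the pairs into training and validation sets.
split = int(0.8 * n_points)
X_train, y_train = rans_features[:split], dns_stresses[:split]
X_val, y_val = rans_features[split:], dns_stresses[split:]
```

Once paired this way, any regression model can be fitted to map RANS features to DNS-quality stresses.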

The two-equation k-ε model, a modelling paradigm that falls under the Reynolds-Averaged Navier–Stokes (RANS) category, was one of the techniques considered for the simulations. As the name suggests, RANS involves time-averaging, and the solutions obtained describe the mean flow characteristics – such as the mean velocity field. RANS models are the most common choice for engineering design. However, they suffer from the closure problem: the averaged equations contain more unknowns than equations, so additional modelling assumptions are required to close them.
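As a concrete taste of what the k-ε closure supplies, the model relates an ‘eddy viscosity’ to the turbulent kinetic energy k and its dissipation rate ε via ν_t = C_μ k²/ε, with the standard constant C_μ = 0.09. A minimal sketch:

```python
# Eddy viscosity in the standard k-epsilon model: nu_t = C_mu * k^2 / epsilon,
# where k is the turbulent kinetic energy and epsilon its dissipation rate.
C_MU = 0.09  # standard model constant

def eddy_viscosity(k: float, epsilon: float) -> float:
    """Turbulent (eddy) viscosity from the standard k-epsilon closure."""
    return C_MU * k ** 2 / epsilon
```

This eddy viscosity is what the model substitutes for the unresolved turbulent stresses – precisely the kind of assumption that limits RANS accuracy.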

They (RANS based modelling) can give notoriously unreliable solutions.

On the other hand, Direct Numerical Simulation (DNS) provides complete information about the flow, free of the modelling assumptions that RANS requires. It is an excellent research tool but comes at a very high computational price (several months even on the fastest supercomputers available to us), rendering it infeasible as a general-purpose design tool. For this project, the RANS simulations were run in OpenFOAM, while the DNS solutions were taken from the available literature.

After collecting the data from both RANS and DNS simulations, the two were linked through a deep neural network architecture [Figure 2]. The neural network estimates a quantity that the RANS model predicts poorly because of its closure assumptions. The network was trained with velocity gradients obtained from the RANS model (lower accuracy, but fast to compute) as inputs, while the DNS solutions (superior accuracy, but extremely expensive to compute) served as the ground truth.
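To make the idea of hidden layers building a non-linear relationship concrete, here is a toy forward pass in plain NumPy. The layer sizes are illustrative assumptions, and the weights are random here rather than trained against DNS data as in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Non-linear activation applied in the hidden layer."""
    return np.maximum(0.0, x)

# Toy network: 9 velocity-gradient features -> 50 hidden units -> 6 stress
# components. All sizes are assumptions for illustration only.
W1 = rng.normal(size=(9, 50)) * 0.1
b1 = np.zeros(50)
W2 = rng.normal(size=(50, 6)) * 0.1
b2 = np.zeros(6)

def predict_stress(velocity_gradients):
    """One forward pass: RANS-derived features in, predicted stresses out."""
    hidden = relu(velocity_gradients @ W1 + b1)
    return hidden @ W2 + b2

sample = rng.normal(size=(4, 9))       # features at four grid points
predictions = predict_stress(sample)   # shape (4, 6)
```

Training would then adjust W1, b1, W2, b2 to minimise the mismatch between these predictions and the DNS ground truth.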

By incorporating ML into the CFD problem, we were able to construct a time-efficient alternative – a potential bridge between RANS, the commercially used model, and DNS.

This also means that we get a thorough understanding of the flow at a much lower computational expense.

This project, inspired by previous work in this field[1][2], exploits the vast amount of data already available to us from experiments and simulations. However, the robustness of such ML-assisted flow modelling should not be taken for granted: there are still questions over its ability to predict equally good results for new flows unknown to the neural network model.

Figure 2: Depiction of a neural network architecture. Here, the hidden layers help build a highly non-linear relationship between inputs and outputs. [1]

Through this research, I was able to pursue my passion for working on a project involving multi-disciplinary learning. I believe that traditional boundaries between disciplines are obsolete in this day and age and may even be counter-productive in achieving innovative solutions.

With just about a month to go, I am very excited to continue my research, and in the process, I hope to add value to the research community.

Aravinth C K is a 5th-year Dual Degree Student from the Department of Aerospace Engineering. This article sheds light on the work he is pursuing as his Dual Degree Project under the mentorship of Dr. Balaji Srinivasan and Dr. A Sameen, with the constant support of Gaurav Yadav (PhD Scholar, Department of Mechanical Engineering – IIT Madras).

 

 

References:

[1] J. Ling, A. Kurzawski, and J. Templeton (2016) “Reynolds averaged turbulence modelling using deep neural networks with embedded invariance”, Journal of Fluid Mechanics, 807, 155–166.

[2] K. Duraisamy, G. Iaccarino, and H. Xiao (2019) “Turbulence Modeling in the Age of Data”, Annual Review of Fluid Mechanics, 51, 1–23.

Featured Photo:

Myoungkyu Lee and Robert D. Moser “Direct numerical simulation of turbulent channel flow up to Re_tau = 5200”, Journal of Fluid Mechanics, 2015, vol. 774, pp. 395-415

 
