
First Class Fidelity: A Conversation with Clint Geller

  • Writer: Ken Roberts
  • Mar 8, 2021
  • 18 min read

Updated: Mar 15, 2021

Materials Design presents the first in a series of interviews with influential figures in computational materials science and engineering.

Clint Geller recently joined Materials Design as Senior Advisor Scientist working closely with Erich Wimmer following many years at the Naval Nuclear Laboratory in Pittsburgh. Clint’s applied use of novel modeling techniques has led to patents, processes, products, and many solved problems. Here Clint shares some of his background and experiences in computational materials science and engineering.


MATERIALS DESIGN: Clint Geller, it is a pleasure and privilege to meet with you today. Thank you very much for taking the time.


GELLER: You’re very welcome, I am delighted to do this.


To acquaint the readers with you, could you please tell us a little about yourself? For example, where were you born and raised, and what were your educational and formative work experiences?


GELLER: Sure. Well, I was born in 1954 in Brooklyn, New York, the son of a New York City police officer and a part-time clothing saleswoman. I attended public schools and became the first member of my immediate family to obtain a college degree. My father passed away shortly before my tenth birthday. I had one sibling, my older brother Mitch, who kindled in me a love of science and a desire to become a scientist at a very early age. I attended college at the Cooper Union for the Advancement of Science & Art in the East Village section of Manhattan, graduating in 1975 with a B.S. degree in Physics. Cooper Union is a small, private school founded in 1858, located a block from St. Mark’s Place, an address briefly made famous by a contemporaneous Bob Dylan lyric (“smelled like midnight on Saint Mark’s Place”). In February 1860, Abraham Lincoln delivered his famous Cooper Union Address in the school’s Great Hall, the same auditorium in which my commencement exercise was held in 1975. Until recently, tuition was free for all students, a fact which suited my family’s modest financial circumstances extremely well. I was actually a physics and math double major: since tuition was free, taking extra courses that interested me cost nothing. I was introduced to computational quantum chemistry in the summer between my junior and senior years, when I worked as an NSF undergraduate research assistant for Professor Jules Moskowitz in the NYU Chemistry Department. I ran computations with the MIT Polyatom code and the Caltech GVBOne code using punch card decks on the CDC 6600 at NYU’s Courant Institute, which was just down the street from Cooper Union.


One of my physics professors at Cooper, Aaron Yalow, encouraged me to pursue a graduate degree in physics at the University of Illinois at Urbana-Champaign. Aaron and his wife Rosalyn Yalow, who later won the Nobel Prize in Medicine for radioimmunoassay, had met at the U of I when they were both in the graduate physics program there. U of I was, and may still be, the academic Mecca for what was my primary interest, solid state physics, especially since John Bardeen taught there at the time I attended. I accepted a physics teaching assistantship, then became a research assistant in my second semester. Torn between computational solid state physics and renormalization group methods in critical phenomena, I eventually opted for the former and wrote a PhD thesis entitled “Correlated Hartree-Fock Band Structures for Transition Metals.” The U of I physics program was excellent, but having just come from New York City, Urbana-Champaign seemed like a sensory deprivation experiment in the middle of a cornfield. So, with relatively few distractions, my graduate career progressed briskly, and I received my PhD in 1979. I remember printing out my thesis on a daisy wheel printer and writing in the equations by hand with a Rapidograph pen.


At the time I graduated, the Bettis Atomic Power Laboratory, located near Pittsburgh, PA, was looking for a couple of solid state physicists to help solve an important engineering problem. Bettis offered me a permanent job with an attractive starting salary, and my nearly 41-year career as part of the US Naval Reactors (NR) Program began. The Bettis Laboratory, now part of the Naval Nuclear Laboratory (NNL), has been operated for NR by various corporate contractors since I started there in 1979 – Westinghouse, CBS, Bechtel, and most recently, Fluor – but those changes never affected the continuity of the work or the mission of the organization.

It’s unusual nowadays for a professional to remain with the same technical organization for so long. What was it about the NNL job that kept you there?


GELLER: It was a combination of factors, not the least of which were that I liked the work and I liked living in Pittsburgh. It’s a friendly town, small enough to be livable and affordable, but big enough to have a vibrant culture and an intellectual life. My only previous terms of reference had been New York City and Urbana-Champaign, so Pittsburgh seemed like a good compromise between uncomfortable extremes. And the air is much cleaner now than it was when the steel mills were operating. Interestingly, the year I came to Pittsburgh, the city’s professional sports teams won both the Super Bowl and the World Series, and Pittsburgh named itself the “City of Champions.” Personally, though, I was more attracted to the traditional music community I found there than to spectator sports. In fact, I met my wife Maria, a lifelong Pittsburgher, at a square dance in 1985, and we were married in 1986. Maria is a retired educational psychologist who worked for the University of Pittsburgh. Our daughter Anne, who goes by Annie, was born in 1991. She will be defending her PhD in immunology in a couple of months, after which she will complete her final two years of medical school. In Pittsburgh I also rediscovered an early childhood interest of mine – tournament chess.

Another significant factor in my decision to make a career at Bettis was the feeling that I was doing something important for the nation. The Bettis lab is an historically important place and to this day boasts an exceptional team of world-leading experts and practitioners. More than any other single organization, the Bettis Laboratory and the NR Program are responsible for having turned nuclear power from an idea into a practical reality. The first marine nuclear propulsion plant and the first commercial nuclear power plant were both designed at Bettis. During the Cold War, the most invulnerable leg of the American nuclear deterrent triad was our nuclear submarine fleet, which was directly supported by Bettis. In terms of career opportunities, being a scientist embedded in an engineering development organization posed numerous unique challenges and opportunities. I spent about half my Bettis career functioning as a development engineer and half as a research scientist, but all the while serving as a technical interpreter and bridge between the various science and engineering communities at the lab. Communication has always been a strong suit for me. In fact, writing is my hobby as well as a major aspect of my professional work. Aside from my physics and engineering publications, I have published two books on antiquarian horology (the study of timekeeping and timekeepers) and also a sci-fi/fantasy trilogy, “Gennebar Rising.”


My work assignments spanned a wide range of subjects: corrosion and hydriding of zirconium alloys, corrosion fatigue of stainless steels and nickel superalloys, high temperature creep of refractory metals, piezoelectric material performance, optoelectronic properties of III-V semiconductors, radiation-hardened sensors for a nuclear-powered NASA mission to the icy moons of Jupiter, thermionic, thermoelectric, and thermophotovoltaic energy conversion systems, and hardfacing alloys for nuclear applications. The breadth and scope of my varying assignments kept me learning new things and growing all the time. Along the way I garnered two US patents, one for a novel thermionic energy converter, and one for a commercially produced molybdenum alloy.


You said that being a scientist embedded in an engineering organization created unique challenges and opportunities. Could you tell us more about that please?


GELLER: I’ll offer two examples. First, engineering pays the bills, so the science must serve the engineering. Nothing there is ever done for the greater glory of science alone, though some of the science it has been my privilege to be associated with has been glorious. And so, I always needed to ask the question: How might the model or the innovation I am developing impact the operating envelope of the engineering system of which it is a part? One of the first things I recognized was that much of the data I had available for developing models at the lab was ill-suited to revealing the underlying physical mechanisms governing the material behavior I wanted to understand. So, in my first several years in the program, I became a vigorous advocate of separate effects testing. At the time, this idea ran counter to the prevailing material testing philosophy, which relied on making most tests as “prototypical” as possible. That was a seat-of-the-pants engineering philosophy which precluded the systematic isolation of specific variables and effectively abandoned any real possibility of understanding the underlying physical mechanisms governing the behaviors observed. At the time, most of the material models in use were empirical, essentially curve fits, so mechanistic understanding was seen as an unnecessary luxury. Second, I learned that it is not usually the mean performance of a material, but more often the uncertainties and variabilities surrounding that mean performance, that determine the operating envelope of an engineering system or component of which that material is a part. Thus, the greatest value of a physics-based material performance model is often that it can inform these variabilities. Over the course of four decades, I helped the NR Program to see that physics-based material performance models were essential to reducing material performance uncertainty, and thereby to expanding component operating envelopes without increased risk.

Could you give us a concrete example of how you applied one of these insights?


GELLER: Here is one example I can speak about, in which I successfully tailored a scientific approach to engineering needs. In the mid-1990s, I led a small team of scientists developing performance models for thermophotovoltaic (TPV) energy conversion devices. TPV converters are akin to solar cells, except that they use an IR energy source rather than the sun. Since the energy isn’t free, thermal efficiency assumes greater importance for TPV devices. The lower source temperature requires active semiconductor layers with smaller bandgap energies than silicon offers. Consequently, the p-n junction at the heart of a TPV device is composed of doped ternary or quaternary III-V semiconductors. The performance of a TPV device depends sensitively not only upon the bandgap energies of the active layers, but also upon other optoelectronic properties of these materials, such as the spectral absorption coefficient and the minority carrier diffusion lengths. All of these properties in turn depend sensitively on doping levels. In the early 1990s, ab initio theory was unable to predict bandgap energies for ternary III-V semiconductors, much less doped ones, at a level of accuracy useful for either engineering device performance prediction or optimization. (The GW method of Hybertsen and Louie was brand new, and still computationally intractable for ternary III-V alloys.) Without accurate bandgap energies or band dispersion relations, accurate absorption spectra and carrier effective masses likewise could not be obtained. Further, this put realistic estimates of radiative or Auger carrier recombination rates, which dictate minority carrier lifetimes and diffusion lengths, out of the question.

An engineer outside of my team had developed a relatively simple TPV device performance model, but he was forced to assume the values for most of the input material properties needed to generate power density and thermal efficiency estimates, the two most important device parameters for engineering. Not surprisingly, this model initially had limited value for device optimization because one needed to input a dark current value an order of magnitude lower than the values being measured in the lab in order to obtain reasonable power density and efficiency estimates. A complete model would have predicted the dark current value, rather than using it as a tuning parameter. This major weakness inspired little confidence in the engineering model. My team set out to calculate all the necessary optoelectronic properties correctly, so that the engineering model, or a variant of it, could be effectively used.
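
To make the dark current’s leverage concrete, here is a minimal single-diode sketch in Python. It is a hedged illustration, not the engineering model described above: the photocurrent J_L, the dark current values, and the voltage range are assumed placeholders, not measured device data.

```python
import numpy as np

# Minimal single-diode model of a TPV (or solar) cell, illustrating how
# strongly the dark (saturation) current J0 controls the maximum power point.
# All numbers are illustrative placeholders, not measured device data.

q_over_kT = 1.0 / 0.0257   # inverse thermal voltage at ~298 K, 1/V
J_L = 1.5                  # photogenerated current density, A/cm^2 (assumed)

def output_power(J0, V):
    """Output power density J(V)*V from the ideal diode equation."""
    J = J_L - J0 * (np.exp(q_over_kT * V) - 1.0)
    return J * V

V = np.linspace(0.0, 0.6, 2000)
for J0 in (1e-6, 1e-7):    # a dark current, and a value 10x lower
    P = output_power(J0, V)
    i = P.argmax()
    print(f"J0 = {J0:.0e} A/cm^2 -> max power {P[i]:.3f} W/cm^2 at V = {V[i]:.3f} V")
```

Reducing J0 tenfold raises the open-circuit voltage by roughly (kT/q)ln(10), about 60 mV at room temperature, which is why an order-of-magnitude error in the assumed dark current wrecks power density and efficiency estimates.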


The first step in the project was to predict direct bandgap energies and other critical points of various III-V semiconductor band structures with reasonable accuracy. We also needed to predict band dispersions accurately, so that carrier effective masses, which figure into both carrier mobilities and doping-dependent bandgap energy shifts (so-called Moss-Burstein shifts), could be satisfactorily determined. This was in the mid-1990s, before Materials Design, Inc. (MDI) existed, and Erich Wimmer and Walter Wolf were working for an earlier atomistic simulation company. That company offered access to a DFT code of the day, but the interface to the code was inadequate for our application. The code gave me different spectral absorption coefficient results for a unit cell than for a supercell of the same composition, because its k-point sampling algorithm was too crude. Further, it could not reflect doping effects at all. And that wasn’t the worst of it. Numerical issues aside, standard DFT itself was inadequate. The local density approximation (LDA) of DFT typically underestimates bandgap energies by about 40%, and the relative errors are even worse for narrow gap semiconductors. The LDA actually predicts indium arsenide, an important binary III-V compound, to be a metal with a negative bandgap energy.
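
For orientation, in the simplest parabolic-band picture the doping-induced (Moss-Burstein) shift of the absorption edge in a degenerately n-doped semiconductor is

$$\Delta E_{\mathrm{BM}} \approx \frac{\hbar^{2}\left(3\pi^{2}n\right)^{2/3}}{2}\left(\frac{1}{m_{e}^{*}}+\frac{1}{m_{h}^{*}}\right),$$

where $n$ is the free-electron concentration and $m_{e}^{*}$, $m_{h}^{*}$ are the band-edge effective masses. For the strongly non-parabolic bands described next, the full calculated dispersions are required rather than this textbook estimate.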


Furthermore, indium-bearing III-V semiconductors, like the ones in which I was especially interested, have very low conduction band effective masses with extreme dispersion near the conduction band minimum, so doping effects on the bandgap energy and the optical absorption edge are especially large. Moreover, indium is heavy enough that spin-orbit relativistic effects nontrivially influence both the bandgap energy and the band dispersion.

The practical solution I adopted was to have Erich Wimmer and Walter Wolf import the FLAPW band structure code into their user environment, code the screened nonlocal exchange (“sX-LDA”) functional proposed by Bylander and Kleinman into FLAPW, and then build a GUI for all of it. The sX-LDA functional incorporates a nonlocal exchange term modified by a Thomas-Fermi screening factor, with a screening length based on the mean valence charge density of the unit cell. The sX-LDA functional is in the same spirit as the hybrid functionals developed later, except that in the sX-LDA approach, the split between local and nonlocal exchange is determined by the valence charge density of the specific material system to which it is applied, rather than being fixed a priori by a fit to a static training set. sX-LDA worked like a charm for Group IV, III-V, and II-VI semiconductors, and even for some large gap insulators on which it was subsequently tested. Agreement with experimental bandgap and other critical point values within a tenth or two of an eV was typical. Effective mass predictions matched well too. But the work didn’t end there. Walter needed to develop and code a k-point interpolation scheme to handle the extreme band dispersions near the conduction band minima of some of the materials of interest, and a separate group under the late Prof. Arthur Freeman at Northwestern University needed to work out a method for calculating Auger recombination coefficients based on the sX-LDA band energies and wave functions coming out of FLAPW. They successfully implemented a novel approach I suggested to them that exploits the detailed balance principle. Spin-orbit effects were handled self-consistently through a second variation approach.
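
Schematically, the sX-LDA construction replaces the bare exchange interaction with a Thomas-Fermi-screened one,

$$E_{x}^{\mathrm{sX}} = -\frac{1}{2}\sum_{i,j}^{\mathrm{occ}}\iint \frac{\psi_{i}^{*}(\mathbf{r})\,\psi_{j}(\mathbf{r})\,\psi_{j}^{*}(\mathbf{r}')\,\psi_{i}(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}\,e^{-k_{\mathrm{TF}}\left|\mathbf{r}-\mathbf{r}'\right|}\,d\mathbf{r}\,d\mathbf{r}',$$

with the screening wavevector set by the mean valence density $\bar{n}$ of the cell (in Hartree atomic units, $k_{\mathrm{TF}}^{2}=4k_{F}/\pi$ with $k_{F}=(3\pi^{2}\bar{n})^{1/3}$) and the remaining exchange-correlation treated at the LDA level. This is a sketch of the Bylander-Kleinman idea, not the full implemented functional.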

I took the accurate doping-dependent spectral absorption coefficients provided by Erich and Walter, along with the Auger results provided by Northwestern, and calculated minority carrier lifetimes as a function of doping, material composition, and temperature. This had never been done before. To do so, I needed to calculate the large photon recycling factor for the specific TPV device geometry in question. (III-V semiconductors have large refractive indices, so the escape cones for electron-hole recombination photons can be narrow.) Doping effects were incorporated by means of a rigid band approximation. Temperature effects were more complicated. First, the band structure was calculated at a temperature-appropriate lattice parameter. Then, because this was an engineering effort rather than a PRB publication, a very small rigid band shift – a “scissor operator” – was applied to represent any residual effects of electron-phonon coupling not captured by the lattice thermal expansion. Finally, Fermi-Dirac occupation functions were applied in the calculation of the spectral absorption coefficient. The same calculations informed estimates of carrier mobilities by providing accurate, doping-dependent effective mass values for both conduction band electrons and valence band holes.
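
The structure of that lifetime calculation can be sketched in a few lines of Python. The coefficients below (radiative coefficient B, Auger coefficient Cp, mobility, and photon recycling factor phi) are illustrative placeholders, not the project’s actual values:

```python
import math

B    = 1e-10   # radiative recombination coefficient, cm^3/s (assumed)
Cp   = 1e-28   # Auger coefficient, cm^6/s (assumed)
phi  = 10.0    # photon recycling factor (assumed; large for narrow escape cones)
mu   = 3000.0  # minority-electron mobility, cm^2/(V*s) (assumed)
kT_q = 0.0257  # thermal voltage at ~298 K, V

def minority_electron_properties(N_A):
    """Effective lifetime (s) and diffusion length (cm) for minority electrons
    in a p-type layer with acceptor concentration N_A (cm^-3)."""
    tau_rad   = phi / (B * N_A)        # photon recycling lengthens the radiative lifetime
    tau_auger = 1.0 / (Cp * N_A ** 2)  # Auger rate grows as the square of the doping
    tau_eff   = 1.0 / (1.0 / tau_rad + 1.0 / tau_auger)
    D = kT_q * mu                      # Einstein relation, cm^2/s
    return tau_eff, math.sqrt(D * tau_eff)

for N_A in (1e17, 1e18, 1e19):
    tau, L = minority_electron_properties(N_A)
    print(f"N_A = {N_A:.0e} cm^-3: tau = {tau:.2e} s, L = {L * 1e4:.1f} um")
```

The crossover this exhibits as doping rises, with Auger recombination overtaking radiative recombination, is one reason the calculated Auger coefficients were indispensable.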

When all the pieces were assembled, our TPV device performance model agreed with measured efficiency and power density values to within experimental error margins. The engineering device model was then used to optimize doping levels and to explore alternate device geometries and material compositions. Before the project ended, record-breaking device thermal efficiency values were being measured in our laboratories. Hence, my team accomplished all of its objectives. However, as events unfolded, the original vendor was purchased, and the new management had different priorities. As a result, support for FLAPW within their user environment was discontinued. Fortunately, though, all the relevant capabilities of the FLAPW code have since been incorporated into the VASP code, which converges more robustly and is more computationally efficient than FLAPW. VASP is now accessible through MDI’s powerful and much more comprehensive MedeA user interface. So, as one door closed another opened, and the decision to discard all the application value embodied in the original FLAPW interface was the determining factor in my fateful suggestion to Erich Wimmer that spurred the birth of MDI!

Could you please say more about the “fateful suggestion”?


GELLER: Sure. Well, that original vendor’s corporate structure never quite seemed to work for their materials science customers. At least it didn’t work for materials science customers with unique needs like mine, and I wasn’t alone. Especially in the inorganic materials area, their organizational heart never seemed to be fully in the business. The only reason I ever had any confidence in their ability to deliver in that area was the apparent dedication and outstanding skill of two specific employees: Erich Wimmer and Walter Wolf. Upon making their acquaintance, I gravitated to them immediately. Yet, as time progressed, it became obvious that Erich and Walter weren’t getting the support they needed to continue to deliver for me. The bottom line was that computational materials science and engineering were not a primary interest for either the original vendor or their successor.

So, at the APS March Meeting in Atlanta in 1999, Erich and I had dinner somewhere. Towards the end of the meal, Erich revisited the various problems he was having trying to get support for me and for his own development vision. I suspected he was leading up to some kind of a proposal, but I was way ahead of him. Halfway through his exposition of the outstanding issues, I asked, “Erich, why don’t you start your own company?” I could tell he was startled: I had answered the question he was leading up to asking, and he had gotten the answer he hoped for. Erich just smiled at first. I can’t say whether my statement of support was the key to Erich’s decision to move forward with his new venture.


Two intended collaborators, Paul Saxe and John Harris, had already started the company that became Materials Design, Inc. in San Diego. However, I am convinced that a major potential customer’s vote of confidence played a nontrivial role in Erich’s decision to start a new French company, Materials Design, SARL. This he did, together with Paul, John, and Jürgen Sticht. The European company then joined forces with the San Diego company, and the rest, as they say, is history. I was MDI’s first contract research customer in materials science.


So began your long association with MDI. What happened next?


GELLER: The first thing that needed to be done was to recover the capabilities that Walter had developed. But the heart of MDI’s new computational tool suite was to be the VASP electronic structure code out of the University of Vienna. Though Erich had been one of the original authors of the FLAPW code, the switch to VASP was a wise decision. The stable convergence and computational efficiency of VASP subsequently enabled numerous other developments which would have been far more difficult to implement using FLAPW. Yet VASP incurred very little if any sacrifice of theoretical rigor, especially after projector augmented waves (PAWs) were implemented. VASP’s principal architect, Georg Kresse, was subsequently involved directly in more than one of my Bettis-sponsored projects with MDI. Erich was the public face of the early company, listening to customers and fashioning modeling approaches to their needs. Walter Wolf expertly ran computations and wrote code, and Paul Saxe went about creating the all-important user interface – MedeA.


Erich’s vision for MDI was that of a small, versatile company dedicated to making first principles and atomistic computational tools widely available and accessible outside of specialized academic circles, and to augmenting the tool suite with expert software support and contract research options for especially challenging problems. I am an expert at formulating very challenging computational problems – just ask Erich – and as an NNL project sponsor I have made extensive use of MDI’s modeling expertise. Erich’s goals dovetailed perfectly with my own vision for the future of materials science at NNL. In particular, I recognized that the purpose of MedeA was not just computational efficiency, but human software user efficiency – my efficiency. Immersed as I was in engineering development issues, with limited time to invest in the care and feeding of computations, I needed a sophisticated, comprehensible user interface with intelligent defaults, appropriate execution warnings and monitoring capabilities, and a broad suite of model building and post processing utilities. These goals were not achieved overnight. They took probably hundreds of expert man-years at MDI to develop. But because NNL had a legitimate long-term interest in this critical technology transfer, I was able to make NNL a partner with MDI in this important undertaking. Thus, NNL sponsored the development of MDI’s optical properties and Fermi Surface Visualizer modules, the phonon computational module, the surface and interface builders, the automated computational convergence module, and a transmission electron microscopy simulator.


The convergence module takes the guesswork out of convergence questions, especially when modeling unfamiliar material systems. Especially useful for nonexpert users, it was an important innovation in the effective transfer of electronic structure computational tools to engineering settings. The phonon module, which enables computations of vibrational entropy and finite temperature free energy, effectively freed ab initio methods from Zero Temperature Prison, where they had been trapped since their inception. With this module, one can now obtain very reasonable estimates of phase transition temperatures and thermal expansion coefficients. The phonon module has added even more value in informing the rates of dynamic processes such as atomic diffusion, and in simulating inelastic neutron scattering spectra. The surface and interface builders have been invaluable from the beginning, and the Fermi Surface Visualizer is a powerful tool for exploring the optoelectronic properties of semiconductors. Currently, MDI is engaged in its most ambitious toolmaking collaboration with NNL of all, the Advanced Materials Simulation Engineering Tool (AMSET) Project. This project is directed at the creation of an automated machine-learned potential generator (MLPG), producing MLPs with near first principles accuracy. These potentials are needed to improve the fidelity and reliability of the large-scale simulations that support mesoscale modeling of material properties. Upon the completion of AMSET, if not before, the MLPG will be incorporated into MedeA. Along with it, we will have several new, first-of-their-kind MLPs for three important alloy systems.
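
For context, the quantity at the heart of such a phonon module is the harmonic vibrational free energy,

$$F_{\mathrm{vib}}(T) = k_{B}T\sum_{\mathbf{q},\nu}\ln\!\left[2\sinh\!\left(\frac{\hbar\omega_{\mathbf{q}\nu}}{2k_{B}T}\right)\right],$$

summed over phonon wavevectors $\mathbf{q}$ and branches $\nu$. It contains the zero-point energy, yields the vibrational entropy as $S_{\mathrm{vib}}=-\partial F_{\mathrm{vib}}/\partial T$, and, evaluated at several volumes in the quasi-harmonic approximation, gives the thermal expansion coefficients and phase transition temperatures mentioned above.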

That’s an impressive history. Now that you’ve joined the MDI team, what’s next for you and for the ongoing MDI-NNL collaboration?


GELLER: Well, the AMSET Project is in full swing, and we are already seeing that the project will provide new tools that we will be able to leverage for other customers. In fact, we are already engaged in such collaborations now. But more than that, MDI is moving aggressively into mesoscale modeling, and AMSET is an important stepping stone on that path. While many chemical and optoelectronic properties of materials can be modeled directly from the electronic length scale, the mechanical properties of materials, other than elastic coefficients, and the environmental degradation behavior of material components generally are controlled by physical processes occurring on longer length and time scales. For instance, as is well known, plastic deformation behavior in metals is dictated by the behavior of, and the interactions between, crystalline line defects – dislocations. For another example, surface corrosion processes are often dictated by the nature of the resulting corrosion film microstructures and the transport channels that may develop within them. These processes occur on the micro and mesoscales, on which individual electronic degrees of freedom can no longer explicitly be tracked. Different kinds of models – dislocation dynamics models, phase field models, crystal plasticity models, etcetera – are needed to predict material behavior on these scales. The role of first principles and atomistic modeling in such multiscale projects becomes one of providing essential inputs to the longer length and time scale models. I see my role at MDI in part as a continuation of the role I played at NNL, one part scientist and one part “imagineer.” Working closely with Erich Wimmer, I seek to help develop and implement effective multiscale physics programs to address customer problems that combine the most effective, state-of-the-art scientific approaches on each length and time scale involved.


MDI’s collaboration with NNL and Georgia Tech scientists on NNL’s Environmentally Influenced Crack Evolution Modeling (EICEM) Program is a recent case in point. As its name implies, the purpose of EICEM was to develop predictive, physics-based engineering models for aqueous corrosion fatigue crack growth in stainless steel and nickel-based alloys. As part of this program, a team at Georgia Tech was tasked by NNL to study the interactions of screw dislocations with discrete glide obstacles, such as vacancy nanoclusters, in face-centered cubic 304 stainless steel. For this purpose, Georgia Tech needed an appropriate atomic potential with which to approximate the properties of stainless steel. A potential for pure iron wouldn’t do, because pure iron is body-centered cubic as well as ferromagnetic. The closest Georgia Tech could come to what was actually needed was a pre-existing embedded atom method (EAM) potential for pure nickel. Nickel has the right crystal structure but a very different stacking fault energy – an important parameter in dislocation dynamics – than stainless steel, and hydrogen also behaves very differently in nickel. So, as the technical lead for EICEM at the time, I contracted with MDI to create a first-of-its-kind EAM potential for the quaternary Fe-Ni-Cr-H system, a much higher fidelity approximation than pure nickel to the crack tip environment of stainless steel immersed in water. This EAM potential was a major step forward for EICEM and a major accomplishment for MDI, but it also represented the practical limit of EAM potential development. AMSET and machine learning were the logical next step. That step is now being taken.
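
For readers unfamiliar with the formalism, an EAM potential writes the total energy as

$$E_{\mathrm{tot}} = \sum_{i}F_{i}\!\left(\sum_{j\neq i}\rho_{j}(r_{ij})\right) + \frac{1}{2}\sum_{i}\sum_{j\neq i}\phi_{ij}(r_{ij}),$$

where $\rho_{j}$ is the electron density contributed by neighbor $j$, $F_{i}$ is the energy of embedding atom $i$ in that density, and $\phi_{ij}$ is a pair interaction. A quaternary Fe-Ni-Cr-H potential thus requires consistent embedding, density, and pair functions for every species and every species pair, which is why it marked the practical limit of the framework.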

Thank you, Clint. A final question: do you have any quick words of advice for people thinking of applying computational materials science methods?


GELLER: Well, for scientists or engineers just starting out, I would recommend that they be ambitious, aim high, and work with the best. That is what I have always done anyway, and I always will! The developments of the last few years alone have been extraordinary. The accuracy of DFT can now be carried to large systems and long timescales using machine learning methods. This simply was not possible just a few years ago, and this development alone will allow tremendous insights to be obtained by the scientists and engineers who deploy these methods. Such developments, together with the continuous improvement in computational resources, will make this field a rich contributor to technological advancement for many years to come.


For engineers and scientists working in an engineering environment, I would add the following: Take ab initio methods as far as they can go, but then do not hesitate to make whatever empirical adjustments to the output may be necessary to maximize engineering value. I would urge scientists to remember that fidelity is a richer standard by which to judge the quality of one’s results than mere accuracy. The difference is that fidelity includes not only numerical accuracy and the physical validity of the specific energy functional one chooses, but also how faithful the chosen geometric model is to the actual physical system of interest. It is often the geometric model and its limitations that most limit the utility of computational results for engineering. This crucial consideration makes speed an accuracy issue! If a less accurate but less time- and resource-intensive computational approach allows one to analyze a more realistic geometric model, it often ends up being the better choice for the job. MedeA offers numerous computational options for fashioning an optimum approach to a problem.



 
 
 
