What’s New in MedeA® version 2.7?
- OS Support and Computational Efficiency
- VASP version 5.2.11
OS Support and Computational Efficiency
MedeA® version 2.7 provides substantial enhancements on many fronts, but perhaps the most notable change is support for the MedeA® graphical interface on the Linux operating system. You now have the option of running the graphical interface, the databases, and the JobServer and TaskServer on either Linux or Windows, in any combination. We have long supported Linux as an operating system for computational servers, but native Linux support for the MedeA® graphical interface adds tremendous flexibility. (In some organizations, for example, Linux and Windows administration are the responsibility of different departments, and this broader platform support gives customers greater freedom in configuring their computational resources.) If you have Linux workstations that you would like to put to use in atomistic or first-principles simulation with MedeA®, please let us know!
The advantages of running the MedeA® graphical interface on Linux will be clear to everyone who prefers this OS as their primary computing environment. Linux now provides a mature, standardized graphical desktop without the need to install hand-picked drivers, which makes it easy to run MedeA®'s graphical interface with full hardware acceleration through OpenGL. We take care of connecting the databases to MySQL, and the look and feel of the graphical user interface under Linux is identical to that under Windows, letting you focus on the science possible with MedeA®.
Running the JobServer under Linux opens the door to greater efficiency: placing the JobServer on the head node of a computing cluster allows tighter integration and faster response times. In recent years we have seen excellent results from MedeA® modules such as Phonon, MT, and Transition State Search, which use intelligent coarse-grained parallelization to maximize the efficiency and throughput of simulations. Placing the JobServer inside the cluster allows faster communication and is often more convenient, since no external Windows computer is required.
Streamlined Upgrade Process
The same upgrade program works under either Windows or Linux, so TaskServers can be upgraded as easily as before. Simply start the upgrade program, connect to Materials Design® to fetch the newer components, and rest assured that we take care of all the nitty-gritty details, such as Unix file permissions, while keeping all your site-specific adaptations in place.
With all this talk about Linux, we have not forgotten Windows. Did we mention that we support Windows XP, Vista, and Windows 7? MedeA® comes with all required components, and with release 2.7 we have fundamentally simplified the installation of the Windows SQL server that powers our JobServer and MedeA®. Most computers now have multiple cores, so we made MPI a one-click install to fully harness your computing power: you will notice the difference when running VASP, Gibbs, or LAMMPS in parallel. And as you know, MedeA® uses coarse-grained parallelization of your simulations to maximize the efficiency of your available compute resources by every means possible. You spend your time thinking about science and your projects, while MedeA® increases your productivity on every level.
Windows HPC and Unix Queuing Systems
If you have access to a Windows cluster running HPC, our TaskServer can handle that as well. Just install the TaskServer on the head node and submit your calculations to HPC; your only surprise will be that results come back faster than expected. The HPC connector is as flexible as our support for PBS, TORQUE, LSF, GridEngine, and SLURM, and Materials Design's support team has a proven track record of integrating queuing systems with MedeA®.
Moving the JobServer
You want to migrate your JobServer from its existing location to the head-node of your Linux or Windows cluster?
We can help! We will guide you through installing the Linux JobServer and then importing all existing jobs and results from the old JobServer, so that all your computed band structures, trajectories, phonon dispersion plots, and so on are available from the new one. Just point MedeA® at your new JobServer and you will probably not even notice that a new JobServer is in place. You do not even have to contact support to get a license, but we are happy to assist you in migrating your valuable results and making sure that nothing gets lost.
VASP version 5.2.11
MedeA® version 2.7 incorporates the new VASP version 5.2.11, which is fully supported on 64-bit architectures. This version of VASP incorporates feedback from hundreds of users following the groundbreaking release of VASP 5.2. The user experience is greatly enhanced, with notable improvements when running in parallel and on multi-core clusters.
While it is great to install MedeA® more easily and run calculations faster, it is even better to easily build the systems that you need to simulate. To that end, we have made significant changes to the builders in MedeA® and also added a number of specialized builders:
We added the ability to “passivate” a molecule that has one or more dangling bonds. This is very useful when building nanostructures, but also quite handy in its own right. For example, if you want perfluorobutane, just build the four-carbon backbone and passivate with fluorine; it is a lot easier than clicking ten times to put the fluorines in by hand.
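Geometrically, passivation amounts to placing each capping atom opposite the average direction of the atom's existing bonds. Here is a minimal sketch of that idea; `cap_position` is our own hypothetical helper, not the MedeA® builder's API, and the 1.35 Å C–F bond length is an assumption for illustration:

```python
import numpy as np

def cap_position(atom, neighbors, bond_length=1.35):
    """Return coordinates for a capping atom (e.g. F) bonded to `atom`.

    The cap is placed along the direction opposite the average of the
    unit vectors pointing from `atom` to its bonded neighbors.
    Hypothetical helper, not the actual MedeA builder code.
    """
    atom = np.asarray(atom, float)
    dirs = np.asarray(neighbors, float) - atom       # bond vectors
    dirs /= np.linalg.norm(dirs, axis=1)[:, None]    # normalize each bond
    mean = dirs.mean(axis=0)                         # average bond direction
    return atom - bond_length * mean / np.linalg.norm(mean)
```

For a carbon at the origin bonded to a single neighbor along +x, the cap lands at (-1.35, 0, 0), pointing away from the existing bond.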
Nanotubes are definitely not easy to build by hand, but they are ideal for a computer, and fun too. We have provided a builder that does it all. It can create periodic systems with the shortest repeat unit for nested tubes in armchair or zigzag configurations, or a single chiral nanotube. If you prefer nonperiodic systems, you can have nested chiral tubes as well, with a specified length. In either case, you can easily pack the tubes in a simple square or hexagonal arrangement, and for nonperiodic tubes you can cap the ends with anything you like, using the new passivate feature in the molecular builder. We work out the gap between nested tubes and all the other little details, so you don't have to reach for your calculator. And of course you can specify the elements, so boron nitride is as easy as carbon.
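For the curious, the geometry behind "we work out the gap" is compact: an (n, m) tube rolled from a sheet with lattice constant a (about 2.46 Å for graphene) has radius r = a·√(n² + nm + m²)/2π. A quick sketch, using our own helper rather than the builder's internals:

```python
import math

def nanotube_radius(n, m, a=2.46):
    """Radius in angstroms of an (n, m) nanotube.

    `a` is the rolled sheet's lattice constant (about 2.46 A for
    graphene). Armchair tubes are (n, n); zigzag tubes are (n, 0).
    """
    return a * math.sqrt(n * n + n * m + m * m) / (2.0 * math.pi)

# Wall-to-wall gap of a nested (5,5)@(10,10) carbon pair:
gap = nanotube_radius(10, 10) - nanotube_radius(5, 5)
```

The (5,5)@(10,10) gap comes out near 3.39 Å, close to the 3.35 Å interlayer spacing of graphite, which is why that pairing nests comfortably.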
Another thing that is challenging to build by hand is a nanoparticle. Say you want a 20 Å sphere of gold capped with thiol groups, just to make things interesting. You need to start with a large enough chunk of material, so begin with a supercell bigger than your final particle. Designing the builder this way means we don't tie your hands to pure compounds or anything like that: if you can make a chunk of material as the base, we can carve it into a sphere or cylinder, and you can then cap it with the passivation tool in the molecular builder. And shortly, as we complete our basic geometry class, we'll give you more shapes for your toolbox.
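The carving step itself is simple geometry: keep every atom of the supercell that falls inside the target shape. A minimal sketch for a sphere (a hypothetical helper, not the builder's actual code):

```python
import numpy as np

def carve_sphere(coords, radius, center=None):
    """Keep only the atoms of `coords` (an N x 3 array) inside a sphere.

    If no center is given, the sphere is carved about the geometric
    center of the supercell.
    """
    coords = np.asarray(coords, float)
    if center is None:
        center = coords.mean(axis=0)
    keep = np.linalg.norm(coords - center, axis=1) <= radius
    return coords[keep]
```

Carving a radius-1.5 sphere about the origin of a 5x5x5 simple-cubic block with unit spacing keeps 19 sites: the center, its 6 nearest neighbors, and the 12 second neighbors at distance √2.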
Special Quasirandom Structures
Another useful but specialized builder that we have implemented creates special quasirandom structures. Now if that isn't a mouthful, we don't know what is! But the idea is quite elegant, and if you are interested in random alloys, this is the way to go. With brute force, you could generate many random structures for the alloy (an ensemble, as it is called), calculate the properties you want for each, and average them appropriately to get an ensemble average. Or you can use a quasirandom structure as a shortcut. Quasirandom structures are special, small structures that closely mimic the correlation functions of a truly random alloy up to, say, fourth order. With a bit of luck, the properties you calculate for the single quasirandom structure match the ensemble average that you would otherwise have computed painfully by brute force. Of course, if your property depends on the fifth-order correlation function you might be out of luck, but in that case the brute-force approach would probably be so computationally demanding that it would be out of reach anyway.
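The idea is easy to sketch for a binary A/B alloy: encode each site as a spin σ = ±1, and note that a truly random alloy with B fraction x has pair correlation (2x − 1)² at every separation. A candidate structure is then scored by how closely its short-range correlations match that target. The toy 1D version below illustrates the scoring only; the production method matches multisite correlation functions on real 3D lattices:

```python
def pair_corr(spins, d):
    """Average spin-spin correlation at separation d on a periodic ring."""
    n = len(spins)
    return sum(spins[i] * spins[(i + d) % n] for i in range(n)) / n

def sqs_score(spins, x, dmax=3):
    """Sum of squared deviations from the random-alloy pair correlation.

    `x` is the fraction of +1 ("B") sites; a perfect quasirandom
    structure would score zero for all separations up to `dmax`.
    """
    target = (2 * x - 1) ** 2
    return sum((pair_corr(spins, d) - target) ** 2
               for d in range(1, dmax + 1))
```

The lower the score, the better a small cell mimics the random alloy, which is exactly why one cheap quasirandom cell can stand in for an expensive ensemble.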
We hid this little gem in the Edit menu. It currently works on periodic systems, where it randomly substitutes atoms of one element for another. Remember the brute-force method we were just talking about? This is one way of tackling it. The tool also knows about isotopes, so you can make isotopically pure compounds, use the average mass, use the natural mixture of isotopes, or create arbitrary mixtures. Why the interest in isotopes? Thermal conductivity can depend on them quite dramatically: in materials such as silicon and diamond, the thermal conductivity of isotopically pure samples can exceed that of the natural isotopic mix by a factor of 5 or even more.
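A brute-force substitution of the kind this tool performs is straightforward to sketch. The helper below is hypothetical, not the actual tool (which also handles isotope masses); a fixed seed keeps each ensemble member reproducible:

```python
import random

def random_substitute(symbols, old, new, fraction, seed=0):
    """Replace a random `fraction` of the `old` atoms with `new`.

    `symbols` is a list of element symbols, one per site. The seed
    makes the pseudorandom choice reproducible, so the same ensemble
    member can be regenerated later.
    """
    rng = random.Random(seed)
    sites = [i for i, s in enumerate(symbols) if s == old]
    chosen = rng.sample(sites, round(fraction * len(sites)))
    out = list(symbols)
    for i in chosen:
        out[i] = new
    return out
```

Substituting 25% germanium into a 100-atom silicon cell, for instance, yields exactly 25 Ge and 75 Si sites at randomly chosen positions.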
Under the Edit menu, you will find a new item to merge two systems together. This is useful if you want a molecule on a surface, for example.
If you work with LAMMPS and forcefields, you know this is an area where you can never have enough parameters. We feel the same, so we put considerable effort into improving the existing forcefields and adding new ones. We've added all the published parameters that we could find to OPLSAA, and also added quite a lot of parameters to pcff+, our own high-accuracy forcefield: parameters for small molecules such as SO₂, NO₂, O₂, N₂, NO, CO, CO₂, CS₂, H₂, He, Ne, Ar, Kr, and Xe; reoptimized parameters for alkanes, alkenes, and alkynes; parameters for monovalent ions such as Li+, Na+, K+, Rb+, Cs+, F-, Cl-, Br-, and I-; and parameters for sulfides, thiols, aldehydes, ketones, polyureas, and, as temperatures are rising, water, of course. We also added the BKS forcefield for silicates and aluminophosphates, and parameters for GaN to both the Tersoff and Stillinger-Weber forcefields.
Whew! We've done a lot on Gibbs, too much to detail here, but some highlights are: the ability to restart a calculation to run longer; calculation of more properties, such as the chemical potential and fugacity in multicomponent monophasic systems and the isosteric heat of adsorption in μVT ensembles; visual analysis of the convergence of the main outputs (density, pressure, ...); viewing snapshots of configurations of fluid molecules with or without a microporous adsorbent; and more control over move probabilities. In addition, there have been a number of enhancements to the AUA and UA-Trappe forcefields, including new parameters for olefins, alcohols, ethers, ketones, aldehydes, thiols, and sulfides.
We also got a start on MOPAC. It is a big code that does many different things, so we couldn't cover everything, but we had to start somewhere. Through MedeA®, you can handle periodic and molecular systems, calculate energies, and optimize structures. You can also calculate and work with IR/Raman spectra: MOPAC calculates the frequencies and normal modes, and MedeA® displays the spectrum. You can click on a line and animate the corresponding normal mode to see what it looks like, and you can read in your experimental spectra to match against the calculated ones. This helps identify the modes and also check that the compound is what you think it is. As we said, this only scratches the surface of what MOPAC can do, but stay tuned!