Turns out my G09 geometry optimization output files are split between listing the optimized internal coordinates (if the calculation began with a Z-matrix) and the Cartesian (xyz) coordinates. This split makes it difficult to extract geometries in an automated fashion. Several tools came up during a Google search, but they only pull out the xyz coordinates if G09 lists the final geometry in Cartesian format.
Surprisingly, cclib was nowhere near the top of my search results across multiple sets of search terms, though it eventually came up and looked like a perfect solution. My initial stab at installing it using:
$ sudo apt-get install python3-numpy cclib python3-cclib
was not met with resounding success: the cclib package did not provide much of anything useful (no ccget or ccwrite), and python3-cclib still lacked ccwrite. Moreover, the current Ubuntu 18.04 package is a rather ancient 1.3 (perhaps this explains the missing ccwrite).
My next step was finding the source for 1.6.2 and attempting to install that (which worked, but ccwrite barfed on itself when I tried the xyz option). This made me think it was not quite ready, and some Ubuntu pages listed 1.6-1 as the most recent stable release, so I started over with this slightly older version. Instead of installing manually, I thought it might be useful to try pip3 (my pip instance links to Python 2, so I had to be careful to tack 3s onto the end of most of these commands).
$ sudo apt-get install python3-setuptools python3-pip
$ tar xzvf cclib-1.6.tar.gz
$ cd cclib-1.6
$ pip3 install .
This worked like a charm (using -t to install cclib in a different location did not, and browsing cclib bug reports makes me think this feature is not currently working), and I now had ccget, cda, ccwrite, etc. in ~/.local/bin.
$ ~/.local/bin/ccwrite xyz test.log
produced a decent xyz file at test.xyz. Success!
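For log files where ccwrite misbehaves, the same extraction only takes a few lines against cclib's Python API (a minimal sketch, assuming cclib 1.5 or newer for cclib.io.ccread; the element table below only covers what I happen to need and would have to be extended):
#!/usr/bin/env python3
# Pull the final geometry out of a G09 log with cclib and write it as xyz.
import cclib.io

data = cclib.io.ccread("test.log")
symbols = {1: "H", 5: "B", 6: "C", 7: "N", 8: "O"}  # extend as needed
coords = data.atomcoords[-1]  # last geometry in the file, already in angstroms
with open("test.xyz", "w") as out:
    out.write("%d\n\n" % len(data.atomnos))
    for num, (x, y, z) in zip(data.atomnos, coords):
        out.write("%s %14.10f %14.10f %14.10f\n" % (symbols[num], x, y, z))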
Sunday, September 22, 2019
Saturday, September 14, 2019
Cooling X11SDV-12C-TLN2F
Unlike with the X10SDVs, for which I designed and procured custom heatsinks, I thought it would be worth looking at off-the-shelf HSFs for cooling the X11SDV boards. That being said, there are no HSFs available for BGA 2518 except through Supermicro (and maybe Cooljag at this point). One point worth mentioning: all of the geometry is different from the X10SDV, so none of the work I did before carries over. There is plenty of room, though, given all of these units are in 4U cases.
Full load constitutes 12 threads in MOLPRO for a DFT calculation, which gave the highest temperatures. Running stress from the bash shell came in at least 5 °C lower.
The baseline for the OEM heatsink (this was the non-+ version, so no active cooling here) was 70+ °C at full load with a 70 mm fan resting on the heatsink. Making a fan shroud and using a high-pressure Delta fan brought this down to 60-65 °C at full load (needless to say, this solution was rather loud). Additionally, replacing the standard goop with Arctic Silver 5 certainly did not hurt (it was probably worth a few degrees Celsius).
Now on to the off-the-shelf HSF: I went with a Noctua NH-L12S after reading through all of the other posts I could find, primarily the one below.
https://forums.servethehome.com/index.php?threads/cooling-the-cpu-x11sdv-4c-tln2f.22285/
This HSF is relatively low profile (in that it has 1" of clearance in a 4U case) and the heatpipes should not block any of the memory slots because they are fairly vertical (another option might be the Noctua NH-D9DX i4 3U). Looking at the included mounting hardware and the X11SDV heatspreader and mounting holes, I came to the conclusion that modifying them was not going to work, so began a voyage into making custom mounting brackets.
Using aluminum angle seemed like the way to go because:
1) aluminum is easy to work with using hand tools and common power tools.
2) the vertical bit of the angle should provide far more rigidity than a flat bar of aluminum.
The trick with fashioning custom brackets was to get both sets of holes in the right place relative to each other and the 3 mm x 40 mm section that had to be removed so the bracket did not hit the CPU heatspreader. Ideally, I would have good enough measurements to mark off all of the holes and such, which I would then machine using a grinder and drill. Reality set in and made me realize the HSF and brackets could not be placed and marked prior to removing the 3 mm x 40 mm section.
NOTE: The HSF base is not as large as the heatspreader, so the position of the holes will dictate if the gap is spread evenly on both sides.
I tried making these brackets 2 ways, and will go into the first method because it turned out better (a quick layout check in Python follows the list).
1) Cut 85 mm of aluminum angle (3/4" by 1/8").
2) Mark the center so the removed material and sets of holes span equally on either side of this mark.
3) Make 6-32 (M3 would also work) tapped holes 4 mm in from the edge of the angle and 48 mm apart and make sure the HSF can bolt onto the brackets.
4) Mark drill spots 69.25 mm apart, again symmetric about the center line from (2), and 3 mm in from the edge (these will be for the larger shoulder screws going into the HSF support on the mobo).
- Overdrilling these holes (15/64") should be just fine because the retaining rings will keep them in place, and larger holes will give you room to move the shoulder screws around to line up with the backplane holes.
5) Grind out 3 mm x 40 mm on each bracket (this got infinitely easier once I used an electric grinder instead of a file).
6) Use a screwdriver to pry off the retaining washers on the OEM heatsink and install them in the larger outer holes.
7) Use a 1/4" to 3/8" drill bit to make 4 holes through the HSF fins so you can get a screwdriver through them to tighten the shoulder screws (this looks pretty gross and there is likely a better way to accomplish this step).
8) Add heatsink compound and install on motherboard.
9) Install the fan (the CPU will get up to 80 C without it under load).
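Before marking anything, it is worth double-checking that the cutout clears the tapped holes; a quick sketch using the dimensions from the steps above (positions measured from one end of the 85 mm bar):
#!/usr/bin/env python3
# Sanity-check the bracket layout before committing to metal.
bar = 85.0
center = bar / 2                                     # 42.5 mm
tapped = [center - 48.0 / 2, center + 48.0 / 2]      # HSF bolt holes
shoulder = [center - 69.25 / 2, center + 69.25 / 2]  # mobo shoulder screws
cutout = (center - 40.0 / 2, center + 40.0 / 2)      # relief for heatspreader

print("tapped holes at   %.3f and %.3f mm" % tuple(tapped))
print("shoulder holes at %.3f and %.3f mm" % tuple(shoulder))
print("cutout spans      %.1f to %.1f mm" % cutout)
# make sure the ground-out section clears the tapped holes
assert cutout[0] > tapped[0] and cutout[1] < tapped[1]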
This post has gone on far longer than expected, so I am going to wrap up with some pics that are hopefully helpful. The end result was temps at full load of 50 C and nearly silent operation.
Tuesday, July 30, 2019
X11SDV U.2 boot challenges
After switching the head node over to an X11SDV-4C-TLN2F board (my original X10SDV-4C did not have 10G ports, and I wanted the PCI-E slot for a graphics card), I figured putting together a couple compute nodes using X11SDV-12C-TLN2F models would be a good way to get a larger number of cores (all of the previous compute nodes have 8) and to test out AVX-512.
The one hitch to using the new X11SDV offerings is the lack of an M.2 slot, with a U.2 port + OCuLink in its place. Said U.2 port did not work with M.2 adapters, so I got a couple Intel Optane 905P drives with U.2 connectors. The first board worked flawlessly, and I used it to set up both Optane drives. The second X11SDV would never see the Optane drive no matter what BIOS options I tried (EFI only and dual were both attempted as first measures), though the USB thumb drive with Ubuntu 18.04.2 server worked just fine. Even more maddening, the Ubuntu installer saw the U.2 drive without issue; only the BIOS was myopic when it came to detecting this boot drive.
After much fussing (and swearing), I finally remembered the most recent version of the BIOS was 1.1a, but the second board was only blessed with 1.0b. Why these two boards, from the same vendor and bought at the same time, had different BIOS versions is beyond me. Long story short, updating the BIOS on the second board to 1.1a solved the boot issue.
Thursday, September 14, 2017
Cooling X10SDV (Xeon D-1540/1541)
This all began with the purchase of an X10SDV-TLN4F-O for Gaussian09 and MOLPRO calculations (using tmpfs, so IO should not be limiting). When running calculations on all 8 cores I noticed that the cores were rarely above 2100 MHz and the temperatures were in the 60-75 °C range. For comparison, my two X99 systems with water-cooled Core i7 5960X processors rarely get above 47 °C and always ran at the maximum turbo speed of 3500 MHz.
Time for a couple related notes. I changed the governor to performance using cpufreq and intel_pstate. Disabling pstate was not helpful: so far as I can tell the BIOS does not deal properly with turbo, since the maximum speed was 2100 MHz when I tried that. This led me to believe thermal throttling was causing the issues.
What to do about the temperatures? Would changing the heatsink compound be enough? Should I just change the fan? Do I need a complete HSF overhaul? Finally, how would I test and assess the performance of different solutions? The short answer: I wanted to try everything, and HPCC is a decent stand-in for Gaussian09 and MOLPRO because its DGEMM and Linpack sections pretty much represent the computations in those programs (and previous testing with AMD systems showed it produced similar temperatures). HPCC with N = 24000 provided 25 minutes of run time (most of that in the Linpack section). Ambient temperatures fluctuated 1-2 °C and were measured using a calibrated thermistor accurate to better than 0.1 °C. The CPU temperatures were measured using the built-in temperature sensor, and all reported temperatures are differentials (CPU minus ambient). A bash script recorded the CPU temperatures every minute.
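The actual logger was a throwaway bash script; a rough Python equivalent is below (the hwmon path is a guess and varies by machine, so treat it as a placeholder and check /sys/class/hwmon/*/name for coretemp):
#!/usr/bin/env python3
# Sample the CPU temperature once a minute and append it to a log.
import time

HWMON = "/sys/class/hwmon/hwmon0/temp1_input"  # hypothetical path

with open("cpu_temps.log", "a") as log:
    while True:
        with open(HWMON) as f:
            millideg = int(f.read().strip())  # kernel reports millidegrees C
        log.write("%s %.1f\n" % (time.strftime("%Y-%m-%d %H:%M:%S"),
                                 millideg / 1000.0))
        log.flush()
        time.sleep(60)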
As for the heatsink, Alpha Nova Tech produced some custom 70 mm x 70 mm x 40 mm aluminum heatsinks having the same footprint as the stock heatsink (so I could reuse the screws and springs). Next up came the fan ducts, which I designed using OnShape. The two basic geometries involve one fan blowing down or a push-pull arrangement (at 50° from horizontal). These were printed using HIPS on a Lulzbot Mini.
Enough details, now on to the results.
HSF | Max ΔT (°C) | Avg ΔT (°C) |
Stock | 42 | 38 |
Stock-AS5 | 39 | 37 |
60 mm Everflow | 23 | 20 |
60 mm Fractal | 32 | 31 |
60 mm Panaflo | 28 | 25 |
60 mm San Ace | 30 | 28 |
60 mm YS Tech | 28 | 25 |
60 mm Everflow | 24 | 22 |
60 mm Fractal | 32 | 31 |
60 mm San Ace | 56 | 53 |
60 mm YS Tech | 55 | 53 |
80 mm Akasa | 29 | 27 |
80 mm Noctua | 32 | 30 |
80 mm Sunon | 23 | 21 |
80 mm Vantec | 32 | 30 |
80 mm Sunon | 24 | 22 |
92 mm Everflow | 28 | 26 |
92 mm Gelid | 30 | 27 |
92 mm Noctua | 31 | 29 |
PP 60mm Everflow/San Ace | 21 | 19 |
PP 60 mm Fractal/Noctua | 29 | 26 |
PP 60 mm Noctua/Fractal | 37 | 35 |
PP 60 mm Noctua/San Ace | 30 | 27 |
PP 60 mm San Ace/Everflow | 23 | 21 |
PP 60 mm San Ace/Fractal | 27 | 26 |
PP 60 mm San Ace/Noctua | 27 | 25 |
PP 70 mm Everflow15/Everflow15 | 23 | 20 |
PP 70 mm Everflow15/Everflow25 | 23 | 21 |
PP 70 mm Everflow25/Everflow15 | 23 | 21 |
PP 70 mm Everflow25/Everflow25 | 23 | 21 |
Ok, so that tells us a couple things. The heatsink compound does reduce the temperature by a few degrees. The custom heatsink lowers the temperatures by approximately 10 °C, and the fan choice provides another 10 °C drop.
High static pressure fans reduce the temperature more than high flow rate fans (no real surprise here). Hence, the 60 mm San Ace and YS Tech, 80 mm Sunon, and 92 mm Everflow are the winners for single-fan setups. The push-pull setup definitely lowers the temperatures drastically, especially the maxima. The Everflow 25 mm fans had the lowest noise output.
The losers were the Noctua and Gelid fans, though these were also the lowest noise models.
During all of the tests the processor frequency (as monitored by cat /proc/cpuinfo | grep MHz) remained near 2600 MHz. Returning to the Gaussian09 and MOLPRO calculations still showed throttling, though not as bad as before (on average the frequencies remained around 2400 MHz). Hmmmmm...perhaps it is not thermal throttling after all, but at least I feel much better about running these X10SDV units full throttle.
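For anyone who wants more than eyeballing the cpuinfo output, here is a small sketch that samples the per-core frequencies and reports the spread (standard library only):
#!/usr/bin/env python3
# Sample per-core clock speeds from /proc/cpuinfo and report the average,
# handy for spotting throttling during a long run.
import time

def core_mhz():
    freqs = []
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("cpu MHz"):
                freqs.append(float(line.split(":")[1]))
    return freqs

while True:
    freqs = core_mhz()
    print("%s avg %.0f MHz (min %.0f, max %.0f)" %
          (time.strftime("%H:%M:%S"), sum(freqs) / len(freqs),
           min(freqs), max(freqs)))
    time.sleep(10)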
Thursday, August 6, 2015
Getting useful XYZ geometries from G09 and transforming them (to angstroms)
I finally got to the point where optimizing geometries of many, many molecules led me to need more automation to create input files for the subsequent single-point calculations. Methinks this is as good a time as any to learn python (and by learn I mean cobble together something functional, but ugly). Ok, with that truth out of the way, on to the goal of this exercise. I wanted to optimize geometries in G09, export the geometry (some of the geometry optimizations were in cartesian space and some used an input Z-matrix, so XYZ coordinates are the common output), and use the exported geometries to calculate single-point energies using MOLPRO.
1) First hurdle: getting useful XYZ coordinates from G09 (also known as, why is there no easy way to grep a final set of cartesian coordinates from the log file?). There are lots of punch options and IOPs, but none that dump the geometry to the log file. This means using "punch=coord" and dealing with the fort.7 file that is generated. Here is the line from my bash script that submits all of the geometry optimizations.
$ for i in $( ls *pbe1pbe-vtz*.com ) ; do echo ${i}; NAME=$(basename $i ".com" ) ; /cluster/software/g09/g09 <./${NAME}.com >./${NAME}.log ; mv fort.7 ${NAME}.xyz ; done
2) Ok, that really was not so terrible, and I will stop complaining now. Second hurdle: transforming the fort.7 file into something useful, because it writes the XYZ coordinates using the atomic number (instead of the associated element abbreviation) and the units are bohr. The basic parts of the python script that follows are reading and parsing the input file (this includes substituting "C" for "6", etc.), finding the center of mass, shifting all of the atoms so the center of mass is at the origin, converting the distances to angstroms, and writing a new XYZ file. HUGE NOTE: Any improvements would be greatly appreciated.
#!/usr/bin/env python
# Convert a G09 fort.7 punch file (atomic number then x, y, z in bohr,
# with Fortran D exponents) into an XYZ file in angstroms, recentered
# so the center of mass sits at the origin.
from sys import argv

ang_per_bohr = 0.5291772109217

script, filename = argv
infile1 = open(filename, "r")  # the fort.7-style file named on the command line
symbols = {1: "H", 5: "B", 6: "C"}  # atomic number to symbol; extend as needed
geom1 = []
count = 0
for temp_line in infile1:
    line = temp_line.strip().split()
    atom = symbols[int(line[0])]  # a KeyError here means symbols needs a new entry
    # Fortran writes exponents as D, python wants E
    parsed_line = [atom, int(line[0]),
                   float(line[1].replace("D", "E")),
                   float(line[2].replace("D", "E")),
                   float(line[3].replace("D", "E"))]
    count += 1
    geom1.append(parsed_line)
infile1.close()
# calculate the center of mass in x, y, and z directions (the atomic
# number stands in for the mass, so strictly this is the center of
# nuclear charge, which is plenty good for recentering)
mx = my = mz = 0.0
mass_total = 0.0
for i in range(0, count):
    mx = mx + geom1[i][1] * geom1[i][2]
    my = my + geom1[i][1] * geom1[i][3]
    mz = mz + geom1[i][1] * geom1[i][4]
    mass_total = mass_total + geom1[i][1]
com1 = [mx / mass_total, my / mass_total, mz / mass_total]
# shift all atoms so the center of mass is at 0,0,0
geom1_shifted = [[0 for j in range(0, 3)] for i in range(0, count)]
for i in range(0, count):
    geom1_shifted[i][0] = geom1[i][2] - com1[0]
    geom1_shifted[i][1] = geom1[i][3] - com1[1]
    geom1_shifted[i][2] = geom1[i][4] - com1[2]
# overwrite the input file with the recentered geometry in angstroms
outfile1 = open(filename, "w")
outfile1.write("%d\n" % (count))
outfile1.write("\n")
for i in range(0, count):
    outfile1.write("%s %14.10f %14.10f %14.10f\n" %
                   (geom1[i][0],
                    geom1_shifted[i][0] * ang_per_bohr,
                    geom1_shifted[i][1] * ang_per_bohr,
                    geom1_shifted[i][2] * ang_per_bohr))
outfile1.close()
And the bash line:
$ for i in $( ls c6h7-int*ub3lyp-6311ppg*.xyz ) ; do echo $i ; python bohr_to_ang.py ./${i} ; done
3) Now that we have that out of the way all we need to do is create a MOLPRO input file. The top part of my template is below, along with the bash line I use to create all of the input files for a given set of geometries (usually belonging to a particular method/basis set).
***,template
memory,800,M
gthresh,oneint=1.d-14,twoint=1.d-14,zero=1.d-14
angstrom
symmetry,nosym
geomtyp=xyz
geom={
}
basis=6-31G*;
{multi;canon,3100.2;
occ,22;closed,21}
basis=6-311G**;
{multi;canon,3101.2;
occ,22;closed,21}
basis=aug-cc-pvtz;
{multi;canon,3102.2;
occ,22;closed,21}
basis={
default,vtz-f12
...
$ for i in $( ls geom-method-tests/c6h7-int*ub3lyp-6311ppg*.xyz ) ; do outname="$( basename $i "-a.xyz" )-uccsdtf12-vtzf12-ad.inp" ; echo $outname; cp c6h7-template-uccsdtf12-vtzf12-ad.inp $outname ; tail -n13 $i >tmpfile ; sed -i -e '/geom={/r tmpfile' $outname ; done
This last bash line uses the bulk of the coordinate filename and concatenates the single-point method and basis set onto the end when making the MOLPRO input file name. I then use sed to put the geometry into the newly minted template just after geom={. Voila!
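For the sed-averse, the same stamping-out step can be done in Python; a sketch that mirrors the bash line above (same template name, same filename pattern, and the same 13-line geometry assumption):
#!/usr/bin/env python3
# Copy the MOLPRO template for each xyz file and splice the last
# 13 lines (13 atoms in C6H7) in right after "geom={".
import glob

template = open("c6h7-template-uccsdtf12-vtzf12-ad.inp").readlines()
for xyz in glob.glob("geom-method-tests/c6h7-int*ub3lyp-6311ppg*.xyz"):
    coords = open(xyz).readlines()[-13:]
    outname = xyz.split("/")[-1].replace("-a.xyz", "") + "-uccsdtf12-vtzf12-ad.inp"
    with open(outname, "w") as out:
        for line in template:
            out.write(line)
            if line.startswith("geom={"):
                out.writelines(coords)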
Thursday, September 11, 2014
Installing 7zip on CentOS 7.0
CentOS does not come with 7zip installed, and it is not even readily available on the installation media or default repository. Looks like the next step is to tell yum about another repository and install 7zip from there. Here we go:
# wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
# rpm -ivh rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
# yum install p7zip
That's it (other than typing "y" when asked if you really want to install 7zip), and now you should have access to the binary, 7za.
Wednesday, September 10, 2014
Inconsistent booting on Intel 5960X/Asus X99-deluxe
I just put together a 5960X node because 8 cores and AVX2 should do very well at electronic structure calculations (or anything using matrix-matrix and matrix-vector multiplication). I ran into a problem after it had run flawlessly for a day with half of the RAM. I added in the rest of the RAM (G.Skill Ripjaws DDR4 2400, 8 x 8 GB) and two more SSDs, and then it would no longer POST. After removing the drives and the RAM it was fine, but various combinations of the new hardware did not work, and the Qcode would stop at 00 (not used) or Ad (not even listed). I found out that I had routinely bumped the video card while putting in the RAM (this is not in a case, so nudging the card can lead to poor contact), and it was this improper seating of the vid card in its PCI-E slot that was causing the boot process to hang before getting to POST.