7.26. Geometry Optimization

ORCA is able to calculate equilibrium structures (minima and transition states) using a quasi-Newton update procedure with the well-known BFGS update [67, 241, 399, 762, 763, 764], the Powell update or the Bofill update. The optimization can be carried out in either redundant internal coordinates (recommended in most cases) or Cartesian displacement coordinates. As initial Hessian the user can choose between a diagonal initial Hessian, several model Hessians (Swart, Lindh, Almloef, Schlegel), an exact Hessian and a partially exact Hessian (the latter two recommended for transition state optimization) for both coordinate types. In redundant internal coordinates several options exist for the type of step to be taken. Constraints can be defined in two different ways: either directly (as bond length, angle, dihedral or Cartesian constraints), or by defining several fragments and constraining the fragments internally and with respect to other fragments. The ORCA optimizer can also be used as an external optimizer, i.e. with the energy and gradient calculations carried out by an external program (see section 7.26.6).

7.26.1. Input Options and General Considerations

The use of the geometry optimization module is relatively straightforward.[1]

%method RunTyp Opt # use geometry optimization.
                   #(equivalent is RunTyp=Geom)
        end

# or simply "! Opt" in the keyword line

# details of the optimization are controlled here
%geom 
      MaxIter 50  # max. number of geometry iterations
                  #  (default is 3N (N = number of atoms), at least 50 )
      # coordinate type control
      coordsys redundant     # redundant internal coords (default)
               cartesian     # Cartesian coordinates
      # fallback option to Cartesian step if internals fail
      cartfallback true
      # transition state (TS) optimization
      TS_search EF      # Switch on TS search, EF means
                        #  "eigenvector following"
                        #  alternatively use "! OptTS"
      TS_Mode {M 0} end # Choose the mode to follow uphill in the
                        #  TS optimization. {M X}: eigenvector of
                        #  the Hessian with the X-th lowest eigenvalue
                        #  (counting starts at zero) (default: X=0)
      # Instead of a mode choose an internal coordinate strongly
      #  involved in the eigenmode followed uphill
      TS_Mode {B 0 1} end      # bond between atoms 0 and 1  or
      TS_Mode {A 2 1 0} end    # angle between atoms 2, 1 and 0  or
      TS_Mode {D 3 2 1 0} end  # dihedral of atoms 3, 2, 1 and 0
      # add or remove internal coordinates from the automatically
      #  generated set of redundant internal coords
      modify_internal
          { B 10 0 A }      # add a bond between atoms 0 and 10
          { A 8 9 10 R }    # remove the angle defined
                            #  by atoms 8, 9 and 10
          { D 7 8 9 10 R }  # remove the dihedral angle defined
          end               #  by atoms 7, 8, 9 and 10
      # constrain internal coordinates:
      Constraints
          { B N1 N2 value C }       # the bond between N1 and N2
          { A N1 N2 N3 value C }    # the angle defined by N1, N2
                                    #  and N3
          { D N1 N2 N3 N4 value C } # the dihedral defined by N1,
                                    #  N2, N3 and N4
          { C N1 C }                # the cartesian position of N1
          { B N1 * C}       # all bonds involving N1
          { B * * C}        # all bonds
          { A * N2 * C }    # all angles with N2 as central atom
          { A * * * C }     # all angles
          { D * N2 N3 * C } # all dihedrals with N2 and N3 as
                            #  central atoms
          { D * * * * C }   # all dihedrals
      end
      # scan an internal coordinate:
      Scan B N1 N2 = value1, value2, N end
        # perform constrained optimizations with varying N1-N2-
        #  distance from value1 up to value2 in N steps;
        #  works as well for angles (use A N1 N2 N3) and for
        #  dihedrals (use D N1 N2 N3 N4)
      Scan B N1 N2 [value1 value2 value3 ... valueN] end
        # perform constrained optimizations with N1-N2-distances
        #  as given in the list;
        #  works as well for angles (use A N1 N2 N3) and for
        #  dihedrals (use D N1 N2 N3 N4)
      # perform a simultaneous multidimensional scan;
      # possible for up to 3 different scan parameters, which must
      # all have the same number of scan steps
      simul_Scan true # default false
      fullScan true   # if !ScanTS is requested, fullScan ensures
                      #  that the relaxed surface scan is fully
                      #  carried out before the TS optimization is
                      #  started (Default is false)
      # fragment optimization:
      #  1. all atoms have to belong to a fragment
      #  2. you have to connect the fragments
      ConnectFragments
          {1 2 C}      # constrain the internal coordinates
                       #  connecting fragments 1 and 2
          {1 2 C N1 N2}# constrain the internal coordinates
                       #  connecting fragments 1 and 2, the
                       #  fragments are connected via atoms
                       #  N1 and N2
          {1 3 O}      # optimize the internal coordinates
                       #  connecting fragments 1 and 3
          {1 3 O N1 N2}# optimize the internal coordinates
                       #  connecting fragments 1 and 3, the
                       #  fragments are connected via atoms
                       #  N1 and N2
      end
      #  3. you can constrain the fragment internally
      ConstrainFragments  # constrain all internal coordinates
          { 1 }           #  containing only atoms of fragment 1
      end
      # optimize hydrogens
      optimizeHydrogens true
        # in the context of a normal optimization all internal
        #  coordinates not involving any hydrogens are constrained
        # in the context of a fragment optimization all internal
        #  coordinates involving hydrogens are optimized (also in a
        #  constrained fragment)
      # freeze the hydrogen positions with respect to the
      #  heteroatoms
      freezeHydrogens true
      # invert the defined constraints, i.e. optimize the
      #  constraints and constrain the remaining coordinates
      # this only works for the redundant internal coordinates
      # Cartesian coordinates are not affected by invertConstraints
      invertConstraints true
      # step type control
      Step     qn       # quasi-Newton step
               rfo      # Rational function step (Default for !Opt)
               prfo     # partitioned RFO step (Default for !OptTS)
      # Step size control
      MaxStep 0.3 # maximum step length in internal coordinates.
                  #  Default is 0.3 au
      Trust -0.3 # Initial trust radius. Default is -0.3 au
                 # Trust <0 - use fixed trust radius
                 #  of size -trust. I.e. -0.3 means fix
                 #  the trust radius at 0.3
                 # Trust >0 - use trust radius update. I.e. 0.3
                 #  means start with trust radius 0.3 and update
                 #  the trust radius after each optimization step
      # Convergence tolerances. Note that the calculation is
      # only converged if all criteria are fulfilled. All
      # values given are default values.
      TolE 5e-6      # Energy change (a.u.)
      TolRMSG  1e-4  # RMS gradient (a.u.)
      TolMaxG  3e-4  # Max. element of gradient (a.u.)
      TolRMSD  2e-3  # RMS displacement (a.u.)
      TolMaxD  4e-3  # Max. displacement (a.u.)
      # keyword for frequently used sets of convergence thresholds
      Convergence normal    # Default
                  loose
                  tight
      ProjectTR false # project translation and rotation
                      # default is false. MUST be false for
                      # redundant internals
  end
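
To illustrate how these keywords combine in practice, the following is a minimal sketch of a constrained optimization. The method and basis set (BP86/def2-SVP), the atom numbers, the constraint values (bond lengths in Ångström, angles and dihedrals in degrees) and the structure file name are placeholders chosen for illustration.

# sketch: optimization with a fixed bond length and a fixed dihedral
! BP86 def2-SVP Opt

%geom
  Constraints
    { B 0 1 1.25 C }       # constrain the 0-1 bond to 1.25 Angstroem
    { D 4 3 2 1 180.0 C }  # constrain the 4-3-2-1 dihedral to 180 degrees
  end
  MaxIter 100
end

* xyzfile 0 1 start.xyz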

Keywords for the control of the Hessian (especially important for the TS optimization):

# initial Hessian control
      inhess   unit      # unit matrix
               Read      # Hessian in a .hess file (e.g. from
                         #  a previous NumFreq run), this command
                         #  comes with the following:
                InHessName "filename.hess" # filename of the Hessian
                                           #  input file: a .hess file,
                                           #  a previous .opt file
                                           #  or a previous .carthess file
                # the following model Hessians are available
                #  only for redundant internal coordinates
               Lindh     # Lindh's model Hessian
               Almloef   # Almloef's model Hessian
               Schlegel  # Schlegel's model Hessian
                Swart     # Swart and Bickelhaupt's model Hessian
               XTB0      # GFN0-xTB Hessian
               XTB1      # GFN1-xTB Hessian
               XTB2      # GFN2-xTB Hessian
               GFNFF     # GFN-FF Hessian
      # additional Hessian control for TS optimization
      Calc_Hess true  # calculate the Hessian numerically at the beginning
      Recalc_Hess  5  # calculate the Hessian at the beginning
                      #  and recalculate it after 5,10,.. cycles
      Hybrid_Hess {0 1 5 6} end # calculates a Hybrid Hessian
                                # exact calculation for
                                # atoms 0, 1, 5 and 6; works also
                                # with Calc_Hess and Recalc_Hess
      NumHess true      # requests use of numerical Hessian
      # modification of the internal Hessian
      Hess_Internal
        {A 3 2 1 D 2.0} # define a diagonal Hessian value of
                        #  2 Eh/Bohr2 for the angle between
                        #  atoms 3 2 1. This can also be done for
                        #  bonds, dihedrals and Cartesian
                        #  coordinates. The Hessian values of
                        #  multiple coordinates can be modified
        reset 5    # reset the modified internal Hessian values
                   #  after 5 cycles
        # The following is only recommended
        #  after a relaxed surface scan
        #  in this example of the scan coordinate B 1 0;
        #  "basename.004.xyz" contains the optimized structure
        #  of the scan step with highest energy
        {B 1 0 C}
        XYZ1 "scanName.003.xyz" # the xyz-files of the structures
        XYZ2 "ScanName.005.xyz" #  next to the highest energy point
        GBW1 "ScanName.003.gbw" # the gbw-files of the structures
        GBW2 "ScanName.005.xyz" #  next to the highest energy
                                # the gbw-files are optional
       end
      # Hessian update procedure
      Update   Powell
               Bofill  # default for TS optimization
               BFGS    # default for geometry optimization
      # Hessian modification (only for P-RFO step)
      HESS_Modification Shift_Diag # shift the diagonal elements
                                   #  (default)
                        EV_Reverse # reverse the
                                   #  diagonal elements
      # Minimal value of Hessian eigenvalues (only P-RFO step)
      HESS_MinEV 0.0001  # if an absolute Hessian eigenvalue
                         #  is smaller than this value, it is
                         #  set to HESS_MinEV
      # Rebuilding the model Hessian after a number of cycles can
      #  accelerate the convergence of the optimization
      NResetHess 20 # Set the number of geometry steps after which
                    #  a new model Hessian is built (only with BFGS
                    #  update)
      NStepsInResetHess 5  # since previous steps and gradients are
                           #  available, it is possible to include
                           #  information about the PES in the
                           #  newly built Hessian (via a BFGS
                           #  update). This number should be
                           #  smaller than NResetHess
  end
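
A typical transition state optimization combines these Hessian options with the ! OptTS keyword mentioned above. The following input is only a sketch; the method, basis set and file name are placeholders.

# sketch: TS optimization with an exact initial Hessian that is
#  recalculated every 5 cycles
! B3LYP def2-SVP OptTS

%geom
  Calc_Hess true  # exact Hessian before the first step
  Recalc_Hess 5   # recalculate it every 5 cycles
end

* xyzfile 0 1 ts_guess.xyz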

For parameter scan runs, ORCA has some special options that may help to speed up the optimization:

%geom UseSOSCF false # switches the converger to SOSCF
                     # after the first point. SOSCF may
                     # converge better than DIIS if the
                     # starting orbitals are good.
                     # default = false
      ReducePrint true # reduce printout after the first
                       # point default=true
 # the initial guess can be changed after the first
 # point. The default is MORead. The MOs of the pre-
 # vious point will in many cases be a very good guess
 # for the next point. In some cases however, you may
 # want to be more conservative and use a general guess.
 OptGuess   = OneElec  # the one electron matrix
            = Hueckel  # the extended Hueckel guess
            = PAtom    # the PAtom guess
            = PModel   # the PModel guess
            = MORead   # MOs of the prev. point (default)
 end
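
For example, a relaxed surface scan might use these options as sketched below; the method, basis set, scan range and file name are placeholders.

# sketch: relaxed surface scan reusing the MOs of the previous point
! BP86 def2-SVP Opt

%geom
  Scan B 0 1 = 1.00, 2.00, 11 end # 11 constrained optimizations along
                                  #  the 0-1 distance
  ReducePrint true                # shorter output after the first point
  OptGuess = MORead               # MOs of the previous point (default)
end

* xyzfile 0 1 start.xyz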

7.26.1.1. Redundant Internal Coordinates

There are three types of internal coordinates: redundant internals, old redundant internals (redundant_old) and a new set of redundant internals (redundant_new, with improved internals for nonbonded systems). All three sets work with the same “primitive” space of internal coordinates (stretches, bends, dihedral angles and improper torsions). Only the redundant internals use one additional type of bend in cases where a normal bend would be approximately 180\(^{\circ}\). In redundant internal coordinates the full primitive set is kept and the Hessian and gradient are transformed into this – potentially large – space. A geometry optimization step requires, depending on the method used for the geometry update, a diagonalization or inversion of the Hessian, of dimension equal to the number of variables in the optimization. In redundant internal coordinates this space may be 2–4 times larger than the nonredundant subspace, which has dimension 3\(N_{\mathrm{atoms} }-6(5)\). Since diagonalization and inversion scale cubically, the computational overhead relative to a nonredundant space may easily reach a factor of 8–64. Thus, redundant internal coordinates carry a significant computational overhead, which may cost real time if the number of primitive internals exceeds roughly 2000 (which is not unusual). The timing problem may become acute in semiempirical calculations, where the energy and gradient evaluations are cheap.
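The coordinate set is selected with the coordsys keyword of the %geom block; the listing below is a sketch of the values named above.

%geom
  coordsys redundant     # redundant internals (recommended in most cases)
           redundant_old # old redundant internals
           redundant_new # redundant internals with improved treatment
                         #  of nonbonded systems
           cartesian     # Cartesian coordinates
end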

We briefly outline the theoretical background which is not difficult to understand:

Suppose we have a set of \(n_{I}\) (redundant) primitive internal coordinates \(\mathrm{\mathbf{q} }\), constructed by some recipe, and a set of \(n_{C} =3N_{\mathrm{atoms} }\) Cartesian coordinates \(\mathrm{\mathbf{x} }\). The B-matrix is defined as:

(7.190)\[B_{ij} =\frac{\partial q_{i} }{\partial x_{j} } \]

This matrix is rectangular. In order to compute the internal gradient one needs the “generalized inverse” of \(\mathrm{\mathbf{B} }\). However, since the set of primitive internals is redundant, the matrix is rank-deficient and one has to be careful. In practice one first computes the \(n_{I} \times n_{I}\) matrix \(\mathrm{\mathbf{G} }\):

(7.191)\[\mathrm{\mathbf{G} } = \mathrm{\mathbf{BB} }^{T} \]

The generalized inverse of \(\mathrm{\mathbf{G} }\) is denoted \(\mathrm{\mathbf{G} }^{-}\) and is defined in terms of the eigenvalues and eigenvectors of \(\mathrm{\mathbf{G} }\):

(7.192)\[\begin{split}\mathrm{\mathbf{G} }^{-}= \begin{pmatrix} \mathbf{U} \\ \mathbf{R} \end{pmatrix}^{T} \begin{pmatrix} \mathbf{\Lambda^{-1} } & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{pmatrix} \begin{pmatrix} \mathbf{U}\\ \mathbf{R} \end{pmatrix} \end{split}\]

Here \(\mathrm{\mathbf{U} }\) are the eigenvectors belonging to the nonzero eigenvalues \(\mathrm{\mathbf{\Lambda} }\) which span the nonredundant space and \(\mathrm{\mathbf{R} }\) are the eigenvectors of the redundant subspace of the primitive internal space. If the set of primitive internals is carefully chosen, then there are exactly 3\(N_{\mathrm{atoms} }-6(5)\) nonzero eigenvalues of \(\mathrm{\mathbf{G} }\). Using this matrix, the gradient in internal coordinates can be readily computed from the (known) Cartesian gradient:

(7.193)\[\mathrm{\mathbf{g} }_{q} =\mathrm{\mathbf{G} }^{-} \mathrm{\mathbf{Bg} }_{x} \]

The initial Hessian is formed directly in the redundant internal space and then itself or its inverse is updated during the geometry optimization.

Before generating the Newton step we have to ensure that the displacements take place only in the nonredundant part of the internal coordinate space. For this purpose a projector \({P}'\):

(7.194)\[\mathbf{{P}'=GG^{-}=G^{-}G} \]

is applied on both the gradient and the Hessian:

(7.195)\[\mathbf{\tilde{g}_{q} = { P}'g_{q} } \]
(7.196)\[\mathbf{\tilde{H}_{q} = { P}'H_{q} { P}'}+\alpha \left({ \mathbf{1-{P}'} } \right)\]

The second term for \(\tilde{{H} }\) sets the matrix elements of the redundant part of the internal coordinate space to very large values (\(\alpha =1000)\).

7.26.1.2. Coordinate steps

A Quasi-Newton (QN) step is the simplest choice to update the coordinates and is given by:

(7.197)\[\mathbf{\Delta q = -\tilde{H}_{q}^{-1}\tilde{g}_{q} } \]

A more sophisticated step is the rational function optimization (RFO) step, which proceeds by diagonalizing the augmented Hessian:

(7.198)\[\begin{split}\begin{pmatrix} \mathbf{H}_{q} & \mathbf{g}_{q}\\ \mathbf{g}_{q}^{T} & 0 \end{pmatrix} \begin{pmatrix} \Delta \mathbf{q} \\ 1 \end{pmatrix} = \nu \begin{pmatrix} \Delta \mathbf{q} \\ 1 \end{pmatrix} \end{split}\]

The lowest eigenvalue \(\nu_{0}\) approaches zero as the equilibrium geometry is approached, and a nice side effect of this formulation is automatic step size control: towards convergence the RFO step approaches the quasi-Newton step, while far from convergence it amounts to a damped Newton step. In any case, each individual element of \(\Delta \mathrm{\mathbf{q} }\) is restricted to a magnitude of MaxStep and the total length of the step is restricted to Trust. In the RFO case this is achieved by minimizing the predicted energy on the hypersphere of radius Trust, which also modifies the direction of the step, whereas the quasi-Newton step vector is simply scaled down.

Thus, the new geometry is given by:

(7.199)\[\mathrm{\mathbf{q} }_{\text{new} } =\mathrm{\mathbf{q} }_{\text{old} } +\Delta \mathrm{\mathbf{q} } \]

However, which Cartesian coordinates belong to the new redundant internal set? This is a somewhat complicated problem, since the relation between internals and Cartesians is highly nonlinear and the step in internal coordinates is not infinitesimal. Thus, an iterative procedure is used to update the Cartesian coordinates. First consider the initial (linear) step:

(7.200)\[\Delta \mathrm{\mathbf{x} }=\mathrm{\mathbf{A} }\Delta \mathrm{\mathbf{q} } \]

with \(\mathrm{\mathbf{A} }=\mathrm{\mathbf{B} }^{T}\mathrm{\mathbf{G} }^{-}\). With the new Cartesian coordinates \(\mathrm{\mathbf{x} }_{k+1} =\mathrm{\mathbf{x} }_{k} +\Delta \mathrm{\mathbf{x} }\) a trial set of internals \(\mathrm{\mathbf{q} }_{k+1}\) can be computed. This new set should ideally coincide with \(\mathrm{\mathbf{q} }_{\text{new} }\) but in fact it usually will not. Thus, one can refine the Cartesian step by forming

(7.201)\[\Delta \Delta \mathrm{\mathbf{q} }=\mathrm{\mathbf{q} }_{\text{new} } -\mathrm{\mathbf{q} }_{k+1} \]

which should approach zero. This leads to a new set of Cartesians \(\Delta \mathrm{\mathbf{x}'}=\mathrm{\mathbf{A} }\Delta \Delta \mathrm{\mathbf{q} }\) which in turn leads to a new set of internals and the procedure is iterated until the Cartesians do not change and the output internals equal \(\mathrm{\mathbf{q} }_{new}\) within a given tolerance (10\(^{-7}\) RMS deviation in both quantities is imposed in ORCA).

7.26.1.3. Constrained Optimization

Constraints on the redundant internal coordinates can be imposed by modifying the above projector \({P}'\) with a projector for the constraints \(C\):

(7.202)\[\mathbf{P}=\mathbf{{P}'-{P}'C}\left({ \mathbf{C{P}'C} } \right)^{\mathbf{-1} }\mathbf{C{P}'} \]

\(C\) is a diagonal matrix with 1’s for the constraints and 0’s elsewhere. The gradient and the Hessian are projected with the modified projector:

(7.203)\[\tilde{{g} }_{q} =Pg_{q} \]
(7.204)\[\tilde{{H} }_{q} =PH_{q} P+\alpha \left({ 1-P} \right)\]

7.26.1.4. Constrained Fragments Optimization

The constrained fragments option was implemented in order to provide a convenient way to handle constraints for systems consisting of several molecules. The difference from a regular optimization lies in the coordinate setup. In a regular coordinate setup the internal coordinates are built up as follows:

In a first step, bonds are constructed between atom pairs which fulfill certain (atom type specific) distance criteria. If there are fragments in the system which are not connected to each other (this is the case when there are two or more separate molecules), an additional bond is assigned to the nearest atom pair between the nonbonded fragments. All other internal coordinates are constructed on the basis of this set of bonds. In a second step, bond angles are constructed between the atoms of directly neighbouring bonds. If such an angle reaches more than 175\(^{\circ}\), a special type of linear angle is constructed. In a third step, dihedral angles (and improper torsions) are constructed between the atoms of directly neighbouring angles.

If the constrained fragments option is switched on, the set of bonds is constructed in a different way. The user defines a number of fragments. For each fragment a full set of bonds (ignoring the atoms of the other fragments) is constructed as described above. When using this option, the user also has to define which fragments are to be connected. The connection between these fragments can either be user-defined or chosen automatically. If the user defines the connecting atoms N1 and N2, then the interfragmental bond is the one between N1 and N2. If the user does not define the interfragmental bond, it is constructed between the closest atom pair of the two fragments. The angles and dihedrals are then constructed from this (different) set of bonds in the fashion described above.

Now let us consider the definition of the fragment constraints: A fragment is constrained internally by constraining all internal coordinates that contain only atoms of the respective fragment. The connection between two fragments A and B is constrained by constraining specific internal coordinates that contain atoms of both fragments. For bonds, one atom has to belong to fragment A and the other to fragment B. For angles, two atoms have to belong to fragment A and one to fragment B, or vice versa. For dihedrals, only those are constrained in which two atoms belong to fragment A and the other two belong to fragment B.
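As a sketch of how this is set up in practice, consider a water dimer in which fragment 1 is kept rigid internally while the coordinates connecting the two fragments are optimized. The method, basis set and the (approximate) geometry are placeholders; the fragments are assigned here via the atom(fragment number) notation in the coordinate block.

# sketch: constrained fragment optimization of a water dimer
! BP86 def2-SVP Opt

%geom
  ConnectFragments
    {1 2 O}   # optimize the internal coordinates connecting fragments 1 and 2
  end
  ConstrainFragments
    { 1 }     # constrain fragment 1 internally
  end
end

* xyz 0 1
  O(1)   0.000000   0.000000   0.000000
  H(1)   0.970000   0.000000   0.000000
  H(1)  -0.243000   0.939000   0.000000
  O(2)   2.900000   0.000000   0.100000
  H(2)   3.480000   0.760000   0.100000
  H(2)   3.480000  -0.760000   0.100000
*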

7.26.2. Transition State Optimization

As transition state finder we implemented the well-established eigenvector following algorithm with a P-RFO step, as implemented by Baker [67]. This is a quasi-Newton-like algorithm.

The Taylor series of the energy, truncated after the quadratic term, is:

(7.205)\[E=E_{0} +g_{q}^{+}\Delta q +\frac{1}{2}\Delta q^{+}\mathrm{H}_{q} \Delta q \]

The Newton-Raphson step to get from the actual point to a stationary point is:

(7.206)\[\Delta q=- \mathrm{H}_{q}^{-1} g_{q} =\sum\limits_i{ -\frac{\left(V_{i}^{+} g_{q}\right) V_{i} }{b_{i} }} \]

with \(V_{i}\) and \(b_{i}\) as eigenvectors and eigenvalues of the Hessian \(\mathrm{H}_{q}\). This step leads to the nearest stationary point on the PES. This stationary point can be a minimum or a saddle point, according to the curvature of the PES at the actual point.

With a simple shift of the Hessian eigenvalues \(b_{i}\) in this equation one can guide the step to a stationary point with the required characteristics (Hessian with exactly one negative eigenvalue). The transition state search is separated into two different optimization problems. The energy is maximized along one Hessian eigenmode and minimized along the remaining 3\(N-7(6)\) eigenmodes. We introduce two different shift parameters \(\lambda_{p}\) and \(\lambda_{n}\), where \(\lambda_{p}\) is the shift parameter for the eigenmode being maximized and \(\lambda_{n}\) shifts the Hessian eigenvalues of the modes being minimized. This method allows us to maximize along any mode, not only the one with smallest eigenvalue. Starting from two different RFO-matrices for the different optimization problems (see description above) we get for \(\lambda_{p}\) and \(\lambda_{n}\):

(7.207)\[\lambda_{p} =\frac{1}{2}b_{k} +\frac{1}{2}\sqrt{ b_{k}^{2} +4F_{k}^{2} } \quad \text{and} \quad \sum\limits_{i\ne k} { \frac{F_{i}^{2} }{\lambda_{n} -b_{i} } } = \lambda_{n} \]

where \(F_{i} =V_{i}^{+} g\) is the component of \(g\) along the Hessian eigenmode \(V_{i}\), and \(\lambda_{n}\) has to be determined iteratively. The solution for \(\lambda_{n}\) has to be negative and lower than \(b_{2}\) (or lower than \(b_{1}\) if the mode being followed is not the lowest one). If the Hessian has more than one negative eigenvalue, these conditions might not be fulfilled and the Hessian has to be modified. In our implementation the Hessian diagonal elements are either shifted or reversed in such a case.

Once the shift parameters are known, the P-RFO step \(\Delta q\) is calculated as follows:

(7.208)\[\Delta q_{k} = -\frac{\bar{{F} }_{k} V_{k} }{b_{k} -\lambda_{p} } \quad \text{and} \quad \Delta q_{i} = -\frac{\bar{{F} }_{i} V_{i} }{b_{i} -\lambda_{n} } \quad \text{with} \quad i=1\dots n,\, \, \, i\ne k \]
(7.209)\[\Delta q=\sum\limits_{j=1}^n { \Delta q_{j} } \]
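In practice, the mode to be maximized is selected with the TS_Mode keyword described in the keyword list above. The following sketch follows the Hessian eigenvector with the largest contribution from a particular bond uphill; the method, basis set, atom numbers and file name are placeholders.

# sketch: TS optimization following the mode dominated by the 2-3 bond
! B3LYP def2-SVP OptTS

%geom
  Calc_Hess true
  TS_Mode {B 2 3} end
end

* xyzfile 0 1 ts_guess.xyz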

7.26.2.1. ScanTS option

For TS modes of rather local nature (involving only one bond or one angle; no concerted movement of multiple atoms) we implemented the ScanTS feature. Here the user can carry out a relaxed surface scan and a TS optimization in one calculation. After the relaxed surface scan the algorithm chooses the optimized structure of the scan step with the highest energy as initial guess structure, and the two neighbouring structures for the calculation of the second derivative along the scanned coordinate (e.g., if scan step number 4 gives the structure with the highest energy, then structure basename.004.xyz is the initial guess for the TS optimization; the structures basename.003.xyz and basename.005.xyz are used for the calculation of the second derivative). Before the first step of the subsequent TS optimization the energies and gradients of all three structures are calculated. The gradients are then transformed to internal coordinates. The diagonal Hessian value of the scanned coordinate is then calculated via finite differences of the internal gradients of the two neighbouring structures (003 and 005 in our example).

For the construction of the initial Hessian a model force field Hessian is built up (this Hessian has only diagonal entries and zeros as off-diagonal elements). The exactly calculated diagonal Hessian value replaces the model force field entry for the respective internal coordinate.
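A minimal sketch of such a combined scan/TS run, assuming the bond between atoms 1 and 0 is the reaction coordinate (the method, basis set, scan range and file name are placeholders):

# sketch: relaxed surface scan followed directly by a TS optimization
! BP86 def2-SVP ScanTS

%geom
  Scan B 1 0 = 1.20, 2.20, 11 end # scan the 1-0 distance in 11 steps
  fullScan true                   # finish the whole scan before the
                                  #  TS optimization starts
end

* xyzfile 0 1 reactant.xyz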

If a regular relaxed surface scan has already been performed without the subsequent TS optimization, these structures can nevertheless be used for the same procedure. A relaxed surface scan always produces the xyz-files and gbw-files of the optimized structures of each scan step. A separate TS optimization can then be carried out with the highest-energy structure as the starting structure. Additionally, the two files with the two adjacent structures (as explained above) have to be provided (via the Hess_Internal keyword, see below). Furthermore, the internal coordinate for which the diagonal Hessian value has to be calculated (the previously scanned coordinate) has to be given. This exact Hessian calculation is only possible for one internal coordinate:

%geom
  Hess_Internal
     {B 1 0 C}               # previously scanned coordinate
     XYZ1 "scanName.003.xyz" # the xyz-files of the structures
     XYZ2 "ScanName.005.xyz" #  next to the highest energy point
     GBW1 "ScanName.003.gbw" # the gbw-files of the structures
     GBW2 "ScanName.005.xyz" #  next to the highest energy
                             # the gbw-files are optional
   end
end

Additionally, the diagonal Hessian values of further internal coordinates can be manipulated, without any extra calculation. Here the user simply defines a value (in Eh/Bohr\(^2\)).

Hess_Internal
  {A 3 2 1 D 2.0} # define a diagonal Hessian value of
                  #  2 Eh/Bohr2 for the angle between
                  #  atoms 3 2 1
  {B 1 0 D -0.5}  # define a diagonal Hessian value of
                  #  -0.5 Eh/Bohr2 for the bond between
                  #  atoms 1 and 0
end

The definition of such (diagonal) Hessian elements is possible for multiple internal coordinates. These values simply replace the corresponding entries of the force field model Hessian.

7.26.2.2. Hybrid Hessian

We implemented the calculation of a “Hybrid Hessian” as an alternative to the full Hessian calculation for TS optimization. Here only those parts of the Hessian that are important for the TS optimization are calculated exactly. For this calculation we define two kinds of atoms: atoms whose couplings with the other atoms are treated exactly (E) and atoms whose couplings are treated approximately (A).

In a first step an Almloef model Hessian is built up in redundant internal coordinates and transformed to Cartesian coordinates. This Hessian gives the second derivative elements for atom pairs A/A. In a second step the second derivative elements between pairs E/E and E/A are calculated numerically as in a numerical frequency calculation:

(7.210)\[\frac{\mathbf{\Delta E} }{\Delta i_{B} \Delta j_{C} }=\frac{\mathbf{\Delta E} }{\Delta j_{C} \Delta i_{B} }=\frac{g_{j,C}^{i,B} -g_{j,C}^{eq.} }{displ.} \]

with:

  • \(i,j\): x-, y- or z-direction

  • \(B,C\): pairs of E/E, E/A, A/E

  • \(displ.\): magnitude of the displacement

  • \(g_{j,C}^{eq.}\): force on atom \(C\) in direction \(j\) at the current geometry

  • \(g_{j,C}^{i,B}\): force on atom \(C\) in direction \(j\) after displacement of atom \(B\) in direction \(i\)
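
A sketch of a TS optimization using a Hybrid Hessian, with exact second derivatives only for the atoms assumed to be involved in the reaction (the method, basis set, atom numbers and file name are placeholders):

# sketch: TS optimization with a Hybrid Hessian; atoms 0, 1 and 5 are
#  treated exactly (E), all other atoms approximately (A)
! B3LYP def2-SVP OptTS

%geom
  Hybrid_Hess {0 1 5} end
end

* xyzfile 0 1 ts_guess.xyz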

7.26.2.3. Partial Hessian Vibrational Analysis

We implemented the Partial Hessian Vibrational Analysis (PHVA), as published by Li and Jensen [513], for analyzing the nature of stationary points of structures obtained from QM/MM optimizations.

# PHVA after a QM/MM optimization in the (dispersion-/PC-) field
# caused by the MM-atoms
! NumFreq
%LJCoefficients "temp.LJ" # file with the Lennard Jones
                          #  coefficients for dispersion interaction
                          # obtained from last QM/MM run
%pointcharges "temp.pc"   # file with the point charges for
                          #  electrostatic interaction
                          # obtained from last QM/MM run
%freq
    PARTIAL_Hess {0 1 2}  # atoms which are "frozen" and which make
                          #  the boundary to the MM-system
    end
end

NOTE

  • This procedure should only be used for QM/MM optimized structures, to verify the nature of the stationary point and to obtain an estimate of the ZPE.

Here we briefly describe the procedure: In PHVA the system is divided into two parts, \(B\) (of size \(n\) atoms) and \(C\) (of size \(N-n\)). Let the atom set \(B\) belong to the region where the chemical change is localized. The partial Hessian matrix is built up as follows:

(7.211)\[\begin{split}\begin{pmatrix} K_{BB} & 0 \\ 0 & K_{CC}^{\varepsilon }\\ \end{pmatrix} \end{split}\]

with \(K_{BB}\) being the Hessian block of region \(B\) (all x-, y- and z-directions) and

(7.212)\[\begin{split}K_{CC}^{\varepsilon } = \begin{pmatrix} \varepsilon & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \varepsilon \\ \end{pmatrix}, \, \varepsilon =10^{-8} \, \text{au}, \end{split}\]

this corresponds to using near-infinite masses for the atoms in \(C\).

With this procedure we get the following eigenvalue structure:

  • Six zero eigenvalues with modes corresponding to translational and rotational motion of the entire molecule.

  • \(3(N - n) - 6\) small (less than 1 cm\(^{-1})\) eigenvalues with modes corresponding mainly to internal motion within region \(C\).

  • Three eigenvalues (typically less than 10 cm\(^{-1})\) with modes corresponding mainly to motion of region \(C\) relative to region \(B\).

  • \((3n - 3)\) eigenvalues with modes corresponding mainly to relative motion of \(B\) and \(C\) as well as internal motion within region \(B\).

7.26.3. Minimum Energy Crossing Points

The MECP optimization allows the user to optimize structures at which two different potential energy surfaces (PES1 and PES2) cross each other. In this optimization two conditions apply: the energy \(E_{1}\) of PES1 is minimized while at the same time the squared energy difference \(\left({ E_{1} -E_{2} } \right)^{2}\) of the two surfaces is minimized. The implementation follows in principle the suggestions of Harvey et al. [365].

For the minimization two different gradients are constructed:

The first gradient chosen for the minimization is

(7.213)\[\mathbf{f}=\frac{\partial }{\partial q}\left({ E_{1} -E_{2} } \right)^{2}=2\left({ E_{1} -E_{2} } \right)\cdot x_{1} \]

where \(x_1\) is the gradient difference vector

(7.214)\[x_{1} =\left[{ \left({ \frac{\partial E_{1} }{\partial q} } \right)-\left({\frac{\partial E_{2} }{\partial q} } \right)} \right]\]

which is orthogonal to the crossing hyperline near the MECP.

The gradient

(7.215)\[\mathbf{g}=\left({ \frac{\partial E_{1} }{\partial q} } \right)-\frac{x_{1} }{\left|{ x_{1} } \right|}\left[{ \left({ \frac{\partial E_{1} }{\partial q} } \right)\cdot \frac{x_{1} }{\left|{ x_{1} } \right|} } \right]\]

is orthogonal to \(x_1\).

Both gradients are combined to yield the effective surface crossing gradient

(7.216)\[\mathbf{g}_{\mathbf{SC} } =\mathbf{\, g+f} \]

The crossing hyperline is defined as the \(3N-7\) dimensional subspace of PES1, which is orthogonal to \(x_1\). In the MECP optimization we want to find the point of lowest energy within this subspace.

Our calculation of normal modes and force constants for movements along the crossing hyperline differs from the one proposed by Harvey et al. A standard frequency analysis cannot be performed, but a similar procedure is applied:

Let us consider the second-order Taylor expansion for the energy of both surfaces near the MECP for a displacement along the crossing hyperline (i.e. orthogonal to \(x_1\)):

(7.217)\[\mathbf{E}_{A} =E_{\text{MECP} } +\frac{1}{2}\Delta q^{T}H_{\text{eff},A} \Delta q \]

with:

  • \(\mathbf{E}_{A}\): energy \(E_{1}\) on PES1 or \(E_{2}\) on PES2

  • \(H_{\text{eff},A}\): effective Hessian for PES1 or PES2

  • \(\Delta q\): displacement along the crossing hyperline

Diagonalization of this effective Hessian gives us the normal modes of the crossing hyperline and thus allows us to decide whether the MECP optimization converged to a minimum in the \(3N-7\) dimensional subspace of the crossing hyperline.

The procedure for the calculation of the effective Hessian is as follows: For each of the two surfaces the second derivative matrix is calculated. Then the six rotations and translations, and additionally the direction of the gradient difference vector \(x_1\), are projected out of the Hessian matrix (this ensures that motion orthogonal to the crossing hyperline, for which the conditions of a stationary point are NOT satisfied, is excluded).

For MECP optimizations the following options exist:

%mecp
   SurfCrossOpt true  # switches on the MECP optimization
                      #  alternatively use: ! SurfCrossOpt
   SurfCrossNumFreq true # switches on the MECP effective Hessian
                         #  calculation
                         #  alternatively use: ! SurfCrossNumFreq
 # separate MO input for the second spin state (PES2)
   moinp "Myfile.gbw"# MO input for PES2
 # information on the electronic structure of PES 2
   Mult 3            # multiplicity of PES2
   brokenSym 1,1     # broken symmetry for PES2
 # CASSCF options for PES2 (also see the CASSCF chapter)
   casscf_nel 6       # number of active space electrons
   casscf_norb 6      # number of active orbitals
   casscf_mult 1,3    # multiplicities singlet and triplet
   casscf_nroots 4,2  # four singlets, two triplets
   casscf_bweight 2,1 # singlets and triplets weighted 2:1
   casscf_weights[0] = 0.5,0.2,0.2,0.2 # singlet weights
   casscf_weights[1] = 0.7,0.3         # triplet weights
 end
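
A minimal MECP input might look like the following sketch, here for a crossing between a triplet surface (PES1, taken from the main input) and a singlet surface (PES2); the method, basis set and file names are placeholders.

# sketch: MECP optimization between a triplet (PES1) and a singlet (PES2)
! B3LYP def2-SVP SurfCrossOpt

%mecp
  Mult 1               # multiplicity of PES2
  moinp "singlet.gbw"  # MO input for PES2
end

* xyzfile 0 3 start.xyz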

7.26.4. Conical Intersections

Minima in the conical intersection seam-space between two states (here named I and J) can be found using regular geometry optimization algorithms, except that the gradient used in the optimization is [544]:

\[\mathbf{g} = \mathbf{g'}_{diff} + \mathbf{Pg}_{mean}\]

where \(\mathbf{g'}_{diff} = 2(E_I-E_J)(\partial E_I / \partial q - \partial E_J / \partial q)\) is parallel to the gradient difference vector, \(\mathbf{g}_{mean}\) is the mean gradient and \(\mathbf{P}\) is a projection matrix that projects out the components along the gradient difference (\(\mathbf{x}\)) and non-adiabatic coupling (\(\mathbf{y}\)) directions:

\[\mathbf{P} = \mathbf{1} - \mathbf{x x^T} - \mathbf{y y^T}\]

Three approaches to this problem are available in ORCA; they are explained next.

Gradient Projection: This is exactly what has been described above and is chosen as the default whenever NACMEs between I and J are available. It is in principle the fastest and most accurate method. It can be invoked by setting:

%CONICAL 
  METHOD GRADIENT_PROJECTION  #or simply GP
END

Note: Turning on the ETF (see Sec. NACMEs with built-in electron-translation factor) can improve the optimization when using the full Gradient Projection method.

Gradient Projection (without NACME): This is an approximation to the method above, obtained by completely neglecting the NACMEs. It is essentially equivalent to finding a surface crossing point and will not necessarily find minima inside the CI seam-space, although the final \(\Delta E_{IJ}\) should be zero.

%CONICAL 
  METHOD GP_NONACME
END

Updated Branching Plane: Here the idea is to start from a guess NACME, which is any unit vector perpendicular to \(\mathbf{x}\), and to progressively update it, similar to the BFGS update of the Hessian [544]. The “branching plane” defined by \(\mathbf{x}\) and \(\mathbf{y}\) then becomes iteratively more accurate until convergence is achieved. This approach has been shown to be quite accurate and is the default whenever NACMEs are not available. It can be used with:

%CONICAL 
  METHOD UBP
END

Finally, the \(\Delta E_{IJ}\) energy threshold for the optimization can be altered with:

%CONICAL 
  ETOL 1e-4 #default
END

7.26.5. Numerical Gradients

If you want to use numerical instead of analytic gradients you have to use

! NumGrad

in your input file. Additionally the settings for the numerical differentiation can be changed:

%numgrad
  CentralDiff true # You should use two-sided numerical differentiation, but it 
                   # is possible to switch to one-sided numerical differentiation.
  DX 0.005         # Increment in Bohr for the differentiation.
  TransInvar  true # Take advantage of translation invariance
 end
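
A typical use case is a geometry optimization with a method for which no analytic gradient is available; the following sketch uses a placeholder method, basis set and file name.

# sketch: optimization with two-sided numerical gradients
! CCSD(T) def2-TZVP NumGrad Opt

%numgrad
  CentralDiff true # two-sided differences
  DX 0.005         # displacement in Bohr
end

* xyzfile 0 1 start.xyz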

7.26.6. ORCA as External Optimizer

If you want to make use of ORCA’s routines for optimization, TS optimization, NEB, IRC, GOAT, etc., but not use ORCA’s built-in electronic structure methods, you can use the keyword:

! ExtOpt

in your input file. All information that you give on the electronic structure is discarded. In each optimization/NEB/IRC step ORCA writes an input file called basename_EXT.extinp.tmp with the following info:

basename_EXT.xyz # xyz filename: string, ending in '.xyz'
0 # charge: integer
1 # multiplicity: positive integer
1 # NCores: positive integer
0 # do gradient: 0 or 1
pointcharges.pc # point charge filename: string (optional)

Comments from # until the end of the line should be ignored. The file basename_EXT.xyz will also be present in the working directory. ORCA always requests the energy, but a gradient only if needed for the chosen calculation type.

ORCA then calls:

scriptname basename_EXT.extinp.tmp [args]

where args are optional command line arguments, which can be provided in the ORCA input file (see below) and are directly passed to the command line for the external program.

scriptname is the name of the external program or wrapper script, which is not distributed with the ORCA binaries and must be supplied by the user in one of the following ways:

  1. as a file or link named otool_external in the same directory as the ORCA executables;

  2. by assigning the EXTOPTEXE environment variable to the full path to the external program;

  3. via the ORCA input:

%method
  ProgExt "/full/path/to/script"
  Ext_Params "optional command line arguments"
end

Regardless of which option is used, the keyword Ext_Params can be used to specify the additional command line arguments as a single string.
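
An ORCA input using the external optimizer could thus look like the following sketch; the script path and the command line arguments are placeholders for a user-supplied wrapper.

# sketch: geometry optimization driven by ORCA, with energies and
#  gradients supplied by an external wrapper script
! ExtOpt Opt

%method
  ProgExt "/path/to/my_wrapper"
  Ext_Params "--some-option"
end

* xyzfile 0 1 start.xyz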

The external script starts the energy (and gradient) calculation and finally provides the results in a file called basename_EXT.engrad using the same basename as the XYZ file. This file must have the following format:

#
# Number of atoms: must match the XYZ
#
3
#
# The current total energy in Eh
#
-5.504066223730
#
# The current gradient in Eh/bohr: Atom1X, Atom1Y, Atom1Z, Atom2X, etc.
#
-0.000123241583
0.000000000160
-0.000000000160
0.000215247283
-0.000000001861
0.000000001861
-0.000092005700
0.000000001701
-0.000000001701

Comments from # until the end of the line are ignored, as are any comment-only lines.

The script may also print relevant output to STDOUT and/or STDERR. STDOUT will either be printed in the ORCA standard output, or redirected to a temporary file and removed afterwards, depending on the type of job (e.g., for NumFreq the output for the displaced geometries is always redirected) and ORCA output settings:

%output
  Print[P_EXT_OUT]  1 # (default) print the external program output
  Print[P_EXT_GRAD] 1 # (default) print the gradient from the ext. program
end

7.26.7. Gaussian as External Optimizer

To use the external optimizer from Gaussian in ORCA, the following keywords were provided in the past:

%geom
  UseGaussian true # Use the external Gaussian optimizer instead
                   #  of the ORCA optimizer.
  GaussianName "GAU" # String defining the name of the Gaussian
                     #  optimizer
  GauOptFlags   # String indicating the optimization flags
  Gaussian Constraints # List defining the constraints for
                       #  the Gaussian optimizer.
end

Since the ORCA team was banned by Gaussian in January 2007, we can no longer support these option flags. They have not been removed from the code and may or may not work. If there is trouble with them we can – unfortunately – not offer any help, since we no longer have access to the Gaussian code.