This page describes how to run the seasonal forecast at ECPC. For users not at ECPC, the scripts mentioned here are available via CVS, but will need to be modified.


Every month ECPC uses the GSM to run a seasonal forecast. The forecast consists of two parts: a 7-month, 12-member forecast using predicted SSTs, and a 4-month, 10-member forecast using persisted SSTs. The 12-member predicted SST forecast is actually three 4-member forecasts which come from a "mean" SST prediction (the average of 3 SST predictions), a "minus" SST prediction (the mean SST prediction minus an uncertainty factor), and a "plus" SST prediction (the mean SST prediction plus an uncertainty factor). The SSTs are provided by IRI on the Monday closest to the beginning of the month. The forecast is initialized by an AMIP run, which must be updated every month. When the forecast is completed the results are ftp'd to IRI, as well as posted on the ECPC web site.

Check for available space

  1. Check that the data archival directory has at least 45 GB available for the forecast output (if 6-hourly data is to be saved).
  2. On compas, remove c_000, me_001, pl_001, mi_001 and p_001 from /home/work5/ecpcop/rack3test/runs. (They are from the last forecast: c_000 is from the AMIP run, me_001 is from the mean case, pl_001 is from the plus case, mi_001 is from the minus case, and p_001 is from the persist run.)
  3. Check that rack 1 is available for running the forecast.
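
The space check in step 1 can be scripted. This is a minimal sketch, assuming a GNU/Linux df; the `check_space` helper name and its usage are illustrative, not part of the operational CVS scripts:

```shell
# Illustrative helper: verify a directory has at least N gigabytes free.
# check_space DIR GB  ->  returns 0 if enough space, 1 otherwise.
check_space() {
    dir=$1
    need_gb=$2
    # df -k prints sizes in 1K blocks; field 4 of the data row is "available"
    avail_kb=$(df -k "$dir" | awk 'NR==2 {print $4}')
    avail_gb=$((avail_kb / 1024 / 1024))
    if [ "$avail_gb" -lt "$need_gb" ]; then
        echo "ERROR: only ${avail_gb}G free in $dir (need ${need_gb}G)" >&2
        return 1
    fi
    echo "OK: ${avail_gb}G free in $dir"
}
```

For example, `check_space /home/work5 45` before launching the forecast.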

Run 1-month AMIP for initialization

On compas run /home/workspace81_kanagrp/ecpcop/rack3test/runs/cases.amip1mo yyyy mm
(where yyyy is the 4-digit year and mm is the 2-digit month that will initialize the forecast).
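
Since every script below insists on a 2-digit month, a small guard like the following avoids octal/format mistakes (the `pad_month` helper is illustrative, not one of the CVS scripts):

```shell
# Illustrative: normalize a month argument to the 2-digit form the
# forecast scripts expect (3 -> 03, 08 -> 08, 11 -> 11).
pad_month() {
    m=${1#0}              # drop a leading zero so printf does not read octal
    printf '%02d\n' "$m"
}
```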

Get SSTs

  1. To get the predicted SSTs, run (on hyo) ~/FORECAST/noah/tools/run_a2i.csh yyyy mm
(This time mm is the 2-digit month of the start of the forecast; in other words, the month after the month used for the AMIP run.)
  2. To get the persisted SSTs, run ~/FORECAST/persist/tools/run_a2i.csh yyyy mm
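
The SST scripts take the month after the AMIP month, so a sketch of that offset looks like this (the `next_month` helper is hypothetical; December rolls over into January of the next year):

```shell
# Illustrative: given the AMIP initialization year/month, print the
# "yyyy mm" pair to pass to run_a2i.csh (the following month).
next_month() {
    yyyy=$1
    mm=${2#0}                # strip a leading zero before arithmetic
    mm=$((mm + 1))
    if [ "$mm" -gt 12 ]; then
        mm=1
        yyyy=$((yyyy + 1))
    fi
    printf '%04d %02d\n' "$yyyy" "$mm"
}
```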

Run forecasts

All forecasts are run on compas in /home/workspace81_kanagrp/ecpcop/rack3test/runs. There are four forecast scripts that need to be run each month (cases.fcst.mean, cases.fcst.minu, cases.fcst.plus, and cases.fcst.persist). Each forecast runs on half of a rack (i.e. two scripts run at a time). It is essential that concurrently running scripts use different nodes. To check this, look at the line in the script where "fcst" is run. It should either look like this:
/home/work5/ecpcop/rack3test/runs/runscr/fcst1 $FCSTENV || exit 8
or like this:
/home/work5/ecpcop/rack3test/runs/runscr/fcst $FCSTENV || exit 8
"fcst1" runs on nodes 0-15 of rack 1, while "fcst" runs on nodes 16-31 of rack 1. Always choose 1 script that uses "fcst" and 1 script that uses "fcst1" to run at the same time.

To run the scripts, use the syntax:
cases.fcst.script yyyy mm
(where script is mean, minu, plus, or persist, and yyyy and mm refer to the first forecast month. Again, mm must be 2-digit).
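
Since one "fcst" script and one "fcst1" script can run at the same time, a driver along these lines launches a valid pair and fails if either member fails (the `run_pair` helper is illustrative, not one of the operational scripts; verify which script uses fcst and which uses fcst1 before pairing them):

```shell
# Illustrative: run one script that uses fcst1 (nodes 0-15) alongside one
# that uses fcst (nodes 16-31), and report failure if either one fails.
# run_pair SCRIPT_A SCRIPT_B YYYY MM
run_pair() {
    "$1" "$3" "$4" &
    pid_a=$!
    "$2" "$3" "$4" &
    pid_b=$!
    status=0
    wait "$pid_a" || status=1
    wait "$pid_b" || status=1
    return "$status"
}
```

For example, `run_pair ./cases.fcst.mean ./cases.fcst.persist 2003 06`, assuming those two scripts use different halves of the rack.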

The "mean", "minu", and "plus" scripts each take approximately 8-9 hours to run. The "persist" script takes about 11 hours to run.

Postprocess, ftp results to IRI, and make plots

To postprocess the data and ftp the results to IRI, run
~/FORECAST/noah/postprocess.csh yyyy mm
~/FORECAST/persist/postprocess.csh yyyy mm
where yyyy mm refers to the first month of the forecast, and mm is 2-digit.

To make plots of the output data run
~/FORECAST/noah/grads/all.csh yyyy month v/h
~/FORECAST/persist/grads/all.persist.csh yyyy month v/h
where month has NO preceding "0", the "v" option views the plots (allowing the user to inspect the forecast results), and the "h" option hides the plots (so the script can be run in the background).
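
Note that the plotting scripts take the month in the opposite form from everything above (no leading zero). An illustrative converter (the `unpad_month` name is hypothetical):

```shell
# Illustrative: turn a 2-digit month into the unpadded form the
# plotting scripts expect (07 -> 7, 12 -> 12).
unpad_month() {
    echo "${1#0}"
}
```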

Update web page

To update the ECPC Seasonal Forecast webpage, ssh to the web server and run
/users/httpd/scripts/ yyyymm
/users/httpd/scripts/ yyyymm
where yyyymm is the 6-digit year and month of the start of the forecast.

Then, in /www/projects update GSM_HOME.html by changing the date to the current yyyymm in 2 places on the 14th line.
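
That hand edit can also be done with sed. A sketch, assuming the two dates on line 14 are plain 6-digit yyyymm strings and a GNU sed is available (verify against the actual file before relying on this; `update_gsm_home` is an illustrative name):

```shell
# Illustrative: replace every 6-digit date on line 14 of the page
# with the new yyyymm.  update_gsm_home FILE YYYYMM
update_gsm_home() {
    sed -i "14s/[0-9]\{6\}/$2/g" "$1"
}
```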


When the forecast is complete, notify IRI.

Problem Solving

These are some of the most common sources of problems that can occur when running the forecast:
  1. The disk is full. If /work5 is full on compas, the forecast will eventually crash. If /hyo6 is full on hyo, the output won't be recorded and the 6-hourly data will be lost.
  2. The input files are not complete. Occasionally, the observed SSTs and ice data are not up-to-date, and the 1-month AMIP run will crash. The data files are located in /net/raid4/kana/sfcanl. Use wgrib to determine the last date in the file. This could be a problem if the data is not available after the 23rd of the initialization month.
  3. The time step is too large. When this happens, the forecast sometimes stops, but sometimes it only runs very slowly. To see if this is a problem, look at the fcstout file. If the wind speeds are very large (well over 100), or NaN, then this is likely the problem. To fix it, edit the time step in runscr/fcstparm: change CON(1)=1800. to CON(1)=1200.
  4. Two scripts are running on the same nodes. The command "ganglia load_one" will give the load of each node on compas. If some of the rack 1 nodes have a load well over 3, then this is the problem.

The forecast does not generally restart properly after it is stopped (or crashes) in the middle of a given ensemble member. To restart the forecast, first remove the directory for the partially completed member. Then, in the cases script, look for the lines:
nens=1
while [ $nens -le $NENS ] ; do
Change the "1" in "nens=1" to the number of the member where the forecast crashed. For example, if the forecast crashed in the midst of the 4th member of the plus script, remove pl_001/r04 and change the line to "nens=4".
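
The restart procedure can be sketched as a helper. This is illustrative only: it assumes member directories are named r01, r02, ..., that the cases script still contains the literal "nens=1", and that a GNU sed is available; `restart_member` is not one of the operational scripts.

```shell
# Illustrative: prepare a cases script to restart at a crashed member.
# restart_member RUN_DIR MEMBER CASES_SCRIPT
restart_member() {
    run_dir=$1
    member=$2
    script=$3
    # remove the partially completed member directory, e.g. pl_001/r04
    rm -rf "${run_dir}/r$(printf '%02d' "$member")"
    # point the loop counter at the crashed member
    sed -i "s/nens=1/nens=${member}/" "$script"
}
```

For the example above, `restart_member pl_001 4 cases.fcst.plus`.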